\section{Introduction} \label{sec:intro} Rocks and rock-like materials ({\it e.g.}~concrete and stiff soils) commonly fail in a quasi-brittle manner, characterized by progressive softening during the post-peak stage. During the softening process, numerous microcracks develop, grow, and coalesce to form localized macroscopic fractures. The region of pervasive microcracking---commonly referred to as a fracture process zone---in these materials has a non-negligible size, violating the premise of linear elastic fracture mechanics \revised{(LEFM)}. For this reason, a number of non-linear fracture mechanics approaches have been developed and widely used for modeling the failure process in quasi-brittle materials. Representative examples are cohesive zone models ({\it e.g.}~\cite{barenblatt1962mathematical,dugdale1960yielding,needleman1987continuum,park2009unified}) and damage-type models ({\it e.g.}~\cite{bazant1983crack,bazant1985microplane,pijaudier1987nonlocal,peerlings1996gradient}). Apart from its quasi-brittleness, the cracking behavior of rocks and rock-like materials exhibits a few important characteristics. First, these materials are fractured under compression, showing a rich variety of cracking patterns that emanate from preexisting flaws. These rock cracking patterns often involve complex combinations of tensile (mode \rn{1}) and shear (mode \rn{2}) fractures, which have attracted a large number of experimental and numerical studies for decades ({\it e.g.}~\cite{ingraffea1980finite,bobet1998fracture,bobet1998numerical,wong2009crack-a,wong2009crack-b,wong2009systematic,lee2011experimental,zhang2012cracking,yin2014coalescence,zhou2018experimental}). Second, a sliding fracture under compressive stress entails marked friction along the crack surface. This friction plays an important role not only in the kinematics of fracture but also in the propagation dynamics~\cite{palmer1973growth,puzrin2005growth}. Last but not least, the shear fracture energy of rock is usually much greater than the tensile fracture energy of the same material~\cite{wong1982shear,shen1994modification}. All these characteristics should be properly considered to accurately model cracking processes in rock. Unfortunately, however, computational models that can efficiently simulate a combination of cohesive and frictional fractures remain scarce. Over the past several years, phase-field modeling has gained increasing popularity for rock fracture simulation, mainly due to its ability to capture complex crack patterns without the need for algorithmic tracking of evolving crack geometry. The majority of phase-field simulations of rock fracture have used models that are theoretically equivalent to LEFM for brittle materials ({\it e.g.}~\cite{lee2016pressure,choo2018cracking,ha2018liquid,santillan2018phase}). However, these brittle phase-field models are not fully appropriate for rocks and rock-like materials for the reasons described above. Meanwhile, a few studies have proposed phase-field models tailored to rocks and similar geologic materials. The work of Zhang {\it et al.}~\cite{zhang2017modification} may be the first endeavor to modify a standard phase-field formulation for brittle fracture to distinguish between the mode \rn{1} and mode \rn{2} fracture energies of rock-like materials. 
The key idea of their modification is to adopt the $\mathcal{F}$-criterion proposed by Shen and Stephansson~\cite{shen1994modification}, whereby the energy release rates of mode \rn{1} and mode \rn{2} fractures are normalized by their corresponding fracture energies. Bryant and Sun~\cite{bryant2018mixed} later used the same idea to develop a phase-field formulation for mixed-mode fracture in anisotropic rocks. However, these models are limited to purely brittle, pressure-insensitive fracture, neglecting softening behavior and friction effects. Alternatively, Choo and Sun~\cite{choo2018coupled} proposed a coupled phase-field and plasticity modeling framework for pressure-sensitive geomaterials. While this modeling framework can simulate brittle, quasi-brittle, and ductile failures and their transitions, it does not explicitly distinguish between tensile and shear fractures. Also importantly, the phase-field formulations underpinning all these models---originating from brittle fracture theory---inevitably suffer from the drawback that the material strength is sensitive to the length parameter for phase-field regularization. For this reason, previous studies usually calibrated the fracture energy in conjunction with the length parameter such that their combination gives a prescribed peak stress. However, this calibration is undesirable because the fracture energy is a material property, whereas the length parameter emanates from geometric regularization in phase-field modeling. In recent years, a new class of phase-field models has emerged for cohesive tensile fracture. Drawing on the gradient damage models of Lorentz and coworkers~\cite{lorentz2011conergence,lorentz2011gradient,lorentz2017nonlocal}, these phase-field models have incorporated one-dimensional softening behavior through careful design of functions for geometric regularization and material degradation. Notable examples are the phase-field cohesive zone models advanced by Wu and coworkers~\cite{wu2017unified,wu2018length,nguyen2018modeling,feng2018phase,mandal2019length}, as well as the phase-field model for dynamic cohesive fracture by Geelen {\it et al.}~\cite{geelen2019phase}. Apart from the explicit treatment of softening behavior, these models commonly have the feature that the material behavior is virtually insensitive to the phase-field length parameter, allowing one to use the fracture energy as a pure material parameter. These models are thus robust and effective for simulating tensile fracture in quasi-brittle materials; however, they are not suited for shear fracture, which is common in rocks. Very recently, the first phase-field model for frictional shear fracture has been developed for geologic materials~\cite{fei2020phaseshear}. Built on the phase-field method for frictional interfaces~\cite{fei2020phasecontact}, the new model has been derived and verified to be insensitive to the length parameter like the phase-field models for cohesive tensile fracture. Remarkably, the new phase-field model explicitly incorporates the frictional energy into the crack propagation mechanism, in a way that is demonstrably consistent with the celebrated theory of Palmer and Rice~\cite{palmer1973growth} for frictional shear fracture. However, the previous work restricted its attention to shear fracture, leaving its extension to mixed-mode fracture as a future research topic.
\revised{ Importantly, the phase-field formulations for cohesive tensile fracture and frictional shear fracture---derived for length-insensitive modeling of quasi-brittle behavior---cannot be combined using an existing approach to phase-field modeling of mixed-mode fracture. The existing phase-field models for brittle mixed-mode fracture in rocks~\cite{zhang2017modification,bryant2018mixed} have commonly incorporated the difference in modes \rn{1} and \rn{2} fracture energies by replacing the standard crack driving force with a weighted average of modes \rn{1} and \rn{2} crack driving forces based on the $\mathcal{F}$-criterion. Even though this approach has been successful for phase-field modeling of brittle fracture, it is fundamentally incompatible with that of quasi-brittle fracture. The reason is that this approach unavoidably modifies the crack driving forces (and the degradation functions) of quasi-brittle phase-field models, which should be preserved to model the prescribed softening behavior without length sensitivity. Therefore, for phase-field modeling of quasi-brittle mixed-mode fracture, one needs to develop a new approach that combines two length-insensitive phase-field models without altering their crack driving forces and degradation functions. } In this work, we propose a new phase-field formulation that employs two different phase fields to individually describe cohesive tensile fracture and frictional shear fracture for mixed-mode fracture in rocks and rock-like materials. In the literature, multi-phase-field modeling has been used for fracture in anisotropic materials and composites ({\it e.g.}~\cite{nguyen2017multi,na2018computational,bleyer2018phase,dean2020multi}), and Bleyer {\it et al.}~\cite{bleyer2018phase} have briefly suggested its application to mixed-mode fracture in brittle materials. To our knowledge, however, no previous work has developed a multi-phase-field formulation for mixed-mode fracture in brittle materials, not to mention for mixed cohesive tensile/frictional shear fracture in quasi-brittle materials. \revised{Critically, approaches in the existing multi-phase-field models are inadequate for modeling mixed-mode fracture in rocks. For example, the idea of overlapping multiple phase fields~\cite{nguyen2017multi,na2018computational} cannot be applied because different fracture modes should not coexist within the same material point. Stress-based criteria used in other multi-phase-field models~\cite{bleyer2018phase,dean2020multi} cannot properly distinguish between tensile and shear fractures.} To rigorously couple the two phase fields---one for mode \rn{1} fractures and the other for mode \rn{2}---in rocks under compression, here we devise three approaches. First, we decompose the strain energy into the tensile, shear, and pure compression parts, based on the direction of the crack at the material point. This approach unifies the phase-field method for frictional contact~\cite{fei2020phasecontact} with the phase-field formulation for opening fracture proposed by Steinke and Kaliske~\cite{steinke2019phase}. Second, we formulate the incremental potential energy of the material point depending on its contact condition: open, slip, or stick. This approach extends the derivation procedure of the phase-field model for frictional shear fracture~\cite{fei2020phaseshear} to double-phase-field modeling of mixed-mode fracture.
Third, we determine the dominant fracture mode in each contact condition based on the $\mathcal{F}$-criterion for mixed-mode fracture~\cite{shen1994modification}. Importantly, this approach is different from the way in which the $\mathcal{F}$-criterion is used in the previous single-phase-field models for mixed-mode fracture ({\it e.g.}~\cite{zhang2017modification,bryant2018mixed}). While the previous models have used the criterion to calculate a weighted average of modes \rn{1} and \rn{2} crack driving forces, here we apply it to find the dominant fracture mode and direction based on the current contact condition. Consequently, unlike the previous single-phase-field models, the double-phase-field model clearly distinguishes between modes \rn{1} and \rn{2} fractures. The paper is organized as follows. In Section~\ref{sec:formulation}, we develop a double-phase-field formulation for mixed-mode fracture in quasi-brittle materials, in which one phase field describes cohesive tensile fracture and the other describes frictional shear fracture. This section describes the main contributions of this work. Subsequently, Section~\ref{sec:discretization} presents discrete formulations and algorithms for numerical solution of the proposed model using the standard finite element method. The double-phase-field model is then validated in Section~\ref{sec:validation}, both qualitatively and quantitatively, with experimental results on various mixed-mode fractures in rocks. We conclude the work in Section~\ref{sec:closure}. \section{Double-phase-field formulation for mixed-mode fracture} \label{sec:formulation} In this section, we develop a double-phase-field formulation for mixed-mode fracture in rocks and rock-like materials. \revised{For this purpose, we apply the microforce approach in da Silva {\it et al.}~\cite{daSilva2013sharp}---adopted by Geelen {\it et al.}~\cite{geelen2019phase} and Fei and Choo~\cite{fei2020phaseshear} for deriving cohesive and frictional phase-field fracture models, respectively---to double-phase-field modeling of mixed-mode fracture. Although the original phase-field models are formulated based on variational principles for brittle fracture (the seminal work of Francfort and Marigo~\cite{francfort1998revisiting} and its extensions), microforce theory allows one to derive phase-field models for more complex problems for which sound variational principles are unavailable, such as cohesive/frictional fracture (see Choo and Sun~\cite{choo2018coupled} for a detailed discussion). It is noted that for the particular case of brittle fracture, the microforce and variational approaches lead to the same phase-field formulation.} Without loss of generality, we restrict our attention to an isotropic and linear elastic material, infinitesimal deformation, rate-independent fracture, and quasi-static conditions. \subsection{Double-phase-field approximation of tensile and shear fractures} Consider the domain $\Omega$ with boundary $\partial \Omega$. The boundary is decomposed into the displacement (Dirichlet) boundary $\partial_u\Omega$ and the traction (Neumann) boundary $\partial_t\Omega$, satisfying $\overline{\partial_{u}\Omega\cap\partial_{t}\Omega}=\emptyset$ and $\overline{\partial_{u}\Omega\cup\partial_{t}\Omega}=\partial\Omega$. The domain may have two mutually exclusive sets of mode \rn{1} and mode \rn{2} fractures, which are denoted by $\Gamma_{\rn{1}}$ and $\Gamma_{\rn{2}}$, respectively.
To approximate the discontinuous surfaces of $\Gamma_{\rn{1}}$ and $\Gamma_{\rn{2}}$, we introduce two different phase fields: (i) $d_{\rn{1}}$ for the mode \rn{1} fractures in $\Gamma_{\rn{1}}$, and (ii) $d_{\rn{2}}$ for the mode \rn{2} fractures in $\Gamma_{\rn{2}}$. Figure~\ref{fig:phase-field-approximation} illustrates this double-phase-field approximation of mixed-mode fracture. Each of the two phase fields is defined in between 0 and 1, {\it i.e.}~$d_{\rn{1}} \in \left[0, 1\right]$ and $d_{\rn{2}} \in \left[0, 1\right]$, such that 0 denotes an intact (undamaged) region and 1 denotes a discontinuous (fully damaged) region for the corresponding mode of fracture. \begin{figure}[h!] \centering \includegraphics[width = \textwidth]{figures/phase-field-approximation.pdf} \caption{Double-phase-field approximation of the discontinuous geometries of mode \rn{1} (in red) and mode \rn{2} (in blue) fractures.} \label{fig:phase-field-approximation} \end{figure} The use of two phase fields results in two crack density functions: (i) $ \Gamma_{d_{\rn{1}}}$ for the mode \rn{1} fractures, and (ii) $\Gamma_{d_{\rn{2}}}$ for the mode \rn{2} fractures. For both crack density functions, we adopt the general form proposed by Wu~\cite{wu2017unified} for phase-field modeling of cohesive fracture. Specifically, \begin{align} \Gamma_{d_{\rn{1}}}(d_{\rn{1}}, \grad d_{\rn{1}}) &= \dfrac{1}{\pi L} \left[ (2d_{\rn{1}} - d_{\rn{1}}^2) + L^2 (\grad d_{\rn{1}})^2 \right], \label{eq:crack-density-function-wu-mode-1}\\ \Gamma_{d_{\rn{2}}}(d_{\rn{2}}, \grad d_{\rn{2}}) &= \dfrac{1}{\pi L} \left[ (2d_{\rn{2}} - d_{\rn{2}}^2) + L^2 (\grad d_{\rn{2}})^2 \right]. \label{eq:crack-density-function-wu-mode-2} \end{align} Here, $L$ is the length parameter for phase-field approximation, which is assumed to be the same for both mode \rn{1} and mode \rn{2} fractures. \subsection{Potential energy density} To derive equations that govern the evolutions of the two phase fields, we should formulate the potential energy density of a material point. The potential energy density, denoted by $\psi$, is decomposed into four terms~\cite{fei2020phaseshear} \begin{align} \psi = \psi^\mathrm{e} + \psi^\mathrm{f} + \psi^\mathrm{d} - \psi^\mathrm{b}\,. \label{eq:TPE} \end{align} Here, $\psi^\mathrm{e}$ is the strain energy stored from elastic deformation, $\psi^\mathrm{f}$ is the frictional energy dissipated by sliding along a crack, $\psi^\mathrm{d}$ is the fracture energy dissipated by generation of a new crack surface, and $\psi^\mathrm{b}$ is the external energy from body force. Expressions for these four terms are described below. \subsubsection*{Strain energy} For double-phase-field modeling of fracture, we need to derive a new form of strain energy in which the two phase fields coexist. \revised{To begin, let us consider quantities of an undamaged ($d_{\rn{1}}=d_{\rn{2}}=0$) material, which are often referred to as ``effective'' quantities in damage mechanics.} The undamaged strain energy can be written as \begin{align} W(\tensor{\strain}) = \dfrac{1}{2} \tensor{\strain}:\bar{\mathbb{C}}:\tensor{\strain}\,, \end{align} where $\tensor{\strain}$ is the infinitesimal strain tensor and $\bar{\mathbb{C}}$ is the undamaged stress-strain tangent tensor. 
As the undamaged region is assumed to be isotropic and linear elastic, $\bar{\mathbb{C}}$ can be written specifically as \begin{align} \bar{\mathbb{C}} = K \boldsymbol{1} \dyad \boldsymbol{1} + 2G\left(\mathbb{I} - \dfrac{1}{3}\boldsymbol{1} \dyad \boldsymbol{1} \right), \end{align} where $K$ and $G$ are the bulk modulus and the shear modulus, respectively, $\tensor{1}$ is the second-order identity tensor, and $\mathbb{I}$ is the fourth-order symmetric identity tensor. To model mixed-mode fracture, we additively decompose the undamaged strain energy into three parts: (i) the tensile (mode \rn{1}) part, $W^{+}_{\rn{1}}$, (ii) the shear (mode \rn{2}) part, $W^{+}_{\rn{2}}$, and (iii) the pure compression (non-fracturing) part, $W^{-}(\tensor{\strain})$, {\it i.e.} \begin{align} W(\tensor{\strain}) = W^{+}_{\rn{1}}(\tensor{\strain}) + W^{+}_{\rn{2}}(\tensor{\strain}) + W^{-}(\tensor{\strain})\,. \end{align} This decomposition of the undamaged strain energy gives rise to the following three partial undamaged stress tensors: \begin{align} \bar{\tensor{\stress}}^{+}_{\rn{1}} := \frac{\partial W^+_{\rn{1}}(\tensor{\strain})}{\partial \tensor{\strain}}\,, \quad \bar{\tensor{\stress}}^{+}_{\rn{2}} := \frac{\partial W^+_{\rn{2}}(\tensor{\strain})}{\partial \tensor{\strain}}\,, \quad \bar{\tensor{\stress}}^{-} := \frac{\partial W^-(\tensor{\strain})}{\partial \tensor{\strain}}\,. \end{align} By definition, the sum of the three partial undamaged stress tensors should be equal to the total undamaged stress tensor, {\it i.e.} \begin{align} \bar{\tensor{\stress}}^{+}_{\rn{1}} + \bar{\tensor{\stress}}^{+}_{\rn{2}} + \bar{\tensor{\stress}}^{-} = \frac{\partial W(\tensor{\strain})}{\partial \tensor{\strain}} \equiv \bar{\tensor{\stress}} \,. \end{align} To calculate the specific forms of the partial undamaged stress tensors, we decompose the stress tensor with respect to the direction of the crack. The purpose of this directional decomposition is to accommodate the phase-field model for frictional shear fracture~\cite{fei2020phaseshear}, which uses the same decomposition scheme. The directional decomposition scheme is also compatible with opening fracture, as proposed by Steinke and Kaliske~\cite{steinke2019phase} for brittle tensile fracture. When the directional decomposition is used, the partial undamaged stress tensors are expressed differently depending on the contact condition of the crack: open, stick, or slip. The contact condition can be identified following the phase-field method for frictional cracks~\cite{fei2020phasecontact}. Let us denote by $\tensor{n}$ the unit normal vector of the crack, by $\tensor{m}$ the unit vector in the slip direction, and by $\tensor{s}$ the unit vector mutually orthogonal to $\tensor{n}$ and $\tensor{m}$. The crack is open if \begin{align} \varepsilon_{nn} := \tensor{\strain}:(\tensor{n} \dyad \tensor{n}) > 0\,, \end{align} which corresponds to the gap condition in contact mechanics. Equivalently, we can use the contact normal component of the undamaged stress tensor as \begin{align} \bar{\sigma}_{nn} := \bar{\tensor{\stress}}: (\tensor{n} \dyad \tensor{n}) > 0 \,. \label{eq:open-condition-check-stress} \end{align} If the above condition is unsatisfied, the crack is closed (in contact), and it may be either in a stick or a slip condition. 
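For illustration, the contact check can be coded compactly. The following minimal Python sketch (the function names, the NumPy dependency, and the isotropic elasticity helper are our own assumptions, not part of the formulation) evaluates the stress-based condition of Eq.~\eqref{eq:open-condition-check-stress} for a given strain tensor and crack normal; the strain-based gap condition $\varepsilon_{nn}>0$ is the alternative stated above.
\begin{verbatim}
import numpy as np

def undamaged_stress(strain, K, G):
    """sigma_bar = C_bar : strain for an isotropic linear elastic solid,
    with C_bar = K 1(x)1 + 2G (I - (1/3) 1(x)1)."""
    eye = np.eye(3)
    vol = np.trace(strain)
    return K * vol * eye + 2.0 * G * (strain - vol / 3.0 * eye)

def crack_is_open(strain, n, K, G):
    """Gap check: the crack with unit normal n is treated as open when
    the contact normal component of the undamaged stress is tensile,
    i.e. sigma_bar_nn > 0; otherwise the crack is closed (stick or slip)."""
    sigma_bar = undamaged_stress(strain, K, G)
    return float(n @ sigma_bar @ n) > 0.0
\end{verbatim}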
To distinguish between the stick and slip conditions, we introduce a yield function of the following form: \begin{align} f := |\tau| - \tau_{\mathrm{Y}} \leq 0\,, \label{eq:yield-function} \end{align} where \begin{align} \tau := \frac{1}{2}\tensor{\stress}:\bm{\alpha}\,, \;\;\mbox{with}\;\; \bm{\alpha} := \bm{m}\dyad\bm{n} + \bm{n}\dyad\bm{m}\,, \end{align} is the resolved shear stress in the crack, and $\tau_{\mathrm{Y}} := p_{\mathrm{N}}\tan\phi$ is the yield strength, which is a function of the contact normal pressure, \revised{$p_{\mathrm{N}} := -\tensor{\stress}: (\tensor{n} \dyad \tensor{n})$}, and the friction angle, $\phi$. The yield function gives $f<0$ in the stick condition and $f=0$ in the slip condition. Depending on the contact condition, $\bar{\tensor{\stress}}^{+}_{\rn{1}}$ and $\bar{\tensor{\stress}}^{+}_{\rn{2}}$ are calculated as follows: \begin{align} \bar{\tensor{\stress}}^{+}_{\rn{1}} &= \left \{ \begin{array}{ll} \bar{\sigma}_{nn} (\tensor{n} \dyad \tensor{n}) + (\lambda/M)\bar{\sigma}_{nn} [(\tensor{m} \dyad \tensor{m}) + (\tensor{s} \dyad \tensor{s})] & \mbox{if}\;\; \mbox{open}\,, \\ \tensor{0} & \mbox{if}\;\; \mbox{stick}\,, \\ \tensor{0} & \mbox{if}\;\; \mbox{slip}\,, \end{array} \right. \label{eq:mode-1-stress} \\ \bar{\tensor{\stress}}^{+}_{\rn{2}} &= \left \{ \begin{array}{ll} \bar{\tau}\tensor{\alpha} & \mbox{if}\;\; \mbox{open}\,,\\ \tensor{0} & \mbox{if}\;\; \mbox{stick}\,,\\ \bar{\tau}\tensor{\alpha} & \mbox{if}\;\; \mbox{slip}\,, \end{array} \right. \label{eq:mode-2-stress} \end{align} where $M := K + (4/3) G$ is the 1D constrained modulus, $\lambda := K - (2/3) G$ is Lam\'{e}'s first parameter, and $\bar{\tau}:=(1/2)\bar{\tensor{\stress}}:\bm{\alpha}$ is the undamaged resolved shear stress. Also, regardless of the contact condition, $\bar{\tensor{\stress}}^{-}$ is given by \begin{align} \bar{\tensor{\stress}}^{-} &= \bar{\tensor{\stress}} - \bar{\tensor{\stress}}^{+}_{\rn{1}} - \bar{\tensor{\stress}}^{+}_{\rn{2}}\,. \label{eq:undamaged-stress} \end{align} Using these partial undamaged stress tensors, we write the (damaged) stress tensor, $\tensor{\stress}$, as \begin{align} \tensor{\stress}(\tensor{\strain},d_{\rn{1}},d_{\rn{2}}) = g_{\rn{1}}(d_{\rn{1}})\bar{\tensor{\stress}}^{+}_{\rn{1}}(\tensor{\strain}) + g_{\rn{2}}(d_{\rn{2}})\bar{\tensor{\stress}}^{+}_{\rn{2}}(\tensor{\strain}) + \bar{\tensor{\stress}}^{-}(\tensor{\strain}) \,. \end{align} Here, $g_{\rn{1}}(d_{\rn{1}})\in[0,1]$ and $g_{\rn{2}}(d_{\rn{2}})\in[0,1]$ are the degradation functions for mode \rn{1} and mode \rn{2} fractures, respectively. Their specific expressions will be presented later in this section. Note that $g_{\rn{1}}(d_{\rn{1}})$ is applied to $\bar{\tensor{\stress}}^{+}_{\rn{1}}$ only, and $g_{\rn{2}}(d_{\rn{2}})$ to $\bar{\tensor{\stress}}^{+}_{\rn{2}}$ only. Also importantly, the stress--strain relationship is incrementally nonlinear, because $\bar{\tensor{\stress}}^{+}_{\rn{1}}$, $\bar{\tensor{\stress}}^{+}_{\rn{2}}$, and $\bar{\tensor{\stress}}^{-}$ are dependent on the contact condition. Due to this incremental nonlinearity, we write the strain energy density in rate form as \begin{align} \dot{\psi}^\mathrm{e} = \left[ g_{\rn{1}}(d_{\rn{1}}) \bar{\tensor{\stress}}^{+}_{\rn{1}} + g_{\rn{2}}(d_{\rn{2}})\bar{\tensor{\stress}}^{+}_{\rn{2}} + \bar{\tensor{\stress}}^{-} \right]:\dot{\tensor{\strain}}\,. \label{eq:strain-energy-density} \end{align} \subsubsection*{Frictional energy} Although open cracks are frictionless, sliding cracks may involve significant friction.
This friction plays an important role in shear fracture propagation, as formally shown by Palmer and Rice~\cite{palmer1973growth}. Therefore, the frictional energy dissipated along a sliding crack should also be incorporated into the phase-field formulation. The frictional energy density is also an incrementally nonlinear function because frictional energy only dissipates during slip. So we write the frictional energy density as a rate form \begin{align} \dot{\psi}^\mathrm{f} = \left[1 - g_{\rn{2}}(d_{\rn{2}}) \right] \tensor{\stress}_\mathrm{friction}:\dot{\tensor{\strain}}\,, \label{eq:friction-energy-general} \end{align} where $\tensor{\stress}_\mathrm{friction}$ denotes the stress tensor at the crack associated with frictional slip. Its specific expressions, which depend on the contact condition, are given by~\cite{fei2020phasecontact} \begin{align} \tensor{\stress}_\mathrm{friction} = \left \{ \begin{array}{ll} \tensor{0} & \mbox{if}\;\; \mbox{open}\,, \\ \tensor{0} & \mbox{if}\;\; \mbox{stick}\,, \\ \tau_{r} \tensor{\alpha} & \mbox{if}\;\; \mbox{slip}\,. \end{array} \right. \label{eq:friction-stress} \end{align} Here, $\tau_{r}$ is the residual shear strength of the fracture, which equals $\tau_{\mathrm{Y}}$ during slip. Inserting Eq. \eqref{eq:friction-stress} into Eq.~\eqref{eq:friction-energy-general}, we obtain the rate form of frictional energy density as \begin{align} \dot{\psi}^\mathrm{f} = \left \{ \begin{array}{ll} 0 & \mbox{if}\;\; \mbox{open}\,, \\ 0 & \mbox{if}\;\; \mbox{stick}\,, \\ \left[1 - g_{\rn{2}}(d_{\rn{2}}) \right] \tau_{r} \dot{\gamma} & \mbox{if}\;\; \mbox{slip}\,, \end{array} \right. \label{eq:friction-energy-density} \end{align} where $\gamma := \tensor{\strain}:\tensor{\alpha}$ denotes the shear strain at the crack. \subsubsection*{Fracture energy} Since we consider two different modes of fracture, the fracture energy dissipation is additionally decomposed into two terms as \begin{align} \psi^\mathrm{d} = \psi^\mathrm{d}_{\rn{1}} + \psi^\mathrm{d}_{\rn{2}}\,, \end{align} where $\psi^\mathrm{d}_{\rn{1}}$ and $\psi^\mathrm{d}_{\rn{2}}$ correspond to energy dissipation densities associated with modes \rn{1} and \rn{2} fractures, respectively. Let $\mathcal{G}_{\rn{1}}$ and $\mathcal{G}_{\rn{2}}$ denote the critical fracture energies for mode \rn{1} and \rn{2} fractures. Then the two terms can be expressed as \begin{align} \psi^\mathrm{d}_{\rn{1}} &= \mathcal{G}_{\rn{1}}\Gamma_{d_{\rn{1}}} = \dfrac{\mathcal{G}_{\rn{1}}}{\pi L} \left[(2d_{\rn{1}} - d_{\rn{1}}^2) + L^2 (\grad d_{\rn{1}})^2 \right], \label{eq:dissipation-density-mode-1} \\ \psi^\mathrm{d}_{\rn{2}} &= \mathcal{G}_{\rn{2}}\Gamma_{d_{\rn{2}}} = \dfrac{\mathcal{G}_{\rn{2}}}{\pi L} \left[(2d_{\rn{2}} - d_{\rn{2}}^2) + L^2 (\grad d_{\rn{2}})^2 \right]. \label{eq:dissipation-density-mode-2} \end{align} \subsubsection*{External energy} The external energy, which is due to gravitational force, can be written as \begin{align} \psi^\mathrm{b} = \rho \tensor{g} \cdot \tensor{u}\,, \end{align} where $\rho$ is the mass density, $\tensor{g}$ is the gravitational acceleration vector, and $\bm{u}$ is the displacement vector. 
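To make the contact-dependent decomposition concrete, the sketch below assembles the partial undamaged stresses of Eqs.~\eqref{eq:mode-1-stress}--\eqref{eq:undamaged-stress} and the resulting degraded stress for a given contact condition. It is a minimal Python illustration under our own naming conventions, not the implementation used in this work.
\begin{verbatim}
import numpy as np

def partial_stresses(sigma_bar, n, m, s, K, G, condition):
    """Partial undamaged stresses for a crack with normal n, slip direction m,
    and out-of-plane vector s, following the directional decomposition in the
    text. `condition` is one of 'open', 'stick', 'slip'."""
    M   = K + 4.0 / 3.0 * G                    # 1D constrained modulus
    lam = K - 2.0 / 3.0 * G                    # Lame's first parameter
    alpha = np.outer(m, n) + np.outer(n, m)
    sigma_nn = float(n @ sigma_bar @ n)
    tau_bar  = 0.5 * float(np.tensordot(sigma_bar, alpha))
    zero = np.zeros((3, 3))
    if condition == 'open':
        sig1 = sigma_nn * (np.outer(n, n)
                           + lam / M * (np.outer(m, m) + np.outer(s, s)))
        sig2 = tau_bar * alpha
    elif condition == 'stick':
        sig1, sig2 = zero, zero
    else:                                      # 'slip'
        sig1, sig2 = zero, tau_bar * alpha
    return sig1, sig2, sigma_bar - sig1 - sig2  # mode I, mode II, compression

def degraded_stress(sigma_bar, n, m, s, K, G, condition, g1, g2):
    """Damaged stress: g1 * sigma_I+ + g2 * sigma_II+ + sigma_minus."""
    s1, s2, sm = partial_stresses(sigma_bar, n, m, s, K, G, condition)
    return g1 * s1 + g2 * s2 + sm
\end{verbatim}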
\subsection{Governing equations} According to the microforce argument~\cite{daSilva2013sharp}, the governing equations of the problem are obtained as follows: \begin{align} &\diver\, \left(\dfrac{\partial \psi(\tensor{\strain},d_{\rn{1}}, \grad d_{\rn{1}}, d_{\rn{2}}, \grad d_{\rn{2}})}{\partial \tensor{\strain}} \right) - \dfrac{\partial \psi(\tensor{\strain},d_{\rn{1}}, \grad d_{\rn{1}}, d_{\rn{2}}, \grad d_{\rn{2}}) }{\partial \tensor{u}} = \tensor{0} && \mbox{(momentum balance)}\,, \label{eq:momentum-balance-eq}\\ &\diver\, \left(\dfrac{\partial \psi(\tensor{\strain},d_{\rn{1}}, \grad d_{\rn{1}}, d_{\rn{2}}, \grad d_{\rn{2}})}{\partial \grad d_{\rn{1}}} \right) - \dfrac{\partial \psi(\tensor{\strain},d_{\rn{1}}, \grad d_{\rn{1}}, d_{\rn{2}}, \grad d_{\rn{2}})}{\partial d_{\rn{1}}} = -\pi_{r,\rn{1}} && \mbox{(mode \rn{1} microforce balance)}\,, \label{eq:microforce-balance-eq-mode-1} \\ &\diver\, \left(\dfrac{\partial \psi(\tensor{\strain},d_{\rn{1}}, \grad d_{\rn{1}}, d_{\rn{2}}, \grad d_{\rn{2}})}{\partial \grad d_{\rn{2}}} \right) - \dfrac{\partial \psi(\tensor{\strain},d_{\rn{1}}, \grad d_{\rn{1}}, d_{\rn{2}}, \grad d_{\rn{2}})}{\partial d_{\rn{2}}} = -\pi_{r,\rn{2}} && \mbox{(mode \rn{2} microforce balance)}\,. \label{eq:microforce-balance-eq-mode-2} \end{align} Here, $\pi_{r,{\rn{1}}}$ and $\pi_{r,{\rn{2}}}$ are reactive microforces introduced to ensure the irreversibility of modes \rn{1} and \rn{2} fracture processes, respectively. Their specific expressions will be presented later in this section, after the other terms are derived. Substituting the previously derived expressions for the potential energy into Eqs.~\eqref{eq:momentum-balance-eq}, \eqref{eq:microforce-balance-eq-mode-1}, and \eqref{eq:microforce-balance-eq-mode-2}, we get more specific forms of the governing equations as \begin{align} \diver\, \tensor{\stress} + \rho\tensor{g} &= \tensor{0}\,, \label{eq:momentum-balance-eq-specific} \\ - g'_{\rn{1}}(d_{\rn{1}})\mathcal{H}_{\rn{1}} - \dfrac{\mathcal{G}_{\rn{1}}}{\pi L} \left( 2L^2 \diver \grad d_{\rn{1}} - 2 + 2d_{\rn{1}} \right) &= -\pi_{r,\rn{1}}\,, \label{eq:microforce-balance-eq-mode-1-specific} \\ - g'_{\rn{2}}(d_{\rn{2}})\mathcal{H}_{\rn{2}} - \dfrac{\mathcal{G}_{\rn{2}}}{\pi L} \left( 2L^2 \diver \grad d_{\rn{2}} - 2 + 2d_{\rn{2}} \right) &= -\pi_{r,\rn{2}}\,. \label{eq:microforce-balance-eq-mode-2-specific} \end{align} Here, $\mathcal{H}_{\rn{1}}$ and $\mathcal{H}_{\rn{2}}$ are the (undamaged) crack driving forces for modes \rn{1} and \rn{2} fractures, respectively, which are related to the derivatives of the potential energy with respect to the two phase fields. Because the potential energy has been formulated differently according to the contact condition, the crack driving forces must be dependent on the current contact condition. \subsection{Modes \rn{1} and \rn{2} crack driving forces under different contact conditions} A unique challenge for double-phase-field modeling of mixed-mode fracture is to prevent overlapping of modes \rn{1} and \rn{2} within the same material point. This requires careful determination of the modes \rn{1} and \rn{2} crack driving forces, $\mathcal{H}_{\rn{1}}$ and $\mathcal{H}_{\rn{2}}$, at every material point. To this end, here we adapt the $\mathcal{F}$-criterion proposed by Shen and Stephansson~\cite{shen1994modification} to the double-phase-field modeling of mixed-mode fracture.
Defining $\theta$ as the angle between the crack normal direction and the major principal direction in the slip plane, we rephrase the idea of the $\mathcal{F}$-criterion as: \begin{align} \theta = \arg \max_{\theta} \left[\mathcal{F}(\theta)\right] \rvert_{\tensor{\strain}}\,, \;\;\mbox{with}\;\; \mathcal{F}(\theta) := \dfrac{\mathcal{H}_{\rn{1}}(\tensor{\strain},\theta)}{\mathcal{G}_{\rn{1}}} + \dfrac{\mathcal{H}_{\rn{2}}(\tensor{\strain},\theta)}{\mathcal{G}_{\rn{2}}} \, . \label{eq:F-criterion-phase-field} \end{align} In other words, the mixed-mode fracture propagates such that the value of $\mathcal{F}$ is maximized. It is noted that the strain tensor, $\tensor{\strain}$, in the argument may be replaced by the undamaged stress tensor, $\bar{\tensor{\stress}}$, as $\tensor{\strain}=\bar{\mathbb{C}}^{-1}:\bar{\tensor{\stress}}$. Based on the foregoing derivations of the potential energy and the $\mathcal{F}$-criterion, we derive specific forms of the modes \rn{1} and \rn{2} crack driving forces in the following four cases: (i) the intact (undamaged) condition, (ii) the open condition, (iii) the stick condition, and (iv) the slip condition. \subsubsection*{Intact condition} Let us first consider an intact material point in which neither mode \rn{1} nor \rn{2} fracture has yet developed. To prevent fracturing in the elastic region, we set $\mathcal{H}_{\rn{1}}$ and $\mathcal{H}_{\rn{2}}$ as their threshold values defined as the crack driving forces at the peak tensile and shear strengths, respectively. Let $\mathcal{H}_{{\rn{1}}, t}$ denote the threshold for $\mathcal{H}_{\rn{1}}$. By definition, $\mathcal{H}_{{\rn{1}}, t}$ corresponds to the undamaged tensile strain energy, $W^{+}_{\rn{1}}$, when $\bar{\sigma}_{nn}$ equals the tensile strength. Thus we get \begin{align} \mathcal{H}_{{\rn{1}}, t} := W^{+}_{\rn{1}}|_{\bar{\sigma}_{nn}=\sigma_{p}} = \dfrac{1}{2}\bar{\tensor{\stress}}^{+}_{\rn{1}}|_{\bar{\sigma}_{nn}=\sigma_{p}} : \tensor{\strain} = \dfrac{1}{2M} \sigma_{p}^{2}\,, \label{eq:H-threhsold-mode-1} \end{align} where $\sigma_{p}$ denotes the tensile strength. The derivation of $\mathcal{H}_{{\rn{2}}, t}$ is more complex and long due to the existence of frictional energy dissipation in shear fracture. Referring to Fei and Choo~\cite{fei2020phaseshear} for a detailed derivation of $\mathcal{H}_{{\rn{2}}, t}$, we adopt \begin{align} \mathcal{H}_{{\rn{2}},t} := \dfrac{1}{2G}(\tau_{p} - \tau_{r})^2\,, \label{eq:H-threhsold-mode-2} \end{align} where $\tau_{p}$ is the peak shear strength. Because the threshold values are assigned as the crack driving forces of an intact material point, \begin{align} \left.\begin{array}{ll} \mathcal{H}_{\rn{1}} = \dfrac{1}{2M} \sigma_{p}^{2} \\ [0.5em] \mathcal{H}_{\rn{2}} = \dfrac{1}{2G}(\tau_{p} - \tau_{r})^2 \end{array}\right\} \;\text{if}\;\; \text{intact}\,. \end{align} In this work, we treat $\sigma_{p}$ as a constant material property, but consider $\tau_{p}$ a function of the contact normal pressure. Specifically, we set $\tau_{p} = c_{0} + p_{\mathrm{N}}\tan \phi$, where $c_{0}$ and $\phi$ denote the cohesion and the friction angle, respectively. For simplicity, we assume that the peak and residual friction angles are the same and calculate the residual shear strength as $\tau_{r}=p_{\mathrm{N}}\tan\phi$. This assumption can be easily relaxed as in Fei and Choo~\cite{fei2020phaseshear}. \subsubsection*{Open condition} Next, we consider the case in which a crack develops under an open contact condition. 
To determine the dominant fracturing mode in this case, we need to evaluate $\mathcal{F}$ given in Eq.~\eqref{eq:F-criterion-phase-field}, and hence $\mathcal{H}_{\rn{1}}$ and $\mathcal{H}_{\rn{2}}$ therein. To this end, we only have to consider the strain energy density, $\psi^{\mathrm{e}}$, because an open crack is frictionless ($\psi^{\mathrm{f}}=0$) and other energy terms ($\psi^{\mathrm{d}}$ and $\psi^{\mathrm{b}}$) are unrelated to the crack driving forces. To compute $\psi^\mathrm{e}$ during post-peak fracturing, we integrate its rate form in Eq. \eqref{eq:strain-energy-density} from the peak stresses, as \begin{align} \psi^\mathrm{e} = \dfrac{1}{2}\bar{\tensor{\stress}}^{-}:\tensor{\strain} + g_{\rn{1}}(d_{\rn{1}}) \left[ W^{+}_{\rn{1}} \bigr \rvert_{t_{p,{\rn{1}}}} + \dfrac{1}{2}\left(\bar{\tensor{\stress}}^{+}_{\rn{1}}:\tensor{\strain} \right) \bigr \rvert^{t}_{t_{p,{\rn{1}}}} \right] + g_{\rn{2}}(d_{\rn{2}}) \left[ W^{+}_{\rn{2}} \bigr \rvert_{t_{p,{\rn{2}}}} + \dfrac{1}{2} \left(\bar{\tensor{\stress}}^{+}_{\rn{2}}:\tensor{\strain} \right) \bigr \rvert^{t}_{t_{p,{\rn{2}}}}\right] \, . \label{eq:strain-energy-open-loading-integral} \end{align} Here, $t_{p,{\rn{1}}}$ and $t_{p,{\rn{2}}}$ denote the time instances when $\sigma_{p}$ and $\tau_{p}$ are reached, respectively, and \begin{align} W^{+}_{\rn{1}}\rvert_{t_{p,{\rn{1}}}} = \dfrac{1}{2M} \sigma^2_{p} \, , \quad W^{+}_{\rn{2}}\rvert_{t_{p,{\rn{2}}}} = \dfrac{1}{2G} \tau^2_{p} \, , \end{align} are the strain energies relevant to modes \rn{1} and \rn{2} fracturing, respectively, at the corresponding peak stresses. Plugging the above expressions and Eqs.~\eqref{eq:mode-1-stress},~\eqref{eq:mode-2-stress}, and \eqref{eq:undamaged-stress} into Eq. \eqref{eq:strain-energy-open-loading-integral}, we obtain \begin{align} \psi^\mathrm{e} = \dfrac{1}{2} \bar{\tensor{\stress}}:\tensor{\strain} - [1 - g_{\rn{1}}(d_{\rn{1}})] \dfrac{1}{2M}\bar{\sigma}^2_{nn} - [1 - g_{\rn{2}}(d_{\rn{2}})] \dfrac{1}{2G}\bar{\tau}^2 \, . \label{eq:strain-energy-open-loading} \end{align} By definition, $\mathcal{H}_{\rn{1}}$ and $\mathcal{H}_{\rn{2}}$ must be the terms multiplied by $[1 - g_{\rn{1}}(d_{\rn{1}})]$ and $[1 - g_{\rn{2}}(d_{\rn{2}})]$, respectively. Therefore, \begin{align} \mathcal{H}_{\rn{1}} &= \dfrac{1}{2M}\bar{\sigma}^2_{nn} \,, \label{eq:H1-open-loading}\\ \mathcal{H}_{\rn{2}} &= \dfrac{1}{2G}\bar{\tau}^2 \, . \label{eq:H2-open-loading} \end{align} Substituting Eqs.~\eqref{eq:H1-open-loading} and~\eqref{eq:H2-open-loading} into the $\mathcal{F}$-criterion~\eqref{eq:F-criterion-phase-field}, we get \begin{align} \mathcal{F}(\tensor{\strain}, \theta) = \dfrac{\bar{\sigma}^{2}_{nn}}{2M \mathcal{G}_{\rn{1}}} + \dfrac{\bar{\tau}^2}{2G\mathcal{G}_{\rn{2}}} \, . \label{eq:F-open} \end{align} In terms of the principal strains and $\theta$, the above equation can be rewritten as \begin{align} \mathcal{F}(\tensor{\strain}, \theta) = \dfrac{\left[\lambda\left(\varepsilon_1 \sin^2 \theta + \varepsilon_3 \cos^2 \theta \right) + M\left(\varepsilon_1\cos^2 \theta + \varepsilon_3 \sin^2 \theta \right) \right]^2}{2M\mathcal{G}_{\rn{1}}} + \dfrac{2G\left(\varepsilon_1 - \varepsilon_3 \right)^2\cos^2\theta \sin^2 \theta}{\mathcal{G}_{\rn{2}}} \, , \label{eq:F-open-principal} \end{align} where $\varepsilon_1$ and $\varepsilon_3$ denote the major and minor principal strains.
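Before proceeding to the analytical maximization, Eq.~\eqref{eq:F-open-principal} can also be checked numerically. The short Python sketch below evaluates $\mathcal{F}(\tensor{\strain},\theta)$ on a grid of $\theta$ for an illustrative tensile strain state (the parameter and strain values are arbitrary choices for illustration, not data from this work) and returns the maximizing angle, which for this state is $\theta=0$, consistent with the result derived next.
\begin{verbatim}
import numpy as np

def F_open(theta, eps1, eps3, K, G, Gc1, Gc2):
    """F(theta) for an open crack in terms of the principal strains."""
    M, lam = K + 4.0 / 3.0 * G, K - 2.0 / 3.0 * G
    s, c = np.sin(theta), np.cos(theta)
    H1 = (lam * (eps1 * s**2 + eps3 * c**2)
          + M * (eps1 * c**2 + eps3 * s**2))**2 / (2.0 * M)
    H2 = 2.0 * G * (eps1 - eps3)**2 * c**2 * s**2
    return H1 / Gc1 + H2 / Gc2

# Illustrative values only (SI units), not data from this work.
K, G, Gc1, Gc2 = 2.8e9, 2.6e9, 16.0, 205.0
eps1, eps3 = 1.0e-3, -2.0e-4
thetas = np.linspace(-np.pi / 2.0, np.pi / 2.0, 2001)
theta_star = thetas[np.argmax(F_open(thetas, eps1, eps3, K, G, Gc1, Gc2))]
# Approximately 0 degrees: crack normal aligns with the major principal direction.
print(np.round(np.degrees(theta_star), 6))
\end{verbatim}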
Then, to find $\theta$ that maximizes $\mathcal{F}$, we take the partial derivative of $\mathcal{F}$ with respect to $\theta$ as \begin{align} \dfrac{\partial \mathcal{F}(\tensor{\strain}, \theta)}{\partial \theta} = \dfrac{2G\left(\lambda + G \right)}{M \mathcal{G}_{\rn{1}}} \left(\varepsilon^2_3 - \varepsilon^2_1 \right)\sin 2\theta + G(\varepsilon_1 - \varepsilon_3)^2 \sin 4\theta \left(\dfrac{1}{\mathcal{G}_{\rn{2}}} - \dfrac{G}{M\mathcal{G}_{\rn{1}}} \right) \, . \end{align} This derivative becomes zero when $\theta = 0$. This means that under an open condition, the crack should develop such that the crack normal direction is the same as the major principal direction. When the crack direction is determined in this way, $\mathcal{H}_{{\rn{2}}}=0$ because no shear stress exists on the principal plane, {\it i.e.}~$\bar{\tau}=0$ when $\theta=0$. Therefore, \begin{align} \left.\begin{array}{ll} \mathcal{H}_{\rn{1}} = \dfrac{1}{2M} \bar{\sigma}_{nn}^{2} \\ [0.5em] \mathcal{H}_{\rn{2}} = 0 \end{array}\right\} \;\text{if}\;\; \text{open}\,. \end{align} In other words, a mode \rn{2} crack does not grow ({\it i.e.}~$\dot{d_{\rn{2}}}=0$) under open conditions. Note that in this case, $\bar{\sigma}_{nn}$ should be equal to the major principal undamaged stress, $\bar{\sigma}_{1}$, because $\theta=0$. \subsubsection*{Stick condition} We now shift our focus to a material point that has a closed crack under a stick condition. In this case, $\bar{\tensor{\stress}}^{+}_{\rn{1}}=\bar{\tensor{\stress}}^{+}_{\rn{2}}=\tensor{0}$, and thus $\partial\psi^\mathrm{e}/\partial d_{\rn{1}}=\partial\psi^\mathrm{e}/\partial d_{\rn{2}}=0$. The frictional energy is zero as well ({\it i.e.}~$\psi^{\mathrm{f}}=0$). Therefore, \begin{align} \left.\begin{array}{ll} \mathcal{H}_{\rn{1}} = 0 \\ [0.5em] \mathcal{H}_{\rn{2}} = 0 \end{array}\right\} \;\text{if}\;\; \text{stick}\,. \end{align} So neither mode \rn{1} nor mode \rn{2} crack grows ({\it i.e.}~$\dot{d_{\rn{1}}}=0$ and $\dot{d_{\rn{2}}}=0$) when there is no relative motion between the two crack surfaces. This result is also physically intuitive because a material with a perfectly sticky crack behaves like an undamaged material. \subsubsection*{Slip condition} Lastly, we consider the case when the material point has a closed crack undergoing slip. In this case, it can be easily shown that $\mathcal{H}_{\rn{1}}=0$, because $\partial\psi^\mathrm{e}/\partial d_{\rn{1}}=0$ and $\partial\psi^\mathrm{f}/\partial d_{\rn{1}}=0$ when the crack is sliding. Therefore, maximizing $\mathcal{F}$ in Eq.~\eqref{eq:F-criterion-phase-field} is equivalent to maximizing $\mathcal{H}_{\rn{2}}$ in slip conditions. This means that $\mathcal{H}_{\rn{2}}$ can be derived in the exact same way as in the phase-field model for shear fracture~\cite{fei2020phaseshear}. Below we briefly recap the derivation of $\mathcal{H}_{\rn{2}}$, referring to Fei and Choo~\cite{fei2020phaseshear} for details. We first evaluate the strain energy and frictional energy densities by integrating their rate forms given in Eqs.
\eqref{eq:strain-energy-density} and \eqref{eq:friction-energy-density}, and get \begin{align} \psi^\mathrm{e} &= \psi^\mathrm{e}\rvert_{t_{p}} + \int^{t}_{t_{p}} \bar{\tensor{\stress}}^{-} : \dot{\tensor{\strain}} \, \mathrm{d} t + \int^{t}_{t_{p}} g_{\rn{2}}(d_{\rn{2}}) \bar{\tensor{\stress}}^{+}_{\rn{2}} : \dot{\tensor{\strain}} \, \mathrm{d} t \, , \label{eq:strain-energy-density-slip} \\ \psi^\mathrm{f} &= \int^{t}_{t_{p}}[1 - g_{\rn{2}}(d_{\rn{2}})] \tau_{r} \dot{\gamma} \, \mathrm{d} t \, . \label{eq:friction-energy-density-slip} \end{align} Taking the partial derivatives of $\psi^\mathrm{e}$ and $\psi^\mathrm{f}$ with respect to $d_{\rn{2}}$, we obtain $\mathcal{H}_{\rn{2}}$ as \begin{align} \mathcal{H}_{\rn{2}} = \mathcal{H}_{\rn{2},t} + \mathcal{H}_\mathrm{slip}\,, \;\; \mbox{with} \;\; \mathcal{H}_\mathrm{slip} := \int_{\gamma_{p}}^\gamma (\bar{\tau} - \tau_{r}) \: \mathrm{d} \gamma \, . \label{eq:slip-H-mode-2} \end{align} Here, $\mathcal{H}_\mathrm{slip}$ denotes the crack driving force accumulated during the post-peak slip process, and $\gamma_{p}$ is the shear strain in the slip direction when $\tau = \tau_{p}$. We note that $\mathcal{H}_\mathrm{slip}$ is expressed in integral form because $\tau_{r}$ is a function of the contact normal pressure. Now, we determine the crack propagation direction by maximizing $\mathcal{F}$ or, equivalently, $\mathcal{H}_{\rn{2}}$ in this case. As derived in Fei and Choo~\cite{fei2020phaseshear}, this eventually boils down to finding $\theta$ such that \begin{align} \theta = \arg \max_\theta [\bar{\tau}(\theta) - \tau_{r}(\theta)] \, . \end{align} When $\tau_{r}=p_{\mathrm{N}}\tan\phi$, we get \begin{align} \theta = 45^\circ - \dfrac{\phi}{2} \label{eq:theta-slip} \,; \end{align} see Fei and Choo~\cite{fei2020phaseshear} for details. Note that this value of $\theta$ is necessary to calculate $\bar{\tau}$ and $\gamma$ in $\mathcal{H}_\mathrm{slip}$. To summarize, \begin{align} \left.\begin{array}{ll} \mathcal{H}_{\rn{1}} = 0 \\ [0.5em] \mathcal{H}_{\rn{2}} = \dfrac{1}{2G}(\tau_{p} - \tau_{r})^2 + \displaystyle\int_{\gamma_{p}}^\gamma (\bar{\tau} - \tau_{r}) \: \mathrm{d} \gamma \end{array}\right\} \;\text{if}\;\; \text{slip}\,. \end{align} As opposed to the previous case of open fracture, a mode \rn{1} crack does not grow ({\it i.e.}~$\dot{d_{\rn{1}}}=0$) in the slip case. This result also agrees well with our physical intuition. \smallskip \revised{ \begin{remark} The present double-phase-field model applies the $\mathcal{F}$-criterion~\cite{shen1994modification} in a largely different way from how previous single-phase-field models ({\it e.g.}~\cite{zhang2017modification,bryant2018mixed}) have used it for mixed-mode fracture. In Zhang {\it et al.}~\cite{zhang2017modification}, the $\mathcal{F}$-criterion is used to calculate an equivalent crack driving force as a weighted average of modes \rn{1} and \rn{2} crack driving forces. With the same equivalent crack driving force, Bryant and Sun~\cite{bryant2018mixed} have further used the $\mathcal{F}$-criterion to determine the fracture direction by solving an optimization problem at the material point level. However, instead of calculating an equivalent crack driving force, here we apply the $\mathcal{F}$-criterion to determine the dominant fracture mode (phase field) and its evolution direction.
The upshot is that the double-phase-field model not only distinguishes between modes \rn{1} and \rn{2} fractures naturally but also calculates the fracturing direction based on the $\mathcal{F}$-criterion without solving an optimization problem. \end{remark} } \subsection{Crack irreversibility} Having derived the modes \rn{1} and \rn{2} crack driving forces under all contact conditions, we can now specify expressions for the modes \rn{1} and \rn{2} reactive microforces, $\pi_{r,{\rn{1}}}$ and $\pi_{r,{\rn{2}}}$, which prevent spurious crack healing. Assuming that modes \rn{1} and \rn{2} cracks do not heal at all, here we set the reactive microforces following the method proposed by Miehe~{\it et al.}~\cite{miehe2010phase}, whereby the crack driving force is replaced by the maximum crack driving force in loading history. \revised{Despite being simple, this history-based method has been shown to provide results fairly similar to those obtained by a more sophisticated and robust algorithm---see, {\it e.g.}~its comparison with an augmented Lagrangian method in Geelen {\it et al.}~\cite{geelen2019phase}. The simplicity and effectiveness of the history-based method are more appreciable for double-phase-field modeling in which two phase fields are subjected to irreversibility constraints.} Applying the history-based method, we define the reactive forces for modes \rn{1} and \rn{2} fractures as \begin{align} \pi_{r,\rn{1}} &= \left\{\begin{array}{ll} 0 & \text{if} \;\; \revised{\dot{d_{\rn{1}}}}>0\,,\\ [0.5em] -g'_{\rn{1}}(d_{\rn{1}})\displaystyle\max_{t \in [0,t]} \mathcal{H}_{\rn{1}}(t) + g'_{\rn{1}}(d_{\rn{1}}) \mathcal{H}_{\rn{1}} & \mbox{if} \;\; \revised{\dot{d_{\rn{1}}}}=0\,, \end{array}\right. \label{eq:reactive-force-mode1}\\ \pi_{r,\rn{2}} &= \left\{\begin{array}{ll} 0 & \mbox{if} \;\; \dot{d_{\rn{2}}}>0\,,\\ [0.5em] -g'_{\rn{2}}(d_{\rn{2}})\displaystyle\max_{t \in [0,t]} \mathcal{H}_{\rn{2}}(t) & \mbox{if} \;\; \dot{d_{\rn{2}}}=0\,, \end{array}\right. \label{eq:reactive-force-mode2} \end{align} where $t$ denotes the current time instance. Note that the last term in Eq.~\eqref{eq:reactive-force-mode1} is added because $\mathcal{H}_{\rn{1}}$ can be positive under an open condition. Eq.~\eqref{eq:reactive-force-mode2} \revised{is} the same as that in the phase-field model for frictional shear fracture~\cite{fei2020phaseshear}. \subsection{Degradation functions for modes \rn{1} and \rn{2} fractures} To complete the formulation, we introduce specific forms of the degradation functions for modes \rn{1} and \rn{2} fractures, $g_{\rn{1}}(d_{\rn{1}})$ and $g_{\rn{2}}(d_{\rn{2}})$, respectively. Particularly, we adopt $g_{\rn{1}}(d_{\rn{1}})$ from the phase-field model for cohesive tensile fracture~\cite{wu2017unified}, given by \begin{align} g_{\rn{1}}(d_{\rn{1}}) = \dfrac{\left(1 - d_{\rn{1}} \right)^n}{\left(1 - d_{\rn{1}} \right)^n + m_{\rn{1}}d_{\rn{1}}\left(1 - p d_{\rn{1}} \right)} \, , \;\;\mbox{with}\;\; m_{\rn{1}} := \dfrac{\mathcal{G}_{\rn{1}}}{\pi L}\dfrac{1}{\mathcal{H}_{\rn{1},t}} \, , \end{align} and $g_{\rn{2}}(d_{\rn{2}})$ from the phase-field model for frictional shear fracture~\cite{fei2020phaseshear}, given by \begin{align} g_{\rn{2}}(d_{\rn{2}}) = \dfrac{\left(1 - d_{\rn{2}} \right)^n}{\left(1 - d_{\rn{2}} \right)^n + m_{\rn{2}}d_{\rn{2}}\left(1 - p d_{\rn{2}} \right)} \, , \;\;\mbox{with}\;\; m_{\rn{2}} := \dfrac{\mathcal{G}_{\rn{2}}}{\pi L}\dfrac{1}{\mathcal{H}_{\rn{2},t}} \, , \end{align} where $n$ and $p$ are parameters controlling post-peak softening responses.
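For concreteness, the two degradation functions can be coded as a single routine. The Python sketch below (our own helper names; a minimal illustration rather than the code used in this work) evaluates $g(d)$ with $m$ computed from the corresponding fracture energy, threshold crack driving force, and length parameter.
\begin{verbatim}
import numpy as np

def degradation(d, Gc, H_t, L, n=2.0, p=-0.5):
    """Quasi-brittle degradation function used for both modes:
    g(d) = (1-d)^n / [ (1-d)^n + m d (1 - p d) ],
    with m = Gc / (pi * L * H_t)."""
    m = Gc / (np.pi * L * H_t)
    return (1.0 - d)**n / ((1.0 - d)**n + m * d * (1.0 - p * d))

# Mode I uses H_t = sigma_p^2 / (2M); mode II uses H_t = (tau_p - tau_r)^2 / (2G),
# which depends on the local contact normal pressure.
\end{verbatim}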
In this work, we use a standard choice of $n = 2$ and $p = -0.5$. \section{Discretization and algorithms} \label{sec:discretization} In this section, we describe how to numerically solve the proposed double-phase-field formulation using \revised{a nonlinear finite element method.} \subsection{Unified expressions for crack driving forces considering crack irreversibility} To simplify the succeeding formulations, let us first unify the expressions for the crack driving forces and the reactive microforces under different contact conditions. We begin this by merging the intact condition into either the open or the stick condition. When $\bar{\sigma}_{nn}> 0$, the intact condition can be combined with the open condition, because $\mathcal{H}_{\rn{1}}=\mathcal{H}_{\rn{1},t}$ initially. Likewise, when $\bar{\sigma}_{nn} \leq 0$, the intact condition can be integrated with the stick condition, as $\mathcal{H}_{\rn{2}}=\mathcal{H}_{\rn{2},t}$ initially. These intact and stick conditions can be distinguished from the slip condition based on the value of $f$, by setting $\tau_\mathrm{Y}$ in $f$ as follows: $\tau_\mathrm{Y}=\tau_{p}$ for an intact material, and $\tau_\mathrm{Y} = \tau_{r}$ for a damaged material. This way allows us to identify all possible conditions based on the values of $\bar{\sigma}_{nn}$ and $f$. Then, we define the combined crack driving and reactive forces for mode \rn{1} and \rn{2} fractures, $\mathcal{H}_{\rn{1}}^{+}$ and $\mathcal{H}_{\rn{2}}^{+}$, respectively, as \begin{align} \mathcal{H}_{\rn{1}}^{+} &= \left \{ \begin{array}{ll} \max \left \{\mathcal{H}_{\rn{1},t}, \, \dfrac{1}{2M} \left[ \displaystyle \max_{t \in [0,t]} \bar{\sigma}_{nn}(t) \right]^2 \right\} & \mbox{if} \;\; \bar{\sigma}_{nn} > 0 \, , \\ [0.5 em] \displaystyle\max_{t \in [0,t]} \mathcal{H}_{\rn{1}}(t) & \mbox{if} \; \; \bar{\sigma}_{nn} \leq 0 \; \; \mbox{and} \; \; f < 0 \, , \\ [0.5 em] \displaystyle\max_{t \in [0,t]} \mathcal{H}_{\rn{1}}(t) & \mbox{if} \;\; \bar{\sigma}_{nn} \leq 0 \;\; \mbox{and} \;\; f = 0 \,, \end{array} \right. \label{eq:H-mode-1} \\ \mathcal{H}_{\rn{2}}^{+} &= \left \{ \begin{array}{ll} \displaystyle\max_{t \in [0,t]} \mathcal{H}_{\rn{2}}(t) & \mbox{if} \;\; \bar{\sigma}_{nn} > 0 \, , \\ [0.5 em] \displaystyle\max_{t \in [0,t]} \mathcal{H}_{\rn{2}}(t) & \mbox{if} \; \; \bar{\sigma}_{nn} \leq 0 \; \; \mbox{and} \; \; f < 0 \, , \\ [0.5 em] \mathcal{H}_{{\rn{2}},t} + \mathcal{H}_\mathrm{slip} & \mbox{if} \;\; \bar{\sigma}_{nn} \leq 0 \;\; \mbox{and} \;\; f = 0 \,. \end{array} \right. \label{eq:H-mode-2} \end{align} Note that we update $\mathcal{H}_{\rn{1}}^{+}$ only when $\bar{\sigma}_{nn} > 0$, and $\mathcal{H}_{\rn{2}}^{+}$ only when $\bar{\sigma}_{nn} \leq 0$ and $f=0$. \subsection{Problem statement} Let us denote by $\hat{\tensor{u}}$ and $\hat{\tensor{t}}$ the prescribed displacement and traction boundary conditions, respectively, and by $\tensor{u}_0$, $d_{\rn{1}0}$ and $d_{\rn{2}0}$ the initial displacement field and the initial mode \rn{1} and \rn{2} phase fields, respectively. The time domain is denoted by $\mathbb{T}:=(0,t_{\mathrm{max}}]$. 
The strong form of the problem can then be stated as follows: find $\tensor{u}$, $d_{\rn{1}}$ and $d_{\rn{2}}$ that satisfy \begin{align} \diver \tensor{\stress} + \rho \tensor{g} &= \tensor{0} \quad \mbox{in} \quad \Omega \times \mathbb{T} \, , \\ -g'_{\rn{1}} (d_{\rn{1}}) \mathcal{H}_{\rn{1}}^{+} + \dfrac{\mathcal{G}_{\rn{1}}}{\pi L} \left(2L^2 \diver \grad d_{\rn{1}} - 2 + 2d_{\rn{1}} \right) &= 0 \quad \mbox{in} \quad \Omega \times \mathbb{T} \, , \\ -g'_{\rn{2}} (d_{\rn{2}}) \mathcal{H}_{\rn{2}}^{+} + \dfrac{\mathcal{G}_{\rn{2}}}{\pi L} \left(2L^2 \diver \grad d_{\rn{2}} - 2 + 2d_{\rn{2}} \right) &= 0 \quad \mbox{in} \quad \Omega \times \mathbb{T} \, , \end{align} subject to boundary conditions \begin{align} \tensor{u} = \hat{\tensor{u}} \quad &\mbox{on} \quad \partial_{u} \Omega \times \mathbb{T} \,, \\ \tensor{\stress} \cdot \tensor{v} = \hat{\tensor{t}} \quad &\mbox{on} \quad \partial_{t} \Omega \times \mathbb{T} \,, \\ \grad d_{\rn{1}} \cdot \tensor{v} = 0 \quad &\mbox{on} \quad \partial \Omega \times \mathbb{T} \,, \\ \grad d_{\rn{2}} \cdot \tensor{v} = 0 \quad &\mbox{on} \quad \partial \Omega \times \mathbb{T} \,, \end{align} with $\tensor{v}$ denoting the outward unit normal vector at the boundary, and initial conditions \begin{align} \tensor{u} \rvert_{t = 0} = \tensor{u}_0 \quad &\mbox{in} \quad \overline{\Omega}\,, \\ d_{\rn{1}}\rvert_{t = 0} = d_{{\rn{1}}0} \quad &\mbox{in} \quad \overline{\Omega}\,, \\ d_{\rn{2}} \rvert_{t = 0} = d_{{\rn{2}}0} \quad &\mbox{in} \quad \overline{\Omega}\,, \end{align} where $\overline{\Omega} := \overline{\Omega \cup \partial \Omega}$. \subsection{Finite element discretization} To begin finite element discretization, we define the trial function spaces for $\tensor{u}$, $d_{\rn{1}}$ and $d_{\rn{2}}$ as \begin{align} \mathcal{S}_u &:= \left \{ \tensor{u} \; \rvert \; \tensor{u} \in H^1, \, \tensor{u} = \hat{\tensor{u}} \; \mbox{on} \; \partial_u \Omega \right\} , \\ \mathcal{S}_{d_{\rn{1}}} &:= \left \{d_{\rn{1}} \; \rvert \; d_{\rn{1}} \in H^1 \right\} , \\ \mathcal{S}_{d_{\rn{2}}} &:= \left \{d_{\rn{2}} \; \rvert \; d_{\rn{2}} \in H^1 \right \} , \end{align} where $H^{1}$ denotes a Sobolev space of order one. Accordingly, the weighting function spaces are defined as \begin{align} \mathcal{V}_{u} &:= \left\{\tensor{\eta} \; \rvert \; \tensor{\eta} \in H^1, \, \tensor{\eta} = \tensor{0} \;\mbox{on} \; \partial_u \Omega \right\} , \\ \mathcal{V}_{d_{\rn{1}}} &:= \left\{\phi_{\rn{1}} \; \rvert \; \phi_{\rn{1}} \in H^1 \right\} , \\ \mathcal{V}_{d_{\rn{2}}} &:= \left\{\phi_{\rn{2}} \; \rvert \; \phi_{\rn{2}} \in H^1 \right\} . 
\end{align} Applying the standard weighted residual procedure, we obtain the following variational equations: \begin{align} R_{u} &:= - \int_\Omega \symgrad \tensor{\eta} : \tensor{\stress} \: \mathrm{d} V + \int_\Omega \rho \tensor{\eta} \cdot \tensor{g} \: \mathrm{d} V + \int_{\partial_t \Omega} \tensor{\eta} \cdot \hat{\tensor{t}} \: \mathrm{d} A = 0 \, , \label{eq:variational-momentum} \\ R_{d_{\rn{1}}} &:= \int_\Omega \phi_{\rn{1}} g'_{\rn{1}} (d_{\rn{1}}) \mathcal{H}_{\rn{1}}^{+} \: \mathrm{d} V + \int_\Omega \dfrac{\mathcal{G}_{\rn{1}}}{\pi L} \left(2L^2 \grad \phi_{\rn{1}} \cdot \grad d_{\rn{1}} + 2\phi_{\rn{1}} - 2\phi_{\rn{1}}d_{\rn{1}}\right) \mathrm{d} V = 0 \, , \label{eq:variational-microforce-mode-1} \\ R_{d_{\rn{2}}} &:= \int_\Omega \phi_{\rn{2}} g'_{\rn{2}} (d_{\rn{2}}) \mathcal{H}_{\rn{2}}^{+} \: \mathrm{d} V + \int_\Omega \dfrac{\mathcal{G}_{\rn{2}}}{\pi L} \left(2L^2 \grad \phi_{\rn{2}} \cdot \grad d_{\rn{2}} + 2\phi_{\rn{2}} - 2\phi_{\rn{2}}d_{\rn{2}}\right) \mathrm{d} V = 0 \, . \label{eq:variational-microforce-mode-2} \end{align} Here, we have defined the variational equations as residuals to solve them using Newton's method. The rest of the finite element procedure is straightforward, so we omit it for brevity. Standard linear elements are used for all the field variables. \subsection{Solution strategy} To solve the discrete versions of variational equations \eqref{eq:variational-momentum}, \eqref{eq:variational-microforce-mode-1}, and \eqref{eq:variational-microforce-mode-2}, we use a staggered scheme, which has been commonly used since it was proposed by Miehe {\it et al.}~\cite{miehe2010phase}. Specifically, we first solve Eq.~\eqref{eq:variational-momentum} for $\tensor{u}$ fixing $d_{\rn{1}}$ and $d_{\rn{2}}$, then update the crack driving forces $\mathcal{H}_{\rn{1}}^{+}$ and $\mathcal{H}_{\rn{2}}^{+}$, and finally solve Eqs.~\eqref{eq:variational-microforce-mode-1} and \eqref{eq:variational-microforce-mode-2} for $d_{\rn{1}}$ and $d_{\rn{2}}$ fixing $\mathcal{H}_{\rn{1}}^{+}$ and $\mathcal{H}_{\rn{2}}^{+}$. Provided that the load step size is small enough, this staggered scheme significantly improves the robustness of the numerical solution without much compromise in accuracy. \revised{Also, a single staggered iteration may be sufficient for practical purposes, as will be demonstrated later through a numerical example. It is noted that other multi-phase-field models~\cite{nguyen2017multi,na2018computational,bleyer2018phase,dean2020multi} have also used staggered solution schemes.} \revised{Because the formulation is incrementally nonlinear, we use Newton's method to solve each stage in a staggered iteration.
To solve for $\bm{u}$ in the first stage, we linearize Eq.~\eqref{eq:variational-momentum} as ($\delta$ denoting the linearization operator) \begin{align} \delta R_{u} = \int_\Omega \symgrad \bm{\eta} : \mathbb{C} : \symgrad \delta\bm{u} \: \mathrm{d} V\,, \end{align} and to solve for $d_{\rn{1}}$ and $d_{\rn{2}}$ in the second stage, we linearize Eqs.~\eqref{eq:variational-microforce-mode-1} and~\eqref{eq:variational-microforce-mode-2} as \begin{align} \delta R_{d_{\rn{1}}} &= \int_\Omega \phi_{\rn{1}} g''_{\rn{1}} (d_{\rn{1}}) \mathcal{H}_{\rn{1}}^{+}\, \delta d_{\rn{1}} \: \mathrm{d} V + \int_\Omega \dfrac{\mathcal{G}_{\rn{1}}}{\pi L} \left(2L^2 \grad \phi_{\rn{1}} \cdot \grad \delta d_{\rn{1}} - 2\phi_{\rn{1}}\,\delta d_{\rn{1}} \right) \mathrm{d} V\,, \\ \delta R_{d_{\rn{2}}} &= \int_\Omega \phi_{\rn{2}} g''_{\rn{2}} (d_{\rn{2}}) \mathcal{H}_{\rn{2}}^{+}\, \delta d_{\rn{2}} \: \mathrm{d} V + \int_\Omega \dfrac{\mathcal{G}_{\rn{2}}}{\pi L} \left(2L^2 \grad \phi_{\rn{2}} \cdot \grad \delta d_{\rn{2}} - 2\phi_{\rn{2}}\,\delta d_{\rn{2}} \right) \mathrm{d} V\,. \end{align} It is noted that $\mathcal{H}_{\rn{1}}^{+}$ and $\mathcal{H}_{\rn{2}}^{+}$ are not linearized because they are fixed during the phase-field solution stage.} Algorithm \ref{algo:material-update} presents a procedure to update internal variables at a material/quadrature point \revised{during a Newton iteration.} Here, known quantities at the previous load step are denoted with subscript $(\cdot)_{n-1}$, whereas \revised{unknown quantities requiring updates} are written without an additional subscript for brevity. The procedure essentially extends the predictor--corrector algorithm of the phase-field model for shear fracture~\cite{fei2020phaseshear} to accommodate the open contact condition. Importantly, one can see that the present model treats all the contact conditions without any algorithm for imposing contact constraints. This feature is the main advantage of the double-phase-field model from the numerical viewpoint. Several aspects of the algorithm may deserve elaboration. First, the crack driving forces, $\mathcal{H}_{\rn{1}}^{+}$ and $\mathcal{H}_{\rn{2}}^{+}$, of an initially intact material point ($d_{{\rn{1}}0}=d_{{\rn{2}}0}=0$) should be initialized by their threshold values, $\mathcal{H}_{\rn{1},t}$ and $\mathcal{H}_{\rn{2},t}$, respectively, to prevent fracturing in the elastic region. Second, because the potential fracture direction is unknown \textit{a priori}, we first evaluate $\theta$ using the undamaged major principal stress, $\bar{\sigma}_{1}$ (Line 2), considering that $\bar{\sigma}_{1}=\bar{\sigma}_{nn}$ under an open condition. Third, the stress tensor under a slip condition is obtained by enforcing $f=0$ (Line 20), similar to the return mapping algorithm in plasticity. Fourth, in Line 20, the residual strength, $\tau_{r}$, is evaluated explicitly from the previous time step, as in the frictional shear fracture model~\cite{fei2020phaseshear}. This semi-implicit update greatly simplifies the stress--strain tangent, $\mathbb{C}$, without much compromise in accuracy. Lastly, unlike the original algorithm for shear fracture~\cite{fei2020phaseshear}, $g_{\rn{2}}(d_{\rn{2}})$ is not updated when $d_{\rn{2}} = 0$ and $f<0$. This is because the friction angles for the peak and residual strengths are assumed to be the same in this work. If the peak and residual friction angles are considered different, $g_{\rn{2}}(d_{\rn{2}})$ needs to be updated as explained in Fei and Choo~\cite{fei2020phaseshear}.
This modification is straightforward. \begin{algorithm}[h!] \setstretch{1.25} \caption{Material point update procedure for the double-phase-field model for mixed-mode fracture} \begin{algorithmic}[1] \Require $\tensor{\strain}$, $d_{\rn{1}}$ and $d_{\rn{2}}$. \Ensure $\tensor{\stress}$, $\mathbb{C}$, $\mathcal{H}^{+}_{\rn{1}}$ and $\mathcal{H}^{+}_{\rn{2}}$. \State Calculate $\bar{\tensor{\stress}} = \bar{\mathbb{C}}:\tensor{\strain}$ and $\bar{\sigma}_{1}$. \State Set $\theta = 0^\circ$ if $\bar{\sigma}_{1} > 0$; otherwise, set $\theta = 45^\circ - \phi/2$. \State Calculate $\tensor{n}$, $\tensor{m}$, and $\tensor{s}$ from $\theta$. \State Calculate $\tensor{\alpha}$ from $\tensor{n}$ and $\tensor{m}$. \State Calculate $\bar{\sigma}_{nn} = \bar{\tensor{\stress}}:\left(\tensor{n} \dyad \tensor{n} \right)$. \If {$\bar{\sigma}_{nn} > 0$} \State Open condition. \State Update $\tensor{\stress} = \bar{\tensor{\stress}} - \left[ 1 - g_{\rn{1}}(d_{\rn{1}}) \right]\{\bar{\sigma}_{nn} (\tensor{n} \dyad \tensor{n}) + (\lambda/M)\bar{\sigma}_{nn} [(\tensor{m} \dyad \tensor{m}) + (\tensor{s} \dyad \tensor{s})]\}$. \State Update $\mathbb{C} = \bar{\mathbb{C}} - \left[1 - g_{\rn{1}}(d_{\rn{1}}) \right] \left\{ \left(\tensor{n} \dyad \tensor{n} \right)+ \left(\lambda /M \right)[ (\tensor{m} \dyad \tensor{m} ) + (\tensor{s} \dyad \tensor{s} )]\right\} \dyad \left\{ M\left(\tensor{n} \dyad \tensor{n} \right) + \lambda [(\tensor{m} \dyad \tensor{m} ) + (\tensor{s} \dyad \tensor{s} )] \right\}$. \State Update $\mathcal{H}^{+}_{\rn{1}} = \max \left[\bar{\sigma}^2_{nn}/(2M),\, \left(\mathcal{H}^{+}_{\rn{1}} \right)_{n-1} \right]$. \State Set $\mathcal{H}^{+}_{\rn{2}} = \left( \mathcal{H}^{+}_{\rn{2}} \right)_{n-1}$. \Else \State Calculate $\bar{\tau} = (1/2)\bar{\tensor{\stress}}:\tensor{\alpha}$ and $p_{\mathrm{N}} = - \bar{\sigma}_{nn}$. \State Set $\tau_\mathrm{Y} = c_{0} + p_{\mathrm{N}} \tan \phi$ if $d_{\rn{2}} = 0$; otherwise, set $\tau_\mathrm{Y} = p_{\mathrm{N}} \tan \phi$. \State Evaluate $f = \lvert \bar{\tau} \rvert - \tau_\mathrm{Y}$. \If {$f < 0$} \State Stick condition. \State Update $\tensor{\stress} = \bar{\tensor{\stress}}$. \State Update $\mathbb{C} = \bar{\mathbb{C}}$. \State Set $\mathcal{H}^{+}_{\rn{2}} = \left( \mathcal{H}^{+}_{\rn{2}} \right)_{n-1}$. \Else \State Slip condition. \State Update $\tensor{\stress} = \bar{\tensor{\stress}} - [1 - g_{\rn{2}}(d_{\rn{2}})][\bar{\tau} - (\tau_{r})_{n-1}]\tensor{\alpha}$, where $(\tau_{r})_{n-1} := (p_{\mathrm{N}})_{n-1} \tan\phi$. \State Update $\mathbb{C} = \bar{\mathbb{C}} - [1 - g_{\rn{2}}(d_{\rn{2}})]G(\tensor{\alpha}\dyad\tensor{\alpha})$. \State Update $\mathcal{H}^{+}_{\rn{2}} = \left(\mathcal{H}^{+}_{\rn{2}}\right)_{n-1} + (\bar{\tau} - \tau_r)\Delta \gamma$, where $\tau_r = p_{\mathrm{N}} \tan\phi$ and $\Delta\gamma := (\tensor{\strain} - \tensor{\strain}_{n-1}):\tensor{\alpha}$. \EndIf \State Set $\mathcal{H}^{+}_{\rn{1}} = \left( \mathcal{H}^{+}_{\rn{1}} \right)_{n-1}$. \EndIf \end{algorithmic} \label{algo:material-update} \end{algorithm} \section{Validation} \label{sec:validation} In this section, we validate the proposed double-phase-field model with experimental data on mixed-mode fracture in rocks. Before simulating mixed-mode fracture, we have verified that the double-phase-field model degenerates into a cohesive tensile model and a frictional shear model under pure mode \rn{1} and mode \rn{2} problems, respectively. These verification results are omitted for brevity. 
Also, we do not repeat discussions pertaining to the numerical aspects of the original phase-field models combined in this work ({\it e.g.}~mesh and length sensitivity); we refer to Wu~\cite{wu2017unified} and Fei and Choo~\cite{fei2020phaseshear} for discussions on such topics. By doing so, we fully focus on new aspects that arise from the double-phase-field formulation for mixed-mode fracture. To validate the model, we simulate the uniaxial compression tests of Wong~\cite{wong2008}, Bobet and Einstein~\cite{bobet1998fracture} and Wong and Einstein~\cite{wong2009crack-a} on gypsum specimens with preexisting flaw(s), whereby various mixed-mode cracking patterns are characterized under different flaw configurations. Emulating the experimental setup, we consider 76.2 mm wide and 152.4 mm tall rectangular specimens with a single or double flaws. The flaw configuration of each specimen will be described later. Table~\ref{tab:double-flaws-parameters} presents the material parameters used in the simulation. Among these parameters, the elasticity parameters ($K$ and $G$) and the tensile strength ($\sigma_p$) are directly adopted from their values measured from the gypsum specimens in Bobet and Einstein~\cite{bobet1998fracture}. The cohesion strength ($c_0$) and the friction angle ($\phi$) are unavailable from the original experiment, so they are assigned referring to other experiments on molded gypsum specimens~\cite{wei2020physical}. The tensile and shear fracture energies ($\mathcal{G}_{\rn{1}}$ and $\mathcal{G}_{\rn{2}}$) are calibrated to match the coalescence stresses measured in Bobet and Einstein~\cite{bobet1998fracture}. The calibrated values and the mode mixity ratio ($\mathcal{G}_{\rn{2}}/\mathcal{G}_{\rn{1}}$) lie within the ranges of their typical values for rocks~\cite{shen1994modification}. \begin{table}[h!] \centering \begin{tabular}{lllrl} \toprule Parameter & Symbol & Units & Value & Reference \\ \midrule Bulk modulus & $K$ & GPa & 2.84 & Measured in~\cite{bobet1998fracture} \\ Shear modulus & $G$ & GPa & 2.59 & Measured in~\cite{bobet1998fracture} \\ Tensile strength & $\sigma_p$ & MPa & 3.2 & Measured in~\cite{bobet1998fracture} \\ Cohesion strength & $c_{0}$ & MPa & 10.7 & Measured in~\cite{wei2020physical} \\ Friction angle & $\phi$ & deg & 28 & Measured in~\cite{wei2020physical} \\ Mode \rn{1} fracture energy & $\mathcal{G}_{\rn{1}}$ & J/m$^{2}$ & 16 & Calibrated from data in~\cite{bobet1998fracture} \\ Mode \rn{2} fracture energy & $\mathcal{G}_{\rn{2}}$ & J/m$^{2}$ & 205 & Calibrated from data in~\cite{bobet1998fracture} \\ \bottomrule \end{tabular} \caption{Cracking from preexisting flaws: material parameters.} \label{tab:double-flaws-parameters} \end{table} For finite element simulation, we set the phase-field length parameter as $L = 0.2$ mm and refine elements near the preexisting flaw(s) such that their size $h$ satisfies $L/h \geq 5$. \revised{Each specimen is then discretized by around 300,000 quadrilateral elements. (The specific number depends on the flaw configuration.)} The simulation begins by applying a constant displacement rate of $2\times 10^{-3}$ mm on the top boundary. The bottom boundary is supported by rollers except for the left corner which is fixed by a pin for stability. The lateral boundaries are traction free. Gravity is ignored. 
The finite element solutions are obtained using a parallel finite element code for geomechanics~\cite{choo2015stabilized,choo2018large,choo2019stabilized}, which is built on the \verb|deal.II| finite element library~\cite{dealII,dealII91}, \verb|p4est| mesh handling library~\cite{p4est}, and the \verb|Trilinos| project~\cite{trilinos}. \revised{ \subsection{Cracking from a single flaw} To begin, we simulate the cracking process in a single-flawed gypsum specimen, following the experimental setup in Wong~\cite{wong2008}. Figure~\ref{fig:single-flaw-setup} illustrates the geometry and boundary conditions of the problem. The flaw is 12.7 mm long, 1.27 mm wide, and inclined $45^\circ$ from the horizontal. \begin{figure}[h!] \centering \includegraphics[width=0.3\textwidth]{figures/single-flaw-setup.pdf} \caption{Cracking from a single flaw: problem geometry and boundary conditions.} \label{fig:single-flaw-setup} \end{figure} Figure~\ref{fig:single-flaw-results} presents simulation results in comparison with the cracking pattern of a specimen studied in Wong~\cite{wong2008}. It can be seen that the double-phase-field model well reproduces the real cracking process. When $\hat{u}_{y}=-0.40$ mm, tensile wing cracks start to grow from the flaw tips, and later at $\hat{u}_{y}=-0.60$ mm, tensile and shear damages appear. These tensile and shear damages soon develop into full cracks at $\hat{u}_{y}=-0.66$ mm. The final cracking pattern in our numerical simulation is nearly the same as the experimental observation. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figures/single-flaw-45-results.pdf} \caption{Cracking from a single flaw: simulation and experimental results. The experimental result is redrawn from Wong~\cite{wong2008}.} \label{fig:single-flaw-results} \end{figure} For quantitative validation, Fig.~\ref{fig:single-flaw-force-disp} compares the stress--strain curve from numerical simulation with the experimental data of Wong~\cite{wong2008} provided by the author. The stress and strain in the specimen are defined in a nominal manner following the experimental data. The simulation result matches remarkably well with the experimental data, even though none of the material parameters has been calibrated from this particular experiment. Thus, the double-phase-field model has been fully validated, both qualitatively and quantitatively, with the experiment. \begin{figure}[h!] \centering \includegraphics[width=0.55\textwidth]{figures/single-flaw-45-stress-strain.pdf} \caption{Cracking from a single flaw: comparison of the stress--strain curve from numerical simulation with the experimental data of Wong~\cite{wong2008} provided by the author.} \label{fig:single-flaw-force-disp} \end{figure} To strengthen the validity of our numerical results, we repeat the same simulation with different numbers of staggered iterations and compare results in Fig.~\ref{fig:single-flaw-staggered}. One can see that the simulation results are virtually insensitive to the number of staggered iterations, in both qualitative and quantitative senses. It can thus be concluded that as long as the load step size is chosen to be reasonably small, a single iteration is sufficiently accurate. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figures/single-flaw-45-staggered.pdf} \caption{Cracking from a single flaw: comparison of simulation results obtained with different numbers of staggered iterations. 
The phase fields are drawn at $\hat{u}_{y}=-0.66$ mm.} \label{fig:single-flaw-staggered} \end{figure} Before proceeding to other validation examples, we also demonstrate why the double-phase-field model is an essential extension of previous single-phase-field models for quasi-brittle materials~\cite{wu2017unified,fei2020phaseshear} to simulate mixed-mode fracture in rocks. Figure~\ref{fig:single-flaw-comparison} compares simulation results of the same problem obtained by the present double-phase-field model, the single-phase-field model for cohesive tensile fracture~\cite{wu2017unified}, and the single-phase-field model for frictional shear fracture~\cite{fei2020phaseshear}. Clearly, the single-phase-field models cannot reproduce the experimentally-observed cracking pattern presented in Fig.~\ref{fig:single-flaw-results}, even in a qualitative manner. Thus, the present model is a critical achievement for phase-field modeling of mixed-mode fracture in quasi-brittle rocks and other similar materials. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{figures/single-flaw-45-comparison.pdf} \caption{Cracking from a single flaw: comparison of simulation results obtained by the double-phase-field model (Mode \rn{1} \& Mode \rn{2}), the single-phase-field model for cohesive tensile fracture~\cite{wu2017unified} (Mode \rn{1} only), and the single-phase-field model for frictional shear fracture~\cite{fei2020phaseshear} (Mode \rn{2} only).} \label{fig:single-flaw-comparison} \end{figure} } \subsection{Cracking from double flaws} Next, we simulate a variety of mixed-mode fracture processes in double-flawed specimens experimentally studied in Bobet and Einstein~\cite{bobet1998fracture} and Wong and Einstein~\cite{wong2009crack-a}. Figure~\ref{fig:double-flaws-setup} depicts the general setup of specimens prepared according to the original experiments. In all the specimens, the two flaws have the same length and aperture, 12.7 mm and 0.1 mm, respectively, with a continuity ($c$) of 12.7 mm. By contrast, their inclination angle ($\alpha$) and spacing ($w$) are varied between specimens to trigger different types of cracking patterns under compression. In this work, we particularly consider two cases of the inclination angle, $\alpha=45^\circ$ and $\alpha=60^\circ$, which manifested mixed-mode cracking patterns in the experiments. Within the case of $\alpha=45^\circ$, we consider three sub-cases of flaw spacing: $w=0$, $w=a$, and $w=2a$, where $a$ denotes the half flaw length, $6.35$ mm. Within the case of $\alpha=60^\circ$, we consider two sub-cases: $w=0$ and $w=a$. As a result, we simulate a total of five cases. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{figures/double-flaws-setup.pdf} \caption{Cracking from double flaws: problem geometry and boundary conditions. The ligament length stands for the distance between the two flaws.} \label{fig:double-flaws-setup} \end{figure} In what follows, we compare our simulation results with the qualitative and quantitative data from Bobet and Einstein~\cite{bobet1998fracture}. For the cases of zero spacing ($w=0$), Wong and Einstein~\cite{wong2009crack-a} later clarified the nature of the cracks developed in gypsum specimens with the same flaw spacing. For these cases, we will complement the qualitative experimental data with those provided in Wong and Einstein~\cite{wong2009crack-a}. Figure~\ref{fig:45-0-2a-results} presents the simulation and experimental results when $\alpha=45^\circ$ and $w = 0$ mm.
Tensile wing cracks first develop from the flaw tips in a stable manner, and then shear damages grow in between the tips of the two flaws. Eventually, the two flaws are coalesced by a mixed-mode crack, which consists of two coplanar shear cracks bridged by a tensile crack. One can see that the simulation and experimental results are remarkably consistent in terms of the locations, shapes, and modes of the cracks. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figures/double-flaws-45-0-2a-results.pdf} \caption{Cracking from double flaws with $\alpha = 45^\circ$ and $w = 0$ mm: simulation and experimental results. The experimental result is redrawn from Bobet and Einstein~\cite{bobet1998fracture} and Wong and Einstein~\cite{wong2009crack-a}.} \label{fig:45-0-2a-results} \end{figure} Next, Fig.~\ref{fig:45-a-2a-results} compares the simulation and experimental results when the spacing of the two flaws is increased to the half flaw length, $a = 6.35$ mm. The overall cracking process is similar to that in the previous case: tensile wing cracks followed by shear cracks and a secondary tensile crack which coalesce the two preexisting flaws. Unlike the previous case, however, the coalescence crack in this case exhibits a zig-zag pattern. This difference is also fully consistent with the experimental observations. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figures/double-flaws-45-a-2a-results.pdf} \caption{Cracking from double flaws with $\alpha = 45^\circ$ and $w = a = 6.35$ mm: simulation and experimental results. The experimental result is redrawn from Bobet and Einstein~\cite{bobet1998fracture}.} \label{fig:45-a-2a-results} \end{figure} In Fig. \ref{fig:45-2a-2a-results}, we show the simulation and experimental results when the spacing is further increased to the flaw length, $2a = 12.7$ mm. The growth sequence of tensile wing cracks and shear cracks is the same as that in the previous two cases. In the current case, however, the flaws are finally coalesced when a shear crack generated from one flaw links an internal wing crack from the other flaw. This type of crack coalescence, which was not observed when $w/c < 1$, is also highlighted in the experimental study of Bobet and Einstein~\cite{bobet1998fracture}. As can be seen, the proposed phase-field model captures this pattern transition observed in the experiments. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figures/double-flaws-45-2a-2a-results.pdf} \caption{Cracking from double flaws with $\alpha = 45^\circ$ and $w = 2a = 12.7$ mm: simulation and experimental results. The experimental result is redrawn from Bobet and Einstein~\cite{bobet1998fracture}.} \label{fig:45-2a-2a-results} \end{figure} Figures~\ref{fig:60-0-2a-results} and \ref{fig:60-a-2a-results} present the simulation and experimental results when the flaw inclination angle is increased to $60^{\circ}$, for $w = 0$ mm and $w = a = 6.35$ mm, respectively. The geometrical features of the secondary tensile cracks are changed, while the overall cracking patterns and sequences remain analogous to those of the cases of $\alpha = 45^\circ$. The simulated coalescence crack in the case of $w = 0$ mm (Fig.~\ref{fig:60-0-2a-results}) is consistent with the experimental finding of Wong and Einstein~\cite{wong2009crack-a}, in that it is a mixed-mode crack consisting of two shear cracks developed from the inner flaw tips and a tensile crack bridging the shear cracks.
Also in the case of $w=a=6.35$ mm (Fig.~\ref{fig:60-a-2a-results}), a zig-zag coalescence pattern has emerged in both our simulation and the experiment of Bobet and Einstein~\cite{bobet1998fracture}. Therefore, the simulation results are fully consistent with the experimental observations---from the geometry of cracks to the natures of tensile/shear cracks---under all of the flaw configurations. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figures/double-flaws-60-0-2a-results.pdf} \caption{Cracking from double flaws with $\alpha = 60^\circ$ and $w = 0$ mm: simulation and experimental results. The experimental result is redrawn from Bobet and Einstein~\cite{bobet1998fracture} and Wong and Einstein~\cite{wong2009crack-a}.} \label{fig:60-0-2a-results} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figures/double-flaws-60-a-2a-results.pdf} \caption{Cracking from double flaws with $\alpha = 60^\circ$ and $w = a = 6.35$ mm: simulation and experimental results. The experimental result is redrawn from Bobet and Einstein~\cite{bobet1998fracture}.} \label{fig:60-a-2a-results} \end{figure} Further, for quantitative validation, Fig. \ref{fig:double-flaws-coalescence-stresses} compares the coalescence stresses in the simulations and those measured in the experiments of Bobet and Einstein~\cite{bobet1998fracture}. In all cases, the simulation results show excellent agreement with the experimental data. Remarkably, the simulation results can well capture the increasing/decreasing trends of the coalescence stresses as observed from the experiments. \begin{figure}[h!] \centering \includegraphics[width=0.55\textwidth]{figures/double-flaws-stresses.pdf} \caption{Cracking from double flaws: comparison of coalescence stresses in numerical simulation with the experimental data of Bobet and Einstein~\cite{bobet1998fracture}. (See Fig. \ref{fig:double-flaws-setup} for the definition of the ligament length.)} \label{fig:double-flaws-coalescence-stresses} \end{figure} The numerical results in this section have demonstrated that the proposed phase-field model can not only reproduce mixed-mode fracture in the individual cases but also capture the transition of cracking patterns according to change in the flaw configurations. The proposed model has thus been fully validated. \smallskip \revised{ \begin{remark} Some of the above experimental results have also been reproduced by the phase-field model for brittle mixed-mode fracture~\cite{zhang2017modification}. In the brittle model, however, the phase-field length parameter should be restricted to a specific value to match a prescribed tensile strength. Also, the shear strength of the brittle model cannot be controlled. Conversely, in the present quasi-brittle model, one can freely choose the length parameter because it does not affect the tensile and shear strengths of the material. Apart from its physical implications, this feature provides more flexibility to numerical modelers because the length parameter governs the discretization level in phase-field modeling. \end{remark}} \smallskip \begin{remark} Besides validation, the above numerical results have demonstrated that the double-phase-field model allows us to naturally distinguish between tensile and shear cracks. This feature is invaluable to develop a better understanding of mixed-mode cracking processes in rocks. 
One main reason is that accurate experimental characterization of rock cracking processes requires a sophisticated technique ({\it e.g.}~high speed imaging~\cite{wong2009using}) which is difficult, or even impossible, to apply to rocks under in-situ stress conditions. The capability of providing physical insight into mixed-mode fracture without a sophisticated technique is a unique advantage of the double-phase-field formulation. \end{remark} \section{Closure} \label{sec:closure} We have developed a double-phase-field formulation for mixed-mode fracture in \revised{quasi-brittle} rocks, employing two different phase fields to describe \revised{cohesive tensile fracture and frictional shear fracture} individually. The formulation rigorously combines the two phase fields through three approaches: (i) crack-direction-based decomposition of the strain energy into the tensile, shear, and pure compression parts, (ii) contact-dependent calculation of the potential energy, and (iii) energy-based determination of the dominant fracturing mode in each contact condition. In this way, we have successfully coupled two types of phase-field models---one for cohesive tensile fracture and the other for frictional shear fracture---to model mixed-mode fracture in quasi-brittle rocks. The double-phase-field model has been validated to reproduce a variety of mixed-mode fracturing processes in rocks, in both qualitative and quantitative senses. \revised{Compared with the existing phase-field models for mixed-mode fracture in rocks, the double-phase-field model has two standout features. First, it explicitly takes tensile and shear strengths as material parameters independent of the phase-field length parameter, unlike the existing models where the phase-field length controls the strengths. This feature allows one to use experimentally-measured strengths directly without any restriction on the length parameter.} Second, the double-phase-field model can simulate---and naturally distinguish between---tensile and shear fractures without complex algorithms. This feature offers an exceptional opportunity to better understand rock cracking processes that are challenging, or even impossible, to characterize by experiments alone. Examples include crack growth and coalescence in rocks under true-triaxial stress conditions, which are much more difficult to investigate experimentally than those under uniaxial/biaxial stress conditions. We thus believe that the proposed model is an attractive option for both understanding and predicting mixed-mode fracture in rocks. \section*{Acknowledgments} The authors are grateful to Dr. Louis N.Y. Wong for sharing his experimental data and for helpful discussions regarding rock fracture. The authors also thank Dr. Eric C. Bryant for his help with meshing. This work was supported by the Research Grants Council of Hong Kong through Projects 17201419 and 27205918. The first author also acknowledges financial support from a Hong Kong PhD Fellowship.
\section{Introduction}\label{intro} Quantum computers consist of a finite number of two-state systems. The resulting Hilbert space for this quantum system is finite dimensional. Elementary gates are used to build an irreducible algebra of operators on this space that can be used to model complex systems. The goal is to time evolve these systems to solve problems that are difficult to solve classically. This requires time evolving vectors with large numbers of components and measuring the results. At the computational level results are achieved using finite quantum systems. Julian Schwinger's textbook \cite{schwinger} treatment of measurement theory generalizes the standard measurement theoretic treatment of the Stern Gerlach experiment to a system of a finite number of degrees of freedom. It is a natural framework for numerical treatments of path integrals, although these applications are limited on classical computers. Schwinger begins with a quantum observable that has a finite number of possible outcomes. He then constructs a complementary set of unitary operators - one that has the same eigenvectors as the original observable and one that cyclically shifts the eigenvectors. These two unitary operators are finite degree of freedom analogs of the Weyl algebra, which is the exponential form of the canonical commutation relations. In this case there is no identification of these observables with coordinates or momenta. When $M$ can be expressed as a product of prime factors, he shows that this algebra can be decomposed into products of irreducible sub-algebras acting on independent sets of prime numbers of degrees of freedom. When $M=2^L$ these elementary unitary operators can be represented by the q-bit gates $\sigma_x$ and $\sigma_z$. While Schwinger's representation provides a general structure theorem for quantum systems of a finite number of degrees of freedom, there are natural limits that provide models of quantum systems based on commuting observables with continuous eigenvalues. These systems can be approximated by finite systems with a large number of degrees of freedom. The discrete representation leads to a discrete formulation of the path integral where for $N$ time steps there are a finite number, $M^N$, of ``paths'' that pass through the $M$ allowed values of one of the observables at each time step. For a canonical system that is quadratic in the ``momentum'' variables, the amplitude for free propagation between time steps defines a ``complex probability'' on the space of paths. The path integral can then be interpreted as the small-time-step limit of the expectation value of an interaction functional of these discrete paths. An application of this discrete path integral to scattering in one dimension is illustrated in section \ref{scatt}. The same method can be applied to time evolve quantum fields using an exact discrete multi-resolution representation of the field algebra. The computation of time evolution of a volume and resolution truncated quantum field theory using the discrete path integral is discussed in section \ref{qft}. It is illustrated using a trivial two-mode truncation of the theory.
While the Schwinger representation does not solve any of the problems that quantum computers are designed to solve, it leads to a simple framework for modeling general quantum problems as finite quantum systems, where these finite systems can also be represented by products of elementary two- or three-state quantum systems. These two- and three-state algebras are more localized and should provide a more practical representation for quantum computations. In general, applications are limited by the dimension of the vectors that represent realistic systems. The next section provides a brief discussion of the role of Hilbert spaces in the formulation of the three-valued quantum logic represented by q-bits. This emphasizes the relevant difference between digital and quantum computing. Section \ref{weyl} provides a summary of Schwinger's construction of irreducible algebras of systems of $M$ degrees of freedom. Section \ref{qbit} discusses the factorization of a system with $M=2^L$ degrees of freedom into a direct product of irreducible two-degree-of-freedom systems, which results in a representation of the algebra of section \ref{weyl} in terms of qbits. Section \ref{cont} discusses the limit to a system with continuous eigenvalues. In this case the general construction requires additional boundary conditions. Section \ref{complex} gives a short discussion of the subject of complex probabilities, which will be used in the discrete formulation of the path integral. Section \ref{path} discusses the treatment of time evolution using a discrete formulation of the path integral as the expectation value of a potential functional with respect to a complex probability on a finite sample space of paths. Section \ref{scatt} illustrates the application of the path integral in section \ref{path} to scattering in one dimension. Section \ref{qft} discusses an exact discrete multi-resolution representation of a scalar field theory and uses the path integral in section \ref{path} to time evolve a two field-mode truncation of the theory. Section \ref{sum} gives a summary and conclusion. \section{Quantum logic}\label{logic} Classically, if a system is prepared in a state $A$ and a later measurement tests if it will be detected in state $B$, there are two possible outcomes, true or false. This leads to a two-valued system of logic that is encoded in the bits used in digital computing. In quantum mechanics there are three possibilities - the final system will always be measured to be in the state $B$, it will never be measured to be in the state $B$, or there is a finite probability $P$, with $0<P<1$, that it will be measured to be in state $B$. This leads to a three-valued logic or quantum logic. The three-valued logic of quantum mechanics \cite{birkhoff} has a straightforward geometrical interpretation. If state $A$ is represented by a one-dimensional subspace of a Hilbert space and state $B$ is represented by another one-dimensional subspace then there are three possibilities - (1) the subspace $B$ is the subspace $A$, (2) the subspace $B$ is orthogonal to the subspace $A$, or (3) a non-zero vector in $A$ has a non-zero projection on the subspace $B$. In the quantum case states are represented by vectors or rays, $\vert a \rangle$, in a Hilbert space. Quantum probabilities are expressed in terms of the Hilbert space inner product: \[ P_{ab} := {\langle a \vert b \rangle \langle b \vert a \rangle \over \langle a \vert a \rangle\langle b \vert b \rangle } \] which is independent of the vectors in the rays.
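As a small numerical illustration (the example vectors are arbitrary choices, not taken from the text), the probability $P_{ab}$ can be evaluated directly for two states of a single q-bit:
\begin{verbatim}
# Illustrative evaluation of P_ab for two arbitrarily chosen q-bit states
import numpy as np

a = np.array([1.0, 0.0])                    # |a>
b = np.array([1.0, 1.0]) / np.sqrt(2.0)     # |b>
P_ab = abs(np.vdot(a, b))**2 / (np.vdot(a, a).real * np.vdot(b, b).real)
print(P_ab)                                 # 0.5, so 0 < P_ab < 1
\end{verbatim}
Here $P_{ab}=1/2$, which corresponds to the third of the possibilities listed next.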
The three possibilities correspond to \begin{equation} (1) \qquad P_{ab}=1 \qquad (2) \qquad P_{ab}=0 \qquad (3) \qquad 0<P_{ab}<1. \end{equation} When the Hilbert space is two dimensional the difference in these two types of logic is encoded in bits or q-bits, respectively. In quantum mechanics observable quantities are represented by linear operators $A$ on a Hilbert space. The only possible outcome of a measurement of $A$ is one of its eigenvalues, $a_n$ (this assumes $A$ is a normal operator whose eigenvectors form a basis). In this case the Hilbert space has a decomposition ${\cal H} = \oplus_n {\cal H}_n$, where the ${\cal H}_n$ are $A$-invariant subspaces of ${\cal H}$. The mean value of a measurement of $A$ in state $\vert b\rangle$ is \begin{equation} \langle b \vert A \vert b \rangle = \sum_n P_{a_nb} a_n \qquad a_n \mbox{ eigenvalue of A} \end{equation} which is the weighted average of the quantum probabilities for the state $\vert b \rangle$ to be found in one of the eigenstates of $A$. \section{Schwinger's discrete Weyl Algebra}\label{weyl} This section reviews Schwinger's \cite{schwinger} method of constructing an irreducible algebra of complementary unitary operators for quantum systems of a finite number of degrees of freedom. This construction generates a finite degree of freedom version of the Weyl (exponential) form of the canonical commutation relations. This algebra can be used to build discrete models of any quantum system. This construction is essentially the same as the treatment of the quantum Fourier transform discussed in \cite{nielsen} and elsewhere. Let ${\cal H}$ be an $M$-dimensional complex Hilbert space. Let $A$ be a normal operator on ${\cal H}$ with $M$ distinct eigenvalues and unit normalized eigenvectors: \begin{equation} A \vert a_m \rangle =a_m \vert a_m \rangle \qquad m=1,\cdots, M \qquad a_m \not= a_n \mbox{ for } m\not=n . \label{s1} \end{equation} Define the operator $U$ on ${\cal H}$ that cyclically shifts the eigenvectors of $A$: \begin{equation} U\vert a_m \rangle = \vert a_{m+1} \rangle \qquad m<M \qquad U\vert a_M \rangle = \vert a_1 \rangle . \label{s2} \end{equation} In what follows the labels $m$ on eigenvectors and eigenvalues are treated as integers mod $M$, so 0 is identified with $M$, $1$ with $M+1$, etc. $U$ defined by (\ref{s2}) is unitary since \begin{equation} UU^{\dagger}= \sum_{m=1}^M U\vert a_m \rangle \langle a_m \vert U^{\dagger}= \sum_{m=1}^{M-1} \vert a_{m+1} \rangle \langle a_{m+1} \vert + \vert a_{1} \rangle \langle a_{1} \vert = \sum_{m'=1}^M \vert a_{m'} \rangle \langle a_{m'} \vert = I . \label{s3} \end{equation} Since $M$ applications of $U$ leave all $M$ basis vectors, $\vert a_m \rangle$, unchanged, it follows that $U^M=I$. This means that the characteristic polynomial of $U$ is $P(\lambda) = \lambda^M-1$. The eigenvalues of $U$ are the $M$ roots of $1$: \begin{equation} \lambda = u_m = e^{2\pi m i\over M}. \label{s4} \end{equation} Let $\vert u_m \rangle$ denote the associated eigenvectors: \begin{equation} U \vert u_m \rangle = u_m \vert u_m \rangle \label{s5} \end{equation} with unit normalization \begin{equation} \langle u_m \vert u_n \rangle = \delta_{mn}. \label{s6} \end{equation} The normalization does not fix the phase which will be fixed later. Since both $U^M=I$ and $u_n^M=1$ it follows that \[ 0= (U^M-I ) = {1 \over u_n^M} (U^M-I )= \] \begin{equation} \left (({U\over u_n})^M-I\right ) = \prod_{m=1}^M ({U\over u_n}- {u_m \over u_n}) = ({U\over u_n}-1)(1 +{U\over u_n} + ({U\over u_n})^2 +\cdots + ({U\over u_n})^{M-1}).
\label{s7} \end{equation} Since this expression is identically zero and $({u_m\over u_n}-1)\not=0$ for $m\not= n$ it follows that \begin{equation} 1 +{U\over u_n} + ({U\over u_n})^2 +\cdots ({U\over u_n})^{M-1} = c \vert u_n \rangle \langle u_n \vert \label{s8} \end{equation} for some constant $c$. Applying (\ref{s8}) to $\vert u_n \rangle$ implies that the constant $c=M$. This results in an expression for the projection operator on each eigenstate of $U$ as a degree $M-1$ polynomial in $U$ \begin{equation} \boxed{ \vert u_n \rangle \langle u_n \vert = {1 \over M} \sum_{m=1}^M ({U\over u_n})^m = {1 \over M} \sum_{m=0}^{M-1} ({U\over u_n})^m . } \label{s9} \end{equation} Using (\ref{s9}) it follows that \begin{equation} \langle a_k \vert u_n \rangle \langle u_n \vert a_k \rangle = {1 \over M} \sum_{m=0}^{M-1} \langle a_k \vert ({U\over u_n})^m \vert a_k \rangle = {1 \over M} \sum_{m=0}^{M-1} ({1\over u_n})^m\langle a_k \vert a_{k+m} \rangle = {1 \over M} . \label{s10} \end{equation} This means for any $k$ and $n$ that \begin{equation} \vert \langle a_k \vert u_n \rangle \vert = {1 \over \sqrt{M}} . \label{s11} \end{equation} The interpretation is that if the system is prepared in any eigenstate of $U$ and $A$ is subsequently measured, then the probability of measuring any of the eigenvalues of $A$ is the same (1/M). This means that all of the information about the identity of the initial eigenstate of $U$ is lost after measuring $A$. This is the condition for the observables $A$ and $U$ to be complementary. The phase of $\vert u_n \rangle$ is defined by choosing \begin{equation} \langle a_M \vert u_n \rangle = \langle u_n \vert a_M \rangle = {1 \over \sqrt{M}} . \label{s12} \end{equation} It then follows from (\ref{s12}) that \[ \langle a_k \vert u_n \rangle \langle u_n \vert a_M \rangle = \langle a_k \vert u_n \rangle {1 \over \sqrt{M}} = \] \begin{equation} {1 \over M} \langle a_k \vert \sum_{m=1}^M u_n^{-m} \vert a_m \rangle = {1 \over M} u_n^{-k} = {1 \over M} e^{-2\pi i nk/M} \label{s13} \end{equation} which gives the inner product \begin{equation} \langle a_k \vert u_n \rangle = {1 \over \sqrt{M}} e^{-2\pi i nk/M}. \label{s14} \end{equation} Next define another unitary operator, $V$, that shifts the eigenvectors of $U$ cyclically, but in the opposite direction \begin{equation} V \vert u_n \rangle = \vert u_{n-1} \rangle, \qquad n \not=1, \qquad V \vert u_1 \rangle = \vert u_{M} \rangle . \label{s15} \end{equation} The same methods, with $U$ replaced by $V$, give \begin{equation} V^M=I \label{s16} \end{equation} \begin{equation} V \vert v_m \rangle = v_m \vert v_m \rangle \qquad v_m = e^{2 \pi i m \over M} \label{s17} \end{equation} \begin{equation} \vert v_n \rangle \langle v_n \vert = {1 \over M} \sum_{m=0}^{M-1} ({V \over v_n})^m = {1 \over M} \sum_{m=1}^{M} ({V \over v_n})^m \label{s18} \end{equation} and for unit normalized $\vert v_n \rangle$ \begin{equation} \vert \langle u_k \vert v_n \rangle\vert = {1 \over \sqrt{M}}. \label{s19} \end{equation} The phase of the $\vert v_n \rangle$ is defined by choosing \begin{equation} \langle u_M \vert v_n \rangle = {1 \over\sqrt{M}} . \label{s20} \end{equation} With this choice of phase \begin{equation} \langle u_M \vert v_n \rangle \langle v_n \vert u_k \rangle = \langle v_n \vert u_k \rangle {1 \over \sqrt{M}} = {1 \over M} \sum_{m=0}^{M-1} v_n^{-m} \langle u_m\vert u_k \rangle = {1 \over M} v_n^{-k} \label{s21} \end{equation} which gives \begin{equation} \boxed{ \langle v_k \vert u_n \rangle = {1 \over \sqrt{M}} e^{-2\pi i nk/M}. 
} \label{s22} \end{equation} Comparing (\ref{s14}) and (\ref{s22}) it follows that \begin{equation} \vert v_k \rangle =\sum_{m=0}^{M-1} \vert u_m \rangle \langle u_m \vert v_k \rangle= \sum_{m=0}^{M-1} \vert u_m \rangle {e^{2 \pi i mk/M}\over \sqrt{M}} = \sum_{m=0}^{M-1} \vert u_m \rangle \langle u_m \vert a_k \rangle = \vert a_k \rangle \label{s23} \end{equation} so the operators $A$ and $V$ have the same eigenvectors. The unitary operators $U$ and $V$ defined above satisfy \[ UV = U \sum_{m=0}^{M-1} \vert v_m \rangle e^{i2 \pi m \over M} \langle v_m \vert = \sum_{m=0}^{M-1} \vert v_{m+1} \rangle e^{i2 \pi m \over M} \langle v_m \vert = \] \begin{equation} \sum_{m=0}^{M-1} \vert v_{m+1} \rangle e^{i2 \pi m \over M} \langle v_{m+1} \vert U= e^{-{2 \pi i \over M}}\sum_{m=0}^{M-1} \vert v_{m+1} \rangle e^{i2 \pi (m+1) \over M} \langle v_{m+1} \vert U= e^{-2 \pi i \over M} VU \label{s24} \end{equation} or \begin{equation} \boxed{ UV = VU e^{-2\pi i \over M}. } \label{s25} \end{equation} $U$ and $V$ form an irreducible set of operators in the sense that any operator on the Hilbert space can be expressed as a degree $(M-1) \times (M-1)$ polynomial in these two operators. To show this note \[ \vert v_m \rangle \langle v_k \vert = U^{m-k} \vert v_k \rangle \langle v_k \vert = \] \begin{equation} {1 \over M} \sum_{n=0}^{M-1} e^{-2 \pi i nk /M} U^{m-k} V^n = {1 \over M} \sum_{n=0}^{M-1} e^{-2 \pi i mn /M} V^n U^{m-k} \label{s26} \end{equation} where (\ref{s25}) was used to change the order of the $U$ and $V$ operators in (\ref{s26}). Irreducibility follows since a general operator $O$ can be expressed in terms of its matrix elements in a basis \[ O = \sum_{m,k=0}^{M-1} \vert v_m \rangle \langle v_m \vert O \vert v_k \rangle \langle v_k \vert = \sum_{m,k=0}^{M-1} \langle v_m \vert O \vert v_k \rangle \vert v_m \rangle \langle v_k \vert = \] \begin{equation} {1 \over M} \sum_{m,n,k=0}^{M-1} e^{-2 \pi i nk /M} \langle v_m \vert O \vert v_k \rangle U^{m-k} V^n = {1 \over M} \sum_{m,n,k=0}^{M-1} e^{-2 \pi i mn /M}\langle v_m \vert O \vert v_k \rangle V^n U^{m-k}. \label{s27} \end{equation} These equations have the form \begin{equation} \boxed{ O = \sum_{m,n=0}^{M-1} a_{mn} U^m V^n = \sum_{m,n=0}^{M-1} b_{mn} V^m U^n } \label{s28} \end{equation} which is the Weyl representation of $O$. If $O$ commutes with $U$ then \begin{equation} 0= \sum_{mn=0}^{M-1} a_{mn} [U^m V^n,U] = \sum_{mn=0}^{M-1} a_{mn} U^{m+1} V^n (e^{2\pi i n/M}-1) \label{s29} \end{equation} which requires $n=M$ or $0$. This means $O$ is independent of $V$. Similarly if $O$ commutes with $V$ it must be independent of $U$. This means that any operator that commutes with both $U$ and $V$ is a constant multiple of the identity. The following property will be used in the discussion of complex probabilities \begin{equation} \boxed{ \sum_{m=0}^{M-1} \langle u_n \vert v_m\rangle = {1 \over \sqrt{M}} \sum_{m=0}^{M-1} e^{2 \pi i mn\over M} = \delta_{n0}\sqrt{M} = \delta_{nM} \sqrt{M} }. \label{s30} \end{equation} To prove this consider two cases. If $n=0$ or $M$ the sum is $M$. Otherwise \begin{equation} \sum_{m=0}^{M-1} e^{{2\pi i mn \over M}} = {1 - e^{2 \pi i n} \over 1 - e^{2 \pi i n/M}} =0 \qquad 0<n<M . \label{s31} \end{equation} \section{qbits}\label{qbit} One property of the Schwinger representation is that it has a natural representation in terms of q-bits. When $M$ can be factored into products of prime numbers the $U$ and $V$ can be replaced by an algebra of commuting pairs of operators with cycle lengths equal to the prime factors.
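Before specializing to $M=2^L$, the construction above can be checked directly with small matrices. The following sketch (a numerical illustration, not part of Schwinger's treatment; the dimension $M=5$ is an arbitrary choice) builds $U$ as the cyclic shift of the $A$-basis and $V$ from its spectral definition, and verifies (\ref{s11}), (\ref{s23}), and (\ref{s25}):
\begin{verbatim}
# Numerical check of the U, V construction (illustration only)
import numpy as np

M = 5                                   # an arbitrary finite dimension
w = np.exp(2j * np.pi / M)              # primitive M-th root of unity

# U cyclically shifts the eigenvectors of A: U|a_m> = |a_{m+1}>  (eq. s2)
U = np.zeros((M, M), dtype=complex)
for m in range(M):
    U[(m + 1) % M, m] = 1.0

# Eigenvectors of U, <a_k|u_n> = w^{-nk}/sqrt(M)  (eq. s14); columns are |u_n>
k = np.arange(M)
u = np.array([w ** (-n * k) / np.sqrt(M) for n in range(M)]).T

# V shifts the |u_n> in the opposite direction: V = sum_n |u_{n-1}><u_n|  (eq. s15)
V = sum(np.outer(u[:, (n - 1) % M], u[:, n].conj()) for n in range(M))

assert np.allclose(np.abs(u), 1.0 / np.sqrt(M))      # complementarity, eq. (s11)
assert np.allclose(U @ V, V @ U * w ** (-1))         # Weyl relation, eq. (s25)
assert np.allclose(V, np.diag(w ** np.arange(M)))    # V diagonal in the A-basis, eq. (s23)
print("construction verified for M =", M)
\end{verbatim}
For $M=2$ the same construction returns $U=\sigma_1$ and $V=\sigma_3$, the q-bit gates used below.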
The case of most interest for quantum computing is when $M=2^L$. In that case the irreducible algebra is represented by a product of q-bit gates. To show this assume that $M=2^L$ for large $L$. The indices $n= 0 \cdots 2^L-1$ can be labeled by $L$ numbers that can only take the values $0$ and $1$: $n \leftrightarrow (n_1,n_2,\cdots, n_L)$ \begin{equation} n=\sum_{m=1}^L n_m 2^{m-1}. \label{qb1} \end{equation} This results in the identifications \begin{equation} \vert u_{n_1\cdots n_L} \rangle :=\vert u_n \rangle \qquad \vert v_{n_1\cdots n_L} \rangle :=\vert v_n \rangle . \label{qb2} \end{equation} Define unitary operators $U_i$ and $V_i$ by \begin{equation} U_i \vert v_{n_1\cdots n_L} \rangle = \vert v_{n_1\cdots [n_i+1]_{\mbox{mod\,2}} \cdots n_L} \rangle \label{qb3} \end{equation} \begin{equation} V_i \vert u_{n_1\cdots n_L} \rangle = \vert u_{n_1\cdots [n_i-1]_{\mbox{mod\,2}} \cdots n_L} \rangle . \label{qb4} \end{equation} Applying what was done in the general case to $M=2^L$ gives \begin{equation} U_i^2-1= V_i^2 -1=0, \label{qb5} \end{equation} \begin{equation} [U_i,U_j]=[V_i,V_j]=0 \qquad [U_i,V_j]=0 \qquad i \not=j \qquad V_i U_i = U_i V_i e^{i \pi} \label{qb6} \end{equation} \begin{equation} U^n = \prod_{m=1}^L U_m^{n_m} \label{qb7} \end{equation} \begin{equation} V^n = \prod_{m=1}^L V_m^{n_m}. \label{qb8} \end{equation} Since $U$ and $V$ can be constructed from the $U_i$ and $V_i $ the set of $\{ U_i\}$ and $\{ V_i\}$ is also irreducible. A simple matrix representation of $U_i$ and $V_i$ is \begin{equation} V_i = \sigma_3 \qquad U_i = \sigma_1 \label{qb9} \end{equation} which are simple quantum gates. In this representation, $v_0=u_0=1; v_1=u_1=-1$ and \begin{equation} \vert v_0 \rangle = \left( \begin{array}{c} 1 \\ 0\\ \end{array} \right ) \qquad \vert v_1 \rangle = \left ( \begin{array}{c} 0 \\ 1\\ \end{array} \right ) \label{qb10} \end{equation} \begin{equation} \vert u_0 \rangle ={1 \over \sqrt{2}} \left( \begin{array}{c} 1 \\ 1\\ \end{array} \right ) \qquad \vert u_1 \rangle = {1 \over \sqrt{2}} \left ( \begin{array}{c} 1 \\ -1\\ \end{array} \right ). \label{qb10} \end{equation} The operators $\sigma_1$ and $\sigma_3$ \begin{equation} U_i= \sigma_1 = \left ( \begin{array}{cc} 0&1\\ 1&0\\ \end{array} \right ) \qquad V_i= \sigma_3 = \left ( \begin{array}{cc} 1&0\\ 0&-1\\ \end{array} \right ) \qquad \label{qb11} \end{equation} satisfy (\ref{s2}) and (\ref{s15}) for $M=2$. They also satisfy \begin{equation} \sigma_3 \sigma_1 = \sigma_1 \sigma_3 e^{{2\pi i \over 2}} \qquad (\sigma_1^2 -1)= (\sigma_3^2-1) =0 . \label{qb12} \end{equation} Any linear operator $A$ on this 2-dimensional vector space is a polynomial with constant coefficients $a_i$ in these operators: \begin{equation} A = a_1I + a_2 \sigma_1 + a_3 \sigma_3 +a_4 \sigma_3\sigma_1 . \label{qb13} \end{equation} This shows how the discrete version of the irreducible Weyl algebra can be built up out of q-bits using the two elementary gates $\sigma_1$ and $\sigma_3$ acting on each qbit. This means that any operator on the $2^L$ dimensional Hilbert space can be expressed as a polynomial in the $L$ pairs of 2 state $U$ and $V$ operators. Note for $M$ odd the same construction works with $3\times 3$ matrices with \begin{equation} U_i = \left ( \begin{array}{ccc} 0&0&1\\ 1&0&0\\ 0&1&0 \end{array} \right ) \qquad V_i = \left ( \begin{array}{ccc} 1&0&0\\ 0&e^{2 \pi i/3}& 0\\ 0 & 0 & e^{4 \pi i /3} \end{array} \right ). 
\label{qb14} \end{equation} \section{Schwinger's continuum limit}\label{cont} The eigenvalue spectrum of many observables of interest, like momenta and coordinates, are continuous. It is possible to use the discrete algebra generated by $U$ and $V$ to make a discrete approximation to the continuum in the large $M$ limit. To do this assume $M$ is large and define the small quantity $\epsilon$ by \begin{equation} \epsilon^2 := {2 \pi /M}. \label{cl1} \end{equation} For the purpose of approximating the continuum it is convenient (but not necessary) to choose $M=2K+1$ odd and number the eigenvectors and eigenvalues from $-K \leq n \leq K$ instead of $0$ to $M-1$ or $1$ to $M$. Discrete approximations to continuous variables $p$ and $q$ are defined by \begin{equation} p_l = l \epsilon = l \sqrt{2 \pi \over M} \qquad q_l = l \epsilon = l \sqrt{2 \pi \over M} \qquad -K\epsilon \leq q_l,p_l \leq K\epsilon \label{cl2} \end{equation} where \begin{equation} K\epsilon = \sqrt{M\pi \over 2} - \sqrt{\pi \over 2M}. \label{cl3} \end{equation} With these definitions the separation between successive values of $p_l$ and $q_l$, $p_{l+1}-p_l =q_{l+1}-q_l = \epsilon$ vanishes as $M \to \infty$ while at the same time the maximum and minimum values of $p_l$ and $q_l$, $p_{\pm K} = q_{\pm K} = \pm ( \sqrt{M\pi \over 2} - \sqrt{\pi \over 2M} )$ approach $\pm\infty$ in same limit. While for finite $M$ any vector with finite elements has a finite norm - in the continuum limit ($M\to \infty$) this is no longer true so the limiting vectors with finite norm should be square summable. This means that components of vectors with large $\vert l \vert$ should approach $0$ in the $M\to \infty$ limit. For $U$ and $V$ given by (\ref{s2}) and (\ref{s15}) Hermitian operators $\hat{p}$ and $\hat{q}$ are defined by \begin{equation} \boxed{ V = e^{i \epsilon \hat{p}} \qquad U= e^{i \epsilon \hat{q}} } . \label{cl4} \end{equation} These can be used to define \begin{equation} V(q_m) = e^{i\hat{p} q_m} = e^{i\hat{p} \epsilon m} = V^m \label{cl5} \end{equation} \begin{equation} U(p_n) = e^{i\hat{q} p_n} = e^{i\hat{q} \epsilon n} = U^n . \label{cl6} \end{equation} With these definitions equation (\ref{s25}) becomes \begin{equation} V(q_m)U(p_k)= U(p_k)V(q_m) e^{i2 \pi mk \over M}= U(p_k)V(q_m) e^{i \epsilon m \epsilon k}= U(p_k)V(q_m)e^{ip_kq_m} \label{cl7} \end{equation} \begin{equation} \boxed{ V(q_m)U(p_k)= U(p_k)V(q_m)e^{ip_kq_m} } \label{cl8} \end{equation} which is the Weyl \cite{Weyl} form of the canonical commutation relations, where in this case the variables are discrete. Equations (\ref{cl5}-\ref{cl6}) motivate the definitions \begin{equation} dp = \epsilon dn = \sqrt{2 \pi \over M} dn \qquad dq = \epsilon dn = \sqrt{2 \pi \over M} dn . \label{cl9} \end{equation} It follows from (\ref{cl4}) that eigenvectors of $V$ are also eigenvectors of $\hat{p}$ and the eigenvectors of $U$ are also eigenvectors of $\hat{q}$. 
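The discrete Weyl relation (\ref{cl8}) can also be checked numerically. The short sketch below is illustrative only; it labels the basis by $0,\dots,M-1$ rather than $-K,\dots,K$, which does not affect the relation:
\begin{verbatim}
# Illustrative check of the discrete Weyl relation V(q_m)U(p_k) = U(p_k)V(q_m)e^{i p_k q_m}
import numpy as np

M = 7
eps = np.sqrt(2.0 * np.pi / M)          # eq. (cl1)
w = np.exp(2j * np.pi / M)

U = np.roll(np.eye(M), 1, axis=0)       # U|a_j> = |a_{j+1}> (mod M)
V = np.diag(w ** np.arange(M))          # V is diagonal in the A-basis

m, k = 3, 5                             # arbitrary powers
q_m, p_k = m * eps, k * eps
lhs = np.linalg.matrix_power(V, m) @ np.linalg.matrix_power(U, k)
rhs = np.linalg.matrix_power(U, k) @ np.linalg.matrix_power(V, m) * np.exp(1j * p_k * q_m)
print("discrete Weyl relation holds:", np.allclose(lhs, rhs))
\end{verbatim}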
Choosing the normalization of the states $\vert p_n\rangle$ and $\vert q_n \rangle$ so that \begin{equation} \int dp \approx \sum_{l=-K}^K {dp \over dl} = \epsilon \sum_{l=-K}^K \qquad \int dq \approx \sum_{l=-K}^K {dq \over dl} = \epsilon \sum_{l=-K}^K \label{cl9a} \end{equation} \begin{equation} I = \sum_{l=-K}^K \vert v_l \rangle \langle v_l \vert = \sum_{l=-K}^K \vert p_l \rangle dp_l \langle p_l \vert = \sum_{l=-K}^K \vert p_l \rangle \epsilon \langle p_l \vert \label{cl10} \end{equation} \begin{equation} I = \sum_{l=-K}^K \vert u_l \rangle \langle u_l \vert = \sum_{l=-K}^K \vert q_l \rangle dq_l \langle q_l \vert = \sum_{l=-K}^K \vert q_l \rangle \epsilon \langle q_l \vert \label{cl11} \end{equation} and comparing these equations leads to the definitions \begin{equation} \vert p_l \rangle := \vert v_l \rangle /\sqrt{\epsilon} \label{cl12} \end{equation} and \begin{equation} \vert q_l \rangle := \vert u_l \rangle /\sqrt{\epsilon} . \label{cl13} \end{equation} Using these relations gives \begin{equation} \langle p_m\vert q_n \rangle = {1 \over \epsilon} \langle v_m \vert u_n \rangle = {1 \over \epsilon \sqrt{M}}e^{-2 \pi i mn\over M} = {1 \over \sqrt{2 \pi}} e^{-i p_m q_n} \end{equation} \begin{equation} \langle p_m\vert p_n \rangle = {1 \over \epsilon} \langle v_m \vert v_n \rangle = {\delta_{mn} \over \epsilon} \label{cl14} \end{equation} and \begin{equation} \langle q_m\vert q_n \rangle = {1 \over \epsilon} \langle u_m \vert u_n \rangle = {\delta_{mn} \over \epsilon}. \label{cl15} \end{equation} A result that will be used later to reinterpret the path integral as the expectation of a potential functional with respect to a complex probability distribution follows from (\ref{s30}). Consider the expression \begin{equation} \sum_{lm} \langle q_n \vert p_l \rangle dp_l f(p_l) \langle p_l \vert q_m \rangle dq_m = {\epsilon^2 \over 2\pi }\sum_{l=-K}^K\sum_{m=-K}^K e^{i (q_n -q_m) p_l } f(p_l) . \label{cl16} \end{equation} The $m$ sum is a geometric series that can be computed in closed form \begin{equation} \sum_{m=-K}^K e^{i q_m p_l} = \sum_{m=-K}^K e^{{2 \pi i ml \over M}} = e^{-{2 \pi i l K \over M}}\left ({1 - e^{2 \pi i l} \over 1- e^{{2 \pi i l \over M}}}\right ) = \delta_{l0}M \label{cl17} \end{equation} (for $l=0$ each of the $M$ terms is unity, which gives $M$ directly). Using (\ref{cl17}) in (\ref{cl16}) with (\ref{cl1}) gives \begin{equation} \boxed{ \sum_{lm} \langle q_n \vert p_l \rangle dp_l f(p_l) \langle p_l \vert q_m \rangle dq_m = {M \epsilon^2 \over 2\pi }f(0) = f(0). } \label{cl18} \end{equation} The same result is obtained by ``integrating'' over the final $q_n$ instead of the initial $q_m$. This result will be used in the development of discrete path integrals that follows. \section{complex probabilities}\label{complex} A complex probability system is defined by a sample set $S$ and a complex-valued function $P$ on subsets of $S$ with the properties \begin{equation} P(S_i) = \sum_{s\in S_i} P(s) \qquad P(S) =1 . \label{cp1} \end{equation} $P(S_i)$ is the complex probability assigned to the subset $S_i$ of $S$. It follows that \begin{equation} P(S_i)+P(S_i^c)= 1 \label{cp2} \end{equation} where $S_i^c$ is the complement of $S_i$ in $S$, and for a finite set of non-intersecting subsets of $S$ \begin{equation} S_i\cap S_j = \emptyset \, , \quad i \not=j \qquad P(\cup S_i) =\sum_i P(S_i). \label{cp3} \end{equation} In the applications that follow the sample set will be a finite collection of paths.
More generally, since $P(s)$ is complex, equation (\ref{cp3}) cannot be extended to countable non-intersecting subsets, which is where complex probabilities differ from ordinary probabilities \cite{Muldowney2}\cite{Muldowney}. This is not an issue for finite sample sets. The extension of the notion of complex probabilities to a continuous sample set generated from intervals by complements and finite unions, based on the Henstock integral \cite{Henstock}\cite{bartle}\cite{Gill}, was used in \cite{Katya_1}\cite{Katya_2} to prove that the real-time path integral formulated as the expectation of a potential functional with respect to a complex probability distribution on cylinder sets of paths converges to a global solution of the Schr\"odinger equation. This was applied to compute scattering amplitudes using real-time path integrals in a simple model in \cite{polyzou}. In that case the complex probability was a probability on a finite collection of cylinder sets rather than a discrete set of paths. To treat the large number of cylinder sets, the probability was approximately factored into products of one-step probabilities, which reduced the problem to computing powers of approximate transfer matrices. Because of this approximation, unitarity was only preserved approximately. In the discrete case the sample set is finite, the complex probability factors exactly into a product of one-time-step complex probabilities, and the transfer matrices associated with the one-step probabilities are exactly unitary. A random variable $F(s)$ is a function on the sample set $S$ with expectation value \[ E[F] = \sum_{s \in S} P(s) F(s) . \] In this paper the sample set is the finite collection of paths that have $M$ possible values at each of $N$ time steps, the complex probability $P(s)$ is associated with free propagation through $N$ time steps along the path ``$s$'', and $F(s)$ is the contribution from the potential due to the path ``$s$''. This is discussed in the next section. \section{Complex probabilities in real time path integrals}\label{path} The path integral for a system with one degree of freedom is formulated using the discrete representation discussed in section \ref{weyl}. Following references \cite{Muldowney}\cite{Katya_1}\cite{Katya_2}, the path integral will be defined as the expectation value of a potential functional with respect to a complex probability distribution. To do this it is necessary to: \begin{itemize} \item[1.)] Define the space of paths \item[2.)] Define complex probabilities on the space of paths \item[3.)] Identify the path integral with the expectation value of a functional on the space of paths. \end{itemize} Let $H$ be a canonical Hamiltonian with one degree of freedom of the form \begin{equation} H= {\hat{{p}}^2 \over 2\mu} + V(\hat{q}) \label{pi1} \end{equation} where $\hat{q}$ and $\hat{p}$ are canonically conjugate operators satisfying \begin{equation} [\hat{q},\hat{p}] =i . \label{pi2} \end{equation} The starting point for constructing a path integral is the Trotter product formula \cite{Simon}\cite{Katya_3}: \[ \langle q_f, t_f \vert e^{-i H t } \vert q_i ,t_i \rangle = \langle q_f, t_f \vert (e^{-i H t/N })^N \vert q_i ,t_i \rangle = \] \begin{equation} \lim_{N \to \infty} \langle q_f, t_f \vert (e^{-i H t/N })^N \vert q_i ,t_i \rangle = \lim_{N \to \infty} \langle q_f, t_f \vert (e^{-i (\hat{p}^2 / 2\mu)\Delta t } e^{-i V(\hat{q})\Delta t} )^N \vert q_i ,t_i \rangle \label{pi3} \end{equation} where $\Delta t := t/N$.
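A small numerical experiment may help illustrate the Trotter splitting before it is combined with the discrete quadrature. The sketch below is an illustration only; the two-level Hamiltonian is randomly generated and is not the Hamiltonian of (\ref{pi1}). The splitting error decreases roughly like $1/N$:
\begin{verbatim}
# Illustration of the Trotter product formula for a random two-level system
import numpy as np

def u_exact(H, t):
    """exp(-i H t) for a Hermitian matrix H, via its eigendecomposition."""
    w, Q = np.linalg.eigh(H)
    return (Q * np.exp(-1j * w * t)) @ Q.conj().T

rng = np.random.default_rng(0)
def random_hermitian(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

H0, Vpot = random_hermitian(2), random_hermitian(2)   # "kinetic" and "potential" parts
t = 1.0
exact = u_exact(H0 + Vpot, t)
for N in (10, 100, 1000):
    step = u_exact(H0, t / N) @ u_exact(Vpot, t / N)
    print(N, np.linalg.norm(np.linalg.matrix_power(step, N) - exact))
\end{verbatim}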
Equation (\ref{pi3}) is the operator generalization of the representation \begin{equation} e^x = \lim_{N\to \infty} (1+{x\over N})^N \label{pi3a} \end{equation} of $e^x$. Equation (\ref{pi3}) is exact in the limit that $N\to \infty$ when applied to a normalizable wave packet. It is also possible to express $e^{-i H \Delta t}$ in terms of the $U$ and $V$ operators in the Weyl representation (\ref{s28}); however, the more familiar representations are used in this section. Following the standard steps in evaluating the path integrals, sums over complete sets of eigenstates of $U$ and $V$ are inserted between the operators in (\ref{pi3}): \[ \langle q_f \vert e^{-iHt}\vert \psi \rangle = \int \langle q_f \vert e^{-i H t } \vert q_i \rangle \psi(q_i)\, dq_i = \] \[ \lim_{N\to \infty} \int \langle q_f \vert p_N \rangle dp_N e^{-i (p_N^2/2\mu)\Delta t} \langle p_{N}\vert q_N \rangle dq_N e^{-i V(q_N) \Delta t} \cdots \] \begin{equation} \cdots \langle q_2 \vert p_1 \rangle dp_1 e^{-i (p_1^2 / 2\mu)\Delta t} \langle p_1 \vert q_1 \rangle dq_1 \langle q_1\vert e^{-i V(q_1) \Delta t} \vert \psi \rangle . \label{pi4} \end{equation} The next step is to approximate the integrals by numerical quadratures. This is done using the discrete variables introduced in the previous section. While this is not the most efficient approximation, it has the advantage that everything is discrete, finite and exactly unitary. In this case, for an $N$ time step Trotter approximation the discrete variable $q_i$ for the $i$-th time step can take on the $2K+1$ discrete values $l\epsilon$, $-K \leq l \leq K$, and the transition amplitude (\ref{pi4}) becomes \[ \langle q_f \vert e^{-iHt}\vert \psi \rangle \approx \sum \langle q_f \vert p_{Nn_N} \rangle \epsilon e^{-i (p_{Nn_N}^2/2\mu)\Delta t} \langle p_{Nn_N}\vert q_{Nn_N} \rangle \epsilon e^{-i V(q_{Nn_N}) \Delta t} \times \] \[ \langle q_{Nn_N} \vert p_{N-1n_{N-1}} \rangle \epsilon e^{-i (p_{N-1n_{N-1}}^2/2\mu)\Delta t} \cdots \langle p_{N-1n_{N-1}}\vert q_{N-1n_{N-1}} \rangle \epsilon e^{-i V(q_{N-1n_{N-1}}) \Delta t} \times \cdots \] \begin{equation} \langle q_{2n_2} \vert p_{1n_1} \rangle \epsilon e^{-i (p_{1n_1}^2 / 2\mu) \Delta t} \langle p_{1n_1} \vert q_{1n_1} \rangle \epsilon e^{-i V(q_{1n_1})\Delta t} \langle q_{1n_1} \vert \psi \rangle . \label{pi5} \end{equation} The next step is to ``integrate'' over the ``momentum'' variables. While this can be done exactly for quadratic functions of $p$ in terms of Fresnel integrals, here this integral is replaced by a finite sum. The first step is to define a one-time-step free propagation operator. This is the $q$-space representation of the transfer matrix for free propagation \cite{Feynman_1}\cite{Feynman_2} for a time $\Delta t$: \[ K(q_{m}, q_{n},\Delta t ) dq_{n}:= \sum_{l=-K}^K \langle q_{m} \vert p_{l} \rangle \epsilon e^{-i (p_{l}^2 / 2\mu) \Delta t} \langle p_l \vert q_{n} \rangle \epsilon = \] \begin{equation} {\epsilon^2 \over 2 \pi} \sum_{l=-K}^K e^{i(q_{m}- q_{n})p_l -i (p_{l}^2 / 2\mu) \Delta t}. \label{pi6} \end{equation} It follows from (\ref{cl18}) that the integral over the initial coordinate is $1$: \begin{equation} \int K(q',q,\Delta t) dq \to \sum_{n=-K}^K K(q_m,q_n,\Delta t)\epsilon = 1 \label{pi7} \end{equation} independent of $q_m$. Because of this, $K(q_m,q_n,\Delta t)\, dq$ is interpreted as the complex probability for making a transition from state $q_n$ to state $q_{m}$ in time step $\Delta t$. The probability interpretation follows because the sum is $1$. In this case the sample set of probabilities is finite.
The interpretation of equation (\ref{pi7}) is that a state that ends up at $q_m$ has to have started at one of the $2K+1$ $q_n$'s with complex probability 1. The path integral (\ref{pi5}) can be expressed in terms of (\ref{pi7}) as \[ \langle q_f \vert e^{-i H t } \vert q_i \rangle \approx \] \[ \sum_{n_1 \cdots n_N} K(q_f,q_{N,n_N}, \Delta t) e^{-iV(q_{Nn_N})\Delta t} \epsilon K(q_{N,n_N},q_{N-1,n_{N-1}}, \Delta t) e^{-iV(q_{N-1,n_{N-1}})\Delta t} \epsilon \cdots \] \[ \cdots K(q_{2,n_2},q_{1,n_{1}}, \Delta t) e^{-iV(q_{1,n_1})\Delta t} \epsilon= \] \[ \sum_{n_1 \cdots n_N} K(q_f,q_{N,n_N}, \Delta t) \epsilon K(q_{N,n_N},q_{N-1,n_{N-1}}, \Delta t) \epsilon \cdots K(q_{2,n_2},q_{1,n_{1}}, \Delta t) \epsilon \times \] \begin{equation} e^{-i\sum_{l=1}^N V(q_{ln_l})\Delta t}. \label{pi8} \end{equation} This is expressed as finite powers of products of finite-dimensional unitary transfer matrices. Define \[ P_N(q_f,q_{N},q_{N-1}, \cdots ,q_{2}, q_{1} ):= \] \begin{equation} K(q_f,q_{N}, \Delta t) \epsilon K(q_{N},q_{N-1}, \Delta t) \epsilon \cdots K(q_{3},q_{2}, \Delta t) \epsilon K(q_{2},q_{1}, \Delta t) \epsilon \label{pi9} \end{equation} which represents free propagation from $q_1$ to $q_f$ along a path through $q_2, q_3, \cdots q_N$. By (\ref{pi7}) it follows that summing over all of $q_{i,n_{i}}$ gives 1 independent of $q_f$, \begin{equation} \sum_{n_1,\cdots, n_N}P_N(q_f,q_{N},q_{N-1}, \cdots ,q_{2}, q_{1} )=1 . \end{equation} It is now possible to define the space of paths between $q_i$ and $q_f$. A path $\gamma$ is an $N$-dimensional vector $(q_1,q_2,\cdots q_N)$ where each of the $q_n$ can have one of the $M$ discrete eigenvalues of $q$. This vector represents a path that starts at $q_1=n_1\epsilon$ and after time $\Delta t$ is at $q_2=n_2\epsilon$, $\cdots$ , and after $N-1$ time steps is at $q_N=n_N\epsilon$ and arrives at $q_f$ after $N$ time steps. The set of all $M^N$ paths that end up at $q_f$ is denoted by $\Gamma$. The quantity $P_N(\gamma) := P_N(q_f, q_N ,q_{N-1}, \cdots ,q_{2}, q_{1} )$ is a complex number that is interpreted as the complex probability for a particle to travel on the path $\gamma =q_1 \to q_2 \to \cdots \to q_{N-1} \to q_N \to q_f$ since \begin{equation} \sum_{\gamma \in \Gamma} P_N(\gamma)=1 . \label{pi10} \end{equation} It is the discrete analog of the complex probability that a path lies in a given cylinder set of paths. For a path $\gamma$ a potential ``functional'' is defined by \begin{equation} W[\gamma] =e^{-i\sum_{n=1}^N V(q_{n})\Delta t} \label{pi11} \end{equation} where the sum is over each of the $q_n\in \gamma$. With this notation the approximate transition amplitude is \[ \langle q_f, t_f \vert e^{-i H t } \vert q_i ,t_i \rangle \approx \] \[ \sum_{n_1,n_2, \cdots, n_{N}=1}^M P_N(q_f,q_{Nn_N},q_{N-1 n_{N-1}}, \cdots ,q_{2 n_2}, q_{1n_1} ) \times e^{-i\sum_{k=1}^N V(\epsilon n_k)\Delta t} \delta_{q_{1n_1},q_i} = \] \begin{equation} \sum_{\gamma\in \Gamma}P_N(\gamma) W[\gamma]\delta_{q_{1n_1},q_i} \label{pi12} \end{equation} which is represented by the expectation $E[W\delta]$ of the potential functional $W[\gamma]\delta_{q_{1n_1},q_i}$ with respect to the complex probability distribution $P_N(\gamma)$.
Note that this transition amplitude can be expressed exactly as the $N$-th power \begin{equation} X^N_{f j} \delta_{ji} \label{pi13} \end{equation} of the transfer matrix \begin{equation} X_{ij} := K_{ij}W_j \label{pi14} \end{equation} where \begin{equation} K_{ij} := K(q_i,q_j, \Delta t)\epsilon \qquad W_j := e^{-iV(q_j) \Delta t} \label{pi15} \end{equation} applied to the initial state. The important observation is that even though there are $(2K+1)^N$ discrete paths, the discrete path integral involves computing the $N^{th}$ power of a single $(2K+1)\times (2K+1)$ dimensional matrix. It is interesting to note that while the computation of the path integral is reduced to matrix multiplication, the matrix product can be deconstructed to find the contribution of each path to $E[W\delta]$. In \cite{polyzou} a similar method was used to compute sharp momentum scattering transition matrix elements using real-time path integrals interpreted as expectation values of a potential functional with respect to a complex probability distribution on cylinder sets of paths. In that application the factorization of the complex probability into a product of one-time-step probabilities was only approximate, and as a result unitarity was only satisfied approximately. There, sharp momentum scattering matrix elements were approximated using a path integral approximation to matrix elements of the scattering transition operator \begin{equation} T_{fi}= \lim_{t \to -\infty} \langle k_f \vert V e^{iHt}e^{-iH_0t} \vert k_i\rangle. \label{pi16} \end{equation} The sharp momentum eigenstates $\vert k_{f/i} \rangle$ were replaced by normalizable states $\vert \Psi_I \rangle$ and $\vert \Psi_F \rangle$ that are sharply peaked about the initial and final momenta, respectively, and have spatial support in the interaction volume. They were normalized like delta functions in the sense that they integrate to 1. In the form (\ref{pi16}) the interaction $V$ provides a convenient volume cutoff on the localized initial scattering state. The matrix $(K^{\dagger})^N X^{2N}(K^{\dagger})^N$, which converges to the scattering operator, is exactly unitary, so it can be diagonalized with eigenvalues of the form $e^{2 i \delta_n}$, where $\delta_n$ are eigenvalues of a phase shift operator. The convergence of this method depends on the convergence of the Trotter limit and the convergence of the discrete quadrature. Mathematically the Trotter product formula converges strongly for suitable Hamiltonians; this was used in \cite{Katya_2} to show that in the continuum case the expectation of the potential functional with respect to the complex probability associated with free propagation converges to global solutions of the Schr\"odinger equation. This suggests that the final result is independent of the order in which the Trotter approximation and the discrete quadrature approximation are made. This interpretation of the path integral as the expectation value of a random variable over a complex probability on a space of paths has a conceptual advantage over the conventional interpretation. In the conventional interpretation of the path integral in terms of the action functional, the finite difference representation of the ``derivatives'' in the action involves differences that never get small as the time steps get small, rendering the interpretation of the path integral as an integral over paths weighted by a ``measure'' depending on the action questionable.
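The equivalence of the path sum (\ref{pi12}) and the matrix power (\ref{pi13}) is easy to check numerically for a tiny sample space. The sketch below is illustrative only: it reuses the \texttt{free\_kernel} helper from the earlier snippet, uses an arbitrary short-range potential, enumerates all $M^N$ paths, verifies (\ref{pi10}), and compares the brute-force expectation $E[W\delta]$ with $X^N$ applied to the initial grid point.
\begin{verbatim}
import itertools
import numpy as np

# assumes free_kernel(...) from the previous sketch; tiny space so all paths fit
Kmax, N, dt = 2, 3, 0.1
Kmat, q = free_kernel(Kmax, dt)
M = len(q)

V = 0.5 * np.exp(-2.0 * q**2)            # arbitrary short-range potential
W = np.exp(-1j * V * dt)                 # one-step potential phases W_j of (pi15)
X = Kmat * W[None, :]                    # transfer matrix X_{ij} = K_{ij} W_j

i0 = f0 = M // 2                         # initial and final grid points
brute = total_prob = 0.0
for path in itertools.product(range(M), repeat=N):       # path = (n_1, ..., n_N)
    P = Kmat[f0, path[-1]]                                # K(q_f, q_N) eps
    for a, b in zip(path[::-1][:-1], path[::-1][1:]):
        P *= Kmat[a, b]                                   # remaining one-step factors
    total_prob += P
    brute += P * np.prod(W[list(path)]) * (path[0] == i0) # P_N(gamma) W[gamma] delta

print(np.isclose(total_prob, 1.0))                              # Eq. (pi10)
print(np.isclose(brute, np.linalg.matrix_power(X, N)[f0, i0]))  # (pi12) = (pi13)
\end{verbatim}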
The representation in terms of complex probabilities involves a real potential functional defined on continuous paths. The potential functional can be thought of as a {\it perturbation of a complex Gaussian process} associated with free propagation.
\section{scattering in the discrete representation}\label{scatt}
Formal scattering theory is an idealization. A real scattering experiment takes place in a finite volume during a finite time interval. The relevant physics is dominated by a finite number of degrees of freedom that are limited by the energy and scattering volume. The fundamental quantum mechanical observable is the probability for a transition from a prepared initial state to a detected final state \begin{equation} P_{fi} = \vert \langle \psi_f (t)\vert \psi_i (t) \rangle \vert^2. \label{sca1} \end{equation} While the individual states depend on time, the probability (\ref{sca1}) is independent of $t$ due to the unitarity of the time evolution operator. The important consideration is that both states have to be evaluated at the same time. The problem of scattering theory is that there is no common time when both the initial and final states are simple. On the other hand the initial state is simple before the collision and the final state is simple after the collision. The initial and final states at the time of collision can be determined by evolving them from times where they behave like non-interacting subsystems to the collision time. Since localized wave packets spread, the effects of spreading can be eliminated by starting with localized wave packets at the collision time, evolving them beyond the range of interactions using free time evolution, and then evolving them back to the interaction region using the full Hamiltonian. The result is a unitary mapping that transforms the free wave packet at the collision time to the dynamical wave packet at the same time. If $U_0(t)$ and $U(t)$ represent the free and dynamical unitary time evolution operators, then, assuming the time of collision is approximately $t=0$, the scattering asymptotic conditions have the form \begin{equation} \Vert U(\pm \tau)\vert \psi_{\pm}(0) \rangle - U_0(\pm \tau) \vert \psi_{0\pm}(0) \rangle \Vert \approx 0 \label{sca2} \end{equation} where $\tau$ is sufficiently large for the interacting particles to be separated beyond the range of their mutual interactions. This expression is independent of $\tau$ for sufficiently large $\tau$, but the minimum value of $\tau$ depends on the range of the interaction and the structure of $\vert \psi_{0\pm}(0) \rangle$. Normally dependence on these conditions is removed by taking the limit $\tau \to \infty$. In this work, for computational reasons, it is desirable to choose $\tau$ as small as possible, which requires paying attention to the range of the interaction and the structure of the initial and final states. The unitarity of the time evolution operator means that (\ref{sca2}) can be replaced by \begin{equation} \Vert \vert \psi_{\pm} (0)\rangle - U(\mp \tau)U_0(\pm \tau) \vert \psi_{0\pm}(0) \rangle \Vert \approx 0 . \label{sca3} \end{equation} The operators \begin{equation} \Omega_{\pm}(\tau) := U(\mp \tau)U_0(\pm \tau) \label{sca4} \end{equation} are unitary mappings from $\vert \psi_{0\pm} (0)\rangle$ to $\vert \psi_{\pm}(0) \rangle$.
Using these definitions the scattering probability can be expressed as \begin{equation} P_{fi} = \vert \langle \psi_{0+}(0)\vert S(\tau) \vert \psi_{0-}(0) \rangle \vert^2 \label{sca5} \end{equation} where \begin{equation} S(\tau) := \Omega_+^{\dagger}(\tau)\Omega_- (\tau) \label{sca6} \end{equation} is the scattering operator. Since $S(\tau)$ is unitary it can be expressed in terms of a self-adjoint phase shift operator \begin{equation} S(\tau) = e^{2i \delta (\tau)} \label{sca6a} \end{equation} where $S(\tau)$ should be independent of $\tau$ for sufficiently large $\tau$. In a real experimental measurement the probability (\ref{sca5}) depends on the structure of the initial and final wave packets, which cannot be precisely controlled by experiment. If the matrix elements of $S(\tau)$ in sharp momentum states are slowly varying functions of momentum, then the dependence on the wave packet factors out \cite{brenig} and can be eliminated to compute differential cross sections. In this case the sharp momentum matrix elements can be approximated by matrix elements in Gaussian (minimal uncertainty) wave packets with a ``delta-function normalization'' that are sharply peaked about the desired momenta. This formulation of scattering is amenable to a path integral treatment. As previously discussed, scattering reactions are dominated by a finite number of degrees of freedom. The use of the discrete Weyl representation has the advantage that unitarity is exactly preserved on truncation to a finite number of degrees of freedom. Alternative path integral treatments of scattering appear in \cite{Campbell}\cite{Rosenfelder1}\cite{Rosenfelder2}. The advantage of the discrete representation is that $U_0(-\tau)U(2\tau)U_0(-\tau)$ can be expressed as the limit of products of the transfer matrices defined in the previous section \begin{equation} S(\tau) = \lim_{N\to \infty} K^{-N}X^{2N} K^{-N} \label{sca7} \end{equation} where \begin{equation} K_{ij}= K(q_i,q_j,\Delta t)\epsilon \qquad (X)_{ij}= K(q_i,q_j,\Delta t)\epsilon e^{-i V(q_j)\Delta t}, \label{sca8} \end{equation} $\Delta t = \tau/N$ and $N$ is the number of Trotter time slices. Note also that \begin{equation} (K^{N})_{fi} = K(q_f,q_i,N\Delta t)\epsilon . \end{equation} Sharp-momentum matrix elements of the scattering operator can be expressed in terms of the matrix elements of the transition operator $T$, which is easier to calculate in the discrete representation \begin{equation} S= I - 2 \pi i \delta (E_f -E_i) T \label{sca9} \end{equation} where $T$ is approximately given by \begin{equation} T \approx V \Omega_{-} (\tau) \label{sca10} \end{equation} when evaluated in normalizable states with sharply peaked momenta. The advantage of this representation is that for scattering problems $V$ is a short range operator that provides a volume cutoff. In the discrete representation sharp momentum eigenstates are normalizable; however, {\it they cannot be used in scattering calculations} because they are completely delocalized in space (the discrete momenta and coordinates are complementary), making it impossible to get to the asymptotic region. The most straightforward way to construct suitable initial or final wave packets in the discrete representation is to approximate the corresponding minimal uncertainty states of the continuum theory.
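A minimal numerical sketch of (\ref{sca7}) is given below. It is illustrative only: it reuses the \texttt{free\_kernel} helper from the earlier snippet, uses an arbitrary short-range Gaussian potential, forms the finite-$N$ approximation $(K^{\dagger})^N X^{2N} (K^{\dagger})^N$ (recall $K^{-1}=K^{\dagger}$ since $K$ is unitary), checks that the result is unitary, and extracts the eigenphases $\delta_n$.
\begin{verbatim}
import numpy as np

# assumes free_kernel(...) from the earlier sketch
Kmax, N, tau = 40, 50, 2.0
dt = tau / N
Kmat, q = free_kernel(Kmax, dt)
V = 0.5 * np.exp(-2.0 * q**2)                 # illustrative short-range potential
X = Kmat * np.exp(-1j * V * dt)[None, :]      # interacting one-step transfer matrix

KN = np.linalg.matrix_power(Kmat, N)          # free propagation over time tau
S = KN.conj().T @ np.linalg.matrix_power(X, 2 * N) @ KN.conj().T  # K^{-N} X^{2N} K^{-N}

print(np.allclose(S.conj().T @ S, np.eye(len(q))))   # S(tau) is exactly unitary
delta_n = 0.5 * np.angle(np.linalg.eigvals(S))       # eigenvalues are e^{2 i delta_n}
\end{verbatim}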
The quantities to control are the mean position and momentum and the uncertainty in both of these quantities, defined for a given state $\vert \psi \rangle$ by: \begin{equation} \langle q \rangle_{\psi} := \sum_{n=-K}^K {\langle \psi \vert u_n \rangle n \epsilon \langle u_n \vert \psi \rangle \over \langle \psi \vert \psi \rangle } \qquad \langle p \rangle_{\psi} := \sum_{n=-K}^K {\langle \psi \vert v_n \rangle n \epsilon \langle v_n \vert \psi \rangle \over \langle \psi \vert \psi \rangle } \label{sca12} \end{equation} \begin{equation} (\Delta q)^2 =\langle \psi \vert (q-\langle q \rangle)^2 \vert \psi \rangle = \sum_{n=-K}^K {\langle \psi \vert u_n \rangle ((n \epsilon)^2 - \langle q \rangle^2 )\langle u_n \vert \psi \rangle \over \langle \psi \vert \psi \rangle } \label{sca13} \end{equation} \begin{equation} (\Delta p)^2 = \langle \psi \vert (p-\langle p \rangle)^2 \vert \psi \rangle= \sum_{n=-K}^K {\langle \psi \vert v_n \rangle ((n \epsilon)^2 - \langle p \rangle^2 )\langle v_n \vert \psi \rangle \over \langle \psi \vert \psi \rangle }. \label{sca14} \end{equation} The continuum delta function normalized minimal uncertainty states are \begin{equation} \langle p \vert \psi_{0} (0) \rangle = {1 \over 2 \sqrt{\pi} \Delta p} e^{- {(p-\langle p \rangle )^2 \over 4 (\Delta p)^2}} . \label{sca15} \end{equation} where $\langle p \rangle$ is the mean momentum and $\Delta p$ is the quantum mechanical uncertainty in $p$ for this wave packet. This wave packet needs to be evolved to $-\tau$ using the free time evolution which adds a phase to (\ref{sca15}): \begin{equation} \langle p \vert \psi_{0} (-\tau) \rangle = {1 \over 2 \sqrt{\pi} \Delta p} e^{ - {(p-p_i)^2 \over 4 (\Delta p)^2} + i {p^2 \over 2 \mu} \tau}. \label{sac16} \end{equation} In the discrete ``$p$'' representation this is replaced by \begin{equation} \langle n \vert \psi_{0}(-\tau) \rangle = C e^{-{(\epsilon n - \langle p \rangle)^2 \over 4 (\Delta p)^2} + i {n^2 \epsilon^2 \over 2 \mu} \tau} . \label{sac17} \end{equation} where $C$ is a normalization constant. In the $x$ representation this becomes \begin{equation} \langle m_q \vert \psi_0 \rangle = {\epsilon \over \sqrt{2\pi}} \sum_{n=-K}^K e^{i \epsilon^2 mn} \langle n \vert \psi_{0}(-\tau) \rangle . \label{sac18} \end{equation} To illustrate that this gives a good approximation to the continuum results, $\langle q \rangle$, $\langle p \rangle$, $\Delta q$ and $\Delta p$ were calculated starting with $\langle p \rangle=2.5$, $\Delta p=.25$ and $K=300$ as input parameters in (\ref{sac16}). The results of the calculation \begin{align} \mbox{mean}_{p-calc}&=2.500000\\ \mbox{mean}_{q-calc}&=-2.51 \times 10^{-17}\\ \Delta_{p-calc}&=.3000000\\ \Delta_{q-calc}&= 1.666667 \end{align} are consistent with the input parameters, the minimal uncertainty condition $\Delta p \Delta q = 1/2$, and the continuum results. As a test the discrete approximation was applied to the problem of one-dimensional scattering of a particle of mass $m$ by a repulsive Gaussian potential of the form \begin{equation} V(q) = \lambda e^{-\alpha q^2} \end{equation} with $\lambda=.5$ and $\alpha =2.0$. The potential is plotted in figure 1. The particle's mass is taken to be 1 in dimensionless units so the velocity and momentum can be identified. The initial wave packet is a Gaussian with a delta function normalization in momentum space with mean momentum $p=2.5$ and width $\Delta p= .25 $. It is pictured in figure 2. The Fourier transform of the initial wave packet is given in figure 3.
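The following sketch (illustrative only) constructs the discrete packet of (\ref{sac17}), transforms it to the coordinate representation as in (\ref{sac18}), and evaluates the moments (\ref{sca12})--(\ref{sca14}). The packet is taken at the collision time ($\tau=0$), which is an assumption made here so that the product $\Delta p\,\Delta q$ can be compared directly with the minimal value $1/2$.
\begin{verbatim}
import numpy as np

K = 300
M = 2 * K + 1
eps = np.sqrt(2.0 * np.pi / M)
n = np.arange(-K, K + 1)
p = q = n * eps

p0, dp, mu, tau = 2.5, 0.25, 1.0, 0.0     # tau = 0: packet at the collision time
psi_p = np.exp(-(p - p0)**2 / (4.0 * dp**2) + 1j * p**2 / (2.0 * mu) * tau)
psi_p /= np.sqrt(np.sum(np.abs(psi_p)**2))

# transform to the coordinate ("u_n") representation, Eq. (sac18)
F = (eps / np.sqrt(2.0 * np.pi)) * np.exp(1j * eps**2 * np.outer(n, n))
psi_q = F @ psi_p
psi_q /= np.sqrt(np.sum(np.abs(psi_q)**2))

def moments(amps, x):
    mean = np.sum(np.abs(amps)**2 * x)
    var = np.sum(np.abs(amps)**2 * x**2) - mean**2
    return mean, np.sqrt(var)

print(moments(psi_p, p))   # (<p>, Delta p)
print(moments(psi_q, q))   # (<q>, Delta q); Delta p * Delta q is close to 1/2
\end{verbatim}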
The oscillations are due to the fact that the momentum space wave packet has a non-zero mean momentum. Given the size of the potential and wave packets, the wave packet needs to move about 18 units to the left in order to be out of the range of the potential. This suggests that for $v=p/m=2.5$, $\tau=7$ should be sufficient to move the wave packet out of the range of the potential. The resulting free wave packet at $\tau=-7$ is shown in figure 4. The scattered wave function with $K=300$ ($M=601$) after $N=100$ time steps is shown in figure 5, and that result multiplied by the potential is shown in figure 6. Compared to the wave function in figure 3, the wave function in figure 5 includes the effects of the interaction. Figure 6 illustrates the cutoff due to the short range potential; it shows how only the part of the wave function inside the range of the interaction contributes to the scattering operator. Figure 7 compares the result of the off-shell Born approximation $\langle p \vert V \vert \psi(0) \rangle$ to the calculation of the real and imaginary parts of $\langle p \vert T \vert \psi(0) \rangle$, while figure 8 compares $\langle p \vert T \vert \psi(0) \rangle$ to $\langle p \vert T (p_0)\vert p_0 \rangle$ obtained by numerically solving the Lippmann-Schwinger equation using the method of \cite{Rubtsova}. Figure 8 shows that the path integral computation with an initial wave packet with a width of 1/10 of the momentum converges to the numerical solution of the integral equation. In unrelated time-dependent scattering calculations \cite{Kopp:2011vv} a $\Delta p$ of about a tenth of $p$ gave good approximations to sharp momentum matrix elements of the transition operator for a wide range of momenta. Unlike the solution of the Lippmann-Schwinger equation, in the path integral approach for each energy it is necessary to determine minimal values of $M$, $N$, $\tau$ and $\Delta p$ that are needed for convergence. In practice there are a number of trade-offs. Making the wave packets narrow in momentum increases the scattering volume in the coordinate representation. This in turn requires a larger $\tau$ to get out of the range of the potential. If $\tau$ gets too large the wave packet can move past $q_{max}=K\epsilon$ and will reappear at $q_{min}=-K\epsilon$. As $p$ gets large the oscillations in the $q$ space wave function have higher frequencies, which requires smaller time steps, while when $p$ gets small it is necessary to make the wave packet width in momentum small enough so the coordinate space tail of the wave function gets out of the interaction volume. The computations require storing the initial vector. It is not necessary to store the transfer matrix; it can be computed efficiently on the fly. This is important for realistic calculations since the vectors will be significantly larger in higher dimensions. The hope is that in the future q-bits can be used to represent large vectors. This one-dimensional example approximated half-shell sharp-momentum transition matrix elements. The on-shell values can be used to extract other observables such as phase shifts and, in the one-dimensional case, transmission and reflection coefficients. This formulation of the one-dimensional problem in terms of transition operators has the advantage that the method can be formally extended to treat a large class of scattering problems.
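One way to apply the one-step transfer matrix without storing it follows from the fact that the free kernel (\ref{pi6}) is diagonal in the discrete momentum basis. The sketch below is an illustration of this idea, not the implementation used in the text: the FFT-based ordering, the initial packet parameters and the potential strength are all choices made here.
\begin{verbatim}
import numpy as np

# One Trotter step applied "on the fly": potential phase in q-space, free phase
# in the discrete momentum basis, connected by (centered) FFTs.
def trotter_step(psi, p, V, dt, mu=1.0):
    psi = np.exp(-1j * V * dt) * psi                    # potential phases W_j
    psi_p = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psi)))
    psi_p *= np.exp(-1j * p**2 / (2.0 * mu) * dt)       # free propagation phases
    return np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(psi_p)))

K = 300
M = 2 * K + 1
eps = np.sqrt(2.0 * np.pi / M)
n = np.arange(-K, K + 1)
q = p = n * eps
V = 0.5 * np.exp(-2.0 * q**2)

psi = np.exp(-(q + 17.5)**2 / 4.0 + 1j * 2.5 * q)       # packet left of V, moving right
psi /= np.sqrt(np.sum(np.abs(psi)**2))
for _ in range(100):                                    # N = 100 steps, tau = 7
    psi = trotter_step(psi, p, V, dt=7.0 / 100)
print(np.sum(np.abs(psi)**2))                           # norm is preserved exactly
\end{verbatim}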
The formulation of the discrete path integral used a reducible discrete Schwinger representation where the complex one time step probability is represented by a dense matrix. An equivalent irreducible representation in terms of qbits involves a product of matrices (\ref{qb7}-\ref{qb8}) that act on single qbits, which may have computational advantages. \begin{figure} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_1.pdf} \caption{\bf Potential} \label{fig:1} \end{minipage} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_2.pdf} \caption{\bf Momentum space initial Gaussian wave packet} \label{fig:2} \end{minipage} \end{figure} \begin{figure} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_3.pdf} \caption{\bf Coordinate space initial Gaussian wave packet} \label{fig:3} \end{minipage} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_4.pdf} \caption{\bf Free Gaussian wave packet at $\tau=-4$} \label{fig:4} \end{minipage} \end{figure} \begin{figure} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_5.pdf} \caption{\bf Initial scattering state at $t=0$ } \label{fig:5} \end{minipage} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_6.pdf} \caption{\bf V $\times$ initial scattering state at $t=0$} \label{fig:6} \end{minipage} \end{figure} \begin{figure} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_7.pdf} \caption{\bf $\langle p \vert V \vert \psi_{0i}(0) \rangle$} \label{fig:7} \end{minipage} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.5]{fig_8.pdf} \caption{\bf $\langle p \vert T \vert \psi_{0i}(0) \rangle$} \label{fig:8} \end{minipage} \end{figure} \section{discrete multi-resolution representation of quantum field theory}\label{qft} One motivation for studying quantum computing in physics is that it might provide a framework for a numerical treatment of problems in quantum field theory. Clearly this goal is a long way off for realistic theories, but the state of quantum computing is advancing rapidly. Discrete formulations of field theory naturally fit into the discrete framework discussed in this work and should be relevant for future applications. A numerical treatment of quantum field theory requires a truncation to a system with a finite number of degrees of freedom. For reactions that take place in a finite space-time volume and involve a finite energy it is natural to limit the number of degrees of freedom by making volume and resolution truncations. Degrees of freedom that are outside of this volume or energetically inaccessible are expected to be unimportant for the given reaction. Daubechies wavelets \cite{daubechies}\cite{jorgensen1}\cite{jorgensen2} and scaling functions are a basis for square integrable functions and a natural representation to perform both kinds of truncations. The basis consists of a complete orthonormal set of functions that have compact support and a limited amount of smoothness. They have the property that for any small volume there are an infinite number of basis functions supported entirely in that volume. This means that they can be used to construct ``local'' observables by smearing the fields with basis functions. 
All of the basis functions $\xi_n(x)$ are generated from the solution of a linear renormalization group equation by translations and dyadic scale transformations, which facilitates computations. Because they are complete they can be used to {\it exactly} expand canonical fields \begin{equation} \Phi (\mathbf{x},t) = \sum \Phi_n (t) \xi_n (\mathbf{x}) \qquad \Pi (\mathbf{x},t) = \sum \Pi_n (t) \xi_n (\mathbf{x}) \end{equation} where $\Phi_n (t)$ and $\Pi_n (t)$ are discrete field operators. If the fields satisfy canonical equal time commutation relations \begin{equation} [\Phi (\mathbf{x},t), \Pi(\mathbf{y},t)] = i \delta (\mathbf{x}-\mathbf{y}) \end{equation} then the discrete fields $\Phi_n$ and $\Pi_n$ will satisfy discrete versions of the canonical equal time commutation relations \cite{Bulut:2013bg} \cite{Polyzou:2017wnj} \cite{Polyzou:2020ifj}: \begin{equation} [\Phi_m(t),\Pi_n(t)]=i \delta_{mn} \qquad [\Phi_m(t),\Phi_n(t)] =0 \qquad [\Pi_m(t), \Pi_n(t)] =0 . \label{w1} \end{equation} In terms of these degrees of freedom the Hamiltonian for a $\phi^4$ theory has the form \begin{equation} H= {1 \over 2}\sum_n \Pi_n\Pi_n + {m^2 \over 2}\sum_n \Phi_n\Phi_n + \sum_{mn} D_{mn} \Phi_m \Phi_n + \lambda \sum_{klmn} \Gamma_{klmn} \Phi_k\Phi_l\Phi_m\Phi_n \label{wav2} \end{equation} where the sums are all infinite. Since $H$ commutes with itself, the discrete fields can be evaluated at $t=0$. The constant matrices are defined by the integrals \begin{equation} D_{mn} = {1 \over 2} \int \pmb{\nabla} \xi_n (\mathbf{x})\cdot \pmb{\nabla} \xi_m (\mathbf{x})d\mathbf{x} \label{wav3} \end{equation} \begin{equation} \Gamma_{klmn}= \int \xi_k (\mathbf{x}) \xi_l (\mathbf{x}) \xi_m (\mathbf{x}) \xi_n (\mathbf{x}) d\mathbf{x} . \label{wav4} \end{equation} For the wavelet basis these constants vanish unless all of the functions appearing in the integrals have a common support, which makes them almost local. In addition, because all of the functions in the integrand are related by translations and scale transformations to a single function, the integrals can all be expressed as linear combinations of solutions of some small linear systems generated by the renormalization group equation (\ref{w3}). Unlike a lattice truncation, the wavelet representation of the field theory is (formally) exact (before truncation). The basis functions regularize the fields so local products of fields that appear in the Hamiltonian are replaced by infinite sums of well-defined products of discrete field operators. The basis functions are differentiable, so there are no finite difference approximations. Wavelet representations of quantum field theories have been discussed by a number of authors \cite{best:1994} \cite{federbush:1995} \cite{1995NuPhB.436..414H} \cite{Battle:1999} \cite{best:2000} \cite{Ismail1:2003} \cite{Ismail2:2003} \cite{altaisky:2007} \cite{albeverio:2009} \cite{altaisky:2010} \cite{altaisky:2013} \cite{Bulut:2013bg} \cite{altaisky:2013b} \cite{PhysRevA.92.032315} \cite{PhysRevLett.116.140403} \cite{altaisky:2016b} \cite{altaisky:2016c} \cite{altaisky:2016} \cite{altaisky:2017} \cite{Polyzou:2017wnj} \cite{Neuberger2018} \cite{Tomboulis1} \cite{Polyzou:2020ifj} \cite{Altaisky:2021hbq}. What is relevant is that the Hamiltonian (\ref{wav2}) has the same form as (\ref{pi1}), except it involves an infinite number of degrees of freedom. It is diagonal and quadratic in the discrete momentum operators and has a non-trivial (almost local) dependence on the $\Phi_n$ operators.
Because all of the basis functions are constructed from the fixed point $s(x)$ of the renormalization group equation (\ref{w3}), the constant quantities $D_{mn}$ and $\Gamma_{klmn}$ can be expressed in terms of a finite set of elementary integrals. The advantage of this basis is that it has natural volume and resolution truncations. For reactions taking place in a finite volume with a finite energy a (large) finite number of these degrees of freedom should provide a good approximation. This reduces the problem to a problem with a finite number of discrete degrees of freedom. In addition the truncated Hamiltonian still has the form (\ref{wav2}), except the sums are only over the retained discrete modes. As the volume and resolution are increased (i.e., as more modes are added) the parameters of the theory have to be adjusted to keep some physical observables constant. The truncated problem is a generalization of the one-degree-of-freedom problem discussed in section \ref{scatt} to a finite number of degrees of freedom. For a quantum field theory the vector representing the state of the field will be much larger than in the one degree of freedom scattering case. The wavelet basis used to construct the discrete representation of the Hamiltonian (\ref{wav2}) is outlined below. The starting point is the solution of the linear renormalization group equation \begin{equation} \boxed{ s(x) = \sum_{l=0}^{2L-1}h_l D T^l s(x) } \label{w3} \end{equation} where \begin{equation} Df(x):= \sqrt{2}f(2x) \qquad \mbox{and} \qquad Tf(x) := f(x-1) \label{w4} \end{equation} are unitary discrete dyadic scale transformations and unit translations. The $h_l$ are constants that depend on the choice of $L$. Generally, as $L$ increases the solutions $s(x)$ become smoother but their support increases. A useful case is $L=3$ where the solution $s(x)$ of (\ref{w3}), called the scaling function, has support on $[0,2L-1]=[0,5]$ and has one continuous derivative. In that case the coefficients $h_l$ for the Daubechies $L=3$ scaling functions are \[ h_0=(1+\sqrt{10}+\sqrt{5+2\sqrt{10}}\,)/16\sqrt{2} \] \[ h_1=(5+\sqrt{10}+3\sqrt{5+2\sqrt{10}}\,)/16\sqrt{2} \] \[ h_2=(10-2\sqrt{10}+2\sqrt{5+2\sqrt{10}}\,)/16\sqrt{2} \] \[ h_3= (10-2\sqrt{10}-2\sqrt{5+2\sqrt{10}}\,)/16\sqrt{2} \] \[ h_4=(5+\sqrt{10}-3\sqrt{5+2\sqrt{10}}\,)/16\sqrt{2} \] \begin{equation} h_5 =(1+\sqrt{10}-\sqrt{5+2\sqrt{10}}\,)/16\sqrt{2} . \label{w5} \end{equation} They are chosen so that the solution of (\ref{w3}) and its unit translates are orthonormal, and so that locally finite linear combinations of these unit translates can be used to locally pointwise represent degree 2 polynomials. Given the solution, $s(x)$, of (\ref{w3}) new functions are constructed from $s(x)$ by rescaling and translating \begin{equation} s^k_n(x) := D^k T^n s(x) = \sqrt{2^k} s(2^k x-n) . \label{w6} \end{equation} The starting scale is fixed using \begin{equation} \int s(x) dx=1 . \label{w7} \end{equation} The functions $s^k_n(x)$ for fixed $k$ span a subspace of the square integrable functions on the real line with a resolution $2^{-k}L$: \begin{equation} {\cal S}^k := \{ f(x) \vert f(x) = \sum_{n=-\infty}^\infty c_n s^k_n(x) \qquad \sum_{n=-\infty}^\infty \vert c_n\vert^2 < \infty \}. \label{w8} \end{equation} The renormalization group equation (\ref{w3}) implies \begin{equation} {\cal S}^k \subset {\cal S}^{k+1}. \label{w9} \end{equation} It follows that \begin{equation} {\cal S}^{k+1} = {\cal S}^{k} \oplus {\cal W}^{k}.
\label{w10} \end{equation} where ${\cal W}^k$ is the orthogonal complement of ${\cal S}^k$ in ${\cal S}^{k+1}$. An orthonormal basis for the subspace ${\cal W}^k$ is given by the ``wavelet functions'': \begin{equation} w^k_n(x)=D^kT^n w(x) \label{w11} \end{equation} where \begin{equation} w (x):= \sum_{l=0}^{2L-1} (-)^l h_{2L-1-l} D T^{l} s(x) \label{w12} \end{equation} is called the ``mother wavelet''. This decomposition can be continued to generate a multi-resolution decomposition of $L^2(\mathbb{R})$ \begin{equation} \boxed{ L^2 (\mathbb{R}) = {\cal S}^{k} \oplus_{l=0}^\infty {\cal W}^{k+l}. } \label{w13} \end{equation} This results in a multi-resolution orthonormal basis for $L^2 (\mathbb{R})$ \begin{equation} \{\xi_n(x) \}_{n=-\infty}^{\infty} := \{ s^k_n(x)\}_{n=-\infty}^\infty \cup \{ w^l_n(x)\}_{n=-\infty,\,l=k}^\infty . \label{w14} \end{equation} For the choice $L=3$ the basis functions $s^k_n(x)$ and $w^k_n(x)$ have compact support on $[2^{-k}n,2^{-k}(n+5)]$. All of the basis functions have one continuous derivative so the coefficients (\ref{wav3}) are defined. The functions $s^k_n(x)$ are like splines in that linear combinations can be used to locally pointwise represent degree 2 polynomials, while the functions $w^l_n(x)$ are orthogonal to the same polynomials on their support. The Fourier transforms of the basis functions are entire functions due to their compact support. Orthonormal three dimensional basis functions are products of one-dimensional basis functions. In spite of these nice properties, the basis functions are fractal valued (since they are related to fixed points of a renormalization group equation) and cannot be written down in closed form. In order to use this representation the constant coefficients $D_{mn}$ and $\Gamma_{n_1 \cdots n_k}$ that appear in the Hamiltonian (\ref{wav2}) need to be computed. Using scale transformations (\ref{w4}) and the renormalization group equation (\ref{w3}) they can all be expressed in terms of the integrals \begin{equation} d_{n} = \int {ds(x) \over dx} {ds(x-n) \over dx}dx \qquad -4\leq n \leq 4 \end{equation} \begin{equation} \gamma_{m,n,k}= \int s(x) s(x-m) s(x-n) s(x-k) dx \qquad -4 \leq m,n,k \leq 4 . \end{equation} These integrals are related to each other by finite linear equations derived from the renormalization group equation (\ref{w3}) and the scale fixing condition (\ref{w7}). These linear systems can formally be solved in terms of the coefficients $h_l$ (\ref{w5}). The coefficients $d_n$ are rational numbers and can be found in the literature on wavelets \cite{beylkin1}. To find the $\gamma_{mnk}$ requires finding eigenvalues of a $9^3 \times 9^3$ matrix. This eliminates the need to evaluate the fractal-valued functions. Alternatively the integrals $\gamma_{mnk}$ can be approximated by noting that the renormalization group equation (\ref{w3}) and the scale fixing condition (\ref{w7}) can be used to calculate the basis functions and their derivatives exactly at all dyadic rational points. Since the functions and their derivatives are continuous, this can be used to estimate these quantities and integrals involving them to any desired accuracy. In order to illustrate a path integral treatment of this system consider a truncation of the theory in 1+1 dimensions where only two adjacent modes of the Hamiltonian (\ref{wav2}) are retained.
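The coefficients (\ref{w5}) and the dyadic-point evaluation mentioned above are easy to check numerically. The sketch below is illustrative only: it verifies the normalization and orthonormality conditions satisfied by the $h_l$, obtains the integer-point values of the scaling function from the eigenvalue problem implied by (\ref{w3}) and the scale fixing condition (\ref{w7}), and performs one dyadic refinement.
\begin{verbatim}
import numpy as np

# Daubechies L=3 coefficients of Eq. (w5)
sq = np.sqrt(10.0)
r = np.sqrt(5.0 + 2.0 * sq)
h = np.array([1 + sq + r, 5 + sq + 3*r, 10 - 2*sq + 2*r,
              10 - 2*sq - 2*r, 5 + sq - 3*r, 1 + sq - r]) / (16.0 * np.sqrt(2.0))

print(np.isclose(h.sum(), np.sqrt(2.0)))                       # normalization
print([np.isclose(sum(h[l] * h[l + 2*m] for l in range(6 - 2*m)), float(m == 0))
       for m in range(3)])                                     # orthonormal translates

# s(x) at the integers: eigenvector of A_{kj} = sqrt(2) h_{2k-j} with eigenvalue 1
A = np.zeros((6, 6))
for k in range(6):
    for j in range(6):
        if 0 <= 2*k - j < 6:
            A[k, j] = np.sqrt(2.0) * h[2*k - j]
w, v = np.linalg.eig(A)
s = np.real(v[:, np.argmin(np.abs(w - 1.0))])
s /= s.sum()                                                   # fixes the scale, Eq. (w7)

# Eq. (w3) then gives s(x) at half-integers (iterating, at all dyadic rationals):
# s(k/2) = sqrt(2) * sum_l h_l s(k - l)
s_half = [np.sqrt(2.0) * sum(h[l] * s[k - l] for l in range(6) if 0 <= k - l < 6)
          for k in range(11)]
\end{verbatim}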
In this case the overlap coefficients that appear in the Hamiltonian and couple adjacent modes can be expressed in terms of the following quantities \begin{align} \Gamma_{0000}&= 0.9528539 \\ \Gamma_{0001}&= 0.0670946 \\ \Gamma_{0011}&= 0.0890895 \\ \Gamma_{0111}&=-0.1424536 \\ D_{00}&=295/56 \\ D_{01}&=-356/105 \\ D_{10}&=D_{01}\\ D_{11}&=D_{00} \end{align} where the $\Gamma$ coefficients were computed by numerical integration using the trapezoidal rule with the basis functions evaluated at 256 dyadic points on their support. Convergence was verified using 512 dyadic points. The truncated Hamiltonian in this case is \begin{equation} H= {1 \over 2}\sum_{n=0}^1 \Pi_n\Pi_n + {m^2 \over 2}\sum_{n=0}^1 \Phi_n\Phi_n +\sum_{mn=0}^1 D_{mn} \Phi_m \Phi_n + \lambda \sum_{klmn=0}^1 \Gamma_{klmn} \Phi_k\Phi_l\Phi_m\Phi_n \label{wavxx} \end{equation} where $\Gamma_{0000}=\Gamma_{1111}$, $\Gamma_{0001}=\Gamma_{0010}=\Gamma_{0100}= \Gamma_{1000}$, etc. The path integral treatment of the field theory in the discrete representation is a multi-dimensional generalization of the treatment for one degree of freedom where each field mode represents an independent degree of freedom. A general numerical treatment involves a truncation and renormalization followed by two approximations. The truncation discards all but a finite number, $F$, of discrete degrees of freedom. \begin{equation} H \to H_F \label{w22} \end{equation} Ideally physics at a given energy scale and in a given volume should be dominated by a finite number of accessible degrees of freedom. The remaining degrees of freedom, which are not expected to impact calculations at that scale and volume, are discarded. The truncated theory is renormalized by adjusting the parameters of the theory so that a set of observables agrees with experiment. This gives the parameters a dependence on the choice of retained degrees of freedom. This is a truncation rather than an approximation. It assumes that no additional parameters need to be introduced beyond what appears in the truncated Hamiltonian and that there is a limit as the volume becomes infinite and resolution becomes arbitrarily small. This is followed by two approximations. The first approximation is to approximate the unitary time evolution operator for the truncated theory using the Trotter product formula with $N$ time slices. \begin{equation} U_F(\tau) = e^{-i H_F \tau} = \lim_{N\to \infty} (e^{-i H_F(\Pi)\Delta t}e^{-iH_F(\Phi)\Delta t})^N \label{w23} \end{equation} where $\Delta t = \tau/N$ and \begin{equation} H_F = H_F(\Pi) + H_F(\Phi) \label{w24} \end{equation} with \begin{equation} H_F(\Pi) := {1 \over 2}\sum_n \Pi_n\Pi_n \label{w24a} \end{equation} and \begin{equation} H_F(\Phi):= {m^2 \over 2}\sum_n \Phi_n\Phi_n + \sum_{mn} D_{mn} \Phi_m \Phi_n + \lambda \sum_{klmn} \Gamma_{klmn} \Phi_k\Phi_l\Phi_m\Phi_n \label{w24b} \end{equation} which expresses $H_F$ as the sum of a part with only the $\Pi_n$ fields and another part with only the $\Phi_n$ fields. Since the discrete canonical pairs of field operators $\Phi_n$ and $\Pi_n$ satisfy canonical commutation relations they have a continuous spectrum on the real line. This is because each one of these complementary operators generates translations in the other operator. The last step is to approximate the continuous spectrum of the discrete field operators $\Phi_n$ and $\Pi_n$ by a collection of $M=2K+1$ closely spaced eigenvalues $\phi_n,\pi_n = n \epsilon$ where $-K \leq n \leq K$ and $\epsilon^2 = 2 \pi/M$.
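For the two-mode truncation the $\Phi$-dependent part of the Hamiltonian (\ref{w24b}) is just a polynomial in the two field amplitudes, so the diagonal ``potential'' transfer matrix can be tabulated directly on the discrete grid. The sketch below is illustrative: it uses the coefficients quoted above, the full permutation symmetry of the $\Gamma$'s and the $0\leftrightarrow 1$ symmetry noted in the text, and arbitrary values of the mass and coupling (the text does not fix them).
\begin{verbatim}
import numpy as np
from itertools import product

# overlap coefficients for two adjacent modes, as quoted in the text
G = {(0,0,0,0): 0.9528539, (0,0,0,1): 0.0670946,
     (0,0,1,1): 0.0890895, (0,1,1,1): -0.1424536}
def Gamma(k, l, m, n):
    key = tuple(sorted((k, l, m, n)))
    return G.get(key, G.get(tuple(sorted((1-k, 1-l, 1-m, 1-n)))))  # 0 <-> 1 symmetry

D = np.array([[295/56, -356/105], [-356/105, 295/56]])

def H_phi(phi, mass=1.0, lam=0.1):
    """Phi-dependent part of (w24b); mass and lam are illustrative choices."""
    quad = 0.5 * mass**2 * np.dot(phi, phi) + phi @ D @ phi
    quart = sum(Gamma(k, l, m, n) * phi[k]*phi[l]*phi[m]*phi[n]
                for k, l, m, n in product((0, 1), repeat=4))
    return quad + lam * quart

K = 20
eps = np.sqrt(2.0 * np.pi / (2*K + 1))
grid = eps * np.arange(-K, K + 1)
dt = 0.025
# diagonal one-step potential phases exp(-i H_F(Phi) dt) on the (2K+1)^2 grid points
Wdiag = np.array([[np.exp(-1j * H_phi(np.array([a, b])) * dt) for b in grid]
                  for a in grid])
\end{verbatim}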
This is exactly what was done in the one-dimensional case, except in this case there are $F$ degrees of freedom where $F$ is the number of retained discrete field modes. Unlike the truncation, both of these steps are mathematical approximations. Let $\langle \pmb{\phi} \vert \chi \rangle = \chi(n_1\epsilon, \cdots , n_F \epsilon) $ be a localized function of the amplitudes of the $F$ discrete field modes that represents an initial free wave packet. The goal is to use path integrals to calculate the time evolution of these coupled modes. For the field theory, before truncation, in the discrete representation there are integrals over an infinite number of modes. For the discarded modes the volume being integrated over for each mode is infinite, resulting in an infinite product of infinite, irrelevant constants. The advantage of discretizing the integrals is that the volume for each mode is finite: \[ \mbox{Volume}= \sqrt{2 M\pi}- \sqrt{2 \pi /M}. \] The discarded modes can be eliminated by summing and dividing by this finite volume, mode by mode, before taking the continuum limit. In this way the integrals over discarded degrees of freedom are replaced by a product of 1's. This results in a path integral that only involves the retained degrees of freedom. The discrete approximation results in a sample space with a finite number of discrete paths. The Trotter approximation is \[ \langle n_1,n_2,\cdots n_F \vert U_F(\tau) \vert \chi (0) \rangle = \] \begin{equation} \lim_{N\to \infty} \langle n_1,n_2,\cdots n_F \vert (e^{-i H_F(\Pi)\Delta t}e^{-iH_F(\Phi)\Delta t})^N \vert \chi (0) \rangle . \label{w25} \end{equation} This can be evaluated by inserting complete sets of eigenstates of the complementary fields between each of the operators. The following abbreviations are used for sums over intermediate states: \begin{equation} \int d\pmb{\phi} = \epsilon^F \sum_{n_1=-K}^K \cdots \sum_{n_F=-K}^K, \label{w26} \end{equation} for vectors representing a value of the eigenvalues of each of the $F$ independent $\phi$ fields, \begin{equation} \pmb{\phi}=(n_1 \epsilon, \cdots , n_F \epsilon) \qquad -K \leq n_i \leq K, \label{w27} \end{equation} for vectors representing the value of the eigenvalues of each of the $F$ independent $\pi$ fields \begin{equation} \pmb{\pi}=(n_1 \epsilon, \cdots , n_F \epsilon) \qquad -K \leq n_i \leq K \label{w28} \end{equation} and \begin{equation} \gamma = (\pmb{\phi}_0,\pmb{\phi}_1, \cdots, \pmb{\phi}_N) \label{w29} \end{equation} for a ``path'' that ends at $\pmb{\phi}_0$ where $\pmb{\phi}_j$ ($j>0$) represents values of each of the $\phi_n$ fields at each of $N$ time steps. The following definitions are generalizations of the definitions in section \ref{path}: \begin{equation} K(\pmb{\phi}',\pmb{\phi},\Delta t) := \sum_{\pmb{\pi}} \langle \pmb{\phi}' \vert \pmb{\pi} \rangle \epsilon^F e^{-{i \over 2}\pmb{\pi}\cdot \pmb{\pi}\Delta t} \langle \pmb{\pi} \vert \pmb{\phi} \rangle .
\label{w30} \end{equation} It follows from (\ref{cl18}) that $K(\pmb{\phi}',\pmb{\phi},\Delta t)$ has the property \begin{equation} \sum_{\mathbf{n}} K(\pmb{\phi}',\pmb{\phi},\Delta t)\epsilon^F =1 \label{w31} \end{equation} and \[ P(\pmb{\phi}_f,\pmb{\phi}_N, \cdots \pmb{\phi}_1) := \] \begin{equation} K(\pmb{\phi}_f,\pmb{\phi}_N,\Delta t)\epsilon^F K(\pmb{\phi}_N,\pmb{\phi}_{N-1},\Delta t)\epsilon^F \cdots K(\pmb{\phi}_3,\pmb{\phi}_2,\Delta t)\epsilon^F K(\pmb{\phi}_2,\pmb{\phi}_1,\Delta t)\epsilon^F \label{w32} \end{equation} also satisfies \begin{equation} \sum_{\gamma \in \Gamma} P(\pmb{\phi}_f,\pmb{\phi}_N, \cdots , \pmb{\phi}_1) =1 . \label{w33} \end{equation} Equation (\ref{w32}) represents the complex probability of a given path, where at each time slice each of the $F$ $\phi$'s has one of $M$ allowed values between $-K\epsilon$ and $K\epsilon$. Removing the last factor of $\epsilon^F$ and only summing over $\pmb{\phi}_N \cdots \pmb{\phi}_2$ gives the evolution due to free propagation \begin{equation} \langle \pmb{\phi}_f \vert e^{-{i\over 2} \pmb{\Pi}\cdot \pmb{\Pi}\tau} \vert \pmb{\phi}_1 \rangle = \sum_{\pmb{n}_N \cdots \pmb{n}_2} P(\pmb{\phi}_f,\pmb{\phi}_N, \cdots , \pmb{\phi}_1)\epsilon^{-F} . \label{w34} \end{equation} The full path integral including the effects of the interaction can be expressed as the expectation of the following potential functional of the path $\gamma$ with respect to the complex probability distribution (\ref{w32}): \begin{equation} W[\gamma] := e^{-i\sum_{n} H_F (\pmb{\phi}_n) \Delta t} \label{w36} \end{equation} where $H_F (\pmb{\phi}_n)$ represents the value of the $\phi$-dependent part of the Hamiltonian evaluated at the value of the path $\gamma$ at the $n$-th time slice. This gives the path integral approximation \[ \langle n_{1f},n_{2f},\cdots n_{Ff} \vert U_F(\tau) \vert \chi (0) \rangle = \] \begin{equation} \sum_{\gamma} P(\pmb{\phi}_f,\pmb{\phi}_N, \cdots \pmb{\phi}_1)W[\gamma] \chi (\pmb{\phi}_1) \label{w37} \end{equation} which again represents the path integral for fields as the expectation value of a potential functional with respect to a complex probability distribution. As in the one degree of freedom case this can be exactly factored into a product of one-time-step operators \[ P(\pmb{\phi}_f,\pmb{\phi}_N, \cdots \pmb{\phi}_1)W[\gamma] = \] \[ K(\pmb{\phi}_f,\pmb{\phi}_N,\Delta t) e^{-iH_F (\pmb{\phi}_N) \Delta t}\epsilon^F K(\pmb{\phi}_N,\pmb{\phi}_{N-1},\Delta t) e^{-iH_F (\pmb{\phi}_{N-1}) \Delta t}\epsilon^F \cdots \] \begin{equation} K(\pmb{\phi}_3,\pmb{\phi}_2,\Delta t) e^{-iH_F (\pmb{\phi}_2) \Delta t} \epsilon^F K(\pmb{\phi}_2,\pmb{\phi}_1,\Delta t) e^{-iH_F (\pmb{\phi}_1) \Delta t}\epsilon^F . \label{w38} \end{equation} This can be used to represent time evolution as the product of large approximate transfer matrices. At each stage these calculations use finite mathematics. The use of the finite Weyl representation exactly preserves unitarity at each level of approximation. Both the $\pmb{\phi}$ and $\pmb{\pi}$ transfer matrices are unitary and can be expressed exactly in the truncated model. This means that the discrete Trotter approximation to time evolution is exactly unitary.
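Because $H_F(\Pi)$ is a sum of independent quadratic terms, the multi-mode free kernel (\ref{w30}) factors into a product of one-mode kernels, so a Trotter step can be applied one mode at a time. The sketch below is illustrative only: it reuses the \texttt{free\_kernel} helper and the \texttt{Wdiag} array from the previous snippets (so its mass and coupling values remain arbitrary), starts from a product of Gaussians in the two field amplitudes, and checks that the evolution preserves the norm.
\begin{verbatim}
import numpy as np

# assumes free_kernel(...) and the grid/Wdiag construction from the earlier sketches
Kmat, grid = free_kernel(Kmax=20, dt=0.025)    # one-mode free transfer matrix, M = 41

# initial product of real Gaussians in the two field amplitudes (mean and width 0.5)
g = np.exp(-(grid - 0.5)**2 / (4 * 0.5**2))
chi = np.outer(g, g)
chi /= np.sqrt(np.sum(np.abs(chi)**2))

def step(chi):
    chi = Wdiag * chi                  # diagonal phase exp(-i H_F(Phi) dt)
    return Kmat @ chi @ Kmat.T         # one-mode free kernel applied to each mode

for _ in range(20):                    # N = 20 Trotter steps, tau = 0.5
    chi = step(chi)
print(np.sum(np.abs(chi)**2))          # exactly unitary: norm stays 1
\end{verbatim}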
Figures 9 and 10 show the initial real and imaginary parts of the two field modes. In this case the initial modes are real and taken to be Gaussians of the form \begin{equation} \langle \phi_0,\phi_1 \vert \psi \rangle = N e^{-\sum_{i=0}^1 (\phi_i - \langle \phi_i \rangle)^2/(4 \delta \phi_i^2)} . \end{equation} Figures 11 and 12 show the real and imaginary parts of the time $t=.5$ evolved amplitudes of these two discrete modes with $M=41$ values using $N=20$ Trotter steps. Figures 13 and 14 show plots of the real and imaginary parts of the amplitude as a function of $\phi_0$ when $\phi_1=0$ at $T=0$ and $T=.5$. In these calculations the initial mean displacement and uncertainty of each mode were taken to be .5. The initial state has no imaginary part, but one develops due to the non-zero displacement of the initial state. This truncation is too crude to contain any real physics; however, it illustrates the application of the discrete path integral to fields. A more drastic truncation of the discretization of the continuum could be used to explore the dynamics of fields with a larger number of modes. \begin{figure} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.6]{fig_10.pdf} \caption{\bf Two modes (real) at T=0} \label{fig:9} \end{minipage} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.6]{fig_9.pdf} \caption{\bf Two modes (imaginary) at T=0} \label{fig:10} \end{minipage} \end{figure} \begin{figure} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.6]{fig_11.pdf} \caption{\bf Two modes (real) after T=.5} \label{fig:11} \end{minipage} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.6]{fig_12.pdf} \caption{\bf Two modes (imaginary) after T=.5} \label{fig:12} \end{minipage} \end{figure} \begin{figure} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.6]{fig_13.pdf} \caption{\bf One mode (real) after T=0.,.5} \label{fig:13} \end{minipage} \begin{minipage}[t]{.45\linewidth} \centering \includegraphics[angle=000,scale=.6]{fig_14.pdf} \caption{\bf One mode (imaginary) after T=0.,.5} \label{fig:14} \end{minipage} \end{figure}
\section{summary and conclusion}\label{sum}
This paper discusses a path integral treatment of discrete representations of quantum theory. The approach is motivated by a textbook treatment \cite{schwinger} of measurement theory of quantum systems on finite dimensional Hilbert spaces. The discrete representation provides a natural connection to a q-bit representation in terms of an irreducible set of quantum gates. It was also shown to formally provide a discrete path integral treatment of problems in potential scattering and quantum field theory. The discrete Weyl representation is closely related to the quantum Fourier transform, while the equivalent decomposition into irreducible subalgebras is more directly related to quantum circuits. The treatment starts by considering a general quantum observable with a finite number of outcomes. It is used to construct a pair of unitary operators, one that commutes with the original observable and a second complementary unitary operator. The two unitary operators are a finite dimensional version of the irreducible Weyl algebra on the Hilbert space spanned by the eigenvectors of the original operator. When the dimension of the Hilbert space gets large this algebra approximates the Weyl algebra of a continuum theory.
When the dimension is a power of 2 the algebra can be decomposed into a product of irreducible sub-algebras where the complementary unitary operators are elementary qbit gates, which are the building blocks of quantum circuits. In the limit of large dimensions discrete operators that behave like canonical coordinates and momenta can be constructed from this algebra. In this approximation the ``coordinates'' and ``momenta'' take on a finite number of discrete values that get closer together and cover more of the real line as the number of degrees of freedom increases. Hamiltonians that are sums of an operator that is quadratic in the ``momentum'' variables and an operator that is a multiplication operator in the ``coordinate'' variables are considered. Time evolution is represented by a product of transfer matrices for a large number of small steps. For small time steps the transfer matrix can be approximately factored into a product of a transfer matrix involving the ``momentum'' part of the Hamiltonian and another transfer matrix involving the ``coordinate'' part of the Hamiltonian. Both of these transfer matrices are represented in the discrete ``coordinate'' representation. A path is defined to go through one of the discrete coordinates at each time step. In the discrete representation the number of possible paths is $M^N$ where $N$ is the number of time steps and $M$ is the number of discrete coordinates at each time slice. The transfer matrices involving the momentum part of the Hamiltonian have the property that summing over either the initial or final coordinates gives 1, independent of the other coordinate. In this work the momentum transfer matrix is interpreted as the complex probability for a transition from one of the allowed coordinates to another in a time step $\Delta t$. The product of $N$ of these operators, where the final coordinate of one is the initial coordinate of the next one, is interpreted as a complex probability for a given ``path'' on the finite sample set of discrete paths. This probability has the property that summing over all paths with a given starting point or a given end point is 1. The interaction (coordinate dependence) is included by multiplying this probability by the product of the coordinate transfer matrices evaluated at each point on the path. In this interpretation the coordinate contribution is represented by a functional on the space of paths. Taking the expectation value of this functional with respect to the complex probability distribution gives the usual Trotter product representation of finite time evolution of the discrete system. In the discrete representation all of the operators are exactly unitary and the mathematics is finite. The sample space of paths for the complex probabilities is finite. The general structure of the Hamiltonian as the sum of a quadratic form in the momentum variables and an interaction term is realized in non-relativistic quantum mechanics and relativistic quantum field theory. The application to potential scattering was discussed using the example of a particle scattering from a smooth short range interaction in one dimension. In the case of field theories an exact multi-resolution representation of the field in terms of an infinite number of discrete modes was used. When truncated to a finite number of modes the resulting discrete system has the structure of a system of coupled particles. The long term interest is in quantum computing.
The examples were computed by applying products of the one step transfer matrices to an initial vector. By computing the transfer matrix elements on the fly, it was not necessary to store the transfer matrix. However, as the number of degrees of freedom is increased, the size of the vector representing the state of the quantum system is the major limitation. The author would like to thank William Hester for pointing out some errors in the original version of this manuscript.
\section{Introduction} For fixed $n$, we seek a unit-volume $n$-hedral tile of space that minimizes surface area. Our Conjecture \ref{best3Dtiles} provides candidates from $n=4$, a certain irregular tetrahedron, to $n\geq 14$, Kelvin's truncated octahedron (see Figs. 1-7). The conjecture is known for $n=6$ and $n=5$. That the cube is the best 6-hedron, tile or not, is well known \cite{ftoth} (see Thm. \ref{Florianpf}). Theorem \ref{existspoly} shows that among convex polyhedra, for fixed $n$, there exists a surface-area-minimizing $n$-hedral tile of space. Section \ref{secprism} gives some properties of prisms and a proof that a certain hexagonal prism is the surface-area-minimizing prism. Theorem \ref{bestfivepoly} gives a nice new proof that a certain triangular prism is the surface-area-minimizing 5-hedron. Theorem \ref{besttetra} proves that a third of a triangular prism is the surface-area-minimizing ``orientation-preserving'' 4-hedral tile, based on a classification of tetrahedral tiles by Sommerville \cite{somville}. (Unfortunately the regular tetrahedron does not tile space.) \subsection{Acknowledgements} This paper is the work of the 2012 ``SMALL'' Geometry Group, an undergraduate research group at Williams College continued by Waruhiu. Thanks to our advisor Frank Morgan, for his patience, guidance, and invaluable input. Thanks to Andrew Kelly and Max Engelstein for contributions to the summer work that laid the groundwork for this paper. Thanks to the National Science Foundation for grants to Morgan and the Williams College ``SMALL'' Research Experience for Undergraduates, and to Williams College for additional funding. Additionally thank you to the Mathematical Association of America (MAA), MIT, the University of Chicago, and Williams College for grants to Professor Morgan for funding in support of trips to speak at MathFest 2012 and the Joint Meetings 2013 in San Diego. \section{Tiling of Space} \label{space} We assume that a space-filling polyhedron tiles $\mathbb{R}^3$ with congruent copies of itself and that the polyhedra are face-to-face, i.e., that polyhedra meet only along entire faces, entire edges, or at vertices. We have the following conjecture: \begin{conjt} \label{best3Dtiles} For fixed $n$ and unit volume, the following provide the surface-area-minimizing $n$-hedral tiles of $\mathbb{R}^3$ (see Figs. 1-7): \begin{enumerate} \item $n=4$: a tetrahedron formed by four isosceles right triangles with two sides of $\sqrt{3}$ and one side of 2. It is also formed by cutting a triangular prism into three congruent tetrahedra; \item $n=5$: a right equilateral-triangular prism; \item $n=6$: the cube; \item $n=7$: a right Cairo or Prismatic pentagonal prism; \item $n=8$: the gabled rhombohedron described by Goldberg \cite{goldbergocta} as having four pentagonal and four quadrilateral sides and the hexagonal prism; \item $n=9$: an enneahedron with three non-adjacent congruent square faces and six congruent pentagonal faces; \item $n=10$ and $11$: a decahedral ``barrel'' with congruent square bases and eight congruent pentagonal sides; \item $n=12$: a 12-hedron of Type 12-VIII described by Goldberg \cite{goldbergdodeca} with 20 vertices of degree three and none of degree four (one half the truncated octahedron (10)); \item $n=13$: a 13-hedron of Type 13-IV described by Goldberg \cite{goldberg>12} as cutting a 14 sided hexagonal prism capped on each end by four faces in half; \item $n \geq 14$: Kelvin's truncated octahedron (\cite{kelvin}, see \cite[pp. 157-171]{morggeo}).
\end{enumerate} \end{conjt} \begin{remark} \emph{Goldberg (\cite[p.231]{goldberg}, see \cite[p. 213]{florian}) conjectured that a surface-area-minimizing $n$-hedron has only vertices of degree three, but it may well not tile. All the vertices of our conjectured polyhedra have degree three.} \end{remark} \begin{figure} \centering \centering \includegraphics[scale=0.7]{besttetra.png} \caption{A tetrahedron formed by cutting a triangular prism into three congruent tetrahedra is the conjectured surface-area-minimizing tetrahedral tile.} \label{fig:besttetrah} \centering \includegraphics[scale=0.7]{righttriangularprism.png} \caption{A right equilateral-triangular prism is the surface-area-minimizing 5-hedron.} \includegraphics[scale=0.7]{cube.png} \caption{The cube is the surface-area-minimizing 6-hedron.} \includegraphics[scale=0.6]{7-hedra.png} \caption{A right Cairo prism is the conjectured surface-area-minimizing 7-hedral tile.} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6]{8-hedra.png} \caption{Goldberg's \cite[Fig. 8-VI]{goldbergocta} gabled rhombohedron and the hexagonal prism \cite{wiki} are the conjectured surface-area-minimizing 8-hedral tiles. They have the same surface area.} \includegraphics[scale=0.4]{12-hedra.png} \caption{Goldberg's \cite{goldbergdodeca} one half the truncated octahedron is the conjectured surface-area-minimizing 12-hedral tile.} \includegraphics[scale=0.4]{13v.png} \caption{Goldberg's \cite{goldberg>12} Type 13-IV is the conjectured surface-area-minimizing 13-hedral tile. It is obtained by cutting Goldberg's Type 14-IV 14-hedron in half.} \includegraphics[scale=0.5]{truncatedoctahedron.jpg} \caption{Kelvin's truncated octahedron is the conjectured surface-area-minimizing polyhedral tile. \cite{wiki}} \end{figure} Unfortunately, the regular tetrahedron, which is the surface-area-minimizing tetrahedron, does not tile because the dihedral angles of $70.53^\circ$ cannot add up to $360^\circ$ (Fig. \ref{fig:notiletetra}). We provide the best orientation-preserving tetrahedral tile in Theorem \ref{besttetra}, but have not been able to remove the orientation-preserving assumption. In the known cases $n=5$ and $n=6$, the candidates are surface-area-minimizing unit-volume $n$-hedra and hence, of course, the optimal $n$-hedral tiles. Minkowski \cite{mink} proved that such an $n$-hedron exists, as did Steinitz \cite{Steiz}. (We are not sure whether their arguments imply the existence of a surface-area-minimizing unit-volume $n$-hedral \textit{tile}.) The case $n=6$ follows immediately from a theorem of Goldberg \cite[p. 230]{goldberg}, also given by Fejes T\'{o}th. \begin{theorem} \label{Florianpf} (\cite[pp. 174 - 180]{ftoth}, \emph{see} \cite[pp. 212 - 213]{florian}). If F denotes the surface area and V the volume of a three-dimensional convex polyhedron with f faces, then \[ \frac{F^3}{V^2} \geq 54(f-2)\tan{\omega_f}(4\sin^2{\omega_f}-1) \] where $\omega_f = \pi f/6(f -2)$. Equality holds only for the regular tetrahedron, the cube, and the regular dodecahedron. \end{theorem} Regarding $n=7$, Goldberg \cite{goldberg} claims that the right regular pentagonal prism is the surface-area-minimizing 7-hedron. However, the proof, which was given by Lindel\"{o}f, is $-$ in Lindel\"{o}f's words $-$ ``only tentative''. Furthermore, regular pentagons cannot tile the plane. Therefore, we cannot tile $\mathbb{R}^3$ with the right regular pentagonal prism. The Cairo and Prismatic pentagons (Fig. \ref{fig:pentile}) have recently been proved by Chung et al.
\cite[Thm. 3.5]{pen11} to be the best pentagonal planar tiles. They are circumscribed about a circle, with three angles of $2\pi/3$ and two angles of $\pi / 2$, adjacent in the Prismatic pentagon and non-adjacent in the Cairo pentagon. We conjecture that a right Cairo or Prismatic prism is the surface-area-minimizing 7-hedral tile. \begin{figure} \centering \includegraphics[scale=0.3]{tetrahedranotile.png} \caption{Because the dihedral angles ($70.53^\circ$) of a regular tetrahedron cannot add up to $360^\circ$, the regular tetrahedron does not tile. There is a small gap \cite{ungor}.} \label{fig:notiletetra} \end{figure} For $n=8$, Goldberg \cite{goldberg} shows that the regular octahedron does not minimize surface area, supporting his conjecture that the surface area minimizer cannot have vertices of degree greater than three. We found that the gabled rhombohedron has the same surface area as the regular octahedron (which does not tile) and the hexagonal prism (which tiles). Moreover, it has less surface area than the gyrobifastigium suggested by Li et al. \cite[p. 30]{g10}. The gabled rhombohedron is distinguished among Goldberg's \cite{goldbergocta} octahedral tiles by having all vertices of degree three. The enneahedron and decahedron are inspired by two of the eight nontrivial geodesic nets on the sphere meeting in threes at $2\pi/3$, classified by Heppes (see \cite{taylornets} and \cite[p. 132]{morggeo}), although these polyhedra inscribed in spheres are not circumscribed about spheres as surface-area minimizers would be. We do not know if any such polyhedra tile space. The conjectured 13-hedron is distinguished by having all vertices of degree three \cite{goldberg>12}. \begin{figure} \centering \includegraphics[scale=0.7]{bestpentiling.png} \caption{The Cairo and Prismatic pentagons have recently been proved (Chung et al. \cite[Thm. 3.5]{pen11}) to be the best pentagonal planar tiles.} \label{fig:pentile} \end{figure} For any $n \geq 14$, we follow the famous Kelvin Conjecture: the truncated octahedron is the surface-area-minimizing $n$-hedron that tiles space. Table \ref{tab:poly} gives the surface areas of the conjectured minimizers, computed using Proposition \ref{bestheight} and the Quickhull algorithm \cite{qhull}. Table \ref{tab:comp} shows the surface areas of competing 12- and 13-hedra: the rhombic dodecahedron, the elongated dodecahedron, and Goldberg's Types 13-I and 13-II \cite{goldberg>12}. Note that an $n_0$-hedron may be considered a (degenerate) $n$-hedron for any $n>n_0$ by subdividing its faces, as in Conjecture \ref{best3Dtiles}(7) and (10).
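For reproducibility, the following short Python sketch indicates how entries of Table \ref{tab:poly} can be checked from a list of vertex coordinates: build the convex hull (we assume SciPy's Qhull-based \texttt{ConvexHull}), rescale to unit volume, and read off the surface area. The coordinates below are one standard realization of the conjectured tetrahedral tile of Conjecture \ref{best3Dtiles}(1); they are illustrative only and are not taken from the original computation.

\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull  # SciPy wraps the Qhull library

# One realization of the conjectured 4-hedral tile: a disphenoid whose four
# congruent isosceles faces have side lengths sqrt(3), sqrt(3), and 2.
verts = np.array([[0.0, 0.0, 0.0],
                  [2.0, 0.0, 0.0],
                  [1.0, 1.0, 1.0],
                  [1.0, -1.0, 1.0]])

hull = ConvexHull(verts)
scale = hull.volume ** (-1.0 / 3.0)        # rescale so the volume becomes 1
area_at_unit_volume = hull.area * scale**2
print(round(area_at_unit_volume, 4))       # 7.4126, the n = 4 entry of the table
\end{verbatim}

The prismatic entries can also be checked directly from the closed-form expression of Proposition \ref{bestheight}.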
\begin{table}[ht] \centering \begin{tabular}{|c|c|} \hline \begin{tabular}[x]{@{}c@{}} $n=4$ \\ One third of a triangular prism \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{7.4126}\\ \includegraphics[scale=0.2]{besttetra.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=5$ \\ Triangular prism \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{6.5467}\\ \includegraphics[scale=0.2]{righttriangularprism.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=6$ \\ Cube \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{6.0000}\\ \includegraphics[scale=0.2]{cube.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=7$ \\ Cairo pentagonal prism \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.8629}\\ \includegraphics[scale=0.2]{7-hedra.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=8$ \\ Hexagonal prism \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.7191}\\ \includegraphics[scale=0.2]{hexagonalprism.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=9$ \\ Enneahedron \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.5299}\\ \includegraphics[scale=0.2]{9hedra.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=10$ and $11$ \\ Decahedral barrel \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.4434}\\ \includegraphics[scale=0.2]{10hedra.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=12$ \\ Half truncated octahedron \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.3199}\\ \includegraphics[scale=0.2]{12-hedra.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n=13$ \\ Goldberg's \cite{goldberg>12} Type 13-IV \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.3189}\\ \includegraphics[scale=0.2]{13v.png} \end{tabular} \\ \hline \begin{tabular}[x]{@{}c@{}} $n\geq 14$ \\ Kelvin's truncated octahedron \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.3147}\\ \includegraphics[scale=0.2]{truncatedoctahedron.jpg} \end{tabular} \\ \hline \end{tabular}\\ \vspace{4 mm} \caption{Our conjectured surface-area-minimizing unit-volume $n$-hedral tiles.} \label{tab:poly} \end{table} \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[x]{@{}c@{}} $n=12$ \\ Rhombic dodecahedron \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.3454}\\ \includegraphics[scale=0.2]{rhombicdodecahedron.jpg} \end{tabular} & \begin{tabular}[x]{@{}c@{}} $n=12$ \\ Elongated dodecahedron \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.4932}\\ \includegraphics[scale=0.2]{elongeted.png} \end{tabular}\\ \hline \begin{tabular}[x]{@{}c@{}} $n=13$ \\ Goldberg's Type 13-I \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{5.3640}\\ \includegraphics[scale=0.2]{13i-hedra.png} \end{tabular} & \begin{tabular}[x]{@{}c@{}} $n=13$ \\ Goldberg's Type 13-II \end{tabular} & \begin{tabular}[x]{@{}c@{}}\textbf{6.8813}\\ \includegraphics[scale=0.2]{13-hedra.png} \end{tabular}\\ \hline \end{tabular}\\ \vspace{4 mm} \caption{Surface areas of competing 12- and 13-hedral tiles.} \label{tab:comp} \end{table} On the other hand, the following proposition shows that one can always reduce surface area by a small truncation and rescaling, although the resulting polyhedron may not tile. We think the truncated octahedron is as far as one can go and still tile. \begin{proposition} \label{trunc} A slight truncation at any strictly convex vertex and rescaling to the original volume reduces the surface area of a polyhedron.
\end{proposition} \begin{proof} Instead of rescaling, we show the decrease of the scale-invariant area--volume ratio $A^3 / V^2$. Under truncation by a distance $t$, the logarithmic derivative $$ \frac{3A'}{A} - \frac{2V'}{V} $$ is negative for all sufficiently small $t$ because $A'$ is proportional to $-t$, while $V'$ is proportional to $-t^2$. \end{proof} Heppes drew our attention to Wolfram Online's \cite{wolfram} discussion of polyhedral tiles. It notes the extensive categorization of polyhedral tiles by Goldberg [G1-G7]. Gr\"{u}nbaum and Shephard \cite{grunbaum} and Wells \cite{wells} discuss the polyhedral tiles known before 1980, when the maximal $n$ for $n$-hedral tiles was believed to be 26. In 1980, P. Engel \cite[pp. 234-235]{wells} found 172 additional polyhedral tiles with 17 to 38 faces, and more polyhedral tiles have been found subsequently. \section{Existence of a surface-area-minimizing tile} \label{existence} For fixed $n$, Minkowski \cite{mink} proved that among convex polyhedra, there exists a surface-area-minimizing $n$-hedron. We show that if we restrict to polyhedra that tile space, then there still exists a surface-area-minimizing convex polyhedral tile. \begin{definition} \emph{A polyhedron is} nondegenerate \emph{if it does not have any unnecessary edges.} \emph{The greatest distance between two vertices of a polyhedron is its} diameter\emph{.} \indent \emph{We call two polyhedra $P$ and $Q$ combinatorially equivalent if there exists a bijection $f$ between the vertices of $P$ and the vertices of $Q$ such that:} \begin{enumerate} \item $v_1v_2$ is an edge of $P$ if and only if $f(v_1)f(v_2)$ is an edge of $Q$. \item $v_1, \dotsc, v_k$ is a face of $P$ if and only if $f(v_1), \dotsc, f(v_k)$ is a face of $Q$. \end{enumerate} \end{definition} \begin{proposition} \label{types} For any $n$, there are a finite number of combinatorial types of $n$-hedra. \end{proposition} \begin{proof} Fix $n$. First, an $n$-hedron's face can have at most $n-1$ edges. Assume, on the contrary, that an $n$-hedron contains an $n$-gonal face. Then, since each edge is shared by two faces and two faces share at most one edge, there are at least $n+1$ faces in the $n$-hedron, which is a contradiction. This means that the biggest face can have $n-1$ edges and the smallest is a triangle (3 edges). Therefore, each face has one of $n-3$ possible numbers of edges. Hence, the number of possible combinations of face types equals the number of nonnegative integer solutions of the equation $$ x_3+x_4+\cdots+x_{n-1} = n, $$ where $x_i$ is the number of faces with $i$ edges. The number of solutions of this equation is ${2n-4 \choose n}$. It follows that for each combination, we can arrange the faces in a finite number of ways. Therefore, there are a finite number of combinatorial types. \end{proof} \begin{remark} \emph{Not all possible combinations of faces can make a polyhedron. For example, for $n=5$, there are 6 possible combinations of faces, but in Proposition \ref{fivefaceopt} we will prove that the only combinatorial types are triangular prisms and quadrilateral pyramids.} \end{remark} \begin{theorem} \label{existspoly} For fixed $n$, there exists a surface-area-minimizing unit-volume convex $n$-hedral tile. \end{theorem} The minimizer could be a degenerate $n$-hedron (with fewer than $n$ faces), as we conjecture occurs for $n>14$ (Conj. \ref{best3Dtiles}(10)). \begin{proof} Take a sequence of unit-volume convex $n$-hedral tiles with areas approaching the infimum. We may assume that the areas are bounded by $P_0$.
By standard compactness results, it suffices to show that the diameters are bounded. Consider a unit-volume convex polyhedron. Take the slice of largest area $a_0$ perpendicular to the diameter $D$. Consider the pyramid with that slice as base and with apex at the more distant end of the diameter. By convexity, the pyramid lies inside the polyhedron. Therefore, $$ 1 \geq \left(\frac{1}{3}\right)a_0 \frac{D}{2} $$ and $$ a_0 \leq \frac{6}{D}. $$ For every slice perpendicular to the diameter, by the isoperimetric inequality, the perimeter $p$ and area $a$ satisfy $$ p \geq \sqrt{4 \pi a}. $$ Since $\sqrt{a} \geq a / \sqrt{a_0}$, we have $$ \sqrt{4 \pi a} \geq \frac{a\sqrt{4 \pi}}{\sqrt{6/D}} = a\sqrt{\frac{2 \pi D}{3}}. $$ Integrating over all slices, the areas integrate to the volume, which equals 1, while the perimeters integrate to at most the surface area, which is bounded by $P_0$. Hence $$ P_0 \geq \sqrt{\frac{2 \pi D}{3}}. $$ Therefore, $$ D \leq \frac{3P_0^2}{2\pi}, $$ as desired. \end{proof} \begin{remark} \emph{In general an area-minimizing $n$-hedral tile need not be unique. Indeed, for $n = 8$, the conjectured gabled rhombohedron and hexagonal prism have the same surface area.} \end{remark} \section{Properties of Prisms} \label{secprism} In this section, we give some properties of prisms, which are useful in the next section. We begin by giving a definition of prisms. Then we characterize prisms by showing that if a polyhedron has two $n$-gonal bases and $n$ quadrilateral faces, then it must be a combinatorial prism (Props. \ref{combinatorial_prism_face_3} and \ref{combinatorial_prism_face_n}). Moreover, we show that a prism with a regular polygonal base uniquely minimizes surface area among all prisms of fixed volume and number of faces, and we give a way to calculate the surface area and optimal height (Prop. \ref{bestheight}). Lastly, in Proposition \ref{montile}, we relate tilings of the plane to tilings of space in order to prove that a certain hexagonal prism is the surface-area-minimizing prism (Prop. \ref{hexbest}). \begin{definition} \emph{A} prism \emph{is a polyhedron consisting of a polygonal planar base, a translation of that base to another plane, and edges between corresponding vertices.} \end{definition} \begin{remark} \emph{Bernd Sturmfels \cite{sturmfels} asked us the following question: given a specific combinatorial type for some $n$-hedron, can we determine whether there exists a tile of that type? We conjecture that the pentagonal pyramid is the combinatorial polyhedron with the fewest faces which does not tile. Wolfram Online \cite{wolfram} remarks that there are no known pentagonal pyramids which tile.} \end{remark} The next two propositions characterize when we know that an $n$-hedron must be a combinatorial prism. \begin{proposition} \label{combinatorial_prism_face_3} Let $P$ be a nondegenerate polyhedron with three quadrilateral faces and two triangular faces. Then $P$ is a combinatorial triangular prism. \end{proposition} \begin{proof} Since each edge lies on two faces, the total number of edges is 9. By Euler's formula, the number of vertices is 6. Since the sum over the faces of the number of vertices is 18, each vertex must have degree 3. (By the nondegeneracy hypothesis, no vertex can have degree 2.) Suppose that the triangular faces $\bigtriangleup ABC$ and $\bigtriangleup ABY$ meet. Because each vertex has degree 3, they must share an edge, as in Figure \ref{fig:tprism1}. The other faces at edges $AC$ and $BC$ must be quadrilaterals. The quadrilateral $ACXY$ at edge $AC$ has further vertices $X$ and $Y$, distinct because every vertex has degree 3.
It follows that the vertex $B$ is not of degree 3, a contradiction. Therefore, the triangular faces are disjoint and the polyhedron is a combinatorial triangular prism, as desired. \end{proof} Proposition \ref{combinatorial_prism_face_n} shows, more generally, that a nondegenerate polyhedron with $n$ quadrilateral faces and two $n$-gonal faces is a combinatorial $n$-gonal prism. The proof is similar to the proof of Proposition \ref{combinatorial_prism_face_3}. \begin{proposition} \label{combinatorial_prism_face_n} Let $P$ be a nondegenerate polyhedron with $n$ quadrilateral faces and two $n$-gonal faces. Then $P$ is a combinatorial $n$-gonal prism. \end{proposition} \begin{proof} By the same argument as in Proposition \ref{combinatorial_prism_face_3}, we can show that every vertex has degree 3 and that $V=2n$ and $E=3n$. \newline \noindent \emph{(Case 1)}: $n=4$. \newline Since no vertex can have degree greater than three, it must be the case that two of the faces do not share a vertex. Since all six faces of this polyhedron are quadrilaterals, we can take two such disjoint faces as the bases. \newline \noindent \emph{(Case 2)}: $n \geq 5$. \newline Suppose that the two $n$-gonal faces meet. If they share only one vertex, then the degree of this vertex is at least four, a contradiction. So they must meet at an edge. Let us call this edge $cd$ and the two $n$-gonal faces $a_1a_2 \dotsc a_{n-2}cd$ and $b_1b_2 \dotsc b_{n-2}cd$. The vertex $c$ is contained in the edges $ca_{n-2}$, $cb_{n-2}$, and $cd$. Therefore, there exists a quadrilateral face containing the edges $ca_{n-2}$ and $cb_{n-2}$, namely $ca_{n-2}xb_{n-2}$. Similarly, there exists a vertex $y$ such that $db_1ya_1$ is a face of $P$. If $x=y$, then the degree of $x$ is at least four, a contradiction. So $x$ and $y$ are distinct. Now note that since $b_1$ is contained in the three edges $b_1d$, $b_1y$, and $b_1b_2$, there exists a face containing the edges $b_1b_2$ and $b_1y$. This face must be a quadrilateral, so there exists a vertex $z$ such that $b_2b_1yz$ is a face of $P$. Since there are $2n$ vertices of $P$, $z \in \{a_1, \dotsc ,a_{n-2},b_1, \dotsc ,b_{n-2},c,d,x,y\}$. Moreover, since two faces meet in at most two vertices, $z \in \{b_3, \dotsc ,b_{n-2},x\}$. It follows that $\deg{z}$ is at least four, a contradiction. Therefore, the two $n$-gonal faces do not share an edge, and it follows that they cannot meet. We now show that $P$ is a combinatorial $n$-gonal prism. Let $a_1a_2 \dotsc a_n$ be one of the two $n$-gonal faces and let the other $n$-gonal face have vertices $b_1,b_2,\dotsc ,b_n$. By permuting the vertices $b_1,b_2,\dotsc, b_n$, we may assume that $a_ib_i$ is an edge of $P$ for each $i=1,2, \dotsc ,n$. Each edge $a_ia_{i+1}$ is contained in a face of $P$ other than $a_1a_2 \dotsc a_n$. Since this face also contains the edges $a_ib_i$ and $a_{i+1}b_{i+1}$, we conclude that $a_ib_ib_{i+1}a_{i+1}$ is a face of $P$. Therefore, $b_ib_{i+1}$ is an edge of $P$. Hence, $b_1b_2\dotsc b_n$ is a face of $P$. From this correspondence, it is clear that $P$ is a combinatorial $n$-gonal prism, as desired.
\end{proof} \begin{figure} \centering \includegraphics[scale=0.6]{tprismtriangles.png} \caption{Two triangular faces cannot meet in a nondegenerate polyhedron with three quadrilateral faces and two triangular faces.} \label{fig:tprism1} \end{figure} The following proposition gives the optimal height and surface area for a prism over a given base shape: \begin{proposition} \label{bestheight} The optimal unit-volume prism with base similar to a region $R$ of area $A_0$ and perimeter $P_0$ is a right prism of height $h=(4\sqrt{A_0}/P_0)^{2/3}$ and surface area $S = 3({P_0^2}/{2A_0})^{1/3}$. If the base is a regular polygon, it uniquely minimizes surface area among all prisms of fixed volume and number of faces. \end{proposition} \begin{proof} Since the top is a translation of the bottom, we may assume that both are horizontal. Since shearing a right prism preserves volume but increases surface area, we may assume that our prism is a right prism. A simple calculus computation shows that the optimal right prism has the height and surface area asserted. Since a regular $n$-gon uniquely minimizes perimeter for given area, the right $n$-gonal prism of optimal dimensions uniquely minimizes surface area among all prisms of fixed volume and number of faces. \end{proof} The next proposition gives an example of how we can relate tilings of the plane to tilings of space. We use Proposition \ref{montile} and Hales' honeycomb theorem \cite[Thm. 1-A]{hales} to prove that the hexagonal prism is the surface-area-minimizing prism. \begin{proposition} \label{montile} Given $n \geq 5$, a monohedral tiling of space by unit-volume right prisms with $n$ faces is surface-area-minimizing among prisms if and only if the bases are perimeter-minimizing tilings of parallel planes by fixed-area $(n-2)$-gons and the height is optimal as in Proposition \ref{bestheight}. \end{proposition} \begin{proof} We claim that bases must match up with bases and sides with sides. For $n \neq 6$, this is trivial. For $n = 6$, the prism is a cube and the claim is even more trivial. Therefore, the bases tile parallel planes. Furthermore, the bases minimize perimeter for fixed area if and only if the prisms minimize surface area for fixed volume. \end{proof} \begin{remark} \emph{Proposition \ref{montile} ensures that the surface-area-minimizing tile that is a combinatorial prism with seven faces is the Cairo prism.} \end{remark} \begin{proposition} \label{hexbest} A right regular hexagonal prism of base length $(2/9)^{1/3}$ and height $2^{1/3}3^{-1/6}$ provides the least-surface-area tiling of space by unit-volume prisms. Its surface area is $2^{2/3}3^{7/6}$. \end{proposition} \begin{proof} Hales' honeycomb theorem \cite[Thm. 1-A]{hales} says that the regular hexagon provides the least-perimeter way to tile the plane into equal parts. By Proposition \ref{montile}, a regular hexagonal prism is the least-surface-area way to tile space by equal-volume prisms. The best right regular hexagonal prism has the height given by Proposition \ref{bestheight}. Since the base length of a unit-volume right regular hexagonal prism is determined by its height, we have the desired result. \end{proof} \section{The surface-area-minimizing tetrahedron and 5-hedron tiles} \label{5and4hedra} The regular tetrahedron is the surface-area-minimizing tetrahedron by Theorem \ref{Florianpf}, but, unfortunately, does not tile space (Fig. \ref{fig:notiletetra}).
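This angular obstruction is immediate to check numerically; for illustration only, a two-line Python sketch:

\begin{verbatim}
import math

theta = math.degrees(math.acos(1.0 / 3.0))  # dihedral angle of the regular tetrahedron
print(theta, 360.0 / theta)                 # ~70.5288 and ~5.1043, so an integer number
                                            # of copies cannot close up around an edge
\end{verbatim}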
While the problem of tetrahedral tilings has been considered in the literature, there does not seem to be a discussion of \textit{surface-area-minimizing} tetrahedral tiles. In this section, we use Sommerville's classification of space-filling tetrahedra to find the surface-area-minimizing tetrahedron. However, we are unable to remove the orientation-preserving assumption. We first define an orientation-preserving tiling as follows: \begin{definition} \label{propertiling} \emph{A tiling is} orientation preserving \emph{if any two tiles are equivalent under an orientation-preserving isometry of $\mathbb{R}^3.$} \end{definition} Sommerville \cite[p.57]{somville} describes four types of tetrahedral tiles and claims that, ``in addition to these four, no tetrahedral tiles exist in euclidean space.'' Edmonds \cite{edmonds} addresses some concerns about Sommerville's proof and proves that Sommerville's four candidates are indeed the only four face-to-face, orientation-preserving tiles. The No. 1 tetrahedron is given by cutting a triangular prism into three (see Fig. \ref{fig:tetraprism}). The No. 2 tetrahedron is given by cutting No. 1 or No. 3 in half (Fig. \ref{fig:tetra2}). The No. 3 tetrahedron is given by cutting a square pyramid in half across the diagonal of the base (Fig. \ref{fig:tetra3}). This means that No. 3 is one twelfth of a cube. Note that No. 3 was incorrectly suggested by Li et al. \cite{g10} as a surface-area-minimizing tetrahedral tile. Lastly, the No. 4 tetrahedron is given by cutting No. 1 into 4 (Fig. \ref{fig:tetra4}). \begin{figure} \centering \includegraphics[scale=0.7]{tetraprism.png} \caption{The tetrahedron (Sommerville No. 1) formed by four isosceles triangles with two sides of $\sqrt{3}$ and one side of 2 minimizes surface area among all orientation-preserving tetrahedral tiles \cite[Fig. 7]{somville}.} \label{fig:tetraprism} \includegraphics[scale=0.7]{no2tetra.png} \caption{The No. 2 tetrahedron is given by cutting No. 3 in half \cite[Fig. 8]{somville}.} \label{fig:tetra2} \includegraphics[scale=0.7]{no3tetra.png} \caption{The No. 3 tetrahedron is given by cutting a square pyramid into two \cite[Fig. 9]{somville}.} \label{fig:tetra3} \includegraphics[scale=0.7]{no4tetra.png} \caption{The No. 4 tetrahedron is given by cutting No. 1 into 4 \cite[Fig. 10]{somville}.} \label{fig:tetra4} \end{figure} Goldberg \cite{goldbergtetra} considered more general tetrahedral tilings (which are not face-to-face) and found infinitely many families of them. Edmonds does not consider tilings which are not orientation-preserving. Further investigation is needed regarding what is known about non-orientation-preserving tilings, and whether the orientation-preserving hypothesis can be removed from Theorem \ref{besttetra}. Marjorie Senechal \cite{senechal} provides an excellent survey on tetrahedral tiles. Senechal explains that Sommerville's initial consideration of this question goes back to an error made by a student. The student stated that the three tetrahedra which divide a triangular prism are congruent, though he meant of equal volume. This prompted Sommerville's initial study of congruent tetrahedra which tile space. Senechal points out that Sommerville seems to consider only orientation-preserving, face-to-face tetrahedral tilings, and she stresses the need for more consideration of the problem. We now proceed to show that the No. 1 tetrahedron provides the optimal orientation-preserving tetrahedral tiling of space. \begin{theorem} \label{besttetra} Let $T$ be the No.
1 tetrahedron formed by four isosceles triangles with two sides of $\sqrt{3}$ and one side of 2 (Fig. \ref{fig:tetraprism}). Then $T$ provides the least-surface-area unit-volume orientation-preserving tetrahedral tiling. \end{theorem} \begin{proof} Since Sommerville provides edge lengths and dihedral angles for each of the four types, we scaled the various tetrahedra to unit volume and calculated the surface area of each. The four types had surface areas of $7.4126, 7.9635, 8.1802,$ and $10.3646$ (to four decimal places), respectively. Thus, $T$ is the surface-area-minimizing orientation-preserving tetrahedral tile. \end{proof} \begin{remark} \label{sumofdihedral} \emph{For all prisms, the sum of all dihedral angles is a multiple of $360^\circ$. This does not hold for every polyhedron that tiles $\mathbb{R}^3$, as shown by Sommerville's tetrahedra (Thm. \ref{besttetra}).} \end{remark} Although Conjecture \ref{best3Dtiles}(2) for $n=5$ is well known, there seems to be no nice proof in the literature. The more specific problem of tiling space with prisms was put forth by Steiner (\cite{Steiner2}; see \cite[p. 209]{florian}), who conjectured that a right prism with a regular polygonal base is surface-area-minimizing among all combinatorial prisms. Steinitz apparently proved the conjecture for triangular prisms, but the result was never published (see \cite[p. 209]{florian}). Brass, Moser, and Pach \cite{disgeo} assert that the optimal $n$-hedron is known for $n \leq 7$ but do not provide candidates, though they do reference Goldberg \cite{goldberg}. Goldberg says that the optimal candidate among 5-hedra is known, but offers no proof or specific reference in his paper. We are happy to add our proof and Corollary \ref{triprismtile} to the literature. Earlier, Sucksdorff \cite{french} gave a proof which Florian \cite[p. 211]{florian} calls ``very troublesome.'' Sucksdorff first eliminates other combinatorial types by noting that the well-known best representative, a square pyramid, has more surface area than the optimal triangular prism. Then follow eighteen pages of algebraic and trigonometric inequalities to show that the right equilateral-triangular prism of optimal height minimizes surface area in its combinatorial type. The editor, M. Catalan, appends a note that Sucksdorff's conclusion agrees with the theorem published by Lindel\"{o}f \cite{lind} twelve years later, of which Sucksdorff was apparently unaware. The editor had heard of the result somewhere, from ``Mr. Steiner, I believe.'' We thank Bill Dunbar for help reading the original French. Our proof that the right equilateral-triangular prism is the least-surface-area 5-hedron begins by showing that the face types characterize a combinatorial triangular prism (Prop. \ref{combinatorial_prism_face_3}). Then we show that a polyhedron with five faces is combinatorially equivalent to a square pyramid or a triangular prism (Prop. \ref{fivefaceopt}). Furthermore, we prove that the right regular square pyramid is the least-surface-area quadrilateral pyramid (Prop. \ref{square-pyramid}) and find a triangular prism that has less surface area than this pyramid (Prop. \ref{optprism}). Therefore, the best 5-hedron must be a combinatorial triangular prism. By computation, we eliminated non-convex 5-hedra; therefore, the most efficient must be convex. Finally, using Lindel\"{o}f's Theorem (Thm. \ref{linde}), we show that the 5-hedron with the least surface area is the right equilateral-triangular prism (Thm. \ref{bestfivepoly}).
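For concreteness, the two closed-form surface areas compared in this outline (derived below in Propositions \ref{square-pyramid} and \ref{optprism}) are easy to evaluate numerically; the short Python check below is included only as an illustration.

\begin{verbatim}
# surface area of the optimal unit-volume quadrilateral pyramid, ~6.6039
best_square_pyramid = 2 ** (5 / 3) * 3 ** (2 / 3)
# surface area of the optimal unit-volume triangular prism, ~6.5467
best_triangular_prism = 2 ** (1 / 3) * 3 ** (3 / 2)
print(best_square_pyramid, best_triangular_prism)
print(best_triangular_prism < best_square_pyramid)   # True: the prism wins
\end{verbatim}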
In Section \ref{secprism}, we gave the following proposition, which shows that the face types characterize a combinatorial triangular prism. \newline \newline \textbf{Proposition \ref{combinatorial_prism_face_3}.} \emph{Let $P$ be a nondegenerate polyhedron with three quadrilateral faces and two triangular faces. Then $P$ is a combinatorial triangular prism.} \newline We now show that a nondegenerate polyhedron with five faces is combinatorially equivalent to a square pyramid or a triangular prism by using Euler's formula to limit the number of possible combinations of quadrilateral and triangular faces to three. We then show that one case is impossible and apply Proposition \ref{combinatorial_prism_face_3} to complete the proof. \begin{proposition} \label{fivefaceopt} A nondegenerate polyhedron with five faces is combinatorially equivalent to a square pyramid or a triangular prism. \end{proposition} \begin{proof} Because $P$ has five faces and is nondegenerate, each face is either a triangle or a quadrilateral. Let $a$ be the number of triangular faces and $b$ be the number of quadrilateral faces. Since $P$ has five faces, we have $a+b=5$. Let $V$ be the number of vertices of $P$ and $E$ be the number of edges of $P$. By Euler's formula, we have $V-E+5=2$. By calculating the sum of the number of edges of each face of $P$, we have $2E=3a+4b$. Therefore, $a$ is even. \newline \noindent\textit{(Case 1):} $a=0$ and $b=5$. \newline From the above formulas, we have $V=7$ and $E=10$. By counting the number of edges at each vertex, we have that the sum of the degrees of the vertices of $P$ is $2E=20$. By the pigeonhole principle, there exists a vertex which has degree less than or equal to $20/7$. Since every degree is at least three, we get a contradiction. \newline \noindent\textit{(Case 2):} $a=2$ and $b=3$. \newline By Proposition \ref{combinatorial_prism_face_3}, $P$ is a combinatorial triangular prism. \newline \noindent\textit{(Case 3):} $a=4$ and $b=1$. \newline From the above formulas, we have $V=5$, and it easily follows that $P$ is a quadrilateral pyramid. Therefore, we have shown that $P$ is either a combinatorial triangular prism or a quadrilateral pyramid. \end{proof} Next, we give a lower bound on the surface area of a pyramid and use it to show that the quadrilateral pyramid with a square base has the least surface area among quadrilateral pyramids. \begin{lemma} \label{side-surface} Let $P$ be a pyramid with apex $V$, base $A_1A_2...A_n$, and height $h$. Suppose that the base has area $S$ and perimeter $p$. Then the sum of the areas of the side faces of $P$ is greater than or equal to $(1/2)\sqrt{(2S)^2+p^2h^2}$. Equality holds if and only if the base is circumscribed about a circle and the foot of the perpendicular line from $V$ to the base is the center of that circle. \end{lemma} \begin{proof} Let $B$ be the foot of the perpendicular line from $V$ to the base. Let $a_1,a_2,...,a_n$ be the lengths of the sides of the base. Let $x_1,x_2,...,x_n$ be the distances from $B$ to the sides of the base. Then we have $\sum_i \pm a_ix_i=2S$. This implies that $\sum_i a_ix_i\geq2S$. Equality holds when $B$ lies in the interior of the base. The sum of the areas of the side faces of $P$ is given by $$ \frac{1}{2}\sum_i a_i\sqrt{x_i^2+h^2}=\frac{1}{2}\sum_i \sqrt{\left(a_ix_i\right)^2+ \left(a_ih\right)^2}. $$ By the triangle inequality, $$ \sum_i \sqrt{\left(a_ix_i \right)^2+ \left(a_ih \right)^2}\geq \sqrt{\left(\sum_i a_ix_i\right)^2+ \left(\sum_i a_ih\right)^2}.
$$ Together with the inequality $\sum_i a_ix_i\geq2S$, we get the desired inequality. It is easy to verify the equality condition. \end{proof} \begin{proposition} \label{square-pyramid} Let $P$ be a unit-volume quadrilateral pyramid. Then the surface area of $P$ is greater than or equal to $2^{5/3}3^{2/3}$. Equality holds if and only if $P$ is a right regular pyramid with base of area $2^{-1/3}3^{2/3}$ and height $2^{1/3}3^{1/3}$. \end{proposition} \begin{proof} Let $S$ be the area and $p$ be the perimeter of the base of $P$. Let $h$ be the height of $P$. Since $P$ has unit volume, we have $Sh=3$. Moreover, for given perimeter, the square is the area maximizer among quadrilaterals. Therefore, $p\geq 4\sqrt{S}$. From Lemma \ref{side-surface}, the surface area of $P$ is greater than or equal to $$ S+\frac{1}{2}\sqrt{(2S)^2+p^2h^2}=S+\frac{1}{2}\sqrt{(2S)^2+\frac{9p^2}{S^2}}. $$ Furthermore, we have the following inequalities: $$ S+\frac{1}{2}\sqrt{(2S)^2+\frac{9p^2}{S^2}}\geq S+\frac{1}{2}\sqrt{(2S)^2+\frac{9(16S)}{S^2}} =S+\sqrt{S^2+\frac{36}{S}}. $$ Therefore, it suffices to show that $$ S+\sqrt{S^2+\frac{36}{S}}\geq 2^{5/3}3^{2/3}, $$ or equivalently that $$ S^2+\frac{36}{S} \geq \left(2^{5/3}3^{2/3}-S \right)^2. $$ By direct calculation, this is equivalent to $2^{8/3}3^{2/3}S+36/S \geq 2^{10/3}3^{4/3}$, which follows directly from the AM--GM inequality. It is easy to check the equality condition from the equality conditions of the AM--GM inequality and Lemma \ref{side-surface}. \end{proof} Proposition \ref{optprism} shows that a certain triangular prism has less surface area than the optimal square pyramid and therefore less surface area than any unit-volume quadrilateral pyramid. It follows that the optimal 5-hedral tile must be a combinatorial triangular prism. \begin{proposition} \label{optprism} Let $P$ be the unit-volume right equilateral-triangular prism circumscribed about a sphere and $Q$ be a unit-volume quadrilateral pyramid. Then $P$ has less surface area than $Q$. \end{proposition} \begin{proof} By direct computation, $P$ has base-length $4^{1/3}$ and height $4^{1/3}3^{-1/2}$, and its surface area is $2^{1/3}3^{3/2}$. Therefore, by Proposition \ref{square-pyramid}, the triangular prism has less surface area than any unit-volume quadrilateral pyramid. \end{proof} Before we proceed to the main theorem, we use a linear algebra argument to show that the lines through the side edges of a combinatorial triangular prism are either parallel or concurrent. We then use this lemma in our main theorem. \begin{lemma} \label{combinatorial_triangular_prism_classification} Let $ABC-DEF$ be a combinatorial triangular prism such that $ABC$ and $DEF$ are triangular faces. Then the lines $AD$, $BE$, and $CF$ are either parallel to each other or concur at a point (Fig. \ref{fig:prismlines}). \end{lemma} \begin{proof} Place the prism $ABC-DEF$ in Euclidean space such that $ABC$ lies in the plane $z=0$. Pick vectors $v_1$, $v_2$, and $v_3$ parallel to $\overrightarrow{AD}$, $\overrightarrow{BE}$, and $\overrightarrow{CF}$, respectively, all with $z$-coordinate 1. Consider the vector space $V$ spanned by the vectors $v_1$, $v_2$, and $v_3$. \newline \noindent\textit{(Case 1):} $\dim(V)=1$. \newline Then $v_1,v_2$, and $v_3$ are the same. Therefore $AD$, $BE$, and $CF$ are parallel to each other, as desired. \newline \noindent\textit{(Case 2):} $\dim(V)=2$. \newline Since the vectors $v_1$, $v_2$, and $v_3$ are not all the same, there exists a vector among them that is different from the others.
Without loss of generality, suppose that $v_3$ is different from $v_1$ and $v_2$. Then $v_3$ and $v_1$ span the plane $ACFD$. Hence, $V$ contains the vector $\overrightarrow{AC}$. Similarly, we can show that the vector $\overrightarrow{BC}$ is contained in $V$. Because $\overrightarrow{AC}$, $\overrightarrow{BC}$, and $v_3$ are linearly independent, $\dim(V)=3$, a contradiction. \newline \noindent\textit{(Case 3):} $\dim(V)=3$. \newline It follows that $v_1,v_2$, and $v_3$ are linearly independent. Since $v_2$ and $v_3$ span the plane $BCFE$, there exists a real number $\alpha_1$ such that $\overrightarrow{BC}=\alpha_1(v_2-v_3)$. Similarly, there exist real numbers $\alpha_2$ and $\alpha_3$ such that $\overrightarrow{CA}=\alpha_2(v_3-v_1)$ and $\overrightarrow{AB}=\alpha_3(v_1-v_2)$. Taking the sum of these equations, we have $$ (\alpha_3-\alpha_2)v_1+(\alpha_1-\alpha_3)v_2+(\alpha_2-\alpha_1)v_3=0. $$ Since $v_1,v_2$, and $v_3$ are linearly independent, $\alpha_1=\alpha_2=\alpha_3(=:\alpha)$. It follows that $$ A+\alpha v_1=B+\alpha v_2=C+\alpha v_3. $$ Therefore, the lines $AD$, $BE$, and $CF$ meet at a point. \end{proof} \begin{figure} \centering \includegraphics[scale=0.7]{prismlines.png} \caption{In a combinatorial triangular prism, the lines $AD$, $BE$, and $CF$ are either parallel to each other or concur at a point.} \label{fig:prismlines} \end{figure} Lorenz Lindel\"{o}f \cite{lind} proved that a surface-area-minimizing $n$-hedron is circumscribed about a sphere, with each face tangent at its centroid. See the beautiful survey by Florian \cite[pp. 174-180]{florian} and \cite[Prop. 3.1]{pen11}, written before we knew about Lindel\"{o}f's work. For a given combinatorial type, in order to find the surface-area-minimizing polyhedron of that type, it is usually enough to make sure that it satisfies Lindel\"{o}f's condition. We prove that the right equilateral-triangular prism minimizes surface area among unit-volume 5-hedra by showing that if a 5-hedron satisfies Lindel\"{o}f's condition, then the only possibility is the right equilateral-triangular prism. \begin{theorem}[Lindel\"{o}f Theorem \cite{lind}.] \label{linde} A necessary condition for a polyhedron $P$ to be the surface-area-minimizing polyhedron is that $P$ circumscribes a sphere and that the inscribed sphere is tangent to all the faces of $P$ at their respective centroids. \end{theorem} \begin{theorem} \label{bestfivepoly} The right equilateral-triangular prism circumscribed about a sphere minimizes surface area among unit-volume 5-hedra. \end{theorem} \begin{proof} A surface-area-minimizing 5-hedron $X$ exists \cite{mink}. By Proposition \ref{combinatorial_prism_face_3}, we may assume that it is nondegenerate. By Lindel\"{o}f's Theorem \cite{lind}, $X$ is circumscribed about a sphere tangent to each face of $X$ at its centroid. By Proposition \ref{optprism}, $X$ cannot be a square pyramid; therefore, by Proposition \ref{fivefaceopt}, $X$ is a combinatorial triangular prism. Define $ABC$ and $DEF$ as the triangular bases of $X$ and $AD$, $BE$, and $CF$ as the remaining edges. To simplify notation, we refer to the bases $ABC$ and $DEF$ as $B_1$ and $B_2$, respectively, and to the three quadrilateral faces $ABED$, $BCFE$, and $CADF$ as $Q_3$, $Q_4$, and $Q_5$, respectively (Fig. \ref{fig:tprism}).
\begin{figure} \centering \includegraphics[scale=0.5]{tprism.png}\\ \caption{By Proposition \ref{optprism}, the surface-area-minimizing 5-hedron $X$ cannot be a square pyramid; therefore, by Proposition \ref{fivefaceopt}, $X$ is a combinatorial triangular prism.} \label{fig:tprism} \includegraphics[scale=0.7]{Lindeloff.png} \caption{The right equilateral-triangular prism circumscribed about a sphere tangent to each face at its centroid minimizes surface area among unit-volume 5-hedra.} \label{fig:besttprism} \end{figure} Let $O$ be the center of a sphere inscribed in $X$. Let $T_1, T_2, T_3, T_4$, and $T_5$ be the points of tangency between the sphere and the faces $B_1, B_2, Q_3, Q_4$, and $Q_5$, respectively. Finally, let $M_1$, $M_2$, and $M_3$ be the midpoints of $AD$, $BE$, and $CF$, respectively. Place $X$ in Euclidean space such that $O$ is at the origin (Fig. \ref{fig:besttprism}). \newline \newline \noindent \textbf{(Step 1)} The midpoint of $T_1T_2$ is the centroid of $T_3T_4T_5$. \newline \newline This follows from the observation that both of them equal the average of the six vertices of $X$. \newline \newline \noindent \textbf{(Step 2)} The quadrilaterals $M_1T_3T_4T_5$, $M_2T_3T_5T_4$, and $M_3T_4T_3T_5$ are parallelograms. \newline \newline Since $T_3$ is the centroid of $Q_3$, we have that $M_1+M_2=2T_3$. Similarly, we have $M_2+M_3=2T_4$ and $M_3+M_1=2T_5$. Solving this linear system for $M_1$, $M_2$, and $M_3$, we have $M_1=T_5+T_3-T_4$, $M_2=T_3+T_4-T_5$, and $M_3=T_4+T_5-T_3$, as desired. \newline \newline \noindent \textbf{(Step 3)} $T_3T_4T_5$ is an equilateral triangle. \newline \newline Observe that the face $BCFE$ is perpendicular to the line $OT_4$. Therefore, $\overrightarrow{OT_4} \cdot \overrightarrow{M_2M_3} =0$. Additionally, from \textbf{(Step 2)}, we have $\overrightarrow{M_2M_3}=2\overrightarrow{T_3T_5}$. Hence $\overrightarrow{OT_4} \cdot \overrightarrow{T_3T_5}=0$. This is equivalent to $\overrightarrow{OT_4} \cdot \overrightarrow{OT_5} = \overrightarrow{OT_4} \cdot \overrightarrow{OT_3}$. Together with the fact that $|OT_5|=|OT_3|$, we have that $|T_4T_5|= |T_3T_4|$. Similarly, we can show that $|T_4T_5| = |T_3T_5|$. Therefore, $T_3T_4T_5$ is an equilateral triangle. \newline \newline \noindent \textbf{(Step 4)} $X$ is the right equilateral-triangular prism circumscribed about a sphere. \newline \newline By Lemma \ref{combinatorial_triangular_prism_classification}, $AD$, $BE$, and $CF$ are parallel to each other or they concur at a point. \newline \noindent \textit{(Case 1)}: $AD$, $BE$, and $CF$ are parallel to each other. \newline We orient $X$ such that $AD$, $BE$, and $CF$ are parallel to the $z$-axis and $O$ is at the origin. Define $\pi: \mathbb{R}^{3} \rightarrow \mathbb{R}^{2}$ to be the projection onto the $xy$-plane, and for a point $p$ let $z(p)$ denote its $z$-component. First, observe that the tangent planes of the sphere at the points $T_3, T_4$, and $T_5$ are parallel to the $z$-axis. It follows that $z(T_3)=z(T_4)=z(T_5)=0$, so $T_3,T_4$, and $T_5$ lie in the $xy$-plane. Then, by \textbf{(Step 3)}, the centroid of $T_3T_4T_5$ is the origin $O$. It follows, by \textbf{(Step 2)}, that the centroid of $M_1M_2M_3$ is also the origin. Because projection is linear, it preserves centroids. Since the triangle $\pi(A)\pi(B)\pi(C)$ coincides with $\pi(M_1)\pi(M_2)\pi(M_3)$, $\pi(T_1)$ is the origin $O$. Similarly, $\pi(T_2)$ is $O$. Therefore, $B_1$ and $B_2$ are perpendicular to the lines $AD$, $BE$, and $CF$.
This implies that $B_1$, $B_2$, and the triangle $M_1M_2M_3$ are congruent to each other. From \textbf{(Step 2)} and \textbf{(Step 3)}, the triangle $M_1M_2M_3$ is equilateral. Then $B_1$ and $B_2$ are also equilateral. Hence, $X$ is the unit-volume right equilateral-triangular prism circumscribed about a sphere. \newline \noindent \textit{(Case 2)}: $AD$, $BE$, and $CF$ concur at a point. \newline We now orient $X$ such that $T_3T_4T_5$ is parallel to the $xy$-plane and $O$ is at the origin. Since $T_3T_4T_5$ is an equilateral triangle, the projection of $T_1$ onto the $xy$-plane is the origin $O$. By \textbf{(Step 1)}, the midpoint of $T_1T_2$ also projects to the origin of the $xy$-plane. From the assumption of this case, $AD$, $BE$, and $CF$ are not parallel to the $z$-axis. Therefore, the plane containing $T_3T_4T_5$ does not contain the origin. Hence, the distances from the plane containing $T_3T_4T_5$ to $T_1$ and to $T_2$ are different. Therefore, we deduce that $|OT_1| \neq |OT_2|$, a contradiction. It follows that this case is impossible. \end{proof} \begin{corollary} \label{triprismtile} The right equilateral-triangular prism circumscribed about a sphere, having base-length $4^{1/3}$ and height $4^{1/3}3^{-1/2}$, is the surface-area-minimizing 5-hedral tile. \end{corollary} \begin{proof} Since the prism is surface-area-minimizing by Theorem \ref{bestfivepoly} and is a tile, it gives the surface-area-minimizing tiling. \end{proof} \begin{remark} \emph{Since the equilateral triangle is the perimeter-minimizing triangle of given area, Corollary \ref{triprismtile} also follows directly from Proposition \ref{montile}.} \end{remark} \bibliographystyle{abbrv}
\subsection{Characterization of the optimal damping rule} \label{damping_section} This section motivates and proposes a heuristic rule for the choice of the damping parameters $\mathbf{R}$ that can accelerate the convergence of the numerical quadrature in Fourier space when approximating \eqref{integrand} for pricing multi-asset options under the considered pricing models and for various parameter values. The main idea is to establish a connection between the damping parameter values, the integrand properties, and the quadrature error. Before considering the integral of interest \eqref{integrand}, we provide the general motivation for the rule through a simple 1D integration example for a real-valued function $f$ w.r.t.~a weight function $\lambda(\cdot)$ over the support interval $[a, b]$ (finite, half-infinite, or doubly infinite): \begin{equation}\label{eq:simple_integration_problem} I(f):=\int_{a}^{b} f(x) \lambda(x) dx \approx \sum_{k=1}^{N} w_k f(x_k):=Q_N(f), \end{equation} where the quadrature estimator $Q_N(f)$ is characterized by the nodes $\{x_k\}_{k=1}^N$, which are the roots of the corresponding orthogonal polynomial $\pi_N(x)$, and by the associated quadrature weights $\{w_k\}_{k=1}^N$. Moreover, $ \mathcal{E}_{Q_N}(f)$ denotes the quadrature error (remainder), defined as $\mathcal{E}_{Q_N}(f):= I(f)-Q_N(f)$. The analysis of the quadrature error can be performed through two representations. The first relies on estimates based on high-order derivatives of a smooth function $f$ \cite{gautschi2004orthogonal,davis2007methods,trefethen2008gauss,xiang2012asymptotics}. These error representations are of limited practical use because high-order derivatives are usually challenging to estimate and control, particularly in relation to the damping parameters in this context, as an overly complex rule for choosing these parameters would result. For this reason, to derive our rule, we opt for the second form of quadrature error representation, valid for functions that can be extended holomorphically into the complex plane, which corresponds to the case in \eqref{integrand}. Several approaches exist for estimating the error $\mathcal{E}_{Q_N}(f)$ when $f$ is holomorphic: (i) methods of contour integration \cite{takahasi1971estimation,donaldson1972unified}, (ii) methods based on Hilbert space norm estimates \cite{davis1954estimation,donaldson1973estimates}, which consider $ \mathcal{E}_{Q_N}$ as a linear functional acting on $f$, and (iii) methods based on approximation theory \cite{babuvska2007stochastic,trefethen2008gauss}. Independent of the approach, the results are often comparable because the error bounds involve the supremum norm of $f$. We focus on error estimates based on contour integration tools to showcase these error bounds.\footnote{This approach uses Cauchy's theorem in the theory of complex variables to express the value of an analytic function at some point $z$ by means of a contour integral (Cauchy integral) extended over a simple closed curve (or open arc) in the complex plane encircling the point $z$.} We assume that the function $f$ can be analytically extended into a sizable region of the complex plane containing the interval $[a, b]$ and free of singularities. Then we have the following result.
\begin{theorem}\label{thm:remainder_integration_estimates} The quadrature error in the approximation \eqref{eq:simple_integration_problem} can be expressed as \begin{equation}\label{eq:integ_error_contour} \mathcal{E}_{Q_N}(f)=\frac{1}{2 \pi i} \oint_{\mathcal{C}} K_N(z) f(z) dz, \end{equation} where \begin{equation}\label{eq:def Psi} K_N(z) = \frac{H_N(z)}{\pi_N(z)}, \quad H_N(z)=\int_{a}^{b} \lambda(x) \frac{\pi_N(z)}{z-x} dx, \end{equation} and $\mathcal{C}$ is a contour\footnote{Two choices of $\mathcal{C}$ are most frequently made: $\mathcal{C}=\mathcal{C}_r$, the circle $|z|= r$, $r > 1$, and $\mathcal{C} = \mathcal{C}_\rho$, the ellipse with foci at $a$ and $b$ whose sum of semiaxes equals $\rho$, $\rho > 1$. Circles can only be used if the analyticity domain is sufficiently large, and ellipses have the advantage of shrinking to the interval $[a, b]$ when $\rho \rightarrow 1$, making them suitable for dealing with functions that are analytic on the segment $[a, b]$.} containing the interval $[a, b]$ within which $f(z)$ has no singularities. \end{theorem} \begin{proof} We refer to \cite{donaldson1972unified,gautschi2004orthogonal} for a proof of Theorem \ref{thm:remainder_integration_estimates}. \end{proof} In the finite case, the contour $\mathcal{C}$ is closed and \eqref{eq:def Psi} represents an analytic function in the connected domain $\mathbb{C}\setminus [a, b]$, whereas in the infinite case we may take $\mathcal{C}$ to lie along the upper and lower edges of the real axis for large $|x|$. Discussions on choosing adequate contours are found in \cite{elliott1970uniform,donaldson1972unified,donaldson1973estimates}. Moreover, precise estimates of $H_N(z)$ were derived in \cite{donaldson1972unified,elliott1974asymptotic}. As $f(\cdot)$ has no singularities within $\mathcal{C}$, using Theorem \ref{thm:remainder_integration_estimates}, we obtain \begin{equation}\label{eq:error_bound_estimate} | \mathcal{E}_{Q_N}(f)| \le \frac{1}{2 \pi } \; \underset{z \in \mathcal{C}}{\max} | f(z)| \oint_{\mathcal{C}} |K_N(z)| |dz|, \end{equation} where the quantity $\oint_{\mathcal{C}} |K_N(z)| |dz|$ depends only on the quadrature rule. We expect that when the size of the contour increases, $\oint_{\mathcal{C}} |K_N(z)| |dz|$ decreases, whereas $ \underset{z \in \mathcal{C}}{\max} | f(z)| $ increases by the maximum modulus theorem. The optimal choice of the contour $\mathcal{C}$ is the one that minimizes the right-hand side of \eqref{eq:error_bound_estimate}. The error bound \eqref{eq:error_bound_estimate} extends straightforwardly to the multidimensional setting using tensorization arguments. Moreover, the dependence of the upper bound on $\|f\|_{\infty}$ is independent of the quadrature method. Therefore, motivated by the error bound \eqref{eq:error_bound_estimate}, we propose a heuristic rule for choosing the damping parameters that improves the numerical convergence of the designed quadrature method (see Section \ref{section_det_quad}) when approximating \eqref{integrand}. The rule consists in solving the following constrained optimization problem: \begin{equation}\label{or_opt} \mathbf{R}^\ast:= \mathbf{R}^\ast (\boldsymbol{\Theta}_{m},\boldsymbol{\Theta}_{p})= \underset{\mathbf{R}\in \delta_V } {\arg \min} \; \|g( \mathbf{u};\mathbf{R},\boldsymbol{\Theta}_{m},\boldsymbol{\Theta}_{p})\|_{\infty}, \end{equation} where $\mathbf{R}^\ast:=(R_1^\ast,\ldots,R_d^\ast)$ denotes the vector of optimal damping parameters.
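In practice, \eqref{or_opt} can be handed to any off-the-shelf constrained optimizer. The following minimal Python sketch illustrates one way to do this with SciPy's interior-point-type solver; the callable \texttt{integrand\_at\_origin} is a hypothetical stand-in for the map $\mathbf{R}\mapsto g(\mathbf{0};\mathbf{R},\boldsymbol{\Theta}_{m},\boldsymbol{\Theta}_{p})$ exploited in the next paragraph, and the box bounds are only a placeholder for the model-dependent admissible set $\delta_V$.

\begin{verbatim}
import numpy as np
from scipy.optimize import Bounds, minimize

def integrand_at_origin(R):
    # Hypothetical stand-in for R -> g(0; R, Theta_m, Theta_p); replace it with
    # the model-specific Fourier integrand evaluated at u = 0.
    return np.exp(0.5 * np.sum((R + 2.0) ** 2))

d = 2                                        # number of assets
R0 = -np.ones(d)                             # initial guess inside the admissible set
bounds = Bounds(-4.0 * np.ones(d), -0.5 * np.ones(d))   # placeholder for delta_V

result = minimize(integrand_at_origin, R0, method="trust-constr",
                  bounds=bounds, options={"xtol": 1e-6})
R_bar = result.x                             # approximation of the optimal damping
print(R_bar, result.fun)
\end{verbatim}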
In our setting, the integrand defined in \eqref{integrand} attains its maximum at the origin $ \mathbf{u}= \mathbf{0}_{\mathbb{R}^d}$; thus, solving \eqref{or_opt} reduces to the simpler optimization problem \begin{equation}\label{new_opt} \mathbf{R}^\ast = \underset{\mathbf{R}\in \delta_V } {\arg \min} \;g(\mathbf{0}_{\mathbb{R}^d};\mathbf{R},\boldsymbol{\Theta}_{m},\boldsymbol{\Theta}_{p}). \end{equation} Problem \eqref{new_opt} cannot, in general, be solved analytically, especially in high dimensions; therefore, we solve it numerically, approximating $\mathbf{R}^\ast$ by $\bar{\mathbf{R}}=(\bar{R}_1,\ldots,\bar{R}_d)$. In this work, we used an interior-point method with a tolerance of order $ 10^{-6}$. Our numerical investigation across different models and parameters (for illustration, we refer to Figures \ref{1d_put_gbm_damping}, \ref{1d_put_vg_damping}, and \ref{1d_put_nig_damping} for the single put option) confirmed that the damping parameters have a considerable effect on the properties of the integrand, particularly its peak, tail-heaviness, and oscillatory behavior. In particular, we observed that the damping parameters that produce the lowest peak of the integrand around the origin are associated with faster convergence of the relative quadrature error than other damping parameters. Moreover, we observed that highly peaked integrands are more likely to oscillate, implying deteriorated convergence of the numerical quadrature. Independently of the quadrature method (see Section \ref{section_det_quad}), this observation was consistent across several parameter constellations under the three tested pricing dynamics, GBM, VG, and NIG, and across different dimensions of the basket put and rainbow options. Section \ref{num_high_dim_damping_sec} illustrates the computational advantage of the optimal damping rule for the error convergence of the multi-asset basket put and call-on-min options under different models. \begin{remark} The $d$-dimensional optimization problem \eqref{new_opt} simplifies further to a 1D problem when the integrand is isotropic. \end{remark} \begin{remark} Other rules for choosing the damping parameters can be investigated to improve the numerical convergence of quadrature methods. For instance, one can account for additional features, such as (i) the distance of the damping parameters to the poles, which affects the choice of the integration contour in \eqref{eq:integ_error_contour}, or (ii) the control of the regularity of the integrand via high-order derivative estimates. However, we expect such rules to be more complicated and computationally expensive (e.g., requiring the evaluation of gradients of the integrand). We leave the investigation of such rules for future work. \end{remark} \begin{figure}[h!] \centering \begin{subfigure}{0.8\textwidth} \includegraphics[width=\linewidth]{1d_put_gbm_damping.eps} \caption{$S_0=100, K=100, r=0 \%, T=1,\sigma = 0.4$} \label{1d_put_gbm_damping} \end{subfigure} \begin{subfigure}{0.8\textwidth} \includegraphics[width=\linewidth]{1d_put_vg_damping.eps} \caption{$S_0=100, K=100, r=0 \%, T=1,\sigma = 0.4, \theta = -0.3, \nu= 0.257$} \label{1d_put_vg_damping} \end{subfigure} \begin{subfigure}{0.8\textwidth} \includegraphics[width=\linewidth]{1d_put_nig_damping.eps} \caption{$S_0=100, K=100, r=0 \%, T=1,\alpha = 10, \beta= -3, \delta= 0.2$} \label{1d_put_nig_damping} \end{subfigure} \caption{1D illustration: (Left) Shape of the integrand w.r.t.~the damping parameter, $R$.
(Right) $\mathcal{E}_{R}$ convergence w.r.t.~$N$, using Gauss--Laguerre quadrature for the European put option under (a) GBM, (b) VG, and (c) NIG pricing models. The relative quadrature error $\mathcal{E}_{R} $ is defined as $ \mathcal{E}_{R} = \frac{ \mid Q_{N}[g] - \text{Reference Value} \mid }{ \text{Reference Value}}$, where $Q_N$ is the quadrature estimator of \eqref{integrand} based on the Gauss--Laguerre rule.} \label{fig: (Left) Shape of the integrand w.r.t R, (Right) convergence w.r.t , using Gauss-Laguerre Quadrature for European put option under (a) GBM (b) VG (c) NIG pricing models.} \end{figure} \subsection{Numerical evaluation of the inverse Fourier integrals using hierarchical deterministic quadrature methods} \label{section_det_quad} We aim to approximate \eqref{integrand} efficiently using a tensorization of quadrature formulas over $\mathbb{R}^d$. When using Fourier transforms for option pricing, the standard numerical approach truncates and discretizes the integration domain and uses the FFT based on bounded quadrature formulas, such as the trapezoidal rule. This approach is efficient in the 1D setting, as the estimation of the truncation intervals, based for instance on the cumulants, has been widely covered in the literature; it remains affordable even when an inappropriate choice of the truncation parameters incurs additional cost. However, this is not the case in the multidimensional setting, because determining the truncation parameters becomes more challenging. Moreover, the truncation errors depend nontrivially on the damping parameter values, and choosing larger-than-necessary truncation domains leads to a significant increase in the computational effort in higher dimensions. For this reason, we use the DI approach with Gaussian quadrature rules. Moreover, our numerical investigation (see Appendix \ref{num_lag_herm_sec}) suggests that Gauss--Laguerre quadrature exhibits faster convergence than the Gauss--Hermite rule. Therefore, we used Laguerre quadrature on semi-infinite domains after applying the necessary transformations. Before defining the multivariate quadrature estimators, we first introduce the notation in the univariate setting. In the following, $\beta$ denotes a non-negative integer, referred to as the ``discretization level,'' and $m: \mathbb{N} \rightarrow \mathbb{N}$ is a strictly increasing function with $m(0)=0$ and $m(1)=1$, called the ``level-to-nodes function.'' At each level $\beta$, we consider a set of $m(\beta)$ distinct quadrature points $ \mathcal{H}^{m(\beta)}=\left\{x_{\beta}^{1}, x_{\beta}^{2}, \ldots, x_{\beta}^{m(\beta)}\right\} \subset \mathbb{R}$ and a set of quadrature weights $\boldsymbol{\omega}^{m(\beta)}=\left\{\omega_{\beta}^{1}, \omega_{\beta}^{2}, \ldots, \omega_{\beta}^{m(\beta)}\right\} .$ We also let $C^{0}(\mathbb{R})$ be the space of real-valued continuous functions over $\mathbb{R}$. We define the univariate quadrature operator applied to a function $f \in C^{0}(\mathbb{R}) $ as follows: \begin{equation*} Q^{m(\beta)}: C^{0}(\mathbb{R}) \rightarrow \mathbb{R}, \quad Q^{m(\beta)}[f]:=\sum_{j=1}^{m(\beta)} f\left(x_{\beta}^{j}\right) \omega_{\beta}^{j} \text { . } \end{equation*} In our case, in \eqref{integrand}, we have a multivariate integration problem of $g$ over $\mathbb{R}^{d}$.
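Before turning to the multivariate construction, we note that the univariate building block above is available off the shelf. The following minimal Python sketch (for illustration only) uses NumPy's Gauss--Laguerre nodes and weights and compensates for the built-in weight $e^{-x}$, as a simple stand-in for the transformations mentioned above; the test integrand is a toy function with a known integral.

\begin{verbatim}
import numpy as np

def gauss_laguerre_integral(h, n):
    # n-point Gauss-Laguerre approximation of int_0^infinity h(x) dx.
    # laggauss targets the weight e^{-x}, so we multiply the weights by e^{+x}.
    x, w = np.polynomial.laguerre.laggauss(n)
    return float(np.sum(w * np.exp(x) * h(x)))

# Toy example: int_0^infinity e^{-x^2} dx = sqrt(pi)/2 ~ 0.8862269
for n in (5, 10, 20):
    print(n, gauss_laguerre_integral(lambda x: np.exp(-x ** 2), n))
\end{verbatim}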
Accordingly, for a multi-index $\boldsymbol{\beta}=\left(\beta_{i}\right)_{i=1}^{d} \in \mathbb{N}^{d}$, the $d$-dimensional quadrature operator applied to $g$ is defined as\footnote{The $i$-th univariate quadrature operator acts only on the $i$-th variable of $g$.} \begin{align*} &Q_d^{m(\boldsymbol{\beta})}: C^{0}\left(\mathbb{R}^{d}\right) \rightarrow \mathbb{R}, \quad Q_d^{m(\boldsymbol{\beta})}=\bigotimes_{i=1}^{d} Q^{m\left(\beta_{i}\right)},\nonumber\\ &Q_d^{m(\boldsymbol{\beta})}[g]=\sum_{j=1}^{\# \mathcal{T}^{m(\boldsymbol{\beta})}} g\left(\widehat{x}_{j}\right) \bar{\omega}_{j}, \end{align*} where $\widehat{x}_{j} \in \mathcal{T}^{m(\boldsymbol{\beta})}:=\prod_{i=1}^{d} \mathcal{H}^{m\left(\beta_{i}\right)}$ (with cardinality $ \# \mathcal{T}^{m(\boldsymbol{\beta})}= \prod_{i=1}^{d} m\left(\beta_{i}\right) $\footnote{$m( \beta_i) =N_i$ is the number of quadrature points in the direction of $x_i$.}), and $\bar{\omega}_{j}$ is the corresponding product of univariate quadrature weights. To simplify the notation, we replace $Q_d^{m(\boldsymbol{\beta})}$ with $Q_d^{\boldsymbol{\beta}}$. We define the first-order difference operators $\Delta_{i} Q_{d}^{\boldsymbol{\beta}}$ for $i \in \{1,\ldots,d\}$ as follows: \begin{equation} \Delta_{i} Q_{d}^{\boldsymbol{\beta}}:=\left\{\begin{array}{l} Q_{d}^{\boldsymbol{\beta}}-Q_{d}^{\boldsymbol{\beta}^{\prime}}, \text { with } \; \boldsymbol{\beta}^{\prime}=\boldsymbol{\beta}-\mathbf{e}_{i}, \text { when } \beta_{i}>0, \\ Q_{d}^{\boldsymbol{\beta}}, \quad \text { otherwise, } \end{array}\right. \end{equation} where $\mathbf{e}_{i}$ denotes the $i$th $d$-dimensional unit vector. Then, using the telescoping property, the quadrature estimator, defined w.r.t.~a choice of the set of multi-indices $\mathcal{I} \subset \mathbb{N}^{d}$, is expressed as\footnote{For instance, when $d=2$, then $ \small { \Delta Q_{2}^{\boldsymbol{\beta}}=\Delta_{2} \Delta_{1} Q_{2}^{\left(\beta_{1}, \beta_{2}\right)} =Q_{2}^{\left(\beta_{1}, \beta_{2}\right)}-Q_{2}^{\left(\beta_{1}, \beta_{2}-1\right)}-Q_{2}^{\left(\beta_{1}-1, \beta_{2}\right)}+Q_{2}^{\left(\beta_{1}-1, \beta_{2}-1\right)}}$.}$^{,\thinspace}$\footnote{ To ensure the validity of the telescoping sum expansion, the index set $\mathcal{I}$ must satisfy the admissibility condition (\ie, $ \boldsymbol{\beta}\in \mathcal{I}, \boldsymbol{\alpha} \leq \boldsymbol{\beta} \Rightarrow \boldsymbol{\alpha} \in \mathcal{I},\text{ where} \: \boldsymbol{\alpha} \leq \boldsymbol{\beta} \: \text{is defined as} \: \alpha_i \leq \beta_i, i = 1,\ldots,d$).} \begin{equation} \label{quad_estimate} Q_{d}^{\mathcal{I}}=\sum_{\boldsymbol{\beta} \in \mathcal{I}} \Delta Q_{d}^{\boldsymbol{\beta}}, \quad \text{with} \: \Delta Q_{d}^{\boldsymbol{\beta}}=\left(\bigotimes_{i=1}^{d} \Delta_{i}\right) Q_{d}^{\boldsymbol{\beta}}, \end{equation} and the quadrature error can be written as \begin{equation}\label{eq: quad_error} \mathcal{E}_{Q}=\left|Q^{\infty}_d[g]-Q_d^{\mathcal{I}}[g]\right| \leq \sum_{\boldsymbol{\beta} \in \mathbb{N}^{d} \setminus \mathcal{I} }\left|\Delta Q_d^{\boldsymbol{\beta}}[g]\right|, \end{equation} where \begin{equation*} \label{quad_op} Q_{d}^{\infty}:=\sum_{\beta_{1}=0}^{\infty} \cdots \sum_{\beta_{d}=0}^{\infty} \Delta Q_{d}^{\left(\beta_{1}, \ldots, \beta_{d}\right)}=\sum_{\boldsymbol{\beta} \in \mathbb{N}^d} \Delta Q_{d}^{\boldsymbol{\beta}}.
\end{equation*} In Equation (\ref{quad_estimate}), the choice of (i) the strategy for the construction of the index set $\mathcal{I}$ and (ii) the hierarchy of quadrature points determined by $m(\cdot)$ defines different hierarchical quadrature methods. Table \ref{index_set_table} presents the details of the methods considered in this work. \begin{table}[h!] \centering \begin{tabular}{| p{3.87cm} | p{5.55cm} | p{6.6cm} |} \hline \textbf{Quadrature Method} & $m(\cdot)$ & $\mathcal{I}$ \\ \hline Tensor Product (TP)& $m(\beta)=\beta$ & \small $ \mathcal{I^{\text{TP}}}(l) = \{\boldsymbol{\beta} \in\mathbb{N}^d: \;\; \max_{1 \le i \le d}(\beta_i-1) \leq l\}$ \\ \hline Smolyak (SM) Sparse Grids & $ m(\beta)= 2^{\beta-1}+1,\, \beta>1,m(1) =1 $ & \small $\mathcal{I^{\text{SM}}}(l)=\{\boldsymbol{\beta} \in\mathbb{N}^d: \;\; \sum_{1 \le i \le d}(\beta_i-1) \leq l\}$ \\ \hline Adaptive Sparse Grid Quadrature (ASGQ) & $ m(\beta )= 2^{\beta-1}+1,\, \beta>1,m(1) =1 $ & \small $\mathcal{I^{\text{ASGQ}}}=\left\{\boldsymbol{\beta} \in \mathbb{N}_{+}^{d}: P_{\boldsymbol{\beta}} \geq \bar{T}\right\}$ \newline(see \eqref{profit_rule} and \eqref{error_contr})\\ \hline \end{tabular} \caption{Construction details for the quadrature methods. $l \in \mathbb{N}$ represents a given level. $\bar{T} \in \rset$ is a threshold value.} \label{index_set_table} \end{table} In many situations, the tensor product (TP) estimator can become rapidly unaffordable because the number of function evaluations increases exponentially with the problem dimensionality, known as the \emph{curse of dimensionality}. We use Smolyak (SM) and ASGQ methods based on sparsification and dimension-adaptivity techniques to overcome this issue. For both TP and SM methods, the construction of the index set is performed a priori. However, ASGQ allows for the a posteriori and adaptive construction of the index set $\mathcal{I}$ by greedily exploiting the mixed regularity of the integrand during the actual computation of the quantity of interest. The construction of $\mathcal{I^{\text{ASGQ}}}$ is performed through profit thresholding, where new indices are selected iteratively based on the error versus cost-profit rule, with a hierarchical surplus defined by \begin{equation} \label{profit_rule} P_{\boldsymbol{\beta}}=\frac{\left|\Delta E_{\boldsymbol{\beta}}\right|}{\Delta \mathcal{W}_{\boldsymbol{\beta}}}, \end{equation} where $\Delta \mathcal{W}_{\boldsymbol{\beta}}$ is the work contribution (\ie, the computational cost required to add $\Delta Q_{d}^{\boldsymbol{\beta}}$ to $Q_{d}^{\mathcal{I^{\text{ASGQ}}}}$) and $\Delta E_{\boldsymbol{\beta}}$ is the error contribution (\ie, a measure of how much the quadrature error would decrease once $\Delta Q_{d}^{\boldsymbol{\beta}}$ has been added to $Q_{d}^{\mathcal{I^{\text{ASGQ}}}}$): \begin{align} \label{error_contr} \Delta E_{\boldsymbol{\beta}} &=\left|Q_{d}^{\mathcal{I^{\text{ASGQ}}} \cup\{\boldsymbol{\beta}\}}[g]-Q_{d}^{\mathcal{I^{\text{ASGQ}}}}[g]\right|\\ \Delta \mathcal{W}_{\boldsymbol{\beta}} &=\operatorname{Work}\left[Q_{d}^{\mathcal{I^{\text{ASGQ}}}\cup\{\boldsymbol{\beta}\}}[g]\right]- \operatorname{Work}\left[Q_{d}^{\mathcal{I^{\text{ASGQ}}}}[g]\right]. \nonumber \end{align} The convergence speed for all quadrature methods in this work is determined by the behavior of the quadrature error defined in \eqref{eq: quad_error}. 
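To make the profit-thresholding construction of $\mathcal{I^{\text{ASGQ}}}$ more concrete, the following simplified Python sketch (ours, in the spirit of the dimension-adaptive algorithm of \cite{Gerstner2003DimensionAdaptiveTQ}) builds an admissible index set greedily. The callables \texttt{delta\_value} and \texttt{delta\_work}, standing for estimates of $|\Delta E_{\boldsymbol\beta}|$ and $\Delta \mathcal{W}_{\boldsymbol\beta}$, and the function name itself are placeholders, not part of the implementation used for the experiments below.
\begin{verbatim}
# Simplified, illustrative sketch of the greedy (profit-based) construction of
# the ASGQ index set I; delta_value(beta) ~ |Delta E_beta| and
# delta_work(beta) ~ Delta W_beta are user-supplied placeholders.
def asgq_index_set(d, delta_value, delta_work, threshold, max_iter=10000):
    accepted = set()                              # indices already accepted into I
    active = {(1,) * d}                           # candidate indices
    profit = {(1,) * d: delta_value((1,) * d) / delta_work((1,) * d)}
    for _ in range(max_iter):
        beta = max(active, key=profit.get)        # most profitable candidate
        if profit[beta] < threshold:              # profit thresholding, P_beta >= T
            break
        active.remove(beta)
        accepted.add(beta)
        for i in range(d):                        # forward neighbours beta + e_i
            nb = tuple(b + (j == i) for j, b in enumerate(beta))
            admissible = all(nb[k] == 1 or
                             tuple(b - (j == k) for j, b in enumerate(nb)) in accepted
                             for k in range(d))   # keep the index set downward closed
            if admissible and nb not in active:
                active.add(nb)
                profit[nb] = delta_value(nb) / delta_work(nb)
    return accepted | active

# toy usage with a synthetic, rapidly decaying surrogate for |Delta E_beta|
I = asgq_index_set(d=2,
                   delta_value=lambda b: 4.0 ** (-sum(b)),
                   delta_work=lambda b: float(sum(b)),
                   threshold=1e-6)
print(sorted(I))
\end{verbatim}
In the method described above, the error and work contributions are, of course, the ones defined in \eqref{error_contr} rather than the synthetic surrogates used in this toy call.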
In this context, given the model and option parameters, the convergence rate depends on the damping parameter values, which control the regularity of the integrand $g$ in the Fourier space (see \eqref{integrand}). We let $N:= \prod_{i=1}^{d} m\left(\beta_{i}\right)$ denote the total number of quadrature points used by each method. For the TP method, we have the following \cite{davis2007methods}: \begin{equation} \label{eq:error_bound_TP} \mathcal{E}^{\text{TP}}_{Q}\left(N;\mathbf{R}\right)=\mathcal{O}\left(N^{- \frac{r_t}{d}}\right) \end{equation} for functions with bounded total derivatives up to order $r_t := r_t(\mathbf{R})$. When using SM sparse grids (not adaptive), we obtain the following \cite{smolyak1963quadrature,WASILKOWSKI19951,gerstner1998numerical,barthelmann2000high}: \begin{equation} \label{eq:error_bound_SM} \mathcal{E}^{\text{SM}}_{Q}\left(N ;\mathbf{R}\right)=\mathcal{O}\left(N^{-r_m}\left(\log N\right)^{(d-1)(r_m+1)}\right) \end{equation} for functions with bounded mixed partial derivatives up to order $r_m:=r_m(\mathbf{R})$. Moreover, it was observed in \cite{Gerstner2003DimensionAdaptiveTQ} that the convergence is even spectral for analytic functions ($r_m \rightarrow +\infty$). For the ASGQ method, we achieve \begin{equation} \label{eq:error_bound_ASGQ} \mathcal{E}^{\text{ASGQ}}_{Q}\left(N;\mathbf{R}\right)=\mathcal{O}\left(N^{-r_w}\right) \end{equation} for functions with bounded weighted mixed derivatives up to order $r_w:=r_w(\mathbf{R})$. In \eqref{eq:error_bound_TP}, \eqref{eq:error_bound_SM}, and \eqref{eq:error_bound_ASGQ}, we emphasize the dependence of the convergence rates on the damping parameters $\mathbf{R}$; this dependence arises because these parameters control the regularity of the integrand in the Fourier space. Moreover, our optimized choice of $\mathbf{R}$ is used not only to increase the number of bounded derivatives but also to reduce the bounds on these derivatives. \subsection{Combining the optimal damping heuristic rule with hierarchical deterministic quadrature methods} \label{num_quad_comparison_sec} \subsubsection{Effect of sparsification and dimension-adaptivity} \label{num_adap_sec} In this section, we analyze the effect of dimension adaptivity and sparsification on the acceleration of the convergence of the relative quadrature error, $\mathcal{E}_{R}$. We present a comparison of the TP, SM, and ASGQ methods when optimal damping parameters are used. Table \ref{cpu_tab} summarizes these findings. Throughout the numerical experiments, ASGQ consistently outperformed SM. Moreover, for the $2$D options, the performance of the ASGQ and TP methods is model-dependent, with ASGQ being the best method for options under the GBM model. For $d=4$, ASGQ performs better than TP for options under the GBM and VG models, which is not the case for options under the NIG model. As for $6$D options, ASGQ performs better than TP in most cases. These observations confirm that the effect of adaptivity and sparsification becomes more important as the dimension of the option increases. For the sake of illustration, Figure \ref{quad} compares ASGQ and TP for $4$D options with anisotropic parameter sets under different pricing models when optimal damping parameters are used. Figure \ref{basket_gbm_4D_aniso_EC} reveals that, for the 4D-basket put option under the GBM model, the ASGQ method achieves $\mathcal{E_R}$ below $1\%$ using $13.3 \%$ of the work of the TP quadrature.
Moreover, Figure \ref{basket_vg_4D_aniso_EC} indicates that, for the 4D-basket put option under the VG model, the ASGQ method achieves $\mathcal{E_R}$ below $0.1\%$ using $25 \%$ of the work of the TP quadrature. In contrast, for the 4D-basket put option under the NIG model, Figure \ref{basket_nig_4D_aniso_EC} reveals that the TP quadrature attains $\mathcal{E_R}$ below $0.1\%$ using $10 \%$ of the work of the ASGQ. \FloatBarrier \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{basket_gbm_4D_aniso_EC.eps} \caption{Ex 6 in Table \ref{tab_mgbm}} \label{basket_gbm_4D_aniso_EC} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{rainbow_gbm_4D_aniso_EC.eps} \caption{Ex 8 in Table \ref{tab_mgbm}} \label{rainbow_gbm_4D_aniso_EC} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{basket_vg_4D_aniso_EC.eps} \caption{Ex 18 in Table \ref{tab_mvg}} \label{basket_vg_4D_aniso_EC} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{rainbow_vg_4D_aniso_EC.eps} \caption{Ex 20 in Table \ref{tab_mvg}} \label{rainbow_vg_4D_aniso_EC} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{basket_nig_4D_aniso_EC.eps} \caption{Ex 30 in Table \ref{tab_mnig}} \label{basket_nig_4D_aniso_EC} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{rainbow_nig_4D_aniso_EC.eps} \caption{Ex 32 in Table \ref{tab_mnig} } \label{rainbow_nig_4D_aniso_EC} \end{subfigure} \caption{Convergence of the relative quadrature error, $\mathcal{E}_{R}$, w.r.t.~$N$ for TP, SM and ASGQ methods for European $4$-asset options under GBM ((a) and (b)), VG ((c) and (d)), and NIG ((e) and (f)) models, when optimal damping parameters, $\mathbf{\overline{R}}$, are used.} \label{quad} \end{figure} \FloatBarrier \subsubsection{Effect of the optimal damping rule} \label{num_high_dim_damping_sec} In this section, we present the computational benefit of using the optimal damping rule proposed in Section \ref{damping_section} on the convergence speed of the relative quadrature error of various methods when pricing the multi-asset European basket and rainbow options. Figures \ref{mgbm_damping}, \ref{mvg_damping}, and \ref{mnig_damping} illustrate that the optimal damping parameters lead to substantially better error convergence behavior. For instance, Figure \ref{basket_gbm_4D_aniso_damping} reveals that, for the $4$D-basket put option under the GBM model, ASGQ achieves $\mathcal{E_R}$ below $0.1 \%$ using around $N=1500$ quadrature points when using optimal damping parameters, compared to around $N=5000$ points to achieve a similar accuracy for damping parameters shifted by $+1$ in each direction w.r.t.~the optimal values. When using damping parameters shifted by $+2$ in each direction w.r.t.~the optimal values, we do not reach $\mathcal{E_R}= 10 \%$, even using $N=5000$ quadrature points. Similarly, for the $4$D-call on min option under the VG model, Figure \ref{rainbow_vg_4D_aniso_damping} illustrates that ASGQ achieves $\mathcal{E_R}$ below $0.1 \%$ using around $N=500$ quadrature points when using the optimal damping parameters. In contrast, ASGQ cannot achieve $\mathcal{E_R}$ below $1\%$ when using damping parameters shifted by $-1$ in each direction w.r.t.~the optimal values with the same number of quadrature points. 
Finally, for the $4$D-basket put option under the NIG model, Figure \ref{basket_nig_4D_aniso_damping} illustrates that, when using the optimal damping parameters, the TP quadrature crosses $\mathcal{E_R}= 0.1 \%$ using $22 \%$ of the work it would have used with damping parameters shifted by $-2$ in each direction w.r.t.~the optimal values. In summary, in all experiments, small shifts in both directions w.r.t.~the optimal damping parameters lead to worse error convergence behavior, suggesting that the region of optimality of the damping parameters is tight and that our rule is sufficient to obtain optimal quadrature convergence behavior, independently of the quadrature method. Moreover, arbitrary choices of damping parameters may lead to extremely poor convergence of the quadrature, as illustrated by the purple curves in Figures \ref{basket_gbm_4D_aniso_damping},\ref{rainbow_gbm_4D_aniso_damping}, \ref{basket_vg_4D_aniso_damping} and \ref{rainbow_nig_4D_aniso_damping}. All compared damping parameters belong to the strip of regularity of the integrand $\delta_V$ defined in Section \ref{sec:Problem Setting and Pricing Framework}. Finally, although we only provide some plots to illustrate these findings, the same conclusions were consistently observed for different models and damping parameters. \FloatBarrier \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{basket_gbm_4D_aniso_damping.eps} \caption{4D-basket put: Ex 6 in Table \ref{tab_mgbm}} \label{basket_gbm_4D_aniso_damping} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{rainbow_gbm_4D_aniso_damping.eps} \caption{4D-call on min: Ex 8 in Table \ref{tab_mgbm}} \label{rainbow_gbm_4D_aniso_damping} \end{subfigure} \caption{GBM model: Convergence of the relative quadrature error, $\mathcal{E}_{R}$, w.r.t.~$N$ for the ASGQ method for different damping parameter values. } \label{mgbm_damping} \end{figure} \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{basket_vg_4D_aniso_damping.eps} \caption{4D-basket put: Ex 18 in Table \ref{tab_mvg}} \label{basket_vg_4D_aniso_damping} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{rainbow_vg_4D_aniso_damping.eps} \caption{4D-call on min: Ex 20 in Table \ref{tab_mvg}} \label{rainbow_vg_4D_aniso_damping} \end{subfigure} \caption{VG model: Convergence of the relative quadrature error, $\mathcal{E}_{R}$, w.r.t.~$N$ for the ASGQ method for different damping parameter values.} \label{mvg_damping} \end{figure} \begin{figure}[h!] 
\centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{basket_nig_4D_aniso_damping.eps} \caption{4D-basket put: Ex 30 in Table \ref{tab_mnig}} \label{basket_nig_4D_aniso_damping} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{rainbow_nig_4D_aniso_damping.eps} \caption{4D-call on min: Ex 32 in Table \ref{tab_mnig}} \label{rainbow_nig_4D_aniso_damping} \end{subfigure} \caption{NIG model: Convergence of the relative quadrature error, $\mathcal{E}_{R}$, w.r.t.~$N$ for the TP method for different damping parameter values.} \label{mnig_damping} \end{figure} \FloatBarrier \subsection{Computational comparison of quadrature methods with optimal damping and MC} \label{num_quad_vs_mc_sec} This section compares the MC method and our proposed approach, based on the best quadrature method in the Fourier space combined with the optimal damping parameters, in terms of errors and computational time. The comparison is performed for all option examples in Tables \ref{tab_mgbm}, \ref{tab_mvg}, and \ref{tab_mnig}. We fix a sufficiently small relative error tolerance on the price estimates and compare the computational time that each method needs to meet it. The computational time of the proposed approach is the sum of the CPU time required for the numerical optimization of \eqref{new_opt} and for the numerical quadrature. The MC CPU time is obtained as an average over $10$ runs. The results presented in Table \ref{cpu_tab} highlight that our approach significantly outperforms the MC method for all the tested options with various models, parameter sets, and dimensions. In particular, for all tested $2$D and $4$D options, the proposed approach requires less than $20\%$ (even less than $1\%$ in most cases) of the MC work to achieve a total relative error below $0.1\%$. In general, these gains degrade for the tested $6$D options. For Example $21$ in Table \ref{tab_mvg}, this approach requires around $43\%$ of the MC work to achieve a total relative error below $1\%$. The magnitude of the CPU gain varies depending on different factors, such as the model and payoff parameters, which affect the integrand in physical space (related to the variance of the MC estimator), and the regularity of the integrand in Fourier space (related to the error of the quadrature methods). Finally, we observed significant memory gains using our approach for all examples, as we required considerably fewer quadrature points (function evaluations) than the number of samples required for the MC method to meet the same error tolerance. \begin{table}[h] \centering \hspace{-0.4cm} \small \begin{tabular}{|p{2.7cm}| p{1.2cm}| p{1.4cm} | p{1.2cm} | p{1.6cm} | p{1.cm} |p{1.2cm} | p{3cm} |} \hline \textbf{Example} & \textbf{Best Quad} & $\mathcal{E_R}$ & \textbf{MC CPU Time} & \textbf{$M$ (MC samples)} & \textbf{Quad CPU Time} & \textbf{$N$ (Quad.
Points) } & \textbf{CPU Time Ratio (Quad/MC) in $\%$} \\ \hline Ex 1 in Table \ref{tab_mgbm} & ASGQ & $ 7 e^{-04}$ & $7.36$ & $ 1.2 \times 10^{7}$ & $0.63$ & $33$ & $8.5 \%$\\ \hline Ex 2 in Table \ref{tab_mgbm} & ASGQ & $ 3.7 e^{-04}$ & $20.7$ & $3.3 \times 10^{7}$ & 0.65 & 67 & $3.14 \%$\\ \hline Ex 13 in Table \ref{tab_mvg}& TP & $ 2.9 e^{-04}$ & $44$ & $ 8.8 \times 10^{7}$ & 0.25 & 64 & $0.57 \%$\\ \hline Ex 14 in Table \ref{tab_mvg}& TP & $ 1.8 e^{-04}$ & $70.9$ & $1.4\times 10^{8}$ & 0.23 & 64 & $0.32 \%$\\ \hline Ex 25 in Table \ref{tab_mnig} & TP & $ 2.9 e^{-04}$ & $75.3$ & $1.1 \times 10^{8}$ & 0.2 & 36 & $0.26 \%$\\ \hline Ex 26 in Table \ref{tab_mnig} & TP & $ 5.86 e^{-04}$ & $17.2$ & $2.6\times 10^{7}$ &$ 0.2$ & 25 & $1.16 \%$\\ \hline Ex 3 in Table \ref{tab_mgbm} & ASGQ & $ 7 e^{-04}$ & $47.3$ & $ 7.6\times 10^{7}$ & 0.6 & 37 & $1.26 \%$\\ \hline Ex 4 in Table \ref{tab_mgbm} & ASGQ & $ 5.8 e^{-04}$ & $102$ & $ 1.4\times 10^{8}$ & 0.63 & 37 & $0.62 \%$\\ \hline Ex 15 in Table \ref{tab_mvg} & ASGQ & $ 8.26 e^{-04}$ & $19.5$ & $ 4.1\times10^{7}$ & 0.54 & 25 & $2.77 \%$\\ \hline Ex 16 in Table \ref{tab_mvg} & TP & $ 5.37 e^{-04}$ & $87.1$ & $ 1.4\times 10^{8}$ & 0.16 & 49 & $0.18 \%$\\ \hline Ex 26 in Table \ref{tab_mnig} & TP & $ 6.7 e^{-04}$ & $35.8$ & $5.3\times 10^{7}$ & 0.22 & 100 & $0.61 \%$\\ \hline Ex 27 in Table \ref{tab_mnig} & TP & $ 6.46 e^{-04}$ & $42.2$ & $6.5\times 10^{7}$ & 0.22 & 64 & $0.52 \%$\\ \Xhline{7\arrayrulewidth} Ex 5 in Table \ref{tab_mgbm} & ASGQ & $ 2.46e^{-04}$ & $207$ & $ 10^8$ & 7.8 & 5257 & $3.77\%$\\ \hline Ex 6 in Table \ref{tab_mgbm}&ASGQ & $8.12 e^{-04}$ & $14.5$ & $ 7.9 \times 10^6$ & 2.73 & 1433 & $18.83\%$ \\ \hline Ex 17 in Table \ref{tab_mvg}& ASGQ & ${2.58 e^{-04}}$ & $106.3$ & $1.23 \times 10^8$ & 5 & 3013 & $4.7\%$\\ \hline Ex 18 in Table \ref{tab_mvg}& ASGQ & $3.58 e^{-04}$ & $38.7$ & $4.5 \times10^7$ & 2 & 1109 & $5.17 \%$ \\ \hline Ex 27 in Table \ref{tab_mnig}& TP & $4.57 e^{-04}$& $50.2$ & $4.7\times10^7$ & 0.5 & 256 & $1 \%$\\ \hline Ex 28 in Table \ref{tab_mnig}& TP & ${4.1 e^{-04}}$& $49.4$ & $4.8 \times 10^7$ & 0.52 & 256 & $1\%$ \\ \hline Ex 7 in Table \ref{tab_mgbm}& ASGQ & ${ 5.7e^{-04}}$ & $1147$ & $7 \times10^8$ & 1 & 435 & $0.09 \%$\\ \hline Ex 8 in Table \ref{tab_mgbm} &ASGQ & ${5.5 e^{-04}}$ & ${1580}$ & $ 9.6 \times 10^8$ & 0.95 &654 & $0.06\%$ \\ \hline Ex 19 in Table \ref{tab_mvg} & ASGQ & ${5.9 e^{-04}}$ & $220$ & $3 \times10^{8}$ & 1.25 & 567 & $0.57\%$\\ \hline Ex 20 in Table \ref{tab_mvg}& ASGQ & ${8.9 e^{-04}}$ & 249 & $3.3 \times 10^8$ & 1.4 & 862 & $0.56 \%$ \\ \hline Ex 29 in Table \ref{tab_mnig} & TP & ${7.2 e^{-04}}$& 193.5 & $2\times 10^8$ & 8.7 & 20736 & $4.5 \%$\\ \hline Ex 30 in Table \ref{tab_mnig} & TP & ${4.2e^{-04}}$& 716 & $7.8 \times 10^8$ & 0.8 & 2401 & $0.11\%$ \\ \hline \Xhline{7\arrayrulewidth} Ex 9 in Table \ref{tab_mgbm} & ASGQ & $2.9 e^{-02}$ & 18.53 & $5.5 \times 10^6$ & 2 & 318 & $11\%$\\ \hline Ex 10 in Table \ref{tab_mgbm} & ASGQ & $3.3e^{-03}$ & 548 & $1.5 \times 10^8 $ & 2.1 & 340 & $0.38\%$ \\ \hline Ex 21 in Table \ref{tab_mvg} & ASGQ & ${7.8e^{-03}}$ & 5.4 & $ 4.7 \times 10^6 $ & 2.3 & 453 & $42.6\% $\\ \hline Ex 22 in Table \ref{tab_mvg} & ASGQ & $5.4e^{-03}$ & 31.5 & $2.5 \times 10^7$ & 3.5 & 566 & $11\% $ \\ \hline Ex 31 in Table \ref{tab_mnig} & ASGQ & ${1.47e^{-02}}$ & $14.2$ & $ 10^7$ & 3.4 & 616 & $24\% $\\ \hline Ex 32 in Table \ref{tab_mnig} & TP & $3.75e^{-02}$ & 33.5 & $2.5 \times 10^7$ & 11.7 & 4096 & $35\% $ \\ \hline Ex 11 in Table \ref{tab_mgbm} & ASGQ & ${ 
1.4e^{-03}}$ & $2635$ & $6.9 \times10^8$ & 6 & 3070 & $0.23\%$\\ \hline Ex 12 in Table \ref{tab_mgbm} & ASGQ & $1.7e^{-03}$ & 2110 & $5.3 \times 10^8$ & 4.5 & 1642 & $0.21\% $ \\ \hline Ex 23 in Table \ref{tab_mvg} & ASGQ & $2 e^{-03}$ & 85 & $6.8 \times 10^7$ & 19.5 & 7401 & $23\%$\\ \hline Ex 24 in Table \ref{tab_mvg} & ASGQ & ${2.6 e^{-03}}$ & 360 & $ 2.8 \times 10^8 $ & 4.6 & 1671 & $1.28 \%$ \\ \hline Ex 33 in Table \ref{tab_mnig} & ASGQ & ${5.7e^{-02}}$ & $85.5$ & $6.3 \times 10^7 $ & 1 & 105 & $1.17\% $\\ \hline Ex 34 in Table \ref{tab_mnig} & ASGQ & $3.79e^{-02}$ & 108 & $7.5 \times 10^7 $ & 1.4 & 340 & $1.3\% $ \\ \hline \end{tabular} \caption{Errors, CPU times, and function evaluations comparing the Fourier approach combined with the optimal damping rule and the best quadrature (Quad) method with the Gauss--Laguerre rule against the MC method for the European basket and rainbow options under the multivariate GBM, VG, and NIG pricing dynamics for various dimensions. Tables \ref{tab_mgbm}, \ref{tab_mvg}, \ref{tab_mnig} present the selected parameter sets for each pricing model, the reference values with their corresponding statistical errors, and the optimal damping parameters.} \label{cpu_tab} \end{table} \section{Option Pricing Models in Physical Space}\label{appendix:Option Pricing Models} \label{option_pricing_models_sec} In this section, we briefly give the details of the pricing models considered in this work: the multivariate GBM (Section \ref{MGBM_sec}), VG (Section \ref{VG_sec}), and NIG (Section \ref{NIG_sec}) models. \subsection{Multivariate Geometric Brownian Motion} \label{MGBM_sec} In the multivariate GBM model with $d$ assets $\{S_i(.)\}_{i=1}^d$, each stock satisfies\footnote{$\stackrel{d}{=}$ denotes equality in distribution.} \begin{equation} S_i(t)\stackrel{d}{=}S_{i}(0) \exp \left[\left(r-\frac{\sigma_{i}^{2}}{2} \right) t+ \sigma_i W_i(t)\right], \quad i=1, \ldots, d, \end{equation} where $\sigma_1, \dots, \sigma_d> 0 $ and $\{W_1(t),\dots, W_d(t) , t \ge 0\}$ are risk-neutral Brownian motions with correlation matrix $\mathbf{C} \in \mathbb{R}^{d \times d}$, whose components $-1\le \rho_{i, j} \le 1$ denote the correlation between $W_i$ and $W_j$. Moreover, $\boldsymbol{\Sigma} \in \mathbb{R}^{d \times d}$ denotes the covariance matrix of the assets, with $\boldsymbol{\Sigma}_{ij}=\rho_{i, j} \sigma_{i} \sigma_{j}$. \subsection{Multivariate L\'evy models} Many stock price models are of the form $S_i(t) \stackrel{d}{=} S_i(0)e^{(r+\mu_i)t + X_i(t)}$, where $X_i(t)$ is a L\'evy process for which the characteristic function is explicitly known, and $\mu_i$ is the martingale correction term, given by minus the cumulant generating function of $X_i$ evaluated at one. We consider two models within this family: the VG and NIG models in Sections \ref{VG_sec} and \ref{NIG_sec}, respectively. \subsubsection{Multivariate variance Gamma}\label{VG_sec} We consider the multivariate VG model introduced in \cite{luciano2006multivariate}.
The joint risk-neutral dynamics of the stock prices are modeled as follows: \begin{equation} S_{i}(t) \stackrel{d}{=} S_{i}(0) \exp \left\{\left(r+\mu_{VG,i}\right) t+\theta_{i} G(t)+\sigma_{i} \sqrt{G(t)}Z_i\right\}, \quad i=1, \ldots, d, \end{equation} where $\{Z_1,\dots, Z_d \}$ are independent standard normal variables, $\{ G(t) | t \geq 0 \}$ is a common Gamma process,\footnote{$\{ G(t) | t \geq 0 \}$ is a Gamma process with parameters $(a , b)$ defined by $f_{G}(x;a,b)= \frac{b^{a t}}{\Gamma (a t)} x^{a t-1} e^{-b x},\: x \ge 0$.} independent of all involved Brownian motions, with parameters $( \frac{t}{\nu}, \frac{1}{\nu})$. $\theta_i \in \rset$ and $\sigma_i> 0$, $1 \le i \le d$. We only consider the case in which the Brownian motions of the stocks are uncorrelated and share the same parameter $\nu$; thus, the matrix $\mathbf{\Sigma} \in \mathbb{R}^{d \times d}$, presented in Table \ref{table:chf_table} satisfies $\Sigma_{i,j} = {\sigma_i }^2$ for $i=j$, and 0 otherwise. In addition, $\boldsymbol{\mu}_{VG}:=(\mu_{VG,1}, \dots,\mu_{VG,d})$ are the Martingale correction terms that ensure that $\{e^{-rt}S_i(t) | t \geq 0\}$ is a Martingale and are given by \begin{equation} \mu_{VG,i}=\frac{1}{\nu} \log \left(1-\frac{1}{2} \sigma_{i}^{2} \nu-\theta_{i} \nu\right), \quad i=1, \ldots, d. \end{equation} \subsubsection{Multivariate normal inverse Gaussian}\label{NIG_sec} We consider the multivariate NIG model where the joint risk-neutral dynamics of the stock prices are modeled as follows: \begin{equation} S_{i}(t) \stackrel{d}{=} S_{i}(0) \exp \left\{\left(r+\mu_{NIG,i}\right) t+\beta_{i} IG(t)+ \sqrt{IG(t)}\sum_{j=1}^d C_{i,j}Z_{j}\right\}, \quad i=1, \ldots, d, \end{equation} where $\{Z_1,\dots, Z_d\}$ are standard normal random variables, $\{ IG(t) | t \geq 0 \}$ is a common inverse Gaussian process,\footnote{$\{ IG(t) | t \geq 0 \}$ is an inverse Gaussian process with parameters $(a, b)$ defined by $f_{IG}(x;a,b)= \left(\frac{a}{2 \pi x^3}\right)^{1/2} e^{ \frac{-a(x-b)^2}{2b^2x} },\: x > 0$.} independent of all involved standard normal variables, with parameters $(\delta^2 t^2, \alpha^{2}-\boldsymbol{\beta}^{\mathrm{T}} \boldsymbol{\Delta} \boldsymbol{\beta})$. Additionally, $\alpha \in \mathbb{R}_{+}$, $\boldsymbol{\beta} \in \mathbb{R}^{d}$, $\alpha^2 > \boldsymbol{\beta}^{\mathrm{T}} \boldsymbol{\Delta} \boldsymbol{\beta} $, $\delta > 0$, and $ \boldsymbol{\Delta} \in \mathbb{R}^{d \times d}$ is a symmetric positive definite matrix with a unit determinant, such that $\textbf{C}\textbf{C}^T = \boldsymbol{\Delta}$. $\{\mu_{NIG,i}\}_{i=1}^{d}$ are the Martingale correction terms that ensure that $\{e^{-rt}S_i(t) | t \geq 0\}$ is a Martingale, given by \begin{equation} \mu_{NIG,i}=- \delta \left( \sqrt{\alpha^2 - \beta_i^2} - \sqrt{\alpha^2 - (\beta_i + 1)^2}\right), \quad i=1, \ldots, d. \end{equation} \section{On the Choice of the Quadrature Rule}\label{num_lag_herm_sec} In this section, through numerical examples on vanilla put options, we show that the Gauss--Laguerre quadrature rule significantly outperforms the Gauss--Hermite quadrature rule for the numerical evaluation of the inverse Fourier integrals; hence, we adopt the Gauss--Laguerre measure for the rest of the work.
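As a purely illustrative sketch (a toy integrand of ours, not the Fourier pricing integrand studied in this appendix), the following Python snippet shows the weight changes and domain splitting needed to apply both rules to an integral over $\mathbb{R}$:
\begin{verbatim}
# Toy comparison of Gauss-Laguerre and Gauss-Hermite on int_R f(x) dx for the
# even test function f(x) = exp(-x^2/2) cos(x); exact value sqrt(2 pi) e^{-1/2}.
import numpy as np
from numpy.polynomial.laguerre import laggauss
from numpy.polynomial.hermite import hermgauss

f = lambda x: np.exp(-0.5 * x ** 2) * np.cos(x)
exact = np.sqrt(2.0 * np.pi) * np.exp(-0.5)

for n in (5, 10, 20):
    xh, wh = hermgauss(n)                              # weight exp(-x^2) on R
    err_hermite = abs(np.sum(wh * np.exp(xh ** 2) * f(xh)) - exact)
    xl, wl = laggauss(n)                               # weight exp(-x) on [0, inf)
    err_laguerre = abs(2.0 * np.sum(wl * np.exp(xl) * f(xl)) - exact)  # f is even
    print(n, err_hermite, err_laguerre)
\end{verbatim}
The comparison reported below is, of course, carried out on the actual pricing integrands rather than on this toy function.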
Figures \ref{1d_put_gbm_laguerre_vs_hermite}, \ref{1d_put_vg_laguerre_vs_hermite}, and \ref{1d_put_nig_laguerre_vs_hermite} reveal that the Gauss--Laguerre quadrature rule significantly outcompetes the Gauss--Hermite quadrature independently of the values of the damping parameters in the strip of regularity for the tested models: GBM, VG, and NIG. For instance, Figure \ref{1d_put_gbm_laguerre_vs_hermite} illustrates that, when $R=4$ is used, the Gauss--Laguerre quadrature rule reaches approximately the relative quadrature $ \mathcal{E}_{R} =0.01 \%$ using $12 \%$ of the work required by the Gauss--Hermite quadrature to attain the same accuracy. Although we present a few plots for 1D put options under the various models for illustrative purposes, the observations were consistent for all tested parameter constellations and dimensions, and independent of the choice of the quadrature methods (TP, ASGQ, or SM). \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{1d_put_gbm_laguerre_vs_hermite.eps} \caption{$\sigma = 0.4$} \label{1d_put_gbm_laguerre_vs_hermite} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{1d_put_vg_laguerre_vs_hermite.eps} \caption{$\sigma = 0.4, \theta = -0.3, \nu= 0.257$} \label{1d_put_vg_laguerre_vs_hermite} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{1d_put_nig_laguerre_vs_hermite.eps} \caption{$\alpha = 10, \beta= -3, \delta= 0.2$} \label{1d_put_nig_laguerre_vs_hermite} \end{subfigure} \caption{Relative quadrature error, $\mathcal{E}_{R}$, convergence w.r.t.~$N$ of Gauss--Laguerre and Gauss--Hermite quadrature rules for a European put option with $S_0=100$, $K=100$, $r=0$ , and $T=1$ under (a) GBM, (b) VG, and (c) NIG.} \label{fig: convergence w.r.t N of Gauss-Laguerre and Gauss-Hermite quadrature rules for European put option under (a) GBM (b) VG (c) NIG.} \end{figure} \section{Monte Carlo Algorithms} \begin{algorithm}[h!] \caption{Multivariate GBM: Pricing $d$-dimensional option with payoff $P(.)$ using MC} \label{mc_algo_gbm} \SetAlgoLined \KwResult{Compute $\hat{V}(S_0,K,T,r) =\frac{1}{M}\sum_{m=1}^M \left(P(S_T^i(\omega_m) ) \right)$ } Input: $S_0^i, \boldsymbol{\sigma}, \mathbf{C},M,N, d$ \\ Set: $ 0 = t_0<..<t_N = T, \; \Delta t = \frac{T}{N}, \; t_j = j \Delta t$\\ Compute: $ \mathbf{L}$=Cholesky($ \mathbf{C}$) \mbox{(Cholesky: $\mathbf{C} = \mathbf{L} \cdot \mathbf{L}^T$) }\\ \For{m = 1 \ldots M}{ \For{j = 1 \ldots N}{ Sample $ \tilde{\mathbf{Z}}_{j,m}=(\tilde{Z}_{j,m}^1, \ldots, \tilde{Z}_{j,m}^d) \; \mbox{from} \; \mathcal{N}( \boldsymbol{0}, \mathbf{I}_d )$ \\ Compute $ \mathbf{Z_{j,m} } = \mathbf{L} \cdot \tilde{\mathbf{Z}}_{j,m}$, \mbox{(correlated Brownian motion)} \\ \For {i = 1 \ldots d}{ $S_j^i(\omega_m) = S_{j-1}^{i}(\omega_m) \; exp( (r - \frac{\sigma_i^2}{2})\Delta t + \sigma_i \sqrt{\Delta t} Z_{j,m}^i)$} } } \end{algorithm} \begin{algorithm}[h!] 
\caption{Multivariate VG: Pricing $d$-dimensional option with payoff $P(.)$ using MC} \label{mc_algo_vg} \SetAlgoLined \KwResult{Compute $\hat{V}(S_0,K,T,r) =\frac{1}{M}\sum_{m=1}^M \left(P(S_T^i(\omega_m) ) \right)$ } Input: $S_0^i, \boldsymbol{\sigma}, \boldsymbol{\theta},\nu,M,N, d$ \\ Set: $0=t_0<..<t_N=T, \; \Delta t = \frac{T}{N}, \; t_j = j \Delta t$\\ Compute $ \boldsymbol{ \mu_{VG } }=(\mu_{VG,1},\ldots, \mu_{VG,d} )$ s.t $\mu_{VG,i}=\frac{1}{\nu} \log \left(1-\frac{1}{2} \sigma_{i}^{2} \nu-\theta_{i} \nu\right)$ \\ \For{m = 1 \ldots M}{ \For{j = 1 \ldots N}{ Sample $G_{j,m} \; \mbox{from} \; \Gamma(\frac{\Delta t}{\nu},\frac{1}{\nu} )$ (common clock for all stocks)\\ Sample $\mathbf{Z}_{j,m} \; \mbox{from} \; \mathcal{N}(\mathbf{0},\mathbf{I}_d) $ (independent from $G_{j,m}$) \\ \For {i = 1 \ldots d}{ $S_j^i(\omega_m) = S_{j-1}^{i}(\omega_m) \; exp( (r + \mu_{VG,i})\Delta t + \theta_iG_{j,m} + \sigma_i \sqrt{G_{j,m}}Z_{j,m}^i)$\\ } } } \end{algorithm} \begin{algorithm}[h!] \caption{Multivariate NIG: Pricing $d$-dimensional option with payoff $P(.)$ using MC} \label{mc_algo_nig} \SetAlgoLined \KwResult{Compute $\hat{V}(S_0,K,T,r) =\frac{1}{M}\sum_{m=1}^M \left(P(S_T^i(\omega_m) ) \right)$ } Input: $S_0^i, \alpha, \boldsymbol{\beta},\delta, \boldsymbol{\Delta},M,N, d$ \\ Set: $0=t_0<..<t_N=T, \; \Delta t = \frac{T}{N}, \; t_j = j \Delta t$\\ Compute $\mathbf{L}$=Cholesky($ \boldsymbol{\Delta}$) \mbox{(Cholesky: $ \boldsymbol{\Delta} = \mathbf{L} \cdot \mathbf{L}^T$) } \\ Compute $ \boldsymbol{ \mu_{NIG } }=(\mu_{NIG,1},\ldots, \mu_{NIG,d} )$ s.t $ \mu_{NIG,i} = - \delta ( \sqrt{\alpha^2 - \beta_i^2} - \sqrt{\alpha^2 - (\beta_i + 1)^2})$ \\ \For{m = 1 \ldots M}{ \For{j = 1 \ldots N}{ Sample $IG_{j,m} \; \mbox{from} \; IG(\delta \Delta t, \sqrt{\alpha^2 - \beta^2} )$ \\ Sample $ \tilde{\mathbf{Z}}_{j,m}=(\tilde{Z}^1_{j,m}, \ldots, \tilde{Z}^d_{j,m}) \; \mbox{from} \; \mathcal{N}(\mathbf{0},\mathbf{I}_d) $ (independent from $IG_{j,m} $) \\ Compute $ \mathbf{Z}_{j,m} = \mathbf{L} \cdot \tilde{\mathbf{Z}}_{j,m} $\\ \For {i = 1 \ldots d}{ $S_j^i(\omega_m) = S_{j-1}^{i}(\omega_m) \; exp( (r + \mu_{NIG,i})\Delta t + \beta_i IG_{j,m} + \sqrt{IG_{j,m}}Z_{j,m}^i)$ } } } \end{algorithm} \section*{\hfil #1\hfil}} \renewcommand{\refname}{\hfil References Cited\hfil} \def\smallskip{\smallskip} \def\medskip{\medskip} \def\bigskip{\bigskip} \makeatletter \def\State\hskip-\ALG@thistlm{\State\hskip-\ALG@thistlm} \makeatother \title{Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in L\'evy Models} \author[1]{Christian Bayer} \author[2]{Chiheb Ben Hammouda\thanks{[email protected]}} \author[3]{Antonis Papapantoleon} \author[4]{Michael Samet\thanks{[email protected]}} \author[4,2]{Ra\'ul Tempone} \affil[1]{Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Berlin, Germany.} \affil[2]{Chair of Mathematics for Uncertainty Quantification, RWTH Aachen University, Aachen, Germany.} \affil[3]{Delft Institute of Applied Mathematics, TU Delft, 2628 Delft, The Netherlands, and Institute of Applied and Computational Mathematics, FORTH, 70013, Heraklion, Greece.} \affil[4]{King Abdullah University of Science and Technology (KAUST), Computer, Electrical and Mathematical Sciences \& Engineering Division (CEMSE), Thuwal, Saudi Arabia.} \renewcommand\Authands{ and } \begin{document} \date{} \maketitle \begin{abstract} Efficient pricing of multi-asset options is a challenging problem in quantitative finance. 
When the Fourier transform of the density function is available, Fourier-based pricing methods become very competitive compared to alternative techniques because the integrand in the frequency space has often higher regularity than in the physical space. However, when designing a numerical quadrature method for most of these Fourier pricing approaches, two key aspects affecting the numerical complexity should be carefully considered: (i) the choice of the damping parameters that ensure integrability and control the regularity class of the integrand and (ii) the effective treatment of the high dimensionality of the integration problem. To address these challenges, based on the extension of the one-dimensional Fourier valuation formula to the multivariate case, we propose an efficient numerical method for pricing European multi-asset options based on two complementary ideas. First, we smooth the Fourier integrand via an optimized choice of damping parameters based on a proposed heuristic optimization rule. Second, we use the adaptive sparse grid quadrature based on sparsification and dimension-adaptivity techniques to accelerate the convergence of the numerical quadrature in high dimensions. Through an extensive numerical study on the basket and rainbow options under the multivariate geometric Brownian motion and some multivariate L\'evy models, we demonstrate the advantages of adaptivity and our damping parameter rule on the numerical complexity of the quadrature methods. Moreover, we reveal that our approach achieves substantial computational gains compared to the Monte Carlo method for different dimensions and parameter constellations. \textbf{Keywords} Option pricing, multi-asset options, Fourier methods, numerical quadrature, damping parameters, adaptive sparse grid quadrature, basket and rainbow options, multivariate geometric Brownian motion, multivariate L\'evy models, Monte Carlo. \textbf{2010 Mathematics Subject Classification} 65D32, 65T50, 65Y20, 91B25, 91G20, 91G60 \end{abstract} \section{Introduction} \input{Introduction.tex} \section{Problem Setting and Pricing Framework}\label{sec:Problem Setting and Pricing Framework} \input{Problem_setting.tex} \section{Methodology of our Approach}\label{sec:Methodology of our Approach} \input{Methodology.tex} \section{Numerical Experiments and Results}\label{sec: num_exp_results} \input{Num_exp.tex} \textbf{Acknowledgments} C. Bayer gratefully acknowledges support from the German Research Foundation (DFG) via the Cluster of Excellence MATH+ (project AA4-2). This publication is based on the work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-2019-CRG8-4033 and the Alexander von Humboldt Foundation. Antonis Papapantoleon gratefully acknowledges the financial support from the Hellenic Foundation for Research and Innovation Grant No. HFRI-FM17-2152. \bibliographystyle{plain}
\section{Introduction} This work deals with the complexity of smooth functions of many variables. The questions addressed in this paper can be phrased as: What does a random Morse function look like on a high-dimensional manifold? How many critical values of given index, or below a given level? What can be said about the topology of its level sets? We study here general smooth Gaussian functions on the sphere in dimension $N$, when $N$ is large. We investigate the number of critical points of given index in level sets below a given value, as well as the topology of the level sets through their mean Euler characteristic. Our main result is that these functions have an exponentially large number of critical points of given index, and that the Euler characteristic of the level sets has a very interesting oscillatory behavior. Moreover, we find an invariant to distinguish between two very different classes of complexity for these functions. These two classes should correspond to the distinction between one-step replica symmetry breaking and full replica symmetry breaking in the physics literature on spin glasses. Indeed, the general random Gaussian smooth functions on the sphere correspond exactly to the Hamiltonians of an important class of models of statistical physics of disordered media, i.e., mixed spherical spin glasses. These mean-field models, as well as other spin glass models, are well known to be very challenging to analyze. It is believed (see \cite{Leuzzi} and the references therein) that a subset of the spherical models that we study here shares the same interesting static and dynamical behavior as the famous Sherrington-Kirkpatrick model at low temperature. As part of our study, we give further evidence for this claim and conjecture its domain of validity. We start by calculating the averaged complexity of these functions, i.e., the exponential rate function of the mean number of critical points of finite and diverging index at any level of energy. This initial computation uses the method developed in \cite{ABC}, where this study was initiated for particular covariance functions that appear as Hamiltonians of the pure spherical $p$-spin models. In the general case, our first result is that the complexity of critical points of finite index can be decomposed into two pieces, only one of which is present in the pure case (see Figure~1). This difference allows us to separate the models of smooth Gaussian functions on the sphere into two classes: one where the bottom landscape is qualitatively similar to that of the pure $p$-spin models, and another which should correspond to the case of full replica symmetry breaking, where, in particular, the mean number of local minima is exponentially large even at energy levels below the limiting ground state energy. In the former case, which we call the \textit{pure-like} region, we prove a strong correlation between critical values and their indexes. There exist energy thresholds $-E_k$ such that, below $-E_k$, with probability going to one it is only possible to find critical points of index less than $k$. In the latter case, called the \textit{full mixture} region, this layered structure is not present and there is no difference between the complexity of critical points of finite index $k$ for any $k$ (not diverging with $N$). However, in both cases the complexity of critical points of any index differs from the pure case as we increase the level of energy.
In particular, coexistence of local minima and local maxima is perfectly possible and the mean number of critical points of any finite index agree in a full neighborhood of energies around their "most typical" energy (see Theorem \ref{critical2}). The understanding of the landscape of these Hamiltonians might prove useful for the study of both static and dynamical questions of these models. First, the layered structure described above may shed a light on the metastability of Langevin dynamics (in longer time scales than those studied in \cite{BG97}). Second, it may provide an insight on the most important statics open question: to understand at any temperature the structure of the (random) Gibbs measure associated to these Hamiltonians. In this direction, a major breakthrough was done by Talagrand \cite{Talagrand} based on the remarkable work of Guerra \cite{Guerra} by computing the free energy at any temperature under a convexity assumption. The free energy is given as the infimum of the Parisi functional over the space of probability measures on $[0,1]$ (see \eqref{ParisiFor}). Understanding the minimizer of this functional (uniqueness, for instance, is only known under certain conditions), called a Parisi measure, is also a major challenge. In the spherical pure $p$-spin it was shown in Proposition 2.2 of \cite{Talagrand} that the model has a one step replica symmetry breaking (1-RSB) at low temperature, i.e the Parisi measure at low temperature is atomic with two atoms. For a mixed spin model, as far as we know, the structure of the Parisi measure remains an open question. We also show that in the \textit{pure-like} region, the 1-RSB picture is consistent with the complexity picture without any convexity assumption. Precisely, the complexity function of local minima can be characterized near its zero as a function of the Parisi Functional minimized over two-atomic measures and vice-versa. This is the content of Theorem \ref{tris}. Furthermore, we show that concentration of the number of local minima implies 1-RSB at zero temperature for any mixture in the \textit{pure-like} region. In the \textit{full mixture} region, the 1-RSB Parisi functional does not describe the complexity function near its zero. In Theorem \ref{Msri1} we show that the ground state energy converges almost surely to a constant. In the \textit{full mixture} region, this constant has positive complexity. This immediately implies that it is not possible to have concentration for the number of critical points around its mean and one may expect that the averaged complexity is strictly bigger than the quenched complexity (see the discussion after Theorem \ref{tris}.) Our picture is consistent and generalizes the one proposed by physicists. In \cite{Leuzzi}, it is claimed that a $2+p$ spherical spin glass model with $p\geq 4$, at low temperature is either 1-RSB or its Parisi measure has an absolute continuous part (a Full RSB or a 1-Full RSB) depending on how much weight is assigned to the $2$-spin model. The regions \textit{pure-like} and \textit{full mixture} seem to numerically agree and to extend (since we do not need the $2$ spin component) the one proposed by \cite{Leuzzi}. We find this a remarkable fact : some mixtures of the spherical model are expected to have the same Gibbs structure as the Sherrington-Kirkpatrick Model on the hypercube. We conjecture that Full RSB holds in the \textit{full mixture} region. 
Intuitively, since we prove that the average number of critical points at the ground state energy is exponentially large with $N$, the Gibbs measure at low temperature has plenty of candidates to sample from. However, our techniques are still far from being able to prove this fact. In particular, we still do not know the typical overlap of two critical points. Back to the topology of level sets, we show that the total number of critical points at a given level of energy is asymptotically equal to the number of critical points of a particular index (that depends on the level of energy). Loosely speaking, at lower levels, local minima dominate. In a certain threshold energy window $(-NE_\infty, NE_\infty)$, critical points of diverging index give the main contribution to the total complexity. Above $NE_\infty$, the total complexity is equal to the complexity of local maxima. This phenomenon is related to the asymptotic mean Euler (or Euler-Poincar\'{e}) characteristic of the set of points below a certain level of energy. We show that, in absolute value, the mean Euler characteristic of these level sets is asymptotically equal to the total number of critical points at that level. It is therefore exponentially large for most energies. Moreover, we prove that it is positive outside the window $(-NE_\infty, NE_\infty)$ but oscillates $O(N)$ times between positive and negative (exponentially large) values inside $(-NE_\infty, NE_\infty)$. We find this picture very interesting but quite hard to visualize. The paper is organized as follows. In section \ref{inicio}, we define the model and we state the main results about the complexity of critical points. In section \ref{sec4}, we state a few relations between the structure of the Parisi measure, the global minima of the Hamiltonian and the complexity of critical points. We also define the regions pure-like and full mixture. Next, in section \ref{Eulersec}, we state our results about the Euler characteristic of level sets. In section \ref{sec2} we prove all theorems about the complexity function. Their proofs follow the same strategy as in \cite{ABC}. Namely, they will follow from an exact formula for the mean number of critical points of index $k$ that translates the problem into a Random Matrix Theory question. This formula is more involved than in the pure case since, in a mixture, the Hessian matrix gains an independent Gaussian component on the diagonal. This leads to a different variational principle that we analyze. Finally, in sections \ref{sec5} and \ref{sec6} we prove the results of section \ref{sec4}, while in section \ref{eulerproof} we prove the results of section \ref{Eulersec}. \subsection{Acknowledgements} We want to underline our debt to Michel Ledoux for his friendly help with the results of section \ref{sec4}. We also would like to thank Jiri Cerny for a careful reading of this manuscript and Yan Fyodorov for pointing out that the method used in this paper is similar to \cite{fyodorov-2004-92} and \cite{Fyodorov}. Both authors were partially supported by NSF Grant DMS 0806180. The first author was also partially supported by NSF grant DMS-0500923. The second author was also partially supported by NSF Grant OISE-0730136. We want to thank MSRI, IMPA, the Universit\'{e} de Marseille, and the Universit\'{e} de Nice, where a mini-course based on these results was given, for their hospitality. A more pedagogical account of this subject that includes this work should appear in the MSRI publication series.
\section{Complexity and Energy Landscape}\label{inicio} The state space of the spherical spin-glass model is $S^{N-1}(\sqrt N)\subset \mathbb R^N$, the Euclidean sphere of radius $\sqrt N$. A configuration $\boldsymbol\sigma$ is a vector of $\mathbb R^N$ satisfying the constraint \begin{equation} \frac{1}{N}\sum_{i=1}^N \sigma_i^2 = 1. \end{equation} The Hamiltonian of the pure $p$- spin model is the random function defined on $S^{N-1}(\sqrt N)$ by \begin{equation} \label{Hamiltonian} H_{N,p}(\boldsymbol\sigma) = \frac{1}{N^{(p-1)/2}} \sum_{i_1, \dots, i_p=1}^N J_{i_1, \dots, i_p} \sigma_{i_1}\dots \sigma_{i_p}, \qquad \boldsymbol\sigma=(\sigma_1,\dots,\sigma_N)\in S^{N-1}(\sqrt N), \end{equation} where $J_{i_1, \dots, i_p}$ are independent centered standard Gaussian random variables. Equivalently, $H_{N,p}$ is the centered Gaussian process on the sphere $S^{N-1}(\sqrt N)$ whose covariance is given by \begin{equation} \mathbb E\big[H_{N,p}(\boldsymbol\sigma)H_{N,p}(\boldsymbol\sigma')\big] =N^{1-p}\Big(\sum_{i=1}^N \sigma_i\sigma'_i\Big)^p = N R(\boldsymbol\sigma, \boldsymbol\sigma')^p, \end{equation} where the normalised inner product $R(\boldsymbol\sigma, \boldsymbol\sigma') = \frac{1}{N}\< \boldsymbol\sigma, \boldsymbol\sigma'\>=\frac{1}{N}\sum_{i=1}^N \sigma_i\sigma'_i$ is usually called the overlap of the configurations $\boldsymbol\sigma$ and $\boldsymbol\sigma'$. The study of the landscape of such Hamiltonians was done in \cite{ABC} via a Random Matrix Theory. In this paper, we will consider the analogous analysis for a mixed $p$-spin model, i.e. linear combinations, of these Hamiltonians. At a first sight, this question appears just as a simple generalization, however, as mentioned in the introduction we find a richer structure when considering mixed Hamiltonians instead of pure $p$-spins. We now define a mixture of p-spins (or a mixed spin). Given a sequence $\boldsymbol\beta = (\beta_p)_{p\in \ensuremath{\mathbb{N}}, p\geq 2}$ of positive real numbers such that \begin{equation}\label{e1b} \sum_{p=2}^{\infty} 2^p \beta_p < \infty, \end{equation} let \begin{equation} H_{N, \boldsymbol\beta}(\boldsymbol\sigma) = \sum_{p=2}^{\infty} \beta_p H_{N,p}(\boldsymbol\sigma), \end{equation} where for any pair of values $p\neq p'$ the Hamiltonians $H_{N,p}, H_{N,p'}$ are independent. Condition \eqref{e1b} is more than enough to guarantee that the above sum is a.s. finite and the Hamiltonian $H_{N, \boldsymbol\beta}$ is a.s. smooth (see Theorem 11.3.1 of \cite{AT07}). In this case, we have that \begin{equation}\label{duali} \mathbb E\big[H_{N,\boldsymbol\beta}(\boldsymbol\sigma)H_{N,\boldsymbol\beta}(\boldsymbol\sigma')\big]= N \sum_{p=2}^\infty \beta_p^2 \Big(R(\boldsymbol\sigma, \boldsymbol\sigma')\Big)^p = N \nu (R(\boldsymbol\sigma, \boldsymbol\sigma')), \end{equation} where \begin{equation}\label{linsum} \nu(t)= \sum_{p=2}^{\infty} \beta_p^2 t^p. \end{equation} A word of comment is needed here. By Schoenberg's theorem \cite{Schoenberg}, if $\nu(R(\boldsymbol \sigma, \boldsymbol \sigma'))$ is a positive-definite function for all $N$ and all $ \boldsymbol \sigma, \boldsymbol \sigma' \in S^{N-1}(\sqrt N)$ then $\nu$ can be written as a linear sum as in \eqref{linsum}. This shows that we are exhausting all possible covariances given as \eqref{duali}. From now on, we call the function $\nu$ a mixture and we note that $\nu$ is smooth with \begin{equation} \nu'(1)\equiv\nu' \neq 0, \nu''(1) \equiv \nu''>0 \quad \nu(1)= \sum_{p=2}^{\infty} \beta_p^2 = 1. 
\end{equation} If we consider the random variable $X$ that assign probability $\beta_p^2$ to the integer $p$, then its probability measure is given by $\mu_X= \sum \beta_p^2 \delta_p$ and \begin{equation}\label{eq2p} \ensuremath{\mathbb{E}} X = \nu' \quad \text{and} \quad \alpha^2 \equiv \mathop{\rm Var}\nolimits X = \nu'' + \nu' - \nu'^2. \end{equation} A mixture is pure if and only if $\alpha=0$. Furthermore, note that $\nu''\geq \nu'$ with equality only in the pure case with $p=2$. The parameters $\nu', \nu''$ and $\alpha^2$ will be fundamental in our analysis. We now introduce the complexity of spherical spin glasses as in \cite{ABC}. For any Borel set $B\subset \ensuremath{\mathbb{R}}$ and any integer $0\le k < N$, we consider the (random) number $\mathop{\mathrm{Crt}}\nolimits_{N,k}(B)$ of critical values of the Hamiltonian $H_{N,\boldsymbol\beta}$ in the set $NB=\{Nx:x\in B \}$ with index equal to $k$, \begin{equation} \label{defWk} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B) = \sum_{\boldsymbol\sigma: \nabla H_{N,\boldsymbol\beta}(\boldsymbol\sigma) = 0 } \ensuremath{\boldsymbol 1}\{ H_{N,\boldsymbol\beta}(\boldsymbol\sigma) \in NB\} \ensuremath{\boldsymbol 1}\{ i(\nabla^2 H_{N,\boldsymbol\beta}(\boldsymbol\sigma)) = k\}. \end{equation} Here $\nabla$, $\nabla^2$ are the gradient and the Hessian restricted to $S^{N-1}(\sqrt N)$, and $i(\nabla^2 H_{N,\boldsymbol\beta}(\boldsymbol\sigma))$ is the number of negative eigenvalues of the Hessian $\nabla^2 H_{N,p}$, called the index of the Hessian at $\boldsymbol\sigma$. We will also consider the total number $\mathop{\mathrm{Crt}}\nolimits_{N}(B)$ of critical values of the Hamiltonian $H_{N,\boldsymbol\beta}$ in the set $NB$ (whatever their index) \begin{equation} \label{e:defW} \mathop{\mathrm{Crt}}\nolimits_{N}(B) = \sum_{\boldsymbol\sigma: \nabla H_{N,\boldsymbol\beta}(\boldsymbol\sigma) = 0 } \ensuremath{\boldsymbol 1}\{ H_{N,\boldsymbol\beta}(\boldsymbol\sigma) \in NB\}. \end{equation} Our first results will give exact and asymptotic formulas for the mean values $ \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B)$ and $\ensuremath{\mathbb{E}}\mathop{\mathrm{Crt}}\nolimits_{N}(B)$, when $N\to\infty$ and $k$, $B$ and $\nu$ are fixed. We will compute the limits of $\frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B)$ and $\frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N}(B)$ as $N$ tends to infinity. \subsection{Complexity functions for critical values of finite index.} \indent Our first result is the existence and characterization of the asymptotic complexity of the mean number of critical points of index $k$ in a certain level of energy. \begin{theorem}\label{maintheorem} For any fixed integer $k \geq 0$, there exist a continuous function $\theta_{k,\nu}(u)$, called the $k$-complexity function, explicitly given in \eqref{complexityfunction2}, such that, for any open set $B \subseteq \ensuremath{\mathbb{R}}$, \label{critical2} \begin{equation} \lim_{N\to\infty} \frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B) = \sup_{u \in B} \theta_{k,\nu}(u). \end{equation} \end{theorem} We decide to postpone to section \ref{sectionlo} the explicit expression of the $k$- complexity functions $\theta_{k,\nu}(u)$. However, we describe some important properties of these functions (see Figure 3) in the proposition below. We first fix three important thresholds that depend on $\nu$. 
Let \begin{equation} E'_\infty = \frac{2 \nu' \sqrt{\nu''}}{\nu'+\nu''}, \quad E_\infty = \frac{ \nu'' - \nu' + \nu'^2 }{\nu'\sqrt{\nu''}} \end{equation} and \begin{equation}\label{edocap1} E_\infty^{-} = \frac{2\nu'\sqrt{\nu''} - \sqrt{4\nu''\nu'^2 - (\nu''+\nu')(2(\nu''-\nu'+\nu'^2)-(\nu''+\nu'-\nu'^2)\log{\frac{\nu''}{\nu'}})}}{\nu'+\nu''}. \end{equation} Note that \begin{equation}\label{seqequalities} E_\infty^{-} \leq E'_\infty \leq E_\infty .\end{equation} Furthermore, $E'_\infty = E_\infty$ if and only if $E_\infty = E_\infty^{-}$ if and only if $\nu''+\nu'-\nu'^2 = 0$, that is, any equality in \eqref{seqequalities} implies a triple equality. It occurs if and only if the mixture is a pure $p$-spin (see \eqref{eq2p}). \begin{figure}\label{fig01010} \centering \includegraphics[scale=1]{complexidademisturafig3} \caption{$k$-complexity functions $\theta_{k,\nu}(u)$ for $-6\leq u \leq -1$, $k=1,2,3,5$ in the case where $\nu$ is pure-like, i.e. $\theta_{k,\nu}(-E_{\infty})>0$. The dashed line is the continuation of the parabola that describes $\theta_{k,\nu}(u)$ in the interval $[-E_{\infty},\infty)$ where they all agree. } \end{figure} \begin{proposition}\label{remarkmaxima} For any mixture $\nu$ and any $k\geq0$, the $k$-complexity functions $\theta_{k, \nu}(u)$ satisfy the following: \begin{enumerate} \item $\theta_{k, \nu}(u)$ is continuous on $\ensuremath{\mathbb{R}}$ and differentiable on $\ensuremath{\mathbb{R}} \setminus \{-E_\infty\}$. \item $\theta_{k, \nu}(u)$ is strictly increasing on $(-\infty,-E_\infty')$ and strictly decreasing on $(-E_\infty',\infty)$. Its unique maxima is independent of $k$ and equal to \begin{equation}\label{totalmaxima} \Sigma_\nu:= \theta_{k,\nu} (-E_{\infty}') = \frac{1}{2} \log \frac{\nu''}{\nu'}-\frac{\nu''-\nu'}{\nu''+ \nu'} > 0. \end{equation} \item $\theta_{k, \nu}(u)$ has exactly two distinct zeros. The largest zero is given by $-E_\infty^{-}$ and therefore is independent of $k$. \item For any $k,k'\in \ensuremath{\mathbb{N}}$ with $k < k'$, $\theta_{k,\nu}(u) < \theta_{k',\nu}(u) $ for all $u\in(-\infty,-E_\infty)$. \item For any $k,k'\in \ensuremath{\mathbb{N}}$ with $k < k'$, $\theta_{k,\nu}(u) = \theta_{k',\nu}(u) $ for all $u\in[-E_\infty, \infty)$. \end{enumerate} \end{proposition} Immediately from Theorem \ref{maintheorem} and Proposition \ref{remarkmaxima} we obtain: \begin{corollary}\label{coracaocor} The mean total number of critical points of index $k$ satisfies \begin{equation} \lim_{N\to\infty} \frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(\ensuremath{\mathbb{R}}) = \Sigma_\nu. \end{equation} Furthermore, if $B = (-\infty,u)$ with $u \leq -E'_\infty$ then \begin{equation} \lim_{N\to\infty} \frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(-\infty,u) = \theta_{k,\nu}(u). \end{equation} \end{corollary} \begin{remark} By symmetry, Theorem \ref{critical2} also holds as stated for the random variables $\mathop{\mathrm{Crt}}\nolimits_{N,N-l}(B)$, with $l \geq 1$ fixed if one replaces $\theta_{k,\nu}(u)$ by $\theta_{k,\nu}(-u)$. \end{remark} We now use Theorem \ref{maintheorem} and Proposition \ref{remarkmaxima} to describe the bottom landscape of the mixed spin glass models. For any integer $k \geq 0$, we introduce $E_k = E_k(\nu)>0$ as the unique solution in $(E_\infty,\infty)$ to (see Figure 3 again) \begin{equation}\label{def:Ek} \theta_{k,\nu}(-E_k(\nu)) = 0. \end{equation} That is, $-E_k(\nu)$ is the smallest zero of the $k$-complexity function. 
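As a concrete illustration (a worked example of ours, not one of the parameter choices used later), consider the $2+4$ mixture $\nu(t)=\tfrac{1}{2}t^{2}+\tfrac{1}{2}t^{4}$. Then
\begin{equation*}
\nu'=2\cdot\tfrac{1}{2}+4\cdot\tfrac{1}{2}=3,\qquad \nu''=2\cdot 1\cdot\tfrac{1}{2}+4\cdot 3\cdot\tfrac{1}{2}=7,\qquad \alpha^{2}=\nu''+\nu'-\nu'^{2}=1>0,
\end{equation*}
so the mixture is not pure, and the thresholds above become
\begin{equation*}
E'_\infty=\frac{2\nu'\sqrt{\nu''}}{\nu'+\nu''}=\frac{3\sqrt{7}}{5}\approx 1.59,\qquad E_\infty=\frac{\nu''-\nu'+\nu'^{2}}{\nu'\sqrt{\nu''}}=\frac{13}{3\sqrt{7}}\approx 1.64,
\end{equation*}
so that the inequalities in \eqref{seqequalities} are strict, while \eqref{totalmaxima} gives $\Sigma_\nu=\tfrac{1}{2}\log\tfrac{7}{3}-\tfrac{4}{10}\approx 0.02>0$.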
It is important to note that, by items (4) and (5) of Proposition \ref{remarkmaxima}, the sequence $(E_k(\nu))_{k\in \ensuremath{\mathbb{N}}}$ is non-increasing. Its structure is of extreme importance and will be further explored in the next section. At this point we have the following consequence of Theorem \ref{maintheorem}: \begin{theorem} \label{t:nofiniteindex} For $k\geq 0$ and $\varepsilon > 0$, let $A_{N,k}(\varepsilon)$ be the event ``there is a critical value of the Hamiltonian $H_{N,\boldsymbol \beta}$ below the level $-N(E_{k}(\nu)+\varepsilon)$ and with index larger than or equal to $k$'', that is, $$A_{N,k}(\varepsilon )=\Big\{\sum_{i=k}^\infty \mathop{\mathrm{Crt}}\nolimits_{N,i}\big((-\infty,-E_k(\nu)-\varepsilon )\big)>0\Big\},$$ and let $B_{N,k}(\varepsilon )$ be the event ``there is a critical value of index $k$ of the Hamiltonian $H_{N,\boldsymbol \beta}$ above the level $-N(E_\infty^{-}-\varepsilon)$'', that is, $$B_{N,k}(\varepsilon )=\{\mathop{\mathrm{Crt}}\nolimits_{N,k}((-E_\infty^{-}+\varepsilon ,\infty))>0\}.$$ Then for all $k\ge 0$ and $\varepsilon >0$, \begin{equation} \limsup_{N\rightarrow\infty} \frac{1}{N} \log \ensuremath{\mathbb{P}} (A_{N,k}(\varepsilon)) < 0 \quad \text{and} \quad \limsup_{N\rightarrow\infty} \frac{1}{N} \log \ensuremath{\mathbb{P}} (B_{N,k}(\varepsilon)) < 0. \end{equation} \end{theorem} Theorem \ref{t:nofiniteindex} says that, with overwhelming probability, all critical values of the Hamiltonian $H_{N,\boldsymbol \beta}$ of index $k$ are inside the interval $[-NE_{k}, -NE_\infty^{-}]$. A similar result was derived for the pure spin glass models in \cite{ABC}. However, in the pure case it was shown (Theorem 2.2 of \cite{ABC}) that the probability of finding a critical point of finite index above the level $-N E_\infty$ is asymptotically of order $\exp(-N^2 C)$. Hence, in the mixture case not only has the window of possible values of the Hamiltonian at a critical point of finite index changed, but the probability of being outside of that window is also of order $\exp(-N C)$. \subsection{Complexity function for critical values of diverging index and the total number of critical points.} \indent We end this section by studying the number of critical points with diverging index and the total number of critical points (regardless of index). Let $k=k(N)$ be a sequence of integers such that, as $N$ goes to infinity, \begin{equation}\label{e:k} \frac{k(N)}{N} \rightarrow \gamma \in (0,1). \end{equation} Let $s_\gamma \in (-\sqrt{2},\sqrt{2})$ be defined as the solution of \begin{equation}\label{ihgai} \frac{1}{\pi}\int_{-\sqrt{2}}^{-s_{\gamma}} \sqrt{2-x^2} \mathrm{d} x = \gamma. \end{equation} The first result in this subsection is the analogue of Theorem \ref{maintheorem} for critical points of diverging index. \begin{theorem}\label{midguythm} For any sequence $k(N)$ satisfying \eqref{e:k}, as $N$ goes to infinity, \begin{eqnarray*} \lim_{N\rightarrow \infty} \frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k(N)}(B) &=& \sup_{y \in B} \bigg\{ \frac{1}{2} \log \frac{\nu''}{\nu'} + \frac{1}{2}\bigg(s_\gamma^2 - \frac{2\nu''}{\alpha^2}\big(s_\gamma-\frac{\nu'y}{(2\nu'')^\frac{1}{2}} \big)^2 - y^2\bigg)\bigg\} \\ &:=& \sup_{y \in B} \theta_{\gamma, \nu}(y). \end{eqnarray*} \end{theorem} \begin{remark} From Theorem \ref{midguythm} one can easily get analogues of Theorem \ref{t:nofiniteindex} and Corollary \ref{coracaocor} for the case of critical points with diverging index. Their statements are straightforward adaptations of the respective results; we leave them to the reader.
\end{remark} We also provide the complexity for the expected total number of critical values at a level of energy. Our next result can be described as follows: the mean number of critical points at level $u$ is asymptotically given by the mean number of local minima, local maxima or critical points of index $k(N) \sim \gamma(u) N$ if $u\leq -E_{\infty}', u\geq E_{\infty}', -E_{\infty}'\leq u \leq E_{\infty}',$ respectively. Here, $\gamma(u) \in (0,1)$ is such that $s_{\gamma(u)}= \sqrt{2} \frac{u}{E_{\infty}'}$, see \eqref{ihgai}. Precisely, define \begin{equation}\label{eq:qsela} \theta_\nu(u) = \begin{cases} \theta_{0,\nu}(u) \quad \text{if} \quad u\leq -E_\infty' \\ \theta_{0,\nu}(-u) \quad \text{if} \quad u\geq E_\infty' \\ \frac{1}{2} \bigg(\log \frac{\nu''}{\nu'} - \frac{\nu''-\nu'}{\nu'^2-\nu'+\nu''}u^2\bigg) = \sup_{\gamma \in (0,1)} \theta_{\gamma,\nu}(u) = \theta_{\gamma(u),\nu}(u) , \quad \text{otherwise}. \end{cases} \end{equation} \begin{theorem} The total number of critical points satisfies \label{t:complexityglobal} \begin{equation} \label{totalequation} \lim_{N\rightarrow\infty} \frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N}(B) = \sup_{u\in B} \theta_\nu(u) := \Theta_\nu(u). \end{equation} \end{theorem} \section{The Ground State Energy, Pure-like and Full Mixtures and 1-step Replica Symmetry Breaking.}\label{sec4} The goal of this section is to establish relations between the structure of the Parisi measure, the global minima of $H_{N,\boldsymbol \beta}$, and the results obtained for the asymptotic complexity in last section for different classes of mixed spin glass models. We start by recalling known results about the free energy at positive temperature, more precisely the Parisi formula as proved by Talagrand \cite{Talagrand}. \subsection{The Parisi Functional} The partition function of the $p$-spin spin glass is given by \begin{equation} Z_{N,\nu}(\beta)= \int_{S^{N-1}(\sqrt{N})} e^{-\beta H_{N,\boldsymbol \beta}(\boldsymbol \sigma)} \Lambda_N(\mathrm{d} \boldsymbol \sigma), \end{equation} where $\Lambda_N$ is the normalized surface probability measure on the sphere $S^{N-1}(\sqrt N)$. Let $M[0,1]$ the space of probability measures on $[0,1]$. By Theorem 1.1 of \cite{Talagrand}, if $\nu$ is convex, the following limit holds almost surely, \begin{equation} \label{ParisiFor} F_\infty(\beta):= \lim_{N\rightarrow \infty} \frac{1}{N} \log Z_{N,\nu}(\beta) = \inf_{\rho \in M[0,1]} F_\nu(\beta, \rho) \end{equation} A formula for $F_\nu(\beta, \rho)$ is given in (1.11) of \cite{Talagrand} and we reproduce it now. Given a probability measure $\rho$ on $[0,1]$ consider its distribution function $x_\rho : [0,1]\rightarrow [0,1]$. Write for $q \in [0,1]$: \begin{equation} \hat{x}(q) = \int_{q}^{1} x_\rho(s) \mathrm{d} \; s. \end{equation} Assuming that $x(\hat{q}) = 1$ for some $\hat{q} < 1$, then \begin{equation}\label{pqp} F_\nu(\beta, \rho) = \frac{1}{2}\bigg(\beta^2 \int_{0}^{1} x_\rho(q) \nu'(q) \mathrm{d} \; q + \int_{0}^{\hat{q}} \frac{\mathrm{d} \; q}{\hat{x}(q)} + \log(1-\hat{q})\bigg). \end{equation} If $\hat{q} = 1$, we set $F_\nu(\beta, \rho) = \infty.$ A measure that minimizes the right side of \eqref{ParisiFor} is called a Parisi Measure. It is believed that the nature (atomic, absolutely continuous) of the Parisi Measure is of extreme importance to understand the statics of the spin glass model \cite{Talagrand, ParisiMeasures}. It is a difficult task to handle the infinite dimensional variational principle in \eqref{ParisiFor}. 
However, in some cases, \eqref{ParisiFor} can be simplified. Let $M_1[0,1]$ be the space of atomic probability measures on $[0,1]$ that have at most $2$ atoms. Define $$F_1(\beta) := \inf_{\rho \in M_1[0,1]} F_{\nu}(\beta, \rho).$$ Clearly, $F_\infty(\beta) \leq F_{1}(\beta).$ When equality holds, that is when $F_1(\beta) = F_\infty(\beta)$, we say that the model has a \textbf{1 step replica symmetry breaking} (1RSB) at inverse temperature $\beta$. In \cite{Talagrand}, it was shown that the pure $p$-spin glass model is $1$-RSB for $\beta$ sufficient large if $p$ is even. It is an open question to determine mixtures and values of $\beta$ that 1RSB holds. Let \begin{equation} \label{GroundState} GS^N = \frac{1}{N} \inf_{\boldsymbol\sigma \in S^{N-1}(\sqrt N)} H_{N,\boldsymbol \beta}(\boldsymbol\sigma) \end{equation} be the normalized absolute minima of the Hamiltonian, i.e., the energy of its Ground State. A straight forward exercise shows that, for any $\epsilon >0$, with probability approaching one as $N$ goes to infinity, \begin{equation}\label{equatioprimi} -E_{0}(\nu) - \epsilon \leq GS^N \leq \liminf_{\beta \rightarrow \infty} - \frac{1}{\beta} F_{\infty}(\beta) + \epsilon. \end{equation} The main question we investigate in this section is whether the lower and upper bounds given on \eqref{equatioprimi} are optimal, that is, whether is possible to identify the limit ground state energy using the partition function and the asymptotic complexity. The question to find the Ground State Energy is one of the foundational and most relevant questions in the study of spin glass system among probabilists \cite[Chapter 1]{Talagrandbook}. The left and right sides of \eqref{equatioprimi} are quantities that come from different computations, complexity and free energy, respectively. So, a priori, there is no reason to expect that these bounds match. We start our analysis by the following fact: The upper bound in \eqref{equatioprimi} is optimal in the sense that \begin{theorem}\label{Msri1} For any convex covariance function $\nu$: \begin{enumerate} \item The following limit exists \begin{equation} \lim_{\beta \rightarrow \infty} \frac{1}{\beta} F_\infty(\beta) \equiv f_{\infty} \in [0,\infty). \end{equation} \item The ground-state energy $GS^N$ converges almost surely to $-f_{\infty}.$ \end{enumerate} \end{theorem} We turn to ask the same question about the lower bound. Our approach is simply to try to prove that $f_\infty = E_0(\nu)$ directly\footnote{One could argue that if we prove a concentration result for the number of local minima then $E_0(\nu) = f_{\infty} = \lim GS_N$ a.s.. Unfortunately, we still do not know how to control the second moment of $\mathop{\mathrm{Crt}}\nolimits_{0,\nu}$, since we could not derive a manageable formula like \eqref{e:exactk} nor we could remove the expectation in Theorem \ref{critical2}.}. The problem is that handling $f_\infty$ is not an easy task since it comes from a rather complicated object, the infimum of the Parisi functional over the space of probability measures on $[0,1]$. Instead our approach is to compare $E_0(\nu)$ to the analogous constant as if the model was $1$ RSB at low temperature. 
We will show in Lemma \ref{lemmadacon} that for any mixture the following limit exists \begin{equation}\label{f1func} f_1 := \lim_{\beta \rightarrow \infty} \frac{1}{\beta} F_1(\beta) = \inf_{(a,b) \in [\epsilon,\infty)^2} \bigg\{ \frac{1}{2}\Big(b + \nu'a + \frac{1}{b}(\log \frac{a+b}{a})\Big) \bigg\}, \end{equation} where $\epsilon$ is a positive constant depending on $\nu$. When $f_1 = f_\infty$ we say that the model is {\bf 1-RSB at zero temperature}. Surprisingly, the comparision between $f_1$ and $E_0$ will heavily depend on the structure of the mixture $\nu$ and on the bottom landscape of $H_{N,\boldsymbol \beta}$. \subsection{Pure-like mixtures and full mixtures.} \indent In this subsection we relate the Parisi Functional to the asymptotic complexity of spin glasses and derive more precise information about the landscape of $H_{N,\boldsymbol \beta}$. We first identify the regions of mixtures mentioned in the introduction. We refer the reader to Figure 3. Let \begin{equation} G(\nu',\nu''):= \log \frac{\nu''}{\nu'} -\frac{(\nu''-\nu') (\nu''-\nu' + \nu'^2)}{\nu'' \nu'^2} = \theta_{0,\nu}(-E_\infty). \end{equation} \begin{definition} A mixture $\nu$ is called a \textit{pure-like} mixture if and only if $G(\nu',\nu'') > 0$. If $G(\nu',\nu'')<0$, $\nu$ is called a \textit{full mixture}. When $G(\nu',\nu'')=0$, $\nu$ is called \textit{critical}. \end{definition} \begin{example} One can easily verify that all pure $p-$spins, $\nu(x)=x^p$, $p\geq3$ are pure-like while the spherical SK model, $p=2$, is critical. A picture of these regions is given in Figure 3. \end{example} \begin{example}\label{example} Consider the case \begin{equation}\label{poichi} \nu(t) = \mu t^2 + (1-\mu) t^p \end{equation} where $\mu \in [0,1].$ Then, if $p > 3$ then it is possible to show that there exists a $ 0<\mu_c(p)<1$ such that $\nu$ is \textit{pure-like} if and only if $\mu \leq \mu_c(p)$. $\mu_c(p)$ is given as the unique zero in $(0,1)$ of \begin{equation*} -\frac{(p^2-2p)(1-\mu) \left(2(p^2-p)-3 (p^2-2p)\mu +(p-2)^2 \mu ^2\right)}{2((p^2-p)(1-\mu )+2 \mu ) (p+2\mu -p \mu)^2} \quad + \frac{1}{2}\log\left[1+p-\frac{2p}{p+2 \mu -p \mu }\right] \end{equation*} see Figure 2. Remarkably, $p = 3$ in \eqref{poichi} is the only case where the mixture is a pure-like mixture for all values of $t$. \end{example} \begin{figure}\label{fig0101} \centering \includegraphics{figura2} \caption{Function $G(\nu', \nu'')$ in the case $\nu = \mu t^2 + (1-\mu) t^{10}$.} \end{figure} Our first statement concerning pure-like mixtures is the following result about the bottom landscape. Let \begin{equation*} E_\infty^{+} = \frac{2\nu'\sqrt{\nu''} + \sqrt{4\nu''\nu'^2 - (\nu''+\nu')(2(\nu''-\nu'+\nu'^2)-(\nu''+\nu'-\nu'^2)\log{\frac{\nu''}{\nu'}})}}{\nu''+\nu'}. \end{equation*} It follows directly from the definition of pure -like and \eqref{def:Ek} that: \begin{proposition}\label{theoasd} If $\nu$ is a pure-like mixture then the sequence $E_k(\nu)$ is strictly decreasing and $E_k(\nu)$ converges to $E_{\infty}^{+}$ as $k$ goes to infinity. \end{proposition} Last Theorem combined with Theorem \ref{t:nofiniteindex} says if the mixture $\nu$ is \textit{pure-like} then the landscape of $\nu$ at low levels of energy is similar to the pure case. In particular, the same interesting layered structure for the lowest critical values of the Hamiltonian $H_{N,\boldsymbol \beta}$ holds. 
Namely, the lowest critical values above the ground state energy are (with an overwhelming probability) only local minima, this being true up to the value $-NE_1(\nu)$, and that in a layer above, $(-NE_1(\nu), -NE_2(\nu))$, one finds only critical values with index 0 (local minima) or saddle point with index $1$, and above this layer one finds only critical values with index $0, 1$ or $2$, etc. Using the fact that $f_1$ comes from an easier variational principle, Lemma 5.3 of \cite{ABC} shows that miraculously in the pure case with $\nu(x)=x^p, p$ even, \eqref{equatioprimi} is optimal as indeed we have $E_{0}(\nu) = f_{\infty} = f_{1}$. This result extends as: \begin{theorem}\label{tris11} If $\nu$ is pure-like or critical then $f_1 = E_0(\nu)$. \end{theorem} Combining Theorems \ref{Msri1} and \ref{tris} we have the following: \begin{corollary}\label{caboou} In the case of $\nu$ pure-like or critical, concentration of $\mathop{\mathrm{Crt}}\nolimits_{N,0}(-\infty,u)$ around its mean implies 1-RSB at zero temperature. \end{corollary} A word of comment is needed here. It is reasonable (although we do not have a proof at the moment) to believe that $\frac{1}{N}\log \mathop{\mathrm{Crt}}\nolimits_{N,0}(-\infty,u)$ concetrates around its mean. However, in Theorem \ref{critical2} we study an "averaged" complexity instead of the possible smaller "quenched" complexity: \begin{equation} \lim_{N \rightarrow \infty}\frac{1}{N}\ensuremath{\mathbb{E}} \log \mathop{\mathrm{Crt}}\nolimits_{N,0}(-\infty,u). \end{equation} It is not clear if (and when) both quantities agree. We conjecture that in the pure-like region "quenched" is equal to "averaged" and indeed we have 1-RSB at zero temperature. We also believe that when $\nu$ is a full mixture the averaged complexity is indeed larger than the quenched complexity. These conjectures are supported by physicists \cite{Leuzzi} and by the following result that tells us that the complexity of minima can be constructed from the 1 RSB Parisi functional at zero temperature and vice-versa. For $b \in (0,\infty)$ define \begin{equation}\label{legendreq} f_1(b)= \inf_{a\in(0,\infty)}\bigg\{ \frac{1}{2}\Big(b + \nu'a + \frac{1}{b}(\log \frac{a+b}{a})\Big) \bigg\}. \end{equation} and set \begin{equation} c_\nu=\frac{\nu'-2}{\sqrt{\nu'(\nu'-1)}}, \quad g_1(x)= \begin{cases} -x f_1(x), \quad x>c_\nu \\ -c_\nu f_1(c_\nu), \quad x\leq c_\nu. \end{cases} \end{equation} \begin{theorem}\label{Legendre} If $\nu$ is pure like then for all $u<-E_\infty$ \begin{equation} \theta_{0,\nu}(u) = \min_{b\in[c_\nu,\infty)} \bigg( ub - b f_1(b)\bigg). \end{equation} Moreover, $g_1(x)$ is a convex fucntion, strictly convex in $(c_\nu,\infty)$ and if we set for $u>E_\infty$, $\psi(u) = -\theta_{0,\nu}(-u)$ then $\psi$ is the Legendre-Fenchel conjugate of $g_1(x)$: \begin{equation} \psi(u) = \max_{x\in \ensuremath{\mathbb{R}}} \bigg(ux - g_1(x) \bigg). \end{equation} \end{theorem} \begin{remark} In fact, such duality is the first sign at zero temperature of an apparently deeper connection predicted by physicists \cite{CavagnaGia, Cavagna} between the Parisi Functional at finite temperature and the TAP complexity (see \cite{ABC}, section 6 for a definition of TAP equations). We plan to explore this connection in the future. \end{remark} The above connection does not hold if $\nu$ is a full mixture. We end this section by analyzing this case. 
If $\nu$ is either \textit{critical} or a \textit{full mixture} it follows from Theorem \ref{critical2} that for any $k$ finite the mean number of critical points of index $k$ are asymptotically equal at any possible level of energy. In particular, \begin{corollary}\label{trivialfm} If $\nu$ is either critical or a full mixture then for any $k, k' \in \ensuremath{\mathbb{N}}$, \begin{equation} E_k(\nu) = E_{k'}(\nu) = E_0(\nu). \end{equation} Furthermore, for any $k \in \ensuremath{\mathbb{N}}$ the probability of finding a critical value of index $k$ below the level $-N(E_{0}(\nu)+\varepsilon)$ is exponentially small in $N$. \end{corollary} \begin{theorem}\label{tris} If $\nu$ is full mixture, not necessarily convex, then $f_1<E_0(\nu)$. \end{theorem} If $\nu$ is a full mixture, last theorem combined with Theorem \ref{Msri1} immediately implies that $-E_0(\nu) < -f_1 \leq GS_N$ for $N$ large enough with probability one. Hence, we can not remove the expectation from Theorem \ref{critical2}. Moreover, \begin{corollary} If $\nu$ is a full mixture, then for any $u \in (-E_0(\nu), -f_\infty)$ the probability of having a critical value below $u$ goes to zero while the mean number of local minima is exponentially large in $N$. Namely for such $u$ there exist constants $0< C_1 < C_2$ such that for $N$ sufficiently large \begin{equation} \ensuremath{\mathbb{E}}\mathop{\mathrm{Crt}}\nolimits_{N,0}(-\infty, u) \geq e^{N C_1}, \quad \text{and} \quad \ensuremath{\mathbb{P}}\bigg(\mathop{\mathrm{Crt}}\nolimits_N(-\infty,u)> e^{N C_1} \bigg) \leq e^{-N C_2}. \end{equation} \end{corollary} \begin{figure}\label{zonas} \centering \includegraphics[scale=1]{zonas4} \caption{Graph of $\nu' \times \nu''$. In blue, the level set $G(\nu',\nu'')=0$ i.e. the case where $\nu$ is critical. Dotted lines are the possible values of $(\nu',\nu'')$ for the mixtures $2+6, 2+10$ and $4+30$. The gray region is outside the domain of possible values for $(\nu',\nu'')$.} \end{figure} \section{Euler characteristic of Level Sets}\label{Eulersec} In this section, we investigate the landscape of the Hamiltonian $H_{N, \boldsymbol \beta}$ by analyzing the mean Euler characteristic of level sets as $N$ goes to infinity. In order to state our results we need further notation. The Hermite functions $\phi_j$, $j\in \mathbb N$, are defined by \begin{equation}\label{HFd} \phi_j(x) = (2^jj!\sqrt{\pi})^{-1/2} h_j(x) e^{-\frac{x^2}{2}}, \end{equation} where $h_j$, $j\in \mathbb N$ are Hermite polynomials, \begin{equation}\label{eq1} h_j(x) = e^{x^2}(-\frac{\mathrm{d}}{\mathrm{d} x})^j e^{-x^2}. \end{equation} In particular, $h_0(x) =1, h_1(x)=2x, h_2(x)=4x^2-2x. $ The Hermite functions are orthonormal functions in $\ensuremath{\mathbb{R}}$ with respect to Lebesgue measure. We denote by $\chi(A_u)$ the Euler characteristic of a level set $$A_u := \{\boldsymbol \sigma \in S^{N-1}(\sqrt{N}): H_{N, \beta}(\boldsymbol \sigma) \leq N u \}.$$ $\chi(\cdot)$ is a topological invariant, integer valued function that is defined for any CW-complex as the alternating sum of Betti's numbers \cite{Betti}. It is a functional that is invariant under homotopies and satisfies \begin{equation}\label{eq:eulde} \chi(A \cup B) = \chi(A)+\chi(B)-\chi (A \cap B), \quad \chi(\mathbb B)=1 \quad \text{and} \quad \chi(S_N) = 1 + (-1)^{N-1} \end{equation} where $\mathbb B$ denotes a $N$-dimensional unit ball, $S_N$ the $N$-dimensional unit sphere and $A$, $B$ are CW-complexes. 
$\chi(\cdot)$ roughly measures the number of connected components and its number of attached cylindrical holes and handles. Since we are only interested on Euler characteristics of level sets of almost surely Morse functions, we use the equivalent definition that follows from Morse's theorem (see \cite[Theorem 9.3.2]{AT07}) : $$\chi(A_u) := \sum_{k=0}^{N-1} (-1)^{k} \mathop{\mathrm{Crt}}\nolimits_k(A_u).$$ The strategy of using Rice's formula to compute Euler characteristics of level sets was developed in \cite{AT07,TaylorJon, Taylor22} and also explored in \cite{Azaisbook}. In fact, in a similar fashion, we prove the following proposition: \begin{proposition}\label{eulerexact} \begin{equation}\label{eqmix} \ensuremath{\mathbb{E}} \chi(A_u) = (-1)^{N-1} \bigg(\frac{\nu''}{\nu'}\bigg)^{\frac{N-1}{2}} \frac{2^{-(N-1)}N}{\sqrt{\pi}\Gamma(\frac{N}{2})}\int_{-\infty}^{\infty} \int_{-\infty}^{u} h_{N-1}\big( \frac{\sqrt{N}(\nu'x -\alpha y)}{\sqrt{2\nu''}}\big) e^{-\frac{N}{2} (x^2+y^2)} \mathrm{d} x \mathrm{d} y. \end{equation} \end{proposition} Our main result in this section is the asymptotic formula for $\ensuremath{\mathbb{E}} \chi(A_u)$ and its relation to the asymptotic complexity of the total number of critical points (see \eqref{totalequation}). \begin{theorem}\label{meaneulerasym} The mean Euler-Poincar\'{e} characteristic $\ensuremath{\mathbb{E}} \chi(A_u)$ satisfies the following: \begin{enumerate} \item If $u \leq -E_\infty'$, \begin{equation} \lim_{N \rightarrow \infty} \frac{1}{N} \log \ensuremath{\mathbb{E}} \chi (A_u) = \Theta_{\nu}(u). \end{equation} \item If $ -E_\infty' < u \leq 0 $, with $u = -E_\infty' \cos \omega$, $\omega \in (0,\pi)$ \begin{equation} \ensuremath{\mathbb{E}} \chi (A_u) = (-1)^{N-1}\frac{c(N,\nu)}{2^{\frac{1}{4}}\pi^{\frac{1}{2}} N^{\frac{5}{4}}} \frac{e^{N \Theta_\nu(u)}}{f(\omega)(\sin \omega)^{1/2}} \sin\bigg[ N \tau(\omega) + \rho(\omega)\bigg](1 + O(N^{-1})). \end{equation} where \begin{equation*} \tau(\omega) = \frac{1}{2}\big(\sin 2\omega - 2\omega \big), \quad \rho(\omega) = -\frac{1}{2}\tau(\omega) + \frac{3\pi}{4} + \alpha(\omega), \end{equation*} $c(N,\nu)$ is given in \eqref{cnnu} and $f(\omega)$, $\alpha(\omega)$ are given in \eqref{finalalpha}. \item If $u>0$ we have $\ensuremath{\mathbb{E}}\chi(A_u) = \ensuremath{\mathbb{E}}\chi(A_{-u})$ for $N$ even and $\ensuremath{\mathbb{E}}\chi(A_u) = 2-\ensuremath{\mathbb{E}}\chi(A_{-u})$ for $N$ odd. \end{enumerate} \end{theorem} Let us describe in words the landscape picture emerging from Theorem \ref{meaneulerasym}. Roughly speaking, Theorem \ref{meaneulerasym} says that the mean Euler Characteristic of $A_u$ is in absolute value asymptotically equal to the total number of critical points \textbf{at level} $Nu$ if $u < E_0$. This picture is fairly intuitive and easy to explain in the bottom of the landscape. As we increase the energy level $u$ from negative infinity to $- E_\infty'$, the level set $A_u$ is "essentially" a union of disjoint simply connected neighborhoods of local minima. Since these are exponentially large and dominate the total number of critical points, the mean Euler characteristic is positive and of same size. As we cross the level $-E_\infty'$, local minima cease to dominate. The total number of critical points and the Euler characteristic (in absolute value) is given by the critical values of dominant divergent index. The landscape is then hard to visualize. 
By increasing a tiny amount of energy it oscillates from a large positive to a large negative Euler characteristic (and vice versa). This oscillation continues up to level $ E_\infty'$. It would be of interest to find a simple and intuitive geometric reason for this large oscillation. By symmetry above $ E_\infty'$ we have "essentially" covered the whole sphere minus an exponentially large number of disjoint simply connected sets. \begin{remark} The above Theorem also holds as stated in the pure $p$-spin case. Only the complexity function $\Theta_\nu(u)$ needs to be replaced by its analogue given in Theorem 2.8 of \cite{ABC} (see also Remark \ref{abccase} below). \end{remark} Proposition \ref{eulerexact} and Theorem \ref{meaneulerasym} are proven in Section \ref{eulerproof}. \section{Complexity of critical points}\label{sec2} \subsection{Main Identity} In this section, we introduce the main identity that relates the mean number of critical points of index $k$ with the $k$-th smallest eigenvalue of the Gaussian Orthogonal Ensemble. This identity, given in Proposition \ref{identity}, is the analogous of Theorem 2.1 of \cite{ABC} and it is the first step of the proofs of Theorems \ref{critical2}, \ref{t:nofiniteindex}, \ref{t:complexityglobal} and \ref{theoasd}. We fix our notation for the Gaussian Orthogonal Ensemble (GOE). The GOE is a probability measure on the space of real symmetric matrices. Namely, it is the probability distribution of the $N \times N$ real symmetric random matrix $M^N$, whose entries $(M_{ij}, i\leq j)$ are independent centered Gaussian random variables with variance \begin{equation} \label{e:Ms} \mathbb E M_{ij}^2 = \frac{1+\delta_{ij}}{2N}. \end{equation} We will denote by $\ensuremath{\mathbb{E}}^N_{{\text{GOE}}}$ the expectation under the GOE ensemble of size $N\times N$. Let $\lambda^N_0 \leq \lambda^N_1\le\dots\leq \lambda^N_{N-1}$ be the ordered eigenvalues of $M^N$. \begin{proposition}\label{identity} The following identity holds for all $N$, $\nu$, $k\in\{0,\dots,N-1\}$, and for all Borel sets $B \subset \mathbb R$, \begin{equation} \label{e:exactk} \ensuremath{\mathbb{E}} [\mathop{\mathrm{Crt}}\nolimits_{N,k}(B)] = C(N,\nu',\nu'') \int_{B} \ensuremath{\mathbb{E}}_{\text{GOE}}^N \bigg[ \exp\bigg\{\frac{N}{2}\bigg((\lambda_{k}^{N})^2 - y^2 - \frac{2\nu''}{\alpha^2}\big(\lambda_{k}^{N}-\frac{\nu'y}{(2\nu'')^\frac{1}{2}} \big)^2\bigg)\bigg\} \bigg]\mathrm{d} y, \end{equation} where $C(N,\nu',\nu'')= 2 \sqrt{\frac{2\nu''N}{\nu'\pi \alpha^2}}(\frac{\nu''}{\nu'})^{\frac N2}$. \end{proposition} \begin{proof} Proof of Proposition \ref{identity} is a rewrite of the proof of Theorem 2.1 of \cite{ABC} with one subtle difference: the law of the Hessian in the mixed case gains an independent Gaussian component on its diagonal. In this proof, we use $H$ to denote $H_{N,\boldsymbol \beta}$. The hypothesis on $\nu$ allows us to apply Rice's Formula, in the form of Lemma 3.1 of \cite{ABC}. 
It says that using $\mathrm{d} \boldsymbol\sigma$ to denote the usual surface measure on $S^{N-1}(\sqrt{N})$, \begin{align} \label{e:metak} \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B)= \int_{S^{N-1}(\sqrt{N})} \ensuremath{\mathbb{E}} \big[ | \det \nabla^2H(\boldsymbol\sigma) | \ensuremath{\boldsymbol 1}\{H(\boldsymbol\sigma) \in N B,i(\nabla^2 H(\boldsymbol \sigma)))=k\}\, \big|\, &\nabla H(\boldsymbol\sigma) = 0 \big]\\ &\times \phi_{\boldsymbol\sigma}(0) \mathrm{d} \boldsymbol\sigma \nonumber \end{align} where $\phi_{\boldsymbol\sigma}$ is the density of the gradient vector of $H$. Now, since $H$ is invariant under rotations, to compute the above expectation it is enough to study the joint distribution of $(H,\nabla H, \nabla^2 H)$ at at the north pole $\boldsymbol n$. We fix a orthogonal base for the tangent plane at the north pole, and we consider $\nabla H (\boldsymbol n), \nabla^2 H(\boldsymbol n)$ with respect to that base. Denoting subscript by a derivative according to a orthonormal basis in $T_{\bold \sigma} S^{N-1}(\sqrt{N})$ we have that \begin{lemma} \label{l:conditioning} For all $1\le i\le j\le N-1$, \begin{equation*} \label{e:covariancesa} \begin{aligned} &\mathbb E[H(\boldsymbol n)^2]=N,\\ &\mathbb E[H(\boldsymbol n) H_{ij}(\boldsymbol n)]=-\nu' \delta_{ij},\\ \end{aligned} \qquad \begin{aligned} &\mathbb E[H(\boldsymbol n)H_i(\boldsymbol n)]= \mathbb E[H_i(\boldsymbol n)H_{jk}(\boldsymbol n)]=0,\\ &\mathbb E[H_i(\boldsymbol n)H_j(\boldsymbol n)]=\nu' \delta_{ij},\\ \end{aligned} \end{equation*} and \begin{equation*} \label{e:covariancesb} \mathbb E[H_{ij}(\boldsymbol n)H_{kl}(\boldsymbol n)]= \frac{1}{N}[ \nu''(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})+ (\nu'' + \nu') \delta_{ij}\delta_{kl}]. \end{equation*} Furthermore, under the conditional distribution $\mathbb P[\cdot | H(\boldsymbol n)=x]$ the random variables $H_{ij}(\boldsymbol n)$ are Gaussian variables with \begin{equation*} \mathbb E[H_{ij}(\boldsymbol n)]=- \frac{x}{N} \nu'\delta_{ij} \end{equation*} and \begin{equation*} \mathbb E\big[H_{ij}(\boldsymbol n)H_{kl}(\boldsymbol n)\big]= \frac{1}{N} [\nu''(1+\delta_{ij})\delta_{ik}\delta_{jl} + \alpha^2 \delta_{ij}\delta_{kl}]. \end{equation*} i.e., if $M^{N-1}$ is distributed as a $(N-1)\times (N-1)$ GOE matrix \begin{equation*} \ensuremath{\mathbb{E}}\big[\nabla^2 H | H(\boldsymbol n)] \stackrel{d}{=}(\frac{N-1}{N}2\nu'')^{1/2}M^{N-1} + \frac{1}{\sqrt{N}}(\alpha Z - \frac{1}{\sqrt{N}}\nu' H(\boldsymbol n)) I \end{equation*} where $Z$ is an independent standard Gaussian. \end{lemma} Last Lemma implies that \eqref{e:metak} can be rewritten as \begin{equation} \label{e:baa} \begin{split} &\mathbb E \mathop{\mathrm{Crt}}\nolimits_{N,k}(B) \\ &= \omega_N \ensuremath{\mathbb{E}} \Bigg[ \ensuremath{\mathbb{E}} \Big[ \left| \det \big( (\frac{N-1}{N}2\nu'')^{1/2}M^{N-1} + \frac{1}{N}(\sqrt{N}\alpha Z - \nu' H(\boldsymbol n)) I \big)\right| \\ &\times \ensuremath{\boldsymbol 1}\Bigg\{i\big[(\frac{N-1}{N}2\nu'')^{\frac{1}{2}}M^{N-1} + (\alpha \frac{Z}{\sqrt{N}} - \nu'\frac{H(\boldsymbol n)}{N}) I\big]=k\Bigg\} \ensuremath{\boldsymbol 1}\{H(\boldsymbol n) \in N B\} \Big| H(\boldsymbol n) \Big] \Bigg] \phi_{\boldsymbol n }(\boldsymbol n) , \end{split} \end{equation} where $\omega_N$, the volume of the sphere $S^{N-1}(\sqrt{N})$, and $\phi_{\boldsymbol n}(\boldsymbol n)$ are given by \begin{equation} \label{e:aab} \omega_N=(\sqrt{N})^{N-1}\frac{2\pi^{N/2}}{\Gamma (N/2)}, \qquad \phi_{\boldsymbol n}(\boldsymbol n)= (2 \pi \nu')^{-(N-1)/2}. 
\end{equation} Since we can assume $\alpha\neq0$ (the case $\alpha =0$, i.e. the pure p-spin was treated in \cite{ABC}), we can rewrite the conditional expectation in \eqref{e:baa} as \begin{equation}\label{poiloas} \frac{\sqrt{N}}{\sqrt{2\pi}} (2 \nu'' \frac{N-1}{N})^{\frac{N-1}{2}} \int_{B} e^{\frac{-N y^2}{2}} \ensuremath{\mathbb{E}} \left| \det \big(M^{N-1} - X(y)\big) I \right| \ensuremath{\boldsymbol 1}\Big\{i\big[M^{N-1} - X(y) I\big]=k\Big\} \mathrm{d} y \end{equation} where $X(y)$ is a Gaussian random variable with mean $m= \frac{\sqrt{N}\nu'y}{(2\nu''(N-1))^{1/2}}$ and variance $t^2 = \frac{\alpha^2}{2\nu''(N-1)}$. Hence, we can apply Lemma 3.3 of \cite{ABC} with $G=\ensuremath{\mathbb{R}}$ to get that \eqref{poiloas} is equal to \begin{equation}\label{e:losi} \frac{\Gamma(\frac{N}{2})(\frac{N-1}{N})^{-\frac{N}{2}}}{\sqrt{\pi t^2}} \int_{B} \ensuremath{\mathbb{E}}_{\text{GOE}}^N \bigg[ \exp\bigg\{\frac{N}{2}\bigg((\lambda_{k}^{N})^2 - y^2 - \frac{2\nu''}{\alpha^2}\big(\lambda_{k}^{N}-\frac{\nu'y}{(2\nu'')^\frac{1}{2}} \big)^2\bigg)\bigg\} \mathrm{d} y \end{equation} Putting \eqref{e:baa}, \eqref{e:aab} and \eqref{e:losi} together we end the proof of Proposition \ref{identity}. \end{proof} \subsection{Proof of Theorems \ref{critical2}, \ref{t:nofiniteindex} , \ref{midguythm} and \ref{t:complexityglobal}}\label{sectionlo} \subsubsection{Proving Theorem \ref{critical2} and Proposition \ref{remarkmaxima}.} In this subsection, we will compute the logarithm asymptotics of the left-side of \eqref{e:exactk}. Let $F:\ensuremath{\mathbb{R}}^2 \rightarrow \ensuremath{\mathbb{R}}$ be given by \begin{equation}\label{fdefini} F(\lambda,y) = \frac{1}{2}\bigg( - \frac{\nu''+\nu'}{\nu''+\nu'-\nu'^2}y^2 + \frac{2\sqrt{2}\sqrt{\nu''}\nu'}{\nu''+\nu'-\nu'^2} \lambda y-\frac{\nu''-\nu'+\nu'^2}{\nu''+\nu'-\nu'^2} \lambda^2\bigg). \end{equation} Note that $ F(\lambda,y) = -a y^2+ by\lambda -c \lambda^2$ for some constants $a,b,c>0$. Let \begin{equation}I_1(x) = \int_{\sqrt{2}}^{x} \sqrt{z^2-2} \mathrm{d} z = \frac{1}{2} \left(x \sqrt{x^2-2}+\log[2]-2 \log\left[ \left(x+\sqrt{x^2-2}\right)\right]\right).\end{equation} For any $k \in \ensuremath{\mathbb{N}}$ fixed, let \begin{equation}\label{complexityfunction2} \theta_{k,\nu}(u) = \begin{cases} \frac{1}{2} \log \frac{\nu''}{\nu'} + F(-\sqrt{2},u) , \quad \text{if} \quad -E_\infty \leq u, \\ \frac{1}{2} \log \frac{\nu''}{\nu'} + F(\lambda^*_k[u],u)-(k+1) I_1(|\lambda^*_k[u]|) , \quad \text{if} \quad u \leq -E_\infty \end{cases} \end{equation} where$ \frac{\nu'\sqrt{2\nu''}u}{\nu''-\nu'+\nu'^2}<\lambda^*_k[u]\leq-\sqrt{2}$ is given by \begin{equation*} \Psi'(\lambda^*_k[u]) = 0, \quad \Psi(x)= \frac{2 \nu' \sqrt{2\nu''}}{\alpha^2} u x- \frac{\nu''-\nu'+\nu'^2}{\alpha^2} x^2 - 2(k+1) I_1(|x|), \end{equation*} i.e., $\lambda^*_k[u]$ is a solution on $(-\infty,-\sqrt{2}]$ of \begin{equation}\label{inlambda} \frac{\nu' \sqrt{2\nu''}}{\alpha^2} u - \frac{\nu''-\nu'+\nu'^2}{\alpha^2} \lambda^*_k[u] + (k+1) \sqrt{(\lambda^*_k[u])^2-2}= 0. \end{equation} Our goal in this section is to prove that $\theta_{k,\nu}$ is the $k$-complexity function. When $k=0$ the formula for $\theta_{0,\nu}$ simplifies as follows. 
\begin{proposition}\label{equalityinthe} For all $u \in \ensuremath{\mathbb{R}}$, \begin{equation} \theta_{0,\nu}(u) = \begin{cases} \frac{1}{2} \left(\log[\frac{\nu''}{\nu'}] -\frac{u^2 (\nu'+\nu'')}{\nu'-\nu'^2+\nu''}+\frac{4 u \nu' \sqrt{\nu''}}{\nu'-\nu'^2+\nu''}-\frac{2 \left(-\nu'+\nu'^2+\nu''\right)}{\nu'-\nu'^2+\nu''}\right) , \quad \text{if} \quad -E_\infty \leq u, \\ \frac{1}{2}\log[\nu'-1]-\frac{u^2 (\nu'-2)}{4 (\nu'-1)}- I_1(-\frac{u \nu'}{\sqrt{2} \sqrt{\nu'(\nu'-1)}}) , \quad \text{if} \quad u \leq -E_\infty. \end{cases} \end{equation} \end{proposition} \begin{remark}\label{abccase} It is possible to recover all complexity functions of the pure case by taking $\alpha$ to zero (i.e. recover the first results of \cite{ABC}). In particular, if $\alpha=0$, $E'_\infty = E_\infty$ and we do not have the intermediate regions where the $k$- complexity functions are equal for different $k$ and non-constant. \end{remark} We postpone the proof of Proposition \ref{equalityinthe} to the end of this subsection since we will need another characterization of $\theta_{k, \nu}$. \begin{proof}[Proof of Theorem \ref{critical2}.] To prove Theorem \ref{critical2} it suffices to show that $\theta_{k,\nu}(u)$ is the logarithm asymptotic limit of the left hand side of \eqref{e:exactk}. First, note that we can rewrite \eqref{e:exactk} as \begin{equation}\label{acopado} C_N \ensuremath{\mathbb{E}} e^{-N \Lambda(\lambda^N_k,Y_N)} \ensuremath{\boldsymbol 1}\{Y_N \in B\}. \end{equation} where $Y_N$ is a Gaussian random variable of mean zero and variance $N$ independent of $\lambda^N_k$, $\ensuremath{\mathbb{E}}$ is the expectation with respect to GOE and $Y_N$ and \begin{equation}\label{ploeamoreadedio} \lim_{N\rightarrow \infty} \frac{1}{N} \log C_N = \frac{1}{2} \log \frac{\nu''}{\nu'}, \qquad \Lambda(\lambda,y)= F(\lambda,y) + \frac{y^2}{2} = \frac{1}{2}\bigg(\lambda^2 - \frac{2\nu''}{\alpha^2}\big(\lambda-\frac{\nu'y}{(2\nu'')^\frac{1}{2}} \big)^2\bigg). \end{equation} By the independence of $Y_N$ and $\lambda_k^N$ and Theorem A.1 of \cite{ABC}, the sequence of random variables $(\lambda^N_k,Y_N)$ satisfies a large deviation principle of speed $N$ and rate function \begin{equation*} I_k(\lambda,x) = \begin{cases}\frac{x^2}{2} + (k+1) I_1(|\lambda|), \quad \text{if} \quad \lambda \leq - \sqrt{2}, \\ \infty, \quad \text{otherwise}. \end{cases} \end{equation*} Therefore, in view of \eqref{acopado} and \eqref{ploeamoreadedio}, we can apply Laplace-Varadhan Lemma (see e.g.~\cite{AmirOfer}, Theorem 4.3.1 and Exercise~4.3.11) and get that \begin{equation*}\label{erkow} \lim_{N\rightarrow \infty} \frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B) = \frac{1}{2} \bigg[ \log \frac{\nu''}{\nu'} +\max_{x \in B, \lambda \leq - \sqrt{2}} \bigg\{\lambda^2 - \frac{1}{\alpha^2}(\nu'x- \sqrt{2\nu''}\lambda)^2 - 2 I_k(\lambda,x)\bigg\} \bigg]. \end{equation*} We will now analyse the above variational principle. We start by the case of $B=(-\infty, u)$. We want to find \begin{equation}\label{erkoe} \max_{x \leq u, \lambda \leq -\sqrt{2}} \bigg\{-x^2 + \lambda^2 - \frac{1}{\alpha^2}(\nu'x- \sqrt{2\nu''}\lambda)^2 -2 (k+1) I_1(|\lambda|)\bigg\}. \end{equation} \vspace{0.3cm} \textit{Case $u\ge - E'_\infty$}: If $u \ge -E'_\infty$ then we maximize \eqref{erkoe} in $x$ first. The maximum is obtained at $x = x_\lambda :=\frac{\nu' \sqrt{2\nu''}}{\nu''+\nu'} \lambda \leq u$. 
Plugging $x_\lambda$ back in \eqref{erkoe}, we get an increasing function in $\lambda$, since $I_1(|\lambda|)$ is itself decreasing. Thus the maxima is realized at \begin{equation*} x = x_\lambda, \qquad \lambda = -\sqrt{2}. \end{equation*} This together with \eqref{erkow} proves Theorem \ref{critical2} in the case $B=(-\infty,u)$ with $- E'_\infty \leq u$. \vspace{0.3cm} \textit{Case $u\le - E'_\infty$}: In the case $u\le - E'_\infty$, $x_\lambda \leq u$ if and only if $ \lambda \leq \frac{\sqrt{2}u}{E'_\infty}$. Therefore if $x^*$ maximizes \eqref{erkoe} then \begin{equation}\label{casdf} x^*= x_\lambda \Leftrightarrow \lambda \leq \frac{\sqrt{2}u}{E'_\infty} \quad \text{and}\quad x^*= u \Leftrightarrow \frac{\sqrt{2}u}{E'_\infty} \leq \lambda \leq -\sqrt{2}. \end{equation} If we plug in the correspondent values of $x$ in each region we note that in the first case our function is again increasing in $\lambda$. Furthermore, since at $\lambda= \frac{\sqrt{2}u}{E'_\infty}$, $x_\lambda=u$, we are led to the following variational principle valid in both cases of \eqref{casdf} \begin{align}\label{rafuta} &\max_{\frac{\sqrt{2}u}{E'_\infty} \leq \lambda \leq -\sqrt{2}} \bigg\{ -u^2 + \lambda^2 - \frac{1}{\alpha^2}(\nu'u- \sqrt{2\nu''}\lambda)^2 - 2(k+1) I(|\lambda|)\bigg\}= \nonumber \\ &= -(1 + \frac{\nu'^2}{\alpha^2})u^2 + \max_{\frac{\sqrt{2}u}{E'_\infty} \leq \lambda \leq -\sqrt{2}} \bigg\{ \frac{2 \nu' \sqrt{2\nu''}}{\alpha^2} u \lambda - \frac{\nu''-\nu'+\nu'^2}{\alpha^2}\lambda^2 - 2(k+1) I_1(|\lambda|)\bigg\} \nonumber \\ &= -(1 + \frac{\nu'^2}{\alpha^2})u^2 + \max_{\frac{\sqrt{2}u}{E'_\infty} \leq \lambda \leq -\sqrt{2}} \Psi(\lambda) = \max_{\frac{\sqrt{2}u}{E'_\infty} \leq \lambda \leq -\sqrt{2}} \Gamma(\lambda). \end{align} Note that $\Psi(\lambda)$ is a parabola $a\lambda^2 + b \lambda, a<0$ plus an increasing function. The critical point of the parabola is given by \begin{equation}\label{lambdacritical} \lambda_c = \frac{\nu'\sqrt{2\nu''}u}{\nu''-\nu'+\nu'^2} \geq - \sqrt{2} \Longleftrightarrow u \geq - E_\infty. \end{equation} Therefore if $u \geq -E_\infty$, $\Psi$ is an increasing function in $\lambda$, so its maximum is attained at $\lambda=-\sqrt{2}$. This proves the Theorem in the region $-E_\infty \leq u \leq - E'_\infty$. \vspace{0.1cm} If $u < -E_\infty$, equation \eqref{lambdacritical} and the facts that $\Psi'(-\sqrt{2})<0$ and $\Psi'(\lambda_c)>0$ imply that the maximum is taken in the interior of the interval $[\lambda_c,-\sqrt{2}]$ at $\lambda_k^*[u]$. This ends the proof of the Theorem in the case $B=(-\infty,u)$. Now, it is easy to extend it to any open set $B$. Let $u^*$ be the point that realizes the $\sup_{\{u\in B \}} \theta_{k,\nu}(u)$. From the continuity and uniqueness of a local maxima of $\theta_{k,\nu}$, it is clear that either $u^* = -E_\infty'$ or $u^*$ is in the boundary of $B$. Assume without loss of generality that there exists an increasing sequence $u_n$ in $B$ approaching $u^*$. Since $B$ is open, there exist $\epsilon_n >0$ such that \begin{eqnarray*} \ensuremath{\mathbb{E}} (\mathop{\mathrm{Crt}}\nolimits_{N,k}(-\infty,u_n) - \mathop{\mathrm{Crt}}\nolimits_{N,k}(-\infty, u_n-\epsilon_n)) &=& \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(u_n-\epsilon_n,u_n)\leq \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B) \\ &\leq& \ensuremath{\mathbb{E}}\mathop{\mathrm{Crt}}\nolimits_{N,k}(-\infty,u^*). 
\end{eqnarray*} But since $\theta_{k,\nu}$ is continuous and increasing for $u \leq -E_\infty'$ last equation implies \begin{equation*} \theta_{k,\nu} (u_n) \leq \lim_{N\rightarrow \infty}\frac{1}{N} \log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B) \leq \theta_{k, \nu}(u^*), \end{equation*} for all $n$, which proves Theorem \ref{critical2} for any $B$ open. \end{proof} It remains to prove Proposition \ref{equalityinthe}. We first need the following miraculous Lemma. \begin{lemma}\label{surla} For all $u<-E_\infty$, \begin{equation*} \frac{\partial}{\partial \nu''}\theta_{0,\nu}(u) = 0. \end{equation*} \end{lemma} \begin{proof} The proof relies on how we derived $\theta_{0,\nu}(u)$. When $u<-E_\infty$, $\theta_{0,\nu}(u)$ is the maximum over $\lambda$ of a functional $\Gamma$ (that depends on $\nu''$) given in \eqref{rafuta}. Its maximizer $\lambda^*(u)$ is the smallest root of a second degree polynomial that can be derived from \eqref{inlambda}. This second degree equation is given by $A + B \lambda + C \lambda^2 = 0$ where \begin{eqnarray}\label{sasawqwq} A&=& 2+\frac{2 u^2 \nu'^2 \nu''}{\left(\nu'-\nu'^2+\nu''\right)^2} \nonumber \\ B&=& -\frac{2 \sqrt{2} u \nu' \sqrt{\nu''} ((-1+\nu') \nu'+\nu'')}{\left(\nu'-\nu'^2+\nu''\right)^2}\\ C&=& \frac{2 \left((-1+\nu')^2 \nu'^2+\nu''^2\right)}{\left(\nu'-\nu'^2+\nu''\right)^2}. \nonumber \end{eqnarray} Now chain rule and the fact that $\lambda^*(u)$ is a maximizer imply that $\frac{\partial}{\partial \nu''}\theta_{0,\nu}(u) = 0$ if and only if $\frac{\partial}{\partial \nu''} \bigg( \Gamma(\lambda^*(u)) \bigg)=0$ if and only if $\bigg(\frac{\partial}{\partial \nu''} \Gamma\bigg)(\lambda^*(u))=0$. The last condition can be written down as a second degree equation of the form \begin{equation}\label{sesesewq1} \begin{split} &\frac{1}{2 \nu''}+\frac{1}{2} \left(-\frac{u^2 (-\nu'-\nu'')}{\left(\nu'-\nu'^2+\nu''\right)^2}-\frac{u^2}{\nu'-\nu'^2+\nu''}-\frac{2 \sqrt{2} u \nu' \sqrt{\nu''} \lambda }{\left(\nu'-\nu'^2+\nu''\right)^2}+\frac{\sqrt{2} u \nu' \lambda }{\sqrt{\nu''} \left(\nu'-\nu'^2+\nu''\right)}\right)\\ &+ \frac{1}{2} \left( -\frac{\lambda ^2}{\nu'-\nu'^2+\nu''}+\frac{\left(-\nu'+\nu'^2+\nu''\right) \lambda ^2}{\left(\nu'-\nu'^2+\nu''\right)^2}\right)=0. \end{split} \end{equation} Comparing the coefficients of \eqref{sesesewq1} with \eqref{sasawqwq} one sees that their ratios are constant equal to $\frac{1}{4\nu''}$. This immediately implies that they share the same roots. So $\lambda^*(u)$ indeed satisfies $\bigg(\frac{\partial}{\partial \nu''} \Gamma\bigg)(\lambda^*(u)) = 0$ and the lemma is proven. \end{proof} \begin{proof}[Proof of Proposition \ref{equalityinthe}.] From Lemma \ref{surla} we know that for $u<-E_\infty$, $\theta_{k,\nu}$ does not depend on $\nu''$. By choosing $\nu'' = \nu'^2 - \nu' + \epsilon$ and taking $\epsilon$ to zero we get the desired result. Indeed, when $\epsilon$ goes to zero \begin{equation*} \lambda^*(u) \rightarrow \frac{u \nu'}{\sqrt{2} \sqrt{(\nu'-1) \nu'}}, \quad F(\lambda^*(u),u) \rightarrow \frac{-u^2 (\nu'-2)}{4 (\nu'-1)}. \end{equation*} \end{proof} \subsubsection{Proof of Theorem \ref{t:nofiniteindex}} We want to prove that there are no critical values of index $k$ of the Hamiltonian above $-N(E_{\infty}^{-}-\varepsilon )$. The function $\theta_{k,\nu}$ is strictly decreasing on $(-E_{\infty}^{-},\infty)$. 
Using Theorem~\ref{critical2}, we have \begin{equation*} \mathbb E\big[ \mathop{\mathrm{Crt}}\nolimits_{N,k}(-E_{\infty}^{-}+\varepsilon,\infty )\big]\le \exp\big\{N \theta_{k,\nu}(-E_{\infty}^{-}+\varepsilon) +o(N)\big\}. \end{equation*} The constant $-E_{\infty}^{-}$ is defined by $\theta_{k,\nu}(-E_{\infty}^{-})=0$ for all $k$. Therefore, $\theta_{k,p}(-E_{k}+\varepsilon )= c(k,\nu,\varepsilon )<0$. An application of Markov's inequality as \begin{equation*} \ensuremath{\mathbb{P}} \bigg( B_{N,k}(\epsilon) \bigg) \leq \ensuremath{\mathbb{E}} \big[ \mathop{\mathrm{Crt}}\nolimits_{N,k}(-E_{\infty}^{-}+\varepsilon,\infty ) \big] \leq e^{-Nc(k,\nu,\varepsilon )} \end{equation*} proves Theorem~\ref{t:nofiniteindex} for the event $B_{N,k}(\epsilon)$. The proof for the event $A_{N,k}(\epsilon)$ is analogous. \subsubsection{Proof of Theorem \ref{midguythm}} The proof of Theorem \ref{midguythm} follows the same steps as the proof of Theorem \ref{critical2}. First by Lemma 3.5 of \cite{ABC}, for any $\epsilon >0$, there exists a constant $c=c(\gamma,\epsilon)>0$ such that \begin{equation*}\label{concetrationmid}\ensuremath{\mathbb{P}}\big (|\lambda_{k}^N - s_{\gamma}|> \epsilon \big)\leq e^{-cN^2}.\end{equation*} Therefore if we use Proposition \ref{identity}, \eqref{ploeamoreadedio} and the above statement we have that for any $\epsilon >0, \delta > 0$ there exists constants $c=c(\epsilon), d=d(\epsilon)$ such that for $N$ large enough \begin{equation*}\label{ubmg} \begin{split} \ensuremath{\mathbb{E}}\mathop{\mathrm{Crt}}\nolimits_{N,k}(B) &\le C_N \int_{B} e^{\frac{N}{2} \big( F(\lambda_k^N,y)\big )}\ensuremath{\boldsymbol 1} \{ \lambda_k^N \in (s_\gamma-\epsilon, s_\gamma+\epsilon)\} + e^{dN} e^{-cN^2} \\ &\le C_N \int_B e^{\frac{N}{2} \sup_{\lambda \in (s_\gamma-\epsilon, s_\gamma+\epsilon)} \{ F(\lambda,y)\} } \mathrm{d} y + e^{dN} e^{-cN^2}\\ &\le C_N e^{\frac{N}{2} \sup_{\lambda \in (s_\gamma-\epsilon, s_\gamma+\epsilon), y \in B} \{ F(\lambda,y)\} }(1+\delta) + e^{dN} e^{-cN^2}. \end{split} \end{equation*} On the other hand we have the lower bound \begin{equation*} \begin{split} \ensuremath{\mathbb{E}}\mathop{\mathrm{Crt}}\nolimits_{N,k}(B) &\ge C_N \int_{B} e^{\frac{N}{2} \big( F(\lambda_k^N,y)\big )}\ensuremath{\boldsymbol 1} \{ \lambda_k^N \in (s_\gamma-\epsilon, s_\gamma+\epsilon)\} \\ &\ge C_N \int_B e^{\frac{N}{2} \inf_{\lambda \in (s_\gamma-\epsilon, s_\gamma+\epsilon)} \{ F(\lambda,y) \} } \mathrm{d} y \\ &\ge C_N e^{\frac{N}{2} \inf_{\lambda \in (s_\gamma-\epsilon, s_\gamma+\epsilon)}\{\sup_{ y \in B} \{ F(\lambda,y) \} \} }(1-\delta). \end{split} \end{equation*} Taking $\frac{1}{N}\log$ on both bounds and taking $\epsilon$ to zero afterwards, we see that \begin{equation*} \frac{1}{N}\log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N,k}(B) = \sup_{y \in B} \{ F(s_\gamma, y)\} . \end{equation*} \subsubsection{Proof of Theorem \ref{t:complexityglobal}} We now prove the asymptotic limit of the mean number of critical points at some level of energy. Since the total number of critical points is greater than the number of critical points of index $k(N)$ with $k(N)$ satisfying \eqref{e:k} for $\gamma \in [0,1]$ we clearly have the lower bound \begin{equation}\label{ras} \sup_{\gamma \in [0,1]} \sup_{u\in B} \theta_{\gamma,\nu}(u) \leq \lim_{N\rightarrow \infty}\frac{1}{N}\log \ensuremath{\mathbb{E}} \mathop{\mathrm{Crt}}\nolimits_{N}(B). \end{equation} For $u\leq-E_\infty'$, taking $\gamma = 0$ (i.e. 
considering the complexity of local minima) we get the right hand side of \eqref{totalequation}. For $ u \in (-E_\infty', E_\infty')$ the supremum on $\gamma$ of $\theta_{\gamma,\nu}(u)$ is attained at $\gamma \in (0,1)$ such that $s_\gamma = \frac{\sqrt{2}u}{E_\infty'}$, plugging it back on the left hand side of \eqref{ras}, we get the right hand side of \eqref{totalequation}. Last, for $u\geq E_\infty$, one just need to take the complexity of local maxima. This is enough to prove a lower bound. To show a matching upper bound, we proceed as follows. A sum over $k$ in Proposition \ref{identity}, gives us that \begin{equation*} \ensuremath{\mathbb{E}} [\mathop{\mathrm{Crt}}\nolimits_{N}(B)] = 2N\sqrt\frac {2}{\nu'} (\frac{\nu''}{\nu'})^{\frac N2} \int_{B} \ensuremath{\mathbb{E}}_{\text{GOE}}^N \int \exp\bigg\{N F(z,y)\bigg\} \mathrm{d} y L_N (\mathrm{d} z). \end{equation*} and $L_N$ is the empirical spectral measure of the GOE matrix. The constant in front the integral gives a constant term $C_\nu$ after the $\frac{1}{N}\log$ limit. Furthermore, \begin{equation}\label{dskop} \begin{split} \int_{B} \ensuremath{\mathbb{E}}_{\text{GOE}}^N \int \exp\bigg\{N F(z,y)\bigg\} \mathrm{d} y L_N (\mathrm{d} z) &\leq N \int_{B} \sup_{z \in \ensuremath{\mathbb{R}}} \exp\bigg\{NF(z,y)\bigg\} \mathrm{d} y \\ & \leq N \int_{B} e^{-\frac{N}{2} \frac{\nu'' - \nu'}{\nu'^2-\nu' + \nu''}y^2} \mathrm{d} y. \end{split} \end{equation} So if $B \cap (-E_\infty',E_\infty') \neq \emptyset$ this matches the right hand side of $\eqref{totalequation}$. If $B \subseteq (-\infty,-E_\infty')$ then we can estimate \eqref{dskop} with \begin{equation*} N \int_{B} \ensuremath{\mathbb{E}}_{\text{GOE}}^N \int \exp\bigg\{N F(\lambda_0,y)\bigg\}. \end{equation*} Applying $\log$, dividing by $N$ and taking limits we get Theorem \ref{t:complexityglobal} from Theorem \ref{critical2}. \section{Partition Function}\label{sec5} In this section we prove Theorem \ref{Msri1} and we derive a formula for the 1-RSB solution at zero temperature that will be useful in the next section. We start by the proof of Theorem \ref{Msri1}. \begin{proof}[Proof of (a):] By Holder's inequality the function $\frac{1}{N} \ensuremath{\mathbb{E}} \log Z_{N,\nu}(\beta)$ is convex in $\beta$, therefore its limit $F_\infty(\beta)$ is also convex. From \eqref{equatioprimi}, \begin{equation} 0 \leq \liminf_{\beta \rightarrow \infty} \frac{1}{\beta} F_\infty(\beta) \leq \limsup_{\beta \rightarrow \infty} \frac{1}{\beta} F_\infty(\beta) \leq E_0. \end{equation} So $F(\beta)$ is convex, positive and grows at most linearly. This easily implies that \begin{equation} \lim_{\beta \rightarrow \infty} \frac{1}{\beta} F_\infty(\beta) = \sup_{\beta} \frac{1}{\beta} F_\infty(\beta) \in [0,\infty). \end{equation} \end{proof} To prove item (b), we will need to introduce some notation and the proposition below. Let $\boldsymbol \sigma^*$ be a point on the sphere such that $H_{N,\boldsymbol \beta}(\boldsymbol \sigma^*) = N GS_N$ and let $d$ denote the geodesic distance on the sphere. For $\rho, \alpha, K >0$, let \begin{equation*} B_{N,\rho} \equiv \bigg \{\boldsymbol \sigma \in S_{N-1}(\sqrt{N}): d(\boldsymbol \sigma, \boldsymbol \sigma^*) < \rho \bigg \} \end{equation*} and $A_{\epsilon,\alpha,K}(N)$, be the event \begin{equation} A_{\epsilon,\alpha,K}(N) \equiv \bigg \{\sup_{\boldsymbol \sigma \in B_{N, \sqrt{N}\epsilon}} |H_{N,\boldsymbol \beta} (\sigma) - N GS_N | \leq K N \epsilon^{\alpha} \bigg \}. 
\end{equation} \begin{lemma}\label{smallball} For any $0<\alpha<1$ there exist constants $K, K_1>0$ so that for all $\epsilon >0$ and all $N$ sufficiently large \begin{equation} \ensuremath{\mathbb{P}} \bigg(A_{\epsilon,\alpha,K}(N)^c \bigg) < 2 e^{-K_1 N}. \end{equation} \end{lemma} Note that this bound is independent of $\epsilon$. \begin{proof} Clearly, $$A_{\epsilon,K}(N) \supseteq \hat{A}_{\alpha,K}(N) \equiv \bigg \{ \| H_{N,\boldsymbol \beta} \|_{\alpha} \leq K N^{1-\frac{\alpha}{2}}\bigg \}$$ where \begin{equation} \| H_{N,\boldsymbol \beta} \|_{\alpha} = \sup_{\boldsymbol \sigma, \boldsymbol \sigma'} \frac{|H_{N,\boldsymbol \beta} (\boldsymbol \sigma) - H_{N,\boldsymbol \beta} (\boldsymbol \sigma')|}{d(\boldsymbol \sigma,\boldsymbol \sigma')^\alpha}. \end{equation} Now consider the centered Gaussian process $\boldsymbol X_{\alpha}$ field on $S_{N-1}(\sqrt{N})\times S_{N-1}(\sqrt{N})$ given by \begin{equation} \boldsymbol X_\alpha (\boldsymbol \sigma, \boldsymbol \sigma') = \begin{cases} \frac{H_{N,\boldsymbol \beta} (\boldsymbol \sigma) - H_{N,\boldsymbol \beta} (\boldsymbol \sigma')}{d(\boldsymbol \sigma,\boldsymbol \sigma')^\alpha},\quad \text{if} \quad d(\boldsymbol \sigma,\boldsymbol \sigma')>0 \\ 0, \quad \text{otherwise.} \end{cases} \end{equation} Since the Gaussian field $H_{N,\boldsymbol \beta} $ is $C^1$ almost surely, then \begin{equation}\label{unicaeq} \ensuremath{\mathbb{P}} \bigg(\hat{A}_{\alpha,K}(N)^c \bigg) = \ensuremath{\mathbb{P}} \bigg(\sup_{\boldsymbol \sigma, \boldsymbol \sigma'} |X_{\alpha}(\boldsymbol \sigma, \boldsymbol \sigma')| > K N^{1-\frac{\alpha}{2}} \bigg). \end{equation} But now a simple computation yields for $\boldsymbol \sigma \neq \boldsymbol \sigma'$, \begin{equation} \ensuremath{\mathbb{E}} \boldsymbol X_{\alpha}^2(\boldsymbol \sigma, \boldsymbol \sigma')=\frac{2N}{d(\boldsymbol \sigma_1,\boldsymbol \sigma'_1)^{2\alpha}}\bigg[ 1-\nu(\frac{1}{N}\< \boldsymbol\sigma, \boldsymbol\sigma'\>) \bigg]=\frac{2N}{(\sqrt{N}\theta)^{2\alpha}}\bigg[ 1-\nu(\cos \theta) \bigg]. \end{equation} where $\theta$ is the angle between $\boldsymbol \sigma, \boldsymbol \sigma'$ in $\ensuremath{\mathbb{R}}^N$. Therefore by the boundedness of $\nu'(x)$ in $[-1,1]$ there exists a constant $C$ independent of $N$ such that (if $\alpha<1/2$ or $\alpha<1$ - using the boundedness of $\nu'(x)$ and $\nu''(x)$) \begin{equation} \sup_{(\boldsymbol \sigma, \boldsymbol \sigma')}\ensuremath{\mathbb{E}} \boldsymbol X_{\alpha}^2(\boldsymbol \sigma, \boldsymbol \sigma') \leq CN^{1-\alpha}. \end{equation} Now, by Borell's inequality, (see page 50 and 51 of \cite{AT07}, where we take $u=K N^{1-\frac{\alpha}{2}}$, $\sigma_{T} \leq CN^{1-\alpha}$) for all $\delta$, if $N, K$ is large enough \begin{equation} \ensuremath{\mathbb{P}}\bigg(\sup_{\boldsymbol \sigma, \boldsymbol \sigma'} \boldsymbol X_{\alpha}(\boldsymbol \sigma, \boldsymbol \sigma') > K N^{1-\frac{\alpha}{2}}\bigg) \leq e^{\delta K N^{1-\frac{\alpha}{2}}}e^{\frac{-K^2 N^{2(1-\frac{\alpha}{2})}}{2CN^{1-\alpha}}}\leq e^{-\frac{K^2N}{4C}}. \end{equation} Taking $K_1= K^2/4C$ in the last equation, using \eqref{unicaeq} and symmetry of $\boldsymbol X_{\alpha}$ the lemma is proven. \end{proof} Now we can start the \begin{proof}[Proof of (b):] We will show that for any $\delta>0$ there exists $\epsilon(\delta)$ so that if $N$ is large enough \begin{equation}\label{caputio} \ensuremath{\mathbb{P}} \bigg( |GS_N +f_{\infty}|> \delta \bigg) \leq \ensuremath{\mathbb{P}} \bigg(A_{\epsilon(\delta),\alpha, K}(N)^c \bigg). 
\end{equation} The proof of (b) will then follow from \eqref{caputio} and Borel-Cantelli's Lemma since for all $\delta > 0$ by Lemma \ref{smallball} \begin{equation} \sum_{N=1}^{\infty} \ensuremath{\mathbb{P}} \bigg( |GS_N +f_{\infty}|> \delta \bigg) < \infty. \end{equation} We will prove \eqref{caputio} by showing that for any $\delta>0$ if $N$ is large enough $A_{\epsilon,\alpha, K}(N) \subset \{ |GS_N +f_{\infty}|< \delta \}. $ On $A_{\epsilon,\alpha,K}(N)$, \begin{equation} \label{aburc} Z_{N,\nu}(\beta) = \int_{S^{N-1}(\sqrt{N})} e^{-\beta H_{N,\boldsymbol \beta}(\boldsymbol \sigma)} \Lambda_N(\mathrm{d} \boldsymbol \sigma) \\ \geq e^{-\beta N GS_N - K \beta N \epsilon^\alpha} \Lambda_N(B_{N,\sqrt{N}\epsilon}). \end{equation} Recall that $\Lambda_N(\mathrm{d} \boldsymbol \sigma)$ is the surface measure of $S_N(\sqrt{N})$ normalized to be a probability measure. We trivially have the bound \begin{equation}\label{aburc2} \frac{1}{N} \log Z_{N,\nu}(\beta) \leq - \beta GS_N. \end{equation} Combining \eqref{aburc} and \eqref{aburc2} we then have on $A_{\epsilon,\alpha,K}(N)$, \begin{equation} - \frac{1}{N \beta} \log Z_{N,\nu}(\beta) - K \epsilon^{\alpha} + \frac{1}{N \beta} \log \Lambda_N(B_{N,\sqrt{N}\epsilon}) \leq GS_N \leq -\frac{1}{N\beta} \log Z_{N,\nu}(\beta). \end{equation} Note that using spherical coordinates and the inequality $\frac{2 \theta}{\pi} \leq \sin \theta$ for $\theta \leq \frac{\pi}{2}$, we have for $\epsilon < \pi/2$, \begin{equation} \begin{split} \Lambda_N(B_{N,\sqrt{N}\epsilon}) &= \bigg(\int_0^\epsilon \sin^{N-2} (\phi) \quad \mathrm{d} \phi \bigg) \bigg(\int_0^\pi \sin^{N-2} (\phi) \quad \mathrm{d} \phi \bigg)^{-1} \\ &\geq (\frac{2\epsilon}{\pi})^{N-1} \frac{1}{\pi (N-1)}. \end{split} \end{equation} So on $A_{\epsilon,\alpha,K}(N)$, for some constant $C>0$ \begin{equation} - \frac{1}{N \beta} \log Z_{N,\nu}(\beta) - K \epsilon^{\alpha} + C \epsilon \leq GS_N \leq -\frac{1}{N\beta} \log Z_{N,\nu}(\beta). \end{equation} Therefore by \eqref{ParisiFor}, for any $\delta_1 > 0$ one can take $N$ large enough so that, \begin{equation} -\frac{F_{\infty}(\beta)}{\beta} - K \epsilon^{\alpha} + C \epsilon - \frac{\delta_1}{\beta} \leq GS_N \leq -\frac{F_\infty(\beta}{\beta} + \frac{\delta_1}{\beta}. \end{equation} By taking $\beta$ large enough, part (a) of this Theorem and by choosing $\epsilon$ sufficiently small, \eqref{caputio} is proven. \end{proof} \section{Proofs of Theorems \ref{tris11}, \ref{Legendre} and \ref{tris}} \label{sec6} In this section, we prove Theorems \ref{tris11}, \ref{Legendre} and \ref{tris}. \subsection{Calculating $f_1$.} We now prove equation \eqref{f1func}. \begin{lemma}\label{lemmadacon} \begin{enumerate} \item There exists $\epsilon, M >0$ such that the following limit holds \begin{equation}\label{princvarpar} f_1 := \lim_{\beta\rightarrow \infty} \frac{1}{\beta}F_{1}(\beta) = \inf_{a,b \in [\epsilon,M]} \frac{1}{2}\Big(b + \nu'a + \frac{1}{b}(\log \frac{a+b}{a})\Big). \end{equation} \item $f_1$ depends (continuously) only on the first derivative $\nu'$. \end{enumerate} \end{lemma} \begin{remark}It is remarkable that while the $k$-complexity function depends on the first two derivatives at $1$ of the covariance function $\nu$ and $f_1$ depends only on the first derivative $\nu'$ and $E_0(\nu) = f_1$ for any pure-like mixture. 
\end{remark} \begin{proof} First, taking $\mu = m\delta_{r}+(1-m)\delta_{q}$, we can write the 1RSB Free Energy $F_1(\beta)$ via the Crisanti-Sommers representation (see (1.4) of \cite{TalagrandMul}) as \begin{equation*} \begin{split} F_1(\beta) &= \inf_{m,r,q \in [0,1]^3} \bigg\{ \frac{r}{m(q-r)+1-q}+ \frac{1}{m}\Big(\log(m(q-r)+1-q) \\ & \hspace{2cm} -\log(1-q)\Big)+\log(1-q)+\beta^{2}m(\nu(q)-\nu(r)) + \beta^2(\nu(1)-\nu(q)) \bigg\}. \end{split} \end{equation*} It is easy to show that the infimum above is attained at $r=0$. Therefore, \begin{equation}\label{astoni} F_1(\beta) = \inf_{m,q \in [0,1]^2} \bigg\{ \beta^2 (1 -(1-m)\nu(q)) + \log (1-q) - \frac{1}{m}\log(\frac{1-q}{1-q+mq}) \bigg\}. \end{equation} The conditions to be a critical points are \begin{eqnarray}\label{gsm} \beta^2 \nu'(q) &=& \frac{q}{(1-q)(1-q +qm)} \nonumber \\ \log\bigg( \frac{1-q+qm}{1-q}\bigg)&=&\nu(q)m^2\beta^2 + \frac{mq}{1-q(1-m)}. \end{eqnarray} Let $(q^*,m^*)=(q^*,m^*)(\beta)$ be a solution of \eqref{gsm}. First, from the first equation we deduce that either $q^{*}$ goes to $0$, $1$ or $1-q^{*}(1-m^{*})$ goes to zero as $\beta$ goes to infinity. Analogously to the pure case, the case $q^{*}$ approaching zero is excluded via Lemma 3 of \cite{TalagrandMul}. Since $1-q^{*}(1-m^{*}) \rightarrow 0$ implies $q^{*} \rightarrow 1$, it is then a fact that $q^{*} \rightarrow 1$. Looking at the second equation we see that $m^*$ has to go to zero. Indeed, if it is not the case $1-q^{*} \sim \beta^{-2}$ implying $\log \beta^2 \sim \beta^2$ which is not possible. If we put $A = 1-q^{*}$, $B = m^{*}$, then $A, B$ go to zero as $\beta$ goes to infinity and conditions \eqref{gsm} become \begin{equation}\label{baluu} \beta^2 \nu'(1) \sim \frac{1}{A^2+AB}, \quad \log\bigg[1 + \frac{B}{A}\bigg] \sim B^2\beta^2 + \bigg(1 + \frac{B}{A}\bigg)^{-1}. \end{equation} We will argue that $B \sim A$, i.e. $\lim_{\beta\rightarrow \infty} \frac{B}{A} = l \in (0,\infty).$ Suppose that $A >> B$. The first condition in \eqref{baluu} implies $B << \beta^{-2}$. But now the RHS in the second condition goes to $1$ while the LHS goes to $0$. Next, suppose that $B>>A$. First condition in \eqref{baluu} implies $\frac{1}{AB} \sim \beta^2$ while the second implies $\log(1 + \frac{B}{A} ) \sim \frac{B}{A}$ which is a contradiction since $\frac{B}{A} \rightarrow \infty$. Therefore, not only we have $B \sim A$ but we can also write \begin{equation} A = a(\beta) a^{*} \beta^{-1}, \quad B = b(\beta) b^{*} \beta^{-1} \end{equation} where $a(\beta),b(\beta)$ are functions that converge to $1$ as $\beta$ goes to infinity and $a^{*},b^{*} \in (0,\infty)$. Furthermore, $a^{*},b^{*}$ satisfy the following relations: \begin{equation}\label{portia} \nu'(1) = \frac{1}{a^{*}(a^{*}+b^{*})}, \quad \log\bigg[1 + \frac{b^{*}}{a^{*}}\bigg] = (b^{*})^2 + \bigg(1 + \frac{b^{*}}{a^{*}}\bigg)^{-1}. \end{equation} To get the statement just note that replacing $m=b\beta^{-1}$ and $q=1-a\beta^{-1}$ in \eqref{astoni}, we get \[ \frac{1}{\beta}F_1(\beta)= \inf_{a,b \in [0,\beta]^2} P(a,b,\beta) \] where \begin{equation*} \begin{split} P(\beta,&a,b)= \frac{1}{2} \Big\{ \beta\big(1 + (b\beta^{-1}-1)\nu[1-a\beta^{-1}]\big) \\ &+ \big(\beta-b^{-1}\big) \log\big(1-(1-a\beta^{-1})\big) + b^{-1} \log\big[1 -(1-a\beta^{-1})(1-b\beta^{-1})\big] \Big\}. \end{split} \end{equation*} Clearly, for any $a,b>0$ the function $P(\beta,a,d)$ converges pointwise as $\beta$ goes to infinity to $\frac{1}{2}\Big(b + \nu'a + \frac{1}{b}(\log \frac{a+b}{a})\Big)$. 
Since we know that the location of the minimizer of $\frac{1}{\beta}F_1(\beta)$ converges to $(a^{*}, b^{*}) \in (0,\infty)^2$, this is enough to guarantee the convergence stated in Lemma \ref{lemmadacon}. By solving for the critical points of \eqref{princvarpar}, i.e., using equations \eqref{portia}, we can get an expression for $f_1$ in terms of $\nu'$. Namely, \begin{equation*}\label{f1function} f_1 = \frac{1}{2}\Big(\frac{\nu'y^2-1}{\nu'y} + \frac{1}{y} + \frac{\nu'y}{\nu'y^2-1}\log(\nu'y^2)\Big) =y + \frac{\nu'-1}{y\nu'} \end{equation*} where $y=y(\nu')$ is given by the unique solution of \begin{equation*} \label{e:eqy} \Big(\frac{\nu'y^2-1}{\nu'y}\Big)^2y + \frac{\nu'y^2-1}{\nu'y} = y\log(\nu'y^2), \qquad y>\nu'^{-1/2}. \end{equation*}% In other words, $y=\frac{\sqrt{a}}{\sqrt{\nu'}}$ where $a$ is the unique solution of \begin{equation*} \label{e:eqa} a\log[a] - a + 1 - \frac{(a - 1)^2}{\nu'}=0, \qquad a>1. \end{equation*} This ends the proof of Lemma \ref{lemmadacon}. \end{proof} \subsection{Proof of Proposition \ref{theoasd}} \begin{proof} If $\nu$ is pure-like then $\theta_{k,\nu}(-E_\infty)>0$. Since $\theta_{k,\nu}(u)$ converges to negative infinity as $u$ goes to negative infinity, the $E_k(\nu)$ are well-defined. Furthermore, as $k$ goes to infinity, $\lambda_k^*(u)$ converges to $-\sqrt{2}$ for any $u \leq - E_\infty$, implying that $\theta_{k,\nu}(u)$ converges to $F(-\sqrt{2},u)$ pointwise. Therefore, taking $u$ in a small neighborhood of $E_{\infty}^{+}$ and using the fact that $\theta_{k,\nu}$ are increasing in that neighborhood, we see that the zero of $\theta_{k,\nu}$ has to converge to the zero of $F(-\sqrt{2},u)$. Namely, $E_k(\nu)$ converges to $E_{\infty}^{+}$. \end{proof} \subsection{Proofs of Theorems \ref{tris11}, \ref{Legendre} and \ref{tris}} We start with the case when $\nu$ is critical, i.e., the case when $G(\nu',\nu'')=0$. \begin{proposition}\label{calodoido} A mixture $\nu$ is critical if and only if \begin{equation} f_1 = E_\infty = E_{0,\nu} = \frac{ \nu'' - \nu' + \nu'^2 }{\nu'\sqrt{\nu''}}. \end{equation} \end{proposition} \begin{proof} If $\nu$ is critical then $y=\frac{\sqrt{\nu''}}{\nu'}$ is the unique solution of \eqref{e:eqy} with $y > \frac{1}{\sqrt{\nu'}}$. Indeed, \begin{equation*} 1-\frac{\nu''}{\nu'}+\frac{(-\nu'+\nu'') \left(-\nu'+\nu'^2+\nu''\right)}{\nu'^3}-\frac{\left(-1+\frac{\nu''}{\nu'}\right)^2}{\nu'} = 0. \end{equation*} Plugging the value of $y$ back into \eqref{f1function} we get the stated expression for $f_1$. On the other hand, if $f_1 = \frac{ \nu'' - \nu' + \nu'^2 }{\nu'\sqrt{\nu''}}$ then one solves equation \eqref{f1function} in $y$ to see that the only positive solution is $y=\frac{\sqrt{\nu''}}{\nu'}$. By the definition of $y$ in \eqref{e:eqy} this immediately implies that $\nu$ is critical. And trivially, $\nu$ critical is precisely the case where $E_\infty = E_{0,\nu}$. \end{proof} Now we analyse the case where $\nu$ is critical or a full mixture, i.e., the case where $G(\nu',\nu'')\leq0$. In this case, the zero of the complexity function can be explicitly computed and is given by: \begin{equation*} -E_{0,\nu} = -E_{\infty}^{+}, \end{equation*} where $E_{\infty}^{+}$ was defined in \eqref{edocap1}. Note that $E_{0,\nu}$ is a function of $\nu'$ and $\nu''$. \begin{proposition}\label{sabadao} If $G(\nu',\nu'')\leq 0$ then \begin{equation*} \frac{\partial}{\partial \nu''} E_{0,\nu} = 0 \quad \text{if and only if} \quad G(\nu',\nu'')=0. 
\end{equation*} \end{proposition} \begin{proof} Let \begin{equation*}A(\nu',\nu'')=\sqrt{(\nu''-\nu'^2+\nu') \left((\nu'+\nu'') \log\left[\frac{\nu''}{\nu'}\right]-2 (\nu''-\nu')\right)}. \end{equation*} Calculating the derivative $\frac{\partial}{\partial \nu''} E_{0,\nu}$, one gets \begin{equation}\label{shabu} \begin{split} &\bigg(\nu'^2 \nu'' (\nu'+\nu'') \log\left[\frac{\nu''}{\nu'}\right]+(\nu''-\nu') \left(\nu'^3+\nu''^2-\nu'^2 (1+3 \nu'')-2 \nu' \sqrt{\nu''} A(\nu',\nu'') \right)\bigg) \\ &\quad\times \bigg(2 \nu'' (\nu'+\nu'')^2 A(\nu',\nu'')\bigg)^{-1}. \end{split} \end{equation} Sufficiency comes from a simplification of the above formula. To get necessity, we solve a second-degree equation in the variable $M = \log\left[\frac{\nu''}{\nu'}\right]$ to see that it has a unique zero given by \begin{equation*} \frac{\nu'^2 - \nu'^3 - 2 \nu' \nu'' + \nu'^2 \nu'' + \nu''^2}{\nu'^2 \nu''}. \end{equation*} This is precisely $G(\nu',\nu'')=0$. \end{proof} With the above propositions we now prove Theorems \ref{tris11}, \ref{Legendre} and \ref{tris}. \begin{proof}[Proof of Theorem \ref{tris11}] If $\nu$ is critical, Theorem \ref{tris} is Proposition \ref{calodoido}. Now suppose that $\nu$ is pure-like. By Lemma \ref{lemmadacon} and \eqref{complexityfunction2}, both $f_1(\nu)$ and $E_0(\nu)$ are independent of $\nu''$. Consider then another mixture $\mu$ such that $\mu'=\nu'$ and $\mu$ satisfies $G(\mu',\mu'')=0$. Since $G$ is continuous on its domain, we have \begin{equation*} f_1(\nu)=f_1(\mu)=E_{0}(\mu)=E_0(\nu). \end{equation*} \end{proof} \begin{proof}[Proof of Theorem \ref{Legendre}] The proof is a simple but lengthy simplification. Note that by solving \eqref{legendreq} in $a$ we can rewrite \eqref{legendreq} as \begin{equation*} \frac{1}{4} \left(-b (-2+v)-\sqrt{\nu'} \sqrt{4+b^2 \nu'}+\frac{2 \log\left[\frac{1}{2} \left(2+b^2 \nu'-b \sqrt{\nu'} \sqrt{4+b^2 \nu'}\right)\right]}{b}\right). \end{equation*} This implies that a critical point $b^*(u)$ of $M(u,b) = ub - b f_1(b)$ is given by \begin{equation*} b^*_\pm(u) = \frac{-u(\nu'-2)\pm \sqrt{\nu'} \sqrt{4-4 \nu'+u^2 \nu'}}{2(\nu'-1)}. \end{equation*} Here, we choose $b^*_+(u)$ since $b^*_-(u)$ is negative and a local maximum. Furthermore, since \begin{equation*} (\frac{\partial}{\partial b} M)(u,c_\nu)= u + 2 \sqrt{\frac{\nu'-1}{\nu'}} <0, \end{equation*} $b^*_+(u)$ is a global minimum in $[c(\nu),\infty)$. So, replacing $b^*_+(u)$ in \eqref{legendreq} and setting $z= -u(\nu'-2)+\sqrt{\nu'} \sqrt{u^2 \nu'- 4\nu'+4}$, we get the function \begin{equation}\label{cadfa}\begin{split} \frac{zu}{2(\nu'-1)} &+ \frac{z}{8 (\nu'-1)} \left(-\frac{(\nu'-2) z}{2 (\nu'-1)}+\sqrt{\nu'}\sqrt{4+\frac{\nu' z^2}{4 (\nu'-1)^2}}\right)\\ &+\frac{1}{2} \log\left[\frac{1}{2} \left(2+\frac{\nu' z^2}{4 (\nu'-1)^2}+\frac{\sqrt{\nu'} z}{2 (\nu'-1)} \sqrt{4+\frac{\nu' z^2}{4 (\nu'-1)^2}}\right)\right]. \end{split} \end{equation} Comparing term by term the logarithmic, polynomial and fractional terms of \eqref{complexityfunction2} and \eqref{cadfa}, one gets the first part of the Theorem. The proof of the second part follows from Fenchel's duality theorem (see \cite[Chapter 1]{Rock}). \end{proof} \begin{proof}[Proof of Theorem \ref{tris}] First, note that for a fixed $\nu'$, there exists $\nu''^c$ such that the condition that $\nu$ is a full mixture can be written as $\nu'' > \nu''^c$. From Proposition \ref{sabadao} we know that $\frac{\partial}{\partial \nu''} E_{0,\nu}$ does not change sign as we change $\nu''$. 
Taking $\nu''$ to infinity in \eqref{shabu} we see that $\frac{\partial}{\partial \nu''} E_{0,\nu} \geq 0$, meaning that $E_{0,\nu}$ is increasing in $\nu''$. Since at $\nu''^c$, $E_{0,\nu}=f_1$, we get that for any $\nu$ such that $G(\nu',\nu'')<0$, $E_{0,\nu}>f_1$ proving the Theorem. \end{proof} \section{Proof of Propositon \ref{eulerexact} and Theorem \ref{meaneulerasym}}\label{eulerproof} In this section we prove Propositon \ref{eulerexact} and Theorem \ref{meaneulerasym}. \begin{proof}[Proof of Proposition \ref{eulerexact}] We start from the following identity: \begin{eqnarray*} &\ensuremath{\mathbb{E}}& \chi(A_u) = \sum_{k=0}^{N-1} (-1)^{k} \mathop{\mathrm{Crt}}\nolimits_k(A(u)) \\ &=& \sum_{k=0}^{N-1} (-1)^{k} \int_{S^{N-1}(\sqrt{N})} \ensuremath{\mathbb{E}}\bigg( |\det \nabla^2 H_N (\sigma)| \ensuremath{\boldsymbol 1}_{\{i(\nabla^2 H_N (\sigma)) =k\}} \ensuremath{\boldsymbol 1}_{\{H_N(\sigma) \leq Nu\}}\bigg| \nabla H_N(\sigma) = 0 \bigg) \\ &\times&\phi_{\nabla H_N}(0) \mathrm{d} \sigma = (2\nu'\pi)^{-\frac{N-1}{2}} |S^{N-1}(\sqrt{N})| \frac{1}{\sqrt{2\pi N}} \times \\ &\times& \sum_{k=0}^{N-1} \int_{-\infty}^{Nu} \ensuremath{\mathbb{E}}\bigg( (-1)^{k} |\det \nabla^2 H_N (\sigma)| \ensuremath{\boldsymbol 1}_{\{ i(\nabla^2 H_N (\sigma)) =k \}} \bigg| H_N(\sigma) = x \bigg) e^{-\frac{1}{2N} x^2} \mathrm{d} x \\ &=& (2\nu'\pi)^{-\frac{N-1}{2}} \frac{2 \pi^{\frac{N}{2}}}{\Gamma(\frac{N}{2})}N^{\frac {N-1}{2}} \frac{\sqrt{N}}{\sqrt{2\pi }} \int_{-\infty}^{u} \ensuremath{\mathbb{E}}\bigg( \det \nabla^2 H_N (\sigma) \bigg| H_N(\sigma) = Nx \bigg) e^{-\frac{N}{2} x^2} \mathrm{d} x. \\ &=& \nu'^{-\frac{N-1}{2}} 2^{-\frac{N-2}{2}} \frac{N^{\frac{N}{2}}}{\Gamma(\frac{N}{2})} \int_{-\infty}^{u} \ensuremath{\mathbb{E}}\bigg( \det \nabla^2 H_N (\sigma) \bigg| H_N(\sigma) = Nx \bigg) e^{-\frac{N}{2} x^2} \mathrm{d} x. \end{eqnarray*} \begin{lemma}\label{charac} If $M_N$ is a $N\times N$ GOE with variance $\ensuremath{\mathbb{E}} M_{ij}^2 = \frac{1+\delta_{ij}}{2N}$ then for any $x\in \ensuremath{\mathbb{R}}$ \begin{equation*} \ensuremath{\mathbb{E}} \det (M_N-xI) = 2^{-N} N^{-\frac{N}{2}}(-1)^N h_N(\sqrt{N}x) \end{equation*} where $h_N(x)$ is given in \eqref{eq1}. \end{lemma} \begin{proof} The proof, a straight-forward linear algebra exercise, can be found as Corollary 11.6.3 in \cite{AT07}. \end{proof} Now by Lemma \ref{l:conditioning}: \begin{equation}\label{Laaa2} \begin{split} \ensuremath{\mathbb{E}} \chi(A_u) = \nu'^{-\frac{N-1}{2}} 2^{-\frac{N-2}{2}} \frac{N^{\frac{N}{2}}}{\Gamma(\frac{N}{2})}\frac{\sqrt{N}}{\sqrt{2\pi}} \times \\ \int_{-\infty}^{\infty} \int_{-\infty}^{u} \ensuremath{\mathbb{E}}\bigg( \det \bigg[ (\frac{N-1}{N}2\nu'')^{1/2}M^{N-1} + (\alpha y - \nu'x)I \bigg] \bigg) e^{-\frac{N}{2} x^2} e^{-\frac{N}{2}y^2} \mathrm{d} x \mathrm{d} y . \end{split} \end{equation} The double integral becomes: \begin{equation*} (\frac{N-1}{N}2\nu'')^{\frac{N-1}{2}} \int_{-\infty}^{\infty} \int_{-\infty}^{u} \ensuremath{\mathbb{E}}\bigg( \det \bigg[M^{N-1} +(\frac{N-1}{N}2\nu'')^{-\frac{1}{2}}(\alpha y - \nu'x)I \bigg] \bigg) e^{-\frac{N}{2} x^2} e^{-\frac{N}{2}y^2} \mathrm{d} x \mathrm{d} y , \end{equation*} which by Lemma \ref{charac} can be rewritten as \begin{equation}\label{impa1} (-1)^{N-1}(\frac{\nu''}{2N})^{\frac{N-1}{2}}\int_{-\infty}^{\infty} \int_{-\infty}^{u} h_{N-1}\big( \frac{\sqrt{N}(\nu'x -\alpha y)}{\sqrt{2\nu''}}\big) e^{-\frac{N}{2} x^2} e^{-\frac{N}{2}y^2} \mathrm{d} x \mathrm{d} y. 
\end{equation} Combining \eqref{Laaa2} and \eqref{impa1} we get Proposition \ref{eulerexact}. \end{proof} We will need the following Lemma to prove Theorem \ref{meaneulerasym}: \begin{lemma}\label{lemmanecessa} Let $a$, $b$ be constants such that $a>1/2$ and $b\geq0$. Set $$I_N(M) = \int_{M}^{\infty} \phi_{N-1}(\sqrt{N}x)e^{-N(ax^2+bx)} \; \mathrm{d} x.$$ As $N$ goes to infinity: \begin{enumerate} \item If $\sqrt{2}\leq M$ then $I_N(M) = O(e^{-N(aM^2+bM+I_1(M))}).$ \item If $-\sqrt{2}<M < \sqrt{2}$ and if we set $M = \sqrt{2}\cos \omega$ with $\epsilon < \omega < \pi-\epsilon$ then $I_N(M)$ is equal to $$\frac{2^{1/4}}{\pi^{1/2}N^{\frac{5}{4}}}\frac{e^{-N(aM^2+bM)}}{2|m'(2\iota(M))|(\sin \omega)^{1/2}} \sin\bigg[ \big(\frac{N}{2}-\frac{1}{4} \big)\big(\sin 2\omega - 2\omega \big) + \frac{3\pi}{4} + \alpha(M)\bigg](1 + O(N^{-1})).$$ \item If $M\leq -\sqrt{2}$ then $ I_N(M)= \frac{c}{N^{5/4}}e^{- N\lambda(a,b,M)}$ where $\lambda(a,b,M)$ is the minimum of $ax^2 + bx + I_1(-x)$ in $[M,-\sqrt{2}]$ and $c$ is a positive constant that depends on $a,b$ and $M$. \end{enumerate} \end{lemma} A few comments before the proof of the above lemma. First, under the assumption that $a>1/2$ and $b>0$ the major contribution to the integral in part (2) comes from a small neighborhood of $M$, instead of the minimum of $ax^2+bx$. This is due to rapid oscillations of $\phi_{N-1}$ inside the ``bulk'' $(-\sqrt{2},\sqrt{2})$. Second, in part (3), the condition that the minimizer of $ax^2 + bx + I_1(-x)$ lies inside $[M,-\sqrt{2}]$ is similar to the condition on \eqref{inlambda}. It will lead to the asymptotic Euler characteristic in the region $u<-E_\infty'$. The main tool to prove Lemma \ref{lemmanecessa} is the following well-known formula for the asymptotics of the Hermite functions, first proved by Plancherel and Rotach \cite{Prota}. Let $$h(x) = \bigg|\frac{x - \sqrt{2}}{x + \sqrt{2}}\bigg|^{1/4} + \bigg|\frac{x + \sqrt{2}}{x - \sqrt{2}}\bigg|^{1/4}.$$ \begin{lemma} [Plancherel--Rotach] \label{lem:PR}There exists $\delta_0 >0$ such that for any $0<\delta <\delta_0$ the following asymptotics hold uniformly in each region: \begin{enumerate} \item If $x<-\sqrt{2}-\delta$, \begin{equation*} \phi_{N-1}(\sqrt{N}x) = (-1)^{N-1} \frac{e^{-N I_1(-x)}}{\sqrt{4\pi\sqrt{2N}}}h(x) (1 + O(N^{-1})). \end{equation*} \item If $-\sqrt{2}-\delta<x<-\sqrt{2}+\delta$, \begin{equation*} \label{PRA4} \begin{split} \phi_{N-1}(\sqrt{N}x) &= \frac{(-1)^{N-1}}{(2N)^{1/4}} \bigg\{ \bigg|\frac{x-\sqrt{2}}{x+\sqrt{2}}\bigg|^{1/4} |\frac{3N}{2}I_1(-x)|^{\frac{1}{6}} \mathop{\rm Ai}\nolimits\big[(\frac{3N}{2}I_1(-x))^{\frac{2}{3}}\varepsilon(x)\big] (1+O(N^{-1})) \\&- \bigg|\frac{x+\sqrt{2}}{x-\sqrt{2}}\bigg|^{1/4} |\frac{3N}{2}I_1(-x)|^{-\frac{1}{6}} \mathop{\rm Ai}\nolimits'\big[(\frac{3N}{2}I_1(-x))^{\frac{2}{3}}\varepsilon(x)\big](1+O(N^{-1})) \bigg\}, \end{split} \end{equation*} where $\mathop{\rm Ai}\nolimits(x)$ is the Airy function of the first kind, $\mathop{\rm Ai}\nolimits(x)= \frac{1}{\pi} \int_{0}^{\infty} \cos \big(\frac{t^3}{3}+ tx\big) \mathrm{d} t,$ and $\varepsilon(x)=\frac{-x-\sqrt{2}}{|-x-\sqrt{2}|}, x \neq -\sqrt{2},$ $\varepsilon(-\sqrt{2})=0$. \item If $-\sqrt{2}+\delta < x < \sqrt{2}-\delta$ and if we set $x = \sqrt{2}\cos \omega$ with $\epsilon < \omega < \pi-\epsilon$ then \begin{equation*} \phi_{N-1}(\sqrt{N}x) = \frac{2^{1/4}}{\pi^{1/2}N^{\frac{1}{4}}} \frac{1}{(\sin \omega)^{\frac{1}{2}}} \sin\bigg((\frac{N}{2}+\frac{1}{4})(\sin 2\omega - 2\omega) + \frac{3\pi}{4}\bigg) (1 + O(N^{-1})). 
\end{equation*} \item If $x>\sqrt{2}+\delta$, \begin{equation*} \label{PRA5} \phi_{N-1}(\sqrt{N}x) = \frac{e^{-N I_1(x)}}{\sqrt{4\pi\sqrt{2N}}}h(x) (1 + O(N^{-1})). \end{equation*} \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemmanecessa}] Part (1): We can use the uniform asymptotics given by the exponential region (4) in Lemma \ref{lem:PR}. Precisely, by hypothesis, the function $K(x):=ax^2+bx+I_1(x)$ is increasing in $[M,\infty)$ and by Laplace's method: \begin{eqnarray*} I_N(M) &=& \int_{M}^{\infty} \frac{e^{-N( ax^2+bx + I_1(x))}}{\sqrt{4\pi\sqrt{2N}}}h(x) (1 + O(N^{-1})) \mathrm{d} x \\ &=& \frac{e^{-NK(M)}}{N|K'(M)|\sqrt{4\pi\sqrt{2N}}}h(M) (1+O(N^{-1})). \end{eqnarray*} Part (2): Choose $\delta < \delta_0$ such that $-\sqrt{2}<M<\sqrt{2}-\delta$. We split the integral $I_N(M)$ into three parts \begin{equation} I_N(M) = \bigg(\int_{M}^{\sqrt{2}-\delta} + \int_{\sqrt{2}-\delta}^{\sqrt{2}+\delta} + \int_{\sqrt{2}+\delta}^{\infty}\bigg) := I_1(M) + I_2 + I_3. \end{equation} We will show that the main contribution in this case comes from $I_1(M)$. As in part (1), it is easy to see that \begin{equation} I_3 = O(e^{-NK(\sqrt{2})}). \end{equation} Next, since $|x|^{1/4}|\text{Ai}(x)|$ and $|x|^{-1/4}|\text{Ai}'(x)|$ are bounded functions on $\ensuremath{\mathbb{R}}$, a change of variables $z=I_1(-x)$ when using Part (2) of Lemma \ref{lem:PR} immediately implies that for any $\epsilon >0$: \begin{equation} I_2 =O\big(e^{-N(a(\sqrt{2}-\delta)^2+b(\sqrt{2}-\delta)+\epsilon)}\big). \end{equation} Now we estimate $I_1(M)$. Using the uniform asymptotics of $\phi_{N-1}$ we need to evaluate \begin{equation} \frac{2^{1/4}}{\pi^{1/2}N^{\frac{1}{4}}} \int_{M}^{\sqrt{2}-\delta} e^{-N( ax^2+bx)} \frac{1}{(\sin \omega)^{\frac{1}{2}}} \sin\bigg((\frac{N}{2}-\frac{1}{4})(\sin 2\omega - 2\omega) + \frac{3\pi}{4}\bigg)\mathrm{d} x. \end{equation} Performing the change of variables $x = \sqrt{2} \cos \omega$, $0<\omega<\pi$, the integral above becomes (for some different $\delta >0$) \begin{equation}\label{aaa333} \sqrt{2} \int_{\iota(M)}^{\pi-\delta} e^{- N(2a\cos^2\omega + \sqrt{2}b\cos \omega)} (\sin \omega)^{\frac{1}{2}} \sin\bigg((\frac{N}{2}-\frac{1}{4})(\sin 2\omega - 2\omega) + \frac{3\pi}{4}\bigg)\mathrm{d} \omega \end{equation} for $\iota(M) = \arccos (2^{-1/2}M)$. We now rewrite $\cos^2\omega = \frac{1+\cos{2\omega}}{2}$ and use the substitution $2\omega = z$ to obtain the integral \begin{equation}\label{aaa3331} \frac{1}{\sqrt{2}} \int_{2\iota(M)}^{2\pi-2\delta} e^{-N( a+a\cos{z} + \frac{b}{\sqrt{2}}\cos \frac{z}{2})} (\sin \frac{z}{2})^{\frac{1}{2}} \sin\bigg((\frac{N}{2}-\frac{1}{4})(\sin z - z) + \frac{3\pi}{4}\bigg)\mathrm{d} z . \end{equation} Last, we write \begin{equation} \label{sinusexp} \sin\bigg((\frac{N}{2}-\frac{1}{4})(\sin z - z) + \frac{3\pi}{4}\bigg) = \frac{1}{2i} \bigg[e^{i(\frac{N}{2})(\sin z - z)} e^{if_1(z)} - e^{-i(\frac{N}{2})(\sin z - z)} e^{-if_1(z)} \bigg], \end{equation} where $f_1(z) = -\frac{1}{4}(\sin z - z) + \frac{3\pi}{4}$. 
Therefore, we just need to evaluate the asymptotics of \begin{equation} \int_{2\iota(M)}^{2\pi-2\delta} e^{- N m(z)}j(z) \mathrm{d} z, \quad \int_{2\iota(M)}^{2\pi-2\delta} e^{- N n(z)}k(z) \mathrm{d} z \end{equation} where $m$ and $n$ are entire functions given by \begin{eqnarray}\label{eq:m} m(z)&=&a+a\cos{z} + \frac{b}{\sqrt{2}}\cos \frac{z}{2} - \frac{i}{2}(\sin z - z)\\ n(x)&=&a+a\cos{z} + \frac{b}{\sqrt{2}}\cos \frac{z}{2} + \frac{i}{2}(\sin z - z) \end{eqnarray} and $j(z) = \sin(\frac{z}{2})^{\frac{1}{2}}e^{if_1(z)}$, $k(z)=\sin(\frac{z}{2})^{\frac{1}{2}}e^{-if_1(z)}$. We will change our contour of integration and apply Laplace's Integral in the appropriate integrals. Notice that the steepest descent paths are given by the equations \begin{eqnarray}\label{cob} \Im(m(z)) &=& \sin x \bigg(a\sinh y + \frac{\cosh y}{2}\bigg) +\frac{b}{\sqrt{2}}\sin \frac{x}{2}\sinh \frac{y}{2}- \frac{x}{2} = \text{constant} \\ \Im(n(z)) &=&\sin x \bigg(a\sinh y - \frac{\cosh y}{2}\bigg) +\frac{b}{\sqrt{2}}\sin \frac{x}{2}\sinh \frac{y}{2}+ \frac{x}{2} = \text{constant}. \end{eqnarray} The phase diagram for the steepest paths of $m$ is described as follows. First all lines $x=2k \pi$, $k \in \ensuremath{\mathbb{N}}$ are steepest paths. Second, for every $t\in (0,2\pi)$ the steepest path that passes through $t$ goes from $0 - i \infty $ to $\pi + i \infty$ if $b>0$ and from $\pi - i \infty $ to $\pi + i \infty$ if $b=0$. The real part of $m(z)$ is given by \begin{eqnarray}\label{cab} \Re(m(z)) &=& \cos x \bigg(a\cosh y + \frac{1}{2}\sinh y\bigg) + a +\frac{b}{\sqrt{2}}\cos \frac{x}{2}\cosh y - \frac{y}{2}\\ \Re(n(z)) &=& \cos x \bigg(a\cosh y - \frac{1}{2}\sinh y\bigg) + a + \frac{b}{\sqrt{2}}\cos \frac{x}{2}\cosh y + \frac{y}{2} . \end{eqnarray} If we integrate $m(z)$ between two points $\alpha , \beta \in (0,2\pi)$, we can deform our contour to be equal to the two steepest paths that connect $\alpha$ and $\beta$ to $z= 0 - i\infty$. Precisely, we deform our countour into three pieces: we first follow the steepest descent path from $\alpha$ to a point with imaginary part $y_0<0$, $|y_0|$ large. From there we go along the straight line $y=y_0$ until we reach the steepest path that passes through $\beta$, $\gamma_{y_0}$, and then we integrate on this steepest path back to $\beta$. From \eqref{cab} we see that if we choose $|y_0|$ large enough, every point in the straight segment $y=y_0$ that we cross has real part $x$ sufficiently close to $0$ so $\cos x >0$. This together with $ a>1/2$ implies that $\Re(m(z))$ diverges to infinity as $y$ goes to negative infinity. The trivial bound \begin{equation} |\int_{\gamma_{y_0}} e^{-Nm(z)} j(z) \mathrm{d} z| \leq \int_{\gamma_{y_0}} e^{-N \Re(m(z))} \mathrm{d} z \sup_{z\in \gamma_{y_0}} |j(z)| \end{equation} combined with the bounded length of $\gamma_{y_0}$ show that the contribution of this part can be made as small as we want by choosing $y_0$ large enough. In the two remaining paths the imaginary part of $m$ is constant and therefore we can apply Laplace's method to get the asymptotic behavior. Since we assumed that $M < \sqrt{2}$ the contribution at $2\pi-2\delta$ is negligible compared to the one at $2\iota(M)$. 
Indeed, by formula (7.2.11) of \cite{Bleistein}, \begin{equation}\label{saast} \int_{2\iota(M)}^{2\pi-2\delta} e^{- N m(z)}j(z) \mathrm{d} z = \frac{e^{- N m(2\iota(M)) + i (\pi - \alpha(M))}j(2\iota(M))}{N|m'(2\iota(M))|}(1+O(N^{-1})), \end{equation} where $\alpha(M)$ is the angle of the steepest descent path of $m$ at $z=2\iota(M)$: \begin{equation} \alpha(M) = \arctan\bigg(\frac{1-a \cos z}{2a\sin z + \frac{b \sin(x/2)}{\sqrt{2}}}\bigg). \end{equation} The above argument adapted to the function $n$ implies \begin{equation}\label{saast2} \int_{2\iota(M)}^{2\pi-2\delta} e^{- N n(z)}k(z) \mathrm{d} z = \frac{e^{- N n(2\iota(M)) + i (\pi - \alpha(M))} k(2\iota(M))}{N|n'(2\iota(M))|}(1+O(N^{-1})). \end{equation} Noting that for any $x \in (0,2\pi)$ $|n'(x)| = |m'(x)|$, we can combine \eqref{sinusexp}, \eqref{saast} and \eqref{saast2} to recover that $I_1(M)$ is asymptotic equivalent to \begin{equation} \frac{2^{1/4}}{\pi^{1/2}N^{\frac{5}{4}}}\frac{e^{-N(aM^2+bM)}}{2|m'(2\iota(M))|(\sin \omega)^{1/2}} \sin\bigg[ \big(\frac{N}{2}-\frac{1}{4} \big)\big(\sin 2\omega - 2\omega \big) + \frac{3\pi}{4} + \alpha(M)\bigg](1 + O(N^{-1})). \end{equation} This ends the proof of Part (2) of Lemma. The proof of part (3) follows from the proof of part (2) and Lapace's method as in part (1) applied to the integral \begin{equation*} \int_{M}^{-\sqrt{2}-\delta} e^{ax^2+bx+I_1(-x)} h(x) \mathrm{d} x = O(e^{-N\lambda(M,a,b)}). \end{equation*} We leave the details to the reader. \end{proof} We now turn to the proof of Theorem \ref{meaneulerasym}. \begin{proof}[Proof of Theorem \ref{meaneulerasym}] We can rewrite \eqref{eqmix} as \begin{equation*}\begin{split} \ensuremath{\mathbb{E}} \chi(A_u) &= (-1)^{N-1} \bigg(\frac{\nu''}{\nu'}\bigg)^{\frac{N-1}{2}} c(N,\nu) \\ &\times \int_{-\infty}^{\infty} \int_{-\infty}^{\frac{u}{\sqrt{2\nu''}}} \phi_{N-1}\big(\sqrt{N}(\nu'x -\alpha y)\big) e^{-N \nu''(x^2+y^2)} e ^{ \frac{N}{2} (\nu'x -\alpha y)^2} \mathrm{d} x \mathrm{d} y \end{split} \end{equation*} where \begin{equation}\label{cnnu} c(N, \nu) = 2 \nu''([N-1]!\sqrt{\pi})^{1/2} \frac{2^{-\frac{N-1}{2}}N}{\sqrt{\pi}\Gamma(\frac{N}{2})}. \end{equation} For the case $\alpha \neq 0$, we can change variables $z=\nu'x -\alpha y$, $w = \alpha x + \nu'y$ to get \begin{equation*} x = \bigg(\nu'z + \alpha w\bigg) \bigg( \frac{1}{\alpha^2 + \nu'^2} \bigg), \quad y = \bigg(\nu'w - \alpha z\bigg)\bigg( \frac{1}{\alpha^2 + \nu'^2} \bigg), \end{equation*} and the above double integral becomes (using $\alpha^2 = \nu'' + \nu' - \nu'^2$): \begin{equation*} \frac{1}{\nu'' + \nu'}\int \int_{\nu'z + \alpha w \leq (\nu'' + \nu')\frac{u}{\sqrt{2\nu''}}} \phi_{N-1}\big( \sqrt{N}z\big) e^{-\frac{N \nu''(z^2+w^2)}{\nu'' + \nu'} } e^{N \frac{z^2}{2}} \mathrm{d} z \; \mathrm{d} w. \end{equation*} So we have to evaluate the asymptotic behavior of the following integral: \begin{equation*} J=\int_{-\infty}^{\infty} \phi_{N-1}\big( \sqrt{N}z\big) e^{\frac{N (\nu'-\nu'')z^2}{2(\nu'' + \nu')} } \int_{-\infty}^{\frac{1}{\alpha} (\frac{(\nu''+\nu')u}{\sqrt{2\nu''}}- \nu' z)} e^{-\frac{N\nu''w^2}{\nu'+\nu''}} \mathrm{d} w \; \mathrm{d} z. \end{equation*} We write the outside integral $ \int_{-\infty}^{\infty} \mathrm{d} z $ as $\int_{-\infty}^{M} + \int_{M}^{\infty}$ with $M = \frac{(\nu'+\nu'')u}{\sqrt{2\nu''}\nu'}$. 
The inside integral is just a Gaussian integral and therefore after a straight-forward computation the problem amounts to compute the asymptotics of the three following one-dimensional integrals: \begin{eqnarray*} J_1 &=& \int_{M}^{\infty} \phi_{N-1}(\sqrt{N}z) e^{-N(\frac{\nu'^2+\nu''-\nu'}{2(\nu''+\nu'-\nu'^2)} z^2 - \frac{\sqrt{2\nu''}\nu'u}{\nu''+\nu'-\nu'^2}z)} \mathrm{d} z \\ J_2&=& \int_{M}^{\infty}\phi_{N-1}(\sqrt{N}z) e^{-N\frac{2(\nu'+\nu'')}{\nu''-\nu'}z^2} \mathrm{d} z \end{eqnarray*} as $J = (J_1 + J_2) (1+O(N^{-1/2}))$ if $N$ is even and $J = (J_1 - J_2) (1+O(N^{-1/2}))$ if $N$ is odd. Take $u \leq0$. We use Lemma \ref{lemmanecessa} in both cases. Note that by \eqref{eq2p}, $a = \frac{\nu'^2+\nu''-\nu'}{2(\nu''+\nu'-\nu'^2)} > \frac{1}{2}$ and $b= - \frac{\sqrt{2\nu''}\nu'u}{\nu''+\nu'-\nu'^2} \geq0$. Now the condition $M\leq-\sqrt{2}$ ($M>-\sqrt{2}$) is exactly the condition $u\leq -E_\infty'$ ($u>-E_\infty'$). Applying the appropriate cases of Lemma \ref{lemmanecessa} we see that the integral $J_2$ is negligible compared to $J_1$. A comparison with \eqref{eq:qsela} and \eqref{rafuta} gives the proof of part (1) and part (2) of the Theorem with $a$ and $b$ as above, \begin{equation}\label{finalalpha} \alpha(w)= \arctan\bigg(\frac{1-a \cos \omega}{2a\sin \omega + \frac{b \sin(\omega/2)}{\sqrt{2}}}\bigg) \; \text{and} \; f(\omega)= \bigg(|m'(2\omega)| \sin^{1/2} \omega\bigg)^{-1} \end{equation} where $m$ is given in \eqref{eq:m}. Part (3) follows from symmetry of the Hamiltonian and \eqref {eq:eulde}. \end{proof}
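As a numerical aside (not part of the proofs), the characterization of $f_1$ obtained in Lemma \ref{lemmadacon} is straightforward to evaluate: one solves the fixed-point equation $a\log a - a + 1 - (a-1)^2/\nu' = 0$ for $a>1$, sets $y=\sqrt{a}/\sqrt{\nu'}$ and $f_1 = y + \frac{\nu'-1}{y\nu'}$, and, as a consistency check, compares with the critical-case closed form of Proposition \ref{calodoido} evaluated at $\nu'' = a\nu'$. The short Python sketch below does exactly this; it assumes SciPy is available, and the value $\nu'=3$ is purely illustrative.

\begin{verbatim}
# Numerical illustration of the formula for f_1 (illustrative only, nu1 = nu'(1) = 3).
import math
from scipy.optimize import brentq

nu1 = 3.0

# Fixed-point equation: a*log(a) - a + 1 - (a-1)^2/nu1 = 0, with a > 1.
g = lambda a: a * math.log(a) - a + 1 - (a - 1) ** 2 / nu1
a_star = brentq(g, 1.0 + 1e-4, 100.0)   # root strictly above the trivial root a = 1

y  = math.sqrt(a_star / nu1)            # y = sqrt(a)/sqrt(nu')
f1 = y + (nu1 - 1) / (y * nu1)          # f_1 = y + (nu'-1)/(y nu')

# Cross-check: for the critical nu'' = a*nu' (so that G(nu',nu'') = 0), the
# closed form of the critical case gives f_1 = (nu''-nu'+nu'^2)/(nu' sqrt(nu'')).
nu2_crit  = a_star * nu1
f1_closed = (nu2_crit - nu1 + nu1 ** 2) / (nu1 * math.sqrt(nu2_crit))

print(f1, f1_closed)                    # both approximately 1.657 for nu1 = 3
\end{verbatim}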
\section{Introduction} Decentralized cryptocurrencies have witnessed explosive growth since the first decentralized distributed currency Bitcoin was launched in 2009~\cite{Nakamoto2008}. In contrast to traditional currencies, decentralized cryptocurrencies are traded among participants over a peer-to-peer (P2P) network without relying on trusted third parties like banks or financial regulatory authorities. As the backbone technology, blockchain protocol provides an effective consensus mechanism to successfully solve problems about incentive, tamper-resistance, trust and so on~\cite{Kiayias2016}. Recently, blockchain has heralded many applications in various fields, such as finance~\cite{Guo2016}, Internet of Things~\cite{Christidis2016}, smart grid~\cite{Kang2017} and cognitive radio~\cite{Kotobi2017}. According to the market research firm Tractica, it is estimated that the annual revenue for enterprise applications of blockchain will increase to \$19.9 billion by 2025~\cite{Tractica}. It is also worth noting that the access scope of the blockchain can be not only \emph{public} as Bitcoin, but also \emph{private} or \emph{consortium/community}~\cite{Buterin2015} where blockchain networks are established and usually managed by the blockchain owner\footnote{A good example is R3 consortium (\url{https://www.r3.com/}) experimenting an Ethereum private blockchain on \href{https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft-azure-blockchain.azure-blockchain-service}{Microsoft Azure Blockchain Service}.}. The security and reliability of blockchains depend on a distributed consensus mechanism. Specifically, a group of participants in the blockchain network, called \emph{miners,} try to solve a computationally difficult problem, i.e., the \emph{proof of work} (PoW) puzzle, where the process is called \emph{mining}. First, each miner receives and selects certain number of transaction records from public. Once solving the puzzle, the miner will broadcast a \emph{block} which combines the transaction records and relevant information in the blockchain network. Next, this block will be verified by the majority of other miners for consensus and then finally be added to the blockchain. The miner which successfully finishes the above steps will receive a fixed reward and certain transaction fees as incentives of mining. However, blockchain applications in mobile environments are still seldom realized because solving the PoW puzzle needs high computing power and large amount of energy which mobile devices cannot satisfy. In this paper, we consider the edge computing services for mobile users to deploy their mining tasks and thus support the mobile blockchain applications. Specifically, we discuss the allocation and pricing issue for edge computing resource. We first propose an auction-based market model. The market consists of three entities, i.e., blockchain owner, edge computing service provider (ESP) and miners. Considering the competition among miners~\cite{Kiayias2016} and network effects of blockchain by nature~\cite{Catalini2016}, we then study the auction mechanism with allocative externalities to maximize the social welfare. Our social maximization mechanism is truthful (incentive compatible), individually rational and computationally efficient. Based on our real-world experiment of mobile blockchain, we analyze the probability of successfully mining a block and verify the probability function. 
Our simulation results show that the proposed auction market model can not only help the ESP devise practical sale strategies, but also guide the blockchain owner in adjusting the blockchain protocol. To the best of our knowledge, this is the first work that investigates resource management and pricing in the mobile blockchain with an auction model. The rest of this paper is organized as follows. Section~\ref{sec:Related-Work} reviews related work, and the system model of the edge computing resource market for mobile blockchain is introduced in Section~\ref{sec:System-Model}. Section~\ref{sec:Social-welfare-maximization} formulates the social welfare maximization problem and gives theoretical analysis. Section~\ref{sec:Experiment-and-numerical} presents experimental results of mobile blockchain and performance analysis. Finally, Section~\ref{sec:Conclusions} concludes the paper. \section{Related Work\label{sec:Related-Work}} Over recent years, there have been some studies on the economics and mining strategies in the blockchain network. In one of the pioneering papers, the authors in~\cite{Kroll2013} modeled the mining process as a game played by miners. Each miner's strategy is to choose which branch of the blockchain to mine on. They proved the existence of a Nash equilibrium when all miners behave as expected by the Bitcoin designer. Further, they explored the case where some miners deviate from the expected behavior, which makes the blockchain network unstable and vulnerable. The authors in~\cite{Houy2014} proposed a game model in which the occurrence of solving the PoW puzzle is modeled as a Poisson process. Miners have to decide the size of the block to broadcast as their response. Analytical solutions to the Nash equilibrium in a two-miner case were given. In~\cite{Lewenberg2015}, the authors designed a cooperative game model to investigate the mining pool. In the pool, miners form a coalition to accumulate their computational power and obtain a steady reward. However, these works only studied the internal mining scheme and paid little attention to the actual operation of the blockchain in more dynamic environments, i.e., mobile blockchain. This motivates us to adopt edge computing as the underlying technology for the mobile blockchain network and to build an edge computing resource market model. The blockchain technology can achieve more extensive and promising applications in the mobile information society. Auction design has been widely studied in other resource allocation problems, such as spectrum trading~\cite{Gao2011,Zhou2008} and data crowdsensing~\cite{Yang2016}. However, none of these works can be directly applied to edge computing applications for mobile blockchain, since they focus only on properties specific to the problems they study. For example, in the blockchain networks, the allocative externalities~\cite{Jehiel2005,Salek2008} should be considered because miners care about the computational power allocated to the others. The authors in~\cite{Luong} used deep learning to recover the classical optimal auction for revenue maximization and applied it to edge computing resource allocation. However, that work focuses less on the auction design itself and cannot guarantee incentive compatibility. 
\section{System Model: Mobile Blockchain and Market Model\label{sec:System-Model}} \begin{figure}[tbh] \begin{centering} \includegraphics[width=1\columnwidth]{figure/system_model_blockchain} \par\end{centering} \caption{Edge computing resource market for mobile blockchain.\label{fig:system-model}} \end{figure} Figure~\ref{fig:system-model} shows the auction-based market model for trading edge computing resources. The \emph{blockchain owner} launches a blockchain application and designs the protocol for blockchain network operation. The \emph{mobile users} buy resources from the edge computing service provider (ESP) and become miners. In the miners network, they take part in the mining process to contribute new blocks to the blockchain. \subsection{Mobile Blockchain} Blockchain can be used to develop applications with mobile devices, as indicated in our earlier study~\cite{Suankaewmanee2018}. To support the blockchain based service, there are a set of miners continuously running a consensus protocol~\cite{Nakamoto2008} to confirm and secure distributed data or transactions at backend. According to the protocol, miners are required to finish the mining task, i.e., solving the PoW puzzle. The mining process is conducted in a tournament structure, and miners chase each other to obtain the solution. Specifically, the PoW algorithm involves finding a nonce value that, together with additional fields about all valid and received transactions, the previous block and the timestamp, the output satisfies a given condition. If the nonce is found, the miner will combine it and additional fields into a block and then broadcast the block to peers in the blockchain network for verification and reaching consensus. Finally, the new block can be linked to the existing accepted chain of blocks. However, for a mobile user, it is unrealistic to continuously run such a computationally difficult program which requires high computing power and consumes a large volume of energy and time. Because the outstanding characteristics of edge computing: low latency, mobility and wide-spread geographical distribution~\cite{Ahmed2016}, we consider offloading the mining tasks to the edge servers. \subsection{Edge Computing Resources Trading \label{subsec:Edge-Computing-Resources}} As shown in Fig.~\ref{fig:system-model}, we consider a scenario where there is one ESP, one blockchain owner and a community of mobile users $\mathcal{N=}\{1,\ldots,N\}$. Each mobile user wants to be a miner which runs a mobile blockchain application to record and verify the transactions or data sent to the community. Due to the computing limitation on their devices, mobile users want to offload the task of solving PoW to the nearby edge computing servers deployed by the ESP. In particular, the ESP launches an auction to sell its services. It first announces its service and relevant information to mobile users. Then, the mobile users submit their resource demand profile $\mathbf{d}=(d_{1},\ldots,d_{N})$ and corresponding bids $\mathbf{b}=(b_{1},\ldots,b_{N})$ which represents their valuations of the offered services. After receiving the demands and bids, the ESP selects the winners as the successful miners and notifies the mobile users the allocation $\mathbf{x}=(x_{1},\ldots,x_{N})$ and the service price $\mathbf{p}=(p_{1},\ldots,p_{N})$. The setting $x_{i}=1$ means user $i$ is within the winner list and being allocated resources that it demands for while $x_{i}=0$ is for no resource\footnote{The user becomes a miners if it wins the auction.}. 
$p_{i}$ is the sale price that user $i$ is charged by the ESP\footnote{The payment for user which is not allocated any resource is zero, i.e., $p=0$.}. At the end of the auction, the winners make the payment and access the edge computing service. \subsection{Blockchain Mining with Edge Computing Service \label{subsec:Blockchain-Mining-with-ES} } With the allocation $x_{i}$ and demand $d_{i}$, miner $i$'s hash power $\gamma_{i}$ relative to other miners' allocated resources can be calculated by: \begin{equation} \gamma_{i}(\mathbf{d},\mathbf{x})=\frac{d_{i}^{\alpha}x_{i}}{\sum_{j\in\mathcal{N}}d_{j}^{\alpha}x_{j}}\label{eq:hash-power} \end{equation} which is a fraction function that $\sum_{i\in\mathcal{N}}\gamma_{i}=1.$ $\alpha$ is the curve fitting parameter of the hash power function $\gamma_{i}(\mathbf{d},\mathbf{x})$ verified by our real-world experiment, the detail of which will be presented in Section~\ref{sec:Experiment-and-numerical}. In the mining tournament, miners compete to be the first to solve PoW with correct nonce value and propagate the block to reach consensus. The generation of new blocks follows a Poisson process with a constant rate $\frac{1}{\lambda}$ throughout the whole blockchain network~\cite{Kraft2016}. Before the tournament, miners collect unconfirmed transactions into their blocks. We represent the size of transactions of each miner by $\mathbf{s}=(s_{1},\ldots,s_{N})$. When miner $i$ propagates its block to the mobile blockchain network for consensus, the time for verifying each transaction is affected by the size of transactions $s_{i}$. The first miner which successfully has its block achieve consensus can get a reward $R$. The reward is composed of a fixed bonus $T$ for mining a new block and a flexible transaction fee $t$ determined by the size of its collected transactions $s$ and the transaction fee rate $r$~\cite{Houy2014}. Thus, miner $i$'s expected reward $R_{i}$ can be expressed by: \begin{equation} R_{i}=(T+rs_{i})\mathbb{P}_{i}(\gamma_{i}(\mathbf{d},\mathbf{x}),s_{i}),\label{eq:Expected-reward} \end{equation} where $\mathbb{P}_{i}(\gamma_{i}(\mathbf{d},\mathbf{x}),s_{i})$ is the probability that miner $i$ receives the reward by contributing a block to the blockchain. From the mining tournament above, winning the reward depends on the successful mining and instant propagation. The probability of mining a new block $P_{i}^{m}$ is equal to miner $i$'s hash power $\gamma_{i}$, i.e., $P_{i}^{m}=\gamma_{i}$. However, the miner may even lose the tournament if its new block does not achieve consensus as the first. This kind of mined block that cannot be added on to the blockchain is called orphaned block~\cite{Houy2014}. Moreover, the block containing larger size of transactions has higher chance becoming orphaned. This is because a larger block needs more propagation time, thus causing higher delay for consensus. Here, we assume miner $i$'s block propagation time $\tau_{i}$ is linear to the size of transactions in its block, i.e., $\tau_{i}=\xi s_{i}$. $\xi$ is a constant that reflects the impact of $s_{i}$ on $\tau_{i}$. 
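For illustration, the relative hash power in (\ref{eq:hash-power}) can be evaluated with a few lines of code. The sketch below is ours and is only a minimal example: it uses the curve-fitting value $\alpha=1.2$ obtained later in Section~\ref{sec:Experiment-and-numerical}, and the demand values are purely illustrative.

\begin{verbatim}
# Relative hash power of a miner: d_i^alpha * x_i / sum_j d_j^alpha * x_j.
def hash_power(d, x, alpha=1.2):
    total = sum(dj ** alpha * xj for dj, xj in zip(d, x))
    if total == 0:                # no resources allocated at all
        return [0.0] * len(d)
    return [di ** alpha * xi / total for di, xi in zip(d, x)]

# Example: three winning miners with CPU-utilization demands 20, 40 and 60.
print(hash_power([20, 40, 60], [1, 1, 1]))   # fractions summing to 1
\end{verbatim}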
Since the arrival of new blocks follows a Poisson distribution, miner $i$'s orphaning probability can be approximated as follows~\cite{Rizun2015}: \begin{equation} P_{i}^{o}=1-\exp(-\frac{1}{\lambda}\tau_{i}).\label{eq:p-orphan} \end{equation} After substituting $\tau_{i}$, we can express $\mathbb{P}_{i}$ as follows: \begin{align} \mathbb{P}_{i} & =P_{i}^{m}(1-P_{i}^{o})\label{eq:p-mining-a-block}\\ & =\gamma_{i}e^{-\frac{1}{\lambda}\xi s_{i}}.\nonumber \end{align} \subsection{Blockchain Management} The blockchain owner maintains the blockchain mining protocol that specifies the fixed bonus $T$ for the contributing miner and the transaction fee rate $r$. Through adjusting the difficulty of finding a new block, the blockchain owner keeps the average time $\lambda$ at a reasonable constant value\footnote{It is worth noting that the blockchain owner is not a central entity that controls the data storage and mining strategies of the miners. Similar to the bitcoin protocol~\cite{Nakamoto2008}, the blockchain owner in this paper only designs the protocol specifies the value of $T$, $r$ and $\lambda$, and does not affect the decentralization and security of blockchain. }. Additionally, a blockchain in PoW systems is only as secure as the amount of computing power dedicated to mining it~\cite{Catalini2016}. This results in positive network effects: as more mobile users participate in mining and more computing resources are invested, the value of reward given to miners increases since the blockchain network is more stable and secure. Empirically, we define the network effects by a common S-shaped utility function~\cite{Jackson2010}: \begin{equation} w(d_{\mathcal{N}})=\frac{1-e^{-\nu d_{\mathcal{N}}}}{1+\mu e^{-\nu d_{\mathcal{N}}}},\label{eq:S-shape} \end{equation} where $d_{\mathcal{N}}=\sum_{i\in\mathcal{N}}d_{i}x_{i}$ is the total quantity of allocated resources and $\mu,\nu$ are positive parameters. The monotonic increase of network effect function begins slowly from $0$, then accelerates (convexly), and then eventually slows down (concavely) and converges asymptotically to $1$. \section{Social Welfare Maximization Auction for Edge Computing Service\label{sec:Social-welfare-maximization}} In this section, we propose an auction mechanism for the ESP to allocate edge computing resources efficiently. We focus on maximizing the social welfare while guaranteeing the truthfulness, individual rationality and computational efficiency. \subsection{Valuation of mobile users} To take part in the auction, a mobile user needs to give the bid representing its valuation to the auctioneer, i.e., the ESP. Since the mobile user $i$ cannot know the number of winners and total supply of computing resources until auction ends, it can only give the bid $b_{i}$ according to its expected reward $R_{i}$ which is also called \emph{ex-ante} valuation $v_{i}^{'}$, i.e., \begin{equation} v_{i}^{'}=R_{i}\label{eq:ex-ante-valuation-o} \end{equation} After the auction result is released, user $i$ has an \emph{ex-post} valuation $v_{i}^{''}$ of the edge computing service considering network effects, which is defined by \begin{equation} v_{i}^{''}=R_{i}w\label{eq:ex-post-valuation-o} \end{equation} where $w$ is the network effect defined in~(\ref{eq:S-shape}). 
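Before substituting these ingredients into each other, a small numerical example may help fix ideas. The sketch below evaluates (\ref{eq:Expected-reward})--(\ref{eq:ex-post-valuation-o}) for a single miner. The protocol and network-effect parameters are borrowed from the simulation setup in Section~\ref{sec:Experiment-and-numerical} ($T=2.5$, $r=0.007$, $\lambda=600$, $\xi=1$, $\mu=0.5$, $\nu=0.005$), while the miner-specific numbers are purely illustrative.

\begin{verbatim}
# Illustrative evaluation of the reward and valuation quantities for one miner.
import math

T, r    = 2.5, 0.007         # fixed bonus and transaction fee rate
lam, xi = 600.0, 1.0         # average block time and propagation-delay constant
mu, nu  = 0.5, 0.005         # network-effect parameters

gamma_i = 1.0 / 3.0          # miner i's relative hash power (illustrative)
s_i     = 500.0              # size of miner i's collected transactions
d_total = 300.0              # total allocated resources sum_i d_i x_i

P_mine   = gamma_i                          # probability of mining the block first
P_orphan = 1 - math.exp(-xi * s_i / lam)    # orphaning probability
P_i      = P_mine * (1 - P_orphan)          # probability of winning the reward
R_i      = (T + r * s_i) * P_i              # expected reward
w        = (1 - math.exp(-nu * d_total)) / (1 + mu * math.exp(-nu * d_total))

v_ex_ante = R_i                             # ex-ante valuation
v_ex_post = R_i * w                         # ex-post valuation with network effects
print(P_i, R_i, v_ex_ante, v_ex_post)
\end{verbatim}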
After substituting (\ref{eq:hash-power}), (\ref{eq:p-mining-a-block}), (\ref{eq:S-shape}) into (\ref{eq:ex-ante-valuation-o}) and (\ref{eq:ex-post-valuation-o}), we have the specific expression of user $i$'s ex-ante and ex-post valuation: \begin{equation} v_{i}^{'}=(T+rs_{i})e^{-\frac{1}{\lambda}\xi s_{i}},\label{eq:ex-ante} \end{equation} \begin{equation} v_{i}^{''}=\frac{d_{i}^{\alpha}x_{i}}{\sum_{j\in\mathcal{N}}d_{j}^{\alpha}x_{j}}\frac{1-e^{-\nu\sum_{i\in\mathcal{N}}d_{i}x_{i}}}{1+\mu e^{-\nu\sum_{i\in\mathcal{N}}d_{i}x_{i}}}(T+rs_{i})e^{-\frac{1}{\lambda}\xi s_{i}}.\label{eq:ex-post} \end{equation} \subsection{Auction Maximizing Social Welfare} Once receiving bids $\mathbf{b}$ from all the mobile users, ESP will select winners and determine corresponding payments to maximize the social welfare. Let $c$ denote the unit cost of running the edge computing service. ESP's total cost is $C(d_{\mathcal{N}})=cd_{\mathcal{N}}$. Thus, designing such an auction becomes solving an optimization problem: \begin{align} \max_{\mathbf{x}} & \sum_{i\in\mathcal{N}}\frac{d_{i}^{\alpha}x_{i}}{\sum_{j\in\mathcal{N}}d_{j}^{\alpha}x_{j}}\frac{1-e^{-\nu\sum_{i\in\mathcal{N}}d_{i}x_{i}}}{1+\mu e^{-\nu\sum_{i\in\mathcal{N}}d_{i}x_{i}}}(T+rs_{i})e^{-\frac{1}{\lambda}\xi s_{i}}\nonumber \\ & \quad-\sum_{i\in\mathcal{N}}cd_{i}x_{i}\label{eq:socail-welfare-original}\\ s.t. & \sum_{i\in\mathcal{N}}d_{i}x_{i}\leq D\label{eq:constraint-supply}\\ & x_{i}\in\{0,1\},\forall i\in\mathcal{N} \end{align} where the objective function in (\ref{eq:socail-welfare-original}) is the difference between the sum of all users' ex-post valuations and ESP's total cost. The constraint in (\ref{eq:constraint-supply}) defines the maximum quantity of computing resources that ESP can offer denoted by $D$. Based on the above system model, we first consider a simpler case where all mobile users submit various bids to compete for a fixed quantity of resources. Without loss of generality, we set $d_{i}=1,\forall i\in\mathcal{N}$. Then, the optimization problem can be expressed as follows: \begin{align} \max_{\mathbf{x}} & \sum_{i\in\mathcal{N}}\frac{x_{i}}{\sum_{j\in\mathcal{N}}x_{j}}\frac{1-e^{-\nu\sum_{i\in\mathcal{N}}x_{i}}}{1+\mu e^{-\nu\sum_{i\in\mathcal{N}}x_{i}}}(T+rs_{i})e^{-\frac{1}{\lambda}\xi s_{i}}\nonumber \\ & \quad-\sum_{i\in\mathcal{N}}cx_{i}\label{eq:socail-welfare-original-unit}\\ s.t. & \sum_{i\in\mathcal{N}}x_{i}\leq D\label{eq:constraint-supply-1}\\ & x_{i}\in\{0,1\},\forall i\in\mathcal{N}\label{eq:xi_integer} \end{align} We aim to solve this integer programming efficiently while making the auction process truthful and individually rational. The proposed auction is based on the Myerson's well-known characterization~\cite{Myerson1981} as described in Theorem~\ref{thm: truthful-condition}. \begin{thm} (\cite[Theorem~13.6]{Nisan2007}) An auction is truthful if and only if it satisfies the following two properties:\label{thm: truthful-condition} \end{thm} \begin{enumerate} \item \emph{Monotonicity of winner selection rule: If user $i$ wins the auction with bid $b_{i}$, then it will also win with any higher bid $b_{i}'>b_{i}$.} \item \emph{Critical payment: The payment by a winner is the smallest value needed in order to win the auction. } \end{enumerate} By using Theorem~\ref{thm: truthful-condition}, our auction mechanism is illustrated in Algorithm~\ref{alg:1}. In Lines~5-12, the winner selection process is conducted with a greedy scheme. We define a winner set $\mathcal{W}$. 
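For readers who prefer executable pseudocode, the following is a minimal Python sketch of the two stages of Algorithm~\ref{alg:1} for the unit-demand case: the greedy winner selection (Lines~5-12) and the VCG-style payment rule (Lines~13-24). The helper names, the toy bids and the parameter values are ours, and the sketch omits the tie-breaking and bookkeeping details of the listing.

\begin{verbatim}
# Sketch of the greedy selection and VCG-style payments (unit demands, illustrative).
import math

def welfare(bids_subset, mu, nu, c):
    """S(W): average bid scaled by the S-shaped network effect, minus the cost c*|W|."""
    k = len(bids_subset)
    if k == 0:
        return 0.0
    w = (1 - math.exp(-nu * k)) / (1 + mu * math.exp(-nu * k))
    return w * sum(bids_subset) / k - c * k

def greedy_select(bids, D, mu, nu, c):
    """Add users in descending bid order while the social welfare does not decrease."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners, S = [], 0.0
    for i in order:
        if len(winners) >= D:
            break
        S_new = welfare([bids[j] for j in winners] + [bids[i]], mu, nu, c)
        if S_new < S:
            break
        winners, S = winners + [i], S_new
    return winners, S

def vcg_payments(bids, winners, D, mu, nu, c):
    """p_j = (optimal welfare without user j) - (welfare of the other winners)."""
    payments = {}
    for j in winners:
        others = [b for i, b in enumerate(bids) if i != j]
        _, S_without_j = greedy_select(others, D, mu, nu, c)
        S_others = welfare([bids[i] for i in winners if i != j], mu, nu, c)
        payments[j] = S_without_j - S_others
    return payments

# Toy run; the larger nu makes a five-user example non-trivial.
bids = [3.1, 2.7, 2.2, 1.5, 0.4]
winners, S = greedy_select(bids, D=2, mu=0.5, nu=0.5, c=0.02)
print(winners, S, vcg_payments(bids, winners, D=2, mu=0.5, nu=0.5, c=0.02))
\end{verbatim}

Note that, because of the positive network effects, a winner whose presence does not change the outcome for the others may be charged a very small or even zero payment.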
Including a user $i$ in the set is equivalent to assigning $x_{i}=1$. Thus, we rewrite the problem in an alternative form as follows: \begin{align*} \max_{\mathcal{W}\subseteq\mathcal{N}} & S(\mathcal{W})\\ s.t. & S(\mathcal{W})=\sum_{i\in\mathcal{W}}\frac{1}{\left|\mathcal{W}\right|}\frac{1-e^{-\nu\left|\mathcal{W}\right|}}{1+\mu e^{-\nu\left|\mathcal{W}\right|}}b_{i}-c\left|\mathcal{W}\right|,\\ & \left|\mathcal{W}\right|\leq D \end{align*} where $\left|\mathcal{W}\right|$ measures the number of winners in $\mathcal{W}$, $S(\mathcal{W})$ is the social welfare of $\mathcal{W}$, and $b_{i}=v_{i}'=(T+rs_{i})e^{-\frac{1}{\lambda}\xi s_{i}}$ because the auction is truthful. In the winner selection process (Lines~6-12), mobile users are first sorted in descending order according to their bids. We then add users sequentially to the winner set $\mathcal{W}$, stopping before the corresponding social welfare $S(\mathcal{W})$ decreases. Finally, the solution $\mathcal{W}$ is output by the algorithm. \begin{prop} The resource allocation $\mathbf{x}$ output by Algorithm~\ref{alg:1} is globally optimal for the social welfare maximization problem given in (\ref{eq:socail-welfare-original-unit})-(\ref{eq:xi_integer}).\label{prop:resource-allocation} \end{prop} \begin{IEEEproof} This result follows from Claim~\ref{claim:3} by contradiction. \end{IEEEproof} \begin{claim} Let $\mathcal{W}_{A}$ be the solution output by Algorithm~\ref{alg:1} on input $\mathbf{b}$, and $\mathcal{W}_{O}$ the optimal solution. If $\mathcal{W}_{A}\neq\mathcal{W}_{O}$, then we can construct another solution $\mathcal{W}_{O}^{*}$ whose social welfare $S(\mathcal{W}_{O}^{*})$ is strictly larger than $S(\mathcal{W}_{O})$.\label{claim:3} \end{claim} \begin{IEEEproof} Without loss of generality, we assume $b_{1}\geq\cdots\geq b_{N}$ and $\mathcal{W}_{A}\neq\mathcal{W}_{O}$. Let $m$ be the first element (from the while-loop in Lines~6-12) such that $m\notin\mathcal{W}_{O}$. Since $m$ is maximal ($b_{m}$ is minimal by assumption), we must have $1,\ldots,m-1\in\mathcal{W}_{O}$ and in particular, the corresponding set of bids $\mathbf{b}_{\mathcal{W}_{O}}$ has the form $\mathbf{b}_{\mathcal{W}_{O}}=\{b_{1},b_{2},\ldots,b_{m-1},b'_{m},b'_{m+1},\ldots,b'_{\left|\mathcal{W}_{O}\right|}\}$, where the bids $b_{1},\ldots,b'_{\left|\mathcal{W}_{O}\right|}$ are listed in descending order. Meanwhile, Algorithm~\ref{alg:1} chooses $\mathbf{b}_{\mathcal{W}_{A}}=\{b_{1},b_{2},\ldots,b_{m-1},b_{m},b_{m+1},\ldots,b_{\left|\mathcal{W}_{O}\right|}\}$ and we must have $b_{m}>b'_{j}$ for all $j\geq m$. In particular, we have $b_{m}>b'_{m}$. Hence, we define $\mathbf{b}_{\mathcal{W}_{O}^{*}}=\mathbf{b}_{\mathcal{W}_{O}}\cup\{b_{m}\}\setminus\{b'_{m}\}$, i.e., we obtain $\mathbf{b}_{\mathcal{W}_{O}^{*}}$ by deleting the $m$th bid in $\mathbf{b}_{\mathcal{W}_{O}}$ and adding $b_{m}$. Now we have the social welfare of $\mathbf{b}_{\mathcal{W}_{O}^{*}}$: \[ S(\mathcal{W}_{O}^{*})=S(\mathcal{W}_{O})+\frac{1}{\left|\mathcal{W}_{O}\right|}\frac{1-e^{-\nu\left|\mathcal{W}_{O}\right|}}{1+\mu e^{-\nu\left|\mathcal{W}_{O}\right|}}(b_{m}-b'_{m}). \] Since $b_{m}-b'_{m}>0$ and $\left|\mathcal{W}_{O}^{*}\right|=\left|\mathcal{W}_{O}\right|$, $S(\mathcal{W}_{O}^{*})$ is strictly larger than $S(\mathcal{W}_{O})$, which contradicts the assumption that $\mathcal{W}_{O}$ is the optimal solution. This proves the claim. 
\end{IEEEproof} In Lines~13-24, for each iteration, we exclude one winner from the user set and rerun the winner selection process to calculate the payment for the winner. The payment calculation is based on the well-known Vickrey\textendash Clarke\textendash Groves (VCG) mechanism~\cite{Krishna2009}. \begin{algorithm}[tbh] \begin{algorithmic}[1] \Require{Mobile users' bid profile~$\mathbf{b}=({b_{1}},\ldots,{b_{N}})$.} \Ensure{Resource allocation profile $\mathbf{x}=({x_{1}},\ldots,{x_{N}})$ and payment profile~$\mathbf{p}=({p_{1}},\ldots,{p_{N}})$.} \Begin \ForEach{$i \in \mathcal{N}$} \State{$x_i \gets 0, p_i \gets 0$} \EndFor \State{$\mathcal{W}\gets \varnothing, \mathcal{W}_{t}\gets \varnothing, S \gets 0, S_t \gets 0$} \Comment{$\mathcal{W}$ is the set of winners.} \While{$S \leq S_t, \mathcal{W}_{t} \neq \mathcal{N}, {\left|\mathcal{W}_{t}\right|} \leq D$} \State{$j \gets {\arg\max}_{j \in \mathcal{N}\setminus\mathcal{W}_{t}}b_{j}$} \State{$\mathcal{W} \gets \mathcal{W}_{t}$} \State{$\mathcal{W}_{t} \gets \mathcal{W} \cup \{j\}, S \gets S_t$} \State{$S_t\gets\frac{1}{\left|\mathcal{W}_{t}\right|}\frac{1-e^{-\nu\left|\mathcal{W}_{t}\right|}}{1+\mu e^{-\nu \left|\mathcal{W}_{t}\right|}}\sum_{l\in\mathcal{W}_{t}}b_l-c\left|\mathcal{W}_{t}\right|$} \EndWhile \ForEach{$j \in \mathcal{W}$} \State{$x_j \gets 1$} \State{$\mathcal{N}_{-j} \gets \mathcal{N} \setminus \{j\}, \mathcal{W}_{-j} \gets \mathcal{W} \setminus \{j\}$} \State{$\mathcal{W}' \gets \varnothing, \mathcal{W}_{t}' \gets \varnothing, S' \gets 0, S_{t}' \gets 0$} \While{$S' \leq S_{t}', \mathcal{W}_{t}' \neq \mathcal{N}_{-j}$} \State{$k \gets {\arg\max}_{k \in \mathcal{N}_{-j}\setminus\mathcal{W}_{t}'}b_k$} \State{$\mathcal{W}' \gets \mathcal{W}_{t}'$} \State{$\mathcal{W}_{t}' \gets \mathcal{W}' \cup \{k\}, S' \gets S_{t}'$} \State{$S_{t}'\gets\frac{1}{\left|\mathcal{W}_{t}'\right|}\frac{1-e^{-\nu\left|\mathcal{W}_{t}'\right|}}{1+\mu e^{-\nu \left|\mathcal{W}_{t}'\right|}}\sum_{l\in\mathcal{W}_{t}'}b_l-c\left|\mathcal{W}_{t}'\right|$} \EndWhile \State{$p_j=S'-\frac{1}{\left|\mathcal{W}_{-j}\right|}\frac{1-e^{-\nu\left|\mathcal{W}_{-j}\right|}}{1+\mu e^{-\nu \left|\mathcal{W}_{-j}\right|}}\sum_{l\in\mathcal{W}_{-j}}b_l+c\left|\mathcal{W}_{-j}\right|$} \EndFor \End \end{algorithmic} \caption{Social Welfare Maximization Auction \label{alg:1}} \end{algorithm} \begin{figure*}[t] \begin{equation} S(\{b_{1},\ldots b_{k-1},b_{k}\})=\frac{1-\mathrm{e}^{-\nu k}}{(1+\mu\mathrm{e}^{-\nu k})k}\left(\sum_{j=1}^{k-1}b_{j}+b_{k}\right)-ck<\frac{1-\mathrm{e}^{-\nu k}}{(1+\mu\mathrm{e}^{-\nu k})k}\left(\sum_{j=1}^{k-1}b_{j}+b_{i}^{+}\right)-ck=S\left(\{b_{1},\ldots,b_{k-1},b_{i}^{+}\}\right)\label{eq:inequality} \end{equation} \end{figure*} \begin{prop} The Social Welfare Maximization Auction (Algorithm~\ref{alg:1}) is truthful. \end{prop} \begin{IEEEproof} Since the calculation of payment by the algorithm relies on the VCG mechanism, it directly satisfies the second condition in Theorem~\ref{thm: truthful-condition}. For the first condition about monotonicity, we only need to show that if a winner $i$ raises its bid from $b_{i}$ to $b_{i}^{+}$ where $b_{i}^{+}>b_{i}$, it still stays in the set of winners. We denote the original set of winners as $\mathcal{W}$ and the new set of winners as $\mathcal{W}_{+}$ after winner $i$ changes its bid to $b_{i}^{+}$. The original bid set is $\mathbf{b}=\{b_{1},\ldots,b_{i},\ldots b_{N}\}$ $(i\leq\left|\mathcal{W}\right|)$, sorted in descending order. 
In addition, we define $S(\mathbf{b}_{\mathcal{U}})=S(\mathcal{U}),\forall\mathcal{U}\subseteq\mathcal{N}$ which means the social welfare of a set of bids is equal to the set of its corresponding users. We discuss the monotonicity in two cases: 1) Case 1: $b_{i-1}\geq b_{i}^{+}\geq b_{i}\geq b_{i+1}$. The new set of ordered bids is $\mathbf{b}^{+}=\{b_{1},\ldots,b_{i-1},b_{i}^{+},b_{i+1},\ldots b_{N}\}$. We have \begin{align} S(\{b_{1},\ldots,b_{i}^{+}\})= & \frac{1-e^{-\nu i}}{(1+\mu e^{-\nu i})i}\left(\sum_{j=1}^{i-1}b_{j}+b_{i}^{+}\right)-ci\nonumber \\ > & S(\{b_{1},\ldots,b_{i}\})=\sum_{j=1}^{i}\frac{1-e^{-\nu i}}{\left(1+\mu e^{-\nu i}\right)i}b_{j}-ci.\label{eq:S-case1} \end{align} The social welfare of new set of bids $\{b_{1},\ldots,b_{i}^{+}\}$ is larger than that of original set of bids $\{b_{1},\ldots,b_{i}\}$, which guarantees $b_{i}^{+}$ being in the set of winning bids. 2) Case 2: $b_{k-1}\geq b_{i}^{+}\geq b_{k}\geq\ldots\geq b_{i}$, $1<k<i$. The new set of ordered bids is $\mathbf{b}^{+}=\{b_{1},\ldots,b_{k-1},b_{i}^{+},b_{k},\ldots,b_{i+1},\ldots b_{N}\}$. We have \begin{equation} S(\{b_{1},\ldots,b_{k-1},b_{i}^{+}\})=\frac{1-\mathrm{e}^{-\nu k}}{(1+\mu\mathrm{e}^{-\nu k})k}\left(\sum_{j=1}^{k-1}b_{j}+b_{i}^{+}\right)-ck,\label{eq:S-case2-k} \end{equation} \begin{equation} S(\{b_{1},\ldots b_{k-1},b_{k}\})=\frac{1-\mathrm{e}^{-\nu k}}{(1+\mu\mathrm{e}^{-\nu k})k}\sum_{j=1}^{k}b_{j}-ck,\label{eq:S-case-original-k} \end{equation} \begin{equation} S(\{b_{1},\ldots,b_{k-1}\})=\frac{1-\mathrm{e}^{-\nu(k-1)}}{(1+\mu\mathrm{e}^{-\nu(k-1)})(k-1)}\sum_{j=1}^{k-1}b_{j}-c(k-1).\label{eq:S-case2-k-1} \end{equation} Since in the original set of bids $\mathbf{b}$, $\{b_{1,\ldots,}b_{k-1},b_{k},\ldots,b_{i}\}$ are all selected as winning bids, $S(\{b_{1},\ldots b_{k-1},b_{k}\})>S(\{b_{1},\ldots,b_{k-1}\})$. Because the inequality in (\ref{eq:inequality}), we have \[ S(\{b_{1},\ldots,b_{k-1},b_{i}^{+}\})>S(\{b_{1},\ldots,b_{k-1}\}), \] which implies that $b_{i}^{+}$still wins the auction. This concludes the proof. \end{IEEEproof} \begin{prop} The Social Welfare Maximization Auction (Algorithm~\ref{alg:1}) is computationally efficient and individually rational. \end{prop} \begin{IEEEproof} Since the time complexity of finding the maximum miner's bid is $O(N)$ and the number of winners is at most $N$, the time complexity of the winner selection process (while-loop, Lines~6-12) is $O(N^{2})$. In each iteration of payment calculation process (Lines~13-24), a similar winner selection process is executed. Therefore, the whole auction process can be performed in polynomial time with the time complexity of $O(N^{3})$ which is efficient. According to Proposition~\ref{prop:resource-allocation} and the properties of the VCG mechanism~\cite{Krishna2009}, the payment scheme in Algorithm~\ref{alg:1} guarantees the individual rationality. \end{IEEEproof} \section{Experiment results and performance analysis\label{sec:Experiment-and-numerical}} In this section, we provide simulation results of the proposed auction, from which we can further obtain useful decision making strategies for ESP and the blockchain owner. \subsection{Verification for Hash Power Function} An earlier real-world mobile blockchain mining experiment has been done in~\cite{Suankaewmanee2018,Xiong2017}. In the experiment, we designed a mobile blockchain client application in the Android platform and implemented it on three mobile devices (miners). 
Each of the three client applications generates transactions and then starts mining with one CPU core. The miners' CPU utilization rate is managed and measured on the Docker platform~\cite{docker}. Each mobile device mines the block under Go-Ethereum~\cite{Go-ethereum} blockchain framework. To verify the hash power function (\ref{eq:hash-power}), we vary one miner\textquoteright s service demand while fixing the other two miners\textquoteright{} service demand (CPU utilization) at 40 and 60. Besides, we set the number of transactions in each mined block to be 10 for all miners. Figure~\ref{fig::hash-power} shows the change of the hash power, i.e., the probability of successfully mining a block with different amount of computing resources. We note that the hash power function defined in (\ref{eq:hash-power}) can well fit the actual experimental data. From these results, we choose the hash power function with parameter $\alpha=1.2$ in the rest of this section. \begin{figure}[tbh] \begin{centering} \includegraphics[width=0.65\columnwidth]{figure/hash_power_function_fitting} \par\end{centering} \caption{Estimation of the hash power function $\gamma(d)$\label{fig::hash-power}.} \end{figure} \subsection{Simulation Results} We vary the number of mobiles users $N$ from $100$ to $1000$, the mining bonus $T$ from $0$ to $5$, and the transaction fee rate $r$ from $0.001$ to $0.009$. We set $\mu=0.5$, $\nu=0.005$, $\xi=1$ and $c=0.02$. The transaction size $s$ of each user is uniformly distributed over $[0,1000]$. Since the blockchain owner can adjust the average time of mining a block, we also varied the average time of mining a block $\lambda$ from $100$ to $1800$ with increment of $212.5$. Each measurement is averaged over $100$ instances. \begin{figure}[tbh] \begin{centering} \includegraphics[width=0.65\columnwidth]{figure/Num_customers2SW_NUM_WINNERS} \par\end{centering} \caption{Impact of the number of mobile users on social welfare $S$ and number of winners $\left|\mathcal{W}\right|$.\label{fig:N}} \end{figure} \begin{figure}[tbh] \begin{centering} \includegraphics[width=0.65\columnwidth]{figure/T_capital2SW_NUM_WINNERS} \par\end{centering} \caption{Impact of fixed bonus $T$.\label{fig:T}} \end{figure} \begin{figure}[tbh] \begin{centering} \includegraphics[width=0.65\columnwidth]{figure/r_lowercase2SW_NUM_WINNERS} \par\end{centering} \caption{Impact of the transaction fee rate $r$.\label{fig:r}} \end{figure} \begin{figure}[tbh] \begin{centering} \includegraphics[width=0.65\columnwidth]{figure/lambda2SW_NUM_WINNERS} \par\end{centering} \caption{Impact of the parameter $\lambda$.\label{fig:lambda}} \end{figure} \begin{enumerate} \item \emph{Impact of the number of mobile users $N$:} Figure~\ref{fig:N} shows the impact of the total number of mobile users $N$ on the social welfare $S$ and the number of participating users $\left|\mathcal{W}\right|$. We fix $T=2.5$, $r=0.007$ and $\lambda=600$. We observe that $\left|\mathcal{W}\right|$ and $S$ increase at diminishing rate as the base of mobile users becomes larger. Naturally, the ESP can select more winners as miners to increase the social welfare with more mobiles users. However, at the same time, the negative effects from the competition among a larger number of miners are apparent, which slows down the rise of the social welfare as well as the number of winners. \item Impact of the fixed bonus $T$ for mining a block and the transaction fee rate $r$: We set $N=600$ and $\lambda=600$. 
Fixing $r=0.007$ while varying $T$, and fixing $T=2.5$ while varying $r$, we consider the impact of the fixed bonus and the transaction fee rate on the social welfare and the number of selected miners. From Figs.~\ref{fig:T} and~\ref{fig:r}, we note that if the blockchain owner raises the bonus or the transaction fee rate, the social welfare grows nearly in proportion. However, the number of winners increases and then tends to level off. This is because there would be fierce competition if too many miners participated in the blockchain network, which causes a loss of social welfare. \item \emph{Impact of the average time $\lambda$ for successfully mining a block:} In Fig.~\ref{fig:lambda}, we fix $N=600$, $T=2.5$ and $r=0.007$. When the blockchain owner raises the difficulty of mining a block, represented by $\lambda$, the social welfare increases, while the number of winners initially increases and then declines. Note that the user's expected reward $R$, i.e., the valuation of the edge computing service, grows with increasing $\lambda$. When the difficulty $\lambda$ is small and each user's valuation is also small, the ESP has to accept more users, i.e., more winners, to maximize the social welfare. However, if the difficulty of mining a block becomes high and each user values the service more, the ESP can reduce the number of winning users while still achieving the optimal social welfare. Another reason for the decreasing number of winners is the increasingly intense competition among them. \end{enumerate} \section{Conclusions\label{sec:Conclusions}} In this paper, we have investigated the edge computing services that enable mobile blockchain. To efficiently allocate computing resources, we have proposed an auction-based market model to maximize the social welfare. In the auction design, we have considered allocative externalities, including the competition among the miners as well as the network effects in the blockchain network. Through theoretical analysis and simulations, we have shown that the auction mechanism is truthful, individually rational, and computationally efficient, and that it solves the social welfare maximization problem. For future work, we will consider variable demands and the corresponding bids of mobile users. \bibliographystyle{ieeetr}
\section{Introduction} \label{intro} Type Ia supernovae (SNe Ia) have become a major tool to determine cosmological parameters. As a consequence of their uniform properties and their enormous brightness they are well suited for cosmological distance measurements. However, they cannot be claimed to be perfect ``standard candles'', as they show a significant intrinsic scatter in their peak luminosities as well as in other characteristics. Therefore their cosmological application rests on empirical corrections of the peak luminosities based on correlations with other observables \citep[e.g.][]{phillips1993a}. Only such empirical corrections facilitated distance measurements of SNe Ia at high redshifts, which have led to the spectacular conclusion that the expansion of the universe currently accelerates \citep{riess1998a,perlmutter1999a}. One possibility to accommodate this result technically is a cosmological constant in the Einstein equations, possibly indicating a dark energy component of the universe (for a review see \citealt{leibundgut2001a}). This underscores the need for a theoretical explanation of correlations between characteristics that so far have been established only empirically. A theoretical understanding will help to answer questions such as whether the calibration procedures are affected by evolutionary effects. In the astrophysical standard model \citep[see][]{hillebrandt2000a}, SNe Ia are associated with thermonuclear explosions of carbon/oxygen white dwarf (WD) stars. The optical event is powered by the decay of radioactive species (e.g.\ $^{56}$Ni) produced in the thermonuclear burning. Numerical simulations on the basis of this scenario provide an approach to the understanding of calibration methods. Recently, there has been much progress in the three-dimensional modeling of the explosion process \citep{hillebrandt2000b,reinecke2002b,reinecke2002c, reinecke2002d,gamezo2003a}, and the question arises whether it is possible to reproduce the SN Ia diversity by varying the initial parameters of such models. This will be addressed in the present study, where we restrict the survey to so-called deflagration models of thermonuclear supernovae, which can be summarized as follows. After ignition near the center of the WD the flame propagates outward in the subsonic deflagration mode, i.e.\ it is mediated by thermal conduction of the degenerate electron gas. This outward burning produces an inverse density stratification in the gravitational field of the WD star, with dense fuel on top of hot and light ashes. Consequently, due to buoyancy (Rayleigh-Taylor) instabilities, burning bubbles form and rise into the fuel, leading to shear flows. Kelvin-Helmholtz instabilities generate strong turbulence, given that the Reynolds number typical of this situation is of the order of $10^{14}$. As a result, turbulent eddies decay to smaller scales, thereby forming a turbulent energy cascade. The interaction of the thermonuclear flame with these turbulent motions is the key feature of the deflagration model for SNe Ia. A laminar flame would burn far too slowly to release sufficient energy for an explosion of the star. However, the wrinkling of the flame due to turbulence and the accompanying flame surface enhancement increase the net burning rate and accelerate the flame. This defines the deflagration model of thermonuclear supernova explosions as a problem of turbulent combustion.
\citet{reinecke2002d} and \citet{gamezo2003a} could show that this model indeed leads to an explosion. Whether it reproduces all aspects of observed supernovae is still not fully explored \citep[e.g.][]{kozma2005a}. \citet{gamezo2004a} and \citet{hoeflich1998a} claim that a hypothetical transition from the deflagration mode of flame propagation to a supersonic detonation needs to be invoked at later phases of the explosion. We set aside such a transition because its physical origin is not understood \citep{niemeyer1999a}. Moreover, even in such a case the initial deflagration stage will be essential for understanding the SN Ia diversity, since large fractions of the energy and of the radioactive $^{56}$Ni (which powers the lightcurve) are produced here, and nonlinear effects in flame propagation are extremely sensitive to the initial conditions. The crucial role played by three-dimensional effects in deflagration SN Ia models calls for multi-dimensional simulations to study the diversity of such events. Most previous attempts to unveil the origin of the SN Ia diversity were, however, based on one-dimensional models \citep{bravo1993a, bravo1996a, hoeflich1998a, umeda1999b, iwamoto1999a, dominguez2000a, dominguez2000b, dominguez2001a}. These are hampered by the introduction of free parameters, owing to an incomplete description of the relevant physics, in addition to the initial parameters they intend to study. The description of the turbulent mixing as well as of the effective flame velocity is not inherent in one-dimensional models but is rather accomplished in a parametrized way. Due to these free parameters, empirical one-dimensional models are not sufficiently predictive to nail down explanations for the diversity of SNe Ia, but they can nevertheless provide valuable clues for possible trends. Systematic studies based on three-dimensional models overcome the ambiguity of the turbulent flame velocity and mixing. By correctly modeling these effects, multi-dimensional deflagration models contain no tunable parameters and possess a high predictive power. However, due to the challenging computational demands of three-dimensional models, the available studies of initial parameters are very incomplete. Applying a simplified setup, we present the first systematic survey of the impact of initial parameters on three-dimensional SN Ia models. The price of the simplicity (and possibly incompleteness) of our models is that we cannot set the absolute scale of the effects in the presented parameter study. Nevertheless, we are able to point out the trends resulting from varying the initial parameters. We restrict this first systematic study to variations of the central density, the initial carbon-to-oxygen (C/O) ratio, and the metallicity of the progenitor just prior to ignition. Our intention is to test the parameters independently, setting aside any realistic evolution of the progenitor system. For detailed progenitor evolution studies we refer to e.g.\ \citet{nomoto1985a}, \citet{hernanz1988a}, \citet{bravo1996a}, \citet{umeda1999a}, \citet{langer2000a}, and \citet{dominguez2001a}. Important parameters that are not addressed in this study are for instance rotation and the way of flame ignition \citep[see e.g.][]{woosley2004a}. Some effects of the ignition conditions on SN Ia models and nucleosynthesis have recently been discussed by \citet{travaglio2004a}.
In Sect.~\ref{num_model} we describe the numerical schemes we apply to model SNe Ia explosions and the nucleosynthesis, followed by a discussion of the parameter space to be explored in Sect.~\ref{param}. The features of the explosion models will be compared in Sect.~\ref{expl}, and Sect.~\ref{nuc} describes the nucleosynthetic yields of these models. Conclusions are drawn in Sect.~\ref{concl}. \section{Numerical Model} \label{num_model} The numerical model applied in our study consists of two parts. In a first step we simulate the hydrodynamics of the explosion process. Here, the description of the nuclear processes is very coarse. With the information gained from tracer particles advected in this simulation we perform a nucleosynthetic postprocessing as a second step. This enables us to infer the production of the individual isotopes. Both methods will be briefly described in the following. \subsection{Explosion dynamics} \label{exp_model_sect} The deflagration model of thermonuclear supernova explosions as outlined in Sect.~\ref{intro} was implemented in a numerical scheme by \citet{reinecke1999b,reinecke2002b}. We refer to these works for the details of the applied techniques and will only mention the basic aspects here. The major problem of SN Ia simulations is the vast range of relevant scales. The thickness of the flame is tiny compared with the dimensions of the WD star, and the turbulent cascade interacts with the flame down to the so-called Gibson scale, where the turbulent velocity fluctuations become comparable with the laminar flame speed. Neither the internal flame structure nor the Gibson scale can be resolved in multidimensional simulations in the foreseeable future, and thus the flame propagation and turbulence effects have to be adequately modeled in numerical simulations. Seen from the scale of the WD star, it is well justified to regard the unresolved flame as a discontinuity separating the fuel from the ashes. The description of flame propagation then has to track this interface, and a technique well-suited for this purpose is the so-called \emph{level set method} \citep{osher1988a}. It is widely used in simulations of combustion problems in engineering. In this technique, the flame front is associated with the zero level set of a scalar field $G$. For numerical reasons, $G$ is chosen to be a signed distance function with respect to the flame front. To model the flame propagation we evolve the $G$-field according to the scheme described by \citet{reinecke1999a}. In this scheme the effective flame velocity has to be provided. To this end, the notion is essential that turbulent combustion proceeds in different regimes \citep[e.g.][]{peters2000a}. For most parts of the supernova explosion the so-called \emph{flamelet regime} applies, in which the flame as a whole is wrinkled by turbulence. Here, the flame propagation is known to decouple from the microphysics of the burning process and to be determined exclusively by the turbulent motions \citep{damkoehler1940a}. These, however, are derived from a \emph{subgrid scale model}, first implemented in SN Ia simulations by \citet{niemeyer1995b}, which describes the effects of turbulence on unresolved scales. In this sense our model can be regarded as a Large Eddy Simulation (LES), well-known from computational fluid dynamics. Since flame propagation is modeled rather than resolved in our simulations, supplementary simulations of the physical processes on small scales have to be provided to ensure the validity of the underlying assumptions.
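To illustrate the flame-front description, the following minimal two-dimensional sketch (in Python) initializes $G$ as a signed distance function for a circular flame and propagates its zero level with a prescribed effective flame speed. It is an illustration only, not the scheme of \citet{reinecke1999a}: fluid advection, the coupling to the hydrodynamics, and the periodic re-initialization of $G$ are omitted, and all numbers are placeholders.
\begin{verbatim}
import numpy as np

# Minimal 2-D level-set sketch (illustration only).  The flame front is the
# zero level of a signed-distance-like field G, initialized for a circular
# flame of radius R0 and advanced with a prescribed effective flame speed s_T
# via  dG/dt = s_T |grad G|  (fluid advection omitted for brevity).

n, L = 128, 2.0e7                 # grid points and box size [cm] (placeholders)
dx = L / n
x = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

R0 = 1.5e6                        # initial flame radius [cm] (placeholder)
G = R0 - np.sqrt(X**2 + Y**2)     # G > 0: ashes, G < 0: fuel, G = 0: front

def propagate(G, s_T, dt):
    """One explicit first-order step of dG/dt = s_T |grad G|."""
    gx, gy = np.gradient(G, dx)
    return G + dt * s_T * np.sqrt(gx**2 + gy**2)

s_T = 1.0e7                       # effective flame speed [cm/s] (placeholder)
dt = 0.2 * dx / s_T               # CFL-limited time step
for _ in range(50):
    G = propagate(G, s_T, dt)

print(f"burnt area fraction after 50 steps: {(G > 0).mean():.3f}")
\end{verbatim}
For a signed distance function $|\nabla G| = 1$, so each step simply shifts the zero level outward by $s_\mathrm{T}\,\Delta t$; in practice $G$ has to be re-initialized periodically to retain this property.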
In this spirit \citet{roepke2003a, roepke2004a, roepke2004b} showed that flame propagation proceeds in a stabilized way below the Gibson scale. The hydrodynamics is modeled based on the PROMETHEUS implementation \citep{fryxell1989a} of the piecewise parabolic method \citep{colella1984a}. The equation of state of the WD material comprises contributions from a variably degenerate and relativistic electron gas, ions following Boltzmann statistics, a photon gas, and, where present, electron/positron pairs. A fully consistent treatment of the nuclear burning would require a complete reaction network. However, due to the restricted computational resources only a very simplified description of the nucleosynthesis is possible concurrent with the explosion simulation. Our implementation follows the approach suggested by \citet{reinecke2002b}, who include five species, viz.\ $\alpha$-particles, $^{12}$C, $^{16}$O, ``Mg'' as a representative of intermediate mass elements, and ``Ni'' representing iron group nuclei\footnote{In the following we set ``Ni'' and ``Mg'' in quotes when we refer to the iron group elements and intermediate mass elements followed in the explosion hydro-simulations. This is done to avoid confusion with the results of the nuclear postprocessing.}. The fuel is assumed to be a mixture of carbon and oxygen. At the initial high densities burning proceeds to nuclear statistical equilibrium (NSE), composed of $\alpha$-particles and ``Ni''. Depending on temperature and density in the ashes, the NSE composition changes, which has a significant impact on the explosion dynamics \citep{reinecke2002b}. Once the fuel density drops below $5.25 \times 10^7 \,\mathrm{g}\,\mathrm{cm}^{-3}$ due to the expansion of the WD, burning is assumed to terminate at intermediate mass elements. Below $1 \times 10^7 \,\mathrm{g}\,\mathrm{cm}^{-3}$ burning is switched off, since the flame is then expected to leave the flamelet regime and to enter the so-called distributed burning regime, in which turbulence penetrates the internal structure of the flame. This effect is ignored in the present study but was addressed by \citet{roepke2005a}. In order to achieve a more detailed analysis of the nucleosynthetic yields of the simulated supernova explosion, we advect tracer particles with the fluid motions, recording temperature, density, and internal energy as functions of time. These data then serve as input for the nucleosynthetic postprocessing. \subsection{Nuclear postprocessing} The nuclear postprocessing determines the nucleosynthetic yields of the explosion models \emph{a posteriori} from the data recorded by the tracer particles. The applied method is similar to that described by \citet{thielemann1986a} (there labeled as \emph{method (a), simple postprocessing}). Its application to SNe Ia explosions is discussed in detail by \citet{travaglio2004a}. \begin{table} \centering \caption{Nuclear reaction network (note that the elements below arsenic are irrelevant for SNe Ia).
\label{network_tab}} \setlength{\extrarowheight}{2pt} \begin{tabular}{p{0.15\linewidth}p{0.25\linewidth}|p{0.15\linewidth}p{0.25\linewidth}} \hline\hline element & atomic mass $A$ & element & atomic mass $A$\\ \hline n & 1 & Sc & 40 \ldots 50 \\ p & 1 & Ti & 42 \ldots 52 \\ He & 4, 6 & V & 44 \ldots 54 \\ Li & 6, 7, 8 & Cr & 46 \ldots 56 \\ Be & 7, 9, 10, 11 & Mn & 48 \ldots 58 \\ B & 8, 9 \ldots 12 & Fe & 50 \ldots 62 \\ C & 10 \ldots 15 & Co & 52 \ldots 63 \\ N & 12 \ldots 17 & Ni & 54 \ldots 67 \\ O & 14 \ldots 20 & Cu & 56 \ldots 69 \\ F & 17 \ldots 21 & Zn & 59 \ldots 72 \\ Ne & 18 \ldots 25 & Ga & 61 \ldots 76 \\ Na & 20 \ldots 26 & Ge & 63 \ldots 78 \\ Mg & 21 \ldots 28 & As & 71 \ldots 80 \\ Al & 23 \ldots 30 & Se & 74 \ldots 83 \\ Si & 25 \ldots 33 & Br & 75 \ldots 83 \\ P & 27 \ldots 35 & Kr & 78 \ldots 87 \\ S & 29 \ldots 38 & Rb & 79 \ldots 87 \\ Cl & 31 \ldots 40 & Sr & 84 \ldots 91 \\ Ar & 33 \ldots 44 & Y & 85 \ldots 91 \\ K & 35 \ldots 46 & Nb & 91 \ldots 97 \\ Ca & 37 \ldots 49 & Mo & 92 \ldots 98 \\ \hline \end{tabular} \end{table} The employed nuclear reaction network code was kindly provided by F.-K.~Thielemann. It comprises 384 isotopes, which are listed in Table~\ref{network_tab}, and takes into account $\beta$-decays, electron captures, photo-disintegrations, two-body reactions, and three-body reactions. A detailed description of the network is given by \citet{thielemann1996a} and \citet{iwamoto1999a}. As \citet{brachwitz2000a} and \citet{thielemann2003a} discussed previously, the new electron capture and $\beta$-decay rates by \citet{langanke2000a} and \citet{martinez-pinedo2000a} are included in the network. Since the description of the nuclear reactions in the hydrodynamic explosion simulation is coarse and $Y_e$ is assumed to be constant at a value of 0.5, the internal energy recorded by the tracer particles is employed to calculate a realistic temperature from a high-temperature equation of state \citep{timmes2000a} combined with an improved nuclear reaction network \citep[cf.][]{travaglio2004a}. The nucleosynthesis is calculated separately for each tracer particle. To level out variations in the data from the hydrodynamic simulation, the minimum temperature is set to $10^9 \, \mathrm{K}$. This measure guarantees stability of the nuclear reaction network code. Subsequently, the maximum temperature $T_\mathrm{max}$ is checked. If it does not exceed $2 \times 10^9 \, \mathrm{K}$, the corresponding material is treated as unprocessed. This approach is justified since the fuel consists only of $^{12}$C, $^{16}$O, and $^{22}$Ne, which below $2\times 10^9 \, \mathrm{K}$ burn hydrostatically, not contributing significantly to the nucleosynthetic yields over the simulated period of time. For tracers with $T_\mathrm{max} > 2 \times 10^9 \, \mathrm{K}$ the following procedure is applied: \begin{enumerate} \item \label{step1} Nuclear statistical equilibrium (NSE) is assumed if the temperature at the current time step $t_i$ is larger than $6 \times 10^9 \, \mathrm{K}$; in this case the strong reactions can be neglected and only the ``weak'' nuclear network is applied, updating $Y_e$. Otherwise the full reaction network is employed. \item \label{step2} Temperature and density are interpolated for the sample point at $t_{i+1}$. If these variables change by more than 5\% in the interval $[t_i, t_{i+1}]$, the time step is halved. \item The network is solved for $t_{i+1}$.
If a relative accuracy of $10^{-5}$ cannot be reached in a limited number of steps, the time step is halved again and we resume with point \ref{step2} of the scheme. If this measure fails, the tracer is ignored in the final result. Fortunately, the number of such cases could be drastically reduced to at most one out of $[27]^3$. When NSE is reached, the new abundances are calculated for the updated $Y_e$ at $t_{i+1}$. \item If the abundance of an isotope drops below $10^{-25}$, it is set to zero. \end{enumerate} \section{Parameter space} \label{param} The initial parameters we explore in our study (the carbon mass fraction $X(^{12}\mathrm{C})$, the central density $\rho_c$, and the metallicity $Z$ of the WD at ignition) are treated as independent. This allows us to disentangle the effects of the individual parameters on the explosion process. Nonetheless, the parameter space is chosen in agreement with values suggested by stellar evolution, as described below. Different values for the central density of the WD and the carbon-to-oxygen ratio of its material are applied in the explosion model itself. In contrast, we vary the metallicity only in the nucleosynthesis postprocessing. The nomenclature of the models is given in Table~\ref{models_tab}. \begin{table} \centering \caption{Model parameters. \label{models_tab}} \setlength{\extrarowheight}{2pt} \begin{tabular}{p{0.15\linewidth}p{0.2\linewidth}p{0.15\linewidth}p{0.2\linewidth}l} \hline\hline model & $\rho_c$ [$10^9\,\mathrm{g}\,\mathrm{cm}^{-3}$] & $X(^{12}\mathrm{C})$ & metallicity \\ \hline \emph{1\_1\_1} & 1.0 & 0.30 & $0.5 Z_\odot$\\ \emph{1\_1\_2} & 1.0 & 0.30 & $1.0 Z_\odot$\\ \emph{1\_1\_3} & 1.0 & 0.30 & $3.0 Z_\odot$\\ \hline \emph{1\_2\_1} & 1.0 & 0.46 & $0.5 Z_\odot$\\ \emph{1\_2\_2} & 1.0 & 0.46 & $1.0 Z_\odot$\\ \emph{1\_2\_3} & 1.0 & 0.46 & $3.0 Z_\odot$\\ \hline \emph{1\_3\_1} & 1.0 & 0.62 & $0.5 Z_\odot$\\ \emph{1\_3\_2} & 1.0 & 0.62 & $1.0 Z_\odot$\\ \emph{1\_3\_3} & 1.0 & 0.62 & $3.0 Z_\odot$\\ \hline \emph{2\_1\_1} & 2.6 & 0.30 & $0.5 Z_\odot$\\ \emph{2\_1\_2} & 2.6 & 0.30 & $1.0 Z_\odot$\\ \emph{2\_1\_3} & 2.6 & 0.30 & $3.0 Z_\odot$\\ \hline \emph{2\_2\_1} & 2.6 & 0.46 & $0.5 Z_\odot$\\ \emph{2\_2\_2} & 2.6 & 0.46 & $1.0 Z_\odot$\\ \emph{2\_2\_3} & 2.6 & 0.46 & $3.0 Z_\odot$\\ \hline \emph{2\_3\_1} & 2.6 & 0.62 & $0.5 Z_\odot$\\ \emph{2\_3\_2} & 2.6 & 0.62 & $1.0 Z_\odot$\\ \emph{2\_3\_3} & 2.6 & 0.62 & $3.0 Z_\odot$\\ \hline \end{tabular} \end{table} \subsection{Variation of the carbon mass fraction} The origin of the diversity in the carbon mass fraction has been studied by \citet{umeda1999a} by numerically evolving the corresponding binary systems with 3--9 $M_\odot$ WD progenitor stars. They found it to depend on the metallicity and the zero-age main sequence (ZAMS) mass of the WD progenitor, as well as on the mass of the companion star. These in turn determine the mass of the WD, $M_{\mathrm{WD},0}$, just prior to the onset of accretion. The main outcome of the survey was that $X(^{12}\mathrm{C})$ in the core of the WD decreases with increasing $M_{\mathrm{WD},0}$ and that the direct dependence of $X(^{12}\mathrm{C})$ on the metallicity is small, although the correlation between the ZAMS mass and $M_{\mathrm{WD},0}$ depends sensitively on it \citep{umeda1999b}. Taking into account the conditions ensuring that the WD will accrete mass until reaching $M_\mathrm{Ch}$, \citet{umeda1999a} infer that $X(^{12}\mathrm{C})$ may vary in the range from $\sim$$0.36$ to $\sim$$0.5$.
These values apply only to the convective core of the WD. The accreted material is assumed to be processed to a C/O ratio of $\sim$$1$, leading to a gradient of the carbon mass fraction inside the WD. This effect is ignored in our models, where we postulate a uniform C/O ratio throughout the entire star, employing values of 0.30, 0.46, and 0.62 for $X(^{12}\mathrm{C})$ (cf.~Table~\ref{models_tab}). \subsection{Variation of the central density} The variation of the central density in SN Ia progenitors just before the ignition of the flame is even more difficult to constrain. At least two effects determine the value of $\rho_c$. The first is the accretion history of the binary system \citep[see][for a detailed study of the accretion process]{langer2000a}. There seems to be only a narrow window in the range of possible accretion rates $\dot{M}$ in which carbon can be ignited centrally, avoiding off-center ignitions and gravitational collapse due to high electron-capture rates. \citet{nomoto1985a} report on two centrally ignited models with $\rho_c = 1.7 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$ and $\rho_c = 5.2 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$, respectively, and \citet{bravo1996a} find models in the range $1.8 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3} \lesssim \rho_c \lesssim 6.3 \times 10^9 \, \mathrm{g}\, \mathrm{cm}^{-3}$. However, the exact extent of that window is uncertain and depends additionally on the white dwarf mass and temperature. Initially cooler WDs are shifted to rather high central densities in the range $6 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3} \lesssim \rho_c \lesssim 1.3 \times 10^{10} \, \mathrm{g}\, \mathrm{cm}^{-3}$ \citep{hernanz1988a}. The second effect is the establishment of the thermal structure of the WD. Cooling due to plasmon neutrino losses and neutrino bremsstrahlung has to be taken into account \citep{iwamoto1999a}, and a (most uncertain) contribution may come from the convective Urca process \citep{paczynski1972a,barkat1990a,mochkovich1996a, lesaffre2005a}. \citet{bravo1993a} calculate models for central densities at ignition of $2.5 \times 10^9\,\mathrm{g} \,\mathrm{cm}^{-3}$, $4.0 \times 10^9\,\mathrm{g} \,\mathrm{cm}^{-3}$, and $8.0 \times 10^9 \,\mathrm{g} \,\mathrm{cm}^{-3}$, and \citet{iwamoto1999a} use values of $1.37 \times 10^9 \,\mathrm{g} \,\mathrm{cm}^{-3}$ and $2.12 \times 10^9 \,\mathrm{g} \,\mathrm{cm}^{-3}$. We assume central densities of $1.0 \times 10^9 \,\mathrm{g} \,\mathrm{cm}^{-3}$ and $2.6 \times 10^9 \,\mathrm{g} \,\mathrm{cm}^{-3}$ (see Table~\ref{models_tab}). Unfortunately, it is not yet possible to apply higher central densities, since electron captures would then become dynamically important. Although electron captures are correctly treated in the nuclear postprocessing, this effect is not implemented in the current explosion models. \subsection{Variation of the metallicity} Our ignorance concerning realistic progenitor evolution is evident in the approach of prescribing the metallicity $Z$ of the progenitor independently of the other parameters. Detailed stellar evolution calculations (e.g.~\citealt{umeda1999a,dominguez2001a}) have shown that it strongly influences the progenitor's central density and also the C/O ratio. Nevertheless, in the spirit of our exploration of possible effects in the explosion models, we set aside a realistic progenitor description and treat the metallicity as an independent parameter.
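For bookkeeping, the resulting grid of 18 parameter combinations and the naming convention of Table~\ref{models_tab} can be summarized in the following short Python sketch. It merely enumerates the model nomenclature; the simulations themselves are of course not reproduced by it.
\begin{verbatim}
from itertools import product

# Parameter grid of Table "models_tab": two central densities, three carbon
# mass fractions, three metallicities, named rho_X_Z (illustration only).
central_densities = {1: 1.0e9, 2: 2.6e9}          # [g cm^-3]
carbon_fractions  = {1: 0.30, 2: 0.46, 3: 0.62}   # X(12C)
metallicities     = {1: 0.5, 2: 1.0, 3: 3.0}      # [Z_sun]

models = {
    f"{i}_{j}_{k}": (rho, xc, z)
    for (i, rho), (j, xc), (k, z) in product(
        central_densities.items(), carbon_fractions.items(),
        metallicities.items()
    )
}
assert len(models) == 18
print(models["2_2_2"])   # -> (2.6e9, 0.46, 1.0), the reference case
\end{verbatim}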
A direct effect of the metallicity of the WD's progenitor is the $^{14}$N abundance after the CNO burning phase. In helium burning this nitrogen is then converted mostly to $^{22}$Ne. For simplicity, we assume a uniform distribution of this $^{22}$Ne, which is only justified in regions mixed by pre-ignition convection. An analytic estimate of the effect of the metallicity on the $^{56}$Ni production was given by \citet{timmes2003a}. They suggest a variation of $Z$ ranging from $1/3$ to 3 times solar, based on observations of field white dwarfs. Following this suggestion, we vary the $^{22}$Ne abundance in our models to simulate a variation in metallicity $Z$. In particular, we explore $Z_\odot$ (corresponding to $X(^{22}\mathrm{Ne}) = 0.025$), $0.5 Z_\odot$, and $3 Z_\odot$ (cf.~Table~\ref{models_tab}). \section{Simulation setup} \begin{figure*}[t] \centerline{ \includegraphics[width = 0.72 \linewidth] {roepke_fig01.eps}} \caption{Time evolution of the burning front for model \emph{2\_2\_X}. \label{evo_fig}} \end{figure*} A rather large number of simulations is required to cover the parameter space we aim at. We therefore have to minimize the computational expenses by applying a simple setup for the individual models. Our calculations span only one spatial octant and assume mirror symmetry with respect to the other octants. Full-star simulations \citep{roepke2005b} have shown that this approach does not miss large-scale flame features and thus -- although being a simplification -- does not restrict the validity of the model. The simulations were set up on a Cartesian computational grid that was equally spaced in the inner regions. To capture the expansion of the WD, the outer grid cells were widened exponentially. Recently, \citet{roepke2004d} showed that with a comoving computational grid the evolution can be followed to homologous expansion. This, however, is not applied in the present models. The resolution of the individual runs was rather low -- the computational domain was divided into $[256]^3$ grid cells, corresponding to a central grid resolution of $10^6 \, \mathrm{cm}$. In each direction the grid length in the outer 35 zones was increased successively by a factor of 1.15. As was pointed out by \citet{reinecke2002c}, the chosen resolution still guarantees the explosion characteristics to be numerically converged (possibly with the exception of the latest stages of the burning, where intermediate mass elements are produced). However, with this resolution it is not possible to set up reasonable multi-point ignition scenarios, since only a very small number of seed-bubbles could be resolved. This is certainly a drawback, because \citet{reinecke2002d} showed that such models give rise to more vigorous explosions. We restrict our simulations to the centrally ignited \emph{c3\_3d\_256} model of \citet{reinecke2002c}, in which the spherical initial flame geometry is perturbed with three toroidal rings (see the upper left panel of Fig.~\ref{evo_fig}). Note that we initially incinerate the same volume in all models, which does not correspond to the same mass for different central densities. This ensures the same initial numerical resolution of the flame front. For the construction of the WD near the Chandrasekhar mass we follow the procedure described by \citet{reinecke_phd}. We assume a cold isothermal WD with a temperature of $T_0 = 5 \times 10^5 \, \mathrm{K}$.
With the chosen values for the carbon mass fraction of the material and the central density we integrate the equations of hydrostatic equilibrium using the equation of state described in Sect.~\ref{exp_model_sect}. Depending on the central densities and compositions the masses of the resulting WDs vary slightly: for $\rho_c = 1.0 \times 10^9 \, \mathrm{g}\,\mathrm{cm}^{-3}$ and $\rho_c = 2.6 \times 10^9 \, \mathrm{g}\,\mathrm{cm}^{-3}$ the WD masses amount to $1.367\,M_\odot$ and $1.403\,M_\odot$, respectively. As tested by \citet{reinecke_phd}, the construction procedure guarantees stability of the WD over a time longer than simulated. The $[n_\mathrm{trace}]^3$ tracer particles are distributed in an $n_\mathrm{trace} \times n_\mathrm{trace} \times n_\mathrm{trace}$ equidistant grid in the integrated mass $M_0(r)$, the azimuthal angle $\phi$, and $\cos \theta$, so that each particle represents the same amount of mass. In order to improve the tracer particle statistics, a random offset to the coordinates was applied. This offset was chosen small enough to keep the tracer particles in their individual mass cells. The values of the density, the temperature and the internal energy at the tracer particle's location and its coordinates were recorded every $\sim$$1\,\mathrm{ms}$. This allows for an accurate reconstruction of the trajectories as well as the final velocities and the thermodynamical data. In the models presented in the following we set $n_\mathrm{trace} = 27$. To test the representation of the model in the tracer particles in cases of low central densities, this number was increased to 35 in test calculations, as will be discussed below. \section{Explosion models} \label{expl} The explosion simulation for the exemplary case of model \emph{2\_2\_X} (the metallicity does not affect the explosion dynamics in our implementation) at four different times is illustrated in Fig.~\ref{evo_fig}. The isosurface indicating the position of the flame front is rendered from the zero level set of the scalar field $G$. The computational grid plotted in these snapshots visualizes our setup with uniform grid cells in the inner region and an exponential growth of the grid spacing further out. Our initial flame configuration is shown in the upper left snapshot of Fig.~\ref{evo_fig}. In the subsequent snapshots the growth of instabilities and an increasing wrinkling of the flame front are visible. Once the flame enters the exponentially growing part of the grid, the resolution of flame features becomes coarser. However, at this stage the expansion of the WD decreases the density of the fuel to values where burning has largely ceased in our model. Thus the coarse flame resolution in late stages of the simulation does not affect the results. \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig02.eps}} \caption{Total energies in models (a) \emph{2\_3\_X}, (b) \emph{2\_2\_X}, (c) \emph{2\_1\_X}, (d) \emph{1\_3\_X}, (e) \emph{1\_2\_X}, and (f) \emph{1\_1\_X}. \label{etot_fig}} \end{figure} \begin{table} \centering \caption{Results of explosion models: produced masses of iron group elements (``Ni'') and intermediate mass elements (``Mg''), nuclear energy release, and total energy at the end of the simulations. 
\label{energy_tab}} \setlength{\extrarowheight}{2pt} \begin{tabular}{p{0.1\linewidth}p{0.15\linewidth}p{0.15\linewidth} p{0.15\linewidth}p{0.15\linewidth}} \hline\hline model & $M(\mbox{``Ni''})$ $[M_\odot]$ & $M(\mbox{``Mg''})$ $[M_\odot]$ & $E_\mathrm{nuc}$ $[10^{50}\,\mathrm{erg}]$ & $E_\mathrm{tot}$ $[10^{50}\,\mathrm{erg}]$\\ \hline \emph{1\_1\_X} & 0.3944 & 0.2067 & 6.974 & 2.714\\ \emph{1\_2\_X} & 0.3867 & 0.2081 & 7.445 & 3.140\\ \emph{1\_3\_X} & 0.3757 & 0.2144 & 7.870 & 3.563\\ \hline \emph{2\_1\_X} & 0.5178 & 0.1874 & 8.851 & 3.772\\ \emph{2\_2\_X} & 0.5165 & 0.1859 & 9.461 & 4.412\\ \emph{2\_3\_X} & 0.5104 & 0.1822 & 9.966 & 4.909\\ \hline \end{tabular} \end{table} \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig03.eps}} \caption{Energy generation rates in models (a) \emph{2\_3\_X}, (b) \emph{2\_2\_X}, (c) \emph{2\_1\_X}, (d) \emph{1\_3\_X}, (e) \emph{1\_2\_X}, and (f) \emph{1\_1\_X}. \label{egen_fig}} \end{figure} Fig.~\ref{etot_fig} shows the total energy production of our models. Due to the simple setup, all explosions are weak, but trends can clearly be identified. The energy releases of the different models are listed in Table~\ref{energy_tab}, which also provides the masses of produced iron group elements (``Ni'') and intermediate mass elements (``Mg''). In Figs.~\ref{egen_fig} and \ref{eturb_fig} the energy generation rates and the evolution of the turbulent energies in our models are plotted, respectively. \subsection{Variation of the progenitor's C/O ratio} \label{eplo_co} The effects of a variation of the progenitor's carbon-to-oxygen ratio on the SN Ia explosion models have been described by \citet{roepke2004c}. We extend the discussion here. Considering the explosion energetics first, Fig.~\ref{etot_fig} shows that a higher carbon mass fraction leads to an increased energy production for fixed central densities. Values are given in Table~\ref{energy_tab}. For both central densities the nuclear energy releases of the models increase by 12\% ($\sim$27\% in the total energies) changing $X(^{12}\mathrm{C})$ from 0.30 to 0.62. The observed trend is not surprising and can easily be explained by the burning process. The predominant effect is certainly the difference in the mean binding energy of the fuel. A higher carbon mass fraction increases the total energy generation for the simple reason that the binding energy of $^{12}\mathrm{C}$ is lower than that of $^{16}\mathrm{O}$ so that it releases more energy by fusion to iron group elements. A minor effect could be that the laminar burning velocity increases with $X(^{12}\mathrm{C})$ \citep{timmes1992a}. This, however, is negligible in our models, since already after a few time steps the flame propagation is completely determined by the turbulent flame speed. \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig04.eps}} \caption{Turbulent energies in models (a) \emph{2\_3\_X}, (b) \emph{2\_2\_X}, (c) \emph{2\_1\_X}, (d) \emph{1\_3\_X}, (e) \emph{1\_2\_X}, and (f) \emph{1\_1\_X}. \label{eturb_fig}} \end{figure} \begin{figure*}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig05.eps}} \caption{Flame surface of models with different carbon mass fraction at $t = 1.0 \, \mathrm{s}$. \label{grid_co_fig}} \end{figure*} It is noteworthy that the evolution of the energetics in the model does not show a strong temporal shift. 
The energy generation rate peaks at comparable times for the models with different carbon mass fractions (cf.\ Fig.~\ref{egen_fig} for the temporal evolution of the energy generation rate\footnote{Note that the peak at $t=0\,\mathrm{s}$ is caused by our setup, in which the initial flame is initialized by instantly incinerating the material behind it.}). Fig.~\ref{etot_fig} reveals that the total energies of our models are very similar for the largest part of the energy generation and differ only in the late phases. On this point our findings disagree with \citet{khokhlov2000a}. Although he speculates that a decreasing $X(^{12}\mathrm{C})$ would result in weaker explosions, he claims that the reason is a delay in the development of the buoyancy instabilities, which seems to be only a minor effect in our simulations. The reason for the difference in the interpretation of the results is possibly that the models of \citet{khokhlov2000a} were apparently not evolved beyond the maximum of energy generation, so that it is difficult to distinguish between a delay and an overall lower energy production. In contrast to the explosion energetics, the produced masses of iron group elements behave unexpectedly. The working hypothesis of \citet{umeda1999b}, predicting a larger production of $^{56}$Ni for larger carbon mass fractions, was based on the speculation that the resulting increased energy release would enhance buoyancy effects and thus accelerate the turbulent flame propagation. Consequently, more material would be burnt at high velocities, producing larger fractions of iron group elements. As emphasized by \citet{umeda1999b}, this hypothesis can only be tested in multi-dimensional simulations which treat the turbulent flame velocity in an unparametrized way. This is provided by our approach, but \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig06.eps}} \caption{Mean effective gravitational accelerations experienced by the flame fronts for models with different C/O ratios. \label{geff_co_fig}} \end{figure} surprisingly our models do not support the hypothesis. The energies in our models evolve similarly in the first part of the explosion, and no enhanced turbulent flame propagation is visible regardless of the carbon mass fraction. The similar temporal evolutions of the energetics in our models correspond to a striking similarity in the flame morphology. Fig.~\ref{grid_co_fig} illustrates the flame front in models with different C/O ratios at $t = 1.0 \, \mathrm{s}$. The extent of the burnt volumes is comparable. The similarities in the first part of the energy generation as well as in the flame morphologies and flame propagation indicate that the large-scale buoyancy effects, which feed energy into the turbulent cascade and thereby drive the flame propagation, are comparable in models with different C/O ratios. The buoyant velocities can be estimated from the relation \begin{equation} v_\mathrm{buoy} \sim \sqrt{\mathit{At}\, g\, L} \end{equation} for a single non-burning rising bubble of size $L$ subject to a gravitational acceleration $g$ \citep{davies1950a}. The Atwood number $\mathit{At}$ characterizes the contrast between the density inside the bubble ($\rho_\mathrm{i}$) and outside it ($\rho_\mathrm{o}$): \begin{equation} \mathit{At} = \frac{|\rho_\mathrm{o} - \rho_\mathrm{i}|}{\rho_\mathrm{o} + \rho_\mathrm{i}}. \end{equation} In a supernova explosion, the situation is, of course, much more complex, since bubbles burn and merge.
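Despite this complexity, the scaling is straightforward to evaluate. The following short Python snippet does so for purely illustrative placeholder numbers; they are not values taken from the simulations.
\begin{verbatim}
import math

# Order-of-magnitude evaluation of v_buoy ~ sqrt(At * g * L).
# All numbers below are illustrative placeholders.

def atwood(rho_fuel, rho_ash):
    return abs(rho_fuel - rho_ash) / (rho_fuel + rho_ash)

def v_buoy(At, g, L):
    return math.sqrt(At * g * L)

At = atwood(rho_fuel=2.0e9, rho_ash=1.6e9)   # modest density contrast
print(f"At = {At:.2f},  v_buoy ~ {v_buoy(At, g=1.0e9, L=1.0e7):.2e} cm/s")
\end{verbatim}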
Nevertheless, it is clear that the effective gravitational acceleration ($\mathit{At} \, g$) determines the flame propagation velocity -- not only directly on the largest scales but also through the mechanism of the interaction of the flame with the generated turbulence. This effect can only be revealed in multidimensional calculations, as presented here. Fig.~\ref{geff_co_fig} shows the mean effective gravitational acceleration ($\mathit{At} \, g$) experienced by the flame front. Only minor differences are visible here. The data point at $t = 0.0 \, \mathrm{s}$ is unphysical, since the material behind the flame had not been burnt at this instant and thus there is no density contrast across the flame yet. With temporal evolution there is a competition between a rapidly decreasing gravitational acceleration, due to the expansion of the star in the explosion, and an increasing density contrast across the flame at lower fuel densities \citep[cf.][]{timmes1992a}. As seen from Fig.~\ref{geff_co_fig}, the decreasing gravitational acceleration finally dominates this competition. Because of the very similar evolutions of the large-scale buoyancy effects, there are only small differences in the evolutions of the turbulent energies in models with varying carbon mass fractions (see Fig.~\ref{eturb_fig}\footnote{The values of $E_\mathrm{turb}$ are not significant at late times, since they are derived from the subgrid energy, which depends on the length of the grid cells. In the outer regions, which the flame enters at late times, the grid cells are elongated, and therefore $E_\mathrm{turb}$ rises again after reaching a peak at $t \sim 0.65 \, \mathrm{s}$ and $t \sim 1.05 \, \mathrm{s}$ for $\rho_c = 2.6 \times 10^9 \,\mathrm{g}\, \mathrm{cm}^{-3}$ and $\rho_c = 1.0 \times 10^9 \,\mathrm{g}\, \mathrm{cm}^{-3}$, respectively. For a uniform grid it would be expected to decrease monotonically after these peaks \citep[cf.][]{roepke2004d}.}). \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig07.eps}} \caption{Fuel densities at the mean flame locations as a function of the mass of ``ashes'' behind the flame front for models with different carbon mass fractions of the progenitor material. \label{mb_co_fig}} \end{figure} \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig08.eps}} \caption{Temporal evolution of the chemical composition in models with different carbon mass fractions of the progenitor material. \label{evalmass_fig}} \end{figure} If the larger energy release from burning carbon-rich fuel were directly converted into work expanding the ashes, enhanced buoyancy effects and an acceleration of the turbulent flame propagation should be observable in our simulations. The only way to bypass these effects is that the energy is buffered in a larger fraction of $\alpha$-particles present in the NSE. This fraction indeed increases with higher temperatures, and the consequences are twofold. First, the binding energy of the ashes is lowered and less energy is released from thermonuclear burning. Second, distributing the energy over an increased number of particles in the ashes decreases their temperature. Both effects lead to an increase in the density of the ashes, which suppresses the buoyancy effects. Hence the turbulent flame propagation velocity in carbon-rich fuel models is lowered to values comparable to those found in oxygen-rich fuel simulations. As a consequence, similar masses of fuel are burnt at particular fuel densities.
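This buffering argument can be made concrete with a back-of-the-envelope estimate. The following Python sketch uses standard tabulated binding energies per nucleon and ash compositions representative of the kind found in the models (cf.\ the estimate given below); it illustrates the mechanism only and is not output of the simulations.
\begin{verbatim}
# Illustration of the alpha-particle buffering argument (not model output).
# Binding energies per nucleon in MeV are standard tabulated values.

B = {"C12": 7.680, "O16": 7.976, "He4": 7.074, "Ni56": 8.642}  # MeV/nucleon

def q_release(x_c, x_alpha_ash):
    """Specific energy release: ash binding energy minus C/O fuel binding energy."""
    b_fuel = x_c * B["C12"] + (1.0 - x_c) * B["O16"]
    b_ash = x_alpha_ash * B["He4"] + (1.0 - x_alpha_ash) * B["Ni56"]
    return b_ash - b_fuel

q_poor = q_release(x_c=0.30, x_alpha_ash=0.21)          # carbon-poor fuel
q_rich_same_ash = q_release(x_c=0.62, x_alpha_ash=0.21)
q_rich_more_alpha = q_release(x_c=0.62, x_alpha_ash=0.21 * 1.2)

print(f"fixed ash composition : +{100 * (q_rich_same_ash / q_poor - 1):.0f}%")
print(f"20% more alphas in ash: +{100 * (q_rich_more_alpha / q_poor - 1):.0f}%")
\end{verbatim}
With these simple numbers, the $\sim$20\% advantage of the carbon-rich fuel in specific energy release shrinks to below 10\% once a modestly larger $\alpha$-particle fraction in the ashes is accounted for, in line with the estimate given in the text.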
To corroborate this, we plot in Fig.~\ref{mb_co_fig} the fuel density at the average flame front location as a function of the burnt mass. \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig09.eps}} \caption{Mean effective gravitational accelerations experienced by the flame fronts for models with different central densities. \label{geff_rc_fig}} \end{figure} Fig.~\ref{evalmass_fig} proves that the effect of energy buffering in the $\alpha$-particles indeed applies to our models, here shown for the models with a central density of $2.6 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$. Between $0.2 \, \mathrm{s}$ and $0.9\, \mathrm{s}$ the ashes contain significant amounts of $\alpha$-particles. The maximum mass fraction of $\alpha$-particles increases by about 20\% when changing the carbon mass fraction from 0.30 to 0.62. This effect is capable of compensating the differences in the fuel binding energies, according to the following estimate. At $t \sim 0.6\,\mathrm{s}$ (where the energy generation rate peaks, cf.\ Fig.~\ref{egen_fig}) the ashes in model \emph{2\_2\_X} contain about 21\% $\alpha$-particles and 79\% ``Ni''. If there were no change in the amount of $\alpha$-particles along with the change of the C/O ratio in the models, the nuclear energy release would increase by about 22\% when changing $X(^{12}\mathrm{C})$ from 0.30 to 0.62. In contrast, taking into account the observed 20\% increase in the $\alpha$-particles in the ashes, the difference in the nuclear energy release reduces to $\sim$5\%. This self-regulation mechanism has an important consequence. Since it suppresses the increased buoyancy effects which would otherwise arise with larger carbon mass fractions in the fuel, similar amounts of fuel are consumed by the flame at stages where burning terminates in NSE. Therefore the amount of produced iron group elements is kept nearly constant for different C/O ratios in the fuel. The $\alpha$-particles buffer the energy only temporarily. With further expansion and cooling of the WD they are converted to iron group elements (``Ni''), releasing the stored energy. This is the reason why the energies in the models diverge at later times. Then, however, the fuel density has dropped to values where burning terminates in intermediate mass elements, and hence the synthesis of iron group elements is unaffected. Therefore the models with different carbon-to-oxygen ratios produce similar amounts of iron group elements. Interestingly, we even find a slight decrease in the production of iron group elements for an increasing carbon mass fraction of the model. The same holds for the intermediate mass nuclei in the high central density models, while the trend is reversed in the models with lower central densities (cf.\ Table~\ref{energy_tab}). \subsection{Variation of the central density} \label{expl_dens} \begin{figure*}[t] \centerline{ \includegraphics[width = 0.667 \linewidth] {roepke_fig10.eps}} \caption{Flame surface of models with different central densities prior to ignition at $t = 1.0 \, \mathrm{s}$. \label{grid_rc_fig}} \end{figure*} For higher central densities at ignition, the explosion turns out to be more vigorous for a fixed carbon mass fraction of the WD material (cf.\ Fig.~\ref{etot_fig}, Table~\ref{energy_tab}). Here, the nuclear energy releases differ by about 26\% and the total energies vary by about 34\%. As for an increased carbon mass fraction, a higher density of the fuel accelerates the laminar burning \citep{timmes1992a}.
Again, this has little impact on our models, since burning is laminar only during the first few time steps. Two other effects are more significant here. First, for the higher central density, obviously more material is present at sufficiently high densities so that it can potentially be burnt to iron group elements. Second, for higher central densities the effective gravitational acceleration ($\mathit{At} \, g$) experienced by the flame is higher in the first $\sim$$0.9 \, \mathrm{s}$ (cf.\ Fig.~\ref{geff_rc_fig}). This enhances the development of flame structures resulting from the non-linear stage of the buoyancy instability. As a result, the turbulent cascade develops more quickly (cf.\ Fig.~\ref{eturb_fig}) and the turbulence-induced boost of the effective flame propagation velocity sets in earlier. This is shown in Fig.~\ref{grid_rc_fig}, where snapshots of the flame evolution at a fixed time are given for models with different central densities at ignition. Consequently, the production of iron group elements increases with higher central densities, while the amount of intermediate mass elements decreases. \section{Nucleosynthesis} \label{nuc} \begin{table*} \begin{center} \setlength{\extrarowheight}{2pt} \begin{tabular}{lllllll} \hline\hline model & $\rho_c [10^9\,\textrm{g} \, \textrm{cm}^{-3}]$& $X(^{16}\textrm{O}) $ & $X(^{12}\textrm{C}) $ & $X(^{22}\textrm{Ne})$ & $Z[Z_{\odot}]$ & $M(^{56}\textrm{Ni}) [M_{\odot}]$\\ \hline \emph{1\_1\_1} & $1.0$ & $0.7$ & $0.2875$ & $0.0125$ & $0.5$ & $0.2982$\\ \emph{1\_1\_2} & $1.0$ & $0.7$ & $0.275$ & $0.025$ & $1.0$ & $0.2876$\\ \emph{1\_1\_3} & $1.0$ & $0.7$ & $0.225$ & $0.075$ & $3.0$ & $0.2450$\\ \hline \emph{1\_2\_1} & $1.0$ & $0.54$ & $0.4475$ & $0.0125$ & $0.5$ & $0.2966$\\ \emph{1\_2\_2} & $1.0$ & $0.54$ & $0.435$ & $0.025$ & $1.0$ & $0.2860$\\ \emph{1\_2\_3} & $1.0$ & $0.54$ & $0.385$ & $0.075$ & $3.0$ & $0.2444$\\ \hline \emph{1\_3\_1} & $1.0$ & $0.38$ & $0.6075$ & $0.0125$ & $0.5$ & $0.2907$\\ \emph{1\_3\_2} & $1.0$ & $0.38$ & $0.595$ & $0.025$ & $1.0$ & $0.2805$\\ \emph{1\_3\_3} & $1.0$ & $0.38$ & $0.545$ & $0.075$ & $3.0$ & $0.2403$\\ \hline \emph{2\_1\_1} & $2.6$ & $0.7$ & $0.2875$ & $0.0125$ & $0.5$ & $0.3115$\\ \emph{2\_1\_2} & $2.6$ & $0.7$ & $0.275$ & $0.025$ & $1.0$ & $0.2999$\\ \emph{2\_1\_3} & $2.6$ & $0.7$ & $0.225$ & $0.075$ & $3.0$ & $0.2544$\\ \hline \emph{2\_2\_1} & $2.6$ & $0.54$ & $0.4475$ & $0.0125$ & $0.5$ & $0.3163$\\ \emph{2\_2\_2} & $2.6$ & $0.54$ & $0.435$ & $0.025$ & $1.0$ & $0.3046$\\ \emph{2\_2\_3} & $2.6$ & $0.54$ & $0.385$ & $0.075$ & $3.0$ & $0.2592$\\ \hline \emph{2\_3\_1} & $2.6$ & $0.38$ & $0.6075$ & $0.0125$ & $0.5$ & $0.3174$\\ \emph{2\_3\_2} & $2.6$ & $0.38$ & $0.595$ & $0.025$ & $1.0$ & $0.3065$\\ \emph{2\_3\_3} & $2.6$ & $0.38$ & $0.545$ & $0.075$ & $3.0$ & $0.2608$\\ \hline \end{tabular} \end{center} \caption{$^{56}$Ni masses synthesized according to the nucleosynthesis postprocessing.\label{ni_results_tab}} \end{table*} \begin{figure*}[ht] \centerline{ \includegraphics[width = 0.75\linewidth] {roepke_fig11.eps}} \caption{Final abundances for models with different C/O ratios. \label{abundance_co_fig}} \end{figure*} \begin{figure*}[ht] \centerline{ \includegraphics[width = 0.75\linewidth] {roepke_fig12.eps}} \caption{Final abundances for models with different central densities. \label{abundance_rc_fig}} \end{figure*} \begin{figure*}[ht] \centerline{ \includegraphics[width = 0.75\linewidth] {roepke_fig13.eps}} \caption{Final abundances for models with different metallicities.
\label{abundance_z_fig}} \end{figure*} After the nucleosynthesis postprocessing the abundances of the individual isotopes in the ashes are unveiled. Additionally, the parameter of the progenitor's metallicity comes into play here by assuming a certain fraction of the material to be composed of $^{22}$Ne. The lightcurves of SNe Ia are powered by the radioactive decay of $^{56}$Ni and $^{56}$Co. Therefore their peak luminosities are determined by the nucleosynthetic yields of the explosions rather than the energetics. Consequently, the nucleosynthetic postprocessing of explosion models can shed some light on the observed diversity of SN Ia events. $^{56}$Co decay is slow and thus the peak luminosity is a function of the produced amount of $^{56}$Ni. A compilation of the $^{56}$Ni masses derived from all our models by nucleosynthetic postprocessing can be found in Table~\ref{ni_results_tab}. Although we will focus here on the effect of initial parameters on the production of $^{56}$Ni, we will first discuss the overall picture of the nucleosynthesis yields. \subsection{The final yields} The freeze-out masses after completion of the $\beta$-decays are plotted in Figs.~\ref{abundance_co_fig} to \ref{abundance_z_fig} for different models. Here the usual normalization to the solar abundances and the $^{56}$Fe mass fraction was applied. Values are given in Table~\ref{final_yields_tab} in the online material. Fig.~\ref{abundance_co_fig} shows a comparison between models with different carbon mass fractions for fixed $\rho_c = 2.6 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$ and for solar metallicity. Obviously, the carbon mass fraction has only little effect on the final abundances. Though some variation is visible for the intermediate mass elements (Mg to Ca), there is practically no difference in the iron group yields for the different models. This is expected from the analysis of the explosion process in the previous section. Due to the energy buffering in the $\alpha$-particles, burning to NSE consumes almost identical masses of fuel, while the recombination of the $\alpha$-particles at the end of complete NSE burning leads to an additional energy release that varies with the C/O ratio. Therefore the incomplete burning in the models that follows burning to NSE proceeds differently in the various models. The variations in the final yields due to different central densities are illustrated in Fig.~\ref{abundance_rc_fig}. Here, the models \emph{X\_2\_2} are plotted, i.e. the C/O ratio is fixed to 0.81 and the metallicity is solar. The model with lower central density produces more intermediate mass elements, but the variations are small. In contrast, for higher central densities, there is a visible increase in the abundances of iron group isotopes, viz. titanium, vanadium, chromium, manganese, iron, cobalt, and nickel. The two effects that contribute to an increased mass consumption in complete NSE burning with higher $\rho_c$ were discussed in Sect.~\ref{expl_dens}. The resulting final yields are a natural consequence of these effects. Changes in the progenitor's metallicity resulting in different abundances of $^{22}$Ne in the WD material have a large impact on the final yields. To illustrate this influence, we consider the models \emph{2\_2\_X} for $0.5\,Z_\odot$, $1.0\, Z_\odot$, and $3.0\, Z_\odot$ (cf.\ Fig.~\ref{abundance_z_fig}). 
The variation of the $^{22}$Ne abundance is obvious and is caused by the fact that the progenitor's metallicity is represented by different mass fractions of that isotope in our simulations. The production of chromium, manganese, and iron isotopes is increased for higher metallicity, especially for isotopes with two more neutrons than the symmetric nuclei ($^{54}$Fe, $^{58}$Ni). An exception is $^{56}$Fe, which was used to normalize the abundances. This trend holds analogously for intermediate mass elements. In particular, one observes a higher abundance of $^{26}$Mg, $^{30}$Si, $^{34}$S, $^{38}$Ar and $^{42}$Ca with increased metallicity. Comparing the models \emph{2\_2\_2} and \emph{2\_2\_3}, the change is a factor of 11 for $^{26}$Mg and a factor of approximately $3$ for the other isotopes (cf.\ Table~\ref{final_yields_tab} in the online material). The other models, not shown in the table, give similar factors for identical metallicities. The increase of neutron-rich isotopes is caused by the fact that a higher progenitor metallicity results in an increased $^{22}$Ne mass fraction, which serves as a source of neutron excess. \subsection{Impact of the C/O ratio on the $^\textsf{56}$Ni mass} In contrast to the previous section, we analyze here the nucleosynthesis yields right after the explosion. The production of radioactive species is given in Table~\ref{radio_tab} in the online material. \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig14.eps}} \caption{$^{56}$Ni production depending on the carbon mass fraction for models with different central densities and solar metallicity. \label{nickel_co_fig}} \end{figure} Fig.~\ref{nickel_co_fig} shows the $^{56}$Ni production of the models as a function of the carbon mass fraction of the progenitor. The central densities are fixed to $\rho_c = 1.0 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$ and $\rho_c = 2.6 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$, respectively. The metallicities of the models shown here are set to $Z=Z_\odot$. We note only minor changes in the $^{56}$Ni masses (about 2\%) for both central densities, which is not surprising given the small variations in the flame morphology and advancement discussed in Sect.~\ref{expl_dens}. It should be noted in Fig.~\ref{nickel_co_fig} that the trend of the $^{56}$Ni production has opposite directions for the different central densities. While this feature is in accordance with the total production of iron group elements in the explosion models for low central densities, it is reversed for the high central density case (cf.\ Table~\ref{energy_tab}). In order to check whether an under-representation of NSE material in the tracer particles in the low density case was the origin, we recalculated these models with the number of tracers increased to $35^3$. The trend of decreasing $^{56}$Ni production with higher carbon mass fraction was weaker, but still had the same direction. Since the variations are at the percent level, it is beyond the accuracy of our models to judge whether the trend is of a physical nature or an artifact of our simulations. The result that the $^{56}$Ni production in the explosion phase is largely independent of the carbon mass fraction supports the conjecture of \citet{roepke2004c} that the peak luminosity of SNe Ia will be only marginally affected by the carbon-to-oxygen ratio of the progenitor WD star.
This conjecture was only based on the cumulative production of all iron group elements and is now confirmed by the derivation of the exact amounts of $^{56}$Ni via nucleosynthetic postprocessing. \subsection{Impact of the central density on the $^\textsf{56}$Ni mass} For a fixed C/O ratio of 0.81 and solar metallicity, our models produce $0.286\,M_\odot$ of $^{56}$Ni for $\rho_c = 1.0\times 10^9\,\mathrm{g}\,\mathrm{cm}^{-3}$ and $0.305\,M_\odot$ of $^{56}$Ni for $\rho_c = 2.6\times 10^9\,\mathrm{g}\,\mathrm{cm}^{-3}$, i.e.\ from the lower to the higher central density the $^{56}$Ni production increases by 7\%. These changes go along with the higher overall production of iron group nuclei at higher central densities (cf.\ Table~\ref{energy_tab}). The reasons for this effect have been discussed in Sect.~\ref{expl_dens}. Although somewhat larger than the changes found in the case of varying carbon mass fraction, the effect is still rather small. However, our study covers only part of the effects resulting from changing the central densities of the models. With a further increasing central density, electron captures will become important and the $^{56}$Ni production is expected to decrease while the total mass of iron group elements should still increase. Unfortunately, in the current study this effect could not be consistently modeled, but it will be addressed in forthcoming work. \subsection{Impact of the metallicity on the $^\textsf{56}$Ni mass} \begin{figure}[t] \centerline{ \includegraphics[width = \linewidth] {roepke_fig15.eps}} \caption{$^{56}$Ni production depending on the progenitor's metallicity for models with different central densities and C/O ratios. \label{nickel_z_fig}} \end{figure} \citet{timmes2003a} proposed an analytic model for the effect of the progenitor's metallicity on the $^{56}$Ni production in SN Ia explosions. Their reasoning is based on the assumptions that most of the $^{56}$Ni is produced between the $0.2 M_\odot$ and the $0.8 M_\odot$ mass shells in NSE and that $Y_e$ is constant during burning in that region. This is motivated by one-dimensional models. Furthermore, they take into account only the species with the highest mass fractions, in a first step $^{56}$Ni and $^{58}$Ni. Under these assumptions they derive a linear correlation between $Z$ and the produced $M(^{56}\mathrm{Ni})$: \begin{equation} \frac{M_{^{56}\mathrm{Ni}}(\tilde{Z})}{M_{^{56}\mathrm{Ni}}(\tilde{Z} = 0)} = 1 - 0.057 \tilde{Z}, \label{linear1} \end{equation} with $\tilde{Z} = Z/Z_\odot$. This equation is obtained from combining the equations \begin{equation} \frac{M_{^{56}\mathrm{Ni}}}{M_{^{56}\mathrm{Ni}}(\tilde{Z} = 0)} = 58 Y_e -28, \label{lin2} \end{equation} resulting from conservation of mass and charge, \begin{equation} X(^{22}\mathrm{Ne}) = 22 \left(\frac{X(^{12}\mathrm{C})}{12} + \frac{X(^{14}\mathrm{N})}{14} + \frac{X(^{16}\mathrm{O})}{16} \right), \label{ignore} \end{equation} approximating the $^{22}$Ne abundance resulting from the metallicity of the ZAMS progenitor, and \begin{equation}\label{ye-init} \begin{split} Y_e = \frac{10}{22} & X(^{22}\mathrm{Ne}) + \frac{26}{56} X(^{56}\mathrm{Fe}) \\ + \frac{1}{2} & \left [ 1 - X(^{22}\mathrm{Ne}) - X(^{56}\mathrm{Fe}) \right ], \end{split} \end{equation} giving the initial $Y_e$ of the white dwarf right before the explosion under the assumption of a uniform distribution of $^{22}$Ne and $^{56}$Fe.
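The algebra of this model can be checked directly. The following short Python snippet is an illustrative cross-check only (it is not part of our postprocessing pipeline); it combines Eqs.~(\ref{lin2}) and (\ref{ye-init}) under the simplifications adopted in the next paragraph, namely $X(^{56}\mathrm{Fe}) = 0$ and $X(^{22}\mathrm{Ne}) = 0.025\,\tilde{Z}$:
\begin{verbatim}
# Illustrative cross-check (not part of the nucleosynthesis pipeline):
# combine Eq. (lin2) with Eq. (ye-init) for X(56Fe) = 0 and
# X(22Ne) = 0.025 * Ztilde.
def ni56_ratio(Ztilde):
    X_Ne22 = 0.025 * Ztilde                              # 22Ne mass fraction
    Ye = (10.0 / 22.0) * X_Ne22 + 0.5 * (1.0 - X_Ne22)   # Eq. (ye-init)
    return 58.0 * Ye - 28.0                              # Eq. (lin2)

print(ni56_ratio(1.0))  # 0.9341..., i.e. slope -58/22*0.025 = -0.0659
\end{verbatim}
The resulting slope, $-0.0659$, is the value quoted below for Eq.~(\ref{our-lin}) and can be compared directly with the fitted ratios $-m/M_0$ in Table~\ref{metal_tab}.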
When the presence of $^{54}$Fe is taken into account, the factor 0.057 in Eq.~(\ref{linear1}) changes to 0.054 \citep[cf.][]{timmes2003a}. For a comparison of this analytic prediction with our models, we set $X(^{56}\mathrm{Fe}) = 0$ in Eq.~(\ref{ye-init}) since here the initial $Y_e$ is determined by $^{22}\mathrm{Ne}$ only. We set $X(^{22}\mathrm{Ne}) = 0.025 \tilde{Z}$, where $\tilde{Z} = Z/ Z_\odot$. If Eq.~(\ref{ye-init}) is now substituted into Eq.~(\ref{lin2}) (with $X(^{56}\mathrm{Fe}) = 0$, Eq.~(\ref{ye-init}) reduces to $Y_e = 1/2 - X(^{22}\mathrm{Ne})/22$), the following equation is obtained \begin{equation} \frac{M_{^{56}\mathrm{Ni}}(\tilde{Z})}{M_{^{56}\mathrm{Ni}}(\tilde{Z} = 0)} = 1 - \frac{58}{22}\,0.025 \tilde{Z}. \label{our-lin} \end{equation} To compare this linear dependence of the $^{56}$Ni mass on $\tilde{Z}$ with our simulations, a linear regression following \begin{equation} M_{^{56}\mathrm{Ni}}(\tilde{Z}) = M_0 + m \tilde{Z} \label{ansatz} \end{equation} was applied to our data. Here $M_0$ denotes the extrapolated value of $M(^{56}\mathrm{Ni})$ at $\tilde{Z} = 0$. Values for $m$ and $M_0$ for the different models are given in Table~\ref{metal_tab}. The coefficient of correlation is 0.9999 in all cases, which suggests a good agreement of our data with a linear dependence, but, of course, more data points would be desirable for a definite statement. \begin{table*} \centering \caption{Fit parameters according to Eq.~(\ref{ansatz}) for our models. \label{metal_tab}} \setlength{\extrarowheight}{2pt} \begin{tabular}{rllllll} \hline\hline model & \emph{1\_1\_X} & \emph{1\_2\_X} & \emph{1\_3\_X} & \emph{2\_1\_X} & \emph{2\_2\_X} & \emph{2\_3\_X} \\ \hline $-m$ $[M_{\odot}]$ & 0.0213 & 0.0209 & 0.0201 & 0.0228 & 0.0228 & 0.0227 \\ $M_0$ $[M_{\odot}]$ & 0.3089 & 0.3070 & 0.3007 & 0.3228 & 0.3275 & 0.3290 \\ $-m/M_0$ & 0.0690 & 0.0681 & 0.0668 & 0.0706 & 0.0696 & 0.0690 \\ \hline \end{tabular} \end{table*} To compare the slope of (\ref{our-lin}), i.e. $-0.0659$, with the fits to our data, we give values for $-m/M_0$ in Table~\ref{metal_tab}. The agreement is reasonable keeping in mind that Eqs.~(\ref{linear1}) and (\ref{our-lin}) were derived by assuming $^{56}$Ni and $^{58}$Ni to be the two most abundant isotopes in NSE with a constant $Y_e$. However, a significant amount of $^{56}$Ni seems to be produced in regions where the assumption of constant $Y_e$ breaks down. Nevertheless, the analytical model introduced in \citet{timmes2003a} provides an excellent explanation for the effect of the metallicity. Based on models different from the ones applied here, this was recently confirmed by \citet{travaglio2005a}. \section{Conclusions} \label{concl} In the present paper the impact of several progenitor parameters on three-dimensional SN Ia explosion models has been studied for the first time in a systematic way. Here, we investigated the effects of the progenitor's central density, its carbon mass fraction, and its metallicity. Of course, there may be several other parameters that possibly affect the light curve of SNe Ia (rotation of the progenitor, morphology of the ignition spot(s), a delayed detonation at varying densities, asphericities etc.), which were not addressed in the present survey. A first important point to note is that our numerical implementation as well as the underlying astrophysical model are evidently robust against variations of the initial conditions to a reasonable degree. On the one hand, the variations in the resulting features are relatively small. A deviation by orders of magnitude would have been a reason for concern, but all our models seem to be well-behaved.
On the other hand, the model is not too robust in the sense that variations of the initial parameters do show effects on the results, i.e.\ an intrinsic variability is preserved. The degrees of freedom expected for a SN Ia explosion are at least not entirely artificially suppressed in our model. Hence, our model fulfills requirements 2, 3, and 4 stated by \citet{hillebrandt2000a}. Another point is the absolute scale of the results. Given the limited resources of computational time and storage space, we had to restrict the models to a resolution of $256^3$ grid cells per octant. Although such models reach numerical convergence in global characteristics \citep{reinecke2002c}, it is not possible at this resolution to apply multi-spot ignition scenarios, which would produce more vigorous explosions. As a consequence, the explosion energy of all our models is rather low and the $^{56}$Ni production falls short of the nickel mass of a prototype SN Ia (\citet{contardo2000a} find $0.41 M_\odot$ of $^{56}$Ni for SN1994D). These restrictions exclude the possibility of finding the absolute scale of effects and hence requirement 1 of \citet{hillebrandt2000a} is not met in the current study. However, there is a fair chance that models with more elaborate initial flame representations will agree better with the absolute values of observed quantities \citep[see e.g.][]{travaglio2004a}. Nevertheless, the present parameter study should reveal the correct trends of the variation of SN Ia properties. A major uncertainty lies in the range of variation of the progenitor parameters. Although we applied values that are common in the literature, our parameter space is not derived from a realistic stellar evolution of the progenitor. Keeping this in mind, the maximum variation in $^{56}$Ni of about 27\% found in our parameter study can be regarded as a strong hint that the variations of the progenitor properties taken into account here provide a significant contribution to the scatter in SN Ia luminosities. However, it seems unlikely that these are sufficient to explain the full range of diversity in ``Branch normal'' SNe Ia. Of course, more elaborate models are required to assess this. Regarding the diversity of $^{56}$Ni production in our models resulting from the variation of the initial parameters, the following trends were found: \begin{itemize} \item The \emph{progenitor's carbon-to-oxygen ratio} has only a small impact on the amount of produced $^{56}$Ni. This is in strong contrast to the common assumption that the C/O ratio is a major source of luminosity variation in SN Ia explosions. The ``working hypothesis'' of \citet{umeda1999b} could not be confirmed by our models. The reason for this effect could be unveiled by our three-dimensional simulations. Since flame propagation in the deflagration stage is mainly determined by the turbulent motions of the material, the explosion dynamics is not altered as long as the buoyancy effects that generate the turbulence are comparable. This is the case in our models during the stages of iron group nucleosynthesis. Different energy releases resulting from differences in the fuel binding energies are compensated by a varying amount of $\alpha$-particles present in the ashes. These buffer the temperature of the ashes, and thus the densities are not altered substantially, ensuring the same buoyancies.
Consequently, the explosion dynamics is similar in the stages of iron group element synthesis for models with different C/O ratios in the fuel, resulting in a small variation of the produced $^{56}$Ni of only about 2\%. \item The \emph{central density} affects the $^{56}$Ni production. The variation found in our models amounts to about 7\%. This is explained by the fact that for higher central densities more material is burned under conditions where iron group elements are produced. Moreover, a higher central density increases the mean gravitational acceleration experienced by the flame front and thus enhances the generation of turbulence, thereby accelerating the flame propagation. Due to this effect, even more material is processed at higher densities where the reactions terminate in iron group elements. \item A greater effect (assuming that our parameter space is reasonable) was found for a variation of the \emph{metallicity} in the nuclear postprocessing. By varying the $^{22}$Ne mass fraction from 0.5 to 3 times solar, a variation of the produced $^{56}$Ni mass of about 20\% was found. Our models were consistent with the analytical prediction by \citet{timmes2003a} of a linear relation between the metallicity and $M(^{56}\mathrm{Ni})$. \end{itemize} The effects of varying C/O ratios and central densities of the progenitor on the supernova explosion are based on effects of the turbulent flame propagation and can thus only be revealed by three-dimensional models. However, we have to emphasize an important limitation of the results. Our analysis addresses only changes in the explosion process itself. For comparability of the simulations we assumed identical initial flame configurations. The ignition process, however, may be influenced by the carbon-to-oxygen ratio of the progenitor \citep{woosley2004a}. Since different initial flames can have a large impact on the explosion dynamics \citep[e.g.][]{reinecke2002d, gamezo2003a, calder2004a, roepke2005b}, the C/O ratio may still be an important parameter via this mechanism. We also emphasize that the present survey is incomplete towards higher central densities, at which electron captures in the ashes become important. These shift the burning products to neutron-rich isotopes, favoring $^{58}$Ni instead of $^{56}$Ni. This effect would be taken into account in our postprocessing procedure; however, electron captures may also become dynamically important with increasing central densities, since they reduce the electron pressure in the ashes. Unfortunately, this effect could not consistently be modeled in the current study. The explosion model assumes $Y_e = 0.5$. Effects of higher central densities will be addressed in forthcoming investigations. \subsection{Comparison with one-dimensional models} The effect of a variation in the carbon mass fraction of the progenitor on the produced $^{56}$Ni mass was studied by \citet{hoeflich1998a}. They applied a one-dimensional delayed detonation model. For a central density of $2.6 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$, solar metallicity and a presumed deflagration-to-detonation transition at a density of $2.7 \times 10^7 \, \mathrm{g} \, \mathrm{cm}^{-3}$, they calculated a model with a C/O ratio of 1/1 (DD21c in their notation) and a model with C/O reduced to 2/3. Here they find a decrease of the produced $M(^{56}\mathrm{Ni})$ of about 14\%. Assuming that a transition to detonation at such low densities as applied here does not alter the production of iron peak elements, a comparison with our models is possible.
However, the results of \citet{hoeflich1998a} are in contrast with ours. This may be mainly due to the fact that modeling the correct implications of the C/O ratio on the explosion results requires an accurate description of the multidimensional effects that dominate the flame propagation. \citet{bravo1993a} investigated the impact of the ignition density on the $^{56}$Ni production for one-dimensional deflagration models. For models with a central density of $2.5 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$ (R2 in their notation) and $4.0 \times 10^9 \, \mathrm{g} \, \mathrm{cm}^{-3}$ (R4) they find differences of about 7\%, which is in good agreement with our results. Although our results regarding the change in $^{56}$Ni production when varying the metallicity are in good agreement with the analytical prediction by \citet{timmes2003a} and with the study by \citet{travaglio2005a}, they disagree with the findings of \citet{hoeflich1998a}. They report an effect of only $\sim$5\% on the $^{56}$Ni production when changing the metallicity from $0.1 Z_\odot$ to $10 Z_\odot$. Contrary to this, the result of \citet{iwamoto1999a} that an increase of the metallicity from zero to solar decreases the $^{56}$Ni production by about 8\% is consistent with our results. \subsection{Cosmological significance} In order to discuss the cosmological significance of our results, we take a very simplistic view of the mechanism of the light curve. Following ``Arnett's rule'' \citep{arnett1982a} we assume that the total mass of $^{56}$Ni immediately determines the peak luminosity of the SN Ia event. Furthermore, we assume that a larger energy release in the explosion leads to a more rapid decline of the light curve \citep{pinto2000a}. It has to be noted, however, that this may only be a second-order effect in the light-curve shape. The main parameter here is the opacity given by the distribution of heavy elements \citep{mazzali2001a}. This can only be adequately addressed in detailed synthetic light curve calculations and will be ignored here. In the context of this simplification, we note that none of the tested parameters reproduces the peak luminosity--light curve shape relation by lowering the produced $^{56}$Ni mass accompanied by an increased energy release. While the carbon-to-oxygen ratio of the progenitor has little effect on the peak luminosity, it could alter the width of the light curve. The opposite holds for the progenitor's metallicity. Here, the peak luminosity can vary by about 20\%, but the explosion dynamics is unaffected. The central density prior to the ignition changes both the $^{56}$Ni production and the energy release. Unfortunately, our study is incomplete here. At higher values of the central density, the produced $^{56}$Ni mass could decrease due to electron captures while the energy release may still increase. This has to be tested in forthcoming studies. Another aspect is that we have ignored the interrelation of the parameters through stellar evolution here. Stellar evolution, however, predicts a lower C/O ratio for higher metallicities \citep[cf.][]{umeda1999a}. The effects of both parameters in this combination may possibly reproduce the trend of the peak luminosity--light curve shape relation. The final conclusions on the cosmological significance of the variations in the explosions found in the present study need to be drawn on the basis of synthetic light curves derived from our models. This is the subject of a subsequent publication.
\begin{acknowledgements} This work was supported in part by the European Research Training Network ``The Physics of Type Ia Supernova Explosions'' under contract HPRN-CT-2002-00303. \end{acknowledgements}
\section{Introduction} Channel quantization in a network with multiple receivers is fundamentally different from that in a point-to-point system. In a point-to-point system, the receiver can acquire the entire channel state information (CSI) and send the corresponding quantized feedback information to the transmitter \cite{Quantization_Interference,ErdemRelayFeedback,BDRaoTransmitBeamforming,ErdemQuantization}. On the other hand, in a network with multiple receivers, each receiver only has access to its own local CSI due to the different geographical locations of the receivers. Each receiver can thus quantize only a part of the entire global CSI, which results in a distributed quantization problem. In the existing work on distributed quantization for networks \cite{Quantization_Interference,Interference_Power_Control, Interference_Throughput}, each receiver first quantizes its local CSI independently and then sends a finite number of bits representing the quantized information through feedback links to other terminals. After decoding the feedback information from all receivers, each terminal reconstructs the quantized version of the global CSI. Afterwards, transmission methods such as beamforming or power control are adopted by treating the global quantized CSI as the exact unquantized CSI. For example, power control and throughput maximization for interference networks based on separate quantized feedback information from receivers are analyzed in \cite{Interference_Power_Control,Interference_Throughput}. In \cite{Quantization_Interference}, beamformers are designed for $K$-user MIMO interference channels with independent quantized information from each receiver. The performance of these quantizers depends on the number of feedback bits assigned for quantization to each receiver and always suffers from some loss when compared with the optimal performance. In this paper, we propose a novel distributed quantization strategy with multiple rounds of feedback communication in the form of conferencing between receivers. Through conferencing among receivers, partial CSI from other receivers can be utilized for a better overall quantizer performance. To illustrate this, we consider the distributed quantization problem for two-user interference networks with time sharing and interference transmission strategies. The network outage probability is the performance metric. We first propose a distributed quantizer that achieves the optimal network outage probability of sum rate in both time sharing and interference transmission with only two bits of feedback information. We also propose a distributed quantizer that attains the optimal network outage probability of minimum rate in time sharing with a finite average feedback rate. For the optimal network outage probability of minimum rate in interference transmission, a distributed quantizer that can approach it closely is also proposed. By numerical simulations, we show the effectiveness of the proposed quantizers by comparing them with the conventional ones. The rest of this paper is organized as follows: In Section \ref{secprelim}, we provide a description of the system model. In Sections \ref{secthree} and \ref{secfour}, we introduce and analyze the distributed quantizers for the network outage probability of sum rate and of minimum rate, respectively. Numerical simulations are provided in Section \ref{secnumresults}. {\bf Notations:} Bold-face letters refer to vectors or matrices. $\top$ denotes the matrix transpose.
$\mathtt{C}$, $\mathtt{R}$ and $\mathtt{N}$ represent the sets of complex, real and natural numbers, respectively. The set of complex $n$-vectors is denoted by $\mathtt{C}^{n\times 1}$ and the set of complex $m\times n$ matrices is denoted by $\mathtt{C}^{m\times n}$. $\mathtt{CN}(a, b)$ represents a circularly-symmetric complex Gaussian random variable (r.v.) with mean $a$ and covariance $b$. $f_{X}(\cdot)$ is the probability density function (PDF) of a r.v. $X$. $\left|\mathcal{S}\right|$ is the cardinality of the set $\mathcal{S}$. For sets $\mathcal{A}$ and $\mathcal{B}$, $\mathcal{A} - \mathcal{B} = \left\{x \in \mathcal{A}, x \notin \mathcal{B}\right\}$. $\textmd{E}[\cdot]$ denotes the expectation and $\textmd{Prob}\{\cdot\}$ denotes the probability. For any $x \in \mathtt{R}$, $\lfloor x\rfloor$ is the largest integer that is less than or equal to $x$ and $\lceil x \rceil$ is the smallest integer that is larger than or equal to $x$. For any logical statement $\sf ST$, we let $\bf{1}({{\sf ST}})=1$ when ${\sf ST}$ is true, and $\bf{1}({{\sf ST}})=0$ when ${\sf ST}$ is false. Finally, for $b_1,\ldots,b_N\in\{0,1\},\,N\geq 1$, the real number $[0.b_1\cdots b_N]_2$ is the base-$2$ representation of the real number $\sum_{n=1}^{N} b_n2^{-n}$. \section{Preliminaries} \label{secprelim} \subsection{System model} Consider an interference network where transmitters ${\sf S}_1$ and ${\sf S}_2$ send independent signals to receivers ${\sf D}_1$ and ${\sf D}_2$ concurrently. Both transmitters and receivers are equipped with only a single antenna. The channel gain from ${\sf S}_k$ to ${\sf D}_l$ is denoted by $h_{k, l}$ for $k, l = 1, 2$. We assume that $h_{1, 1}, h_{2, 2} \sim \mathtt{CN}(0, 1)$ and $h_{1, 2}, h_{2, 1} \sim \mathtt{CN}(0, \epsilon)$, where $\epsilon$ is the covariance of the interference links. Let ${H}_{k, l} = \left|h_{k, l}\right|^2$. Then, ${\bf h}_k = \left[ H_{1, k}, H_{2, k}\right]^{\top}\in \mathtt{C}^{2\times 1}$ denotes the local CSI at receiver $k$, and ${\bf H} = \left[ {\bf h}_1, {\bf h}_2\right]\in \mathtt{C}^{2\times 2}$ represents the entire CSI. The additive noises at the receivers are distributed as $\mathtt{CN}(0, 1)$. We assume a quasi-static block fading channel in which the channels vary independently from one block to another while remaining constant within each block. Each receiver can perfectly estimate its local CSI and provide quantized instantaneous CSI to other terminals via error-free and delay-free feedback links. \subsection{Transmission strategies} We consider two transmission strategies in the two-user interference network, namely time sharing and interference transmission. Time sharing means that each transmitter occupies only a portion of the block for transmission and remains silent for the rest, so that no interference exists. Interference transmission refers to the scenario where both transmitters send signals within the entire block, thereby causing interference to each other. We assume that interference signals are treated as noise. Since we focus on the design of distributed quantizers based on conferencing, we also assume, for simplicity, that only one strategy is used for the entire transmission. In time sharing, let $t_k \in [0, 1]$ be the fraction of the block in which only ${\sf S}_k$ is active, for $k = 1, 2$, with $t_1 + t_2 = 1$. The instantaneous power used by ${\sf S}_k$ is $P_k = p_k P$, where $p_k \in [0, 1]$ and $P$ is the short-term power constraint.
It is optimal for both transmitters to use full power under the condition of no interference. Therefore, for a given $\bf H$, the end-to-end rate at receiver $k$ is \begin{align} \textit{R}_{{\sf ts}, k} (t_k) \triangleq t_k \log_2\left(1 + P H_{k, k}\right).\nonumber \end{align} In interference transmission, for $k, l = 1, 2$ and $k \neq l$, the end-to-end rate at receiver $k$ is \begin{align} \textit{R}_{{\sf it}, k} (p_1, p_2) \triangleq \log_2\left(1 + \frac{p_k P H_{k, k}}{p_l P H_{l, k} + 1}\right).\nonumber \end{align} \subsection{Network Outage Probability} Our performance measure is the network outage probability, which is the fraction of channel states at which the rate measure of the network falls below a target data rate $\rho$. Such a performance metric is well-suited for applications where a given constant data rate needs to be sustained for every channel state. Two kinds of rate measures are considered, namely sum rate and minimum rate. Our goal is to design efficient distributed quantizers that can achieve the optimal network outage probability of sum rate or minimum rate for both time sharing and interference transmission strategies. \section{Distributed Quantization for Network Outage Probability of Sum Rate} \label{secthree} We first design distributed quantizers for interference transmission. The sum rate is $\textit{SR}_{\sf it}\left(p_1, p_2\right) \triangleq \sum_{k = 1}^2 \textit{R}_{{\sf it}, k} (p_1, p_2)$. We define the network outage probability as\footnote{We choose the sum-rate outage threshold to be $2\rho$ for a fairer comparison with the rate threshold $\rho$ that we shall specify for the minimum-rate outage.} \begin{align} \textmd{OUT}_{{\sf sr}, \sf it} \triangleq \textmd{Pr} \left\{ \textit{SR}_{\sf it}\left(p_1, p_2\right) < 2\rho \right\}.\nonumber \end{align} It is proved in \cite{OptimalPowerControlSumRate} that the maximum sum rate is $\max\left\{\textit{SR}_{\sf it}\left(1, 0\right), \textit{SR}_{\sf it}\left(0, 1\right), \textit{SR}_{\sf it}\left(1, 1\right)\right\}$. Therefore, the optimal (minimum-achievable) network outage probability is \begin{align} \textmd{OUT}_{{\sf sr}, \sf it}^{ {\sf opt}}\! =\! \textmd{Pr}\left\{ \max\left\{\textit{SR}_{\sf it}\left(1, 0\right), \textit{SR}_{\sf it}\left(0, 1\right), \textit{SR}_{\sf it}\left(1, 1\right)\right\} \!<\! 2\rho \right\}.\nonumber \end{align} In the following, we design a distributed quantizer, namely ${\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}$, that can achieve $\textmd{OUT}_{{\sf sr}, \sf it}^{ {\sf opt}}$ with only one feedback bit per receiver. The quantizer ${\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}$ consists of two local encoders and a unique decoder. The $k$-th encoder $\textmd{ENC}_{{\sf sr}, {\sf it}, k}$ is located at receiver $k$ and the decoder $\textmd{DEC}_{{\sf sr}, {\sf it}}$ is shared by all terminals, for $k = 1, 2$. The components of ${\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}$ operate as follows: For $k = 1, 2$, $\textmd{ENC}_{{\sf sr}, {\sf it}, k}: \mathtt{C}^{2\times 1}\rightarrow \{0, 1\}$ maps ${\bf h}_k$ to $0$ or $1$ according to $\textmd{ENC}_{{\sf sr}, {\sf it}, k}\left({\bf h}_k\right) = {\bf 1}({\log_2\left(1 + P H_{k, k}\right) \geq 2\rho})$. Accordingly, receiver $k$ will send the feedback bit ``1'' if $\textmd{ENC}_{{\sf sr}, {\sf it}, k}\left({\bf h}_k\right) = 1$, and ``0'' otherwise.
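For concreteness, the following Python fragment sketches this one-bit encoding rule together with the decoder mapping that will be specified in Table~\Rmnum{1} below; it is an illustrative sketch of the rule just described, not an implementation used in our simulations.
\begin{verbatim}
import numpy as np

def enc_sr_it(H_kk, P, rho):
    # Local one-bit encoder at receiver k: feed back "1" iff the
    # interference-free rate log2(1 + P*H_kk) already meets the 2*rho target.
    return int(np.log2(1.0 + P * H_kk) >= 2.0 * rho)

def dec_sr_it(b1, b2):
    # Decoder shared by all terminals (mapping of Table I below); when
    # b1 = b2 = 1 either single-user pair avoids outage and (1, 0) is picked.
    if b1 == 1:
        return (1.0, 0.0)
    if b2 == 1:
        return (0.0, 1.0)
    return (1.0, 1.0)
\end{verbatim}
Note that the decoder needs only the two feedback bits, not the channel gains themselves.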
The decoder $\textmd{DEC}_{{\sf sr}, {\sf it}}$ decodes the bits fed back by the receivers and recovers the values of $\textmd{ENC}_{{\sf sr}, {\sf it}, k}\left({\bf h}_k\right)$ for $k = 1, 2$. The interference transmission pair $\left(p_1, p_2\right)$ is decided based on Table \Rmnum{1}. \begin{table}[!htb] \caption{Decision rule of ${\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}$.} \centering \begin{tabular}{|c|c|c|} \hline $\textmd{ENC}_{{\sf sr}, {\sf it}, 1}\left({\bf h}_1\right)$ & $\textmd{ENC}_{{\sf sr}, {\sf it}, 2}\left({\bf h}_2\right)$ & $\left(p_1, p_2\right)$ \\ \hline $1$ & $0$ & $\left(1, 0\right)$ \\ \hline $0$ & $1$ & $\left(0, 1\right)$ \\ \hline $1$ & $1$ & $\left(1, 0\right)$ or $\left(0, 1\right)$\\ \hline $0$ & $0$ & $\left(1, 1\right)$ \\ \hline \end{tabular} \end{table} Denote the network outage probability achieved by ${\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}$ as $\textmd{OUT}\left({\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}\right)$ and let $\textmd{FR}\left({\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}\right)$ be the average feedback rate.\footnote{The average feedback rate in this paper is the sum of the average numbers of feedback bits fed back by the two receivers.} \begin{theorem} $\textmd{\rm OUT}\left({\mathtt{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}\right) = \textmd{\rm OUT}_{{\sf sr}, \sf it}^{{\sf opt}}$ and $\textmd{\rm FR}\left({\mathtt{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}\right) = 2$. \end{theorem} \begin{IEEEproof} With ${\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}$, an outage event occurs only when $\textit{SR}_{\sf it}(p_1, p_2) < 2\rho$ for every $(p_1, p_2)\in\{\left(1, 0\right), \left(0, 1\right)$, $\left(1, 1\right)\},$ or equivalently when both receivers feed back ``$0$'' and the corresponding power vector $\left(1, 1\right)$ from Table \Rmnum{1} still results in outage. This shows that $\textmd{\rm OUT}\left({\mathtt{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}\right) = \textmd{\rm OUT}_{{\sf sr}, \sf it}^{{\sf opt}}$. Since two bits are fed back in total (one bit from each receiver), the average feedback rate is two bits per channel state. \end{IEEEproof} The design of ${\textmd{\textit{DQ}}}_{{{{\sf sr}, {\sf it}}}}$ utilizes the fact that checking whether $(p_1, p_2) = \left(1, 0\right)$ or $\left(0, 1\right)$ leads to an outage event only requires the local CSI at a single receiver. Thus, two bits of conferencing between the receivers provide adequate information for choosing the right pair $(p_1, p_2)$ to achieve the optimal performance. We now consider the design of distributed quantizers for the time sharing strategy. In this case, we can similarly define the network outage probability of sum rate as $ \textmd{OUT}_{{\sf sr}, \sf ts} \triangleq \textmd{Pr} \left\{ \textit{SR}_{\sf ts}\left(t_1, t_2\right) < 2\rho \right\}, $ where $\textit{SR}_{\sf ts}\left(t_1, t_2\right) \triangleq \sum_{k = 1}^2\textit{R}_{{\sf ts}, k} (t_k)$. Under the constraint $t_1 + t_2 = 1$, the maximum sum rate can easily be calculated to be $\max\left\{\textit{SR}_{\sf ts}\left(1, 0\right), \textit{SR}_{\sf ts}\left(0, 1\right)\right\}$.
Therefore, the optimal network outage probability is \begin{align} \textmd{OUT}_{{\sf sr}, \sf ts}^{ {\sf opt}} = \textmd{Pr} \left\{ \textit{SR}_{\sf ts}\left(1, 0\right) < 2\rho, \textit{SR}_{\sf ts}\left(0, 1\right) < 2\rho \right\}.\nonumber \end{align} Noticing that $\textit{SR}_{\sf ts}\left(1, 0\right) = \textit{SR}_{\sf it}\left(1, 0\right)$ and $ \textit{SR}_{\sf ts}\left(0, 1\right) = \textit{SR}_{\sf it}\left(0, 1\right) $ and using the same ideas as in the construction of $\textit{DQ}_{{\sf sr}, {\sf it}}$, we can design a distributed quantizer for time sharing that achieves $\textmd{OUT}_{{\sf sr}, \sf ts}^{ {\sf opt}}$ with only one bit of feedback per receiver (we omit the details). On the other hand, the equalities $\textit{SR}_{\sf ts}(1, 0) = \textit{SR}_{\sf it}(1, 0)$ and $\textit{SR}_{\sf ts}(0, 1) = \textit{SR}_{\sf it}(0, 1)$ also imply $\textmd{OUT}_{{\sf sr}, \sf ts}^{ {\sf opt}} \leq \textmd{OUT}_{{\sf sr}, \sf it}^{ {\sf opt}}$. Hence, we only need to consider interference transmission if our objective is to minimize the network outage probability of the sum rate. \section{Distributed Quantization for Network Outage Probability of Minimum Rate} \label{secfour} We now study the design of distributed quantizers that minimize the outage probability of minimum rate. First, we determine the optimal network outage probability with time sharing or interference transmission. For time sharing, we define the network outage probability as \begin{align} \textmd{OUT}_{{\sf mr}, {\sf ts}} \triangleq \textmd{Pr}\left\{\textit{MR}_{\sf ts}(t_1, t_2) < \rho\right \},\nonumber \end{align} where $\textit{MR}_{\sf ts}(t_1, t_2) \triangleq \min\left\{\textit{R}_{{\sf ts}, 1} (t_1), \textit{R}_{{\sf ts}, 2} (t_2)\right\}$ is the minimum achievable rate of the two transmitters. In interference transmission, the network outage probability is \begin{align} \textmd{OUT}_{{\sf mr}, {\sf it}} \triangleq \textmd{Pr}\left\{\textit{MR}_{\sf it}(p_1, p_2) < \rho\right \},\nonumber \end{align} where $\textit{MR}_{\sf it}(p_1, p_2) \triangleq \min\left\{\textit{R}_{{\sf it}, 1} (p_1, p_2), \textit{R}_{{\sf it}, 2} (p_1, p_2)\right\}$. Now, let $(t_1^{\star},t_2^{\star}) = \arg\max_{(t_1,t_2)}\textit{MR}_{\sf ts}(t_1, t_2)$ and $(p_1^{\star},p_2^{\star}) = \arg\max_{(p_1,p_2)}\textit{MR}_{\sf it}(p_1, p_2)$ denote the optimal time sharing and power pairs that achieve $\textmd{OUT}_{{\sf mr}, {\sf ts}}$ and $\textmd{OUT}_{{\sf mr}, {\sf it}}$, respectively. We have the following two results, whose proofs can be found in Appendix A. \begin{proposition} We have \begin{align} \label{Optimal_Time_Sharing} \begin{array}{l} {t}_1^{\star} = \frac{ \log_2\left(1 + P H_{2, 2}\right)}{\log_2\left(1 + P H_{1, 1}\right) +\log_2\left(1 + P H_{2, 2}\right)},\\ {t}_2^{\star} = \frac{ \log_2\left(1 + P H_{1, 1}\right)}{\log_2\left(1 + P H_{1, 1}\right) +\log_2\left(1 + P H_{2, 2}\right)}. 
\end{array} \end{align} \end{proposition} \begin{proposition} If $\frac{P H_{1, 1}}{P H_{2, 1} + 1} \geq \frac{P H_{2, 2}}{P H_{1, 2} + 1}$, we have \begin{align} \label{First_p} ({p}_1^{\star},p_2^{\star}) = \textstyle \Biggl(\frac{\sqrt{\frac{4P^2 H_{1, 2} H_{2, 1} H_{2, 2} + 4P H_{2, 2} H_{1, 2}}{H_{1, 1}} + 1} - 1}{2PH_{1, 2}}, 1\Biggr), \end{align} and otherwise, if $\frac{P H_{1, 1}}{P H_{2, 1} + 1} < \frac{P H_{2, 2}}{P H_{1, 2} + 1}$, we have \begin{align} \label{Second_p} ({p}_1^{\star},p_2^{\star}) = \textstyle \Biggl( 1, \frac{\sqrt{\frac{4 P^2 H_{1, 1} H_{1, 2} H_{2, 1} + 4 P H_{1, 1} H_{2, 1}}{H_{2, 2}} + 1} - 1}{2P H_{2, 1} }\Biggr). \end{align} \end{proposition} In particular, the optimal network outage probabilities of minimum rate for time sharing and interference transmission are given by $\textmd{OUT}_{{\sf mr}, \sf ts}^{ {\sf opt}} = \textmd{Pr}\left\{\textit{MR}_{\sf ts}(t_1^{\star}, t_2^{\star}) < \rho\right \}$ and $\textmd{OUT}_{{\sf mr}, \sf it}^{ {\sf opt}} = \textmd{Pr}\left\{\textit{MR}_{\sf it}(p_1^{\star}, p_2^{\star}) < \rho\right \}$, respectively. We now propose two distributed quantizers, namely $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ and $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$. For the time sharing strategy, $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ will attain $\textmd{OUT}_{{\sf mr}, \sf ts}^{ {\sf opt}}$ exactly with a finite average feedback rate. For interference transmission, $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$ will approach $\textmd{OUT}_{{\sf mr}, \sf it}^{ {\sf opt}} $ tightly with a finite average feedback rate. \subsection{Time Sharing} For a given ${\bf H}$, the minimum time fraction for receiver $k$ to prevent outage is given by \begin{align} t_{k, \min} = \frac{\rho}{\log_2\left(1 + P H_{k, k}\right)},\nonumber \end{align} which can be calculated at receiver $k$, for $k = 1, 2$. Denote by $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)$ the time sharing pair $(t_1, t_2)$ determined by $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$. The first task of $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ is to determine whether or not $\textit{MR}_{\sf ts}\left(t_1^{\star}, t_2^{\star}\right) \geq \rho$ through feedback communication between receivers. This first task is essentially a distributed decision-making problem. If $\textit{MR}_{\sf ts}\left(t_1^{\star}, t_2^{\star}\right) \geq \rho$ holds, the second task is to find $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)$ that also enables $\textit{MR}_{\sf ts}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)\right) \geq \rho$. The quantizer $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ is composed of two local encoders, with the $k$-th encoder $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}$ located at receiver $k$, and a unique decoder $\textmd{DEC}_{{\sf mr}, {\sf ts}}$ employed by all terminals. We add the superscript ``$l$'' to indicate their operations in the $l$-th round of conferencing for $l \in \mathtt{N}$. Also, four parameters ${t}_{k, \min}^{\textmd{lb}}, {t}_{k, \min}^{\textmd{ub}}$ for $k = 1, 2$ are stored and updated at all terminals. Let ${t}_{k, \min}^{\textmd{lb}, l}, {t}_{k, \min}^{\textmd{ub}, l}$ represent the values of ${t}_{k, \min}^{\textmd{lb}}, {t}_{k, \min}^{\textmd{ub}}$ after round $l$.
In round $0$, $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{0}: \mathtt{C}^{2\times 1}\rightarrow \{0, 1\}$ maps ${\bf h}_k$ into $0$ or $1$ via $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{0}\left({\bf h}_k\right) = {\bf 1}(t_{k, \min} \geq 1)$, for $k = 1, 2$. Receiver $k$ will send the feedback bit ``1'' if $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{0}\left({\bf h}_k\right) = 1$, and the feedback bit ``0'' otherwise. Then, $\textmd{DEC}_{{\sf mr}, {\sf ts}}^{0}$ decodes the bits fed back by the receivers and recovers the values of $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{0}\left({\bf h}_k\right) $ for $k = 1, 2$. If $\textmd{ENC}_{{\sf mr}, {\sf ts}, 1}^{0}\left({\bf h}_1\right)= 1$ or $\textmd{ENC}_{{\sf mr}, {\sf ts}, 2}^{0}\left({\bf h}_2\right) = 1$, an outage event is sure to happen. In this case we set $\left(0.5, 0.5\right)$ as the time sharing pair (in fact, any time sharing pair can be used since outage is unavoidable) and the conferencing process ends. Otherwise, ${t}_{k, \min}^{\textmd{lb}}$ and ${t}_{k, \min}^{\textmd{ub}}$ are initialized as ${t}_{k, \min}^{\textmd{lb}, 0} = 0$ and ${t}_{k, \min}^{\textmd{ub}, 0} = 1$ for $k = 1, 2$, and $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ continues to the next round. In round $l$ where $l \in \mathtt{N} - \{0\}$, $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{l}: \mathtt{C}^{2\times 1}\rightarrow \{0, 1\}$ maps ${\bf h}_k$ into $0$ or $1$ according to \begin{align} \textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{l}\left({\bf h}_k\right) = {\bf 1}\left(t_{k, \min} \geq \textstyle \frac{{t}_{k, \min}^{\textmd{lb}, l - 1} + {t}_{k, \min}^{\textmd{ub}, l - 1}}{2}\right),\nonumber \end{align} for $k = 1, 2$. Receiver $k$ will send the feedback bit ``1'' if $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{l}\left({\bf h}_k\right) = 1$, and ``0'' otherwise. Then $\textmd{DEC}_{{\sf mr}, {\sf ts}}^{l}$ decodes the bits fed back by the receivers and recovers the values of $\textmd{ENC}_{{\sf mr}, {\sf ts}, k}^{l}\left({\bf h}_k\right) $ for $k = 1, 2$. \begin{enumerate} \item If $\textmd{ENC}_{{\sf mr}, {\sf ts}, 1}^{l}\left({\bf h}_1\right) = \textmd{ENC}_{{\sf mr}, {\sf ts}, 2}^{l}\left({\bf h}_2\right) = 1$, an outage event is unavoidable. We thus set $(0.5, 0.5)$ as the time sharing pair and conferencing ends. \item If $\textmd{ENC}_{{\sf mr}, {\sf ts}, 1}^{l}\left({\bf h}_1\right) = \textmd{ENC}_{{\sf mr}, {\sf ts}, 2}^{l}\left({\bf h}_2\right) = 0$, we set $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right) = \left(\frac{{t}_{1, \min}^{\textmd{lb}, l -1} + {t}_{1, \min}^{\textmd{ub}, l - 1}}{2}, \frac{{t}_{2, \min}^{\textmd{lb}, l - 1} + {t}_{2, \min}^{\textmd{ub}, l - 1}}{2}\right)$ as the time sharing pair, and conferencing ends. \item If $\textmd{ENC}_{{\sf mr}, {\sf ts}, 1}^{l}\left({\bf h}_1\right) =1$ and $ \textmd{ENC}_{{\sf mr}, {\sf ts}, 2}^{l}\left({\bf h}_2\right) = 0$, we let $t_{1, \min}^{\textmd{ lb}, l} = \frac{{t}_{1, \min}^{\textmd{lb}, l - 1} + {t}_{1, \min}^{\textmd{ub}, l - 1}}{2}$ and $t_{2, \min}^{\textmd{ub}, l} = \frac{{t}_{2, \min}^{\textmd{lb}, l - 1} + {t}_{2, \min}^{\textmd{ub}, l - 1}}{2}$. If $\textmd{ENC}_{{\sf mr}, {\sf ts}, 1}^{l}\left({\bf h}_1\right) =0$ and $\textmd{ENC}_{{\sf mr}, {\sf ts}, 2}^{l}\left({\bf h}_2\right) = 1$, we let $t_{1, \min}^{\textmd{ ub}, l} = \frac{{t}_{1, \min}^{\textmd{lb}, l - 1} + {t}_{1, \min}^{\textmd{ub}, l - 1}}{2}$ and $t_{2, \min}^{\textmd{lb}, l} = \frac{{t}_{2, \min}^{\textmd{lb}, l - 1} + {t}_{2, \min}^{\textmd{ub}, l - 1}}{2}$. In either case, conferencing continues to the next round. \end{enumerate}
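To summarize the conferencing procedure, the following Python fragment sketches the operations described above from the decoder's point of view; it is an illustrative sketch only (the cap on the number of rounds is added purely as a safeguard for the sketch), not the implementation used in our simulations.
\begin{verbatim}
import numpy as np

def dq_mr_ts(H11, H22, P, rho, max_rounds=64):
    # Sketch of DQ_mr_ts. Receiver k only evaluates its own t_{k,min};
    # the bits b[0], b[1] are what is actually exchanged in each round.
    t_min = [rho / np.log2(1 + P * H11), rho / np.log2(1 + P * H22)]
    # Round 0: outage is unavoidable if some t_{k,min} >= 1.
    if t_min[0] >= 1 or t_min[1] >= 1:
        return (0.5, 0.5)
    lb, ub = [0.0, 0.0], [1.0, 1.0]
    for _ in range(max_rounds):              # rounds l = 1, 2, ...
        mid = [(lb[k] + ub[k]) / 2 for k in (0, 1)]
        b = [int(t_min[k] >= mid[k]) for k in (0, 1)]
        if b == [1, 1]:
            return (0.5, 0.5)                # outage unavoidable
        if b == [0, 0]:
            return (mid[0], mid[1])          # both rates meet rho
        if b == [1, 0]:
            lb[0], ub[1] = mid[0], mid[1]
        else:                                # b == [0, 1]
            ub[0], lb[1] = mid[0], mid[1]
    return (0.5, 0.5)   # safeguard; almost surely never reached
\end{verbatim}
Note that the updates preserve the invariant ${t}_{1, \min}^{\textmd{lb}} + {t}_{2, \min}^{\textmd{ub}} = {t}_{1, \min}^{\textmd{ub}} + {t}_{2, \min}^{\textmd{lb}} = 1$, so that whenever both receivers feed back ``0'' the selected midpoints sum to one and each exceeds the corresponding $t_{k, \min}$, i.e., they form a valid time sharing pair that prevents outage.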
Note that the condition $\textmd{MR}_{\sf ts}\left(t_1^{\star}, t_2^{\star}\right) < \rho$ is equivalent to $t_{1, \min} + t_{2, \min} > 1$, and $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ determines whether $t_{1, \min} + t_{2, \min} > 1$ holds or not. To accomplish this, each receiver quantizes its own $t_{k, \min}$ in a finer and finer way as $l$ increases and communicates the quantized feedback bits to the other terminals. The parameters ${t}_{k, \min}^{\textmd{lb}}, {t}_{k, \min}^{\textmd{ub}}$ serve as the lower and upper bounds on $t_{k, \min}$ updated by conferencing between the receivers. The decision of whether $t_{1, \min} + t_{2, \min} > 1$ holds or not is made by jointly considering ${t}_{k, \min}^{\textmd{lb}}$ and $ {t}_{k, \min}^{\textmd{ub}}$. The inter-receiver conferencing process continues until the exchanged feedback bits are adequate to make a precise decision about whether $t_{1, \min} + t_{2, \min} > 1$ holds or not. Let $\textmd{OUT}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}\right)$ and $\textmd{FR}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}\right)$ denote the network outage probability and average feedback rate of $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$, respectively. The following theorem shows that whenever the optimal time sharing pair $(t_1^{\star}, t_2^{\star})$ in Proposition 1 can avoid outage, the time sharing pair picked by $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ will also avoid outage with probability one, and that the average feedback rate of $\textmd{\textit{DQ}}_{{\sf mr}, {\sf ts}}$ is finite. The proof is provided in Appendix B. \begin{theorem} For any $P > 0$, we have \begin{align} \textmd{\rm OUT}\left(\mathtt{\textit{DQ}}_{{\sf mr}, {\sf ts}}\right) = \textmd{\rm OUT}_{{\sf mr}, \sf ts}^{ {\sf opt}}, \end{align} and \begin{align} \label{FR_Time_Sharing} \textmd{\rm FR}\left(\mathtt{\textit{DQ}}_{{\sf mr}, {\sf ts}}\right) \leq 2 + 2 e^{-\frac{\rho \log 2}{P}}\left(1 + \frac{C_0}{P}\right), \end{align} where $C_0$ is a bounded constant that is independent of $P$.\footnote{Since we focus on showing that the average feedback rate is finite for any $P$, it is beyond the scope of our paper to derive the tightest bound, i.e., the smallest value for $C_0$. } \end{theorem} Theorem 2 shows that zero distortion in network outage probability can actually be achieved with a finite average feedback rate, rather than requiring an infinite number of feedback bits as in the traditional view. This surprising result comes from our design of the feedback communication between receivers based on conferencing. \subsection{Interference Transmission} For $k, l = 1, 2$ and $k \neq l$, the maximum allowed power of transmitter $k$ that will not cause outage to receiver $l$ when transmitter $l$ uses full power can be calculated to be \begin{align} p_{k, \max} = \frac{H_{l, l}}{\left(2^{{\rho}}-1\right) H_{k, l}} - \frac{1}{P H_{k, l}}.\nonumber \end{align} Note that $p_{k, \max}$ can be calculated at receiver $l$. The proposed quantizer $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$ consists of two local encoders, two local compressors and a unique decoder. The $k$-th encoder $\textmd{ENC}_{{\sf mr}, {\sf it}, k}$ and $k$-th compressor $\textmd{CMP}_{{\sf mr}, {\sf it}, k}$ are located at receiver $k$, while the decoder $\textmd{DEC}_{{\sf mr}, {\sf it}}$ is used by all terminals. We add the superscript ``$l$'' to indicate their operations in the $l$-th round of conferencing for $l = 0, 1$.
For any $M \in \mathtt{N} - \{0\}$, let $\mathcal{C}_M = \left\{\frac{m}{M}: m = 0, \ldots, M\right\}$. Denote by $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right)$ the interference transmission pair $(p_1, p_2)$ determined by $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$. There are at most two rounds of conferencing in $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$. In round $0$, $\textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0}: \mathtt{C}^{2\times 1}\rightarrow \mathcal{C}_M$ maps ${\bf h}_1$ into a codeword in $\mathcal{C}_M$ according to \begin{align} \textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0} \left({\bf h}_1\right) = \left\{ \begin{matrix} 0, &{p}_{2, \max} \leq 0,\\ \argmax\limits_{x \in \mathcal{C}_M, x \leq {p}_{2, \max} } x, & {p}_{2, \max} >0. \end{matrix} \right. \nonumber \end{align} Then $\textmd{CMP}_{{\sf mr}, {\sf it}, 1}^{0}: \mathcal{C}_M \rightarrow \mathcal{B}$ maps the index of $\textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0} \left({\bf h}_1\right)$ to a binary description in $\mathcal{B}$, a set of binary representations for the codewords in $\mathcal{C}_M$. With fixed-length coding, $\left\lceil \log_2\left|\mathcal{C}_M\right|\right\rceil = \left\lceil \log_2(M + 1)\right\rceil$ bits indicating the index of $\textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0} \left({\bf h}_1\right)$ are fed back by receiver 1.\footnote{The performance of $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$ can be improved by taking variable-length coding into consideration. We use fixed-length coding here for convenience.} $\textmd{DEC}_{{\sf mr}, {\sf it}}^{0}$ decodes them and recovers the value of $\textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0} \left({\bf h}_1\right)$. Then receiver 2 will send the feedback bit ``1'' if $\log_2\left(1 + \frac{ \textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0} \left({\bf h}_1\right) P H_{2, 2}}{ P H_{1, 2} + 1}\right) \geq \rho$, and ``0'' otherwise. If ``1'' is fed back by receiver 2, $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right) = \left(1, \textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0} \left({\bf h}_1\right) \right)$ is the decided pair and thus, conferencing for the current channel state finishes. Otherwise, conferencing will continue to the next round. In round $1$, $\textmd{ENC}_{{\sf mr}, {\sf it}, 2}^{1}: \mathtt{C}^{2\times 1}\rightarrow \mathcal{C}_M$ maps ${\bf h}_2$ into a codeword in $\mathcal{C}_M$ according to \begin{align} \textmd{ENC}_{{\sf mr}, {\sf it}, 2}^{1} \left({\bf h}_2\right) = \left\{ \begin{matrix} 0, &{p}_{1, \max} \leq 0,\\ \argmax\limits_{x \in \mathcal{C}_M, x \leq {p}_{1, \max} } x, & {p}_{1, \max} >0. \end{matrix} \right. \nonumber \end{align} Then $\textmd{CMP}_{{\sf mr}, {\sf it}, 2}^{1}: \mathcal{C}_M \rightarrow \mathcal{B}$ maps the index of $\textmd{ENC}_{{\sf mr}, {\sf it}, 2}^{1} \left({\bf h}_2\right)$ to a binary description in $\mathcal{B}$. $\left\lceil \log_2(M + 1)\right\rceil$ bits indicating the index of $\textmd{ENC}_{{\sf mr}, {\sf it}, 2}^{1} \left({\bf h}_2\right)$ are fed back by receiver 2. $\textmd{DEC}_{{\sf mr}, {\sf it}}^{1}$ decodes them and recovers the value of $\textmd{ENC}_{{\sf mr}, {\sf it}, 2}^{1} \left({\bf h}_2\right)$, and $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right) = \left(\textmd{ENC}_{{\sf mr}, {\sf it}, 2}^{1} \left({\bf h}_2\right), 1 \right)$ is the final interference transmission pair.
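The two rounds can be summarized compactly. The following Python fragment is an illustrative sketch of the procedure just described (fixed-length index feedback is implicit, and the helper is hypothetical, not the code used in our simulations):
\begin{verbatim}
import numpy as np

def dq_mr_it(H11, H21, H12, H22, P, rho, M):
    # Sketch of DQ_mr_it; H_{k,l} is the gain from transmitter k to receiver l.
    grid = np.arange(M + 1) / M                      # codebook C_M = {m/M}
    # Round 0: receiver 1 quantizes p_{2,max} (largest p_2 it can tolerate
    # without outage when transmitter 1 uses full power) onto C_M.
    p2_max = H11 / ((2**rho - 1) * H21) - 1.0 / (P * H21)
    q2 = 0.0 if p2_max <= 0 else grid[grid <= p2_max].max()
    # Receiver 2 replies with one bit: does (p1, p2) = (1, q2) avoid its outage?
    if np.log2(1 + q2 * P * H22 / (P * H12 + 1)) >= rho:
        return (1.0, q2)
    # Round 1: receiver 2 quantizes p_{1,max}; transmitter 2 uses full power.
    p1_max = H22 / ((2**rho - 1) * H12) - 1.0 / (P * H12)
    q1 = 0.0 if p1_max <= 0 else grid[grid <= p1_max].max()
    return (q1, 1.0)
\end{verbatim}
At most $\left\lceil \log_2(M + 1)\right\rceil + 1 + \left\lceil \log_2(M + 1)\right\rceil$ bits are exchanged per channel state, which is consistent with the feedback-rate bound in Theorem 3 below.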
The interference transmission pair decided by $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$ has at least one element equal to $1$, i.e., $p_1 = 1$ or $p_2 = 1$, which arises from the fact that the performance of any pair that does not satisfy this can be improved by multiplying the pair with a scaling factor until at least one element reaches $1$ \cite{OptimalPowerControlSumRate}. Therefore, the proposed quantizer only needs to work on the element that is not equal to $1$. To do this, each receiver tells the other terminals the maximum interfering power it can tolerate without outage. Denote the network outage probability and average feedback rate of $\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}$ by $\textmd{OUT}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\right)$ and $\textmd{FR}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\right)$, respectively. The following theorem provides upper bounds on $\textmd{OUT}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\right)$ and $\textmd{FR}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\right)$. The proof of the theorem is provided in Appendix D. \begin{theorem} For any $P>0$ and $M \in \mathtt{N} - \{0\}$, we have \begin{align} \label{DQ_OUT_OUT} \textmd{\rm OUT}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\right) \leq \textmd{\rm OUT}_{{\sf mr}, \sf it}^{ {\sf opt}} + \frac{C_1}{M}, \end{align} and \begin{align} \label{AFR_NOP} \textmd{\rm FR}\left(\textmd{\textit{DQ}}_{{\sf mr}, {\sf it}}\right) \leq 2\log_2(M + 1) + 3, \end{align} where $C_1 > 0$ is a bounded constant that is independent of $P$ and $M$. \end{theorem} From Theorem 3, it is seen that the distortion in network outage probability is at most inversely proportional to $M$, while the average feedback rate is bounded by a finite constant plus the term $2\log_2(M + 1)$ that scales as $O\left(\log(M)\right)$. Letting $M$ satisfy $2\log_2(M + 1) + 3 = \textit{R}$, we can observe that the loss in outage probability due to quantization decays at least exponentially with the total feedback rate $\textit{R}$ as $O\left(2^{-\frac{\textit{R}}{2}}\right)$. \subsection{Time Sharing or Interference Transmission?} We recall from Section \Rmnum{3} that for the network outage probability of sum rate, interference transmission is always superior to time sharing. On the other hand, for the network outage probability of minimum rate, depending on the power constraint $P$, either one of the two transmission strategies may be optimal. To illustrate this phenomenon, the network outage probabilities $\textmd{OUT}_{{\sf mr}, \sf ts}^{ {\sf opt}} $ and $\textmd{OUT}_{{\sf mr}, \sf it}^{ {\sf opt}}$ are plotted versus $P$ for various $\epsilon$ in Fig. 1. The target data rate is $\rho = 0.5$. We can observe from Fig. 1 that for any given $\epsilon$, there is a threshold power level $P_{\textmd{th}}$ (that depends on $\epsilon$) such that when $P \leq P_{\textmd{th}}$, $\textmd{OUT}_{{\sf mr}, \sf it}^{ {\sf opt}} \leq \textmd{OUT}_{{\sf mr}, \sf ts}^{ {\sf opt}}$, and when $P > P_{\textmd{th}}$, $\textmd{OUT}_{{\sf mr}, \sf it}^{ {\sf opt}} > \textmd{OUT}_{{\sf mr}, \sf ts}^{ {\sf opt}}$. In other words, we should use interference transmission when $P \leq P_{\textmd{th}}$, and otherwise, if $P > P_{\textmd{th}}$, we should utilize the time sharing strategy. The decision between time sharing and interference transmission only requires the knowledge of $P_{\textmd{th}}$, which can be prior information known by all terminals.
Although it is difficult to derive a closed-form expression for $P_{\textmd{th}}$, it can still be estimated through numerical simulations. For example, according to Fig. 1, we have $P_{\textmd{th}}\approx 2, 5, 12, 25$ dB when $\epsilon = 1, 0.5, 0.1$ and $0.01$, respectively. \begin {figure} \centering \includegraphics[width= 4.5 in]{Fig_1.eps} \caption{$\textmd{OUT}_{{\sf mr}, \sf ts}^{ {\sf opt}} $ and $\textmd{OUT}_{{\sf mr}, \sf it}^{ {\sf opt}}$ versus $P$.} \label{fig1, 2} \end{figure} \section{Numerical Simulations} \label{secnumresults} \begin {figure} \centering \includegraphics[width= 4.5 in]{Fig_2.eps} \caption{Simulated network outage probabilities of minimum rate for $\textit{DQ}_{{\sf mr}, {\sf ts}}$, $\textit{DQ}_{{\sf mr}, {\sf ts}}^{\sf conv}$ and the case with no feedback, as well as the average feedback rate of $\textit{DQ}_{{\sf mr}, {\sf ts}}$, versus $P$.} \end{figure} In this section, we present simulations to verify the theoretical results for $\textit{DQ}_{{\sf mr}, {\sf ts}}$ in time sharing and $\textit{DQ}_{{\sf mr}, {\sf it}}$ in interference transmission. For each instance of $P$ and $\epsilon$, a sufficient number of channel state realizations are generated to observe at least 5000 outage events. We have chosen $\rho = 0.5$. We will compare the performance of the proposed quantizers with that of the conventional quantizer \cite{Interference_Power_Control, Interference_Throughput}, denoted by $\textit{DQ}_{{\sf mr}}^{\sf conv}$, in time sharing and interference transmission, respectively. For the readers' convenience, we provide a brief description of the quantizer $\textit{DQ}_{{\sf mr}}^{\sf conv}$ as described in \cite{Interference_Power_Control, Interference_Throughput}. For $k = 1, 2$, receiver $k$ employs $\frac{B_{\sf tot}}{4}$ bits to quantize $H_{1, k}$ and $H_{2, k}$ separately, based on a scalar codebook generated by the Lloyd algorithm \cite{GLA} with cardinality $2^{\frac{B_{\sf tot}}{4}}$. All terminals decode the feedback bits and reconstruct the quantized $\bf H$ as $\hat{\bf H}$. In time sharing, ${t}_1^{\star}$ and ${t}_2^{\star}$ are calculated according to Proposition 1 by treating $\hat{\bf H}$ as ${\bf H}$, while in interference transmission, ${p}_1^{\star}$ and ${p}_2^{\star}$ are computed by Proposition 2 based on $\hat{\bf H}$. The average feedback rate of $\textit{DQ}_{{\sf mr}}^{\sf conv}$ is $B_{\sf tot}$ bits per channel state. We add the subscript ``$\sf ts$'' or ``$\sf it$'' to $\textit{DQ}_{{\sf mr}}^{\sf conv}$ to distinguish whether it is applied in time sharing or interference transmission, respectively. In Fig. 2 (a), the network outage probabilities of minimum rate for $\textit{DQ}_{{\sf mr}, {\sf ts}}$, $\textit{DQ}_{{\sf mr}, {\sf ts}}^{\sf conv}$ (with $B_{\sf tot} = 16$) and the case with no feedback (where each transmitter uses half of the entire block to transmit, i.e., $t_1 = t_2 = 0.5$) are plotted. It is shown that the network outage probabilities of the latter two scenarios are worse than that of $\textit{DQ}_{{\sf mr}, {\sf ts}}$ (which attains the minimum), which substantiates both that feedback is necessary and that the proposed conferencing-based quantizer is superior. Fig. 2 (b) plots the average feedback rate of $\textit{DQ}_{{\sf mr}, {\sf ts}}$, which is finite and small over the entire range of $P$. Furthermore, when $P\rightarrow \infty$ or $0$, the average feedback rate approaches $4$ or $2$, respectively.
This corresponds to the upper bound in Theorem 2 and can be intuitively interpreted as follows: when $P\rightarrow \infty$, the probability that $t_{k, \min} < \frac{1}{2}$ for $k = 1, 2$ approaches $1$; then, after two rounds, $\left(0.5, 0.5\right)$ will most likely be chosen as $\textit{DQ}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)$. On the other hand, when $P\rightarrow 0$, the probability that $t_{k, \min} > 1$ for $k = 1, 2$ also goes to $1$; thus, after round $0$, the quantization process finishes because an outage event is almost surely inevitable. \begin {figure} \centering \includegraphics[width= 4.5 in]{Fig_3.eps} \caption{Distortions of network outage probability for minimum rate of $\textit{DQ}_{{\sf mr}, {\sf it}} $, $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ and the case with no feedback versus $M$.} \end{figure} In Fig. 3, we show the distortions of network outage probability for minimum rate of $\textit{DQ}_{{\sf mr}, {\sf it}} $, $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ and the case with no feedback (where both transmitters use full power, i.e., $p_1 = p_2 = 1$) versus $M$. For each $\epsilon$, we choose a value of $P$ smaller than $P_{\textmd{th}}$, so that interference transmission should be applied. In order to demonstrate that $\textit{DQ}_{{\sf mr}, {\sf it}}$ outperforms $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ even when $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ has a higher feedback rate, we choose the number of feedback bits assigned to $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ as $B_{\sf tot} = 4 \left\lceil \frac{2\log_2 (M + 1) + 3}{4} \right\rceil $. Note that $B_{\sf tot} = 8$ when $1 \leq M \leq 4$ and $12$ when $5\leq M \leq 8$. The distortions of $\textit{DQ}_{{\sf mr}, {\sf it}}$ and $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ versus both $P$ and the average feedback rate are also shown in Fig. 4 for different values of $\epsilon$. It can be observed that in interference transmission, (i) the distortion of $\textit{DQ}_{{\sf mr}, {\sf it}}$ decreases almost linearly with increasing $M$ on a log scale, which corresponds to the upper bound derived in Theorem 3; (ii) the distortion of $\textit{DQ}_{{\sf mr}, {\sf it}}$ decreases much faster with $M$ (or with the average feedback rate) than that of $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$; (iii) the distortion of $\textit{DQ}_{{\sf mr}, {\sf it}}$ is much smaller than those of $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ and the case with no feedback, which verifies that feedback is necessary and that our proposed distributed quantizer based on conferencing outperforms the conventional distributed quantizer. \begin {figure} \centering \includegraphics[width= 4.5 in]{Fig_4.eps} \caption{Distortions of network outage probability for minimum rate of $\textit{DQ}_{{\sf mr}, {\sf it}} $ and $\textit{DQ}_{{\sf mr}, {\sf it}}^{\sf conv}$ versus $P$ and average feedback rate.} \end{figure} \section{Conclusions and Future Work} We have introduced conferencing-based distributed channel quantizers for a two-user interference network where interference signals are treated as noise. We have shown that the proposed distributed quantizers can achieve or closely approach the optimal network outage probabilities of sum rate and minimum rate in time sharing or interference transmission with finite average feedback rates.
So far, we have studied the scenario where only one transmission strategy (interference transmission or time sharing) is used for every channel state. We note that utilizing different transmission strategies for different channel states would result in better performance. The design and analysis of distributed quantizers for such an adaptive system is an interesting future research direction. \section*{Acknowledgement} This work was supported in part by the NSF Award CCF-1218771. \appendices \section{Proofs of Propositions 1 and 2} \begin{IEEEproof} The optimal time sharing pair $(t_1^{\star}, t_2^{\star})$ that minimizes $\textmd{OUT}_{\sf ts}^{\sf mr}$ also maximizes $\textit{MR}_{\sf ts}(t_1, t_2)$. Substituting $t_2 = 1 - t_1$ into $\textit{MR}_{\sf ts}(t_1, t_2)$, the problem that maximizes $\textit{MR}_{\sf ts}(t_1, t_2)$ becomes $\maxmin\limits_{0 \leq t_1\leq 1}\left\{ t_1\log_2\left(1 + P {H_{1, 1}}\right), (1 - t_1)\log_2\left(1 + P {H_{2, 2}}\right) \right\}$. The first term is increasing in $t_1$ while the second term is decreasing in $t_1$. Therefore, the maximum is reached when $t_1\log_2\left(1 + P {H_{1, 1}}\right) = (1 - t_1)\log_2\left(1 + P {H_{2, 2}}\right)$, yielding $t_1^{\star}$ and $t_2^{\star}$ given in \eqref{Optimal_Time_Sharing}. The optimal interference transmission pair $(p_1^{\star}, p_2^{\star})$ that minimizes $\textmd{OUT}_{\sf it}^{\sf mr}$ also maximizes $\textit{MR}_{\sf it}(p_1, p_2)$. We first show $p_1^{\star} = 1$ or $p_2^{\star} = 1$. Assume by contradiction that $0< p_1^{\star}, p_2^{\star} < 1$. Let $\beta = \min\left\{\frac{1}{p_1^{\star}}, \frac{1}{p_2^{\star}}\right\} > 1$; then \begin{align} \label{Optimality_One} {{\textit{MR}}}_{\sf it}\left(\beta p_1^{\star}, \beta p_2^{\star}\right) & =\min \left\{ \log_2\left(1 + \frac{P \beta p_1^{\star} H_{1, 1}}{P \beta p_2^{\star} H_{2, 1} + 1}\right), \log_2\left(1 + \frac{P \beta p_2^{\star} H_{2, 2}}{P \beta p_1^{\star} H_{1, 2} + 1}\right) \right\} \nonumber\\ &=\min \left\{ \log_2\left(1 + \frac{P p_1^{\star} H_{1, 1}}{P p_2^{\star} H_{2, 1} + \frac{1}{\beta}}\right), \log_2\left(1 + \frac{P p_2^{\star} H_{2, 2}}{P p_1^{\star} H_{1, 2} + \frac{1}{\beta}}\right) \right\} \nonumber\\ &> \min \left\{ \log_2\left(1 + \frac{P p_1^{\star} H_{1, 1}}{P p_2^{\star} H_{2, 1} + 1}\right), \log_2\left(1 + \frac{P p_2^{\star} H_{2, 2}}{P p_1^{\star} H_{1, 2} + 1}\right) \right\} \nonumber\\ & ={{\textit{MR}}}_{\sf it}\left(p_1^{\star}, p_2^{\star}\right), \end{align} which contradicts the assumption that $\left(p_1^{\star}, p_2^{\star}\right)$ is optimal. Therefore, $p_1^{\star} = 1$ or $p_2^{\star} = 1$. When $p_1^{\star} = 1$, the problem that maximizes $\textit{MR}_{\sf it}\left(p_1, p_2\right)$ is equivalent to $\maxmin\limits_{0< p_2 \leq 1} \left\{ \frac{P H_{1, 1}}{P p_2 H_{2, 1} + 1}, \frac{P p_2 H_{2, 2}}{P H_{1, 2} + 1} \right\}$, where $\frac{P H_{1, 1}}{P p_2 H_{2, 1} + 1}$ is decreasing in $p_2$ and $\frac{P p_2 H_{2, 2}}{P H_{1, 2} + 1}$ is increasing in $p_2$. Letting $\frac{P H_{1, 1}}{P p_2 H_{2, 1} + 1} = \frac{P p_2 H_{2, 2}}{P H_{1, 2} + 1}$, the positive root is $\tilde{p}_2 = \frac{\sqrt{\frac{4 P^2 H_{1, 1} H_{1, 2} H_{2, 1} + 4 P H_{1, 1} H_{2, 1}}{H_{2, 2}} + 1} - 1}{2P H_{2, 1} }$. Note that $0<\tilde{p}_2 < 1$ holds only when $\frac{P H_{1, 1}}{P H_{2, 1} + 1} <\frac{P H_{2, 2}}{P H_{1, 2} + 1}$. Thus, when $\frac{P H_{1, 1}}{P H_{2, 1} + 1} <\frac{P H_{2, 2}}{P H_{1, 2} + 1}$, $p_1^{\star} = 1$ and $p_2^{\star} = \tilde{p}_2 $.
Similarly, when $p_2^{\star} = 1$, we derive the positive root of $\frac{P p_1 H_{1, 1}}{P H_{2, 1} + 1} = \frac{P H_{2, 2}}{P p_1 H_{1, 2} + 1}$ as $\tilde{p}_1 = \frac{\sqrt{\frac{4P^2 H_{1, 2} H_{2, 1} H_{2, 2} + 4P H_{2, 2} H_{1, 2}}{H_{1, 1}} + 1} - 1}{2PH_{1, 2}}$. Note that $0< \tilde{p}_1 < 1$ holds when $\frac{P H_{1, 1}}{P H_{2, 1} + 1} \geq \frac{P H_{2, 2}}{P H_{1, 2} + 1}$. Hence, when $\frac{P H_{1, 1}}{P H_{2, 1} + 1} \geq \frac{P H_{2, 2}}{P H_{1, 2} + 1}$, $p_1^{\star} = \tilde{p}_1$ and $p_2^{\star} = 1$. \end{IEEEproof} \section{Proof of Theorem 2} \begin{IEEEproof} Let \begin{align} \begin{array}{l} \mathcal{H}_{1} = \left\{{\bf H}: t_{1, \min} + t_{2, \min} > 1, t_{1, \min}, t_{2, \min} > 0\right\},\nonumber\\ \mathcal{H}_{2} = \left\{{\bf H}: t_{1, \min} + t_{2, \min} = 1, t_{1, \min}, t_{2, \min} > 0\right\}, \nonumber\\ \mathcal{H}_{3} = \left\{{\bf H}: t_{1, \min} + t_{2, \min} < 1, t_{1, \min}, t_{2, \min} > 0\right\}.\nonumber \end{array} \end{align} Note that $t_{1, \min} + t_{2, \min} = \frac{\rho}{\log_2\left(1 + P H_{1, 1}\right)} + \frac{\rho}{\log_2\left(1 + P H_{2, 2}\right)} = \frac{\rho}{\frac{{\log_2\left(1 + P H_{1, 1}\right)} {\log_2\left(1 + P H_{2, 2}\right)}}{{\log_2\left(1 + P H_{1, 1}\right)} + {\log_2\left(1 + P H_{2, 2}\right)}}} = \frac{\rho}{\textit{MR}_{\sf ts}\left(t_1^{\star}, t_2^{\star}\right)}$. Then $\textmd{OUT} \left( {\textit{DQ}}_{{\sf mr}, {\sf ts}} \right)$ and $\textmd{OUT}_{{\sf mr}, {\sf ts}}^{\sf opt}$ can be rewritten as \begin{align} \textmd{OUT} \left( {\textit{DQ}}_{{\sf mr}, {\sf ts}} \right) & = \underbrace{\textmd{Prob} \left\{ {\bf H} \in \mathcal{H}_1, {\textit{DQ}}_{{\sf mr}, {\sf ts}} \left({\bf H}\right) < \rho \right\}}_{ = \textmd{OUT}_1} \nonumber\\ & + \underbrace{\textmd{Prob} \left\{ {\bf H} \in \mathcal{H}_2, {\textit{DQ}}_{{\sf mr}, {\sf ts}} \left({\bf H}\right) < \rho \right\}}_{ = \textmd{OUT}_2} \nonumber\\ & + \underbrace{\textmd{Prob} \left\{ {\bf H} \in \mathcal{H}_3, {\textit{DQ}}_{{\sf mr}, {\sf ts}} \left({\bf H}\right) < \rho \right\}}_{ = \textmd{OUT}_3}, \nonumber\\ \textmd{OUT}_{{\sf mr}, {\sf ts}}^{\sf opt} & = \textmd{Prob} \left\{ {\bf H} \in \mathcal{H}_1 \right\}.\nonumber \end{align} To prove $\textmd{OUT} \left( {\textit{DQ}}_{{\sf mr}, {\sf ts}} \right) = \textmd{OUT}_{{\sf mr}, {\sf ts}}^{\sf opt}$, it is sufficient to prove $\textmd{OUT}_{{\sf mr}, {\sf ts}}^{\sf opt} = \textmd{OUT}_1$ and $\textmd{OUT}_2 = \textmd{OUT}_3 = 0$. For any ${\bf H} \in \mathcal{H}_1$, $t_{1, \min} + t_{2, \min} > 1$ is equivalent to $\textit{MR}_{\sf ts}\left(t_1^{\star}, t_2^{\star}\right) < \rho$, so that ${\bf 1}\left({{\bf H} \in \mathcal{H}_1}\right) = {\bf 1}\left({{\bf H} \in \mathcal{H}_1, \textit{DQ}_{{\sf mr}, {\sf ts}}({\bf H}) < \rho}\right)$. Thus $\textmd{OUT}_1 = \textmd{E}\left[{\bf 1}\left({{\bf H} \in \mathcal{H}_1, \textit{DQ}_{{\sf mr}, {\sf ts}}({\bf H}) < \rho}\right)\right] = \textmd{E}\left[{\bf 1}\left({ {\bf H} \in \mathcal{H}_1}\right)\right] = \textmd{OUT}_{{\sf mr}, {\sf ts}}^{\sf opt}$. Besides, $\textmd{OUT}_2 \leq \textmd{Prob}\left\{t_{1, \min} + t_{2, \min} = 1\right\} = \textmd{Prob}\left\{\textit{MR}_{\sf ts}\left(t_1^{\star}, t_2^{\star}\right) = \rho\right\} = 0$, which follows from the fact that a continuous random variable takes any given value with probability zero. Since $\textmd{OUT}_2 \geq 0$, $\textmd{OUT}_2 = 0$.
To prove $\textmd{OUT}_3=0$, it is sufficient to show for any ${\bf H} \in \mathcal{H}_{3}$, ${\textit{MR}_{\sf ts}\left( {\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)\right) \geq \rho}$. Let $t_{k, \min} = \left[0.b_{k, 1}b_{k, 2}\cdots\right]_{2}$. \begin{lemma} For any ${\bf H} \in \mathcal{H}_3$, $\textmd{\rm ENC}_{{\sf mr}, {\sf ts}, k}^l \left({\bf h}_k\right) = b_{k, l}$, $t_{k, \min}^{{\sf lb}, l} = \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, l}\right]_{2}$ and $t_{k, \min}^{{\sf ub}, l} = t_{k, \min}^{{\sf lb}, l} + 2^{-l}$ when $k = 1, 2$ and $l \in \mathtt{N} - \{0\}$. \end{lemma} The proof of Lemma 1 is given in Appendix C. Since $t_{1, \min} + t_{2, \min} < 1$, there must exist $\hat{l}\in\mathtt{N}$ such that $t_{1, \min} + t_{2, \min} \leq 1 - 2^{-\hat{l}}$, or equivalently, \begin{align} \label{Violation} \left[0.b_{1, 1}b_{1, 2}\cdots b_{1, \hat{l}}\cdots\right]_{2} + \left[0.b_{2, 1}b_{2, 2}\cdots b_{2, \hat{l}}\cdots\right]_{2} \leq \left[0.\underbrace{11\cdots 1}_{\hat{l}}\right]_{2}. \end{align} All $(t_{1, \min}, t_{2, \min})$s satisfying \eqref{Violation} can be categorized into the following two types: \begin{enumerate} \item $\exists 1 \leq l^{'} \leq \hat{l}$ such that $\left(b_{1, l^{'}}, b_{2, l^{'}}\right) = (0, 0)$ and $\left(b_{1, l}, b_{2, l}\right) \in \left\{(0, 1), (1, 0)\right\}$ for $l = 1, \ldots, l^{'} - 1$; \item $\left(b_{1, l}, b_{2, l}\right) \in \left\{(0, 1), (1, 0)\right\}$ for any $l\leq \hat{l}$ and $\left(b_{1, \hat{l} + 1}, b_{2, \hat{l} + 1}\right) =(0, 0)$. \end{enumerate} For 1), by Lemma 1, $\textmd{\rm ENC}_{{\sf mr}, {\sf ts}, 1}^{{l}^{'}} \left({\bf h}_k\right) = \textmd{\rm ENC}_{{\sf mr}, {\sf ts}, 2}^{{l}^{'}} \left({\bf h}_k\right) = 0$, then the distributed quantization process will stop at round $l^{'}$ and \begin{align} \textit{DQ}_{{\sf mr}, {\sf ts}}\left({\bf H}\right) &= \left(\frac{t_{1, \min}^{{\sf lb}, l^{'} - 1} + t_{1, \min}^{{\sf ub}, l^{'} - 1}}{2}, \frac{t_{2, \min}^{{\sf lb}, l^{'} - 1} + t_{2, \min}^{{\sf ub}, l^{'} - 1}}{2} \right) \nonumber\\& = \left(\left[0. b_{1, 1}\cdots b_{1, {l^{'}-1}} 1\right]_{2}, \left[0. b_{2, 1}\cdots b_{2, {l^{'}-1}} 1\right]_{2} \right).\nonumber \end{align} Since $t_{k, \min} \leq \left[0. b_{k, 1}\cdots b_{k, {l^{'}-1}} 1\right]_{2}$, ${\textit{MR}_{\sf ts}\left( {\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)\right) \geq \rho}$. For 2), by Lemma 1, $\textmd{\rm ENC}_{{\sf mr}, {\sf ts}, 1}^{\hat{l} + 1} \left({\bf h}_k\right) = \textmd{\rm ENC}_{{\sf mr}, {\sf ts}, 2}^{\hat{l} + 1} \left({\bf h}_k\right) = 0$, then the distributed quantization process will stop at round $\hat{l} + 1$ and \begin{align} \textit{DQ}_{{\sf mr}, {\sf ts}}\left({\bf H}\right) &= \left(\frac{t_{1, \min}^{{\sf lb}, \hat{l}} + t_{1, \min}^{{\sf ub}, \hat{l}}}{2}, \frac{t_{2, \min}^{{\sf lb}, \hat{l}} + t_{2, \min}^{{\sf ub}, \hat{l}}}{2} \right) \nonumber\\& = \left(\left[0. b_{1, 1}\cdots b_{1, \hat{l}} 1\right]_{2}, \left[0. b_{2, 1}\cdots b_{2, \hat{l}} 1\right]_{2} \right).\nonumber \end{align} Since $t_{k, \min} \leq \left[0. b_{k, 1}\cdots b_{k, {l^{'}}} 1\right]_{2}$, ${\textit{MR}_{\sf ts}\left( {\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)\right) \geq \rho}$. Therefore, for any ${\bf H} \in \mathcal{H}_3$, ${\textit{MR}_{\sf ts}\left( {\textit{DQ}}_{{\sf mr}, {\sf ts}}\left({\bf H}\right)\right) \geq \rho}$ and $\textmd{OUT}_3 = 0$. To summarize, $\textmd{OUT} \left( {\textit{DQ}}_{{\sf mr}, {\sf ts}} \right) = \textmd{OUT}_{{\sf mr}, {\sf ts}}^{\sf opt}$. 
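Before turning to the feedback-rate bound, it may help to note that the bookkeeping above, formalized in Lemma 1, is nothing but a bisection encoder for $t_{k, \min}$. The following short sketch (our illustration only, in plain Python; round $0$ and the stopping rule of $\textit{DQ}_{{\sf mr}, {\sf ts}}$ are omitted) makes the correspondence between the bits $b_{k, l}$ and the bounds $t_{k, \min}^{{\sf lb}, l}$, $t_{k, \min}^{{\sf ub}, l}$ explicit.

\begin{verbatim}
def bisection_bits(t, rounds):
    # Bits b_l and bounds (lb, ub) as in Lemma 1, for 0 < t < 1.
    lb, ub, bits = 0.0, 1.0, []
    for _ in range(rounds):
        mid = 0.5 * (lb + ub)
        b = 1 if t >= mid else 0      # the encoder sends 1(t >= midpoint)
        bits.append(b)
        lb, ub = (mid, ub) if b else (lb, mid)
    return bits, lb, ub               # lb = [0.b_1 ... b_L]_2, ub = lb + 2**(-L)

print(bisection_bits(0.3, 4))         # ([0, 1, 0, 0], 0.25, 0.3125)
\end{verbatim}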
Now, let's prove the upper bound given in \eqref{FR_Time_Sharing}. Let \begin{align} {\mathcal{{R}}}_l = \left\{{\bf H}: \text{ the quantization process of } {\textit{DQ}}_{{\sf mr}, {\sf ts}} \text{ will stop after round } l \right\},\nonumber \end{align} for $l \in \mathtt{N}$. From Lemma 1 and the description of $\textit{DQ}_{{\sf mr}, {\sf ts}}$, for $l \geq 1$, \begin{align} \mathcal{R}_l = \left\{{\bf H}: (b_{1, l}, b_{2, l}) = (0, 0) \text{ or } (1, 1), (b_{1, m}, b_{2, m}) \in\{(0, 1), (1, 0)\}, m = 1, \ldots, l - 1\right\}.\nonumber \end{align} More specifically, ${\mathcal{{R}}}_l = \bigcup_{q = 0}^{2^{l} - 1} \left\{{\mathcal{{R}}}_{l, q}^{(1)}\cup{\mathcal{{R}}}_{l, q}^{(2)}\right\}$, where \begin{align} \label{DCP_1} \begin{array}{l} \mathcal{R}_{l, q}^{(1)} = \left\{{\bf H}: \frac{2q}{2^{l}} \leq t_{1, \min} \leq \frac{2q+1}{2^{l}}, 1 - \frac{2q+2}{2^{l}} \leq t_{2, \min}\leq 1-\frac{2q+1}{2^{l}}, 0 < t_{1, \min}, t_{2, \min} < 1\right\} \\ \hspace{8.5mm}-\left\{{\bf H}: t_{1, \min} = \frac{2q+1}{2^{l}}, t_{2, \min} = 1-\frac{2q+1}{2^{l}}\right\}, \\ \mathcal{R}_{l, q}^{(2)} = \left\{{\bf H}: \frac{2q + 1}{2^{l}} \leq t_{1, \min} \leq \frac{2q+2}{2^{l}}, 1 - \frac{2q+1}{2^{l}} \leq t_{2, \min} \leq 1 - \frac{2q}{2^{l}}, 0 < t_{1, \min}, t_{2, \min} < 1\right\} \\ \hspace{8.5mm}- \left\{{\bf H}: t_{1, \min} = \frac{2q+1}{2^{l}}, t_{2, \min} = 1-\frac{2q+1}{2^{l}}\right\}. \end{array} \end{align} It follows from \eqref{DCP_1} that \begin{align} \label{Compound_R_W} \begin{array}{l} \bigcup_{w = l}^{\infty} {\mathcal{R}}_{w} \subseteq \left\{\bigcup_{u=0}^{2^{l-1}-1}\left\{{\bf H}: \frac{1}{2} - \frac{u + 1}{2^{l}} \leq t_{1, \min} \leq \frac{1}{2} - \frac{u}{2^l}, \frac{1}{2} + \frac{u }{2^{l}} \leq t_{2, \min}\leq \frac{1}{2} + \frac{u + 1}{2^l}\right\}\right. \\ \left. \hspace{21.5mm}\cup \bigcup_{u=0}^{2^{l-1}-1}\left\{{\bf H}: \frac{1}{2} + \frac{u }{2^{l}} \leq t_{1, \min}\leq \frac{1}{2} + \frac{u + 1}{2^l}, \frac{1}{2} - \frac{u + 1}{2^{l}} \leq t_{2, \min} \leq \frac{1}{2} - \frac{u}{2^l}\right\} \right\}. \end{array} \end{align} Since $2(l+1)$ bits are fed back in total after round $l$, the average feedback rate is given as \begin{align} \label{DQ_FR_Temp} \textmd{FR} \left({\textit{DQ}}_{{\sf mr}, {\sf ts}}\right) & = \sum_{l = 0}^{\infty} 2(l+1)\textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_l\right\},\nonumber \\ & = 2 \textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_0\right\} + 4 \textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_1\right\} + \sum_{l = 2}^{\infty} (2l+2) \textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_l \right\} \nonumber\\ & = 2 + 2 \textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_1\right\} + 2 \sum_{l = 2}^{\infty} l \times\textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_l \right\} \nonumber\\ &\leq 2 + 2 \textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_1\right\} + 2\sum_{l = 2}^{\infty}l \times \textmd{Prob}\left\{{\bf H} \in \bigcup_{w = l}^{\infty} {\mathcal{{R}}}_w \right\}. \end{align} It is trivial to obtain the PDF of $t_{k, \min}$ as $f_{t_{k, \min}}(x) = \frac{\rho \log 2 }{P x^2 } e^{-\frac{e^{\frac{\rho \log 2}{x} - 1}}{P}} e^{\frac{\rho \log 2}{x}}, x > 0$ for $k = 1, 2$. 
Since ${\mathcal{{R}}}_1 \subseteq \left\{{\bf H}: 0 \leq t_{1, \min}, t_{2, \min} \leq \frac{1}{2} \text{ or } \frac{1}{2} \leq t_{1, \min}, t_{2, \min} \leq 1 \right\}$, the upper bound on $\textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_1\right\}$ is derived as \begin{align} \label{Temp_Prob} \textmd{Prob}\left\{{\bf H} \in {\mathcal{{R}}}_1\right\} & \leq \int_0^{\frac{1}{2}} f_{t_{1, \min}}(x_1) {\rm d}x_1\int_0^{\frac{1}{2}} f_{t_{2, \min}}(x_2) {\rm d}x_2 + \int_{\frac{1}{2}}^1 f_{t_{1, \min}}(x_1) {\rm d}x_1\int_{\frac{1}{2}}^1 f_{t_{2, \min}}(x_2) {\rm d}x_2 \nonumber\\ & \leq \int_0^{1} f_{t_{1, \min}}(x_1) {\rm d}x_1 = e^{-\frac{e^{\rho\log 2}-1}{P}} \leq e^{-\frac{{\rho\log 2}}{P}}, \end{align} where the inequalities arise from $\int_0^{\frac{1}{2}} f_{t_{2, \min}}(x_2){\rm d}x_2 \leq 1$, $\int_{\frac{1}{2}}^1 f_{t_{2, \min}}(x_2){\rm d}x_2 \leq 1$, and $e^x - 1 \geq x$ for $x\geq 0$. When $l \geq 2$, from \eqref{Compound_R_W}, $\textmd{Prob}\left\{{\bf H} \in \bigcup_{w = l}^{\infty} {\mathcal{{R}}}_w \right\}$ can be bounded by \begin{align} \textmd{Prob} \left\{ {\bf H} \in \bigcup_{w = l}^{\infty}{\mathcal{R}}_{w} \right\} & \leq \sum_{u = 0}^{2^{l-1}-1} \int_{\frac{1}{2} - \frac{u + 1}{2^{l}}}^{\frac{1}{2} - \frac{u}{2^l}} f_{t_{1, \min}}(x_1){\rm d}x_1 \int_{\frac{1}{2} + \frac{u }{2^{l}}}^{\frac{1}{2} + \frac{u + 1}{2^l}} f_{t_{2, \min}}(x_2){\rm d}x_2 \nonumber\\ & + \sum_{u = 0}^{2^{l-1}-1} \int_{\frac{1}{2} + \frac{u }{2^{l}}}^{\frac{1}{2} + \frac{u + 1}{2^l}} f_{t_{1, \min}}(x_1){\rm d}x_1 \int_{\frac{1}{2} - \frac{u + 1}{2^{l}}}^{\frac{1}{2} - \frac{u}{2^l}} f_{t_{2, \min}}(x_2){\rm d}x_2 \nonumber\\ & = 2\sum_{u = 0}^{2^{l-1}-1} \int_{\frac{1}{2} - \frac{u + 1}{2^{l}}}^{\frac{1}{2} - \frac{u}{2^l}} f_{t_{1, \min}}(x_1){\rm d}x_1 \int_{\frac{1}{2} + \frac{u }{2^{l}}}^{\frac{1}{2} + \frac{u + 1}{2^l}} f_{t_{2, \min}}(x_2){\rm d}x_2. \nonumber \end{align} When $\frac{1}{2} + \frac{u }{2^{l}} \leq x_2 \leq \frac{1}{2} + \frac{u + 1}{2^l}$, $\frac{1}{2}\leq x_2 \leq 1$, thus $f_{t_{2, \min}}(x_2) = \frac{\rho \log 2 }{P x_2^2 } e^{-\frac{e^{\frac{\rho \log 2}{x_2} - 1}}{P}} e^{\frac{\rho \log 2}{x_2}} \leq \frac{4 \rho \log 2 }{P } e^{-\frac{e^{{\rho \log 2} - 1}}{P}} e^{{2 \rho \log 2}}$. 
Then the upper bound on $\textmd{Prob}\left\{{\bf H} \in \bigcup_{w = l}^{\infty} {\mathcal{{R}}}_w \right\}$ is further derived as \begin{align} \label{Temp_Tail} \textmd{Prob} \left\{ {\bf H} \in \bigcup_{w = l}^{\infty}{\mathcal{R}}_{w} \right\} & \leq \frac{8\rho \log 2}{P}\sum_{u = 0}^{2^{l-1}-1} \int_{\frac{1}{2} - \frac{u + 1}{2^{l}}}^{\frac{1}{2} - \frac{u}{2^l}} f_{t_{1, \min}}(x_1) {\rm d}x_1 \int_{\frac{1}{2} + \frac{u }{2^{l}}}^{\frac{1}{2} + \frac{u + 1}{2^l}} {e^{-\frac{e^{{\rho}{\log 2} - 1}}{P}} e^{{2\rho}{\log 2}}} {\rm d}x_2 \nonumber\\ & \leq \frac{8\rho \log 2}{P}\sum_{u = 0}^{2^{l-1}-1} \int_{\frac{1}{2} - \frac{u + 1}{2^{l}}}^{\frac{1}{2} - \frac{u}{2^l}} f_{t_{1, \min}}(x_1) {\rm d}x_1 \int_{\frac{1}{2} + \frac{u }{2^{l}}}^{\frac{1}{2} + \frac{u + 1}{2^l}} {e^{-\frac{{{\rho}{\log 2}}}{P}} e^{{2\rho}{\log 2}}} {\rm d}x_2 \nonumber\\ & = {8\rho e^{{2\rho}{\log 2}}}{\log 2}\times \frac{e^{-\frac{\rho \log 2}{P}}}{P} \times \frac{1}{2^l}\sum_{u = 0}^{2^{l-1}-1} \int_{\frac{1}{2} - \frac{u + 1}{2^{l}}}^{\frac{1}{2} - \frac{u}{2^l}} f_{t_{1, \min}}(x_1) {\rm d}x_1 \nonumber\\ & = {8\rho e^{{2\rho}{\log 2}}}{\log 2}\times \frac{e^{-\frac{\rho \log 2}{P}}}{P} \times \frac{1}{2^l} \int_0^{\frac{1}{2}} f_{t_{1, \min}}(x_1) {\rm d}x_1 \nonumber\\ & \leq {8\rho e^{{2\rho}{\log 2}}}{\log 2}\times \frac{e^{-\frac{\rho \log 2}{P}}}{P} \times \frac{1}{2^l}. \end{align} Subsituting \eqref{Temp_Prob}, \eqref{Temp_Tail} into \eqref{DQ_FR_Temp} and using the fact that $\sum_{l = 2}^{\infty}\frac{l}{2^l}$ is finite yield the upper bound in \eqref{FR_Time_Sharing}. \end{IEEEproof} \section{Proof of Lemma 1} \begin{IEEEproof} Based on the procedures in $\textit{DQ}_{{\sf mr}, {\sf ts}}$, $t_{k, \min}^{{\sf lb}, l} \leq t_{k, \min} \leq t_{k, \min}^{{\sf ub}, l}$ for $l \in \mathtt{N} - \{0\}$. It is straightforward to verify Lemma 1 holds when $l = 1$. By induction, assume Lemma 1 holds when $l \leq m$ where $m \geq 2$. For $l = m + 1$,\footnote{We assume the quantization process in $\textit{DQ}_{{\sf mr}, {\sf ts}}$ still continues in round $m + 1$. Otherwise, it is not necessary to consider Lemma 1 when $l = m + 1$.} according to $\textit{DQ}_{{\sf mr}, {\sf ts}}$, $\textmd{\rm ENC}_{{\sf mr}, {\sf ts}, k}^{m + 1} \left({\bf h}_k\right) = {\bf 1}\left({t_{k, \min} \geq \frac{t_{k, \min}^{{\sf lb}, m} + t_{k, \min}^{{\sf ub}, m}}{2}}\right)$, and \begin{align} \frac{t_{k, \min}^{{\sf lb}, m} + t_{k, \min}^{{\sf ub}, m}}{2} = \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m}\right]_{2} + 2^{-m-1} = \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m} 1\right]_{2}.\nonumber \end{align} If ${t_{k, \min} \geq \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m} 1\right]_{2} =\frac{t_{k, \min}^{{\sf lb}, m} + t_{k, \min}^{{\sf ub}, m}}{2}}$, it must have $b_{k, m + 1} = 1 = \textmd{\rm ENC}_{{\sf mr}, {\sf ts}, k}^{m + 1} \left({\bf h}_k\right)$. Then $t_{k, \min}^{{\sf lb}, m + 1} = \frac{t_{k, \min}^{{\sf lb}, m} + t_{k, \min}^{{\sf ub}, m}}{2} = \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m} b_{k, m + 1}\right]_{2}$ and $t_{k, \min}^{{\sf ub}, m + 1} = t_{k, \min}^{{\sf ub}, m } = t_{k, \min}^{{\sf lb}, m } + 2^{-m} = t_{k, \min}^{{\sf lb}, m+1 } + 2^{-m - 1}$. If ${t_{k, \min} < \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m} 1\right]_{2} =\frac{t_{k, \min}^{{\sf lb}, m} + t_{k, \min}^{{\sf ub}, m}}{2}}$, since $t_{k, \min}\geq t_{k, \min}^{{\sf lb}, m} = \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m}\right]_{2}$, it must have $b_{k, m + 1} = 0 = \textmd{\rm ENC}_{{\sf mr}, {\sf ts}, k}^{m + 1} \left({\bf h}_k\right)$. 
Then $t_{k, \min}^{{\sf lb}, m + 1} = t_{k, \min}^{{\sf lb}, m } =\left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m} 0\right]_{2} = \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m} b_{k, m + 1}\right]_{2}$ and $t_{k, \min}^{{\sf ub}, m + 1} = \frac{t_{k, \min}^{{\sf lb}, m} + t_{k, \min}^{{\sf ub}, m}}{2} = \left[0.b_{k, 1} b_{k, 2}\cdots b_{k, m} 1\right]_{2} =t_{k, \min}^{{\sf lb}, m + 1} + 2^{-m-1} $. Therefore, Lemma 1 holds when $l = m + 1$. In conclusion, Lemma 1 holds for any $l \in \mathtt{N} - \{0\}$. \end{IEEEproof} \section{Proof of Theorem 3} \begin{IEEEproof} For a given $M \in \mathtt{N} - \{0\}$, define a global quantizer which selects the interference transmission pair that maximizes $\textit{MR}_{\sf it}\left(p_1, p_2\right)$ among the codebook $\mathcal{C}_{{\sf unif}}$ as \begin{align} \textit{GQ}_{{\sf mr}, {\sf it}}\left({\bf H}\right) = \argmax\limits_{(p_1, p_2) \in \mathcal{C}_{{\sf unif}}} \textit{MR}_{\sf it}\left(p_1, p_2\right),\nonumber \end{align} where $\mathcal{C}_{{\sf unif}} = \left\{(1, 1), (1, \frac{m}{M}), (\frac{m}{M}, 1): m = 1, \ldots, M - 1\right\}$. Let $\textmd{OUT} \left(\textit{GQ}_{{\sf mr}, {\sf it}}\right) = \textmd{Prob} \left\{ \textit{MR}_{{\sf it}}\left(\textit{GQ}_{{\sf mr}, {\sf it}}\left({\bf H}\right) \right)< \rho \right\}$. First, let us show that $\textmd{OUT} \left({\textit{DQ}}_{{\sf mr}, {\sf it}}\right) = \textmd{OUT} \left(\textit{GQ}_{{\sf mr}, {\sf it}}\right)$. According to ${\textit{GQ}}_{{\sf mr}, {\sf it}}$, an outage event happens if and only if $\textit{MR}_{{\sf it}}\left(p_1, p_2\right) < {\rho}$ for any $(p_1, p_2) \in \mathcal{C}_{{\sf unif}}$. In ${\textit{DQ}}_{{\sf mr}, {\sf it}}$, an outage occurs if and only if the following conditions are satisfied: (i) receiver 2 sends ``0'' after round $0$; (ii) $\log_2\left(1 + \frac{\textmd{ENC}_{{\sf mr}, {\sf it}, 2}^{1} \left({\bf h}_2\right) P H_{1, 1}}{ P H_{2, 1} + 1}\right) < \rho$. (i) happens because $\log_2\left(1 + \frac{ \textmd{ENC}_{{\sf mr}, {\sf it}, 1}^{0} \left({\bf h}_1\right) P H_{2, 2}}{ P H_{1, 2} + 1}\right) < \rho$. It means for $x \in\mathcal{C}_{M}$, $\log_2\left(1 + \frac{ P H_{1, 1}}{ x P H_{2, 1} + 1}\right) \geq \rho$ and $\log_2\left(1 + \frac{ x P H_{2, 2}}{ P H_{1, 2} + 1}\right) \geq \rho$ cannot hold simultaneously, or equivalently, ${{\textit{MR}}}_{\sf it}\left(p_1, p_2\right) < {\rho}$ for $(1, p_2) \in \mathcal{C}_{{\sf unif}}$. Similarly, (ii) means $\log_2\left(1 + \frac{ x P H_{1, 1}}{ P H_{2, 1} + 1}\right) \geq \rho$ and $\log_2\left(1 + \frac{ P H_{2, 2}}{ x P H_{1, 2} + 1}\right) \geq \rho$ cannot stand at the same time for $x\in\mathcal{C}_{M}$, which is to say, ${{\textit{MR}}}_{\sf it}\left(p_1, p_2\right) < {\rho}$ for $(p_1, 1) \in \mathcal{C}_{\textmd{unif}}$. Thus, (i) and (ii) both happen means ${{\textit{MR}}}_{\sf it}\left(p_1, p_2\right) < {\rho}$ for any $(p_1, p_2)\in \mathcal{C}_{\textmd{unif}}$. i.e., ${\bf 1}\left({\textit{MR}_{{\sf it}}\left(\textit{GQ}_{{\sf mr}, {\sf it}}\left({\bf H}\right) \right)< \rho}\right)={\bf 1}\left({\textit{MR}_{\sf it}\left( {\textit{DQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right)\right) < \rho}\right)$. 
Hence, we have $\textmd{OUT} \left({\textit{DQ}}_{{\sf mr}, {\sf it}}\right)= \textmd{OUT} \left(\textit{GQ}_{{\sf mr}, {\sf it}}\right)$ since $\textmd{OUT} \left({\textit{DQ}}_{{\sf mr}, {\sf it}}\right) = \textmd{E}\left[{\bf 1}\left({\textit{MR}_{\sf it}\left( {\textit{DQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right)\right) < \rho}\right)\right]$ and $\textmd{OUT} \left(\textit{GQ}_{{\sf mr}, {\sf it}}\right) = \textmd{E}\left[{\bf 1}\left({\textit{MR}_{{\sf it}}\left(\textit{GQ}_{{\sf mr}, {\sf it}}\left({\bf H}\right) \right)< \rho}\right)\right]$. To prove \eqref{DQ_OUT_OUT}, it is sufficient to show $\textmd{OUT} \left(\textit{GQ}_{{\sf mr}, {\sf it}}\right) \leq \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt} + \frac{C_1}{M}$. Define another quantizer $\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}$ that selects the interference transmission pair according to \begin{align} \label{OUT_Sub_TPV} \tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right) = \left\{ \begin{matrix} \left(\hat{p}_1, 1 \right), & \frac{{H}_{1, 1}}{ {H}_{2, 1} + \frac{1}{P}} \geq \frac{{H}_{2, 2}}{ {H}_{1, 2} + \frac{1}{P}},\\ \left(1, \hat{p}_2 \right), & \frac{{H}_{1, 1}}{ {H}_{2, 1} + \frac{1}{P} } < \frac{{H}_{2, 2}}{ {H}_{1, 2} + \frac{1}{P}}, \end{matrix} \right. \end{align} where \begin{align} \label{hat_p} \hat{p}_1 = \max\limits_{ \begin{subarray}{c} x\in \mathcal{C}_M, x\leq {p}_1^{\star} \end{subarray}} x, \hat{p}_2 = \max\limits_{ \begin{subarray}{c} x\in\mathcal{C}_M, x\leq {p}_2^{\star} \end{subarray}} x. \end{align} The network outage probability of minimum rate achieved by $\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}$ is $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) = \mathtt{Prob} \left\{ \tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right) < \rho \right\}$. Since ${\textit{GQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right) \geq \tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\left({\bf H}\right)$, $\textmd{OUT} \left(\textit{GQ}_{{\sf mr}, {\sf it}}\right) \leq \textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right)$. Hence, to prove \eqref{DQ_OUT_OUT}, it is sufficient to prove $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) - \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt} \leq \frac{C_1}{M}$. Let $\bar{\rho} = 2^{\rho}-1$, $H_{121} = \frac{{H}_{1, 1}} {{H}_{2, 1} + \frac{1}{P}}$, $H_{212} = \frac{{H}_{2, 2}} {{H}_{1, 2} + \frac{1}{P}}$, and $\alpha = \frac{1}{M}$. When $M = 1$, $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) = \textmd{Prob}\left\{\textit{MR}_{\sf it}(1, 1) < \rho\right\}$. Let $C_2 = \textmd{Prob}\left\{\textit{MR}_{\sf it}(1, 1) < \rho\right\}$, then $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) \leq \frac{C_2}{M}$. When $M\geq 1$, $0 < \alpha \leq \frac{1}{2} < 1$. 
$\textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt}$ and $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right)$ are rewritten as \begin{align} \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt}& = \textmd{Prob}\left\{H_{121} \geq H_{212}, p_1^{\star} H_{121} <\bar{\rho}\right\} + \textmd{Prob}\left\{H_{121} < H_{212}, p_2^{\star} H_{212} < \bar{\rho}\right\}, \nonumber\\ \textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) & = {\textmd{Prob} \left\{ H_{121} \geq H_{212}, \hat{p}_1 H_{121} < \bar{\rho} \right\}} + {\textmd{Prob} \left\{ H_{121} < H_{212}, \hat{p}_2 H_{212} < \bar{\rho} \right\}}, \nonumber \end{align} then $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) - \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt}$ is derived as \begin{align} \label{OUT_UB_1} & \textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) - \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt} \nonumber\\ & = {\textmd{Prob} \left\{ H_{121} \geq H_{212}, p_1^{\star} H_{121} \geq \bar{\rho}, \hat{p}_1 H_{121} < \bar{\rho} \right\}} \nonumber\\ & + {\textmd{Prob} \left\{ H_{121} < H_{212}, p_2^{\star} H_{212} \geq \bar{\rho}, \hat{p}_2 H_{212} < \bar{\rho} \right\}}\nonumber\\ & = 2 {\textmd{Prob} \left\{ H_{121} \geq H_{212}, p_1^{\star} H_{121} \geq \bar{\rho}, \hat{p}_1 H_{121} < \bar{\rho} \right\}} \nonumber\\ & \leq 2 {\textmd{Prob} \left\{ H_{121} \geq H_{212}, p_1^{\star} H_{121} \geq \bar{\rho}, \left(p_1^{\star} - \alpha\right) H_{121} < \bar{\rho} \right\}} \nonumber\\ & = 2 {\textmd{Prob} \left\{ H_{121} \geq H_{212}, \frac{\bar{\rho}}{H_{121}} \leq p_1^{\star} < \frac{\bar{\rho}}{H_{121}} + \alpha \right\}}, \end{align} where the first inequality is from ${p}_1^{\star} - \hat{p}_1 \leq \alpha$ by \eqref{hat_p}. Let $A = \frac{\bar{\rho}}{H_{121}}$ and $B = A + \alpha$. The PDFs of $H_{k, l}$ are $f_{H_{1, 1}}(x) = f_{H_{2, 2}}(x) = e^{-x}$ and $f_{H_{1, 2}}(x) = f_{H_{2, 1}}(x) = \frac{1}{\epsilon}e^{-\frac{x}{\epsilon}}$, $x > 0$, for $k, l = 1, 2$. Then the PDFs of $H_{121}$ and $H_{212}$ are easily obtained as $f_{H_{121}}(x) = f_{H_{212}}(x) = \frac{e^{-\frac{x}{P}}}{P(\epsilon x + 1)} + \frac{\epsilon e^{-\frac{x}{P}}}{(\epsilon x + 1)^2}$, $x > 0$. From \eqref{First_p}, ${p}_1^{\star}$ is rewritten as ${p}_1^{\star} = \frac {\sqrt{\frac{4P^2 }{H_{121}} {H}_{2, 2}{H}_{1, 2} + 1} - 1} {2P {H}_{1, 2}}$. 
Since $0\leq p_1^{\star}\leq 1$, it follows that \begin{align} \label{I_1_OUT} & \textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) - \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt} \nonumber\\ & \leq 2\textmd{Prob} \left\{ H_{121} \geq H_{212}, A \leq 1, B > 1, A \leq p_1^{\star} \right\} + 2\textmd{Prob} \left\{ H_{121} \geq H_{212}, B \leq 1, A \leq p_{1}^{\star} < B \right\} \nonumber\\ & \leq 2\textmd{Prob} \left\{ H_{121} \geq H_{212}, \bar{\rho} \leq H_{121} < \frac{\bar{\rho}}{1 - \alpha}, A \leq \frac {\sqrt{\frac{4P^2 }{H_{121}} {H}_{2, 2}{H}_{1, 2} + 1} - 1} {2P {H}_{1, 2}} \right\} \nonumber\\ & + 2\textmd{Prob} \left\{ H_{121} \geq H_{212}, H_{121} \geq \frac{\bar{\rho}}{1 - \alpha}, A \leq \frac {\sqrt{\frac{4P^2 }{H_{121}} {H}_{2, 2}{H}_{1, 2} + 1} - 1} {2P {H}_{1, 2}} < B \right\} \nonumber\\ & \leq 2\underbrace{\textmd{Prob} \left\{ \bar{\rho} \leq H_{121} < \frac{\bar{\rho}}{1 - \alpha}, H_{121} A^2 {H}_{1, 2} + \frac{A}{P}H_{121} \leq {H}_{2, 2} < H_{121} {H}_{1, 2} + \frac{H_{121} }{P} \right\}}_{ = I_{1}} \nonumber\\ & + 2\underbrace{\textmd{Prob} \left\{ H_{121} \geq \frac{\bar{\rho}}{1 - \alpha}, H_{121} A^2 {H}_{1, 2} + \frac{A}{P}H_{121} \leq {H}_{2, 2} < H_{121} B^2 {H}_{1, 2} + \frac{B}{P} H_{121} \right\}.}_{= I_{2}} \end{align} The upper bound on $I_{1}$ can be derived as \begin{align} I_{1} \leq \textmd{Prob} \left\{ \bar{\rho} \leq H_{121} \leq \frac{\bar{\rho} }{1 - \alpha} \right\} = \int_{\bar{\rho}}^{\frac{\bar{\rho} }{1 - \alpha}} f_{H_{121}}(x){\rm d}x = \frac {e^{-\frac{\bar{\rho}}{P}}} {\epsilon \bar{\rho} + 1} \left( 1 - \frac {\epsilon \bar{\rho} + 1} {\epsilon \frac{\bar{\rho}}{1 - \alpha} + 1} e^{-\frac{\bar{\rho}}{P (1 - \alpha)}\alpha} \right).\nonumber \end{align} Since $1 - x e^{-y} \leq 1 - x + xy$ when $0 < x \leq 1, y >0$, $\epsilon \bar{\rho} + 1 \geq 1$, $\frac{1}{1 - \alpha}\geq 1$, and $1 - \alpha \geq \frac{1}{2}$, $I_{1}$ is further bounded by \begin{align} \label{I_11_OUT} I_{1} & \leq \frac {{e^{-\frac{\bar{\rho}}{P}}}} {{{\epsilon \bar{\rho} + 1}}} \left( 1 - \frac {\epsilon \bar{\rho} + 1} {\epsilon \frac{\bar{\rho}}{1 - \alpha} + 1} + \frac {\epsilon \bar{\rho} + 1} { \epsilon\frac{\bar{\rho}}{{1 - \alpha}} + 1} \times { \frac{\bar{\rho}}{P {(1 - \alpha)}}\alpha} \right) \nonumber\\ & \leq e^{-\frac{\bar{\rho}}{P}} \left( 1 - \frac {\epsilon \bar{\rho} + 1} {\epsilon \frac{\bar{\rho}}{1 - \alpha} + 1} + \frac {\epsilon \bar{\rho} + 1} { \epsilon \bar{\rho} + 1} \times { \frac{\bar{\rho}}{P \times { \frac{1}{2}}}\alpha} \right) \nonumber\\ & \leq {{e^{-\frac{\bar{\rho}}{P}}}} \left[1 + \frac{2\bar{\rho}}{P}\right] \alpha \leq C_3 \alpha, \end{align} where $C_3 = 2$. The last inequality arises from $e^{-x}(1 + 2x) \leq 2 e^{-\frac{1}{2}}\leq 2$ for $x \geq 0$. 
Subsequently, $I_{2}$ is upper-bounded by \begin{align} \label{sum_I_2} I_{2} & = \frac{1}{\epsilon} \int_{\underbrace{\frac{\bar{\rho}}{1 - \alpha}}_{\geq \bar{\rho}}}^{\infty} f_{H_{121}}(x) \int_0^{\infty} e^{-\frac{H_{1, 2}}{\epsilon}} \int_{x A^2 {H}_{12} + \frac{A}{P}x}^{x B^2 {H}_{12} + \frac{B}{P}x} e^{-H_{2, 2}} {\rm d}H_{2, 2} {\rm d}H_{1, 2} {\rm d}x \nonumber\\ & \leq \frac{1}{\epsilon} \int_{{\bar{\rho}}}^{\infty} f_{H_{121}}(x) \int_0^{\infty} e^{-\frac{H_{1, 2}}{\epsilon}} \int_{x A^2 {H}_{12} + \frac{A}{P}x}^{x B^2 {H}_{12} + \frac{B}{P}x} e^{-H_{2, 2}} {\rm d}H_{2, 2} {\rm d}H_{1, 2} {\rm d}x \nonumber\\ & = \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \left( \frac {e^{-\frac{A}{P}x}} {x A^2 + \frac{1}{\epsilon}} - \frac {e^{-\frac{B}{P}x}} {x B^2 + \frac{1}{\epsilon}} \right) {\rm d}x \nonumber\\ & = \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac {\overbrace{\left(\frac{e^{-\frac{A}{P}x}}{\epsilon}\right)}^{\leq 1}\overbrace{\left(1 - e^{-\frac{\alpha}{P}x}\right) }^{\leq \frac{\alpha}{P} x}} {\underbrace{\left(x A^2 + \frac{1}{\epsilon}\right)\left(x B^2 + \frac{1}{\epsilon}\right)}_{\geq \frac{1}{\epsilon^2}}} {\rm d}x + \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac {B^2 x \overbrace{e^{-\frac{A}{P}x}}^{\leq 1}\overbrace{\left(1 - \frac{A^2}{B^2}e^{-\frac{\alpha}{P}x}\right)}^{\leq 1 - \frac{A^2}{B^2} + \frac{A^2}{B^2} {\frac{\alpha}{P}x}} } {\left(x A^2 + \frac{1}{\epsilon}\right)\left(x B^2 + \frac{1}{\epsilon}\right)} {\rm d}x \nonumber\\ & \leq \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac {\frac{1}{\epsilon}\left(\frac{\alpha}{P} x\right) } {\frac{1}{\epsilon^2 }} {\rm d}x + \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac {B^2 x \left(1 - \frac{A^2}{B^2} + \frac{A^2}{B^2} {\frac{\alpha}{P}x}\right) } {\underbrace{\left(x A^2 + \frac{1}{\epsilon}\right)\left(x B^2 + \frac{1}{\epsilon}\right)}_{\geq \frac{A^2 x}{\epsilon}}} {\rm d}x \nonumber\\ &\leq \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac {\frac{1}{\epsilon}\left(\frac{\alpha}{P} x\right) } {\frac{1}{\epsilon^2 }} {\rm d}x + \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac {B^2 x \left(1 - \frac{A^2}{B^2} \right) } {\left(x A^2 + \frac{1}{\epsilon}\right)\left(x B^2 + \frac{1}{\epsilon}\right)} {\rm d}x + \frac{1}{\epsilon} \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac {B^2 x \left(\frac{A^2}{B^2} {\frac{\alpha}{P}x}\right) } { \frac{A^2 x}{\epsilon}} {\rm d}x \nonumber\\ & = \underbrace{\int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac{2\alpha x}{P} {\rm d}x}_{ = I_{2, 1}} + \frac{1}{\epsilon} \underbrace{\int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac { (B^2 - A^2 ) x } {{\left(x A^2 + \frac{1}{\epsilon}\right)\left(x B^2 + \frac{1}{\epsilon}\right)}} {\rm d}x.}_{ = I_{2, 2}} \end{align} The upper bound on $I_{2, 1}$ is derived as \begin{align} \label{I_121_OUT} I_{2, 1} & \leq \int_{\bar{\rho}}^{\infty} f_{H_{121}}(x) \frac{2\alpha x}{P} {\rm d}x = \frac{2\alpha }{P} \textmd{E}\left[\frac{H_{1, 1}}{H_{2, 1} + \frac{1}{P}}\right] = \frac{2\alpha }{\epsilon P} \int_0^{\infty} e^{-H_{1, 1}} \int_0^{\infty} e^{-\frac{H_{2, 1}}{\epsilon}} \frac{H_{1, 1}}{H_{2, 1} + \frac{1}{P}} {\rm d}H_{1, 1} {\rm d}H_{2, 1} \nonumber\\ & = \frac{2\alpha }{\epsilon P} \int_0^{\infty} \frac{e^{-\frac{H_{2, 1}}{\epsilon}}}{H_{2, 1} + \frac{1}{P}} {\rm d}H_{2, 1} = \frac{2\alpha e^{\frac{1}{\epsilon P}}}{\epsilon P} \int_\frac{1}{\epsilon P}^{\infty} \frac{e^{-z}}{z} {\rm d}z \leq \frac{2\log(1 + 
\epsilon P)}{\epsilon P}\alpha \leq C_4 \alpha, \end{align} where $C_4 = 2$. The last inequality is from the exponential integral $\int_x^{\infty}\frac{e^{-y}}{y}{\rm d}y \leq e^{-x} \log\left(1 + \frac{1}{x}\right)$ \cite{Handbook} as well as $\log(1 + x) \leq x$ for $x\geq 0$. Substituting $A = \frac{\bar{\rho}}{x}$ $ B = A + \alpha$, and $f_{H_{121}}(\cdot)$ into $I_{2, 2}$ yields \begin{align} \label{I_122_OUT} I_{2, 2} & = \underbrace{\frac{1}{\epsilon } \int_{\bar{\rho}}^{\infty} \left[\frac{ e^{-\frac{x}{P}}}{P (\epsilon x + 1)} + \frac{\epsilon e^{-\frac{x}{P}}}{(\epsilon x + 1)^2}\right] \frac { \alpha^2 x } {{\left(x A^2 + \frac{1}{\epsilon }\right)\left(x B^2 + \frac{1}{\epsilon }\right)}} {\rm d}x}_{=I_{2, 2, 1}} \nonumber\\ & + \underbrace{\frac{1}{\epsilon } \int_{{\bar{\rho}}}^{\infty} \left[\frac{e^{-\frac{x}{P}}}{P (\epsilon x + 1)} + \frac{\epsilon e^{-\frac{x}{P}}}{(\epsilon x + 1)^2}\right] \frac { 2 \alpha \bar{\rho} } {{\left(x A^2 + \frac{1}{\epsilon }\right)\left(x B^2 + \frac{1}{\epsilon }\right)}} {\rm d}x}_{=I_{2, 2, 2}}. \end{align} $I_{2, 2, 1}$ is bounded by \begin{align} \label{Final_I_1221} I_{2, 2, 1} & = \frac{1}{\epsilon } \int_{\bar{\rho}}^{\infty} \frac{ {e^{-\frac{x}{P}}}}{P\underbrace{ (\epsilon x + 1)}_{\geq \epsilon x}} \frac { \alpha^2 x } {\underbrace{\left(x A^2 + \frac{1}{\epsilon }\right)\left(x B^2 + \frac{1}{\epsilon }\right)}_{\geq \frac{1}{\epsilon^2}}} {\rm d}x + \frac{1}{\epsilon } \int_{\bar{\rho}}^{\infty} \frac{\epsilon \overbrace{e^{-\frac{x}{P}}}^{\leq 1}}{(\epsilon x + 1)^2} \frac { \alpha^2 x } {{\left(x A^2 + \frac{1}{\epsilon }\right)\left(x B^2 + \frac{1}{\epsilon }\right)}} {\rm d}x \nonumber\\ & \leq \frac{1}{\epsilon } \int_{0}^{\infty} \frac{ e^{-\frac{x}{P}}}{P \epsilon x } \frac { \alpha^2 x } {{\frac{1}{\epsilon^2}}} {\rm d}x + \frac{1}{\epsilon } \int_{0}^{\infty} \frac{\epsilon }{(\epsilon x + 1)^2} \frac { \alpha^2 x } {{\left(x A^2 + \frac{1}{\epsilon }\right)\left(x B^2 + \frac{1}{\epsilon }\right)}} {\rm d}x \nonumber\\ & = {\alpha^2 \int_{0}^{\infty} \frac{e^{-\frac{x}{P}}}{P } {\rm d}x } + {\alpha^2 \int_{0}^{\infty} \frac{1}{(\epsilon x + 1)^2} \frac { x } {{\left(x A^2 + \frac{1}{\epsilon}\right)\left(x \underbrace{B^2}_{\geq \alpha^2} + \frac{1}{\epsilon}\right)}} {\rm d}x } \nonumber\\ & \leq \alpha ^2 + \alpha^2 \int_{0}^{\infty} \frac{1}{(\epsilon x + 1)^2} \frac { x } {{\left(x \left(\frac{\bar{\rho}}{x}\right)^2 + \frac{1}{\epsilon}\right)\left(x \alpha ^2 + \frac{1}{\epsilon}\right)}} {\rm d}x \nonumber\\ & = \alpha^2 + \frac{1}{\epsilon} \int_{0}^{\infty} \frac{1}{\left( x + \frac{1}{\epsilon}\right)^2} \frac { x^2 } {{\left(x + \epsilon \bar{\rho}^2\right)\left(x + \frac{1}{\epsilon \alpha^2}\right)}} {\rm d}x \nonumber\\ & \leq \alpha^2 + \frac{1}{\epsilon} \int_{0}^{\infty} \frac { 1 } {{\left( x + \frac{1}{\epsilon}\right)\left(x + \frac{1}{\epsilon \alpha^2}\right)}} {\rm d}x = \alpha^2 + \frac {\alpha^2 \log\frac{1}{\alpha^2}} {1 - \alpha^2} \leq\frac{\alpha}{2} + \frac{\alpha^2 \left(\frac{1}{\alpha^2}\right)^{\frac{1}{2}}}{\frac{1}{2}\left(1 - \frac{1}{4}\right)} =C_5 \alpha, \end{align} where $C_5 = \frac{7}{8}$. The last inequality is because $\alpha \leq \frac{1}{2}$ and $\log x \leq 2 {x^{\frac{1}{2}}}$ for $x>0$. 
The upper bound of $I_{2, 2, 2}$ is derived as \begin{align} \label{I_1222} I_{2, 2, 2} & = \frac{1}{\epsilon } \int_{{\bar{\rho}}}^{\infty} \frac{e^{-\frac{x}{P}}}{P \underbrace{(\epsilon x + 1)}_{\geq \epsilon x}} \frac { 2 \alpha \bar{\rho} } {\underbrace{\left(x A^2 + \frac{1}{\epsilon }\right)\left(x B^2 + \frac{1}{\epsilon }\right)}_{\geq \frac{1}{\epsilon^2}}} {\rm d}x + \frac{1}{\epsilon } \int_{{\bar{\rho}}}^{\infty} \frac{\epsilon \overbrace{e^{-\frac{x}{P}}}^{\leq 1}}{\underbrace{(\epsilon x + 1)^2}_{\geq \epsilon^2 x^2}} \frac { 2 \alpha \bar{\rho} } {\underbrace{\left(x A^2 + \frac{1}{\epsilon }\right)\left(x B^2 + \frac{1}{\epsilon }\right)}_{\geq \frac{1}{\epsilon^2}}} {\rm d}x \nonumber\\ & \leq \frac{1}{\epsilon } \int_{{\bar{\rho}}}^{\infty} \frac { e^{-\frac{x}{P}} \times 2 \alpha \bar{\rho} } {P \times \frac{x}{\epsilon}} {\rm d}x + \frac{1}{\epsilon } \int_{{\bar{\rho}}}^{\infty} \frac { \epsilon \times 2 \alpha \bar{\rho} } {x^2 } {\rm d}x = \frac{2 \alpha \bar{\rho}}{P} \int_{{\bar{\rho}}}^{\infty} \frac { e^{-\frac{x}{P}} } {{ x}} {\rm d}x + 2 \alpha \bar{\rho} \int_{{\bar{\rho}}}^{\infty} \frac {1} {{x^2}} {\rm d}x \nonumber\\ & = \frac{2 \alpha \bar{\rho}}{P} \int_{\frac{\bar{\rho}}{P}}^{\infty} \frac { e^{-z} } {{ z}} {\rm d}z + 2 \alpha \leq \left[{2 e^{-\frac{\bar{\rho}}{P}}}\frac{ \log\left(1 + \frac{P}{\bar{\rho}}\right)}{\frac{P}{\bar{\rho}}}+2\right] \alpha \leq C_6 \alpha, \end{align} where $C_6 = 4$. After substituting \eqref{Final_I_1221} and \eqref{I_1222} into \eqref{I_122_OUT}, $I_{ 2, 2} \leq C_7 \alpha$, where $C_7 = C_5 + C_6$. Combined with \eqref{I_121_OUT}, \eqref{I_11_OUT}, \eqref{sum_I_2} and \eqref{I_11_OUT}, $I_{2} \leq C_8 \alpha$ and $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) - \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt} \leq 2(I_{1} + I_{2}) \leq C_9 \alpha$ when $M \geq 2$, where $C_8 = C_4 + C_7$ and $C_9 = 2 \left(C_3 + C_8\right)$. Letting $C_1 = \max\{C_2, C_9\}$, $\textmd{OUT} \left(\tilde{\textit{GQ}}_{{\sf mr}, {\sf it}}\right) - \textmd{OUT}_{{\sf mr}, {\sf it}}^{\sf opt} \leq \frac{C_1}{M}$ for any $M \in \mathtt{N} - \{0\}$. The upper bound on the average feedback rate of $\textit{DQ}_{{\sf mr}, {\sf it}}$ is derived as $\textmd{FR}\left(\textit{DQ}_{{\sf mr}, {\sf it}}\right) \leq 1 + 2\left\lceil \log_2\left(M + 1\right) \right\rceil \leq 2\log_2\left(M + 1\right) + 3$, which completes the proof. \end{IEEEproof}
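As a concrete reference for the construction used in this proof, the benchmark global quantizer $\textit{GQ}_{{\sf mr}, {\sf it}}$ over the uniform codebook $\mathcal{C}_{{\sf unif}}$ can be realized by the brute-force search sketched below (our own illustration; it requires full knowledge of ${\bf H}$ and is therefore an analysis device rather than a feedback scheme).

\begin{verbatim}
from math import log2

def mr_it(p1, p2, H11, H12, H21, H22, P):
    # Minimum rate under interference transmission with power pair (p1, p2).
    return min(log2(1 + P * p1 * H11 / (P * p2 * H21 + 1)),
               log2(1 + P * p2 * H22 / (P * p1 * H12 + 1)))

def gq_mr_it(H11, H12, H21, H22, P, M):
    # Exhaustive search over the uniform codebook C_unif defined in this proof.
    codebook = ([(1.0, 1.0)]
                + [(1.0, m / M) for m in range(1, M)]
                + [(m / M, 1.0) for m in range(1, M)])
    return max(codebook, key=lambda c: mr_it(c[0], c[1], H11, H12, H21, H22, P))
\end{verbatim}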
\section{Introduction} \label{secIntroduction} A {\em Fano variety} is a smooth projective variety whose anti-canonical bundle is ample. It was proved by Koll\'{a}r-Miyaoka-Mori \cite{KMM} that there are finitely many deformation classes of Fano varieties in each dimension. The complete classification is known up to dimension three by Iskovskih and Mori-Mukai \cite{I1, I2, MM}, and up to dimension five in the toric Fano case by Batyrev and Kreuzer-Nill \cite{Ba, KN}. A {\em monotone} symplectic manifold is a symplectic analogue of a Fano variety in the sense that $\langle c_1(TM), [\Sigma] \rangle > 0$ for every symplectic surface $\Sigma$. In low dimensions, the monotonicity of $\omega$ implies that $(M,\omega)$ is symplectomorphic to some K\"{a}hler manifold. In particular, in dimension four, it was proved by Ohta-Ono \cite{OO2} that any closed monotone symplectic four-manifold is diffeomorphic to a del Pezzo surface (and hence Fano by the uniqueness of a symplectic structure on a rational surface proved by McDuff \cite{McD3}). On the other hand, Fine-Panov \cite{FP} showed that a monotone symplectic manifold need not be K\"{a}hler in general. More precisely, they constructed a twelve-dimensional closed monotone symplectic manifold whose fundamental group is not a K\"{a}hler group. We note that the existence of a closed monotone symplectic non-K\"{a}hler manifold is still unknown in dimensions 6, 8, and 10. In a series of papers, the author deals with the following conjecture. \begin{conjecture}\cite[Conjecture 1.1]{LinP}\cite[Conjecture 1.4]{FP2}\label{conjecture_main} Let $(M,\omega)$ be a six-dimensional closed monotone symplectic manifold equipped with an effective Hamiltonian circle action. Then $(M,\omega)$ is $S^1$-equivariantly symplectomorphic to some K\"{a}hler manifold $(X,\omega_X, J)$ with a certain holomorphic Hamiltonian $S^1$-action. \end{conjecture} In the previous work \cite{Cho}, the author proved that Conjecture \ref{conjecture_main} holds under the assumptions that the action is {\em semifree}\footnote{An $S^1$-action is called {\em semifree} if it is free outside the fixed point set.} and at least one of the extremal fixed components is an {\em isolated point}. Indeed, there are 18 types of such manifolds; their algebro-geometric descriptions (in the sense of Mori-Mukai \cite{MM}) and their fixed point data are given in \cite[Section 6,7,8]{Cho} and \cite[Table 9.1]{Cho}, respectively. For the complete classification of semifree $S^1$-actions, it remains to deal with the case where every extremal fixed component is non-isolated. In this paper, we prove the following. \begin{theorem}\label{theorem_main} Let $(M,\omega)$ be a six-dimensional closed monotone symplectic manifold equipped with a semifree Hamiltonian circle action. Suppose that the maximal and the minimal fixed component are both two-dimensional. Then $(M,\omega)$ is $S^1$-equivariantly symplectomorphic to some K\"{a}hler manifold with a certain holomorphic Hamiltonian circle action. In fact, there are 21 types of such manifolds up to $S^1$-equivariant symplectomorphism. \end{theorem} \subsection{Summary of the classification} \label{ssecSummaryOfClassification} Figure \ref{figure_summary} (except for {\bf (II-2.2)} and {\bf (III-3)}) illustrates all possible moment map images of a six-dimensional closed monotone symplectic manifold with a Hamiltonian torus action which induces a semifree circle action with two-dimensional extremal fixed components.
\begin{figure}[H] \scalebox{0.6}{\input{figure_summary.pdf_tex}} \caption{\label{figure_summary} Semifree $S^1$-Fano 3-folds with two-dimensional extremal fixed components} \end{figure} \noindent Note that \begin{itemize} \item red edges are images of fixed spheres, \item red dots are images of isolated fixed points. \end{itemize} The two exceptional cases {\bf (II-2.2)} and {\bf (III-3)} are {\em conceptual images}, which depict blow-ups of a complexity-one variety and of a complexity-zero (toric) variety along some $S^1$-invariant sphere, respectively. Later, one can see that \begin{enumerate} \item {\bf (I-1), (II-1.2), (II-1.3), (II-2.1), (III.1), (III.2), (IV-1-1.1), (IV-1-1.2), (IV-1-2), (IV-2-2.1), (IV-2-3), (IV-2-4), (IV-2-5), (IV-2-6)} are toric, \item {\bf (II-1.1), (IV-1-1.3), (IV-2-1.1), (IV-2-1.2), (IV-2-2.2)} are of complexity one, and \item {\bf (II-2.2), (III-3)} are of complexity two, \end{enumerate} where the {\em complexity} of a variety $X$ is by definition half the minimal codimension of a (possibly improper) toric subvariety of $X$. A more detailed description of the manifolds and the actions on them can be found in Sections \ref{secCaseIMathrmCritMathringHEmptyset}, \ref{secCaseIIMathrmCritMathringH}, \ref{secCaseIIIMathrmCritMathringH11}, \ref{secCaseIVMathrmCritMathringH11}. See also Table \ref{table_list}. \subsection{Outline of Proof of Theorem \ref{theorem_main}} \label{ssecOutlineOfProofOfTheoremRefTheoremmain} The strategy of the proof of Theorem \ref{theorem_main} is essentially the same as the one used in \cite{Cho}. The main difference from \cite{Cho} is that the normal bundle of an extremal fixed component can be {\em arbitrary}, while the normal bundle of an isolated extremum is always trivial and isomorphic to $\C^3$. By careful analysis of the geometry of reduced spaces, we may overcome the difficulty and obtain a full list of (topological) fixed point data as given in Table \ref{table_list}, which leads to {\em symplectic rigidity}\footnote{See Section \ref{secFixedPointData} for the definition.} of reduced spaces. This fact enables us to utilize the following theorem. \begin{theorem}\cite[Theorem 1.5]{G}\label{theorem_Gonzalez} Let $(M,\omega)$ be a six-dimensional closed semifree Hamiltonian $S^1$-manifold. Suppose that every reduced space is symplectically rigid. Then $(M,\omega)$ is determined by its fixed point data up to $S^1$-equivariant symplectomorphism. \end{theorem} Here, by the fixed point data of $(M,\omega)$ we mean the collection of the {\em symplectic reduction\footnote{A reduced space at a critical level is in general neither a smooth manifold nor an orbifold. However, if $\dim M = 6$ and the action is semifree, then a symplectic reduction at any (critical) level is a smooth manifold with the induced symplectic form. See Proposition \ref{proposition_GS}.} at each critical level} together with the {\em information of the critical submanifolds} (equivalently, the fixed components) as embedded symplectic submanifolds of the reduced spaces. (See Definition \ref{definition_fixed_point_data} or \cite[Definition 1.2]{G}.) We divide the proof of Theorem \ref{theorem_main} into three steps: \begin{itemize} \item ({\bf Step 1}) Classify all {\em topological fixed point data}\footnote{A {\em topological fixed point data}, or TFD for short, is a topological analogue of a fixed point data in the sense that it records ``homology classes'', not embeddings themselves, of fixed components in reduced spaces.}.
In this process, we obtain a complete list of all topological fixed point data as described in Table \ref{table_list}. It then follows that every reduced space is diffeomorphic to one of the following manifolds: $\p^1 \times \p^1$ or $X_k$, the $k$-fold blow-up of $\p^2$, for $1 \leq k \leq 4$; these spaces are known to be symplectically rigid (see Section \ref{secMainTheorem}). \item ({\bf Step 2}) Show that each topological fixed point data determines a unique fixed point data. \item ({\bf Step 3}) Show that for each topological fixed point data given in Table \ref{table_list}, there exists a corresponding smooth Fano variety with a holomorphic semifree Hamiltonian $S^1$-action. \end{itemize} Then the proof of Theorem \ref{theorem_main} follows immediately from Gonzalez's Theorem \ref{theorem_Gonzalez}. This paper is organized as follows. In Section \ref{secBackground}, we set up our notation and introduce theorems about Hamiltonian $S^1$-actions that will be crucially used in the rest of the paper. In Section \ref{secFixedPointData}, we give a rigorous definition of (topological) fixed point data and explain the idea of Gonzalez's theorem \cite[Theorem 1.5]{G}. Then, from Section \ref{secCaseIMathrmCritMathringHEmptyset} to Section \ref{secCaseIVMathrmCritMathringH11}, we classify all topological fixed point data and provide examples of Fano varieties with specific holomorphic $\C^*$-actions for each fixed point data in Table \ref{table_list}. Finally, in Section \ref{secMainTheorem}, we complete the proof of Theorem \ref{theorem_main}. \subsection*{Acknowledgements} The author would like to thank Dmitri Panov for bringing the paper \cite{Z} to his attention. The author would also like to thank Jinhyung Park for helpful comments. This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP; Ministry of Science, ICT \& Future Planning) (NRF-2017R1C1B5018168). \section{Background} \label{secBackground} In this section, we establish our notation and review some basic properties of a semifree Hamiltonian circle action. We also refer to \cite[Section 2,3,4]{Cho} (and the references therein) in which basic materials (including the ABBV localization and the Duistermaat-Heckman theorem) are provided in more detail. Let $S^1 \subset \C^*$ be the unit circle group with the Lie algebra $\frak{t}$ and its dual Lie algebra $\frak{t}^*$. An $S^1$-action on a symplectic manifold $(M,\omega)$ is called {\em Hamiltonian} if there exists a smooth function $H : M \rightarrow [a,b] \subset \R$ (called a {\em moment map}) such that \[ \omega(\underline{X}, \cdot) = dH(\cdot) \] for every $X \in \frak{t}$ where $\underline{X}$ denotes the fundamental vector field on $M$ generated by $X$. Note that $H$ is a perfect Morse-Bott function and has many good properties (e.g., local normal form), see \cite[Chapter 4]{Au} or \cite[Section 2]{Cho}. \begin{notation} We use the following notation. \begin{itemize} \item $M^{S^1}$ : fixed point set (which coincides with the critical point set of $H$). \item $\mathrm{Crit}~H$ : set of critical values of $H$. \item $\mathrm{Crit}~ \mathring{H}$ : set of non-extremal critical values of $H$. \item $Z_{\min} := H^{-1}(a)$, $Z_{\max} := H^{-1}(b)$ : minimal and maximal fixed component. \item $M_t := H^{-1}(t) / S^1$ : reduced space at level $t \in [a,b]$. \item $\omega_t$ : reduced symplectic form on $M_t$.
\item $P_c^{\pm}$ : principal $S^1$-bundle $\pi_{c \pm \epsilon} : H^{-1}(c \pm \epsilon) \rightarrow M_{c \pm \epsilon}$ where $\epsilon > 0$ is sufficiently small. \item $e(P_c^{\pm}) \in H^2(M_{c \pm \epsilon}; \Q)$ : the Euler class of $P_c^{\pm}$. \item $Z_c$ : fixed point set lying on the level set $H^{-1}(c)$. That is, $Z_c = M^{S^1} \cap H^{-1}(c)$. \item $\R[\lambda]$ : the cohomology ring $H^*(BS^1;\R)$, where $-\lambda$ is the Euler class of the universal Hopf bundle $ES^1 \rightarrow BS^1$. \end{itemize} \end{notation} From now on, we assume that the $S^1$-action on $(M,\omega)$ is semifree. \subsection{Topology of reduced spaces} \label{ssecTopologyOfReducedSpaces} In this section, we briefly review how the topology of a reduced space changes when a level set of $H$ passes through a critical level. Note that the `{\em semifree}' condition implies that a reduced space $M_t$ is a smooth manifold for every regular value $t$ of $H$. \begin{proposition}\cite{McD2}\cite{GS}\label{proposition_GS} Let $(M,\omega)$ be a closed semifree Hamiltonian $S^1$-manifold with a moment map $H : M \rightarrow \R$ and let $c \in \R$ be a critical value of $H$. If $Z_c := H^{-1}(c) \cap M^{S^1}$ consists of index-two (co-index two, resp.) fixed components, then $M_c = H^{-1}(c) / S^1$ is smooth and is diffeomorphic to $M_{c-\epsilon}$ ($M_{c+\epsilon}$, resp.). Also, $M_{c+\epsilon}$ is the blow-up (blow-down, resp.) of $M_c$ along $Z_c$. \end{proposition} If $M$ is of dimension six, then the condition of Proposition \ref{proposition_GS} is automatically satisfied so that a reduced space is smooth for every (possibly critical) value of $H$. In fact, Guillemin-Sternberg \cite{GS} states Proposition \ref{proposition_GS} in full generality (i.e., without index assumptions), namely that the reduced spaces are related by a {\em birational equivalence}. See also the paragraph below \cite[Proposition 4.1]{Cho} for a brief survey on this topic. They also describe how the reduced symplectic form $\omega_{c+\epsilon}$ can be obtained from $\omega_{c-\epsilon}$. Recall that the Duistermaat-Heckman theorem \cite{DH} says that \begin{equation}\label{equation_DH} [\omega_r] - [\omega_s] = (s-r)e, \quad r,s \in I \end{equation} where $I$ is an interval consisting of regular values of $H$ and $e \in H^2(M_r; \Z)$ denotes the Euler class of the principal $S^1$-bundle $\pi_r : H^{-1}(r) \rightarrow M_r$. \begin{lemma}\cite[Theorem 13.2]{GS}\label{lemma_Euler_class} Suppose that $Z_c = M^{S^1} \cap H^{-1}(c)$ consists of fixed components $Z_1, \cdots, Z_k$ each of which is of index two. Let $e^{\pm}$ be the Euler classes of principal $S^1$-bundles $\pi_{c \pm \epsilon} : H^{-1}(c \pm \epsilon) \rightarrow M_{c \pm \epsilon}$. Then \[ e^+ = \phi^*(e^-) + E \in H^2(M_{c+\epsilon}; \Z) \] where $\phi : M_{c+\epsilon} \rightarrow M_{c-\epsilon}$ is the blow-down map and $E$ denotes the Poincar\'{e} dual of the exceptional divisor of $\phi$. \end{lemma} It is worth mentioning that if $Z_c$ in Lemma \ref{lemma_Euler_class} is of codimension four in $M$, i.e., $Z_c$ is of codimension two in $M_{c-\epsilon}$, then the blow-up of $M_{c-\epsilon}$ along $Z_c$ is $M_{c-\epsilon}$ itself and the exceptional divisor is $Z_c$, so that we obtain the following. \begin{corollary}\label{corollary_Euler_class_zero_level} Under the same assumptions as in Lemma \ref{lemma_Euler_class}, if $Z_c$ is of codimension four in $M$, then the topology of the reduced space does not change, i.e., $M_{c-\epsilon} \cong M_{c+\epsilon}$.
Moreover, we have \[ e^+ = e^- + \mathrm{PD}(Z_c) \in H^2(M_{c+\epsilon}; \Z). \] \end{corollary} See also \cite[Lemma 5]{McD1} for the case of $\dim M = 6$. \subsection{Equivariant cohomology} \label{ssecEquivariantCohomology} The {\em equivariant cohomology} of $M$ is defined by \[ H^*_{S^1}(M) := H^*(ES^1 \times_{S^1} M). \] It admits a natural $H^*(BS^1)$-module structure induced by the projection map $\pi$: \begin{equation}\label{equation_Mbundle} \begin{array}{ccc} M \times_{S^1} ES^1 & \stackrel{f} \hookleftarrow & M \\[0.3em] \pi \downarrow & & \\[0.3em] BS^1 & & \end{array} \end{equation} where $f$ is the inclusion of $M$ as a fiber. Then the $H^*(BS^1)$-module structure on $H^*_{S^1}(M)$ is given by the map $\pi^*$ such that \[ y \cdot \alpha = \pi^*(y)\cup \alpha \] for $y \in H^*(BS^1)$ and $\alpha \in H^*_{S^1}(M)$. One remarkable fact about the equivariant cohomology of a Hamiltonian $S^1$-manifold is that it is {\em equivariantly formal}. \begin{theorem}\label{theorem_equivariant_formality}\cite{Ki} Let $(M,\omega)$ be a closed symplectic manifold equipped with a Hamiltonian circle action. Then $M$ is equivariantly formal, that is, $H^*_{S^1}(M)$ is a free $H^*(BS^1)$-module so that $$H^*_{S^1}(M) \cong H^*(M) \otimes H^*(BS^1).$$ Equivalently, the map $f^*$ is surjective with kernel $\lambda \cdot H^*_{S^1}(M)$, where $\cdot$ denotes the scalar multiplication of the $H^*(BS^1)$-module structure on $H^*_{S^1}(M)$. \end{theorem} \subsection{Localization theorem} \label{ssecLocalizationTheorem} Thanks to the equivariant formality, for any homogeneous element $\alpha \in H^k_{S^1}(M)$, we may express $\alpha$ as \begin{equation}\label{equation_expression} \alpha = \alpha_k \otimes 1 + \alpha_{k-2} \otimes \lambda + \alpha_{k-4} \otimes \lambda^2 + \cdots \end{equation} where $\alpha_i \in H^i(M)$ for each $i = k, k-2, \cdots$. We then obtain $f^*(\alpha) = \alpha_k$ where $f$ is given in \eqref{equation_Mbundle}. \begin{definition} The \textit{integration along the fiber $M$} is an $H^*(BS^1)$-module homomorphism $\int_M : H^*_{S^1}(M) \rightarrow H^*(BS^1)$ defined by \[ \int_M \alpha = \langle \alpha_k, [M] \rangle \cdot 1 + \langle \alpha_{k-2}, [M] \rangle \cdot \lambda + \cdots \] for every $ \alpha = \alpha_k \otimes 1 + \alpha_{k-2} \otimes \lambda + \alpha_{k-4} \otimes \lambda^2 + \cdots \in H^k_{S^1}(M).$ Here, $[M] \in H_{2n}(M; \Z)$ denotes the fundamental homology class of $M$. \end{definition} Now, let $M^{S^1}$ be the fixed point set of the $S^1$-action on $M$ and let $F \subset M^{S^1}$ be a fixed component. Then the inclusion $i_F : F \hookrightarrow M$ induces a ring homomorphism $$i_F^* : H^*_{S^1}(M) \rightarrow H^*_{S^1}(F) \cong H^*(F) \otimes H^*(BS^1).$$ For any $\alpha \in H^*_{S^1}(M)$, we call the image $i_F^*(\alpha)$ \textit{the restriction of $\alpha$ to $F$} and denote it by \[ \alpha|_F := i_F^*(\alpha). \] Then we may compute $\int_M \alpha$ concretely by using the following theorem due to Atiyah-Bott \cite{AB} and Berline-Vergne \cite{BV}. \begin{theorem}[ABBV localization]\label{theorem_localization} For any $ \alpha \in H^*_{S^1}(M)$, we have \[ \int_M \alpha = \sum_{F \subset M^{S^1}} \int_F \frac{\alpha|_F}{e^{S^1}(F)} \] where $e^{S^1}(F)$ is the equivariant Euler class of the normal bundle $\nu_F$ of $F$ in $M$. That is, $e^{S^1}(F)$ is the Euler class of the bundle \[ \nu_F \times_{S^1} ES^1 \rightarrow F \times BS^1 \] induced from the projection $\nu_F \times ES^1 \rightarrow F \times ES^1$.
\end{theorem} \subsection{Monotone symplectic manifolds} \label{ssecMonotoneSymplecticManifolds} Now, we assume that $\omega$ is monotone and normalized, i.e., $c_1(TM) = [\omega]$. \begin{definition}\label{definition_balanced} We call a moment map $H : M \rightarrow \R$ {\em balanced} if it satisfies \[ H(Z) = -\Sigma(Z), \quad \quad \Sigma(Z) = ~\text{sum of weights of the $S^1$-action at $Z$} \] for every fixed component $Z \subset M^{S^1}$. \end{definition} Note that there exists a unique balanced moment map. See \cite[Proposition 4.4]{Cho}. The following lemma is immediate from Definition \ref{definition_balanced}. \begin{lemma}\label{lemma_possible_critical_values}\cite[Lemma 5.9]{Cho} Let $(M,\omega)$ be a six-dimensional closed monotone semifree $S^1$-manifold with the balanced moment map $H$. Then all possible critical values of $H$ are $\pm 3, \pm 2, \pm 1$, and $0$. Moreover, any connected component $Z$ of $M^{S^1}$ satisfies one of the followings : \begin{table}[H] \begin{tabular}{|c|c|c|c|} \hline $H(Z)$ & $\dim Z$ & $\mathrm{ind}(Z)$ & $\mathrm{Remark}$ \\ \hline $3$ & $0$ & $6$ & $Z = Z_{\max} = \mathrm{point}$ \\ \hline $2$ & $2$ & $4$ & $Z = Z_{\max} \cong S^2$ \\ \hline $1$ & $4$ & $2$ & $Z = Z_{\max}$ \\ \hline $1$ & $0$ & $4$ & $Z = \mathrm{pt}$ \\ \hline $0$ & $2$ & $2$ & \\ \hline $-1$ & $0$ & $2$ & $Z = \mathrm{pt}$ \\ \hline $-1$ & $4$ & $0$ & $Z = Z_{\min}$ \\ \hline $-2$ & $2$ & $0$ & $Z = Z_{\min} \cong S^2$ \\ \hline $-3$ & $0$ & $0$ & $Z = Z_{\min} = \mathrm{point}$ \\ \hline \end{tabular} \vs{0.2cm} \caption{\label{table_fixed} List of possible fixed components} \end{table} \end{lemma} Another important fact on the balanced moment map is that the monotonicity property of the reduced symplectic form $\omega_0$ at level zero is inherited from $\omega$. \begin{proposition}\label{proposition_monotonicity_preserved_under_reduction}\cite[Proposition 4.8, Remark 4.9]{Cho} Let $(M,\omega)$ be a semifree Hamiltonian $S^1$-manifold with $c_1(TM) = [\omega]$ and $H$ be the balanced moment map. If the symplectic reduction is defined at level zero, then $(M_0, \omega_0)$ is a monotone symplectic manifold with $[\omega_0] = c_1(TM_0)$ \end{proposition} By Proposition \ref{proposition_monotonicity_preserved_under_reduction} and Ohta-Ono's classification \cite{OO2} of closed monotone symplectic four manifolds, we obtain the following. \begin{corollary}\label{corollary_monotone_reduced_space} Let $(M,\omega)$ be a six-dimensional closed monotone semifree $S^1$-manifold with the balanced moment map $H$. Then $M_0$ is diffeomorphic to a del Pezzo surface, i.e., \[ M_0 \cong \p^2, \p^1 \times \p^1, ~\text{or}~X_k \quad (k \leq 8) \] where $X_k$ denotes the $k$-points blow-up of $\p^2$. \end{corollary} \section{Fixed point data} \label{secFixedPointData} In \cite{Li2, Li3}, Li introduced the notion of a {\em fixed point data} (or FD shortly) for some particular semifree Hamiltonian $S^1$-manifold and Gonzalez \cite{G} defined it in more general context. Also, the author \cite{Cho} defined a {\em topological fixed point data} (or TFD for short). In this section, we briefly overview the notions {\em FD} and {TFD} of a closed semifree Hamiltonian $S^1$-manifold and explain how the fixed poitn data determines a manifold up to $S^1$-equivariant symplectomorphism. We also refer to \cite[Section 5]{Cho} for more detail. 
\subsection{Slices and Gluing} \label{ssecSlicesAndGluing} Any closed Hamiltonian $S^1$-manifold can be decomposed into {\em slices} and, conversely, a family of slices with certain compatible conditions determines a closed Hamiltonian $S^1$-manifold . More precisely, let $(M,\omega)$ be a Hamiltonian $S^1$-manifold with a moment map $H : M \rightarrow I \subset \R$. Assume that the critical values of $H$ are given by \[ \min H = c_1 < \cdots < c_k = \max H. \] Then, $M$ can be decomposed into a union of Hamiltonian $S^1$-manifolds $\{ (N_j, \omega_j) \}_{1 \leq j \leq 2k-1}$ with boundaries : \[ N^{2j-1} = H^{-1}(\underbrace{[c_j - \epsilon, c_j + \epsilon]}_{ =: I_{2j-1}}), \quad N^{2j} = H^{-1}(\underbrace{[c_j + \epsilon, c_{j+1} - \epsilon]}_{=: I_{2j}}) \] where $\epsilon > 0$ is chosen to be sufficiently small so that $I_{2j-1}$ contains exactly one critical value $c_j$ of $H$ for each $j$. We call those $N^{2j-1}$'s and $N^{2j}$'s {\em critical and regular slices}, respectively. \begin{definition}\label{definition_regular_slice}\cite{G} \cite[Definition 5.1, 5.2]{Cho} \begin{enumerate} \item A {\em regular slice} $(N,\sigma,K, I)$ is a free Hamiltonian $S^1$-manifold $(N, \sigma)$ with boundary and $K : N \rightarrow I$ is a surjective proper moment map where $I = [a,b]$ is a closed interval. \item A {\em critical slice} $(N, \sigma, K, I)$ is a semifree Hamiltonian $S^1$-manifold $(N, \sigma)$ with boundary together with a surjective proper moment map $K : N \rightarrow I = [a,b]$ such that there exists exactly one critical value $c \in [a,b]$ satisfying one the followings : \begin{itemize} \item (interior slice) $c \in (a,b)$, \item (maximal slice) $c = b$ and $K^{-1}(c)$ is a critical submanifold, \item (minimal slice) $c = a$ and $K^{-1}(c)$ is a critical submanifold. \end{itemize} \item An interior critical slice is called {\em simple} if every fixed component in $K^{-1}(c)$ has the same Morse-Bott index. \end{enumerate} \end{definition} Two slices $(N_1,\sigma_1,K_1, I_1)$ and $(N_2,\sigma_2,K_2, I_2)$ are said to be {\em isomorphic} if there exists an $S^1$-equivariant symplectomorphism $\phi : (N_1, \sigma_1) \rightarrow (N_2, \sigma_2)$ satisfying \[ \xymatrix{N_1 \ar[r]^{\phi} \ar[d]_{K_1} & N_2 \ar[d]^{K_2} \\ I_1 \ar[r]^{ + k} & I_2} \] where $+k$ denotes the translation map as the addition of some constant $k \in \R$. The following lemma tells us when two slices can be glued along their boundaries. \begin{lemma}\cite[Lemma 13]{Li3}\cite[Lemma 1.2]{McD2}\label{lemma_gluing} Two slices $(N_1, \sigma_1, K_1, [a,b])$ and $(N_2, \sigma_2, K_2, [b,c])$ can be glued along $K_i^{-1}(b)$ if there exists a diffeomorphism \[ \phi : (N_1)_b \rightarrow (N_2)_b, \quad \quad (N_i)_b := K_i^{-1}(b) / S^1 \] such that \begin{itemize} \item $\phi^* (\sigma_2)_b = (\sigma_1)_b$, and \item $\phi^* (e_2)_b = (e_1)_b$ \end{itemize} where $(\sigma_i)_b$ and $(e_i)_b$ denote the reduced symplectic form on $(N_i)_b$ and the Euler class of the principal $S^1$-bundle $K_i^{-1}(b) \rightarrow (N_i)_b$, respectively. \end{lemma} Now, suppose that $\frak{S} = \{ (N_i, \sigma_i, K_i, [a_i,b_i]) \}$ be a finite family of slices with gluing data \[ \Phi := \{\phi_i : (N_i)_{b_i} \rightarrow (N_{i+1})_{a_{i+1}}\} \] satisfying the conditions in Lemma \ref{lemma_gluing}. Then $(\frak{S}, \Phi)$ determines a closed Hamiltonian $S^1$-manifold denoted by $M(\frak{S}, \Phi)$. 
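To illustrate the decomposition in the simplest possible (two-dimensional) case, consider $\p^1$ with the standard rotation action $t \cdot [z_0, z_1] = [tz_0, z_1]$ and a moment map $H([z_0, z_1]) = \frac{|z_0|^2}{|z_0|^2 + |z_1|^2}$ (up to an additive constant), whose critical values are $0$ and $1$. The corresponding decomposition consists of a minimal critical slice $H^{-1}([0, \epsilon])$, one regular slice $H^{-1}([\epsilon, 1-\epsilon])$, and a maximal critical slice $H^{-1}([1-\epsilon, 1])$; every reduced space is a single point, so the compatibility conditions of Lemma \ref{lemma_gluing} are vacuous in this toy example.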
Note that $M(\frak{S}, \Phi)$ may not be $S^1$-equivariantly symplectomorphic (nor even diffeomorphic) to $M(\frak{S}, \Phi')$ for a different choice of gluing data $\Phi'$. \subsection{Fixed point data} \label{ssecFixedPointData} Now, consider a six-dimensional closed monotone symplectic manifold $(M,\omega)$ equipped with an effective semifree Hamiltonian $S^1$-action. We further assume that $c_1(TM) = [\omega]$ so that there exists a (unique) balanced moment map $H : M \rightarrow \R$ for the action defined in Definition \ref{definition_balanced}. \begin{definition}\cite[Definition 1.2]{G}\label{definition_fixed_point_data} A {\em fixed point data} (or {\em FD} shortly) of $(M,\omega, H)$, denoted by $\frak{F}(M, \omega, H)$, is a collection \[ \frak{F} (M, \omega, H) := \left\{(M_{c}, \omega_c, Z_c^1, Z_c^2, \cdots, Z_c^{k_c}, e(P_{c}^{\pm})) ~|~c \in \mathrm{Crit} ~H \right\} \] which consists of the information below. \begin{itemize} \item $(M_c, \omega_c)$\footnote{$M_c$ is smooth manifold under the assumption that the action is semifree and the dimension of $M$ is six. See Proposition \ref{proposition_GS}.} is the symplectic reduction at level $c$. \item $k_c$ is the number of fixed components on the level $c$. \item Each $Z_c^i$ is a connected fixed component and hence a symplectic submanifold of $(M_c, \omega_c)$ via the embedding \[ Z_c^i \hookrightarrow H^{-1}(c) \rightarrow H^{-1}(c) / S^1 = M_c. \] (This information contains a normal bundle of $Z_c^i$ in $M_c$.) \item The Euler class $e(P_c^{\pm})$ of principal $S^1$-bundles $H^{-1}(c \pm \epsilon) \rightarrow M_{c \pm \epsilon}$. \end{itemize} \end{definition} \begin{definition}\cite[Definition 2.13]{McD2}\cite[Definition 1.4]{G}\label{definition_rigid} A manifold $B$ is said to be {\em symplectically rigid} if \begin{itemize} \item (uniqueness) any two cohomologous symplectic forms are diffeomorphic, \item (deformation implies isotopy) every path $\omega_t$ ($t \in [0,1]$) of symplectic forms such that $[\omega_0] = [\omega_1]$ can be homotoped through families of symplectic forms with the fixed endpoints $\omega_0$ and $\omega_1$ to an isotopy, that is, a path $\omega_t'$ such that $[\omega_t']$ is constant in $H^2(B)$. \item For every symplectic form $\omega$ on $B$, the group $\text{Symp}(B,\omega)$ of symplectomorphisms that act trivially on $H_*(B;\Z)$ is path-connected. \end{itemize} \end{definition} As we have seen in Section \ref{ssecSlicesAndGluing}, the $S^1$-equivariant symplectomorphism class of a Hamiltonian $S^1$-manifold $M(\frak{S}, \Phi)$ constructed from a given family $\frak{S}$ of slices depends on the choice of a gluing data $\Phi$. The following theorem due to Gonzalez states that $M(\frak{S}, \Phi)$ only depends on the fixed point data of the action on $M(\frak{S}, \Phi)$ if every reduced space is symplectically rigid. \begin{theorem}\cite[Theorem 1.5]{G}\label{theorem_Gonzalez_5} Let $(M,\omega)$ be a six-dimensional closed semifree Hamiltonian $S^1$-manifold such that every critical level is simple\footnote{ A critical level is called {\em simple} if every fixed component in the level set has a common Morse-Bott index.}. Suppose further that every reduced space is symplectically rigid. Then $(M,\omega)$ is determined by its fixed point data up to $S^1$-equivariant symplectomorphism. 
\end{theorem} \begin{remark}\label{remark_Gonzalez_5} Note that Theorem \ref{theorem_Gonzalez_5} is a six-dimensional version of the original statement of \cite[Theorem 1.5]{G}; we may drop the ``(co)-index two'' condition in his original statement because every non-extremal fixed component has index two or co-index two in the six-dimensional case. In addition, if $\omega$ is monotone, then the ``simpleness'' condition is automatically satisfied by Lemma \ref{lemma_possible_critical_values}. \end{remark} To prove Theorem \ref{theorem_main}, we need to \begin{itemize} \item classify all possible fixed point data $\frak{F}$, \item show the existence of the corresponding Hamiltonian $S^1$-manifold having the fixed point data $\frak{F}$, \item show that every reduced space is symplectically rigid. \end{itemize} However, the classification of fixed point data is extremely difficult as it involves the classification problem of all symplectic embeddings of each fixed component of $Z_c$ into a reduced space $(M_c, \omega_c)$. Thus, instead of the fixed point data, we introduce the notion of a ``{\em topological fixed point data}'', which is a topological analogue of the fixed point data, as follows. \begin{definition}\label{definition_topological_fixed_point_data}\cite[Definition 5.7]{Cho} Let $(M,\omega)$ be a six-dimensional closed semifree Hamiltonian $S^1$-manifold equipped with a moment map $H : M \rightarrow I$ such that all critical level sets are simple. A {\em topological fixed point data} (or {\em TFD} for short) of $(M,\omega, H)$, denoted by $\frak{F}_{\text{top}}(M, \omega, H)$, is defined as a collection \[ \frak{F}_{\text{top}}(M, \omega, H) := \left\{(M_{c}, [\omega_c], \mathrm{PD}(Z_c^1), \mathrm{PD}(Z_c^2), \cdots, \mathrm{PD}(Z_c^{k_c}), e(P_c^{\pm}) ) ~|~c \in \mathrm{Crit} ~H \right\} \] where \begin{itemize} \item $(M_c, \omega_c)$ is the reduced symplectic manifold at level $c$, \item $k_c$ is the number of fixed components at level $c$, \item each $Z_c^i$ is a connected fixed component lying on the level $c$ and $\mathrm{PD}(Z_c^i) \in H^*(M_c)$ denotes the Poincar\'{e} dual class of the image of the embedding \[ Z_c^i \hookrightarrow H^{-1}(c) \rightarrow H^{-1}(c) / S^1 = M_c. \] \item $e(P_c^{\pm})$ is the Euler class of the principal $S^1$-bundle $H^{-1}(c \pm \epsilon) \rightarrow M_{c \pm \epsilon}$. \end{itemize} \end{definition} The classification of TFD is much easier than the classification of FD, as we will see later. Indeed, we will classify all possible TFD for a semifree Hamiltonian circle action on a six-dimensional monotone symplectic manifold. (See Table \ref{table_list} for the full list of TFD.) On the other hand, there is one more critical issue. In general, it is not obvious whether a TFD determines a FD uniquely. Namely, for two candidates $Z_c^1$ and $Z_c^2$ of a fixed component in $(M_c,\omega_c)$ representing the same homology class, the existence of a symplectomorphism (or even a diffeomorphism) \[ \psi : (M_c, \omega_c) \rightarrow (M_c, \omega_c), \quad \quad \psi(Z_c^1) = Z_c^2 \] is not guaranteed. In Section \ref{secMainTheorem}, we will show that each TFD determines the FD uniquely in our situation, and therefore the TFD becomes a complete invariant for a semifree Hamiltonian circle action on a six-dimensional closed monotone symplectic manifold.
\section{Reduced spaces near the extremum} \label{secReducedSpacesNearTheExtremum} This section is devoted to collect some information of a reduced space near an extremum such as a cohomology ring structure and the symplectic area. These materials would be used in the rest of the paper. Let $(M,\omega)$ be a six-dimensional closed monotone semifree Hamiltonian $S^1$-manifold with the balanced moment map $H$. We assume that all extremal fixed components are two-dimensional, i.e., $H(Z_{\min}) = -2$ and $H(Z_{\max}) = 2$. Thanks to Li's theorem \cite[Theorem 0.1]{Li1}, we have \[ \pi_1(Z_{\max}) \cong \pi_1(Z_{\min}) \cong \pi_1(M_0), \] which implies that $Z_{\max} \cong Z_{\min} \cong S^2$ as in Lemma \ref{lemma_possible_critical_values} (since $M_0$ is simply connected.) Observe that the only possible non-extremal critical values are $\{\pm1, 0\}$ and each non-extremal fixed component $Z$ is either \[ \begin{cases} \text{$Z$ = pt} \hspace{1cm} \text{if $H(Z) = \pm 1$, \quad or} \\ \text{$\dim Z = 2$} \quad \text{if $H(Z) = 0$.} \end{cases} \] by Lemma \ref{lemma_possible_critical_values}. Moreover, since the moment map $H$ is a perfect Morse-Bott function, we may easily deduce that \[ |Z_1| = |Z_{-1}| \] by the Poincar\'{e} duality. We follow Li's notations in \cite{Li2} and \cite{Li3}. For a sufficiently small $\epsilon > 0$, the level set $H^{-1}(-2+\epsilon)$ becomes an $S^3$-bundle over $Z_{\min}$ with the induced fiberwise free $S^1$-action. (This can be shown using the {\em equivariant Darboux theorem} and the explicit formula of the moment map, see \cite[Theorem 2.1, Section 4.1]{Cho}.) Thus the reduced space $M_{-2 + \epsilon}$ near $Z_{\min}$ is an $S^2$-bundle over $S^2$ and hence diffeomorphic to either $S^2 \times S^2$ or a Hirzebruch surface which we denote by $E_{S^2}$. When $M_{-2 + \epsilon} \cong S^2 \times S^2$, regarded as a trivial $S^2$-bundle over $Z_{\min} \cong S^2$, let $x$ and $y$ in $H^2(M_{-2 + \epsilon} ;\Z)$ be the dual classes of the fiber $S^2$ and the base $Z_{\min}$, respectively. Then \[ \langle xy, [M_{-2 + \epsilon}] \rangle = 1, \quad \langle x^2, [M_{-2 + \epsilon}] \rangle = \langle y^2, [M_{-2 + \epsilon}] \rangle = 0. \] Similarly, when $M_{-2 + \epsilon} \cong E_{S^2}$ regarded as a non-trivial $S^2$-bundle over $Z_{\min}$, let $x$ and $y$ be the dual of the fiber $S^2$ and the base respectively so that \[ \langle xy, [M_{-2 + \epsilon}] \rangle = 1, \quad \langle x^2, [M_{-2 + \epsilon}] \rangle = 0, \quad \langle y^2, [M_{-2 + \epsilon}] \rangle = -1. \] In this notation, we have $c_1(T(S^2 \times S^2)) = 2x + 2y$ and $c_1(TE_{S^2}) = 3x+2y$, respectively. The following lemma describes the relation between the Euler class of a level set (as a principal $S^1$-bundle) near the extremal fixed components $Z_{\min}$ and $Z_{\max}$ of the action and the first Chern numbers of the normal bundles of them. \begin{lemma}\cite[Lemma 6, 7]{Li2}\label{lemma_Euler_extremum} Let $b_{\min}$ (respectively $b_{\max}$) be the first Chern number of the normal bundle of $Z_{\min}$ (respectively $Z_{\max}$) in $M$. Also, we let $x$ and $y$ be the dual classes of the fiber and the base of the bundle $M_{-2+\epsilon} \rightarrow Z_{\min}$ (respectively $M_{2 - \epsilon} \rightarrow Z_{\max}$). Then $M_{-2 + \epsilon}$ (respectively $M_{2 - \epsilon}$) is a trivial $S^2$-bundle if and only if $b_{\min} = 2k$ (respectively $b_{\max} = 2k$), and it is diffeomorphic to $E_{S^2}$ if and only if $b_{\min} = 2k+1$ (respectively $b_{\max} = 2k+1$) for some $k \in \Z$. 
In either case, we have \[ e(P_{-2}^+) = kx - y \quad \quad \left(\text{respectively} \hs{0.1cm} e(P_2^-) = -kx + y \right) \] where $e(P_t^{\pm})$ denote the Euler class of the principal $S^1$-bundle $\pi_{t \pm \epsilon} : P_t^{\pm} = H^{-1}(t \pm \epsilon) \rightarrow M_{t \pm \epsilon}$. In particular, we have \[ \langle e(P_{-2}^+)^2, [M_{-2 + \epsilon}] \rangle = -b_{\min} \quad \quad \left( \text{respectively} \hs{0.2cm} \langle e(P_2^-)^2, [M_{2 - \epsilon}] \rangle = -b_{\max} \right). \] \end{lemma} The monotonicity\footnote{A symplectic form $\omega$ is {\em monotone} if $c_1(TM) = \lambda [\omega] \in H^2(M; \R)$ for some $\lambda \in \R_{>0}$.} of $\omega$ implies the following. \begin{corollary}\label{corollary_volume_extremum} Let $(M,\omega)$ be a six-dimensional closed semifree Hamiltonian $S^1$-manifold. Suppose that $c_1(TM) = [\omega]$. If the minimal fixed component $Z_{\min}$ (respectively $Z_{\max}$) is diffeomorphic to $S^2$, then \[ b_{\min} \geq -1 \quad \text{(respectively ~$b_{max} \geq -1$)}. \] \end{corollary} \begin{proof} Note that the symplectic volume of $Z_{\min}$ (respectively $Z_{\max}$) is given by \[ \int_{Z_{\min}} \omega = 2 + b_{\min} \quad \quad \left( \text{respectively} \hs{0.1cm} \int_{Z_{\max}} \omega = 2 + b_{\max} \right) \] which follows from the fact that the restriction of the tangent bundle $\left. TM \right|_{Z_\bullet}$ splits into the sum of the tangent bundle and the normal bundle of $Z_{\bullet}$ where $\bullet = \min$ or $\max$. Then the proof is straightforward by the positivity of symplectic area and the fact that $\omega$ is integral. \end{proof} \begin{remark}\label{remark_bminbmax} If we take the new Hamiltonian $S^1$-action ``$*$'' on $M$ by \[ t * p := t^{-1} \cdot p, \quad \quad p \in M, \] then the balanced moment map becomes $-H$ so that the maximal (resp. minimal) fixed component becomes the minimal (resp. maximal) one. Therefore, we only need to classify TFD under the assumption that \begin{equation}\label{equation_assumption} b_{\min} \leq b_{\max}. \end{equation} Then any case with ``$b_{\min} > b_{\max}$'' can be recovered from one in our classification by taking a ``reversed'' $S^1$-action. \end{remark} The following lemma due to McDuff will be useful in the rest sections. \begin{lemma}\label{lemma_list_exceptional}\cite[Section 2]{McD2} Let $X_k$ be the $k$-times simultaneous symplectic blow-up of $\p^2$ with the exceptional divisors $C_1, \cdots, C_k$. We denote by $E_i := \mathrm{PD}(C_i) \in H^2(M_0; \Z)$ the dual classes, called {\em exceptional classes}. Then all possible exceptional classes are listed as follows (modulo permutations of indices) : \[ \begin{array}{l} E_1, u - E_{12}, \quad 2u - E_{12345}, \quad 3u - 2E_1 - E_{234567}, \quad 4u - 2E_{123} - E_{45678} \\ \vs{0.1cm} 5u - 2E_{123456} - E_{78}, \quad 6u - 3E_1 - 2E_{2345678} \\ \end{array} \] Here, $u$ is the positive generator of $H^2(\p^2; \Z)$ and $E_{j \cdots n} := \sum_{i=j}^n E_i$. Furthermore, elements involving $E_i$ do not appear in $X_k$ with $k < i$. \end{lemma} We divide the classification process into four cases: $\mathrm{Crit} ~\mathring{H} = \emptyset, \{0\}, \{-1, 1\},$ and $\{-1,0,1\}$. \\ \section{Case I : $\mathrm{Crit} ~\mathring{H} = \emptyset$} \label{secCaseIMathrmCritMathringHEmptyset} In this section, we classify all TFD in the case where $\mathrm{Crit} \mathring{H} = \emptyset$. Also, for each TFD, we give the corresponding example of a Fano variety with an explicit holomorphic $S^1$-action on it. 
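Throughout this section, every $t \in (-2,2)$ is a regular value of $H$, so all the reduced spaces $M_t$ with $t \in (-2,2)$ may be identified and the Euler class $e(P_{-2}^+)$ does not depend on $t$. Hence the Duistermaat-Heckman theorem \eqref{equation_DH} takes the simple form \[ [\omega_t] = [\omega_0] - t \, e(P_{-2}^+), \quad \quad t \in (-2,2), \] which is the form in which \eqref{equation_DH} will be applied in the proof of Theorem \ref{theorem_I_1} below.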
Note that $M_{-2 + \epsilon} \cong M_0 \cong M_{2 - \epsilon}$. \begin{theorem}\label{theorem_I_1} Let $(M,\omega)$ be a six-dimensional closed monotone semifree Hamiltonian $S^1$-manifold with $c_1(TM) = [\omega]$. Suppose that $\mathrm{Crit} H = \{ 2, -2\}$. Then the only possible topological fixed point data is given by \begin{table}[H] \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $(M_0, [\omega_0])$ & $e(P_{-2}^+)$ &$Z_{-2}$ & $Z_2$ & $b_2$ & $c_1^3$ \\ \hline \hline {\bf (I-1)} & $(S^2 \times S^2, 2x + 2y)$ & $x-y$ &$S^2$ & $S^2$ & $1$ & $64$\\ \hline \end{tabular} \vs{0.5cm} \caption{\label{table_I_1} Topological fixed point data for $\mathrm{Crit} H = \{-2, 2\}$} \end{table} \vs{-0.7cm} \end{theorem} \begin{proof} We first assume that $M_0 \cong S^2 \times S^2$ (so that $b_{\min} = 2k$ for some $k \in \Z$ and $e(P_{-2}^+) = kx - y$ by Lemma \ref{lemma_Euler_extremum}.) Then, Corollary \ref{corollary_volume_extremum} implies that $b_{\min} = 2k \geq -1$, i.e., $k \geq 0$. Using the monotonicity of the reduced space (Proposition \ref{proposition_monotonicity_preserved_under_reduction}) and the Duistermaat-Heckman theorem \eqref{equation_DH}, we obtain \[ [\omega_t] = 2x + 2y - t(kx - y) = (2 - kt)x + (2 + t)y, \quad \quad t \in (-2,2). \] As $\lim_{t \rightarrow 2} \int_{M_t} [\omega_t]^2 = 0$, we get $k=1$ and so $b_{\min} = 2$. Moreover there is a natural identification $H^{-1}(-2 + \epsilon) \cong H^{-1}(2 - \epsilon)$ by a Morse flow of $H$ so that we obtain $e(P_2^-) = e(P_{-2}^+)$ and \[ \langle e(P_{-2}^+)^2, [M_{-2 + \epsilon}] \rangle = \langle e(P_{-2}^+)^2, [M_{2 - \epsilon}] \rangle = -2. \] Therefore $b_{\max} = 2$ by Lemma \ref{lemma_Euler_extremum}. Let $u$ be the positive generator of $H^2(Z_{\min};\Z) = H^2(Z_{\max};\Z)$ so that $u^2 = 0$. The first Chern number can be obtained by applying the localization theorem \ref{theorem_localization} : \[ \begin{array}{ccl}\vs{0.1cm} \ds \int_M c_1^{S^1}(TM)^3 & = & \ds \int_{Z_{\min}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\min}}\right)^3}{e_{Z_{\min}}^{S^1}} + \int_{Z_{\max}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\max}}\right)^3}{e_{Z_{\max}}^{S^1}} \\ \vs{0.1cm} & = & \ds \int_{Z_{\min}} \frac{\left( (2+b_{\min}) u + 2\lambda \right)^3}{b_{\min} u\lambda + \lambda^2} + \int_{Z_{\max}} \frac{\left( (2+b_{\max}) u - 2\lambda \right)^3}{-b_{\max} u\lambda + \lambda^2} \\ \vs{0.1cm} & = & \ds \int_{Z_{\min}} (\lambda - 2u)(48u\lambda^2 + 8\lambda^3) + \int_{Z_{\max}} (\lambda + 2u)(48u\lambda^2 - 8\lambda^3) = 32 + 32 = 64. \end{array} \] See Table \ref{table_I_1}: {\bf (I-1).} It remains to consider the case where $M_0 \cong E_{S^2}$. In this case, we have $b_{\mathrm{min}} = 2k + 1$ for some $k \in \Z$ by Lemma \ref{lemma_Euler_extremum}. Similar to the previous case, we have \[ [\omega_t] = (3x + 2y) - t (kx - y) = (3 - kt)x + (2 + t)y, \quad t \in (-2, 2). \] Again, since $\lim_{t \rightarrow 2} \int_{M_t} [\omega_t]^2 = 8(3-2k) - 16 = 8 - 16k = 0$, we have $k = \frac{1}{2}$ which contradicts that $k \in \Z$. Consequently, $M_0$ cannot be diffeomorphic to $E_{S^2}$. This completes the proof. \end{proof} \begin{example}[Fano variety of type {\bf (I-1)}]\cite[17th in Section 12.2]{IP}\label{example_I_1} Let $X = \p^3$ with the symplectic form $4 \omega_{\mathrm{FS}}$ (so that $c_1(TX) = [4\omega_{\mathrm{FS}}]$) where $\omega_{\mathrm{FS}}$ denotes the normalized Fubini-Study form such that $\int_X \omega_{\mathrm{FS}} = 1$. 
Consider the Hamiltonian $S^1$-action on $(X, 4\omega_{\mathrm{FS}})$ given by \[ t \cdot [z_0, z_1, z_2, z_3] = [tz_0, tz_1, z_2, z_3], \quad \quad t \in S^1 \] where the balanced moment map for the action is given by \[ H([z_0, z_1, z_2, z_3]) = \frac{4|z_0|^2 + 4|z_1|^2}{|z_0|^2 + |z_1|^2 + |z_2|^2 + |z_3|^2} - 2. \] Then the fixed point set (whose image consists of the red lines in Figure \ref{figure_I_1}) is given by $\{ Z_{-2} \cong Z_2 \cong S^2 \}$ and this coincides with the one given in Theorem \ref{theorem_I_1}. (See also \cite[Table 1-(4)]{Li2}.) \begin{figure}[H] \scalebox{1}{\input{figure_II_1_1.pdf_tex}} \caption{\label{figure_I_1} Toric moment map on $\p^3$} \end{figure} \end{example} \section{Case II : $\mathrm{Crit} ~\mathring{H} = \{0\}$} \label{secCaseIIMathrmCritMathringH} In this section, we classify all TFD in the case where $\mathrm{Crit} \mathring{H} = \{ 0 \}$. By Proposition \ref{proposition_GS}, we have $M_{-2 + \epsilon} \cong M_0 \cong M_{2 - \epsilon}$ so that we may divide the proof into two cases: \begin{itemize} \item $M_0 \cong S^2 \times S^2.$ \item $M_0 \cong E_{S^2}.$ \end{itemize} We begin with the case $M_0 \cong S^2 \times S^2.$ \begin{theorem}\label{theorem_II_1} Let $(M,\omega)$ be a six-dimensional closed monotone semifree Hamiltonian $S^1$-manifold with $c_1(TM) = [\omega]$. Suppose that $\mathrm{Crit} H = \{ 2, 0, -2\}$ and $M_0 \cong S^2 \times S^2$. Then, up to orientation of $M$, the list of all possible topological fixed point data is given by \begin{table}[H] \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $(M_0, [\omega_0])$ & $e(P_{-2}^+)$ &$Z_{-2}$ & $Z_0$ & $Z_2$ & $b_2(M)$ & $c_1^3(M)$ \\ \hline \hline {\bf (II-1.1)} & $(S^2 \times S^2, 2x + 2y)$ & $-y$ &$S^2$ & $Z_0 \cong S^2, ~\mathrm{PD}(Z_0) = x+y$ & $S^2$ & $2$ &$48$ \\ \hline {\bf (II-1.2)} & $(S^2 \times S^2, 2x + 2y)$ & $-y$ &$S^2$ & $Z_0 \cong S^2, ~\mathrm{PD}(Z_0) = x$ & $S^2$ & $2$ & $56$\\ \hline {\bf (II-1.3)} & $(S^2 \times S^2, 2x + 2y)$ & $-y$ &$S^2$ & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = \mathrm{PD}(Z_0^2) = y$} & $S^2$ & $3$ & $48$\\ \hline \end{tabular} \vs{0.5cm} \caption{\label{table_II_1} Topological fixed point data for $\mathrm{Crit} H = \{-2, 0, 2\}$, $M_0 \cong S^2 \times S^2$} \end{table} \vs{-0.7cm} \end{theorem} \begin{proof} Write $\mathrm{PD}(Z_0) = ax + by \in H^2(M_0; \Z)$ for some $a,b \in \Z$. By Lemma \ref{lemma_Euler_extremum}, we may assume that $b_{\mathrm{min}} = 2k$ for some integer $k \in \Z$ and that $e(P_{-2}^+) = kx - y$. By the Duistermaat-Heckman theorem \eqref{equation_DH}, we have \[ [\omega_2] = [\omega_0] - 2(kx - y + \mathrm{PD}(Z_0)) = 2(1-a-k)x + (4-2b)y. \] As $\lim_{t \rightarrow 2} \int_{M_t} [\omega_t]^2 = 0$, we see that \begin{enumerate} \item $1-a-k=0$ and $4-2b > 0$, or \item $b=2$ and $1-a-k > 0$ \end{enumerate} where the above two strict inequalities follow from the fact that $\int_{M_t} [\omega_t]^2 > 0$ for every $0 \leq t < 2$. Moreover, we have \begin{equation}\label{equation_area_vol} \langle c_1(TM_0), [Z_0] \rangle = \langle [\omega_0], [Z_0] \rangle = 2a + 2b > 0, \quad \quad \mathrm{Vol}(Z_{-2}) = 2k + 2 > 0 \hs{0.2cm} (\Leftrightarrow ~k \geq 0) \end{equation} by Corollary \ref{corollary_volume_extremum}. \\ \noindent {\bf Case (1).} ~If $a + k = 1$ and $b \leq 1$, then the integer solutions $(a,b,k)$ for \eqref{equation_area_vol} are \[ (1,1,0), (1,0,0), (0,1,1).
\] \noindent {\bf Case (2).} ~If $a + k \leq 0$ and $b = 2$, then the integer solutions for $(a,b,k)$ are \[ (0,2,0), (-1,2,0), (-1,2,1). \] However, we may rule out the last two solutions in {\bf Case (2)} using the adjuction formula \begin{equation}\label{equation_adjunction} [Z_0]\cdot [Z_0] + \sum (2 - 2g_i) = \langle c_1(TM_0), [Z_0] \rangle \end{equation} where the sum is taken over all fixed components of $Z_0$. If $(a,b,k)$ is $(-1,2,0)$ or $(-1,2,1)$, we have \[ -4 + \sum (2 - 2g_i) = \langle 2x + 2y, [Z_0] \rangle = 2, \quad \quad \mathrm{PD}(Z_0) = -x + 2y \] which implies that there are at least three components each of which is homeomorphic to a sphere. Meanwhile, since $\langle c_1(TM_0), [Z_0] \rangle = 2$ is the symplectic area of $Z_0$, there should be at most two components in $Z_0$ and this leads to a contradiction. Summing up, we have \begin{equation}\label{equation_2_1_bminbmax} \begin{array}{lll} (a,b,k) = (1,1,0) ~(b_{\min} = 0, b_{\max} = 0), & & (a,b,k) = (1,0,0) ~(b_{\min} = 0, b_{\max} = 2) \\ (a,b,k) = (0,1,1) ~(b_{\min} = 2, b_{\max} = 0), & & (a,b,k) = (0,2,0) ~(b_{\min} = 0, b_{\max} = 0) \\ \end{array} \end{equation} where $b_{\min} = 2k$ and $b_{\max}$ is computed by Lemma \ref{lemma_Euler_extremum}. Since we only need to classify TFD's satisfying $b_{\min} \leq b_{\max}$ by \eqref{equation_assumption}, the case $(a,b,k) = (0,1,1)$ can be ruled out. Notice that the symplectic area of each component of $Z_0$ is even (since $[\omega_0] = 2x + 2y$). Applying \eqref{equation_adjunction} to each solutions in \eqref{equation_2_1_bminbmax}, we deduce that \begin{equation}\label{equation_II_1} \begin{array}{lllll} \text{\bf (II-1.1)} : (a,b,k) = (1,1,0) & \Rightarrow & 2 + \sum (2-2g_i) = 4 & \Rightarrow & \text{$Z_0$ has at most two components,}\\ \text{\bf (II-1.2)} : (a,b,k) = (1,0,0) & \Rightarrow & 0 + \sum (2-2g_i) = 2 & \Rightarrow & Z_0 \cong S^2,\\ \text{\bf (II-1.3)} : (a,b,k) = (0,2,0) & \Rightarrow & 0 + \sum (2-2g_i) = 4 & \Rightarrow & \text{$Z_0$ has exactly two components.}\\ \end{array} \end{equation} For the last case, it is easy to check that each two components are spheres (with area $2$) whose Poincar\'{e} dual classes are both $y$. For the first case, if $Z_0$ consists of two components, say $Z_0^1$ and $Z_0^2$, then we can easily see that $Z_0^1 \cong S^2$ and $Z_0^2 \cong T^2$ with \[ [Z_0^1] \cdot [Z_0^1] = 0, \quad [Z_0^2] \cdot [Z_0^2] = 2, \quad [Z_0^1] \cdot [Z_0^2] = 0, \quad \mathrm{PD}(Z_0^1) + \mathrm{PD}(Z_0^2) = x+y. \] The first and the third equalities imply that $(\mathrm{PD}(Z_0^1), \mathrm{PD}(Z_0^2)) = (ax, bx)$ or $(ay, by)$ for some $a,b \in \Z$, but in either case, the second (as well as fourth) equality does not hold. Therefore, $Z_0$ is connected and homeomorphic to $S^2$. 
To calculate the Chern number for each fixed point data, we apply the localization theorem \ref{theorem_localization} : \[ \begin{array}{ccl}\vs{0.3cm} \ds \int_M c_1^{S^1}(TM)^3 & = & \ds \int_{Z_{\min}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\min}}\right)^3}{e_{Z_{\min}}^{S^1}} + \int_{Z_{\max}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\max}}\right)^3}{e_{Z_{\max}}^{S^1}} + \int_{Z_0} \frac{\overbrace{\left(c_1^{S^1}(TM)|_{Z_0}\right)^3}^{= 0}}{e_{Z_0}^{S^1}} \\ \vs{0.2cm} & = & \ds \int_{Z_{\min}} \frac{\left( (2+b_{\min}) u + 2\lambda \right)^3}{b_{\min} u\lambda + \lambda^2} + \int_{Z_{\max}} \frac{\left( (2+b_{\max}) u - 2\lambda \right)^3}{-b_{\max} u\lambda + \lambda^2} \\ \vs{0.1cm} & = & \ds \int_{Z_{\min}} (\lambda - b_{\min}u)(12(2+b_{\min}) u\lambda^2 + 8\lambda^3) + \int_{Z_{\max}} (\lambda + b_{\max}u)(12(2+ b_{\max}) u\lambda^2 - 8\lambda^3) \\ \vs{0.1cm} & = &\ds 24 + 4b_{\min} + 24 + 4b_{\max}. \end{array} \] By \eqref{equation_2_1_bminbmax}, this completes the proof. See Table \ref{table_II_1} and compare it with \eqref{equation_II_1}. \end{proof} \begin{remark}\label{remark_localization_surface} We use the following equations frequently for calculating the Chern numbers : \[ \left(c_1^{S^1}(TM)|_{Z_0} \right)^3 = 0, \quad \int_{Z_{\min}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\min}}\right)^3}{e_{Z_{\min}}^{S^1}} = 24 + 4b_{\min}, \quad \int_{Z_{\max}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\max}}\right)^3}{e_{Z_{\max}}^{S^1}} = 24 + 4b_{\max}. \] \end{remark} \vs{0.3cm} \begin{example}[Fano varieties of type {\bf (II-1)}]\label{example_II_1} We denote by $T^k$ a $k$-dimensional compact torus, $\frak{t}$ the Lie algebra of $T$, and $\frak{t}^*$ the dual of $\frak{t}$. We provide algebraic Fano examples for each topological fixed point data given in Theorem \ref{theorem_II_1} as follows. \vs{0.3cm} \begin{figure}[H] \scalebox{1}{\input{figure_II_2_1.pdf_tex}} \caption{\label{figure_II_1} Fano varieties of type {\bf (II-1)}} \end{figure} \begin{enumerate} \item {\bf Case (II-1.1)} \cite[32nd in Section 12.3]{IP} : Let $W= \mcal{F}(3)$ be the complete flag variety of $\C^3$, or equivalently, a smooth divisor of bidegree $(1,1)$ in $\p^2 \times \p^2$ (via the Pl\"{u}cker embedding). One can think of $M$ as a co-adjoint orbit of $U(3)$. It is well-known that $M$ admits a unique $U(3)$-invariant monotone K\"{a}hler form $\omega$ (called a {\em Kirillov-Kostant-Souriau form}) such that $c_1(TW) = [\omega]$. A maximal torus $T^2$ of $U(3)$ acts on $(W,\omega)$ in a Hamiltonian fashion with a moment map \[ \mu : W \rightarrow \frak{t}^* \] such that the moment map image can be described by Figure \ref{figure_II_1} (a), where edges corresponds to $T$-invariant spheres (called 1-skeleton in \cite{GKM}). If we take a circle subgroup $S^1$ generated by $\xi = (1,0) \in \frak{t} \cong \R^2$, then the action is semifree and the balanced moment map is given by \[ \mu_\xi = \langle \mu, \xi \rangle - 2 \] The fixed point set for the $S^1$-action consists of three spheres corresponding to the edges (colored by red in Figure \ref{figure_II_1} (a)) \[ e_1 = \overline{(0,0) ~(0,2)}, \quad e_2 = \overline{(2,0) ~(2,4)}, \quad e_3 = \overline{(4,2) ~(4,4)} \] The symplectic areas of the minimum $Z_{-2} = \mu^{-1}(e_1)$ and the maximum $Z_2 = \mu^{-1}(e_3)$ are both equal to $2 = 2 + b_{\min} = 2 + b_{\max}$ by Corollary \ref{corollary_volume_extremum} and hence $b_{\min} = b_{\max} = 0$. Thus $W_{-2 + \epsilon} \cong S^2 \times S^2$ by Lemma \ref{lemma_Euler_extremum}. 
Therefore, the corresponding fixed point data should coincide with {\bf (II-1.1)} in Table \ref{table_II_1}. \vs{0.2cm} \item {\bf Case (II-1.2)} \cite[35th in Section 12.3]{IP} : Let $M = V_7$, the toric blow-up of $\p^3$ at a point. Then the moment polytope is given by Figure \ref{figure_II_1} (b) where we denote the moment map by $\mu$. If we take a circle subgroup generated by $\xi = (1,1,0) \in \frak{t}$, then we can easily check that the $S^1$-action is semifree and the balanced moment map is given by $\mu_\xi := \langle \mu, \xi \rangle - 2$. Moreover, the fixed components $Z_{-2}$, $Z_0$, and $Z_2$ are three spheres whose moment map images are the edges (colored by red in Figure \ref{figure_II_1} (b)) \[ e_1 = \overline{(0,0,0) ~(0,0,2)}, \quad e_2 = \overline{(0,2,2) ~(2,0,2)}, \quad e_3 = \overline{(0,4,0) ~(4,0,0)}. \] In this case, we have $Z_{-2} = \mu^{-1}(e_1)$ and $Z_2 = \mu^{-1}(e_3)$ with the symplectic areas 2 and 4, respectively. By Corollary \ref{corollary_volume_extremum}, we have $b_{\min} = 0$ and $b_{\max} = 2$ and so $M_{-2 + \epsilon} \cong S^2 \times S^2$ by Lemma \ref{lemma_Euler_extremum}. Also, one can easily check that the fixed point data for the $S^1$-action equals {\bf (II-1.2)} in Table \ref{table_II_1} (see also \eqref{equation_2_1_bminbmax}). \vs{0.2cm} \item {\bf Case (II-1.3)} \cite[27th in Section 12.4]{IP} : Let $M = \p^1 \times \p^1 \times \p^1$ with the monotone K\"{a}hler form $\omega = 2\omega_{\mathrm{FS}} \oplus 2\omega_{\mathrm{FS}} \oplus 2\omega_{\mathrm{FS}}$ so that $c_1(TM) = [\omega]$. Then the standard =Hamiltonian $T^3$-action admits a moment map whose image is a cube with side length 2, see Figure \ref{figure_II_1} (c). Take a circle subgroup $S^1$ of $T^3$ generated by $\xi = (1,0,1)$. Then the induced $S^1$-action becomes semifree with the balanced moment map is given by $\mu_\xi = \langle \mu, \xi \rangle - 2$. It is easy to see that there are four fixed components homeomorphic to spheres and their moment map images are \[ e_1 = \overline{(0,2,0) ~(0,0,0)}, \quad e_2 = \overline{(0,2,2) ~(0,0,2)}, \quad e_3 = \overline{(2,2,0) ~(2,0,0)}, \quad e_4 = \overline{(2,2,2) ~(2,0,2)} \] colored by red in Figure \ref{figure_II_1} (c). Since $Z_{-2} = \mu^{-1}(e_1)$ and $Z_2 = \mu^{-1}(e_4)$ both have the symplectic area 2, we have $b_{\min} = b_{\max} = 0$ and this fixed point data coincides with {\bf (II-1.3)} in Table \ref{table_II_1}. \end{enumerate} \end{example} Now we consider the case of $M_0 \cong E_{S^2}$. \begin{theorem}\label{theorem_II_2} Let $(M,\omega)$ be a six-dimensional closed monotone semifree Hamiltonian $S^1$-manifold with $c_1(TM) = [\omega]$. Suppose that $\mathrm{Crit} H = \{ 2, 0, -2\}$ and $M_0 \cong E_{S^2}$. 
Then, up to orientation of $M$, the list of all possible topological fixed point data is given by \begin{table}[H] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & $(M_0, [\omega_0])$ & $e(P_{-2}^+)$ &$Z_{-2}$ & $Z_0$ & $Z_2$ & $b_2(M)$ & $c_1^3(M)$ \\ \hline \hline {\bf (II-2.1)} & $(E_{S^2}, 3x + 2y)$ & $-x -y$ &$S^2$ & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = y$, $\mathrm{PD}(Z_0^2) = x+y$} & $S^2$ & $3$ & $48$\\ \hline {\bf (II-2.2)} & $(E_{S^2}, 3x + 2y)$ & $-x-y$ &$S^2$ & $Z_0 \cong S^2, ~\mathrm{PD}(Z_0) = 2x+2y$ & $S^2$ & $2$ &$40$ \\ \hline \end{tabular} \vs{0.5cm} \caption{\label{table_II_2} Topological fixed point data for $\mathrm{Crit} H = \{-2, -0, 2\}$, $M_0 \cong E_{S^2}$} \end{table} \vs{-0.7cm} \end{theorem} \begin{proof} The idea of the proof is essentially similar to the proof of Theorem \ref{theorem_II_1}. In this case, Lemma \ref{lemma_Euler_extremum} implies that $b_{\mathrm{min}} = 2k+1$ and $e(P_{-2}^+) = kx - y$ for some integer $k \in \Z$. If we denote by $\mathrm{PD}(Z_0) = ax + by \in H^2(M_0; \Z)$ for some $a,b \in \Z$, then it follows that \[ \langle c_1(TM_0), [Z_0] \rangle > 0, \quad \mathrm{Vol}(Z_{-2}) = 2k + 3 > 0 \quad \quad \Rightarrow \quad \quad 2a+b \geq 1, \quad k \geq -1. \] by Corollary \ref{corollary_volume_extremum}. Also, by the Duistermaat-Heckman theorem \eqref{equation_DH}, we obtain \[ [\omega_2] = [\omega_0] - 2(kx - y + \mathrm{PD}(Z_0)) = (3-2a-2k)x + (4-2b)y. \] Since $\lim_{t \rightarrow 2} \int_{M_t} [\omega_t]^2 = 0$, we have \[ 2(3-2a-2k)(4-2b) - (4-2b)^2 = 0 \quad \Rightarrow \quad b=2 \hs{0.3cm} \text{or} \hs{0.3cm} 1+b = 2a + 2k \] Note that in the latter case, $b$ becomes odd and this implies that \begin{equation}\label{equation_2_2_bmax} \langle e(P_2^-)^2, [M_0] \rangle = \langle ((a+k)x + (b-1)y)^2, [M_0] \rangle = 2(a+k)(b-1) - (b-1)^2 \equiv 0 ~\mod 2 \end{equation} which contradicts that $- b_{\max} = \langle e(P_2^-)^2, [M_0] \rangle$ is odd by Lemma \ref{lemma_Euler_extremum} (since $M_{2-\epsilon} \cong M_0 \cong E_{S^2}$). Consequently, we get \begin{equation}\label{equation_2_2} b=2, \quad a \geq 0, \quad k \geq -1, \quad a + k \leq 1 ~( \Leftrightarrow ~b_{\max} + 2 = \mathrm{vol}(Z_{\max}) \geq 1). \end{equation} Therefore, all possible solutions $(k,a,b)$ to \eqref{equation_2_2} are given by \[ (-1,0,2), (-1,1,2), (-1, 2, 2), (0,0,2), (0,1,2),(1,0,2). \] Applying the adjunction formula, we may rule out some solutions : if $a=0$, then $\mathrm{PD}(Z_0) = 2y$ so that we have $[Z_0] \cdot [Z_0] = -4$ and $\langle c_1(TM_0), [Z_0] \rangle = 2$ and hence there are at most two connected component in $Z_0$. On the other hand, the adjunction formula \eqref{equation_adjunction} implies that \[ \underbrace{[Z_0] \cdot [Z_0]}_{= ~-4} + \sum (2 - 2g_i) = \langle c_1(TM_0), [Z_0] \rangle = 2 \] and therefore there should be at least three spheres, which contradicts that $Z_0$ consists of at most two connected components. Also, if $(k,a,b)=(0,1,2)$, then the formula \eqref{equation_2_2_bmax} induces that $b_{\min} = 1$ and $b_{\max} = -1$ (in particular $b_{\min} > b_{\max}$) and hence we may rule out this case by \eqref{equation_assumption}. To sum up, we have only two possible cases : \vs{0.2cm} \noindent {\bf (II-2.1)} : $(k,a,b)=(-1,1,2)$. In this case, $[Z_0] \cdot [Z_0] = 0$ and $\langle c_1(TM_0), [Z_0] \rangle = 4$, $b_{\min} = -1$ and $b_{\max} = 1$. 
The adjunction formula implies that there are at least two sphere components, denoted by $C_1$ and $C_2$, satisfying the following: \begin{itemize} \item $1 \leq \langle [\omega_0], [C_i] \rangle \leq 3$. \item $2 \leq \langle [\omega_0], [C_1] + [C_2] \rangle \leq 4$. \item $[C_1] \cdot [C_2] =0$. \end{itemize} Let $\mathrm{PD}(C_1) = p x + q y$. If $\langle [\omega_0], [C_1] \rangle = 2p + q = 1$, then $2pq - q^2 = -1$ by the adjunction formula so that we have $(p,q) = (0,1)$. Similarly, if $\langle [\omega_0], [C_1] \rangle = 2p + q = 2$, then we have $2pq - q^2 = 0$ and hence \[ q = 0 ~(p = 1) \quad \text{or} \quad q = 2p ~(4p = 2). \] So, we have $(p,q) = (1,0)$. Note that if $\langle [\omega_0], [C_i] \rangle \leq 2$ for every $i = 1,2$, since $[C_1] \cdot [C_2] = 0$, the only possible case is $\langle [\omega_0], [C_i] \rangle = 2$ for every $i=1,2.$ However, this cannot happen since $\mathrm{PD}(C_1) + \mathrm{PD}(C_2) \neq x + 2y$. Thus the only possibility is that $\langle [\omega_0], [C_1] \rangle = 1$ and $\langle [\omega_0], [C_2] \rangle = 3$. Therefore, we obtain $\mathrm{PD}(C_1) = y$, $\mathrm{PD}(C_2) = x+y$, and $C_1 \cong C_2 \cong S^2$. See Table \ref{table_II_2}: {\bf (II-2.1)}. \vs{0.3cm} \noindent {\bf (II-2.2)} : $(k,a,b)=(-1,2,2)$. In this case, we have $[Z_0] \cdot [Z_0] = 4$ and $\langle c_1(TM_0), [Z_0] \rangle = 6$, $b_{\min} = -1$ and $b_{\max} = -1$. By the adjunction formula, there exists a component $C \cong S^2$ of $Z_0$; write $\mathrm{PD}(C) = px + qy$. Then, we have \[ [C] \cdot ([Z_0] - [C]) = \langle (px + qy) \cdot ((2-p)x + (2-q)y), [M_0] \rangle = 0 \quad \Leftrightarrow \quad -2pq + 2p + q^2 = 0. \] Also, since \[ V := \mathrm{vol}(C) = [C] \cdot [C] + 2 = \langle (px+qy)^2, [M_0] \rangle + 2 = 2pq - q^2 + 2, \] we get $2p + 2 - V = 0$. If $V = 6$, then $Z_0$ is connected so that we are done. If $V=2$, then $p = q = 0$, which is impossible. Finally, if $V=4$, then $p = 1$ and $q^2 - 2q + 2 = 0$, which has no real solution. Therefore, we have $V=6$ and $Z_0$ is connected and homeomorphic to $S^2$. See Table \ref{table_II_2}: {\bf (II-2.2)}. \vs{0.3cm} Note that the Chern number computations in Table \ref{table_II_2} immediately follow from Remark \ref{remark_localization_surface}. \end{proof} \begin{example}[Fano varieties of type {\bf (II-2)}]\label{example_II_2} We illustrate algebraic Fano varieties with holomorphic Hamiltonian torus actions realizing each topological fixed point data given in Theorem \ref{theorem_II_2}.\vs{0.3cm} \begin{figure}[h] \scalebox{1}{\input{figure_II_2_2.pdf_tex}} \caption{\label{figure_II_2} Fano varieties of type {\bf (II-2)}} \end{figure} \begin{enumerate} \item {\bf Case (II-2.1)} \cite[28th in Section 12.4]{IP} : Let $M = \p^1 \times F_1$ where $F_1 = \p(\mcal{O} \oplus \mcal{O}(1))$ is the Hirzebruch surface. Equip $M$ with the toric K\"{a}hler form $\omega$ such that $c_1(TM) = [\omega]$ so that the moment map $\mu : M \rightarrow \frak{t}^*$ has the image given in Figure \ref{figure_II_2} (a). If we take a circle subgroup $S^1$ generated by $\xi = (0,1,-1) \in \frak{t}$, then one can check that the action is semifree and the balanced moment map is given by \[ \mu_\xi = \langle \mu, \xi \rangle.
\] The fixed point set for the $S^1$-action has four connected components, each of which is a sphere, with moment map images (colored red in Figure \ref{figure_II_2} (a)) \[ e_1 = \overline{(0,0,2) ~(1,0,2)}, \quad e_2 = \overline{(0,2,2) ~(1,2,2)}, \quad e_3 = \overline{(0,0,0) ~(3,0,0)}, \quad e_4 = \overline{(0,2,0) ~(3,2,0)}. \] The symplectic areas of the minimum $Z_{-2} = \mu^{-1}(e_1)$ and the maximum $Z_2 = \mu^{-1}(e_4)$ are 1 and 3, respectively, so that $b_{\min} = -1$ and $b_{\max} = 1$ by Corollary \ref{corollary_volume_extremum}. Thus $M_{-2 + \epsilon} \cong E_{S^2}$ by Lemma \ref{lemma_Euler_extremum} and the corresponding fixed point data coincides with {\bf (II-2.1)} in Table \ref{table_II_2}. \vs{0.2cm} \item {\bf Case (II-2.2)} \cite[29th in Section 12.3]{IP} : Let $M$ be a smooth quadric in $\p^4$. As a co-adjoint orbit of $SO(5)$, $M$ admits an $SO(5)$-invariant monotone K\"{a}hler form $\omega$ such that $c_1(TM) = [\omega]$. With respect to the maximal torus $T^2$-action on $(M,\omega)$, we get a moment map $\mu : M \rightarrow \frak{t}^*$ whose image is a square with four vertices $(0, \pm 3)$, $(\pm 3, 0)$ (see Figure \ref{figure_II_2} (b)). Let $C$ be the $T^2$-invariant sphere $\mu^{-1}(\overline{(0,-3) ~(0,3)})$ and define \[ \widetilde{M} := ~\text{$T^2$-equivariant (or GKM) blow-up of $M$ along $C$} \] where the {\em $T^2$-equivariant blowing up} can be done via the following two steps:\vs{0.2cm} \begin{itemize} \item Take a $T^2$-equivariant neighborhood $\mcal{U}$ of $C$, isomorphic to some $T^2$-equivariant $\C^2$-bundle over $\p^1$, and extend the $T^2$-action to (any effective Hamiltonian) $T^3$-action so that we get a toric model. \item Take the toric blow-up of $\mcal{U}$ along the zero section, i.e., $C$, and restrict the toric action to the original $T^2$-action. \end{itemize} \vs{0.2cm} The resulting moment map image is given in Figure \ref{figure_II_2} (b). Now, we take a circle subgroup generated by $\xi = (0,1) \in \frak{t}$. One can directly check that the induced $S^1$-action on $\widetilde{M}$ is semifree and the balanced moment map is given by $\mu_\xi := \langle \mu, \xi \rangle - 2$. Moreover, the fixed components $Z_{-2}$, $Z_0$, and $Z_2$ are given by \[ Z_{-2} = \mu^{-1}(e_1), \quad Z_{0} = \mu^{-1}(e_2), \quad Z_{2} = \mu^{-1}(e_3) \] where \[ e_1 = \overline{(-1,-2) ~(1,-2)}, \quad e_2 = \overline{(-3,0) ~(3,0)}, \quad e_3 = \overline{(-1,2) ~(1,2)} \] (colored red in Figure \ref{figure_II_2} (b)). In particular, we have $\mathrm{vol}(Z_{-2}) = \mathrm{vol}(Z_{2}) = 1$ so that $b_{\min} = b_{\max} = -1$. By Lemma \ref{lemma_Euler_extremum}, we have $\widetilde{M}_{-2 + \epsilon} \cong E_{S^2}$. So, the fixed point data for the $S^1$-action coincides with {\bf (II-2.2)} in Table \ref{table_II_2}. \\ \end{enumerate} \end{example} \section{Case III : $\mathrm{Crit} ~\mathring{H} = \{-1, 1\}$} \label{secCaseIIIMathrmCritMathringH11} In this section, we classify all TFD in the case where $\mathrm{Crit} \mathring{H} = \{-1, 1\}$. Let $m = |Z_{-1}|$ ($ m \in \Z_{>0}$) be the number of isolated fixed points of index two. By Poincar\'{e} duality, we have $|Z_1| = m$.
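Before applying the localization theorem, we record the local data at the isolated fixed points that will be used repeatedly below: since the action is semifree, an isolated fixed point of index two has weights $(-1,1,1)$, so that \[ e^{S^1}(\mathrm{pt}) = -\lambda^3, \quad \quad c_1^{S^1}(TM)|_{\mathrm{pt}} = \lambda, \] while an isolated fixed point of index four has weights $(-1,-1,1)$, so that $e^{S^1}(\mathrm{pt}) = \lambda^3$ and $c_1^{S^1}(TM)|_{\mathrm{pt}} = -\lambda$.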
Applying the localization theorem to $1 \in H^0_{S^1}(M)$ and $c_1^{S^1}(TM) \in H^2_{S^1}(M)$, we obtain \begin{equation}\label{equation_3_localization_0} \begin{array}{ccl}\vs{0.2cm} 0 = \ds \int_M 1 & = & \ds \int_{Z_{\min}} \frac{1}{e_{Z_{\min}}^{S^1}} + m \cdot \frac{1}{-\lambda^3} + m \cdot \frac{1}{\lambda^3} + \int_{Z_{\max}} \frac{1}{e_{Z_{\max}}^{S^1}} \\ \vs{0.2cm} & = & \ds \int_{Z_{\min}} \frac{1}{b_{\min} u\lambda + \lambda^2} + \int_{Z_{\max}} \frac{1}{-b_{\max} u\lambda + \lambda^2} \\ \vs{0.2cm} & = &\ds \frac{- b_{\min} + b_{\max}}{\lambda^3} \end{array} \end{equation} and \begin{equation}\label{equation_3_localization_c1} \begin{array}{ccl}\vs{0.2cm} 0 = \ds \int_M c_1^{S^1}(TM) & = & \ds \int_{Z_{\min}} \frac{c_1^{S^1}(TM)|_{Z_{\min}}}{e_{Z_{\min}}^{S^1}} + m \cdot \frac{\lambda}{-\lambda^3} + m \cdot \frac{-\lambda}{\lambda^3} + \int_{Z_{\max}} \frac{c_1^{S^1}(TM)|_{Z_{\max}}}{e_{Z_{\max}}^{S^1}} \\ \vs{0.2cm} & = & \ds \int_{Z_{\min}} \frac{2\lambda + (b_{\min}+2)u}{b_{\min} u\lambda + \lambda^2} -2m \cdot \frac{\lambda}{\lambda^3} + \int_{Z_{\max}} \frac{- 2\lambda + (b_{\max} + 2)u}{-b_{\max} u\lambda + \lambda^2} \\ \vs{0.2cm} & = &\ds \frac{- b_{\min} - b_{\max} -2m +4}{\lambda^2}. \end{array} \end{equation} From \eqref{equation_3_localization_0} and \eqref{equation_3_localization_c1}, we get $b_{\max} = b_{\min}$ and $b_{\min} + m = 2$. Moreover, Corollary \ref{corollary_volume_extremum} implies that $b_{\min} \geq -1$ and therefore we have three possible cases : \[ (b_{\min}, m) = (1,1), (0,2), (-1, 3). \] Therefore we obtain the following. \begin{theorem}\label{theorem_III} Let $(M,\omega)$ be a six-dimensional closed monotone semifree Hamiltonian $S^1$-manifold with $c_1(TM) = [\omega]$. Suppose that $\mathrm{Crit} H = \{ 2, 1, -1, -2\}$. Then the list of all possible topological fixed point data is given by \begin{table}[H] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & $(M_0, [\omega_0])$ & $e(P_{-2}^+)$ &$Z_{-2}$ & $Z_{-1}$ & $Z_1$ & $Z_2$ & $b_2(M)$ & $c_1^3(M)$ \\ \hline \hline {\bf (III.1)} & $(E_{S^2} \# ~\overline{\p^2}, 3x + 2y - E_1)$ & $-y$ &$S^2$ & {\em pt} & {\em pt} & $S^2$ & $2$ & $54$\\ \hline {\bf (III.2)} & $(S^2 \times S^2 \# ~2\overline{\p^2}, 2x + 2y - E_1 - E_2)$ & $-y$ &$S^2$ & {\em 2 pts} & {\em 2 pts} & $S^2$ & $3$ & $44$\\ \hline {\bf (III.3)} & $(E_{S^2} \# ~3\overline{\p^2}, 3x + 2y - E_1 - E_2 - E_3)$ & $-x-y$ &$S^2$ & {\em 3 pts} & {\em 3 pts} & $S^2$ & $4$ &$34$ \\ \hline \end{tabular} \vs{0.5cm} \caption{\label{table_III} Topological fixed point data for $\mathrm{Crit} H = \{-2, -1,1, 2\}$.} \end{table} \vs{-0.7cm} \end{theorem} \begin{proof} The formula for $e(P_{-2}^+)$ follows from Lemma \ref{lemma_Euler_extremum}: $e(P_{-2}^+) = kx - y$, where $b_{\min} = 2k$ or $2k+1$ according to whether $M_{-2+\epsilon}$ is diffeomorphic to $S^2 \times S^2$ or $E_{S^2}$. Also the Chern number computations can be easily obtained from Remark \ref{remark_localization_surface}, together with the fact that each isolated fixed point of index two (resp. index four) contributes $\lambda^3/(-\lambda^3) = -1$ (resp. $(-\lambda)^3/\lambda^3 = -1$) to $\int_M c_1^{S^1}(TM)^3$, so that $c_1^3(M) = (24 + 4b_{\min}) + (24 + 4b_{\max}) - 2m$. \end{proof} \begin{example}[Fano varieties of type {\bf (III)}]\label{example_III} We provide algebraic Fano varieties with holomorphic Hamiltonian $S^1$-actions whose topological fixed point data are given in Theorem \ref{theorem_III} as follows. \begin{figure}[h] \scalebox{1}{\input{figure_II_3.pdf_tex}} \caption{\label{figure_III} Fano varieties of type {\bf (III)}} \end{figure} \begin{enumerate} \item {\bf Case (III.1)} \cite[33rd in Section 12.3]{IP} : Let $M$ be the toric blow-up of $\p^3$ along a $T^3$-invariant line.
With respect to the $T^3$-invariant normalized monotone K\"{a}hler form, we get a moment map $\mu : M \rightarrow \frak{t}^*$ whose image is given in Figure \ref{figure_III} (a). If we take a circle subgroup $S^1$ generated by $\xi = (1,0,1) \in \frak{t}$, then the action is semifree with the balanced moment map $\mu_\xi = \langle \mu, \xi \rangle - 2$ and the fixed point set consists of \[ Z_{-2} = \mu^{-1}(e_1), \quad Z_{-1} = \mu^{-1}(1,0,0), \quad Z_1 = \mu^{-1}(0,1,3), \quad Z_2 = \mu^{-1}(e_2) \] where $e_1 = \overline{(0,1,0) ~(0,4,0)}$ and $e_2 = \overline{(1,0,3) ~(4,0,0)}$. Note that $\mathrm{Vol}(Z_{-2}) = \mathrm{Vol}(Z_2) = 3$ and so $b_{\min} = b_{\max} = 1$ by Corollary \ref{corollary_volume_extremum}. Thus the fixed point data for the $S^1$-action coincides with Table \ref{table_III} {\bf (III.1)}. \vs{0.2cm} \item {\bf Case (III.2)} \cite[25th in Section 12.4]{IP} : Let $M$ be the toric blow-up of $\p^3$ along two disjoint $T^3$-invariant lines. Then the image of a moment map $\mu : M \rightarrow \frak{t}^*$ (with respect to the normalized $T^3$-invariant K\"{a}hler form) is described as in Figure \ref{figure_III} (b). One can easily check that the circle action generated by $\xi = (1,0,1) \in \frak{t}$ is semifree and the balanced moment map is given by $\mu_\xi = \langle \mu, \xi \rangle - 2$. The fixed components are \[ Z_{-2} = \mu^{-1}(e_1), \quad Z_{-1} = \mu^{-1}(\{ (0,3,1), (1,0,0) \}), \quad Z_1 = \mu^{-1}(\{ (0,1,3), (3,0,0)\}), \quad Z_2 = \mu^{-1}(e_2) \] where $e_1 = \overline{(0,3,0) ~(0,1,0)}$ and $e_2 = \overline{(1,0,3) ~(3,0,1)}$. As the symplectic volumes of $Z_{-2}$ and $Z_2$ are both 2, we have $b_{\min} = b_{\max} = 0$ and so the fixed point data of the action is the same as Table \ref{table_III} {\bf (III.2)}. \item {\bf Case (III.3)} \cite[6th in Section 12.5]{IP} : Consider $M = \p^1 \times \p^1 \times \p^1$ equipped with the normalized monotone K\"{a}hler form $\omega$ and the standard $\omega$-compatible integrable complex structure $J$. Consider the standard $T^3$-action on $(M,\omega)$ with a moment map given by \[ \mu([x_0, x_1], [y_0, y_1], [z_0, z_1]) = \left( \frac{2|x_0|^2}{|x_0|^2 + |x_1|^2}, \frac{2|y_0|^2}{|y_0|^2 + |y_1|^2}, \frac{2|z_0|^2}{|z_0|^2 + |z_1|^2} \right). \] For the diagonal circle subgroup \[ S^1 = \{(t,t,t) ~|~ t \in S^1 \} \subset T^3, \] generated by $\xi = (1,1,1) \in \frak{t}$, the induced $S^1$-action on $(M,\omega, J)$ is semifree with the balanced moment map $\mu_\xi = \langle \mu, \xi \rangle - 3$. See Figure 2 in \cite[Example 6.6]{Cho}. Now, we take the $S^1$-invariant diagonal sphere $D = \{ \left([z_0, z_1], [z_0, z_1], [z_0, z_1] \right) ~|~ [z_0, z_1] \in \p^1 \}$ in $M$, which is obviously a K\"{a}hler submanifold of $(M,\omega,J)$.
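Note that, as a holomorphic vector bundle, the normal bundle of $D$ satisfies $\nu_D \cong \mcal{O}(2) \oplus \mcal{O}(2)$: restricting the splitting $TM|_D \cong p_1^* T\p^1 \oplus p_2^* T\p^1 \oplus p_3^* T\p^1 \cong \mcal{O}(2)^{\oplus 3}$ (where $p_i : M \rightarrow \p^1$ denotes the $i$-th projection) and quotienting by the diagonally embedded $TD \cong \mcal{O}(2)$ yields $\nu_D \cong \mcal{O}(2)^{\oplus 2}$. In particular, one may take $k_1 = k_2 = 2$ in the construction described below.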
One can obtain an equivariant blowing-up $(\widetilde{M}, \widetilde{\omega}, \widetilde{J})$ of $(M,\omega,J)$ along $D$ as follows (where the construction seems to be well-known to experts): \vs{0.2cm} \begin{itemize} \item Let $\mcal{U}$ be a sufficiently small $T^3$-invariant neighborhood of $D$ such that $\mcal{U}$ equipped with the induced K\"{a}hler structure is $S^1$-equivariantly isomorphic to some neighborhood of the zero section of $E_D := \mcal{O}(k_1) \oplus \mcal{O}(k_2)$ where \begin{itemize} \item $E_D$ is equipped with the K\"{a}hler structure whose restriction on each fiber of $E_D$ equals the standard symplectic form on $\C \oplus \C$, \vs{0.1cm} \item $E_D$ admits an $S^1$-action compatible with the bundle structure such that the normal bundle $\nu_D$ of $D$ in $M$ is $S^1$-equivariantly isomorphic to $E_D$. \vs{0.1cm} \end{itemize} Note that each $\mcal{O}(k_i)$ has a fiberwise circle action so that $E_D$ has a fiberwise $T^2$-action. Together with the $S^1$-action given, $E_D$ becomes a (non-complete) toric variety and a zero section becomes $T^3$-invariant. \vs{0.1cm} \item Equip $\mcal{U}$ the toric structure (called a {\em local toric structure near $D$}) induced by the $T^3$-action on $E_D$. Then one can obtain a toric blow-up of $\mcal{U}$ along $D$ so that we obtain a new K\"{a}hler manifold, say $(\widetilde{M}, \widetilde{\omega}, \widetilde{J})$. We finally restrict the $T^3$-action to the $S^1$-subgroup of $T^3$. \begin{figure}[h] \scalebox{1}{\input{figure_II_3_blow_up.pdf_tex}} \caption{\label{figure_III_blowup} Blow up along an $S^1$-invariant sphere} \end{figure} \end{itemize} It is not hard to see that the induced $S^1$-action on $\widetilde{M}$ is semifree. Also, new fixed components which appear on $\widetilde{M}$ instead of two isolated fixed points on $D$ in $M$ are two spheres and hence the fixed point data coincides with Table \ref{table_III} {\bf (III.3)} (see Figure \ref{figure_III_blowup}). \\ \end{enumerate} \end{example} \section{Case IV : $\mathrm{Crit} ~\mathring{H} = \{-1, 0, 1\}$} \label{secCaseIVMathrmCritMathringH11} In this section, we classify all TFD in the case where $\mathrm{Crit} \mathring{H} = \{-1,0,1\}$. Let $m = |Z_{-1}|= |Z_1| > 0$ be the number of fixed points of index two. \begin{lemma}\label{lemma_number_indextwo} We have $m=1$ or $2$. \end{lemma} \begin{proof} Applying the localization theorem to $c_1^{S^1}(TM)$, we obtain \[ \begin{array}{ccl}\vs{0.2cm} 0 & = & \ds \int_M c_1^{S^1}(TM) \\ \vs{0.2cm} & = & \ds \int_{Z_{\min}} \frac{c_1^{S^1}(TM)|_{Z_{\min}}}{e_{Z_{\min}}^{S^1}} + m \cdot \frac{\lambda}{-\lambda^3} + m \cdot \frac{-\lambda}{\lambda^3} + \int_{Z_0} \frac{c_1^{S^1}(TM)|_{Z_0}}{e_{Z_0}^{S^1}} + \int_{Z_{\max}} \frac{c_1^{S^1}(TM)|_{Z_{\max}}}{e_{Z_{\max}}^{S^1}} \\ \vs{0.2cm} & = & \ds \int_{Z_{\min}} \frac{2\lambda + (b_{\min}+2)u}{b_{\min} u\lambda + \lambda^2} -2m \cdot \frac{\lambda}{\lambda^3} + \int_{Z_0} \frac{\overbrace{c_1(TM)|_{Z_0}}^{=~\mathrm{Vol}(Z_0)}}{(b^- - b^+) u\lambda - \lambda^2} + \int_{Z_{\max}} \frac{- 2\lambda + (b_{\max} + 2)u}{-b_{\max} u\lambda + \lambda^2} \\ \vs{0.2cm} & = &\ds \frac{- b_{\min} - b_{\max} -2m +4 - \mathrm{Vol}(Z_0)}{\lambda^2} \end{array} \] where $b^+$ and $b^-$ denote the Chern numbers of the positive and negative normal bundle of $Z_0$ in $M$, respectively. So, we have \begin{equation}\label{equation_m} b_{\min} + b_{\max} + 2m + \mathrm{Vol}(Z_0) = 4. 
\end{equation} Moreover, since $b_{\min}, ~b_{\max} \geq -1$ by Corollary \ref{corollary_volume_extremum}, we have $m \leq 2.$ \end{proof} By Lemma \ref{lemma_number_indextwo}, we may divide the classification into two cases: $m=1$ and $m=2$. Indeed, it follows directly from \eqref{equation_m} that there are 13 solutions for $(m, \mathrm{Vol}(Z_0), b_{\min}, b_{\max})$: \begin{equation}\label{equation_8_solutions} m=2, ~\begin{cases} \underline{\mathrm{Vol}(Z_0) = 2, (b_{\min}, b_{\max}) = (-1,-1)}\\ \underline{\mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (-1,0)}\\ \mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (0,-1)\\ \end{cases} m=1, ~\begin{cases} \underline{\mathrm{Vol}(Z_0) = 4, (b_{\min}, b_{\max}) = (-1,-1)}\\ \underline{\mathrm{Vol}(Z_0) = 3, (b_{\min}, b_{\max}) = (-1,0)}\\ \mathrm{Vol}(Z_0) = 3, (b_{\min}, b_{\max}) = (0,-1)\\ \underline{\mathrm{Vol}(Z_0) = 2, (b_{\min}, b_{\max}) = (-1,1)}\\ \underline{\mathrm{Vol}(Z_0) = 2, (b_{\min}, b_{\max}) = (0,0)}\\ \mathrm{Vol}(Z_0) = 2, (b_{\min}, b_{\max}) = (1,-1)\\ \underline{\mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (-1,2)}\\ \underline{\mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (0,1)}\\ \mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (1,0)\\ \mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (2,-1)\\ \end{cases} \end{equation} As \eqref{equation_assumption}, we may rule out the case of ``$b_{\min} > b_{\max}$'', and therefore we only need to deal with 8 solutions (underlined in \eqref{equation_8_solutions}) with $b_{\min} \leq b_{\max}$ and obtain the following. \begin{theorem}\label{theorem_IV_1} Let $(M,\omega)$ be a six-dimensional closed monotone semifree Hamiltonian $S^1$-manifold with $c_1(TM) = [\omega]$. Suppose that $\mathrm{Crit} H = \{ 2, 1, 0, -1, -2\}$. If the number of fixed points of index two equals two, up to orientation of $M$, the list of all possible topological fixed point data is given in the Table \ref{table_IV_1} \begin{table}[h] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & $(M_0, [\omega_0])$ & $e(P_{-2}^+)$ &$Z_{-2}$ & $Z_{-1}$ & $Z_0$ & $Z_1$ & $Z_2$ & $b_2(M)$ & $c_1^3(M)$ \\ \hline \hline {\bf (IV-1-1.1)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & {\em 2 pts} & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = x+y-E_1 - E_2$ \\ $\mathrm{PD}(Z_0^2) = x - E_1$} & {\em 2 pts} & $S^2$ & $5$ & $36$\\ \hline {\bf (IV-1-1.2)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & {\em 2 pts} & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = y$ \\ $\mathrm{PD}(Z_0^2) = x+y-E_1 - E_2$} & {\em 2 pts} & $S^2$ & $5$ & $36$\\ \hline {\bf (IV-1-1.3)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & {\em 2 pts} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x+y-E_1$} & {\em 2 pts} & $S^2$ & $4$ & $36$\\ \hline {\bf (IV-1-2)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & {\em 2 pts} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x - E_1$} & {\em 2 pts} & $S^2$ & $4$ & $40$\\ \hline \end{tabular} \vs{0.5cm} \caption{\label{table_IV_1} Topological fixed point data for $\mathrm{Crit} H = \{-2, -1,0,1, 2\}$ with $|Z_{-1}| = 2$.} \end{table} \end{theorem} \begin{proof} As in \eqref{equation_8_solutions}, $b_{\min} = -1$ so that $M_{-2 + \epsilon} \cong E_{S^2}$ by Lemma \ref{lemma_Euler_extremum}, and therefore $M_0$ is a two points blow-up of $E_{S^2}$ 
where the dual classes of the exceptional divisors are denoted by $E_1$ and $E_2$. Also, we have $e(P_{-2}^+) = kx - y = -x -y$ as $b_{\min} = 2k+1 = -1$. Let $\mathrm{PD}(Z_0) = ax + by + cE_1 + dE_2$ for some $a,b,c,d \in \Z$. By the Duistermaat-Heckman theorem \eqref{equation_DH}, we have \[ \begin{array}{ccl} [\omega_1] = [\omega_0] - e(P_0^+) & = & (3x + 2y - E_1 - E_2) - (-x - y + E_1 + E_2 + \mathrm{PD}(Z_0)) \\ & = &(4-a)x + (3-b)y - (2+c)E_1 - (2+d)E_2 \end{array} \] where $[\omega_0] = c_1(TM_0) = 3x + 2y - E_1 - E_2$ and $e(P_0^+) = -x - y + E_1 + E_2 + \mathrm{PD}(Z_0)$ by Lemma \ref{lemma_Euler_class}. Observe that exactly two blow-downs occur simultaneously at $M_1$, and we denote by $C_1, C_2$ the vanishing cycles so that \begin{equation}\label{equation_vanishing_IV_1} \langle [\omega_1], C_1 \rangle = \langle [\omega_1], C_2 \rangle = 0, \quad C_1 \cdot C_2 = 0. \end{equation} By Lemma \ref{lemma_list_exceptional}, the list of all possible $(\mathrm{PD}(C_1), \mathrm{PD}(C_2))$ (up to permutation on $\{E_1, E_2\}$) is given by \[ (E_1, E_2), \quad (E_1, E_3), \quad (E_3, u - E_1 - E_2), \quad (E_1, u - E_2- E_3), \quad (u-E_1-E_2, u-E_1- E_3), \quad (u-E_1-E_3, u-E_2- E_3) \] with the identification $u = x+y$ and $E_3 = y$, or equivalently, in terms of $\{x,y,E_1, E_2\}$, possible candidates for $(\mathrm{PD}(C_1), \mathrm{PD}(C_2))$ are \[ (E_1, E_2), \quad (E_1, y), \quad (y, x+y - E_1 - E_2), \quad (E_1, x - E_2), \quad (x+y -E_1-E_2, x - E_1), \quad (x -E_1, x -E_2). \] We divide the proof into two cases: {\bf (IV-1-1)}: $(b_{\min}, b_{\max}) = (-1,-1)$ and {\bf (IV-1-2)}: $(b_{\min}, b_{\max}) = (-1,0)$ as listed in \eqref{equation_8_solutions}. \vs{0.3cm} \noindent {\bf (IV-1-1) : $m = 2, \mathrm{Vol}(Z_0) = 2, (b_{\min}, b_{\max}) = (-1,-1)$} \vs{0.3cm} \noindent Note that there are at most two connected components of $Z_0$ since $\mathrm{Vol}(Z_0) = 2$. Because $\mathrm{Vol}(Z_0) = 2$ and $b_{\max} = -1$, it follows that \begin{equation}\label{equation_IV_1_1} \mathrm{Vol}(Z_0) = 2a+b+c+d = 2, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = 1 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = -1$} \end{equation} by Lemma \ref{lemma_Euler_extremum} and Lemma \ref{lemma_Euler_class}. \vs{0.3cm} \noindent {\bf Case (1) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (E_1, E_2)$} \vs{0.3cm} \noindent In this case, we have $c=d=-2$ by \eqref{equation_vanishing_IV_1}. Also, by \eqref{equation_IV_1_1}, it follows that $2a + b = 6$ and \[ \langle ((a-1)x + (b-1)y + (c+1)E_1 + (d+1)E_2)^2, [M_0] \rangle = 2(a-1)(b-1) - (b-1)^2 - 2 = -1. \] So, we get $a=2$, $b=2$, $c=d=-2$, i.e., $\mathrm{PD}(Z_0) = 2x + 2y - 2E_1 - 2E_2$, which implies that $[Z_0] \cdot [Z_0] = -4$. Because the number of connected components of $Z_0$ is at most two, there is no such manifold by the adjunction formula \eqref{equation_adjunction} : \[ [Z_0] \cdot [Z_0] + \sum (2 - 2g_i) = \langle c_1(TM_0), [Z_0] \rangle = 2 \] where the sum is taken over all connected components of $Z_0$. \vs{0.1cm} \noindent {\bf Case (2) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (E_1, y)$} \vs{0.1cm} \noindent By \eqref{equation_vanishing_IV_1}, we obtain $c = -2$ and $a = b + 1$. Also from \eqref{equation_IV_1_1}, we get \[ b = 1 \hs{0.2cm}(a = 2) \quad \text{and} \quad d = -1, \] that is, $\mathrm{PD}(Z_0) = 2x + y - 2E_1 - E_2$ and $[Z_0] \cdot [Z_0] = -2$.
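\vs{0.2cm} \noindent As a quick aside, the integer solutions obtained in Cases (1) and (2) can be double-checked by a direct search. The following Python snippet is only a sanity check and not part of the argument; the finite search box is an assumption of the snippet, whereas uniqueness over all of $\Z^4$ follows from the computations above.
\begin{verbatim}
# Sanity check for Cases (1) and (2) of (IV-1-1): brute-force search of
# (a, b, c, d) with Vol(Z_0) = 2a+b+c+d = 2 and <e(P_0^+)^2, [M_0]> = -1.
def euler_square(a, b, c, d):
    # expansion used in the text
    return 2*(a-1)*(b-1) - (b-1)**2 - (c+1)**2 - (d+1)**2

def search(extra):
    R = range(-6, 7)  # assumed search box
    return [(a, b, c, d) for a in R for b in R for c in R for d in R
            if 2*a + b + c + d == 2 and euler_square(a, b, c, d) == -1
            and extra(a, b, c, d)]

# Case (1): vanishing cycles (E_1, E_2) force c = d = -2.
print(search(lambda a, b, c, d: c == -2 and d == -2))    # [(2, 2, -2, -2)]
# Case (2): vanishing cycles (E_1, y) force c = -2 and a = b + 1.
print(search(lambda a, b, c, d: c == -2 and a == b + 1)) # [(2, 1, -2, -1)]
\end{verbatim}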
The adjunction formula \eqref{equation_adjunction} says that \[ [Z_0] \cdot [Z_0] + \sum (2 - 2g_i) = \langle c_1(TM_0), [Z_0] \rangle = 2 \] and this implies that $Z_0$ consists of two spheres $Z_0^1$ and $Z_0^2$ (since $Z_0$ consists at most two components) with \begin{equation}\label{equation_IV_1_1_1} \mathrm{PD}(Z_0^1) = x + y - E_1 - E_2 \quad \mathrm{PD}(Z_0^2) = x - E_1 \end{equation} up to permutation on $\{E_1, E_2\}$. (Note that this computation can be easily obtained from the fact that each $[Z_0^i]$ is an exceptional class so that one can apply Lemma \ref{lemma_list_exceptional}.) See Table \ref{table_IV_1} : {\bf (IV-1-1.1)}. For the Chern number computation, we apply the localization theorem \ref{theorem_localization} : \begin{equation}\label{equation_Chern_IV_1_1} \begin{array}{ccl}\vs{0.3cm} \ds \int_M c_1^{S^1}(TM)^3 & = & \ds \int_{Z_{\min}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\min}}\right)^3}{e_{Z_{\min}}^{S^1}} + 2 \frac{\overbrace{\lambda^3}^{Z_{-1} ~\text{term}}} {-\lambda^3} + \int_{Z_0} \frac{\overbrace{\left(c_1^{S^1}(TM)|_{Z_0}\right)^3}^{= 0}}{e_{Z_0}^{S^1}} + 2 \frac{\overbrace{-\lambda^3}^{Z_1 ~\text{term}}}{\lambda^3} + \int_{Z_{\max}} \frac{\left(c_1^{S^1}(TM)|_{Z_{\max}}\right)^3} {e_{Z_{\max}}^{S^1}} \\ \vs{0.2cm} & = & (24 + 4b_{\min}) + (24 + 4b_{\max}) - 4 = 36 \end{array} \end{equation} by Remark \ref{remark_localization_surface}. \vs{0.3cm} \noindent {\bf Case (3) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (y, x+y-E_1-E_2)$} \vs{0.1cm} \noindent From \eqref{equation_vanishing_IV_1} and \eqref{equation_IV_1_1}, we have \[ a = b+1, \quad a+c+d = 0, \quad 2a + b + c + d = 2 \quad (\Leftrightarrow 3b + c + d = 0). \] This implies that $a = 3b$ so that $b = \frac{1}{2}$ and it leads to a contradiction. Thus no such manifold exists. \vs{0.3cm} \noindent {\bf Case (4) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (E_1, x-E_2)$} \vs{0.1cm} \noindent We similarly obtain \[ c = -2, \quad b+d = 1, \quad 2a + b + c + d = 2 \quad (\Leftrightarrow 2a + c = 1). \] Then we see that $a = \frac{3}{2}$, which is not an integer. Therefore no such manifold exists. \vs{0.3cm} \noindent {\bf Case (5) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (x+y-E_1 - E_2, x-E_1)$} \vs{0.1cm} \noindent In this case, we have \[ a+c+d = 0, \quad b+c = 1, \quad 2a+b+c+d=2 \quad (\Leftrightarrow 2a + d = 1), \] and \[ \langle e(P_0^+)^2, [M_0] \rangle = 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 - (d+1)^2 = -1. \] Those equations have the unique solution $(a,b,c,d) = (1,1,0,-1)$ so that $\mathrm{PD}(Z_0) = x + y - E_2$. In particular, we have $[Z_0] \cdot [Z_0] = 0$ and $Z_0$ is connected, and therefore $Z_0 \cong S^2$ by the adjunction formula \eqref{equation_adjunction}. The Chern number can be obtained in exactly the same way as in \eqref{equation_Chern_IV_1_1}. See Table \ref{table_IV_1} : {\bf (IV-1-1.3)}. (The connectedness of $Z_0$ is proved as follows : if $Z_0^1$ and $Z_0^2$ are connected components of $Z_0$, then \begin{itemize} \item $\mathrm{Vol}(Z_0^1) = \mathrm{Vol}(Z_0^2) = 1$, and \item $[Z_0^1] \cdot [Z_0^1] = -1$ and $[Z_0^2] \cdot [Z_0^2] = 1$ since \[ [Z_0^i] \cdot [Z_0^i] + 2 - 2g_i = 1\quad \text{and} \quad [Z_0^1] \cdot [Z_0^1] + [Z_0^2] \cdot [Z_0^2] = 0. \] \end{itemize} Then $Z_0^1 \cong S^2$ by the adjunction formula \eqref{equation_adjunction} and $\mathrm{PD}(Z_0^1)$ should be on the list in Lemma \ref{lemma_list_exceptional}. However, it contradicts that $\mathrm{PD}(Z_0^1) \cdot (x + y -E_2 - \mathrm{PD}(Z_0^1)) = 0$. Therefore $Z_0$ has to be connected.) 
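\vs{0.2cm} \noindent The decomposition \eqref{equation_IV_1_1_1} and the Chern number computed in \eqref{equation_Chern_IV_1_1} can also be verified numerically. The snippet below is an illustrative check only; it encodes the intersection pairing on $M_0 = E_{S^2} \# 2\overline{\p^2}$ exactly as it is used in the displayed expansions, namely $x \cdot x = 0$, $x \cdot y = 1$, $y \cdot y = -1$, $E_i \cdot E_i = -1$ and all remaining products zero, which is an assumption read off from the formulas above rather than re-derived here.
\begin{verbatim}
# Illustrative check of (IV-1-1.1): intersection numbers on E_{S^2} # 2 CP^2-bar.
# Classes are dicts over the basis {x, y, E1, E2}.
PAIR = {('x','x'): 0, ('x','y'): 1, ('y','y'): -1, ('E1','E1'): -1, ('E2','E2'): -1}
BASIS = ['x', 'y', 'E1', 'E2']

def dot(u, v):
    total = 0
    for e1 in BASIS:
        for e2 in BASIS:
            key = (e1, e2) if (e1, e2) in PAIR else (e2, e1)
            total += u.get(e1, 0) * v.get(e2, 0) * PAIR.get(key, 0)
    return total

c1  = {'x': 3, 'y': 2, 'E1': -1, 'E2': -1}   # c_1(TM_0)
Z01 = {'x': 1, 'y': 1, 'E1': -1, 'E2': -1}   # PD(Z_0^1) = x + y - E_1 - E_2
Z02 = {'x': 1, 'E1': -1}                     # PD(Z_0^2) = x - E_1

print(dot(Z01, Z01), dot(Z02, Z02), dot(Z01, Z02))  # -1 -1 0 : two disjoint (-1)-spheres
print(dot(c1, Z01), dot(c1, Z02))                   # 1 1     : both have symplectic area 1
print((24 + 4*(-1)) + (24 + 4*(-1)) - 4)            # 36      : localization count c_1^3(M)
\end{verbatim}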
\vs{0.3cm} \noindent {\bf Case (6) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (x - E_1, x - E_2)$} \vs{0.1cm} \noindent Again by \eqref{equation_vanishing_IV_1} and \eqref{equation_IV_1_1}, we get \[ b+c = 1, \quad b+d = 1, \quad 2a + b + c + d = 2 \quad (\Leftrightarrow 2a + d = 2a + c = 1), \] and \[ \langle e(P_0^+)^2, [M_0] \rangle = 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 - (d+1)^2 = -1. \] Then we get the unique solution $(a,b,c,d) = (1,2,-1,-1)$ so that $\mathrm{PD}(Z_0) = x + 2y - E_1 - E_2$. Moreover, since $[Z_0] \cdot [Z_0] = -2$, the adjunction formula \eqref{equation_adjunction} implies that $Z_0$ consists of two spheres $Z_0^1$ and $Z_0^2$ such that $[Z_0^1] \cdot [Z_0^1] = [Z_0^2] \cdot [Z_0^2] = -1$. Applying Lemma \ref{lemma_list_exceptional}, we finally obtain \[ \mathrm{PD}(Z_0^1) = y \quad \text{and} \quad \mathrm{PD}(Z_0^2) = x + y - E_1 - E_2. \] See Table \ref{table_IV_1} : {\bf (IV-1-1.2)}. \vs{0.5cm} \noindent {\bf (IV-1-2) : $m = 2, \mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (-1,0)$} \vs{0.3cm} \noindent In this case, $Z_0$ is connected by the assumption $\mathrm{Vol}(Z_0) = 1$. Together with the condition $b_{\max} = 0$, we have \begin{equation}\label{equation_IV_1_2} \mathrm{Vol}(Z_0) = 2a+b+c+d = 1, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = 0 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = -2$} \end{equation} by Lemma \ref{lemma_Euler_extremum}. The latter equation can be re-written as \begin{equation}\label{equation_IV_1_2_Euler} 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 - (d+1)^2 = -2. \end{equation} Using \eqref{equation_vanishing_IV_1}, \eqref{equation_IV_1_2}, \eqref{equation_IV_1_2_Euler}, we analyze each case as follows: \vs{0.3cm} \noindent {\bf Case (1) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (E_1, E_2)$} \vs{0.1cm} \[ c=-2, \quad d=-2, \quad 2a+b+c+d = 1 \quad (\Leftrightarrow 2a + b = 5), \quad 2(a-1)(b-1) - (b-1)^2 = 0 \] so that $(a,b,c,d) = (2,1,-2,-2)$, i.e., $\mathrm{PD}(Z_0) = 2x + y - 2E_1 - 2E_2$. However, since $[Z_0] \cdot [Z_0] = -5$, no such manifold exists by the adjunction formula \eqref{equation_adjunction}. \vs{0.3cm} \noindent {\bf Case (2) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (E_1, y)$} \vs{0.1cm} \[ c = -2, \quad a = b+1, \quad 2a+b+c+d = 1 \quad (\Leftrightarrow 3b + d = 1), \quad \underbrace{2(a-1)(b-1) - (b-1)^2 - (d+1)^2}_{= ~-8b^2 + 12b - 5} = -1 \] so that $(a,b,c,d) = (2,1,-2,-2)$, i.e., $\mathrm{PD}(Z_0) = 2x + y - 2E_1 - 2E_2$. Again by \eqref{equation_adjunction}, no such manifold exists. \vs{0.3cm} \noindent {\bf Case (3) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (y, x+y-E_1 - E_2)$} \vs{0.1cm} \[ a = b+1, \quad a+c+d = 0, \quad 2a+b+c+d = 1 \quad (\Leftrightarrow a + b = 1 ~\Leftrightarrow ~b=0, a=1), \quad (c+1)^2 + (d+1)^2 = 1 \] so that $(a,b,c,d) = (1,0,-1,0)$ or $(1,0,0,-1)$ (where they are equal up to permutation on $\{E_1, E_2\}$). In this case, we have $Z_0 \cong S^2$ by \eqref{equation_adjunction}. See Table \ref{table_IV_1} : {\bf (IV-1-2)}. \vs{0.3cm} \noindent {\bf Case (4) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (E_1, x - E_2)$} \vs{0.1cm} \[ c=-2, \quad b+d = 1, \quad 2a+b+c+d = 1 \quad (\Leftrightarrow 2a + c = 0 \Leftrightarrow a=1), \quad (b-1)^2 + (d+1)^2 = 1 \] so that $(a,b,c,d) = (1,1,-2,0)$ or $(1,2,-2,-1)$. In either case, $[Z_0] \cdot [Z_0] < -1$, which violates \eqref{equation_adjunction} since $Z_0$ is connected (so that $[Z_0] \cdot [Z_0] = 2g - 1 \geq -1$). Therefore no such manifold exists.
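\vs{0.2cm} \noindent Before turning to the remaining two cases, the integral solutions found in Cases (1)--(4) above can be confirmed by the same kind of brute-force search as before. Again, this is only an optional sanity check over an assumed finite box and is not part of the proof.
\begin{verbatim}
# Sanity check for Cases (1)-(4) of (IV-1-2): Vol(Z_0) = 2a+b+c+d = 1 and
# <e(P_0^+)^2, [M_0]> = 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 - (d+1)^2 = -2.
def euler_square(a, b, c, d):
    return 2*(a-1)*(b-1) - (b-1)**2 - (c+1)**2 - (d+1)**2

def solutions(extra):
    R = range(-6, 7)  # assumed search box
    return [(a, b, c, d) for a in R for b in R for c in R for d in R
            if 2*a + b + c + d == 1 and euler_square(a, b, c, d) == -2
            and extra(a, b, c, d)]

cases = {
    '(1) (E1, E2)':       lambda a, b, c, d: c == -2 and d == -2,
    '(2) (E1, y)':        lambda a, b, c, d: c == -2 and a == b + 1,
    '(3) (y, x+y-E1-E2)': lambda a, b, c, d: a == b + 1 and a + c + d == 0,
    '(4) (E1, x-E2)':     lambda a, b, c, d: c == -2 and b + d == 1,
}
for name, cond in cases.items():
    print(name, solutions(cond))
# (1): [(2, 1, -2, -2)]                 -> [Z_0]^2 = -5, excluded by adjunction
# (2): [(2, 1, -2, -2)]                 -> the same class, excluded as well
# (3): [(1, 0, -1, 0), (1, 0, 0, -1)]   -> PD(Z_0) = x - E_1 up to permutation
# (4): [(1, 1, -2, 0), (1, 2, -2, -1)]  -> [Z_0]^2 < -1, excluded
\end{verbatim}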
\vs{0.3cm} \noindent {\bf Case (5) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (x+y - E_1 - E_2, x - E_1)$} \vs{0.1cm} \[ a+c+d = 0, \quad b+c = 1, \quad \underbrace{2a+b+c+d = 1}_{\Leftrightarrow ~a+b = 1, ~2a+d=0}, \quad \underbrace{-2b(b-1) - (b-1)^2 - (2-b)^2 - (2b-1)^2}_{= ~-8b^2 + 12b - 6} = -2 \] and we obtain $(a,b,c,d) = (0,1,0,0)$, i.e., $\mathrm{PD}(Z_0) = y$. However, we can check that a cycle representing $x - E_2$ vanishes on $M_1$, which leads to a contradiction. Therefore no such manifold exists. \vs{0.3cm} \noindent {\bf Case (6) : $(\mathrm{PD}(C_1), \mathrm{PD}(C_2)) = (x - E_1, x - E_2)$} \vs{0.1cm} \[ b+c=1, \quad b+d = 1, \quad \underbrace{2a+b+c+d = 1}_{\Leftrightarrow 2a + d =0, ~2a + c = 0}, \quad \underbrace{2(a-1)(b-1) - (b-1)^2 - (c+1)^2 - (d+1)^2}_{= ~4a(a-1) - 4a^2 - (1-2a)^2 - (1-2a)^2} = -2. \] So, $(a,b,c,d) = (0,1,0,0)$. Similar to {\bf Case (5)}, a cycle representing $x+y - E_1 -E_2$ vanishes on $M_1$, and therefore no such manifold exists. \vs{0.5cm} \end{proof} \begin{example}[Fano varieties of type {\bf (IV-1)}]\label{example_IV_1} In this example, we illustrate algebraic Fano varieties with holomorphic Hamiltonian $S^1$-actions whose topological fixed point data are given in Theorem \ref{theorem_IV_1}. \begin{enumerate} \item {\bf Case (IV-1-1.1)} \cite[2nd in Section 12.6]{IP} : Let $Y$ be the toric blow-up of $\p^3$ along two disjoint $T^3$-invariant lines where the moment map image is described in Figure \ref{figure_IV_1_1_1} (see also Figure \ref{figure_III} (b)). We take two disjoint lines $C_1$ and $C_2$ corresponding to the edges \[ e_1 = \overline{(0,1,3) ~(1,0,3)} \quad \text{and} \quad e_2 = \overline{(0,1,0) ~(1,0,0)}, \] respectively. Let $M$ be the monotone toric blow-up of $Y$ along $C_1$ and $C_2$ so that the resulting moment polytope (with respect to a moment map $\mu : M \rightarrow \R^3$) is illustrated on the right of Figure \ref{figure_IV_1_1_1}. Now, we take the circle subgroup of $T^3$ generated by $\xi = (1,0,1)$. It is straightforward to check (by calculating the inner product of $\xi$ with each primitive edge vector) that the action is semifree and the balanced moment map is given by \[ \mu_\xi = \langle \mu, \xi \rangle -2. \] Moreover, the fixed point set consists of \begin{itemize} \item $Z_{-2} = \mu^{-1}(\overline{(0,2,0) ~(0,3,0)})$ \item $Z_{-1} = \mu^{-1}(0,3,1) \cup \mu^{-1}(0,1,1)$ \item $Z_{0} = \mu^{-1}(\overline{(0,2,2) ~(0,1,2)}) \cup \mu^{-1}(\overline{(1,0,1) ~(2,0,0)})$ \item $Z_1 = \mu^{-1}(1,0,2) \cup \mu^{-1}(3,0,0)$ \item $Z_2 = \mu^{-1}(\overline{(2,0,2) ~(3,0,1)})$ \end{itemize} Furthermore, the symplectic areas of $Z_{-2}, Z_0^1, Z_0^2,$ and $Z_2$ are all 1 (see \eqref{equation_IV_1_1_1}) and hence $b_{\min} = b_{\max} = -1$. Thus the fixed point data of $M$ coincides with the one in Table \ref{table_IV_1} {\bf (IV-1-1.1)}. \begin{figure}[H] \scalebox{1}{\input{figure_II_4_1_1.pdf_tex}} \caption{\label{figure_IV_1_1_1} Blow up of $Y$ along two lines $C_1$ and $C_2$ lying on the same exceptional components} \end{figure} \item {\bf Case (IV-1-1.2)} \cite[3rd in Section 12.6]{IP} : Let $M = \p^1 \times X_3$ where $X_k$ denotes the blow-up of $\p^2$ at $k$ generic points. In particular, we assume that $X_3$ is the toric blow-up of $\p^2$ equipped with the standard toric structure. Equip $M$ with the monotone toric K\"{a}hler form $\omega$ such that $c_1(TM) = [\omega]$ so that the moment map $\mu : M \rightarrow \R^3$ has the image given in Figure \ref{figure_IV_1_1_2}. Take $\xi = (0,-1,1)$.
Then the $S^1$-action generated by $\xi$ is semifree and the balanced moment map is given by $\mu_\xi = \langle \mu, \xi \rangle$. The fixed point set consists of \begin{itemize} \item $Z_{-2} = \mu^{-1}(\overline{(0,2,0) ~(1,2,0)})$ \item $Z_{-1} = \mu^{-1}(0,1,0) \cup \mu^{-1}(2,1,0)$ \item $Z_{0} = \mu^{-1}(\overline{(0,2,2) ~(1,2,2)}) \cup \mu^{-1}(\overline{(1,0,0) ~(2,0,0)})$ \item $Z_1 = \mu^{-1}(0,1,2) \cup \mu^{-1}(2,1,2)$ \item $Z_2 = \mu^{-1}(\overline{(2,0,2) ~(1,0,2)})$ \end{itemize} It is not hard to check that the fixed point data of $M$ coincides with the one in Table \ref{table_IV_1} {\bf (IV-1-1.2)}. \begin{figure}[H] \scalebox{1}{\input{figure_II_4_1_2.pdf_tex}} \caption{\label{figure_IV_1_1_2} $\p^1 \times X_3$} \end{figure} \item {\bf Case (IV-1-1.3)} \cite[7th in Section 12.5]{IP} : Let $(W, \omega)$ be the monotone complete flag variety given in Example \ref{example_II_1} (1) equipped with the Hamiltonian $T^2$-action where the moment polytope is described on the left of Figure \ref{figure_IV_1_1_3}. Consider two edges $A$ and $B$ indicated in Figure \ref{figure_IV_1_1_3} and denote by $C_A$ and $C_B$ the corresponding $T^2$-invariant spheres, respectively. (Note that $C_A$ and $C_B$ are curves of bidegree $(1,0)$ and $(0,1)$ with respect to the Pl\"{u}cker embedding $W \subset \p^2 \times \p^2$.) Using local toric structures on the normal bundles of $C_A$ and $C_B$, respectively, we may take the $T^2$-equivariant blow-up of $W$ along $C_A$ and $C_B$; we denote the resulting manifold by $M$. The image of the moment map $\mu : M \rightarrow \R^2$ (with respect to the monotone K\"{a}hler form) is given on the right of Figure \ref{figure_IV_1_1_3}. \begin{figure}[H] \scalebox{1}{\input{figure_II_4_1_22.pdf_tex}} \caption{\label{figure_IV_1_1_3} Blow up of $W$ along two disjoint curves of bidegree $(1,0)$ and $(0,1)$. } \end{figure} Take the circle subgroup $S^1$ generated by $\xi = (1,0)$. Then the $S^1$-action is semifree and the balanced moment map is given by $\mu_\xi = \langle \mu, \xi \rangle - 2$. The fixed point set consists of \begin{itemize} \item $Z_{-2} = \mu^{-1}(\overline{(0,1) ~(0,2)})$ \item $Z_{-1} = \mu^{-1}(1,1) \cup \mu^{-1}(1,3)$ \item $Z_{0} = \mu^{-1}(\overline{(2,1) ~(2,3)})$ \item $Z_1 = \mu^{-1}(3,1) \cup \mu^{-1}(3,3)$ \item $Z_2 = \mu^{-1}(\overline{(4,2) ~(4,3)})$ \end{itemize} and we can easily check that this coincides with {\bf (IV-1-1.3)} in Table \ref{table_IV_1}. (Note that the symplectic areas of $Z_{-2}$ and $Z_2$ are both 1 so that $b_{\min} = b_{\max} = -1$.) \vs{0.5cm} \item {\bf Case (IV-1-2)} \cite[9th in Section 12.5]{IP} : Let $Y$ be the toric blow-up of $\p^3$ along two disjoint $T^3$-invariant lines where the moment map image is given on the left of Figure \ref{figure_IV_1_2} (see also Figure \ref{figure_III} (b)). Let $M$ be a toric blow-up of $Y$ along a $T$-invariant exceptional line (corresponding to the edge $A$ in Figure \ref{figure_IV_1_2}). With respect to the $T^3$-invariant monotone K\"{a}hler form, the image of a moment map $\mu$ is described on the right of Figure \ref{figure_IV_1_2}. \begin{figure}[H] \scalebox{1}{\input{figure_II_4_1_222.pdf_tex}} \caption{\label{figure_IV_1_2} Blow up of $Y$ along an exceptional line on $Y$. } \end{figure} \vs{-0.5cm} \noindent Take the circle subgroup $S^1$ of $T^3$ generated by $\xi = (-1, 0, -1)$. Then it is easy to check that the $S^1$-action is semifree and has the balanced moment map given by $\mu_\xi = \langle \mu, \xi \rangle + 2$.
Also, the fixed point set consists of \begin{itemize} \item $Z_{-2} = \mu^{-1}(\overline{(2,0,2) ~(3,0,1)})$ \item $Z_{-1} = \mu^{-1}(1,0,2) \cup \mu^{-1}(3,0,0)$ \item $Z_{-2} = \mu^{-1}(\overline{(0,1,2) ~(0,2,2)})$ \item $Z_1 = \mu^{-1}(0,3,1) \cup \mu^{-1}(1,0,0)$ \item $Z_2 = \mu^{-1}(\overline{(0,1,0) ~(0,3,0)})$ \end{itemize} where $\mathrm{Area}(Z_{-2}) = \mathrm{Area}(Z_{0}) = 1$ and $\mathrm{Area}(Z_{2}) = 2$. Thus one can see that the fixed point data of $M$ coincides with {\bf (IV-1-2)} in Table \ref{table_IV_1}. \end{enumerate} \end{example} \begin{theorem}\label{theorem_IV_2} Let $(M,\omega)$ be a six-dimensional closed monotone semifree Hamiltonian $S^1$-manifold with $c_1(TM) = [\omega]$. Suppose that $\mathrm{Crit} H = \{ 2, -1, 0, 1, -2\}$. If the number of fixed points of index two equals one, up to orientation of $M$, the list of all possible topological fixed point data is given in the Table \ref{table_IV_2} \begin{table}[h] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & $(M_0, [\omega_0])$ & $e(P_{-2}^+)$ &$Z_{-2}$ & $Z_{-1}$ & $Z_0$ & $Z_1$ & $Z_2$ & $b_2(M)$ & $c_1^3(M)$ \\ \hline \hline {\bf (IV-2-1.1)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = 2x + y - E_1$} &{\em pt} & $S^2$ & $3$ &$38$ \\ \hline {\bf (IV-2-1.2)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = \mathrm{PD}(Z_0^2) = x + y - E_1$} &{\em pt} & $S^2$ & $4$ &$38$ \\ \hline {\bf (IV-2-2.1)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x + y$} &{\em pt} & $S^2$ & $3$ &$42$ \\ \hline {\bf (IV-2-2.2)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = y$ \\ $\mathrm{PD}(Z_0^2)= x + y - E_1$} &{\em pt} & $S^2$ & $4$ &$42$ \\ \hline {\bf (IV-2-3)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x$} &{\em pt} & $S^2$ & $3$ &$46$ \\ \hline {\bf (IV-2-4)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = E_1$} &{\em pt} & $S^2$ & $3$ &$50$ \\ \hline {\bf (IV-2-5)} & \makecell{$(S^2 \times S^2 \# ~\overline{\p^2},$ \\$2x + 2y - E_1)$} & $-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 = Z_0^1 \dot \cup Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = x - E_1$ \\ $\mathrm{PD}(Z_0^2) = y - E_1$ \\ } &{\em pt} & $S^2$ & $4$ &$46$ \\ \hline {\bf (IV-2-6)} & \makecell{$(S^2 \times S^2 \# ~\overline{\p^2},$ \\$2x + 2y - E_1)$} & $-y$ &$S^2$ & {\em pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x - E_1$} &{\em pt} & $S^2$ & $3$ &$50$ \\ \hline \end{tabular} \vs{0.5cm} \caption{\label{table_IV_2} Topological fixed point data for $\mathrm{Crit} H = \{-2, -1,0,1, 2\}$ with $|Z_{-1}| = 1$.} \end{table} \end{theorem} \begin{proof} As we have seen in \eqref{equation_8_solutions}, $b_{\min}$ is either $-1$ or $0$. 
In each case, we have \begin{equation}\label{equation_bmin_IV_2} \begin{cases} M_{-2 + \epsilon} \cong E_{S^2}, \quad c_1(TM_0) = [\omega_0] = 3x + 2y - E_1, \quad e(P_{-2}^+) = kx - y = -x -y & \text{if $b_{\min} = -1$} \\ \vs{0.1cm} M_{-2 + \epsilon} \cong S^2 \times S^2, \quad c_1(TM_0) = [\omega_0] = 2x + 2y - E_1, \quad e(P_{-2}^+) = kx - y = -y & \text{if $b_{\min} = 0$} \end{cases} \end{equation} by Lemma \ref{lemma_Euler_extremum}, where $M_0$ is a one-point blow-up of $M_{-2 + \epsilon}$ and $E_1$ is the dual class of the exceptional divisor on $M_0$. Let $\mathrm{PD}(Z_0) = ax + by + cE_1$ for some $a,b,c \in \Z$. By the Duistermaat-Heckman theorem \eqref{equation_DH}, we have \[ [\omega_1] = [\omega_0] - e(P_0^+) = \begin{cases} (4-a)x + (3-b)y - (2+c)E_1 & \text{if $b_{\min} = -1$}\\ \vs{0.1cm} (2-a)x + (3-b)y - (2+c)E_1 & \text{if $b_{\min} = 0$}. \end{cases} \] Moreover, only one blow-down occurs at $M_1$ with the vanishing cycle $C$ so that \begin{equation}\label{equation_vanishing_IV_2} \langle [\omega_1], C \rangle = 0. \end{equation} By Lemma \ref{lemma_list_exceptional}, the list of all possible $\mathrm{PD}(C)$ is given by \[ u - E_1 - E_2, \quad E_1, \quad E_2 \] or equivalently, in terms of $\{x,y,E_1\}$, \begin{itemize} \item if $b_{\min} = -1$, then \[ x - E_1, \quad E_1, \quad y. \] \item if $b_{\min} = 0$, then \[ E_1, \quad x - E_1, \quad y - E_1. \] \end{itemize} Now we compute the fixed point data for the remaining six cases (on the right of \eqref{equation_8_solutions}) as follows. (Note that the Chern number computation can be easily obtained from the localization theorem \ref{theorem_localization} and Remark \ref{remark_localization_surface}.) \vs{0.3cm} \noindent {\bf (IV-2-1) : $m = 1, \mathrm{Vol}(Z_0) = 4, (b_{\min}, b_{\max}) = (-1,-1)$} \vs{0.3cm} \noindent Because $\mathrm{Vol}(Z_0) = 4$ and $b_{\max} = -1$, it follows that \begin{equation}\label{equation_IV_2_1} \mathrm{Vol}(Z_0) = 2a+b+c= 4, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = 1 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = 0$} \end{equation} by Lemma \ref{lemma_Euler_extremum}. The latter equation can be re-written as \[ 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 = 0 \quad \quad \text{as \quad $e(P_0^+) = (a-1)x + (b-1)y + (c+1)E_1$.} \] \vs{0.1cm} \noindent {\bf Case (1) :} $\mathrm{PD}(C) = x - E_1$. \vs{0.1cm} \noindent Since $b+c = 1$ by \eqref{equation_vanishing_IV_2}, we have $2a=3$ by \eqref{equation_IV_2_1}, and hence no such manifold exists. \vs{0.3cm} \noindent {\bf Case (2) :} $\mathrm{PD}(C) = E_1$. \vs{0.1cm} \noindent In this case, we have $c = -2$ by \eqref{equation_vanishing_IV_2}. Then \eqref{equation_IV_2_1} implies that \[ 2a + b = 6, \quad 2(a-1)(b-1) - (b-1)^2 = (b-1)(2a - b -1) = 1 \] which has the unique integral solution $(a,b,c) = (2,2,-2)$. So, $\mathrm{PD}(Z_0) = 2x + 2y - 2E_1$ and $[Z_0] \cdot [Z_0] = 0$. Then the adjunction formula \eqref{equation_adjunction} implies that \[ [Z_0] \cdot [Z_0] + \sum (2 - 2g_i) = 4 \quad \text{(sum is taken over connected components of $Z_0$)}. \] Thus there are at least two spheres, namely $Z_0^1$ and $Z_0^2$. Moreover, they satisfy (again by \eqref{equation_adjunction}) \[ [Z_0^1] \cdot [Z_0^1] \geq -1 \quad \text{and } \quad [Z_0^2] \cdot [Z_0^2] \geq -1. \] Note that if $[Z_0^i] \cdot [Z_0^i] = -1$, then $([Z_0] - [Z_0^i]) \cdot [Z_0^i] \neq 0$ by Lemma \ref{lemma_list_exceptional}. So, \[ [Z_0^1] \cdot [Z_0^1] \geq 0 \quad \text{and } \quad [Z_0^2] \cdot [Z_0^2] \geq 0.
\] In particular, we have $\mathrm{Vol}(Z_0^i) = [Z_0^i] \cdot [Z_0^i] + 2 \geq 2$ so that the only possibility is that \[ [Z_0^i] \cdot [Z_0^i] = 0, \quad i=1,2. \] One can easily see that $\mathrm{PD} (Z_0^1) = \mathrm{PD}(Z_0^2) = x + y - E_1$. See Table \ref{table_IV_2} : {\bf (IV-2-1.2)}. \vs{0.3cm} \noindent {\bf Case (3) :} $\mathrm{PD}(C) = y$. \vs{0.1cm} \noindent From \eqref{equation_vanishing_IV_2}, we get $a = b + 1$. Then, by \eqref{equation_IV_2_1}, \[ 3b + c = 2, \quad 2b(b-1) - (b-1)^2 - (c+1)^2 = 0, \] whose solution is $(a,b,c) = (2, 1, -1)$, that is, $\mathrm{PD}(Z_0) = 2x + y - E_1$ (and so $[Z_0] \cdot [Z_0] = 2$). Then the adjunction formula \[ [Z_0] \cdot [Z_0] + \sum (2 - 2g_i) = 4 \] implies that there exists a sphere component, say $Z_0^1$, of $Z_0$. If we denote by $\mathrm{PD}(Z_0^1) = \alpha x + \beta y + \gamma E_1$, it satisfies \[ 2\alpha\beta - \beta^2 - \gamma^2 + 2 = [Z_0^1] \cdot [Z_0^1] + 2 = \langle c_1(TM_0), [Z_0^1] \rangle = 2\alpha + \beta + \gamma. \] Also, since $([Z_0] - [Z_0^1]) \cdot [Z_0^1] = 0$, \[ \left( (2 - \alpha)x + (1 - \beta)y - (1+\gamma)E_1) \right) \cdot (\alpha x + \beta y + \gamma E_1) = -2\alpha\beta + \alpha + \beta + \gamma + \beta^2 + \gamma^2 = 0. \] Combining those two equations above, we get $\alpha = 2$ and \[ \beta^2 + \gamma^2 - 3\beta + \gamma + 2 = 0 \quad \Leftrightarrow \quad (\beta - \frac{3}{2})^2 + (\gamma + \frac{1}{2})^2 - \frac{1}{2} = 0. \] Therefore, $(\beta, \gamma) = (2, 0), (2, -1), (1, 0), (1, -1)$. In any case, $\mathrm{Vol}(Z_0^1) \geq 4$ which is impossible unless $Z_0^1 = Z_0$. This implies that $Z_0$ is connected and is a sphere. See {\bf (IV-2-1.1)}. \vs{0.3cm} \noindent {\bf (IV-2-2) : $m = 1, \mathrm{Vol}(Z_0) = 3, (b_{\min}, b_{\max}) = (-1,0)$} \vs{0.3cm} \noindent By Lemma \ref{lemma_Euler_extremum}, it follows that \begin{equation}\label{equation_IV_2_2} \mathrm{Vol}(Z_0) = 2a+b+c= 3, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = 0 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = -1$} \end{equation} where the latter equation is equivalent to \[ 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 = -1. \] \vs{0.1cm} \noindent {\bf Case (1) :} $\mathrm{PD}(C) = x - E_1$. \vs{0.1cm} \noindent By \eqref{equation_vanishing_IV_2}, we have $b+c = 1$ so that $a = 1$ and $(b-1)^2 + (c+1)^2 = 1$ (and so $(b,c) = (1, 0)$ or $(2, -1)$). \vs{0.1cm} \begin{itemize} \item If $(a,b,c) = (1,1,0)$, then $\mathrm{PD}(Z_0) = x + y$ and $[Z_0] \cdot [Z_0] = 1$ so that there exists at least one sphere component, denote by $Z_0^1$, in $Z_0$. Suppose that $Z_0$ is not connected. Then $\mathrm{Vol}(Z_0^1) = 1$ or $2$. If $\mathrm{Vol}(Z_0^1) = 1$, then $[Z_0^1] \cdot [Z_0^1] = -1$ by the adjunction formula, and hence $\mathrm{PD}(Z_0^1) = E_1, y, x - E_1$ by Lemma \eqref{lemma_list_exceptional}. In either case, it follows that \[ [Z_0^1] \cdot ([Z_0] - [Z_0^1]) \neq 0 \] which leads to a contradiction. So, $\mathrm{Vol}(Z_0^1) \neq 1$. On the other hand, if $\mathrm{Vol}(Z_0^1) = 2$, then $[Z_0^1] \cdot [Z_0^1] = 0$ by the adjunction formula. If we let $\mathrm{PD}(Z_0^1) = \alpha x + \beta y + \gamma E_1$, then \begin{itemize} \item $2\alpha\beta - \beta^2 - \gamma^2 = 0$, \quad ($\because ~[Z_0^1]\cdot [Z_0^1] = 0$), \item $\alpha - 2\alpha\beta + \beta^2 + \gamma^2 = 0$, \quad ($\because ~[Z_0^1] \cdot ([Z_0] - [Z_0^1]) = 0$), \item $2\alpha + \beta + \gamma = 2$ \quad ($\because ~\mathrm{Vol}(Z_0^1) = 2$) \end{itemize} whose (real) solution does not exist. Thus $Z_0$ is connected and we have $Z_0 \cong S^2$. 
See Table \ref{table_IV_2}: {\bf (IV-2-2.1)}.\vs{0.2cm} \item If $(a,b,c) = (1, 2, -1)$, i.e., $\mathrm{PD}(Z_0) = x + 2y - E_1$, then we have $[Z_0] \cdot [Z_0] = -1$ and there are at least two sphere components $Z_0^1$ and $Z_0^2$ in $Z_0$ by the adjunction formula. Since $\mathrm{Vol}(Z_0^1) + \mathrm{Vol}(Z_0^2) \leq 3$, we may assume that $\mathrm{Vol}(Z_0^1) = 1$ (so that $[Z_0^1] \cdot [Z_0^1] = -1$). Then we obtain $\mathrm{PD}(Z_0^1) = y$ by the fact that $([Z_0] - [Z_0^1]) \cdot [Z_0^1] = 0$ and Lemma \ref{lemma_list_exceptional}. So, \[ Z_0^1 \cong S^2 ~(\mathrm{PD}(Z_0^1) = y) \quad \text{and} \quad Z_0^2 \cong S^2 ~(\mathrm{PD}(Z_0^2) = x + y - E_1). \] See Table \ref{table_IV_2}: {\bf (IV-2-2.2)}. (Note that $\mathrm{Vol}(Z_0^2) \neq 1$; otherwise $\mathrm{PD}(Z_0^2)$ would also have to be $y$, which contradicts $[Z_0^1] \cdot [Z_0^2] = 0$.) \end{itemize} \vs{0.3cm} \noindent {\bf Case (2) :} $\mathrm{PD}(C) = E_1$. \vs{0.1cm} \noindent Since $c=-2$ by \eqref{equation_vanishing_IV_2}, we have \[ 2a + b = 5 \quad \text{and} \quad 2(a-1)(b-1) - (b-1)^2 = 0 \] which has a unique integral solution $(a,b,c) = (2,1,-2)$. However, since \[ [\omega_1] \cdot y = (2x + 2y) \cdot y = 0, \] the exceptional divisor representing $y$ vanishes at $M_1$, i.e., two simultaneous blow-downs occur at $M_1$. Thus no such manifold exists. \vs{0.3cm} \noindent {\bf Case (3) :} $\mathrm{PD}(C) = y$. \vs{0.1cm} \noindent Now we have $a = b+1$ and so \[ 3b+c = 1 \quad \text{and} \quad 2b(b-1) - (b-1)^2 - (c+1)^2 = -1 \] by \eqref{equation_IV_2_2}. This has a unique integral solution $(a,b,c) = (2,1,-2)$. This case is exactly the same as in {\bf Case (2)} above and we have $[\omega_1] \cdot E_1 = 0$. Then two simultaneous blow-downs occur at $M_1$, which is impossible. Therefore there is no such manifold. \vs{0.3cm} \noindent {\bf (IV-2-3) : $m = 1, \mathrm{Vol}(Z_0) = 2, (b_{\min}, b_{\max}) = (-1,1)$} \vs{0.3cm} \noindent In this case, we have \begin{equation}\label{equation_IV_2_3} \mathrm{Vol}(Z_0) = 2a+b+c= 2, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = -1 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = -2$} \end{equation} where the latter one is \[ 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 = -2. \] \vs{0.1cm} \noindent {\bf Case (1) :} $\mathrm{PD}(C) = x - E_1$. \vs{0.1cm} \noindent Using $b+c = 1$ by \eqref{equation_vanishing_IV_2}, we have $a = \frac{1}{2}$. Thus no such manifold exists. \vs{0.3cm} \noindent {\bf Case (2) :} $\mathrm{PD}(C) = E_1$. \vs{0.1cm} \noindent Substituting $c = -2$, we have \[ 2a + b = 4, \quad 2(a-1)(b-1) - (b-1)^2 = -1 \] and therefore the only possible solution is $(a,b) = (1,2)$, i.e., $\mathrm{PD}(Z_0) = x + 2y - 2E_1$. However, the adjunction formula \eqref{equation_adjunction} implies that \[ [Z_0] \cdot [Z_0] + \sum (2 - 2g_i) = -4 + \sum (2 - 2g_i) = 2, \] i.e., there are three sphere components $Z_0^1, Z_0^2, Z_0^3$ and hence $\mathrm{Vol}(Z_0) \geq 3$, which leads to a contradiction. So, no such manifold exists. \vs{0.3cm} \noindent {\bf Case (3) :} $\mathrm{PD}(C) = y$. \vs{0.1cm} \noindent In this case, $a = b+1$ so that \[ 3b + c = 0, \quad 2b(b-1) - (b-1)^2 - (c+1)^2 = -2 \] and it has a unique solution $(a,b,c) = (1,0,0)$. If $Z_0$ is not connected, then the adjunction formula implies that $Z_0$ consists of two spheres $Z_0^1$ and $Z_0^2$ each of which has symplectic area $1$ (so that it is an exceptional sphere). On the other hand, the fact that $[Z_0^1] \cdot [Z_0^2] = 0$ together with Lemma \ref{lemma_list_exceptional} implies that the dual classes of $Z_0^1$ and $Z_0^2$ are $y$ and $E_1$, respectively. Then it follows that $\mathrm{PD}(Z_0) = x \neq \mathrm{PD}(Z_0^1) + \mathrm{PD}(Z_0^2)$. So, $Z_0$ is connected and \[ Z_0 \cong S^2, \quad \mathrm{PD}(Z_0) = x. \] See Table \ref{table_IV_2}: {\bf (IV-2-3)}.
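\vs{0.2cm} \noindent The arithmetic behind {\bf (IV-2-3)} can again be confirmed by a short computation. The snippet below is an optional check only; both the finite search range and the intersection pairing $x \cdot x = 0$, $x \cdot y = 1$, $y \cdot y = -1$, $E_1 \cdot E_1 = -1$ on $M_0 = E_{S^2} \# \overline{\p^2}$ are assumptions read off from the displayed formulas above.
\begin{verbatim}
# Optional check for (IV-2-3).  Classes are triples (coefficients of x, y, E1).
def dot(u, v):
    return u[0]*v[1] + u[1]*v[0] - u[1]*v[1] - u[2]*v[2]

R = range(-6, 7)                                   # assumed search box
vol   = lambda a, b, c: 2*a + b + c                # symplectic area of Z_0
euler = lambda a, b, c: 2*(a-1)*(b-1) - (b-1)**2 - (c+1)**2

# Case (2): PD(C) = E_1, i.e. c = -2.
print([(a, b, c) for a in R for b in R for c in R
       if c == -2 and vol(a, b, c) == 2 and euler(a, b, c) == -2])     # [(1, 2, -2)]
print(dot((1, 2, -2), (1, 2, -2)))                                     # -4, so excluded

# Case (3): PD(C) = y, i.e. a = b + 1.
print([(a, b, c) for a in R for b in R for c in R
       if a == b + 1 and vol(a, b, c) == 2 and euler(a, b, c) == -2])  # [(1, 0, 0)]

# Connectedness: among the area-one exceptional classes E1, y, x - E1,
# the only disjoint pair is (y, E1), and y + E1 differs from PD(Z_0) = x.
exc = {'E1': (0, 0, 1), 'y': (0, 1, 0), 'x-E1': (1, 0, -1)}
print([(p, q) for p in exc for q in exc if p < q and dot(exc[p], exc[q]) == 0])
# [('E1', 'y')]
\end{verbatim}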
\vs{0.3cm} \noindent {\bf (IV-2-4) : $m = 1, \mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (-1,2)$} \vs{0.3cm} \noindent As $\mathrm{Vol}(Z_0) = 1$, $Z_0$ is connected. Also, \begin{equation}\label{equation_IV_2_4} \mathrm{Vol}(Z_0) = 2a+b+c= 1, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = -2 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = -3$} \end{equation} i.e., \[ 2(a-1)(b-1) - (b-1)^2 - (c+1)^2 = -3. \] \vs{0.1cm} \noindent {\bf Case (1) :} $\mathrm{PD}(C) = x - E_1$. \vs{0.1cm} \noindent We have $b+c = 1$, so that $a = 0$ and $(a,b,c) = (0, 2,-1)$ or $(0,0,1)$. If $(a,b,c) = (0, 2,-1)$, then $\mathrm{PD}(Z_0) = 2y - E_1$ and $[Z_0] \cdot [Z_0] = -5$. This is impossible by the adjunction formula since $Z_0$ is connected. So, no such manifold exists. On the other hand, if $(a,b,c) = (0,0,1)$, i.e., $\mathrm{PD}(Z_0) = E_1$, then we have \[ Z_0 \cong S^2, \quad \mathrm{PD}(Z_0) = E_1. \] See Table \ref{table_IV_2}: {\bf (IV-2-4)}. \vs{0.3cm} \noindent {\bf Case (2) :} $\mathrm{PD}(C) = E_1$. \vs{0.1cm} \noindent Now, we have $c = -2$ and \eqref{equation_IV_2_4} implies that \[ 2a + b = 3, \quad 2(a-1)(b-1) - (b-1)^2 = -2 \] which has no integral solution. Thus there is no such manifold. \vs{0.3cm} \noindent {\bf Case (3) :} $\mathrm{PD}(C) = y$. \vs{0.1cm} \noindent It follows that $a = b+1$, and we obtain \[ 3b + c = -1, \quad 2b(b-1) - (b-1)^2 - (c+1)^2 = -3 \] which has no integral solution either. Thus there is no such manifold. \vs{0.3cm} \noindent {\bf (IV-2-5) : $m = 1, \mathrm{Vol}(Z_0) = 2, (b_{\min}, b_{\max}) = (0,0)$} \vs{0.3cm} \noindent Since $b_{\min} = 0$, we have $M_{-2 + \epsilon} \cong S^2 \times S^2$ and so $e(P_{-2}^+) = -y$ and $c_1(TM_0) = 2x + 2y - E_1$, see \eqref{equation_bmin_IV_2}. Also, Lemma \ref{lemma_Euler_extremum} implies that \begin{equation}\label{equation_IV_2_5} \mathrm{Vol}(Z_0) = 2a+2b+c= 2, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = 0 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = -1$} \end{equation} where the latter equation can be re-written as \[ 2a(b-1) - (c+1)^2 = -1. \] Note that if $Z_0$ is connected, then $[Z_0] \cdot [Z_0] = 0$ by the adjunction formula. Also, if $Z_0$ is disconnected with two components $Z_0^1$ and $Z_0^2$ such that $\mathrm{Vol}(Z_0^1) = \mathrm{Vol}(Z_0^2) = 1$, then the adjunction formula implies that $[Z_0^1] \cdot [Z_0^1] = [Z_0^2] \cdot [Z_0^2] = -1$. In particular, $[Z_0] \cdot [Z_0] = -2$. Recall that a possible dual class of the cycle $C$ vanishing at the reduced space $M_1$ is $x - E_1$, $E_1$, or $y - E_1$ by Lemma \ref{lemma_list_exceptional}. \vs{0.1cm} \noindent {\bf Case (1) :} $\mathrm{PD}(C) = x - E_1$. \vs{0.1cm} \noindent By \eqref{equation_vanishing_IV_2}, we have $b+c = 1$ so that \[ 2a -c = 0, \quad -2ac - (c+1)^2 = -1 \] which has a unique integral solution $(a,b,c) = (0,1,0)$. However, in this case, a cycle representing $y - E_1$ also vanishes at $M_1$. In other words, two blow-downs occur at $M_1$. So, no such manifold exists.
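\vs{0.2cm} \noindent The simultaneous vanishing observed in Case (1) can be made explicit with a short computation. As before, this is an optional check; the search box and the pairing $x \cdot x = y \cdot y = 0$, $x \cdot y = 1$, $E_1 \cdot E_1 = -1$ on $M_0 = S^2 \times S^2 \# \overline{\p^2}$ are assumptions read off from the displayed formulas.
\begin{verbatim}
# Optional check for Case (1) of (IV-2-5).  Classes are triples (x, y, E1).
def dot(u, v):
    return u[0]*v[1] + u[1]*v[0] - u[2]*v[2]

R = range(-6, 7)                              # assumed search box
sols = [(a, b, c) for a in R for b in R for c in R
        if b + c == 1                         # vanishing of C with PD(C) = x - E_1
        and 2*a + 2*b + c == 2                # Vol(Z_0) = 2
        and 2*a*(b-1) - (c+1)**2 == -1]       # <e(P_0^+)^2, [M_0]> = -1
print(sols)                                   # [(0, 1, 0)]

# For (a, b, c) = (0, 1, 0) we get [omega_1] = 2x + 2y - 2E_1, and both
# x - E_1 and y - E_1 pair to zero with it, so two blow-downs would occur.
a, b, c = sols[0]
omega1 = (2 - a, 3 - b, -(2 + c))             # coefficients of [omega_1]
print(dot(omega1, (1, 0, -1)), dot(omega1, (0, 1, -1)))   # 0 0
\end{verbatim}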
\vs{0.3cm} \noindent {\bf Case (2) :} $\mathrm{PD}(C) = E_1$. \noindent In this case, $c = -2$ so that \[ a+b = 2, \quad 2a(b-1) = 0 \] where the solutions are $(a,b,c) = (0,2,-2)$ or $(1,1,-2)$. If $(a,b,c) = (0,2,-2)$, then $[Z_0] \cdot [Z_0] = -4$ so that there are at least three spheres in $Z_0$ by the adjunction formula, which is impossible since $\mathrm{Vol}(Z_0) = 2$. Thus there is no such manifold. If $(a,b,c) = (1,1,-2)$, then $[Z_0] \cdot [Z_0] = -2$ and so $Z_0$ consists of two spheres, say $Z_0^1$ and $Z_0^2$, each of which has self-intersection number $-1$ by the adjunction formula. By Lemma \ref{lemma_list_exceptional}, we get \[ Z_0^1 \cong Z_0^2 \cong S^2, \quad \mathrm{PD}(Z_0^1) = x - E_1, \quad \mathrm{PD}(Z_0^2) = y - E_1. \] See Table \ref{table_IV_2}: {\bf (IV-2-5)}. \vs{0.3cm} \noindent {\bf Case (3) :} $\mathrm{PD}(C) = y - E_1$. \noindent From \eqref{equation_vanishing_IV_2}, we have $a + c = 0$ and so \[ a + 2b = 2, \quad 2a(b-1) - (1-a)^2 = -1 \] and it has the unique solution $(a,b,c) = (0, 1, 0)$. Similar to {\bf Case (1)}, a cycle representing $x - E_1$ also vanishes at $M_1$ so that two blow-downs occur simultaneously at $M_1$. Therefore there is no such manifold. \vs{0.3cm} \noindent {\bf (IV-2-6) : $m = 1, \mathrm{Vol}(Z_0) = 1, (b_{\min}, b_{\max}) = (0,1)$} \vs{0.3cm} \noindent Note that $Z_0$ is connected and the condition $b_{\min} = 0$ implies that $e(P_{-2}^+) = -y$ by Lemma \ref{lemma_Euler_extremum}. Moreover, $\mathrm{Vol}(Z_0) = 1$ and $b_{\max} = 1$ imply that \begin{equation}\label{equation_IV_2_6} \mathrm{Vol}(Z_0) = 2a+2b+c= 1, \quad \langle e(P_2^-)^2, [M_{2-\epsilon}] \rangle = -1 ~\text{so that $\langle e(P_0^+)^2, [M_0] \rangle = -2$} \end{equation} where the latter one is equivalent to \[ 2a(b-1) - (c+1)^2 = -2. \] \vs{0.1cm} \noindent {\bf Case (1) :} $\mathrm{PD}(C) = x - E_1$. \vs{0.1cm} \noindent Since $b+c = 1$, we have \[ 2a + b = 0, \quad 2a(-2a-1) - (2 + 2a)^2 = -2 \] so that $(a,b,c) = (-1,2,-1)$. That is, $\mathrm{PD}(Z_0) = -x + 2y - E_1$ and so $[Z_0] \cdot [Z_0] = -5$. This contradicts the fact that $Z_0$ is connected, by the adjunction formula. So, there is no such manifold. \vs{0.3cm} \noindent {\bf Case (2) :} $\mathrm{PD}(C) = E_1$. \noindent We have $c = -2$ by \eqref{equation_vanishing_IV_2}, which implies that $2a + 2b = 3$. Thus no such manifold exists. \vs{0.3cm} \noindent {\bf Case (3) :} $\mathrm{PD}(C) = y - E_1$. \noindent In this case, we have $a + c = 0$ so that \[ a + 2b = 1, \quad 2a(b-1) - (1-a)^2 = -2. \] It has a unique solution $(a,b,c) = (1,0,-1)$, i.e., \[ Z_0 \cong S^2, \quad \mathrm{PD}(Z_0) = x - E_1. \] See Table \ref{table_IV_2}: {\bf (IV-2-6)}. \end{proof} \begin{example}[Fano variety of type {\bf (IV-2)}]\label{example_IV_2} In this example, we describe Fano varieties of type {\bf (IV-2)} listed in Theorem \ref{theorem_IV_2}. \begin{itemize} \item {\bf (IV-2-1.1)} \cite[20th in Section 12.4]{IP} : Recall that a smooth quadric in $\p^4$, isomorphic to a coadjoint orbit of $\mathrm{SO}(5)$, admits a maximal torus $T^2$-action whose moment map image is given on the left of Figure \ref{figure_IV_2_1_1} (see also \cite[Example 6.4]{Cho}). Let $M$ be the blow-up of the smooth quadric along two disjoint $T^2$-invariant spheres with the induced $T^2$-action. Then the corresponding moment map can be described as on the right of Figure \ref{figure_IV_2_1_1}.
\begin{figure}[h] \scalebox{1}{\input{figure_II_4_2_1_1.pdf_tex}} \caption{\label{figure_IV_2_1_1} Blow up of the smooth quadric along two disjoint lines} \end{figure} Now, we take the $S^1$-subgroup of $T^2$ generated by $\xi = (0,1) \in \frak{t}$. Then the fixed point set consists of \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(0,-2) ~(1,-2)}$ and $\mathrm{Vol}(Z_{-2}) = 1$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (2,-1)$, \item $Z_{0} = S^2$ with $\mu(Z_{0}) = \overline{(-2,0) ~(2,0)}$ and $\mathrm{Vol}(Z_{0}) = 4$, \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (-2,1)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(-1,2) ~(0,2)}$ and $\mathrm{Vol}(Z_{2}) = 1$. \end{itemize} \vs{0.5cm} \item {\bf (IV-2-1.2)} \cite[8th in Section 12.5]{IP} : Consider $X = \p^1 \times \p^1 \times \p^1$ equipped with the $T^2$-action defined by \[ (t_1, t_2) \cdot ([x_0 : x_1], [y_0 : y_1], [z_0 : z_1]) := ([t_1x_0 : x_1], [t_2y_0 : y_1], [t_2z_0 : z_1]). \] With respect to the normalized monotone K\"{a}hler form on $X$, the moment map image is given in the middle of Figure \ref{figure_IV_2_1_2}. (Note that the red double line in the middle indicates the image of the upper-left and lower-right red edges in the first of Figure \ref{figure_IV_2_1_2}.) Let $C$ be the $T$-invariant sphere given by \[ C = \{ ([1:0], [y_0 : y_1], [y_0 : y_1]) ~|~ [y_0:y_1] \in \p^1\} \] whose moment map image is indicated by the blue line in Figure \ref{figure_IV_2_1_2}. Then, let $M$ be the $T^2$-equivariant blow-up of $X$ along $C$, whose moment map image is described in the third of Figure \ref{figure_IV_2_1_2}. The fixed point set consists of \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(1,-2) ~(2,-2)}$ and $\mathrm{Vol}(Z_{-2}) = 1$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (0,-1)$, \item $Z_{0} = S^2 ~\dot \cup~ S^2$ with $\mu(Z_{0}^1) =\mu(Z_{0}^2) = \overline{(0,0) ~(2,0)}$ and $\mathrm{Vol}(Z_{0}^1) = \mathrm{Vol}(Z_{0}^2) = 2$, \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (0,1)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(1,2) ~(2,2)}$ and $\mathrm{Vol}(Z_{2}) = 1$. \end{itemize} \vs{0.3cm} \begin{figure}[h] \scalebox{1}{\input{figure_II_4_2_1_2.pdf_tex}} \caption{\label{figure_IV_2_1_2} Blow up of $\p^1 \times \p^1 \times \p^1$ along $C$} \end{figure} \item {\bf (IV-2-2.1)} \cite[24th in Section 12.4]{IP} : Consider the complete flag variety $\mcal{F}(3) \cong U(3) / T^3$ together with the induced $T^2$-action whose moment map image is given in the first of Figure \ref{figure_IV_2_2_1}. (See also Example \ref{example_II_1}.) Let $C$ be a $T$-invariant sphere (for instance, take a sphere whose moment map image is $\overline{(0,0) ~(0,2)}$ as in Figure \ref{figure_IV_2_2_1}). Let $M$ be the $T^2$-equivariant blow-up of $\mcal{F}(3)$ along $C$. Then the moment map image for the induced $T^2$-action on $M$ can be depicted as in the second of Figure \ref{figure_IV_2_2_1}. The fixed point set consists of \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(1,0) ~(2,0)}$ and $\mathrm{Vol}(Z_{-2}) = 1$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (1,1)$, \item $Z_{0} = S^2$ with $\mu(Z_{0}) = \overline{(1,2) ~(4,2)}$ and $\mathrm{Vol}(Z_{0}) = 3$, \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (1,3)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(2,4) ~(4,4)}$ and $\mathrm{Vol}(Z_{2}) = 2$.
\end{itemize} \vs{0.3cm} \begin{figure}[H] \scalebox{1}{\input{figure_II_4_2_2_2.pdf_tex}} \caption{\label{figure_IV_2_2_1} Blow up of $\mcal{F}(3)$ along $C$} \end{figure} \item {\bf (IV-2-2.2)} \cite[10th in Section 12.5]{IP} : Consider $\C P^1 \times ~X_2$ with the standard $T^3$-action, where $X_k$ is the $k$-times blow-up of $\p^2$. The corresponding moment polytope is given in Figure \ref{figure_IV_2_2_2}. Take a circle subgroup of $T^3$ generated by $\xi = (-1,1,0)$. Then one can easily check that the $S^1$-action is semifree and the fixed point set consists of \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(2,0,0) ~(2,0,1)}$ and $\mathrm{Vol}(Z_{-2}) = 1$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (1,0,2)$, \item $Z_{0} = S^2$ with $\mu(Z_{0}) = \overline{(2,2,0) ~(2,2,1)}$ and $\mathrm{Vol}(Z_{0}) = 1$, \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (1,2,2)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(0,2,0) ~(0,2,2)})$ and $\mathrm{Vol}(Z_{2}) = 2$. \end{itemize} \vs{0.3cm} \begin{figure}[H] \scalebox{1}{\input{figure_II_4_2_2_1.pdf_tex}} \caption{\label{figure_IV_2_2_2} $S^2 \times X_2$} \end{figure} \item {\bf (IV-2-3)} \cite[26th in Section 12.4]{IP} : Consider $\p^3$ with the standard $T^3$-action and let $M$ be the $T^3$-equivariant blow-up of $\p^3$ along a disjoint union of a fixed point and a $T^3$-invariant sphere. Then the moment map image of $M$ is described in Figure \ref{figure_IV_2_3}. If we take a circle subgroup of $T^3$ generated by $\xi = (0,-1,-1)$, then the $S^1$-action becomes semifree and the fixed point set is give by \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(0,2,2) ~(0,3,1)}$ and $\mathrm{Vol}(Z_{-2}) = 1$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (0,3,0)$, \item $Z_{0} = S^2$ with $\mu(Z_{0}) = \overline{(0,0,2) ~(2,0,2)}$ and $\mathrm{Vol}(Z_{0}) = 2$, \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (3,0,1)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(0,0,0) ~(3,0,0)})$ and $\mathrm{Vol}(Z_{2}) = 3$. \end{itemize} \vs{0.3cm} \textbf{(e.g. the blow-up of $\C P^3$ with center a disjoint union of a point and a line with $c_1^3(M) = 46$)} (with $\xi = (0,-1,-1)$.) \begin{figure}[H] \scalebox{0.8}{\input{figure_II_4_2_3.pdf_tex}} \caption{\label{figure_IV_2_3} Toric blow up of $\p^3$ along a fixed point and a $T^3$-invariant sphere} \end{figure} \item {\bf (IV-2-4)} \cite[29th in Section 12.4]{IP} : Consider $V_7$, the $T^3$-equivariant blow-up of $\p^3$ at a fixed point. (See also Example \ref{example_II_1}.) Take $C$ be any $T^3$-invariant sphere lying on the exceptional divisor of the blow-up $V_7 \rightarrow \p^3$. Then the moment map image is given in Figure \ref{figure_IV_2_4}. Take a circle subgroup generated by $\xi = (0,-1,-1)$. \begin{figure}[H] \scalebox{0.7}{\input{figure_II_4_2_4.pdf_tex}} \caption{\label{figure_IV_2_4} Blow up of $V_7$ along a $T$-invariant sphere on the exceptional divisor} \end{figure} \noindent The $S^1$-action is semifree and the fixed point set consists of \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(0,4,0) ~(0,3,1)}$ and $\mathrm{Vol}(Z_{-2}) = 1$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (0,1,2)$, \item $Z_{0} = S^2$ with $\mu(Z_{0}) = \overline{(0,0,2) ~(1,0,2)}$ and $\mathrm{Vol}(Z_{0}) = 1$, \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (3,0,1)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(0,0,0) ~(4,0,0)})$ and $\mathrm{Vol}(Z_{2}) = 4$. 
\end{itemize} \vs{0.3cm} \item {\bf (IV-2-5)} \cite[12th in Section 12.5]{IP} : We consider $Y$, the blow-up of $\p^3$ along a $T^3$-invariant line (see Example \ref{example_III}). Let $C_1$ and $C_2$ be two disjoint $T^3$-invariant lines lying on the exceptional divisor of $Y \rightarrow \p^3$. See Figure \ref{figure_IV_2_5} (a). Let $M$ be the $T^3$-equivariant blow-up of $Y$ along $C_1$ and $C_2$. Then the moment map image of the induced $T^3$-action is given in Figure \ref{figure_IV_2_5}. Take an $S^1$-subgroup of $T^3$ generated by $\xi = (1,0,1)$. One can easily check that the $S^1$-action is semifree and the fixed point set is given by \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(0,4,0) ~(0,2,0)}$ and $\mathrm{Vol}(Z_{-2}) = 2$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (0,1,1)$, \item $Z_{0} = S^2 ~\dot \cup ~ S^2$ with \[ \mu(Z_{0}^1) = \overline{(0,1,2) ~(0,2,2)}, \quad \mu(Z_{0}^2) = \overline{(1,0,1) ~(2,0,0)}, \quad \quad \mathrm{Vol}(Z_{0}^1) = \mathrm{Vol}(Z_{0}^2) = 1, \] \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (1,0,2)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(2,0,2) ~(4,0,0)}$ and $\mathrm{Vol}(Z_{2}) = 2$. \end{itemize} \vs{0.3cm} \begin{figure}[H] \scalebox{0.7}{\input{figure_II_4_2_5.pdf_tex}} \caption{\label{figure_IV_2_5} Blow up of $Y$ along two disjoint $T$-invariant spheres on the exceptional divisor} \end{figure} \item {\bf (IV-2-6)} \cite[30th in Section 12.4]{IP} : Consider the $T^3$-equivariant blow-up $V_7$ of $\p^3$ at a fixed point and let $M$ be the blow-up of $V_7$ along a $T^3$-invariant sphere passing through the exceptional divisor of $V_7 \rightarrow \p^3$. Then the moment map image of $M$ with respect to the induced action is given in Figure \ref{figure_IV_2_6}. \begin{figure}[H] \scalebox{0.7}{\input{figure_II_4_2_6.pdf_tex}} \caption{\label{figure_IV_2_6} Blow up of $V_7$ along a $T$-invariant sphere passing through the exceptional divisor} \end{figure} Take a circle subgroup of $T^3$ generated by $\xi = (-1,0,-1)$. Then the action is semifree and the fixed point set consists of \begin{itemize} \item $Z_{-2} = S^2$ with $\mu(Z_{-2}) = \overline{(4,0,0) ~(2,0,2)}$ and $\mathrm{Vol}(Z_{-2}) = 2$, \item $Z_{-1} = \mathrm{pt}$ with $\mu(Z_{-1}) = (1,0,2)$, \item $Z_{0} = S^2$ with $\mu(Z_{0}) = \overline{(0,1,2) ~(0,2,2)}$ and $\mathrm{Vol}(Z_{0}) = 1$, \item $Z_1 = \mathrm{pt}$ with $\mu(Z_1) = (1,0,0)$, \item $Z_2 = S^2$ with $\mu(Z_2) = \overline{(0,1,0) ~(0,4,0)}$ and $\mathrm{Vol}(Z_{2}) = 3$. \end{itemize} \vs{0.3cm} \end{itemize} \end{example} \section{Main Theorem} \label{secMainTheorem} In this section, we prove our main theorem (Theorem \ref{theorem_main}). \begin{theorem}[Theorem \ref{theorem_main}] Let $(M,\omega)$ be a six-dimensional closed monotone symplectic manifold equipped with a semifree Hamiltonian circle action. Suppose that the maximal and the minimal fixed component of the action are both 2-dimensional. Then $(M,\omega)$ is $S^1$-equivariantly symplectomorphic to some K\"{a}hler Fano manifold with a certain holomorphic Hamiltonian circle action. \end{theorem} We list all possible topological fixed point data in Table \ref{table_list}. Notice that our classification implies that any reduced space of $(M,\omega)$ in Theorem \ref{theorem_main} is either $\p^1 \times \p^1$, or $\p^2 \#~k~ \overline{\p^2}$ for $1 \leq k \leq 4$. The following theorems then imply that those spaces are symplectically rigid (in the sense of \cite[Definition 2.13]{McD2} or \cite[Definition 1.4]{G}).
(See also Section \ref{secFixedPointData} or \cite[Section 5]{Cho}.) \begin{table}[h] \begin{adjustbox}{width=1\textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & $(M_0, [\omega_0])$ & $e(P_{-2+\epsilon})$ & $Z_{-2}$ & $Z_{-1}$ & $Z_0$ & $Z_1$ & $Z_2$ & $b_2$ & $c_1^3$ \\ \hline \hline {\bf (I-1)} & $(S^2 \times S^2, 2x + 2y)$ & $x-y$ & $S^2$ & & & & $S^2$ & $1$ & $64$\\ \hline {\bf (II-1.1)} & $(S^2 \times S^2, 2x + 2y)$ & $-y$ &$S^2$ & &$Z_0 \cong S^2, ~\mathrm{PD}(Z_0) = x+y$ & & $S^2$ & $2$ &$48$ \\ \hline {\bf (II-1.2)} & $(S^2 \times S^2, 2x + 2y)$ & $-y$ &$S^2$ & & $Z_0 \cong S^2, ~\mathrm{PD}(Z_0) = x$ & & $S^2$ & $2$ & $56$\\ \hline {\bf (II-1.3)} & $(S^2 \times S^2, 2x + 2y)$ & $-y$ &$S^2$ & & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = \mathrm{PD}(Z_0^2) = y$} & & $S^2$ & $3$ & $48$\\ \hline {\bf (II-2.1)} & $(E_{S^2}, 3x + 2y)$ & $-x -y$ &$S^2$ & & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = y$, $\mathrm{PD}(Z_0^2) = x+y$} & & $S^2$ & $3$ & $48$\\ \hline {\bf (II-2.2)} & $(E_{S^2}, 3x + 2y)$ & $-x-y$ &$S^2$ & & $Z_0 \cong S^2, ~\mathrm{PD}(Z_0) = 2x+2y$ & & $S^2$ & $2$ &$40$ \\ \hline {\bf (III.1)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-y$ &$S^2$ & { pt} & &{pt} & $S^2$ & $2$ & $54$\\ \hline {\bf (III.2)} & \makecell{$(S^2 \times S^2 \# ~2\overline{\p^2},$ \\ $2x + 2y - E_1 - E_2)$} & $-y$ &$S^2$ & {2 pts} & &{2 pts} & $S^2$ & $3$ & $44$\\ \hline {\bf (III.3)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\ $3x + 2y - E_1)$} & $-x-y$ &$S^2$ & {3 ~pts} & &{3 ~pts} & $S^2$ & $4$ &$34$ \\ \hline {\bf (IV-1-1.1)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & { 2 pts} & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = x+y-E_1 - E_2$ \\ $\mathrm{PD}(Z_0^2) = x - E_1$} & { 2 pts} & $S^2$ & $5$ & $36$\\ \hline {\bf (IV-1-1.2)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & { 2 pts} & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = y$ \\ $\mathrm{PD}(Z_0^2) = x+y-E_1 - E_2$} & { 2 pts} & $S^2$ & $5$ & $36$\\ \hline {\bf (IV-1-1.3)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & { 2 pts} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x+y-E_1$} & { 2 pts} & $S^2$ & $4$ & $36$\\ \hline {\bf (IV-1-2)} & \makecell{$(E_{S^2} \# ~2\overline{\p^2},$ \\$3x + 2y - E_1-E_2)$} & $-x-y$ &$S^2$ & { 2 pts} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x - E_1$} & { 2 pts} & $S^2$ & $4$ & $40$\\ \hline {\bf (IV-2-1.1)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & { pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = 2x + y - E_1$} &{ pt} & $S^2$ & $3$ &$38$ \\ \hline {\bf (IV-2-1.2)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & { pt} & \makecell{ $Z_0 = Z_0^1 ~\dot \cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = \mathrm{PD}(Z_0^2) = x + y - E_1$} &{ pt} & $S^2$ & $4$ &$38$ \\ \hline {\bf (IV-2-2.1)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & { pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x + y$} &{ pt} & $S^2$ & $3$ &$42$ \\ \hline {\bf (IV-2-2.2)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & { pt} & \makecell{ $Z_0 = Z_0^1 ~\dot 
\cup ~ Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = y$ \\ $\mathrm{PD}(Z_0^2)= x + y - E_1$} &{ pt} & $S^2$ & $4$ &$42$ \\ \hline {\bf (IV-2-3)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & { pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x$} &{ pt} & $S^2$ & $3$ &$46$ \\ \hline {\bf (IV-2-4)} & \makecell{$(E_{S^2} \# ~\overline{\p^2},$ \\$3x + 2y - E_1)$} & $-x-y$ &$S^2$ & { pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = E_1$} &{ pt} & $S^2$ & $3$ &$50$ \\ \hline {\bf (IV-2-5)} & \makecell{$(S^2 \times S^2 \# ~\overline{\p^2},$ \\$2x + 2y - E_1)$} & $-y$ &$S^2$ & { pt} & \makecell{ $Z_0 = Z_0^1 \dot \cup Z_0^2$ \\ $Z_0^1 \cong Z_0^2 \cong S^2$ \\ $\mathrm{PD}(Z_0^1) = x - E_1$ \\ $\mathrm{PD}(Z_0^2) = y - E_1$ \\ } &{ pt} & $S^2$ & $4$ &$46$ \\ \hline {\bf (IV-2-6)} & \makecell{$(S^2 \times S^2 \# ~\overline{\p^2},$ \\$2x + 2y - E_1)$} & $-y$ &$S^2$ & { pt} & \makecell{ $Z_0 \cong S^2$ \\ $\mathrm{PD}(Z_0) = x - E_1$} &{ pt} & $S^2$ & $3$ &$50$ \\ \hline \end{tabular} \end{adjustbox} \vs{0.1cm} \caption {List of topological fixed point data} \label{table_list} \end{table} \begin{theorem}\cite[Theorem 1.2]{McD4}\label{theorem_uniqueness} Let $M$ be the blow-up of a rational or a ruled symplectic four manifold. Then any two cohomologous and deformation equivalent\footnote{Two symplectic forms $\omega_0$ and $\omega_1$ are said to be {\em deformation equivalent} if there exists a family of symplectic forms $\{ \omega_t ~|~ 0 \leq t \leq 1 \}$ connecting $\omega_0$ and $\omega_1$. We also say that $\omega_0$ and $\omega_1$ are {\em isotopic} if such a family can be chosen such that $[\omega_t]$ is a constant path in $H^2(M; \Z)$.} symplectic forms on $M$ are isotopic. \end{theorem} \begin{theorem}\cite[Lemma 4.2]{G}\label{theorem_symplectomorphism_group} For any of the following symplectic manifolds, the group of symplectomorphisms which act trivially on homology is path-connected. \begin{itemize} \item $\p^2$ with the Fubini-Study form. \cite[Remark in p.311]{Gr} \item $\p^1 \times \p^1$ with any symplectic form. \cite[Theorem 1.1]{AM} \item $\p^2 \# ~k~\overline{\p^2}$ with any blow-up symplectic form for $k \leq 4$. \cite[Theorem 1.4]{AM}, \cite{E}, \cite{LaP}, \cite{Pin} \cite{LLW}. \end{itemize} \end{theorem} \begin{remark} In \cite[Theorem 9.3]{Cho}, the author only mentioned the symplectic rigidity of $X_k = \p^2 \# k \overline{\p^2}$ for $k \leq 3$ since $X_k$ ($k > 3$) does not appear as a reduced space when an extremal fixed point set is an isolated point. On the other hand, in our case of Theorem \ref{theorem_main}, $X_4$ appears as a reduced space, see {\bf (III.3)}. Recently, Li-Li-Wu proved the symplectic rigidity of $X_4$ in \cite{LLW} (where it fails from $k=5$, see \cite{Se}). \end{remark} To complete the proof of Theorem \ref{theorem_main}, we only need to show that each TFD determines FD uniquely. (Then the proof follows by Gonzalez theorem \ref{theorem_Gonzalez} from the fact that every reduced space is symplectically rigid and the existence of a Fano variety corresponding to each TFD as illustrated from Section \ref{secCaseIMathrmCritMathringHEmptyset} to \ref{secCaseIVMathrmCritMathringH11}.) Note that a topological fixed point data only records homology classes of fixed components regarded as embedded submanifolds of reduced spaces. In general, we cannot rule out the possibility that there are many distinct fixed point data which have the same topological fixed point data. 
Recall that any non-extremal part of a topological fixed point data in Table \ref{table_list} is of the form \[ (M_c, [\omega_c], [Z_c^1], \cdots, [Z_c^{k_c}]), \quad c = -1, 0, 1. \] If $c = \pm 1$, then all $Z_c^i$'s are isolated points. In this case, the topological fixed point data determines the fixed point data uniquely, since if \[ (M_c, \omega_c, p_1, \cdots, p_r) \quad \text{and} \quad (M_c, \omega_c' , q_1, \cdots, q_r), \quad \quad p_i, q_j : \text{points},\quad [\omega_c] = [\omega_c'], \] then it follows from the symplectic rigidity of $M_c$ (obtained by Theorem \ref{theorem_uniqueness} and Theorem \ref{theorem_symplectomorphism_group}) that there exists a symplectomorphism $\phi : (M_c, \omega_c) \rightarrow (M_c, \omega_c')$ sending $p_i$ to $q_i$ for $i=1,\cdots,r$. (See \cite[Proposition 0.3]{ST}.) For $c= 0$, we note that every $Z_0^i$ in Table \ref{table_list} is a sphere with self-intersection number greater than or equal to $-1$. Then the following theorems guarantee that any symplectic embedding $Z_0 \hookrightarrow M_0$ in Table \ref{table_list} can be identified with an algebraic embedding. \begin{theorem}\cite[Proposition 3.2]{LW}\cite[Theorem 6.9]{Z}\label{theorem_Z} Any symplectic sphere $S$ with self-intersection $[S]\cdot[S] \geq 0$ in a symplectic four-manifold $(M,\omega)$ is symplectically isotopic to an (algebraic) rational curve. Any two homologous spheres with self-intersection $-1$ are symplectically isotopic to each other. \end{theorem} Furthermore, we may apply the following lemma to each reduced space since every rational surface $X$ satisfies $H^1(X, \mcal{O}_X) = 0$. \begin{lemma}\label{lemma_isotopic}\cite[Lemma 9.6]{Cho} Suppose that $X$ is a smooth projective surface with $H^1(X, \mcal{O}_X) = 0$. Let $H_1$ and $H_2$ be two smooth curves of $X$ representing the same homology class. Then $H_1$ is symplectically isotopic to $H_2$ with respect to the symplectic form $\omega_X = \omega_{\mathrm{FS}}|_X$ on $X$. \end{lemma} Now we are ready to prove Theorem \ref{theorem_main}. \begin{proof}[Proof of Theorem \ref{theorem_main}] Let $(M,\omega)$ be a six-dimensional closed monotone symplectic manifold with $c_1(TM) = [\omega]$. Also assume that $(M,\omega)$ admits a semifree Hamiltonian circle action with the balanced moment map $H : M \rightarrow \R$. By Table \ref{table_list}, we know that any reduced space is either \[ \p^1 \times \p^1, \quad \p^2 \# k\overline{\p^2}, \quad \quad k \leq 4 \] and hence is symplectically rigid by Theorem \ref{theorem_uniqueness} and Theorem \ref{theorem_symplectomorphism_group}. Moreover, we also know that there exists a smooth Fano 3-fold admitting a semifree holomorphic Hamiltonian $S^1$-action whose topological fixed point data equals $\frak{F}_{\mathrm{top}}(M,\omega,H)$. So, it remains to show that $\frak{F}_{\mathrm{top}}(M,\omega,H)$ determines $\frak{F}(M,\omega,H)$ uniquely. By Theorem \ref{theorem_Z}, we may assume that every $(M_c, \omega_c, Z_c) \in \frak{F}(M,\omega,H)$ is an algebraic tuple, that is, $Z_c$ is a complex (and hence K\"{a}hler) submanifold of $M_c$ for every critical value $c$ of the balanced moment map $H$. Moreover, since any reduced space is birationally equivalent to $\p^2$, we see that $H^1(M_c, \mcal{O}_{M_c}) = 0$ and therefore we may apply Lemma \ref{lemma_isotopic} so that $(M_c, \omega_c, Z_c)$ is equivalent to the fixed point data $(X_c, (\omega_X)_c, (Z_X)_c)$ of $X$ at level $c$. This completes the proof. \end{proof} \bibliographystyle{annotation}
\section{Introduction} Multi-agent Path Finding (MAPF) is the problem of moving multiple agents to their destinations without collisions. MAPF is now receiving a lot of attention due to its practical relevance, e.g., traffic control~\cite{dresner2008multiagent}, automated warehouses~\cite{wurman2008coordinating}, or airport surface operation~\cite{morris2016planning}. The efficiency of planned paths is usually evaluated by the sum of travel times. Since the search space grows exponentially with the number of agents, the challenge is obtaining relatively efficient paths within acceptable computation time. In realistic scenarios, MAPF must be solved iteratively and in real time, since many target applications require agents to execute streams of tasks; MAPF variants tackle this issue, e.g., \textit{lifelong} MAPF~\cite{ma2017lifelong}, \textit{online} MAPF~\cite{vsvancara2019online}, or \textit{iterative} MAPF~\cite{okumura2019priority}. In such situations, decoupled approaches, more specifically, approaches based on prioritized planning~\cite{erdmann1987multiple,silver2005cooperative}, are attractive since they can reduce computational cost. Moreover, decoupled approaches are relatively amenable to decentralized implementation, i.e., each agent determines its own path while negotiating with others. Thus, they can potentially enjoy the benefits of decentralized systems such as scalability and concurrency. \input{tikz/motivating-example} Priority Inheritance with Backtracking (PIBT)~\cite{okumura2019priority}, a decoupled method proposed recently, solves iterative MAPF by relying on prioritized planning with a unit-length time window, i.e., it determines only the next locations of agents. With flexible priorities, PIBT ensures \textit{reachability}, i.e., all agents reach their own destinations in finite time, provided that the environment is a graph with adequate properties, e.g., biconnected. Unfortunately, the efficiency of the paths planned by PIBT is underwhelming as a result of locality. This is illustrated in Fig.~\ref{fig:motivating-example}, which depicts two actual paths (the red and blue arrows) that PIBT plans when an agent $a_{1}$ has higher priority than an agent $a_{2}$. In contrast, the black arrow depicts an ideal path for $a_{2}$. Obviously, the agent with lower priority ($a_{2}$) takes unnecessary steps. This comes as a result of the shortsightedness of PIBT, i.e., PIBT plans paths anticipating only a single step ahead. Extending the time window is hence expected to improve overall path efficiency thanks to better anticipation. In this study, we propose a generalized algorithm of PIBT with respect to the time window, called Windowed PIBT (\winpibt). \winpibt allows agents to plan paths anticipating multiple steps ahead. Roughly speaking, for an agent $a_i$, \winpibt works as follows. First, it computes the shortest path while avoiding interference with other paths. Then, it tries to secure time-node pairs sequentially along that path (request). If the requested node is the last node assigned to some agent $a_j$, it lets $a_j$ plan its path one step ahead and move away from that node by inheriting the priority of $a_i$; this is repeated until no such agent remains. The special case of \winpibt with a unit-length window is hence similar to PIBT. Our main contributions are twofold: 1)~We propose an algorithm \winpibt inheriting the features of PIBT, and prove reachability under the same conditions as PIBT, except for the upper bound on timesteps. 
To achieve this, we introduce a safe condition for paths with different lengths, called the \emph{disentangled} condition. 2)~We demonstrate both the effectiveness and the limitations of \winpibt with fixed windows through simulations in various environments. The results indicate the potential for more adaptive versions. The paper is organized as follows. Section~\ref{sec:relatedworks} reviews existing MAPF algorithms. Section~\ref{sec:preliminary} defines the terminology and the problem of iterative MAPF, and reviews the PIBT algorithm. We describe the \safe condition here. Section~\ref{sec:algo} presents the \winpibt algorithm and its characteristics. Section~\ref{sec:evaluation} presents empirical results of the proposal in various situations. Section~\ref{sec:conclusion} concludes the paper and discusses future work. \section{Related Works} \label{sec:relatedworks} We later review PIBT~\cite{okumura2019priority} in detail in section~\ref{subsec:pibt}. Numerous optimal MAPF algorithms have been proposed so far, e.g., search-based optimal solvers~\cite{felner2017search}; however, finding an optimal solution is NP-hard~\cite{yu2013structure}. Thus, developing sub-optimal solvers is important. There are complete sub-optimal solvers, e.g., BIBOX~\cite{surynek2009novel} for biconnected graphs, and TASS~\cite{khorshid2011polynomial} and the multiphase planning method~\cite{peasgood2008complete} for trees. Push and Swap/Rotate~\cite{luna2011push,de2013push} relies on two types of macro operations: moving an agent towards its goal (push), or swapping the locations of two agents (swap). Push and Swap has several variants, e.g., with simultaneous movements~\cite{sajid2012multi} or with decentralized implementations~\cite{wiktor2014decentralized,zhang2016discof}. Priority inheritance in (win)PIBT can be seen as ``push'', but note that there is no ``swap'' in (win)PIBT. Prioritized planning~\cite{erdmann1987multiple} is incomplete but computationally cheap. The best-known prioritized planning algorithm for MAPF is Hierarchical Cooperative \astar (\hca)~\cite{silver2005cooperative}, which sequentially plans paths in order of agent priorities while avoiding conflicts with previously planned paths. This class of approaches is scalable in the number of agents, and is often used as a component of MAPF solvers~\cite{wang2011mapp,vcap2015prioritized}. Moreover, prioritized planning can be implemented in a decentralized way, i.e., each agent determines its own path while negotiating with others~\cite{velagapudi2010decentralized,vcap2015prioritized}. Windowed \hca (\whca)~\cite{silver2005cooperative} is a variant of \hca, which uses a limited lookahead window. \whca motivates \winpibt since a longer window yields better path efficiency, and PIBT partly relies on \whca with a unit-length window. Conflict Oriented \whca~\cite{bnaya2014conflict} is an extension of \whca that focuses on the coordination around conflicts, which \winpibt also addresses. Since a priority ordering is crucial, how to adjust priority orders has been studied~\cite{azarm1997conflict,bennewitz2002finding,van2005prioritized,bnaya2014conflict,ma2019searching}. Similarly to PIBT, \winpibt assigns priorities to agents dynamically online, so these studies are not directly applicable; nevertheless, combining their insights with our proposal, especially for initial priorities, is an interesting direction. 
A recent theoretical analysis of prioritized planning~\cite{ma2019searching} identifies instances that fail for any order of static priorities, which motivates planning with \emph{dynamic} priorities, such as the approach taken here. There are variants of classical MAPF. Online MAPF~\cite{vsvancara2019online} addresses a dynamic group of agents, i.e., agents newly appear, or disappear when they reach their goals. Lifelong MAPF~\cite{ma2017lifelong}, defined as the multi-agent pickup and delivery (MAPD) problem, is a setting for conveying packages in an automated warehouse. In MAPD, the system issues goals, namely, pickup and delivery locations, dynamically to agents. Iterative MAPF~\cite{okumura2019priority} is an abstract model of the behavior of multiple moving agents, which consists of route planning and task allocation. This model can cover both classical MAPF and MAPD. We use iterative MAPF to describe our algorithm. \section{Preliminary} \label{sec:preliminary} We now define the terminology, review the PIBT algorithm and introduce the \safe condition on paths. \subsection{Problem Definition} We first define an abstract model, iterative MAPF. Then, we describe two concrete instances, namely, classical MAPF and \naive iterative MAPF. Both instances only focus on route planning, and task allocation is regarded as input. The system consists of a set of agents, $A = \{a_{1}, \ldots, a_{n} \}$, and an environment given as a (possibly directed) graph $G = (V, E)$, where agents occupy nodes in $V$ and move along edges in $E$. $G$ is assumed to be 1)~\emph{simple}, i.e., devoid of loops and multiple edges, and 2)~\emph{strongly-connected}, i.e., every node is reachable from every other node. These requirements are met by simple undirected graphs. Let $v_{i}(t)$ denote the node occupied by agent $a_{i}$ at discrete time~$t \in \mathbb{N}$. The initial node $v_{i}(0)$ is given as input. At each step, an agent $a_{i}$ can either move to an adjacent vertex or stay at the current vertex. Agents must avoid 1)~\textit{vertex conflict}: $v_{i}(t) \neq v_{j}(t)$, and 2)~\textit{swap conflict} with others: $v_{i}(t+1) \neq v_{j}(t) \lor v_{j}(t+1) \neq v_{i}(t)$. Rotations (\emph{cycle conflict}) are not prohibited, i.e., $v_{i}(t+1)=v_{j}(t) \land v_{j}(t+1)=v_{k}(t) \land \cdots \land v_{l}(t+1)=v_{i}(t)$ is possible. Consider a stream of tasks $\Gamma = \{ \tau_{1}, \tau_2, \dots \}$. Each task is defined as a finite set of goals $\tau_{j} = \{ g_{1}, g_{2}, \dots, g_{m} \}$ where $g_{k} \in V$, possibly with a partial order on $g_k$. An agent is \emph{free} when it has no assigned task. Only a free agent can be assigned a task $\tau_{j}$. When $\tau_{j}$ is assigned to $a_{i}$, $a_{i}$ starts visiting goals in $\tau_{j}$. $\tau_{j}$ is completed when $a_{i}$ reaches the final goal in $\tau_{j}$ after having visited all other goals, then $a_{i}$ is free again. The solution consists of two parts: 1)~route planning: plan paths for all agents without collisions, 2)~task allocation: allocate a subset of $\Gamma$ to each agent, such that all tasks are completed in finite time. The objective function should be determined by concrete instances of iterative MAPF, as described below. \subsubsection{Classical MAPF} A singleton task $\{ g_i \}$ is assigned to each agent $a_{i}$ beforehand, where $g_i$ is a goal for $a_i$. 
Since classical MAPF usually requires the solution to ensure that all agents are at their goals simultaneously, a new task $\{ g_{i} \}$ is assigned to $a_{i}$ when $a_{i}$ leaves $g_{i}$. There are two commonly used objective functions: sum of costs (SOC) and makespan. SOC is the sum over agents of the timestep at which \textit{each} agent reaches its given goal and never moves from it again. The makespan is the timestep at which \textit{all} agents have reached their given goals. \subsubsection{\Naive Iterative MAPF} This setting gives a new singleton task, i.e., a new goal, immediately to agents who arrive at their current goals. Here we slightly modify the termination criterion so that the performance is not overly sensitive to the termination defined above. Given an integer $K$, the problem is regarded as solved when the tasks issued 1st to $K$-th are all completed. The rationale is to analyze the results of the system in operation. Similarly to classical MAPF, there are two objective functions: average service time, which is defined as the time interval from task generation to its completion, or makespan, which is the timestep at which termination occurs. \subsection{PIBT} \label{subsec:pibt} \input{tikz/pibt} PIBT~\cite{okumura2019priority} provides fundamental collision-free movements of agents to solve iterative MAPF. PIBT relies 1)~on \whca~\cite{silver2005cooperative} with a unit-length window, and 2)~on priority inheritance~\cite{sha1990priority} to deal with \textit{priority inversion} akin to the problem in real-time systems. At each timestep, unique priorities are assigned to agents. In order of decreasing priorities, each agent plans its next location while avoiding collisions with higher-priority agents. When a low-priority agent~$X$ impedes the movement of a higher-priority agent~$Y$, agent~$X$ temporarily inherits the higher priority of agent~$Y$. Priority inheritance is executed in combination with \textit{backtracking} to prevent agents from being stuck. The backtracking has two outcomes: valid or invalid. The invalid outcome occurs when an agent inheriting the priority is stuck, forcing the higher-priority agent to replan its path. Fig.~\ref{fig:pibt} shows an example of PIBT. In the sense that PIBT assigns priorities to agents dynamically online, it differs from classical prioritized approaches. The foundation of PIBT is the lemma below, which is also important to \winpibt. \begin{lemma} Let $a_{1}$ denote the agent with the highest priority at timestep $t$ and $v_{1}^{\ast}$ an arbitrary neighbor node of $v_{1}(t)$. If there exists a simple cycle $\mathbf{C} = (v_{1}(t), v_{1}^{\ast}, \dots)$ and $|\mathbf{C}| \geq 3$, PIBT makes $a_{1}$ move to $v_{1}^{\ast}$ in the next timestep. \label{lemma:pibt-local-movement} \end{lemma} Another key component is dynamic priorities, where the priority of an agent increments gradually until it drops upon reaching its goal. By combining these techniques, PIBT ensures the following theorem. \begin{definition} $G$ is \emph{\graphcond} if $G$ has a simple cycle $\bm{C}$ for all pairs of adjacent nodes and $|\bm{C}| \geq 3$. \label{def:pibt-cond} \end{definition} \begin{theorem} If $G$ is \graphcond, PIBT lets all agents reach their own destination within $\text{diam}(G)|A|$ timesteps after the destinations are given. \label{theorem:pibt} \end{theorem} Examples of \graphcond graphs are undirected biconnected graphs and directed rings. 
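For concreteness, the following is a minimal Python sketch (ours, for illustration only; the implementation accompanying this paper is in C++) of the two conflict checks from the problem definition and of a sufficient test for the above graph condition in the undirected case, namely biconnectivity. The function names are hypothetical, and the \texttt{networkx} library is assumed to be available.

\begin{verbatim}
import networkx as nx

def conflict_free(path_i, path_j):
    # path[t] is the node occupied at timestep t; both paths have equal length.
    # Returns True iff neither a vertex conflict nor a swap conflict occurs.
    assert len(path_i) == len(path_j)
    for t in range(len(path_i)):
        if path_i[t] == path_j[t]:                  # vertex conflict
            return False
        if (t > 0 and path_i[t] == path_j[t - 1]
                  and path_j[t] == path_i[t - 1]):  # swap conflict
            return False
    return True

def satisfies_graph_condition(G):
    # Sufficient check for undirected graphs: a simple biconnected graph with
    # at least 3 nodes has a simple cycle of length >= 3 through every pair
    # of adjacent nodes.
    return G.number_of_nodes() >= 3 and nx.is_biconnected(G)
\end{verbatim}

For instance, \texttt{satisfies\_graph\_condition(nx.grid\_2d\_graph(4, 3))} returns \texttt{True}, matching the intuition that grid-like warehouse maps meet the requirement.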
Note that the above theorem does not imply \textit{completeness} for classical MAPF, i.e., it does not ensure that all agents are on their goals \emph{simultaneously}. \subsection{\Safe Condition} Assume two paths for agents $a_i, a_j$ with different lengths, and let the corresponding last timesteps of those two paths be $t_i$ and $t_j$ such that $t_i < t_j$. Assume that no agents collide until $t_i$. Unless agents vanish after they reach their goals, $a_i$ has to plan its extra path by $t_j$ since the two agents potentially collide at some timestep $t$, $t_i < t \leq t_j$. However, $a_i$ does not need to compute the extra path immediately if $a_j$ does not use the last node of the path of $a_i$. This is because the shorter path can be extended so as not to collide with the longer path, i.e., by staying at the last node, meaning that $a_i$ can compute its extra path on demand. We now define these concepts precisely. We define a sequence of nodes $\pi_{i}$ as a determined path of an agent $a_{i}$. Initially, $\pi_{i}$ only contains $v_{i}(0)$. The only manipulation allowed on $\pi_{i}$ is appending the most recently assigned node. We use $\ell_{i}$ for the timestep corresponding to the node added to $\pi_{i}$ most recently. Note that $\pi_{i} = \left( v_{i}(0),\dots,v_{i}(\ell_{i}) \right)$ and $\ell_{i} = |\pi_{i}| - 1$ from these definitions. The list of paths of all agents $A$ is denoted by $\bm{\pi}$. \begin{definition} Given two paths $\pi_{i}, \pi_{j}$, assume that $\ell_{i} \leq \ell_{j}$. $\pi_{i}$ and $\pi_{j}$ are \isolated when: \begin{align*} &v_{i}(t) \neq v_{j}(t), & 0 \leq t \leq \ell_{i} \\ &v_{i}(t) \neq v_{j}(t-1) \land v_{i}(t-1) \neq v_{j}(t), & 0 < t \leq \ell_{i} \\ &v_{i}(\ell_{i}) \neq v_{j}(t), & \ell_{i} + 1 \leq t \leq \ell_{j} \end{align*} \end{definition} \begin{definition} If all pairs of paths are \isolated, $\bm{\pi}$ is \safe. \end{definition} \input{tikz/winpibt-example} From the definition of the \safe condition, it is trivial that when $\bm{\pi}$ is \safe, agents do not collide until timestep $\min \{ \ell_{i} \}$. Moreover, there exists a combination of path extensions such that agents never collide. \begin{proposition} If $\bm{\pi}$ is \safe, for $a_{i}$, there exists at least one additional path until any timestep $t$ ($t \geq \ell_{i}$) while keeping $\bm{\pi}$ \safe. \label{lemma:keep-safety} \end{proposition} \begin{proof} Make $a_{i}$ stay at its last assigned location until timestep $t$. This operation obviously keeps $\bm{\pi}$ \safe. \end{proof} The \safe condition may be helpful for developing online solvers, by regarding it as a temporary termination condition. In online situations, where goals are dynamically assigned to agents, the challenge is replanning paths on demand. One intuitive but excessive approach is to update paths for all agents in the system until a certain timestep, e.g., Replan All~\cite{vsvancara2019online}, and this certainly ensures conflict-freeness. The disentangled condition can relax this type of replanning, i.e., it enables updating the paths of only a subset of agents while still ensuring safety. PIBT can be understood as making an effort to keep $\bm{\pi}$ \safe. Priority inheritance occurs when $a_{i}$ attempts to break the \isolated condition regarding $\pi_{i}$ and $\pi_{j}$, where $a_{j}$ is an agent with lower priority. Then $a_j$ secures its next node, before $a_i$ does, so as to keep $\pi_i$ and $\pi_j$ \isolated. Strictly speaking, there is one exception: movements corresponding to rotations. 
Assume that $a_i, a_j, a_k$ try to move to $v_j(t), v_k(t), v_i(t)$, respectively. If $a_i$ has the highest priority, $a_j$ secures the node prior to $a_i$, and $a_k$ does prior to $a_j$. $\pi_k$ and $\pi_i$ are temporarily not \isolated, but $\bm{\pi}$ immediately becomes \safe again since rotations always succeed in PIBT. \winpibt works in the same way as PIBT, i.e., it updates paths while keeping $\bm{\pi}$ \safe. The difference is that \winpibt can perform priority inheritance retroactively. \section{Windowed PIBT (\winpibt)} \label{sec:algo} In this section, we first present the basic concept of how to extend the time window of PIBT, together with an example. Then, the pseudocode is given, along with a theoretical analysis. We explain \winpibt in a centralized fashion. PIBT itself is relatively amenable to decentralized implementation; however, a decentralized \winpibt faces some difficulties, as discussed later. \subsection{Concept} Similarly to PIBT, \winpibt makes the agent with the highest priority move along an arbitrary path within a time window. The original PIBT algorithm plans paths for all agents one timestep at a time, i.e., PIBT relies on a unit-length time window. \winpibt extends the time window of PIBT while satisfying Lemma~\ref{lemma:pibt-local-movement}. Briefly, the algorithm for one agent $a_{i}$ consists of three phases: \begin{itemize}[leftmargin=0.4cm] \item[1)] Compute an ideal path for $a_{i}$ that excludes already reserved time-node pairs while avoiding interference with the progression of higher-priority agents. \item[2)] Secure time-node pairs sequentially along the computed path. \item[3)] If the node requested at $t_{i}$ is the last assigned node for some agent~$a_{j}$ at $t_{j}$ such that $t_{j} < t_{i}$, i.e., $\pi_i$ and $\pi_j$ will not be \isolated, then move $a_j$ away from the node by priority inheritance. More precisely, let $a_j$ plan its path one step ahead by inheriting the priority of $a_i$, until there are no such agents. If such an agent still remains at $t_i$, then $a_i$ executes the PIBT algorithm, relying on the property of Lemma~\ref{lemma:pibt-local-movement}. \end{itemize} \subsubsection{Example} Fig.~\ref{fig:winpibt-example} illustrates how \winpibt works. To simplify, we omit the invalid case of priority inheritance. Here, $a_1$ has the highest priority and takes the initiative. Assume that the window size is three. At the beginning, $a_1$ computes the ideal path $(v4, v5, v6, v3)$ and starts securing nodes. $v5$ at $t=1$ can be regarded as ``unoccupied'' since the last allocated nodes for the other agents are $v2$, $v3$ and $v6$. Thus, $a_{1}$ secures $v5$ at $t=1$. Next, $a_{1}$ tries to secure $v6$ at $t=2$, which is the last assigned node of $a_{4}$, i.e., $v_{4}(\ell_{4})$. $a_{1}$ has to compel $a_{4}$ to move from $v6$ before $t=2$, and priority inheritance occurs between different timesteps (from $a_{1}$ at $t=1$ to $a_{4}$ at $t=0$). This inheritance process continues until $a_{2}$ secures the node via $a_{3}$ and $a_{4}$, just like in PIBT. Now $a_{2}$, $a_{3}$ and $a_{4}$ secure the nodes until $t=1$. This causes $v6$ at $t=2$ to become ``unoccupied'' and hence $a_{1}$ successfully secures the desired node. The above process continues until the initiating agent has reserved nodes up to the current timestep ($t=0$) plus the window size (3). After $a_{1}$ finishes its reservation, $a_{2}$ starts its reservation from $t=2$, avoiding already secured nodes; e.g., $v2$ at $t=2$ cannot be used since it is already assigned to $a_4$ (to make space for $a_1$). 
Finally, \winpibt gives the paths as follows: \begin{itemize} \item $\pi_{1}$: $(v4, v5, v6, v3)$ \item $\pi_{2}$: $(v2, v1, v4, v5)$ \item $\pi_{3}$: $(v3, v2, v5, v6)$ \item $\pi_{4}$: $(v6, v3, v2, v1)$ \end{itemize} \subsection{Algorithm} We show the pseudocode of \winpibt in Algorithms~\ref{algo:func-winpibt} and \ref{algo:caller}. The former describes the function $\mathsf{winPIBT}$, which gives $a_{i}$ a path until $t=\alpha$. The latter shows how to call the function $\mathsf{winPIBT}$ globally. \winpibt has a recursive structure with respect to priority inheritance and backtracking, similarly to PIBT. Function $\mathsf{winPIBT}$ takes four arguments: 1)~$a_{i}$ is an agent determining its own path; 2)~$\alpha$ is the timestep until which $a_{i}$ secures nodes, i.e., $\ell_i = \alpha$ after calling the function $\mathsf{winPIBT}$; 3)~$\bm{\Pi}$ represents the provisional paths of all agents. Each agent plans its own path while referring to $\bm{\Pi}$. We denote by $\Pi_{i}$ the provisional path of $a_{i}$ and by $\Pi_{i}(t)$ the node at timestep $t$ in $\Pi_{i}$. Intuitively, $\Pi_{i}$ is the concatenation of the already determined path $\pi_{i}$ and the path that $a_i$ is trying to secure. Note that $\Pi_{i}(t) = v_{i}(t)$ for $0 \leq t \leq \ell_{i}$; 4)~$R$ is a set of agents which are currently requesting some node; it is used to detect rotations. In the pseudocode, we also implicitly use $\ell_{j}$, which is not contained in the arguments. We use three functions: 1-2) $\mathsf{validPath}(a_{i}, \beta, \bm{\Pi})$ and $\mathsf{registerPath}(a_{i}, \alpha, \beta, \bm{\Pi})$ compute a path for $a_{i}$. The former confirms whether there exists a path for $a_{i}$ that keeps $\bm{\pi}$ \safe from timestep $\ell_{i} + 1$ to $\beta$. The latter computes the ideal path until timestep $\beta$ and registers it in $\bm{\Pi}$ until $t=\alpha$. We assume that $\alpha \leq \beta$ always holds. In addition to prohibiting collisions, $\Pi_{i}$ is constrained by the following condition: $\Pi_{i}(t_{i}) \neq \Pi_{j}(t_{j})$ for $\ell_{i} < t_{i} < \ell_{j}$, $t_{i} < t_{j} \leq \ell_{j}$. Intuitively, this constraint says that the shorter path cannot invade the longer path. The rationale is to keep $\bm{\Pi}$ \safe. In \winpibt, a path is elongated by adding nodes one by one to its end. Assume two paths with different lengths. The \safe condition can be broken in two ways: the longer path adds the last node of the shorter path to its end, or the shorter path adds a node that the longer path uses in the gap between the two paths. The constraint prohibits the latter case. A critical example, shown in Fig.~\ref{fig:winpibt-reservation:cannotenter}, assumes the following situation: After $a_1$ has fixed its path, $a_2$ starts securing nodes. To do so, $a_2$ tries to secure the current location of $a_3$, thus, $a_3$ has to plan its path. What happens when $a_3$ plans to use the crossing node to avoid a temporary collision with $a_2$? The problem is that $a_3$ is not guaranteed to return to its original location since $a_2$ has a higher priority. Thus, without this constraint, a lower-priority agent could become stuck in the middle of the path of a higher-priority agent. According to this constraint, $a_3$ cannot use a crossing node until $a_1$ passes. This can cause some problematic cases as shown in Fig.~\ref{fig:winpibt-reservation:inconvenient}, which implies that \emph{extra reservation leads to awkward path planning}. 3) $\mathsf{copeStuck}(a_{i}, \alpha, \bm{\Pi})$ is called when $a_{i}$ has no path satisfying the constraints. 
This forcibly gives a path to $a_{i}$ that stays at the last assigned node until timestep $\alpha$, i.e., $v_{i}(\ell_{i}+1),\dots,v_{i}(\alpha) \leftarrow v_{i}(\ell_{i})$. Note that this function also updates $\Pi_i$ such that $\Pi_i = \pi_i$. Algorithm~\ref{algo:func-winpibt} proceeds as follows. An agent $a_{i}$ enters a path decision phase when the function $\mathsf{winPIBT}$ is called with the first argument $a_{i}$. First, it checks whether the timestep at which the last node was assigned to $a_{i}$ is smaller than $\alpha$; otherwise, the path of $a_{i}$ has already been determined up to $t=\alpha$, and $\mathsf{winPIBT}$ returns as valid [Line~\ref{algo:func-winpibt:init-check}]. Next, it computes the prophetic timestep $\beta$ [Line~\ref{algo:func-winpibt:calc-beta}]. The rationale of $\beta$ is that, unless $a_{i}$ sends backtracking, the provisional paths in $\bm{\Pi}$ that may affect the planning of $a_i$ never change. Thus, computing a path based on the upper timestep $\beta$ works akin to forecasting. If no valid path exists, $a_{i}$ is forced to stay at $v_{i}(\ell_{i})$ until $t=\alpha$ via the function $\mathsf{copeStuck}$ and backtracks as invalid [Line~\ref{algo:func-winpibt:validpath-1}--\ref{algo:func-winpibt:end-valid-path-1}]. A similar operation is executed when $a_{i}$ recomputes its path [Line~\ref{algo:func-winpibt:validpath-2}--\ref{algo:func-winpibt:invalid-2}]. After that, $\mathsf{winPIBT}$ proceeds: 1)~Compute an ideal path for $a_{i}$ satisfying the constraints [Line~\ref{algo:func-winpibt:register1}, \ref{algo:func-winpibt:register2}]; 2)~Secure time-node pairs sequentially along path $\Pi_{i}$ [Line~\ref{algo:func-winpibt:target-node}, \ref{algo:func-winpibt:secure-node}]; 3)~If the requested node $v$ violates the path of $a_{j}$, let $a_{j}$ leave $v$ by $t=\ell_{i}-1$ via priority inheritance [Line~\ref{algo:func-winpibt:force-agent}--\ref{algo:func-winpibt:end-force-agent}]. If any agent $a_{j}$ remains at $t=\ell_{i}$, then the original PIBT procedure is applied [Line~\ref{algo:func-winpibt:require-pibt}--\ref{algo:func-winpibt:end-force}]. Note that introducing $R$ prevents eternal priority inheritance and enables rotations. \input{algo/func-winpibt} \input{tikz/winpibt-reservation} \input{algo/caller} There is some flexibility in how to call the function $\mathsf{winPIBT}$. Algorithm~\ref{algo:caller} shows one example. In each timestep~$t$ before the path adjustment phase, the priority $p_{i}(t)$ of an agent $a_{i}$ is updated as mentioned later [Line~\ref{algo:caller:update}]. The window $w_{i}(t)$ is also updated [Line~\ref{algo:caller:update}]. In this paper, we fix $w_{i}(t)$ to be a constant value. Then, agents elongate their own paths in order of priorities. Agents that have already determined their paths beyond the current timestep~$t$, i.e., $\ell_{i} > t$, skip planning [Line~\ref{algo:caller:skip}]. In order not to disturb the paths of agents with higher priorities, an upper bound on timesteps $\kappa$ is introduced. With $\kappa$, agents with lower priorities are prohibited from updating their paths beyond the lengths of the paths of agents with higher priorities. By the next lemma, \winpibt always gives valid paths. \begin{lemma} \winpibt keeps $\bm{\pi}$ \safe. \label{lemma:winpibt-safety} \end{lemma} \begin{proof} Initially, $\bm{\pi}$ is \safe. $\bm{\pi}$ is updated via the function $\mathsf{winPIBT}$. 
Before an agent $a_{i}$ calculates a path, $a_{i}$ confirms the existence of a path from timestep $\ell_{i} + 1$ until $\beta$, as defined in Line~\ref{algo:func-winpibt:init-check}, that avoids collisions and avoids the use of any node $v$ such that $\ell_i < t^{\prime} \leq \ell_j$, $v_j(t^{\prime}) = v$, with respect to the paths registered in $\bm{\Pi}$. We distinguish two cases: 1)~a path exists, or 2)~no path exists. \begin{enumerate}[leftmargin=0.5cm] \item[1)] a path exists: $a_{i}$ now successfully computes a path $\Pi_{i}$ satisfying the condition and starts securing nodes accordingly. Assume that $a_{i}$ tries to secure a node $v$ at timestep $\gamma = \ell_{i} + 1$. We distinguish three cases regarding other agents $a_{j}$ and their $\ell_{j}$. \begin{enumerate} \item[a.] $\ell_{j} > \ell_{i}$: $\Pi_{i}$ is computed without collisions with paths on $\bm{\Pi}$. Moreover, $\Pi_{i}$ avoids, at $t = \gamma$, any node $v$ such that $v_{j}(t^\prime) = v$ for some $t^\prime$ with $\gamma < t^\prime \leq \ell_{j}$. Thus, $\pi_{i}$ and $\pi_{j}$ are \isolated if $a_{i}$ adds $v$ to its path $\pi_{i}$. \item[b.] $\ell_{j} < \ell_{i}$: If $v_{j}(\ell_{j}) \neq v$, the operation of adding $v$ to $\pi_{i}$ keeps $\pi_{i}$ and $\pi_{j}$ \isolated, or else $a_{i}$ tries to let $a_{j}$ leave $v$ via priority inheritance [Line~\ref{algo:func-winpibt:let-aj-force}]. $a_{j}$ now gets the privilege to determine $v_{j}(\ell_{j} + 1)$. This action of $a_{j}$ keeps $\pi_{i}$ and $\pi_{j}$ \isolated for two reasons. First, $a_{i}$ never secures $v$ until $a_{j}$ moves away. Second, if $a_{j}$ successfully computes a path $\Pi_{j}$, the previous part applies. If it fails, $v_{j}(\ell_{j} + 1)$ is set to $v_{j}(\ell_{j})$, i.e., $v$. This action also keeps $\pi_{i}$ and $\pi_{j}$ \isolated. If some agent $a_{j}$ stays on $v$ until timestep $\ell_{i}$, the following part applies. \item[c.] $\ell_{j} = \ell_{i}$: This case is equivalent to the PIBT algorithm. If $v \neq v_{j}(\ell_{j})$, the operation of adding $v$ to $\pi_{i}$ keeps $\pi_{i}$ and $\pi_{j}$ \isolated, or else there are two possibilities: either $a_{j} \not\in R$ or $a_{j} \in R$. If $a_{j} \not\in R$, $a_{j}$ inherits the priority of $a_{i}$. When the outcome of backtracking is valid, this means that, at timestep $\gamma$, $a_{j}$ secures a node other than $v$ and $v_{i}(\ell_{i})$ (to avoid a swap conflict), since both have already been registered in $\Pi_{i}$. Thus, $a_{i}$ successfully secures $v$ while keeping $\pi_{i}$ and $\pi_{j}$ \isolated. When the outcome is invalid, $a_{j}$ stays at its current node, i.e., $v_{j}(\gamma) = v_{j}(\gamma - 1)$, and $a_{i}$ recomputes $\Pi_{i}$. Still, $\pi_{i}$ and $\pi_{j}$ are \isolated since $a_{i}$ has not secured a node at timestep $\gamma$. Next, consider the case where $a_{j} \in R$. This happens when $a_{j}$ is currently requesting another node. Thus, after $a_{i}$ secures $v$ at timestep $\gamma$ and backtracking returns as valid, $a_{j}$ successfully secures the node. $\pi_{i}$ and $\pi_{j}$ are temporarily not \isolated, but $\bm{\pi}$ recovers the \safe condition immediately. Intuitively, this case corresponds to rotations. \end{enumerate} As a result, $\bm{\pi}$ is kept \safe through the actions of $a_{i}$ to secure a node. \item[2)] no path exists: In this case, $a_{i}$ chooses to stay at its current node. Obviously, this action keeps $\bm{\pi}$ \safe. \end{enumerate} Therefore, regardless of whether a path exists or not, $\bm{\pi}$ is \safe. 
\end{proof} The following lemma shows that the agent with the highest priority can move arbitrarily, akin to Lemma~\ref{lemma:pibt-local-movement} for PIBT. \begin{lemma} $\mathsf{winPIBT}(a_{i}, \alpha, \bm{\Pi}, \emptyset)$ gives $a_{i}$ an arbitrary path from $t=\ell_i+1$ until $t = \alpha$ if $G$ is \graphcond, $\forall j \neq i, \ell_{i} \geq \ell_{j}$, and $\bm{\Pi} = \bm{\pi}$. \label{lemma:winpibt-highest} \end{lemma} \begin{proof} $\forall j \neq i, |\Pi_{j}| \leq |\Pi_{i}|$ holds since $\ell_{j} \leq \ell_{i}$ and $\bm{\Pi} = \bm{\pi}$. Thus, $a_{i}$ can compute an arbitrary path $\Pi_{i}$ from timestep $\ell_{i} + 1$ until $\alpha$. We now show that $a_{i}$ never receives invalid as the outcome of backtracking. Following $\Pi_{i}$, $a_{i}$ tries to secure nodes sequentially. Let $v$ be the node requested at timestep $\gamma$. If $\not\exists a_{j}$ s.t. $v_{j}(\ell_{j}) = v$, $a_{i}$ obviously secures $v$ at timestep $\gamma$. The issue arises only when $\exists a_{j}$ s.t. $v_{j}(\ell_{j}) = v$ and $\ell_{j} = \gamma - 1$; however, the mechanism equivalent to Lemma~\ref{lemma:pibt-local-movement} works and $a_{i}$ successfully moves to $v$ due to the assumption that $G$ is \graphcond. Thus, $a_{i}$ never receives an invalid outcome and moves along an arbitrary path until $t=\alpha$. \end{proof} \subsubsection{Prioritization} The prioritization scheme of \winpibt is exactly the same as in the PIBT algorithm. Let $\eta_{i}(t) \in \mathbb{N}$ be the number of timesteps elapsed since $a_{i}$ last updated the destination $g_{i}$ prior to timestep $t$. Note that $\eta_{i}(0) = 0$. Let $\epsilon_{i} \in [0,1)$ be a value unique to each agent $a_{i}$. At every timestep, $p_{i}(t)$ is computed as the sum of $\eta_{i}(t)$ and $\epsilon_{i}$. Thus, $p_{i}(t)$ is unique among agents at any timestep. By this prioritization, we derive the following theorem. \begin{theorem} By \winpibt, all agents reach their own destinations within finite timesteps after the destinations are given, provided that $G$ is \graphcond and $w_{i}(t)$ is kept finite for all $i$ at every timestep. \label{theorem:global-movement} \end{theorem} \begin{proof} Once $a_{i}$ gets the highest priority, the condition of Lemma~\ref{lemma:winpibt-highest} is satisfied within finite timesteps, since no agent can newly reserve a path beyond the timestep limit set by the previously highest-priority agent. Once this condition is realized, $a_{i}$ can move along the shortest path thanks to Lemma~\ref{lemma:winpibt-highest}. Until $a_{i}$ reaches its destination, this situation continues since Algorithm~\ref{algo:caller} ensures that the function $\mathsf{winPIBT}$ is always called by other agents with a second argument that never exceeds $\ell_{i}$. Thus, $a_{i}$ reaches its destination in finitely many steps, and then drops its priority. During this time, the priorities of the other agents increase according to the definition of $\eta_{j}(t)$, and one of them obtains the highest priority after $a_{i}$ drops its priority. As long as such agents remain, the above-mentioned process is repeated. Therefore, all agents must reach their own destinations within finite timesteps after the destinations are given. \end{proof} \input{table/result-mapf-1} \input{table/result-mapf-2} \input{table/result-imapf} \subsubsection{Iterative Use} \label{subsubsec:iterative} For iterative use like Multi-agent Pickup and Delivery~\cite{ma2017lifelong}, it is meaningless to force agents to stay at their goal locations from their arrival until $t=\alpha$, i.e., $\alpha$ in the function $\mathsf{winPIBT}$ can be treated more flexibly. 
Once an agent reaches its destination, the agent can immediately return backtracking if the following modifications are added to the function $\mathsf{winPIBT}$ [Algorithm~\ref{algo:func-winpibt}]. Let $\delta$ be the timestep when an agent $a_{i}$ reaches its destination $g_{i}$ according to the calculated path, where $\delta \leq \alpha$. First, register the ideal path until timestep $\delta$, not $\alpha$ [Line~\ref{algo:func-winpibt:register1},\ref{algo:func-winpibt:register2}]. Second, replace $\alpha$ with $\delta$ [Line~\ref{algo:func-winpibt:while-start}--\ref{algo:func-winpibt:endwhile}]. As a result, $a_{i}$ reserves its path until timestep $\delta$ and unnecessary reservations are avoided. \subsubsection{Decentralized Implementation} A decentralized implementation of PIBT requires each agent to sense its surroundings to detect potential conflicts, and then to communicate with other agents located within 2 hops, which is the minimum assumption needed to achieve conflict-free planning. Priority inheritance and backtracking can be performed by information propagation. A decentralized implementation of \winpibt works almost in the same way as PIBT; however, it requires agents to sense and communicate with other agents located within $2{w}$ hops, where $w$ is the maximum window size that agents may take. In this sense, there is an explicit trade-off: \emph{better anticipation requires more expensive sensing and communication capabilities}. \section{Evaluation} \label{sec:evaluation} This section evaluates the performance of \winpibt quantitatively by simulation. Our experiments are twofold: classical MAPF and \naive iterative MAPF. The simulator was developed in C++ \footnote{ The code is available at \url{https://github.com/Kei18/pibt} }, and all experiments were run on a laptop with Intel Core i5 1.6GHz CPU and 16GB RAM. \astar was used to obtain the shortest paths satisfying the constraints. \subsection{Classical MAPF} \subsubsection{Basic Benchmark} To characterize the basic aspects of the effect of the window size, we first tested \winpibt in four carefully chosen fields, while fixing the number of agents. Three fields (\fourthree, \bridge, \twobridge; Fig.~\ref{fig:mapf-result-1}) are original. In these fields, 10 scenarios were randomly created such that starts and goals were set to nodes on the left/right edges and right/left edges, respectively. The warehouse environment (\kivalike; Fig.~\ref{fig:imapf-result}) is from~\cite{cohen2015feasibility}. In \kivalike, 25 scenarios were randomly created such that starts and goals were set in the left/right space and right/left space, respectively. As baselines for path efficiency, we obtained optimal and bounded sub-optimal solutions by Conflict-based Search (CBS)~\cite{sharon2015conflict} and Enhanced CBS (ECBS)~\cite{barer2014suboptimal}. We also tested PIBT as a comparison. We report the sum of costs (SOC) in Fig.~\ref{fig:mapf-result-1}. We observe that no window size is dominant, e.g., $3$ in \fourthree, $6$ in \twobridge, and $50$ in \kivalike work well, respectively, and there is little effect of window size in \bridge. Intuitively, the most efficient window size seems to depend on the length of narrow passages when detours exist, as shown in Fig.~\ref{fig:motivating-example}. In empty spaces, the window size should be smaller to avoid unnecessary interference such as in Fig.~\ref{fig:winpibt-reservation:inconvenient}. 
Although PIBT can be seen as \winpibt with a unit-length window, it may take a long time to reach the termination condition even in empty spaces due to livelock-like situations (see \fourthree). \subsubsection{MAPF Benchmark} Next, we tested \winpibt via the MAPF benchmark~\cite{stern2019multi} while changing the number of agents. Two maps (\emptymid and \ost) were chosen and 25 scenarios (random) were used. Initial locations and destinations were given in the order specified by each scenario, depending on the number of agents. PIBT was also tested. \winpibt or PIBT was considered to have failed when it could not reach the termination condition within 1000~timesteps. These cases indicate occurrences of deadlock or livelock. The former is due to the lack of the required graph condition, and the latter is due to dynamic priorities. Fig.~\ref{fig:mapf-result-2} shows 1)~the cost per agent, i.e., normalized SOC, 2)~the makespan, 3)~the runtime, and 4)~the number of successful instances. A nice characteristic of \winpibt is that it mitigates livelock situations occurring in PIBT, regardless of the window size (see \emptymid). The livelock in PIBT is due to oscillations of agents around their goals caused by the dynamic priorities. \winpibt can improve this aspect with a longer lookahead. As for cost, PIBT works better than \winpibt in the tested cases since those two maps have no explicit detours like two-bridge. Runtime results seemed to correlate with the window size, e.g., runs took a long time when the window size was 30. Runtime results also correlate with the makespan since (win)PIBT solves problems in an online fashion. This is why the implementation with a small window took more time compared with the larger one. \subsection{\Naive Iterative MAPF} We used \kivalike as the testbed for \naive iterative MAPF. The number of tasks $K$ for the termination was set to $2000$. We tried $10$ repetitions of each experiment with randomly set initial positions. New goals were given randomly. \winpibt was modified for iterative use. PIBT was used as a comparison. Note that since \kivalike is \graphcond, PIBT and \winpibt are ensured to terminate. The results of 1) service time, 2) makespan and 3) runtime are shown in Fig.~\ref{fig:imapf-result}. The effect of the window size on path efficiency is marginal; depending on the number of agents, there is little improvement compared with PIBT. We estimate that the reason for the small effect is as follows. First, both small and large windows have unfavorable situations. In the lifelong setting used here, agents may encounter both favorable and unfavorable situations. Second, the window size actually used becomes smaller than the parameter, since the algorithm used here does not allow agents with lower priorities to disturb the planning of higher-priority agents. This characteristic, combined with the treatment for iterative use, may counteract the effect of the window size. \subsection{Discussion} In general, in long aisles where agents cannot pass each other, PIBT plans awkward paths, as illustrated in Fig.~\ref{fig:motivating-example}. \winpibt can improve PIBT in this aspect (see \kivalike in classical MAPF); however, the empirical results demonstrate the limitations of a fixed window. Fortunately, \winpibt allows agents to take different window sizes, meaning that agents can adjust their windows adaptively depending on the situation, e.g., their locations or the density of agents. For instance, it seems effective to set a window size large enough to cover the whole aisle when an agent tries to enter it. 
This kind of flexible solution is expected not only to improve path efficiency but also to reduce computation time. Clarifying the relationship between the window size and path efficiency will help to develop an adaptive version of \winpibt. We believe that this direction will provide a powerful solution for iterative MAPF. \section{Conclusion} \label{sec:conclusion} This paper introduces \winpibt, which generalizes PIBT with respect to the time window. We define a \safe condition on paths with different lengths, and \winpibt relies on this concept. The algorithm ensures reachability for iterative MAPF on graphs with adequate properties, e.g., biconnected graphs. Empirical results demonstrate the potential of \winpibt when the window size is adjusted. Future work is as follows. 1)~Develop \winpibt with adaptive windows. 2)~Relax the constraints on individual path planning in trivial cases such as the one shown in Fig.~\ref{fig:winpibt-reservation:inconvenient}.
\section{Introduction} \label{sec:Intro} The quantification of entanglement in composite quantum systems remains a central question of foundational importance, as well as in terms of practical understanding and implementation of protocols in quantum information and quantum computation. We refer to \cite{HorodeckiEtAl:2009qe} for a comprehensive review of entanglement, and \cite{BengtssonZyczkowski2006bkgqs} for a modern text. Beyond general entropic measures, which typically require optimization over arbitrary realizations or preparations of a quantum state, much analysis has been devoted to concretely defined invariant quantities, which are polynomials in the coordinates parametrizing the state. Many results in this direction concern multipartite qubit systems, ranging from detailed studies for low numbers of subsystems (up to five or six), to systematic identifications of classes of invariants which are available for the general case of $K$ subsystems (specific studies will be cited in the main text). The former category typically treats pure states, and there are fewer such results available for mixed systems, where a density matrix rather than a state vector must be adopted for the description of the quantum state. Notwithstanding the central role of two state models in quantum mechanics and the focus given to qubits in quantum information and quantum computation, there is also considerable interest in the next simplest qu$\!\!D\!$it system, namely the qutrit ($D=3$). Three state systems (involving, for example, a working qubit and an ancilla state) may in fact underlie experimental implementations of quantum protocols \cite{VaziriWeihsZeilinger2002tde,YuYiSongMei2008p2qut}, and it is of great physical importance to characterize their states and interactions. Alongside the enormous effort on qubit systems, there is a smaller literature on qutrits, such as recent studies of the geometry of qutrit states (the generalization of the Bloch sphere for example \cite{goyalsimonsingh2011gbsqut,SarbickiBengtsson2013dqt}). More fundamentally, composite qutrit systems obviously provide a further challenge to the elucidation of entanglement, and the understanding of entanglement measures \cite{DerkaczJakobczyk2007e2qut}. The present paper presents results on polynomial invariants and entanglement monotones for the two qutrit mixed system. It generalizes and partially extends our paper \cite{KingWelshJarvis2007} on the two qubit mixed system. Our results in that paper confirmed earlier enumerations of local unitary (LU) invariants for the two qubit mixed system \cite{GrasslEtAl1998cli}, and subsequent investigations of their role in separating entanglement classes \cite{makhlin2002nlp}. However, by giving a full resolution of the structure of the invariant ring, the paper provided a complete understanding of the algebraic relations between the invariants, and their associated syzygies. As is well known, enumeration of LU invariants is a necessary first step towards the construction of true entanglement measures, which should also satisfy monotonicity criteria under quantum operations. By contrast with our results on the two qubit mixed system, in this paper, only partial results on local unitary invariants for the two qutrit mixed system are attained, but these are subsequently used to establish examples of \emph{bona fide} entanglement monotones, which are based on quantities polynomial in the components of the density operator. 
In \text{\sf S} \ref{sec:LocalUnitary} below, character methods for the local $SU(3)\times SU(3)$ transformation group are used to establish the count of algebraically independent polynomial invariants up to degree 5 in the components of the density operator. These include 3 quadratic, 7 cubic and 17 quartic quantities, and they are identified up to quartic degree in the standard basis of Gell-Mann matrices, with the help of the calculus of $f$ and $d$ coefficients (Tables \ref{tab:QuadCub}, \ref{tab:GradedCount}). Turning in \text{\sf S} \ref{sec:SL3C} to local measurement operations, we study a SLOCC qutrit group, which plays the role of a `relativistic' transformation group analogous to that of the Lorentz group $SL(2,{\mathbb C})_{\mathbb R}\simeq SO(3,1)$ for the qubit case. This is the local special linear (LSL) group $SL(3,{\mathbb C})_{\mathbb R}$, presented as a group of real $9\times 9$ matrices, acting linearly on the 9-dimensional space of projective coordinates for the qutrit density matrix. The counterpart, for qutrits, of the invariant $4\times 4$ Minkowski metric of the qubit case proves to be a certain $9\times 9 \times 9$ totally symmetric three-fold tensor generalizing the Gell-Mann $d$ coefficient. This 9-dimensional matrix group presentation of $SL(3,{\mathbb C})_{\mathbb R}$ is here denoted $H_d(8,1)$ by analogy with the Lorentz case. We prove directly the isomorphism of Lie algebras, $h_d(8,1)\cong sl(3,{\mathbb C})_{\mathbb R}$. We provide a count of the corresponding two qutrit mixed state LSL polynomial invariants using group character methods. These quantities are proven to yield entanglement monotones. \text{\sf S} \ref{sec:SL3C} ends with the explicit identification of the two lowest degree quantities (the cubic and sextic invariants), and the expansion of the cubic in terms of the local $SU(3)\times SU(3)$ invariants is given. The paper concludes in \text{\sf S} \ref{sec:Conc} with a brief discussion and overview. To aid readability in this paper, a number of definitions and derivations are relegated to several appendices. Appendix \ref{sec:Schurology} provides a summary of the definitions and notation required for handling the group character manipulations, on which our main results are based. In particular, Theorems 1 and 2 in appendix \ref{sec:Schurology} respectively give counting rules for the numbers of local unitary invariants for pure and mixed state systems at each degree, and for the numbers of LSL invariants for bipartite mixed qubit and qutrit systems at each degree. Appendix \ref{sec:Monotones} presents the standard derivation for constructions of entanglement monotones based on LSL invariants, in a way which generalizes from pure states to mixed states, and from qubits to qutrits (in particular, this applies to the quantities defined in \text{\sf S} \ref{sec:SL3C}). For completeness, we provide in appendix \ref{sec:TwoQubitReview} a brief summary of the results of \cite{KingWelshJarvis2007} on the qubit case. There we also exemplify how the LU $SU(2)\times SU(2)$ invariants allow the construction of associated local $SL(2,{\mathbb C})\times SL(2,{\mathbb C})$ invariants, as a template for the extension to qutrits presented in \text{\sf S} \ref{sec:SL3C}. 
\section{Local unitary invariants for 2 qutrit mixed states} \label{sec:LocalUnitary} The context for enumerating and identifying local unitary invariants which are polynomial in the coordinates defining the quantum state, is that of the representation theory of the local groups of unitary transformations acting on each subsystem. The count is given in closed form via Molien's theorem, which gives an integral representation for the Molien series $h(z)= \sum_0^\infty h_nz^n$, the generating function for the number of linearly independent invariants $h_n$ at each polynomial degree $n$. In \cite{KingWelshJarvis2007} this was evaluated for the two qubit mixed system with local $SU(2)\times SU(2)$ group, the algebraically independent invariants constructed, and the structure of the invariant ring characterized completely. The coefficients of the Molien series were also verified combinatorially by computations using group character methods. In the 2 qutrit case, a pure state is described by a 9-component complex wavefunction $\psi^{i\underline{j}}$, $i,\underline{j} = 1,2,3$, while a mixed state density operator $\rho^{i\underline{j}, k\underline{\ell}}$ has 81 real components (subject to the constraints of positivity and unit trace). As a matrix it admits an expansion with respect to the standard basis of Gell-Mann matrices on each subspace, \begin{equation} \rho = \textstyle{\frac 19} {\mathbb I}_{9} + \sum_{a=1}^8 r^a \lambda_a\otimes {\mathbb I}_{3} + \sum_{\underline{b}=1}^8 {r}^{\underline{b}} {\mathbb I}_{3}\otimes\lambda_{\underline{b}} + \sum_{a, \underline{b}=1}^8 {R}^{a\underline{b}}\lambda_a\otimes \lambda_{\underline{b}}, \end{equation} showing that (after fixing the trace) the 80 linearly independent real components are comprised of octet vectors ${r}^{{a}}$, ${r}^{\underline{b}}$ belonging to each local $SU(3)$ transformation group, together with an $8 \times 8$ tensor\footnote{ We adopt the convention that index suffices on the subsystem spaces use the same alphabet but distinguished by an underline; thus $i, \underline{j} = 1,2,3$ in the defining representation, $a, \underline{b} = 1,2,\cdots, 8$ in the octet representation, and (\text{\sf S} \ref{sec:SL3C}) $\alpha, \underline{\beta} = 0,1,2,\cdots,8$ in the reducible adjoint plus singlet representation (see below, and \text{\sf S} \ref{sec:SL3C} for component notation for a single qutrit).} ${R}^{a\underline{b}}$. We now apply the method of group characters for the enumeration of local unitary invariants at low degree in this case. The required notation and basic results from theory are set out in appendix \ref{sec:Schurology} which we briefly summarize here (see also \cite{jarvis:sumner:2012aith}). Returning to the coordinate system for the density matrix based on the fundamental representation, $\rho^{i\underline{j}, k\underline{\ell}}$ transforms as a direct product of the $9$-dimensional adjoint plus singlet representations on each subsystem, with group characters represented as $\{1\}\{\overline{1}\}\!\cdot\!\{1\}\{\overline{1}\}$. The count of linearly independent polynomial invariants at each degree $n$ is then given by the (square of) the number of one dimensional representations in the \emph{plethysm} of characters, $(\{1\}\{\overline{1}\}\!\cdot\!\{1\}\{\overline{1}\})\underline{\otimes}\{n\}$. This can be evaluated using the group theory package {\small \texttt{SCHUR} }\normalsize \cite{SCHURsfg} as detailed in appendix \ref{sec:Schurology}. 
The resulting first few terms in the Molien series are (from Theorem 1, appendix \ref{sec:Schurology} and (\ref{eq:SU3SU3Molien})) \begin{equation} \label{eq:MolienLUqutrTerms} h(z) = 1 + z + 4 z^2 + 11 z^3 + 34z^4 + 108z^5 + \cdots\,. \end{equation} Given that \begin{equation} \label{eq:MolienLUqutrRatl} \frac{1+ \cdots}{(1-z)(1-z^2)^3(1-z^3)^7(1-z^4)^{17}\cdots} = 1 + z + 4 z^2 + 11 z^3 + 34z^4 + \cdots\,, \end{equation} we infer the existence of a single linear invariant (the trace), as well as three quadratic, seven cubic and seventeen algebraically independent quartic invariants\footnote{In the absence of a complete evaluation for $h(z)$, it is not possible to confirm whether all of these invariants are indeed polynomially independent, as assumed in this trial generating function by setting the denominator to $1$ (see \cite{KingWelshJarvis2007}).}. It is of some interest to identify a concrete set of invariants at each degree. In view of the importance of the octet basis for later sections, we proceed using the $r$, $\overline{r}$, $R$ components (in an obvious notation). Candidate invariants are found in principle by constructing all possible sums of words in the alphabet \[ \{ r^a, r^{\underline{a}}, R^{a\underline{a}}, \delta_{ab}, \delta_{\underline{a}\underline{b}}, f_{abc}, d_{abc}, f_{\underline{a}\underline{b}\underline{c}}, d_{\underline{a}\underline{b}\underline{c}}; a,b,c, \underline{a},\underline{b},\underline{c} = 1,\cdots,8 \} \] over which complete tensor contractions have been applied, and which are connected (that is, cannot be written as a product of two such totally contracted objects). Note that the polynomial degree (in $r$, $\overline{r}$ and $R$) of such an object does not include the count of invariant tensors $f$, $d$ and $\delta$ (the latter usually being suppressed in explicit constructions via the Einstein convention). Table \ref{tab:QuadCub} gives a list of quadratic and cubic polynomials which account for the required number of independent invariants at these degrees. In the quartic case, there exists a large number of possibilities for tensor contractions, from which the correct number of 17 linearly independent quantities must be identified. Here (and as shown already in Table \ref{tab:QuadCub}) it is useful to adopt a more refined grading by degree, for invariant quantities $K_{pqs}$ of the form $r^p\overline{r}^qR^s$, and to adapt the group-theoretical character methods for counting invariants accordingly. This task is carried out in appendix \ref{sec:Schurology} and the results are summarized in Table \ref{tab:GradedCount}. For combinatorial enumeration, at this degree it is slightly easier to use the original basis for the components of $\rho$ in the defining representation, and to transfer to the octet basis after the counting is established. For this purpose, an encyclopaedic source of identities and interrelations between the $f$ and $d$ coefficients is the paper \cite{macfarlane1968gell}; see also \cite{azcarraga:macfarlane:mountain:perezbueno:1998invariant}. The method is illustrated in appendix \ref{sec:Schurology} with the explicit construction of linearly independent candidates for the $K_{103}$, ${r}R^3$, and $K_{013}$, $\overline{r}R^3$, invariants, in both the defining and the octet basis, as well as explicit expressions for candidates for the five $K_{004}$, $R^4$ invariants in the defining representation. 
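As a concrete numerical illustration (ours, not part of the original development), the octet-basis components and those invariants of Table \ref{tab:QuadCub} which do not involve the $f$ and $d$ tensors can be evaluated directly with a few lines of Python/NumPy; the function names below are illustrative only, and the extraction formulae simply invert the expansion of $\rho$ given above using $Tr(\lambda_a\lambda_b)=2\delta_{ab}$.

\begin{verbatim}
import numpy as np

# The eight Gell-Mann matrices, normalized so that Tr(l_a l_b) = 2 delta_ab.
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1;   l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
I3 = np.eye(3)

def octet_components(rho):
    # Extract r^a, rbar^b and R^{ab} from a 9x9 two-qutrit density matrix rho.
    r    = np.array([np.trace(rho @ np.kron(l[a], I3)).real
                     for a in range(8)]) / 6
    rbar = np.array([np.trace(rho @ np.kron(I3, l[b])).real
                     for b in range(8)]) / 6
    R    = np.array([[np.trace(rho @ np.kron(l[a], l[b])).real
                      for b in range(8)] for a in range(8)]) / 4
    return r, rbar, R

def some_lu_invariants(rho):
    # K_200, K_020, K_002 (quadratic) and K_111 (cubic) of Table 1.
    r, rbar, R = octet_components(rho)
    return {"K200": r @ r, "K020": rbar @ rbar,
            "K002": np.sum(R * R), "K111": r @ R @ rbar}
\end{verbatim}

A quick numerical check is that these values are unchanged (to machine precision) under $\rho \mapsto (U\otimes V)\,\rho\,(U\otimes V)^{\dagger}$ for randomly generated $U, V \in SU(3)$.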
In summary, it should be noted that the present methods for identifying LU invariants, based on character theory, are complementary to enumerative constructions of invariants via graphical techniques (deriving from trace and contraction operations on the tensor coordinates describing the state); see for example \cite{hero2009measProc, hero2009stable,vrana2011local,vrana2011algebra,Szalay2012deg6lu}. For qu$\!\!D\!$it systems with small $D$, such as the bipartite mixed qubit case treated in our paper \cite{KingWelshJarvis2007}, and the qutrit case developed here, the group character methods do indeed give the direct count of all linearly independent, and algebraically independent, invariants at each degree (while Molien's theorem gives a closed form, which must be evaluated by integration over the group using the Haar measure, for the complete Molien series). {\small \begin{table} \centering \begin{center} \begin{tabular}[tbp]{|l|l|} \hline & \\ $K_{000}$ &$1$\\ & \\ \hline \end{tabular} \hskip 3ex \begin{tabular}[tbp]{|l|l|} \hline & \\ $K_{200}$ &$ r^a r^a$\\ & \\ \hline & \\ $K_{020}$ &$ r^{\underline{a}} r^{\underline{a}}$\\ & \\ \hline & \\ $K_{002}$ &$ R^{a\underline{b}} R^{a\underline{b}}$\\ & \\ \hline \end{tabular} \hskip 3ex \begin{tabular}[tbp]{|l|l|} \hline & \\ $K_{300}$ &$d_{abc}r^{a}r^{b}r^{c}$\\ & \\ \hline & \\ $K_{030}$ &$d_{\underline{a}\underline{b}\underline{c}}r^{\underline{a}}r^{\underline{b}}r^{\underline{c}}$\\ & \\ \hline & \\ $K_{111}$ &$r^{a}R^{a\underline{b}}r^{\underline{b}}$\\ & \\ \hline & \\ $K_{102}$ &$d_{\underline{a}\underline{b}\underline{c}}r^aR^{b\underline{d}} R^{c\underline{d}}$\\ & \\ \hline & \\ $K_{012}$ &$d_{\underline{a}\underline{b}\underline{c}}r^{\underline{a}}R^{d\underline{b}} R^{d\underline{c}}$\\ & \\ \hline & \\ $K_{003}^d$ &$d_{abc}d_{\underline{a}\underline{b}\underline{c}}R^{a\underline{a}} R^{b\underline{b}}R^{c\underline{c}}$\\ & \\ \hline & \\ $K_{003}^f$ &$f_{abc}f_{\underline{a}\underline{b}\underline{c}}R^{a\underline{a}} R^{b\underline{b}}R^{c\underline{c}}$\\ & \\ \hline \end{tabular} \\ \vskip3ex \end{center} \caption{\protect{\small Qutrit local unitary invariants for degree 1 (left), 2 (centre) and 3 (right) constructed as totally contracted, connected tensors in the components of the density matrix in the octet basis, together with the invariant tensors $f$, $d$ (and the orthogonal metric $\delta$, not explicitly written in summing over repeated indices). The index notation adoped for the labels $K_{pqr}$ reflects grading with respect to the powers of the $r$, $\overline{r}$ and $R$ components of the density matrix in this basis. }} \label{tab:QuadCub} \mbox{}\\ \end{table} } \normalsize {\small \begin{table} \centering \begin{center} \begin{tabular}[tbp]{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline &&&&&&&&&&&\\ ~$400$~&~$040$~&$103$&$013$& $202$&$022$&$112$& $121$&$211$&$301$&$031$&~ $004$~\\ &&&&&&&&&&&\\ \hline &&&&&&&&&&&\\ ~$r^4$~&~$\overline{r}^4~$&$rR^3$&$\overline{r}R^3$& $r^2R^2$&$\overline{r}^2R^2$&$r\overline{r}R^2$& $r\overline{r}^2R$&$r^2\overline{r}R$&$r^3R$&$\overline{r}^3R$&~ $R^4$~\\ &&&&&&&&&&&\\ \hline \hline &&&&&&&&&&&\\ 0&0&2&2&2&2&2&1&1&0&0&5\\ &&&&&&&&&&&\\ \hline \end{tabular} \\ \vskip3ex \end{center} \caption{\protect{\small Counting qutrit local unitary invariants at quartic degree, graded with respect to the powers of the $r$, $\overline{r}$ and $R$ components of the density matrix in the octet basis. 
See appendix \ref{sec:Schurology} for details of the construction of these 17 quartic mixed state invariants. }} \label{tab:GradedCount} \mbox{}\\ \end{table}} \normalsize \section{The qutrit SLOCC transformation group $SL(3,{\mathbb C})$: two qutrit mixed state invariants and entanglement monotones} \label{sec:SL3C} We now extend the analysis beyond local unitary transformations, to the study of invariants of the density operator under more general quantum operations associated with measurement. Assuming as usual that any multi-outcome measurement can be realized as a composition of elementary two-outcome operations, we treat the case of measurement $\{ E_1,E_2\}$ with $E_1{}^\dagger E_1 + E_2{}^\dagger E_2 = I$, and transformation \begin{equation} \label{eq:RhoTransf} \rho \rightarrow \rho' := p_1 \rho'_1 + p_2 \rho'_2, \qquad \rho'_1= \frac{E_1 \rho E_1{}^\dagger}{p_1}, \quad \rho'_2= \frac{E_2 \rho E_2{}^\dagger}{p_2} \end{equation} with $p_1=Tr(E_1 \rho E_1{}^\dagger)$, and $p_2=Tr(E_2 \rho E_2{}^\dagger)$. Consider moreover the case of local transformations $E= A\otimes B$, or compositions of one-sided operators of the type $A\otimes I$ or $I \otimes B$. The actions $\rho \rightarrow (A\otimes I)\rho(A^\dagger\otimes I)$, $\rho \rightarrow (I\otimes B)\rho(I\otimes B^\dagger)$ are not trace preserving in general and so do not constitute valid quantum operations. However, for invertible maps it is nonetheless fruitful to regard the transformations on $\rho$ in a projective sense, up to scalar multiplication to recover the correct unit trace normalization. Consider first a single qutrit mixed system with density operator $\varrho$. In view of the above, we append to the Gell-Mann matrices the identity ${\mathbb I}_3$ and introduce a corresponding additional ninth component of the density operator. In the octet basis we have extended coordinates \[ \varrho = \textsl{r}^0 {\mathbb I}_3 + \sum_{a=1}^8 \textsl{r}^a \lambda_a \,, \qquad \textsl{r}^0 = \textstyle{\frac 13} Tr(\varrho), \qquad \textsl{r}^a = \textstyle{\frac 12} Tr(\varrho \lambda_a). \] Under transformations $\varrho \rightarrow A \varrho A^\dagger$ with $A$ invertible and of unit determinant, $A\in SL(3,{\mathbb C})$, we have that $Det(\varrho)$ is invariant. This condition can be expressed as a constraint on the transformations of the coordinates via \[ 6Det(\varrho) = \varepsilon^{ijk}\varepsilon_{\ell mn} \varrho_{i}{}^\ell \varrho_{j}{}^m \varrho_{k}{}^n\,. \] This can be written in the octet basis with the help of the standard identity \[ 6Det(\varrho) = Tr(\varrho)^3 + 2Tr(\varrho^3) - 3Tr(\varrho)Tr(\varrho^2) \] and using the $\lambda$-matrix conventions of \cite{macfarlane1968gell}, we define the $9\times 9 \times 9$ totally symmetric three-fold tensor $\widetilde{d}_{\alpha\beta\gamma}$ via \begin{align} \textstyle{\frac 32 }Det(\varrho) :=& \, \widetilde{d}_{\alpha\beta\gamma}\textsl{r}^\alpha \textsl{r}^\beta \textsl{r}^\gamma = \textstyle{\frac{3}{2}} (\textsl{r}^0)^3 - \textstyle{\frac{3}{2}}\textsl{r}^0 \textsl{r}^a \textsl{r}^a + \textsl{r}^a \textsl{r}^b \textsl{r}^c d_{abc}\,, \nonumber \end{align} (with repeated index summations $a,b,c=1,\cdots,8$ and $\alpha,\beta,\gamma=0,1,2,\cdots,8$) so that the nonzero entries are \begin{equation} \widetilde{d}_{000}=\textstyle{\frac{3}{2}},\quad \widetilde{d}_{00a} = \widetilde{d}_{0a0}=\widetilde{d}_{a00}=0, \quad \widetilde{d}_{0ab}=\widetilde{d}_{a0b}=\widetilde{d}_{ab0}= -\textstyle{\frac{1}{2}}\delta_{ab}, \quad \widetilde{d}_{abc}=d_{abc}\,.
\end{equation} From the above data we define $H_d(8,1)$ to be the subgroup of $GL(9,{\mathbb R})$ ($9\times 9$ invertible real matrices) which preserves the tensor $\widetilde{d}_{\alpha \beta\gamma}$, that is, \begin{equation} H_d(8,1) = \{ m \in GL(9,{\mathbb R}) : m_\alpha{}^{\alpha'}m_\beta{}^{\beta'}m_\gamma{}^{\gamma'}\widetilde{d}_{\alpha'\beta'\gamma'} = \widetilde{d}_{\alpha\beta\gamma} \}. \end{equation} Obviously $H_d(8,1)$ is a group, and since the defining conditions are continuous in the standard metric, it is closed. Hence it is indeed a \emph{bona fide} matrix subgroup of $GL(9,{\mathbb R})$. Moreover, the linear mapping induced by $A \varrho A^\dagger \equiv m_A{}^\alpha{}_\beta \varrho^\beta \lambda_\alpha $ for $A\in SL(3,{\mathbb C})$ provides a 3:1 homomorphism\footnote{$A$, $\omega A$, $\omega^2 A$ all produce the same action on $\varrho$ for $\omega^3=1$.}: $SL(3,{\mathbb C})\rightarrow H_d(8,1)$. We now proceed to show directly the isomorphism of the corresponding Lie algebras, $h_d(8,1)\cong sl(3,{\mathbb C})_{\mathbb R}$. Identifying $h_d(8,1)$ as usual as the tangent space at ${\mathbb I}$, we have for $x_\alpha{}{}^\beta \in h_d(8,1)$ by differentiation \[ x_\alpha{}{}^{\alpha'}\widetilde{d}_{\alpha'\beta \gamma} + x_\beta{}{}^{\beta'}\widetilde{d}_{\alpha\beta' \gamma}+x_\gamma{}{}^{\gamma'}\widetilde{d}_{\alpha\beta \gamma'} = 0 \] and thus examine the four different cases of $\widetilde{d}_{\alpha \beta\gamma}$ listed above. Defining the symmetric and antisymmetric parts $u_a{}^b=\textstyle{\frac{1}{2}}\big(x_a{}^b\!+\!x_b{}^a\big)$, $v_a{}^b=\textstyle{\frac{1}{2}}\big(x_a{}^b\!-\!x_b{}^a\big)$, we find: \begin{align} \widetilde{d}_{000}:\qquad x_0{}{}^{0}=& \, 0\,; \nonumber \\ \widetilde{d}_{00a} :\qquad x_0{}{}^{a}=& \,\textstyle{\frac{3}{2}}x_a{}{}^{0}\,; \nonumber \\ \widetilde{d}_{0ab}:\qquad u_a{}{}^{b} = & \,\,x_0{}{}^{e}d_{eab} \,; \nonumber \\ \widetilde{d}_{abc}:\quad \textstyle{\frac{1}{2}}\big(x_a{}^0\delta_{bc} \!+\!x_b{}^0\delta_{ac}\!+\!x_c{}^0\delta_{ab}\big) = &\, \big(u_a{}^e d_{ebc} \!+\! u_b{}^e d_{aec} \!+\!u_c{}^e d_{abe}\big) \!+\!\big(v_a{}^e d_{ebc} \!+\! v_b{}^e d_{aec} \!+\!v_c{}^e d_{abe}\big). \nonumber \end{align} Substituting the third condition into the last, with the help of the second relation, yields the cyclically permuted once-contracted $(dd)$ quartet which by a standard identity (for more details see the related discussion in appendix \ref{sec:Schurology} below, and \cite{macfarlane1968gell}) precisely cancels the cyclic $(\delta\delta)$ form on the left hand side, leaving \[ v_a{}^e d_{ebc} \!+\! v_b{}^e d_{aec} \!+\!v_c{}^e d_{abe} =0. \] At the same time, from antisymmetry, $v_a{}^b \in so(8)$, as it preserves the metric $\delta_{ab}$. It is well known that $su(3)$ is a maximal (simple) subalgebra of $so(8)$; indeed $v_a{}^b = w^e f_{eab}$ (for arbitrary $w^e$) solves this constraint by virtue of the cyclic $(df)$ quartet identity (for details see appendix \ref{sec:Schurology} below and \cite{macfarlane1968gell}). From the above, the (real) Lie algebra $h_d(8,1)$ is sixteen dimensional, spanned by the adjoint octet $ (F_a )_b{}^c=f_{abc}$, $ (F_a )_0{}^b= (F_a )_b{}^0=0$, together with the additional octet $ (D_a )_0{}^b=\delta_a{}^b$, $ (D_a )_b{}^0=\textstyle{\frac 23}\delta_a{}^b$, $ (D_a )_b{}^c=d_{abc}$. The isomorphism $F_a\rightarrow \lambda_a, D_a\rightarrow i\lambda_a$ with $sl(3,{\mathbb C})_{\mathbb R}$ is verified with the help of another $(dd)$ to $(f\!\!f)$ identity, this time of antisymmetric rather than cyclic type.
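The homomorphism $SL(3,{\mathbb C})\rightarrow H_d(8,1)$ described above also lends itself to a direct numerical illustration: the real $9\times 9$ matrix $m_A$ induced by conjugation with a random unit-determinant $A$ should preserve the extended tensor $\widetilde{d}_{\alpha\beta\gamma}$. A minimal sketch in Python/NumPy follows (illustrative only; the tensor is assembled directly from the component values quoted above, and the variable names are ours).
\begin{verbatim}
import numpy as np

# Extended basis lam[0] = I_3, lam[1..8] = Gell-Mann matrices
lam = [np.eye(3, dtype=complex)]
lam += [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]]
lam.append(np.diag([1, 1, -2]) / np.sqrt(3))

# d_abc, and the extended 9x9x9 d-tilde with the nonzero entries quoted above
d = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        for c in range(8):
            d[a, b, c] = 0.25 * np.trace(
                (lam[a + 1] @ lam[b + 1] + lam[b + 1] @ lam[a + 1]) @ lam[c + 1]).real
dt = np.zeros((9, 9, 9))
dt[0, 0, 0] = 1.5
for a in range(1, 9):
    dt[0, a, a] = dt[a, 0, a] = dt[a, a, 0] = -0.5
dt[1:, 1:, 1:] = d

# Random A with unit determinant (normalized by the principal cube root of det A)
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = A / np.linalg.det(A) ** (1 / 3)

# Induced real 9x9 matrix:  A lam_beta A^dagger = sum_alpha m[alpha, beta] lam_alpha,
# extracted with the dual basis ((1/3) Tr for lam_0, (1/2) Tr(. lam_a) otherwise)
w = [1 / 3] + [1 / 2] * 8
m = np.array([[w[al] * np.trace(lam[al] @ A @ lam[be] @ A.conj().T).real
               for be in range(9)] for al in range(9)])

# d-tilde should be preserved when all three slots are transformed by m
dt_transformed = np.einsum("pqr,pa,qb,rc->abc", dt, m, m, m)
print("d-tilde preserved:", np.allclose(dt_transformed, dt))
\end{verbatim}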
Clearly the qubit SLOCC group $SL(2,{\mathbb C})_{\mathbb R} \cong SO(3,1)$ (reviewed in appendix \ref{sec:TwoQubitReview} below) finds a precise parallel with $H_d(8,1)$ in the above qutrit setting. We here identify $H_d(8,1)$ as the appropriate qutrit `relativistic' type symmetry group, and explore its role in two qutrit entanglement. As discussed in appendix \ref{sec:TwoQubitReview}, one way to proceed is simply to identify relevant polynomial invariant quantities, once again using group character methods adapted from the $SO(3,1)$ case to this case. In previous work \cite{FauserJarvisKing2006nbr} we have investigated extensions of the group character methods (used above in the enumeration of local unitary invariants) to `non-classical' matrix groups, and $H_d(8,1)$ is such a group (although it happens to be locally isomorphic to $SL(3,{\mathbb C})_{\mathbb R}$ in this case). The relevant theory for computing the Molien series is outlined in appendix \ref{sec:Schurology} (see Theorem 2 and equation (\ref{eq:MolienHd81app})), and the result is conjectured to be \begin{equation} \label{eq:MolienHd81} 1+z^3+2 z^6+5 z^9+12 z^{12} +\cdots = \frac{1+ \cdots}{(1-z^3)(1-z^6)(1-z^9)^3(1-z^{12})^6\cdots}\, , \end{equation} suggesting the existence of a single invariant at each of degrees 3 and 6, three at degree 9, and six at degree 12 in this case. To exemplify the method, we give the construction for the lowest degree quantities. Recall that the standard coordinates of the density operator are given as $r^a$, $r^{\underline{a}}$, $R^{a\underline{a}}$, or simply $r^{\alpha \underline{\alpha}}$ in the $9\times 9$ basis. By analogy with the method applied in the local unitary case, LSL invariants are now connected, totally contracted words in $\{r^{\alpha \underline{\alpha}}, \widetilde{d}_{\alpha\beta\gamma}, \widetilde{d}_{\underline{\alpha}\underline{\beta}\underline{\gamma}} \}$ written with summation over repeated indices. At degree three, there is clearly only one invariant, \begin{equation} C_3 = \widetilde{d}_{\alpha\beta\gamma}\widetilde{d}_{\underline{\alpha}\underline{\beta}\underline{\gamma}} r^{\alpha \underline{\alpha}}r^{\beta \underline{\beta}}r^{\gamma \underline{\gamma}}\,. \end{equation} For degree 6 we must examine the different possible index connectivities. Without loss of generality, we can order the six $r$ tensors so that their first (non-underlined) index follows the ordering of the corresponding $\widetilde{d}$ subscripts. There remain in principle $\texttt{C}^6_3=20$ possible choices of index matchings for the remaining underlined indices, to partners on their respective $\widetilde{d}$ tensors. However, the total symmetry and mutual equivalence of the two underlined $\widetilde{d}$ tensors, together with index re-labellings, lead to only one independent form, which can be taken as \begin{align} C_6 = & \, \widetilde{d}_{\alpha\beta\gamma}\widetilde{d}_{\rho\sigma\tau} r^{\alpha \underline{\alpha}}r^{\beta \underline{\beta}}r^{\gamma \underline{\gamma}} r^{\rho \underline{\rho}}r^{\sigma \underline{\sigma}}r^{\tau \underline{\tau}} \widetilde{d}_{\underline{\beta}\underline{\gamma}\underline{\rho}} \widetilde{d}_{\underline{\sigma}\underline{\tau}\underline{\alpha}}\, , \end{align} which exemplifies the pattern of cross-overs in the matchings between the underlined $r$ indices and their corresponding $\widetilde{d}$ subscripts.
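Both $C_3$ and $C_6$ can be evaluated directly from these contractions, and their invariance under the projective local action checked numerically. The sketch below is again illustrative Python/NumPy only (an assumption, not the method of derivation); no trace renormalization is applied after the transformation, consistent with the homogeneity of the invariants under the projective action.
\begin{verbatim}
import numpy as np

# Extended Gell-Mann basis lam[0] = I_3, lam[1..8], with d and d-tilde as in the text
lam = [np.eye(3, dtype=complex)]
lam += [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]]
lam.append(np.diag([1, 1, -2]) / np.sqrt(3))

d = np.array([[[0.25 * np.trace((lam[a] @ lam[b] + lam[b] @ lam[a]) @ lam[c]).real
                for c in range(1, 9)] for b in range(1, 9)] for a in range(1, 9)])
dt = np.zeros((9, 9, 9))
dt[0, 0, 0] = 1.5
for a in range(1, 9):
    dt[0, a, a] = dt[a, 0, a] = dt[a, a, 0] = -0.5
dt[1:, 1:, 1:] = d

wts = np.array([1 / 3] + [1 / 2] * 8)      # dual-basis weights for lam_0 and lam_a

def coords(rho):
    """9x9 coordinate array r^{alpha alphabar} of a two-qutrit operator."""
    return np.array([[wts[a] * wts[b] * np.trace(rho @ np.kron(lam[a], lam[b])).real
                      for b in range(9)] for a in range(9)])

def C3(r9):
    return np.einsum("abc,ABC,aA,bB,cC->", dt, dt, r9, r9, r9)

def C6(r9):
    T = np.einsum("abc,aA,bB,cC->ABC", dt, r9, r9, r9)   # underlined indices remain open
    return np.einsum("ABC,PQT,BCP,QTA->", T, T, dt, dt)  # crossed-over matching pattern

rng = np.random.default_rng(2)
M = rng.normal(size=(9, 9)) + 1j * rng.normal(size=(9, 9))
rho = M @ M.conj().T
rho /= np.trace(rho)

def random_sl3():
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    return A / np.linalg.det(A) ** (1 / 3)

A, B = random_sl3(), random_sl3()
rho_prime = np.kron(A, B) @ rho @ np.kron(A, B).conj().T   # projective action, trace not restored

r9, r9p = coords(rho), coords(rho_prime)
print("C3 invariant:", np.isclose(C3(r9), C3(r9p)))
print("C6 invariant:", np.isclose(C6(r9), C6(r9p)))
\end{verbatim}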
To end our analysis, we give the explicit form of the $H_d(8,1)$ cubic invariant $C_3$ in terms of the previously-identified local unitary invariants of Table \ref{tab:QuadCub} (see \text{\sf S} \ref{sec:LocalUnitary} above). For trace-normalized density operators, $C_3$ will of course no longer be a homogeneous quantity. Using the transcription between the components $r^{\alpha \underline{\alpha}}$ and the octet basis $\{r, \overline{r}, R\}$, and using the above identification of the components of the extended $\widetilde{d}$ tensors, we find \begin{equation} C_3=K^d_{003}+\textstyle{\frac 32}\big(K_{300}+K_{030}\big)+\textstyle{\frac 32}\big(K_{111}-K_{102}-K_{012}\big) -\textstyle{\frac 14}\big(K_{200}+K_{020} \big)+\textstyle{\frac{1}{12}}K_{002}+\textstyle{\frac{1}{324}}\,. \end{equation} The interpretation of such invariants to characterise entanglement is discussed in appendix \ref{sec:Monotones}. In particular, it is proven how appropriate powers of (the absolute value of) such homogeneous LSL polynomial invariants lead to \emph{bona fide} entanglement monotones. In the present case, the analysis shows that the quantity $|C_3|^{\frac 13}$ will be such a quantity. Similarly, at degree 6 the quantity $|C_6|^{\frac 16}$ will provide a further example. \section{Conclusions.} \label{sec:Conc} This paper has developed a group-theoretical analysis of the density operator for the two qutrit system, and its local invariants. Starting with the group $SU(3)\times SU(3)$ of local unitary transformations, character methods have been used to enumerate and construct polynomial invariants (up to degree four) in the components of the density operator (\text{\sf S} \ref{sec:LocalUnitary}). A larger, SLOCC type transformation group acts on the density operator regarded projectively (up to normalization to unit trace). This is the group $SL(3,{\mathbb C})_{\mathbb R}$, presented as a group of real $9\times 9$ matrices (a matrix subgroup of $GL(9,{\mathbb R})$) acting linearly on the 9-dimensional space of projective coordinates for the qutrit density matrix. This group (which is denoted $H_d(8,1)$ in this context), has been explicitly identified in \text{\sf S} \ref{sec:SL3C}, and its character theory discussed, in order to identify appropriate LSL polynomial invariants. The details of the group character methods are outlined in the appendix to the paper, which also includes a review of our earlier paper \cite{KingWelshJarvis2007} on the two qubit system and its ring of local unitary invariants. A further appendix discusses how polynomial invariants can be used to form entanglement monotones, and exemplifies this for the two qubit case for some standard $SL(2,{\mathbb C})$ invariants. In the same vein, in \text{\sf S} \ref{sec:SL3C} we count and construct candidates for independent two qutrit entanglement monotones, giving in particular the explicit form for the lowest degree, cubic invariant in terms of the linear, cubic and quadratic local unitary invariants identified in \text{\sf S} \ref{sec:LocalUnitary}. \vfill \noindent \textbf{Acknowledgements}\\ PDJ thanks S Szalay (Wigner Research Centre, Budapest) for fruitful correspondence, for pointing out key literature on entanglement monotones, and for generously providing his own unpublished notes (see also \cite{szalay2013quantum}). Discussion and correspondence on aspects of this work with G Barwick, A Bracken, D Ellinas, B Fauser, S Jacobsen, R King, P Levay, A Ratcliffe, J Sumner and T Welsh are gratefully acknowledged. 
\newpage \begin{appendix} \renewcommand{\theequation}{\thesection-\arabic{equation}} \section{Counting invariants for mixed state systems via group characters.} \label{sec:Schurology} \subsection{LU invariants for qubits and qutrits} In this section, for the sake of completeness, we give a brief outline of the combinatorial description of the representations and characters of symmetry groups arising as transformations on the density operator for bipartite qubit and qutrit mixed systems. The enumeration of polynomial invariants (at low degree) will then become possible, with a knowledge of standard manipulations of the required characters (as mentioned in \text{\sf S} \ref{sec:Intro}, this method is not in general sufficient to derive the full structure of the Molien series). We refer to \cite{KingWelshJarvis2007} for details of the two qubit case, and to \cite{jarvis:sumner:2012aith} for a general review from which the following is adapted. The mathematical setting for the study of entanglement invariants for $K$-fold composite qu$\!\!D\!$it systems, is that there is a model space $V$ which is a $K$-fold tensor product, $V \cong {\mathbb C}^D\otimes {\mathbb C}^D\otimes \cdots \otimes {\mathbb C}^D$. The components of $V$ in some standard basis describe the state; for example in Dirac notation a pure state is a ket $|\psi \rangle \in V$ of the form $ |\psi \rangle = \sum_{1}^{D} \psi^{i_1 i_2 \cdots i_K} |i_1,i_2, \cdots, i_K \rangle $ in the case of qu$\!\!D\!$its. For mixed states, the model space is instead $W =V \otimes V^*$ (a $K$-fold tensor product of ${\mathbb C}^D\otimes {\mathbb C}^D{}^*$), and coordinatized via the density operator $\rho \in W$. For present purposes we illustrate only the bipartite case, \begin{equation} \rho =\sum_{1}^{D} \rho^{k\underline{\ell};i\underline{j}}|i,\underline{j}\rangle \langle k ,\underline{\ell}| \end{equation} where we have introduced the underline index convention for denoting components with respect to the second subspace of the composite system. We focus attention on the linear action of the appropriate matrix group $G = G_1 \times G_2 \times \cdots \times G_K$ on $V$ and $W$. In the qu$\!\!D\!$it case each local group $G_k$ is a copy of $U(D)$, but given the irreducibility of the fundamental representation, for polynomial representations, the analysis can be done using the character theory of the complex group. We compute the Molien series $h(z) = \sum_0^\infty h_n z^n$ degree-by-degree using combinatorial methods based on classical character theory \cite{Weyl1939,littlewood1940}. Characters of $GL(D)$ and $SL(D)$ differ only by powers of the determinant character (and similarly for $U(D)$ and $SU(D)$). All evaluations are carried out using the group representation package {\small \texttt{SCHUR} }\normalsize \cite{SCHURsfg}. In terms of class parameters (eigenvalues) $x_1,x_2,\cdots, x_D$ for a nonsingular matrix $m \in GL(D)$, the defining representation, the character is simply $Tr(m) = x_1+ x_2+ \cdots + x_D$; the contragredient has character $Tr(m^T{}^{-1}) = x_1{}^{-1}+ x_2{}^{-1}+ \cdots + x_D{}^{-1}$. Irreducible polynomial and rational characters of $GL(D)$ are given in terms of the celebrated Schur functions \cite{Weyl1939,littlewood1940} denoted $s_\lambda(x)$, where $\lambda = (\lambda_1,\lambda_2,\cdots,\lambda_D)$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_D$, is an integer partition of at most $D$ nonzero parts. 
$\ell(\lambda)$, the length of the partition, is the index of the last nonzero entry (thus $\ell(\lambda)=D$ if $\lambda_D >0$). $|\lambda|$, the weight of the partition, is the sum $|\lambda|=\lambda_1+\lambda_2 + \cdots + \lambda_D$, and we write $\lambda \vdash |\lambda|$. For brevity we write the Schur or $S$-function simply as $\{\lambda \}$ where the class parameters are understood\footnote{Partitions $\lambda$ are also abbreviated as words in monoid style, thus $(2^3) \equiv (2,2,2)$, etc.}. Thus the space $V$ as a representation of $G$ as a $K$-fold Cartesian product is endowed with the corresponding product of $K$ characters of the above defining representation of each local group, $\chi= \{1\} \cdot \{1\}\cdot \, \cdots \, \cdot \{1\}$ in the quantum mechanical pure state case, and $\chi = (\{1\} \{\overline{1}\}) \! \cdot \! (\{1\} \{\overline{1}\})\!\cdot \, \cdots \, \cdot\!(\{1\} \{\overline{1}\})$ in the quantum mechanical mixed state case, where $\{1\}$ is the character of the defining representation, and $\{\overline{1}\}$ that of its contragredient. The space of polynomials of degree $n$ in $\psi$ or $\rho$ is a natural object of interest, and by a standard result is isomorphic to the $n$-fold symmetrised tensor product of $n$ copies of $V$ or $W$. Its character is determined by the corresponding Schur function \emph{plethysm}, $\chi \underline{\otimes} \{n\}$, and the task at hand is to enumerate the one-dimensional representations occurring therein. Before giving the relevant results it is necessary to note two further rules for combining Schur functions. The \emph{outer} Schur function product, is simply the pointwise product of Schur functions, arising from the character of a tensor product of two representations. Of importance here is the \emph{inner} Schur function product $\ast$ defined via the Frobenius mapping between Schur functions and irreducible characters of the symmetric group. We provide here only the definitions sufficient to state the required counting theorems in technical detail. For a Hopf-algebraic setting for symmetric functions and characters of classical (and some non-classical) groups see also \cite{FauserJarvis2003hl,FauserJarvisKing2006nbr,FauserJarvisKing2013has}. Concretely, we introduce outer (pointwise) products in the Schur function basis as follows: \[ \{\lambda \} \{ \mu \} = \sum_\nu C^\nu_{\lambda,\mu} \{\nu \}, \] where the $C^\nu_{\lambda,\mu}$ are the famous `Littlewood-Richardson' coefficients. Closely related is the dual operation of skew, which we note here for completeness: \[ \{\lambda \} / \{ \mu \} = \sum_\nu C^\lambda_{\mu,\nu} \{\nu \}. \] Similarly, we introduce structure constants for inner products in the Schur function basis: \[ \{\lambda \} \ast \{ \mu \} = \sum_\nu g^\nu_{\lambda,\mu} \{\nu \}. \] For partitions $\lambda$, $\mu$ of equal weight\footnote{If $|\lambda| \ne |\mu|$ then $\{\lambda \} \ast \{ \mu \}=0$.}, $|\lambda| = |\mu|= n$, say, this expresses the reduction of a tensor product of two representations of the symmetric group ${\mathfrak S}_n$ labelled by partitions $\lambda$, $\mu$. By associativity, we can extend the definition of the structure constants to $K$-fold inner products, \[ \{\tau_1 \} \ast \{\tau_2 \} \ast \cdots \ast \{\tau_K \} = \sum_\nu g^\nu_{\tau_1, \tau_2, \cdots, \tau_K} \{\nu \}. 
\] \noindent \textbf{Theorem 1: Counting unitary invariants \cite{jarvis:sumner:2012aith}} \begin{description} \item[(a) Pure states, $K$-fold composite system]\mbox{}\\ Let $D$ divide $n$, $n = rD$, and let $\tau$ be the partition $(r^D)$ (that is, with Ferrers diagram a rectangular array of $r$ columns of length $D$). Then the number $h_n$ of linearly independent invariants is \[ h_n = g^{(n)}_{\tau,\tau,\cdots,\tau}\quad \mbox{($K$-fold inner product)}. \] If $D$ does not divide $n$, then $h_n =0$. \item[(b) Mixed states, bipartite system \cite{KingWelshJarvis2007}]\mbox{}\\ The number $h_n$ of linearly independent invariants is \[ h_n = \sum_{|\tau|= n,\ell(\tau) \le D^2} \left( \sum_{|\sigma|= n, \ell(\sigma) \le D} g^{\tau}_{\sigma,\sigma}\right)^{\!\!\!\!2}. \] \mbox{}\hfill $\Box$ \end{description} \subsection*{$SU(2)\times SU(2)$ invariants for 2 qubit mixed systems} Application of the above counting theorem for the two qubit mixed system was carried out in \cite{KingWelshJarvis2007}. For details we refer to appendix \ref{sec:TwoQubitReview} below, which gives a review of this work and its extension to constructions of entanglement monotones (the Molien series is given in equation (\ref{eq:MolienSU2SU2})). \subsection*{$SU(3)\times SU(3)$ invariants for 2 qutrit mixed systems} Counting the first few coefficients $h_n$ using the above theorem yields the Molien series for the two qutrit mixed system, \begin{equation} \label{eq:SU3SU3Molien} h(z) = 1 + z + 4 z^2 + 11 z^3 + 34z^4 + 108z^5+ \cdots\,. \end{equation} Given that \begin{equation} \label{eq:SU3SU3MolienRatl} \frac{1+ \cdots}{(1-z)(1-z^2)^3(1-z^3)^7(1-z^4)^{17}\cdots} = 1 + z + 4 z^2 + 11 z^3 + 34z^4 + \cdots\,, \end{equation} we look to construct (apart from the trace), three quadratic, seven cubic and seventeen algebraically independent quartic invariants (see \text{\sf S} \ref{sec:LocalUnitary} above). On the basis of these partial results, the count of invariants and the corresponding invariant ring are considerably richer than those for the two qubit mixed system\footnote{Establishing the full generating function via Molien's theorem would require using the Haar measure on the $SU(3)\times SU(3)$ group.}. Candidates for the quadratic and cubic invariants are listed in Table \ref{tab:QuadCub} above. For the seventeen quartic invariants, a count of contributions at separate degrees in each of the contributing tensors $r$, $\overline{r}$, $R$ aids identification (see Table \ref{tab:GradedCount} above). In order to arrive at these assignments, the group character arguments can be adapted as follows. Any invariant quantity $K_{pqs}$ at degree $r^p\overline{r}^qR^s$ must arise as an admissible coupling (to the trivial representation) between all irreducible representations occurring in the respective symmetrized products. Thus individual group character plethysms of the respective powers $p,q,s$ must be derived, and the number of invariants is simply the coefficient of the trivial character after taking the outer product\footnote{This problem also arises in the context of constructing non-subgroup labelling operators for a certain group embedding, in this case $SU(9)$ in a basis of $SU(3)\times SU(3)$. The transcription to the equivalent embedding for the qubit case has been discussed in \cite{KingWelshJarvis2007} and the corresponding $SU(4)$ labelling problem treated in \cite{Quesne1976:sea}. }. 
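The sums appearing in Theorem 1 can be evaluated with any package providing symmetric-group Kronecker coefficients, not only with {\small \texttt{SCHUR} }\normalsize. As an illustration (this is an assumption about the reader's toolchain, not the route used for the calculations reported here), the following short SageMath/Python sketch implements Theorem 1(b) and recovers the low-degree coefficients of (\ref{eq:SU3SU3Molien}).
\begin{verbatim}
# Requires SageMath (run inside `sage`, or with the sage.all Python module available)
from sage.all import SymmetricFunctions, Partitions, QQ

s = SymmetricFunctions(QQ).schur()

def h_n(n, D):
    """Number of linearly independent LU invariants at degree n, via Theorem 1(b)."""
    total = 0
    for tau in Partitions(n, max_length=D * D):
        # inner sum of Kronecker coefficients g^tau_{sigma,sigma}
        inner = sum(s[sigma].kronecker_product(s[sigma]).coefficient(tau)
                    for sigma in Partitions(n, max_length=D))
        total += inner ** 2
    return total

print([h_n(n, 3) for n in range(1, 5)])   # two qutrits: [1, 4, 11, 34]
print([h_n(n, 2) for n in range(1, 5)])   # two qubits:  [1, 4, 6, 16]
\end{verbatim}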
Working with $S$-function notation for $SU(3)$, and (octet) adjoint representation $\{21\}$, we have for the direct product characters \[ r \cong \{21\} \cdot 1, \qquad \overline{r}\cong 1\cdot \{21\}, \qquad R \cong \{21\}\cdot \{21\} \] where for better readability the trivial character has here been written simply as `$1$' (it appears formally as $\{0\}$ below) . Thus the number $H_{pqs}$ of such invariants (giving the generating function $H(x,y,z) = \sum H_{pqs}x^py^qz^s$ with $h(z) = H(z,z,z)$) is \[ H_{pqs} = \left(\big(\{21\} \cdot 1\big)\underline{\otimes}\{p\}\right) \left(\big(1\cdot \{21\}\big)\underline{\otimes}\{q\}\right) \left(\big(\{21\} \cdot \{21\}\big)\underline{\otimes}\{s\}\right) \bigg|_{\{0\}\cdot\{0\}} \] Using this result we can enumerate the required contributions term by term from the list of contributing powers, namely $301, 031, 103, 013, 202, 022, 112, 121, 211$, and $004$, as follows. All character manipulations are performed with {\small \texttt{SCHUR} }\normalsize \cite{SCHURsfg} for the group $SU(3)$. \begin{description} \item[301, 031]\mbox{}\\ We have \[ \{21\}\underline{\otimes}\{3\} = \{63\} + \{42\} + \{3 \} + \{3^2\} + \{21\} + \{0\}\,. \] The $\{3\}$ plethysm of one octet is accompanied by a singlet (no involvement of the other octet) and so remains one-sided, $\{21\}\underline{\otimes}\{3\}\cdot \{0\}$ or its left-right swapped version. In the final outer product with the tensor $\{21\}\cdot \{21\}$, there will be on one side a left over $\{21\}\{0\}=\{21\}$, which remains an octet. Thus there is no contribution at this grading. \item[103,013]\mbox{}\\ In this case the $\{3\}$ plethysm applies to the tensor direct product $\{21\} \cdot \{21\}$, and by a standard distributivity property \cite{fauser:jarvis:king:2010a:vertex}, devolves to the evaluation of the sum of direct products of plethysms for all partition types on each side. The total count for each grading $103,013$ thus requires accumulating all resulting terms of the form $\{21\}\cdot \{0\}$, $\{0\}\cdot \{21\}$ respectively, which can couple to the remaining octet: \begin{align} \{21\}\underline{\otimes}\{3\} = & \,\{63\} + \{42\} + \{3 \} + \{3^2\} + {\mathbf \{21\} }+ {\mathbf\{0\} }\,, \nonumber \\ \{21\}\underline{\otimes}\{21\} = & \, \{51\} + \{54\} + 2\{42\} + \{3\} + \{3^2\} + 3\{21\}\,, \nonumber \\ \{21\}\underline{\otimes}\{1^3\} = & \,\{42\} + \{3 \} + \{33\} + {\mathbf\{21\} } + {\mathbf\{0 \} }\,, \nonumber \end{align} in which the first and third lines, $\{21\}\underline{\otimes}\{3\}\cdot \{21\}\underline{\otimes}\{3\}$ and $\{21\}\underline{\otimes}\{1^3\}\cdot\{21\}\underline{\otimes}\{1^3\}$, will each contribute 1 such term. \item[202, 022]\mbox{}\\ We compute \begin{align} \{21\}\underline{\otimes}\{2\} = & \, \{42\} + \{21\} + \{0\}\,, \nonumber \\ \{21\}\underline{\otimes}\{1^2\} = & \, \{3\} + \{3^2 \} + \{21\} \,. \nonumber \end{align} These are to be applied to $\{21\} \cdot \{21\}$, which is to couple only to the symmetric $\{2\}$ plethysm of one of the other octets. Thus only the $\{2\}$ plethysm itself can provide the required singlet on one side. On the other hand, the option of having both quadratic terms coupling to $\{0\}$ would simply amount to a disconnected term. Thus, the only couplings to an overall singlet arise from outer products either between the two resulting octets, $\{21\}\{21\}$, or the two 27's, $\{42\}\{42\}$, and we infer 2 contributions for each of these gradings. 
\item[112]\mbox{}\\ Both symmetrization and antisymmetization of the tensor $\{21\}\cdot\{21\}$ again include $\{21\}\cdot\{21\}$, which on each side can couple to the two octets, again giving 2 contributions. \item[121, 211]\mbox{}\\ Again the symmetrization entailed in each quadratic term can only couple to an invariant through octet $\{21\}$ contributions, and there is thus just 1 further invariant in each case. \end{description} Discounting the $400$ and $040$ gradings (which cannot yield totally connected tensor contractions) leaves a deficit of 5 invariants out of the required 17 (compare Table \ref{tab:GradedCount}) to come from the remaining case $004$, that is, pure quartic terms in $R$. The counting can indeed be confirmed by examining all $\{21\}\underline{\otimes}\{\sigma\}$ character plethysms for all partitions of weight 4, $\sigma \vdash 4$, in the same manner as with the gradings treated above\footnote{The result is 6 linearly independent terms, which includes the disconnected form $(RR)^2= K_{002}^2$ (see \text{\sf S} \ref{sec:LocalUnitary} above).}. However, to show the connections with concrete instances of the claimed invariants, we provide here instead, a combinatorial argument to support the existence of the 5 remaining invariants at this degree. Since the argument will involve index matchings based on the components of the density operator in the original, defining representation, we first examine a simpler case where the count is already established, and we confirm the correct number of terms by explicit computation. This is done below for the two distinct contributions identified above for the $103$ grading (the $013$ case is analogous). Having established how tensor contractions can be transcribed in principle into the octet basis, we then present an argument for the required five $004$-type quartic terms. Consider then possible total tensor contractions for terms of the form $rRRR$. In the defining representation, $r$ has components $r^i{}_j$, and $R$ has components $R^{i\underline{p}}{}_{j \underline{q}}$ (these must in fact be traceless tensors). Without loss of generality, the total contraction of the underlined indices on the three $R$ tensors can be arranged in cyclic order, with the contraction of the non-underlined indices with the additional $r^i{}_j$ still to be applied: \[ r^i{}_jR^{\cdot \underline{p}}{}_{\cdot \underline{q}}R^{\cdot \underline{q}}{}_{\cdot \underline{r}}R^{\cdot \underline{r}}{}_{\cdot \underline{p}} \,. \] The possibilities for the remaining contractions of $i,j$ can be enumerated by choosing two positions from the three $R$ factors, giving 6 options $(1,2)$, $(2,1)$, $(1,3)$, $(3,1)$, $(2,3)$, $(3,2)$; however, cyclically reordering the factors reveals only two distinct forms (avoiding traces), represented by \[ (1,2) = r^i{}_jR^{k \underline{p}}{}_{i \underline{q}}R^{j \underline{q}}{}_{\ell \underline{r}}R^{\ell \underline{r}}{}_{k \underline{p}} \,,\qquad (2,1) = r^i{}_jR^{j \underline{p}}{}_{\ell \underline{q}}R^{k \underline{q}}{}_{i \underline{r}}R^{\ell \underline{r}}{}_{k \underline{p}} \, . 
\] Now assuming that (up to normalization) the transcription between adjoint indices (traceless matrices) and the octet basis follows $r^i{}_{j} \leftrightarrow r^d\big(\lambda_d\big){}^i{}_j$, we can match the above index orderings to traces of Gell-Mann matrices from each subspace: \[ (1,2) \rightarrow Tr\big(\lambda_{\underline{a}} \lambda_{\underline{b}}\lambda_{\underline{c}}\big)R^{a \underline{a}} R^{b \underline{b}}R^{c \underline{c}}r^d Tr\big(\lambda_d \lambda_b \lambda_c \lambda_a\big),\, (2,1) \rightarrow Tr\big(\lambda_{\underline{a}} \lambda_{\underline{b}}\lambda_{\underline{c}})R^{a \underline{a}} R^{b \underline{b}}R^{c \underline{c}}r^d Tr\big(\lambda_d \lambda_a \lambda_c \lambda_b\big)\,. \] Using standard trace identities shows that certain terms cancel due to symmetry incompatibilities between $f$ and $d$ tensors, and for $(1,2)$ there remain the index patterns (up to proportionality) \[ d_{\underline{a}\underline{b}\underline{c}}\delta_{ac}\delta_{bd}\,, \,\, d_{\underline{a}\underline{b}\underline{c}} (dd)_{ac,bd}\,, \,\, d_{\underline{a}\underline{b}\underline{c}} (df)_{ac,bd}\,,\,\, f_{\underline{a}\underline{b}\underline{c}} (f\!\!d)_{ac,bd}\,, \,\, \mbox{and}\,\, f_{\underline{a}\underline{b}\underline{c}} (f\!\!f)_{ac,bd} \,, \] where the quartets are once-contracted forms, for example $(dd)_{ac,bd} := d_{ace}d_{ebd}$, and so on. By re-labelling, in the presence of $R^{a \underline{a}} R^{b \underline{b}}R^{c \underline{c}}$, each of these can be replaced by its cyclic sum (over $a,b,c$). Under such cyclic sums, standard identities (see \cite{macfarlane1968gell,azcarraga:macfarlane:mountain:perezbueno:1998invariant}) show that the $(df)$ and $(f\!\!f)$ quartets vanish, while the $(dd)$ quartet reduces to the first, $(\delta\delta)$ term. The only remaining couplings are thus the first and third, $(\delta\delta)$ and $(f\!\!d)$ contributions, which moreover differ in sign between $(1,2)$ and $(2,1)$ above. By taking appropriate linear combinations we can separate these, and finally we identify the required 2 independent candidates for ${103}$ graded invariants in the octet basis as \begin{align} K_{103} = & \,d_{\underline{a}\underline{b}\underline{c}} R^{a \underline{a}} R^{a \underline{b}}R^{d \underline{c}}r^d \, \nonumber \\ K'_{103} = & \, f_{\underline{a}\underline{b}\underline{c}} R^{a \underline{a}} R^{b \underline{b}}R^{c \underline{c}}r^d(f\!\!d)_{ab,cd}\,. \nonumber \end{align} Correspondingly, there will also be two $013$ graded invariants $K_{013}, K_{013}'$, defined by interchanging the roles of underlined and non-underlined indices. Turning now to the $004$ graded ($R^4$) invariants, we provide the following combinatorial argument based on the defining representation, and omit the subsequent transcription to the octet basis with $f$ and $d$ coupled tensors, which can be carried out as done above for grading $103$. As with the $103$ case, we start with\footnote{For completeness we should also consider the case of a product of two separate traces in the underlined indices, $R^{i \underline{p}}{}_{j \underline{q}}R^{\cdot \underline{q}}{}_{\cdot \underline{p}}R^{\cdot \underline{r}}{}_{\cdot \underline{s}}R^{\cdot \underline{s}}{}_{\cdot \underline{r}}$. The `diagonal' trace option (2,2) for the non-underlined indices will amount to the afore-mentioned disconnected form. 
In the octet basis for the underlined indices, the resulting $(\delta\delta)$ type coupling will in any case arise for the remaining options, as a part of the couplings generated from the totally connected underlined trace form treated in the text. For example, under cyclic symmetry, it is linearly related to cyclic sums of $(dd)$ quartets. An analogous comment applies for the $(1,1)$, $(2,2)$ and $(3,3)$ type options in the $103$ and $013$ cases.} \[ R^{i \underline{p}}{}_{j \underline{q}}R^{\cdot \underline{q}}{}_{\cdot \underline{r}}R^{\cdot \underline{r}}{}_{\cdot \underline{s}}R^{\cdot \underline{s}}{}_{\cdot \underline{p}} \, \] and consider the patterns of subsequent tensor contractions with the remaining unassigned indices. These can be enumerated by citing the positions of the two $R$ terms at which the ${}^i{}_j$ superscript and subscript of the first $R$ term are summed; the remaining contractions are then determined automatically. For example, the $(4,2)$ case would be \[ (4,2)=R^{i \underline{p}}{}_{j \underline{q}}R^{j \underline{q}}{}_{k \underline{r}}R^{k \underline{r}}{}_{\ell \underline{s}}R^{\ell \underline{s}}{}_{i \underline{p}}\,. \] There are thus 9 cases, but again these are subject to re-arrangement and cyclic re-ordering moves, under which it is easy to see that there are only five distinct terms: \begin{align} (3,3); \quad (2,\,&4); \quad (4,2); \nonumber \\ (2,2) \leftrightarrow &\,\, (4,4); \nonumber \\ (3,2) \leftrightarrow (2,3) \leftrightarrow &\, \,(3,4) \leftrightarrow (4,3)\,. \nonumber \end{align} We tentatively suggest, then, that the required five $K_{004}$ type invariants can be identified with these five distinct terms. With this discussion we complete the identification of the full complement of seventeen algebraically independent quartic invariants for the two qutrit mixed system. \subsection{LSL invariants for qubits and qutrits} As discussed in \text{\sf S} \ref{sec:SL3C} above, for the qutrit case, and following the standard description for qubits as reviewed in appendix \ref{sec:TwoQubitReview} below, local measurement protocols include, as special cases, more general types of symmetry group actions on quantum states and density operators than simply local unitary transformations. We examine here the role played by the local groups $SL(2,{\mathbb C})$ and $SL(3,{\mathbb C})$ respectively. The combinatorial representation and character theory used in the previous section on local unitary invariants was indeed carried out by default already using the characters of these complex groups, exploiting the irreducibility of the fundamental representation. Thus the counts of independent LU invariants and of LSL invariants coincide in the cases involving pure states. However, the situation for these SLOCC LSL groups is different in their guise as `relativistic' groups for mixed qubit and qutrit systems. They have a presentation via homomorphic images, as real matrix groups of the appropriate dimension ($4\times 4$ and $9 \times 9$, respectively, so that they are subgroups of $GL(4,{\mathbb R})$ and $GL(9,{\mathbb R})$), acting linearly on the space of projective coordinates for the density operator. For qubits, this is simply the well-known 2:1 covering map, leading to the local isomorphism $SL(2,{\mathbb C})_{\mathbb R}\cong SO(3,1)$, and establishing $SL(2,{\mathbb C})_{\mathbb R}$ as the covering group of the Lorentz group.
For qutrits, the situation is analogous and establishes a different local isomorphism: in this case, a 3:1 covering map between $SL(3,{\mathbb C})_{\mathbb R}$, and a nine dimensional matrix group, which we here denote $H_d(8,1)$. The dimension is written as $(8,1)$ to emphasize the noncompact nature of the group, by analogy with the nomenclature for noncompact orthogonal groups, and the subscript ${}_d$ relates to its definition as an invariance group for a particular tensor (the generalized $9\times 9\times 9$ $\widetilde{d}$ coefficient, as described in \text{\sf S} \ref{sec:SL3C} above). We have described such (generically) `non-classical' groups and their characters elsewhere \cite{FauserJarvisKing2006nbr}, and elaborated on some of the Schur function constructs, extending the classical techniques needed to manipulate formal characters, in \cite{fauser:jarvis:king:2010a:vertex} (see also \cite{fauser:jarvis:king:2013:ribbon}). We here reiterate the main results, in order to formulate counting rules for the relevant local invariants. In the case of the orthogonal groups, the description of irreducible characters requires the introduction of Schur functions of orthogonal type \cite{littlewood1940}, denoted ${[}\lambda{]}$ for partition $\lambda$. A major result is the transcription between these symmetric functions and standard Schur functions $\{\lambda\}$, formalized as the so-called branching rule $ \{\lambda\} = \sum_{\delta \in {\mathcal D}} {[}\lambda/\delta{]} $ expressing the fact that an irreducible character $\{\lambda\}$ of the general linear group reduces to a sum of irreducible orthogonal group characters, resulting from skewing ${[}\lambda/\delta{]}$ for every $\delta$ belonging to particular infinite set ${\mathcal D}$ (all partitions with even row lengths). ${\mathcal D}$ has a more abstract characterization, that of a plethysm ${\mathcal D}= M_{(2)}\equiv \{2\} \underline{\otimes} M$ of the $S$-function $\{2\}$ (reflecting the symmetric metric tensor) by the formal infinite series of all one-part partitions (all symmetrized powers), $M = 1 + \{1\} + \{ 2\} + \{3\} + \cdots$. The relation between standard and orthogonal type Schur functions can be written symbolically as $\{\lambda\} ={[}\lambda/M_{(2)}{]}$. For the case of formal characters of the matrix group $H_d(8,1)$, our work \cite{FauserJarvisKing2006nbr} leads to a parallel formulation. The character of the defining, nine-dimensional representation is denoted ${[\![}1{]\!]}$, and there is a class of characters ${[\![}\lambda{]\!]}$ corresponding to arbitrary partitions, with the difference that such characters are, in general, only indecomposable, not irreducible. The role of ${\mathcal D}=M_{(2)}$ is now played by the formal series $M_{(3)} \equiv \{3\} \underline{\otimes} M$ (reflecting the totally symmetric invariant 3-fold tensor). The branching rule $\{\lambda\} ={[\![}\lambda/M_{(3)}{]\!]}$ again specifies indecomposable, not irreducible, characters in general. In the present case where $H_d(8,1)$ is locally isomorphic to $SL(3,{\mathbb C})_{\mathbb R}$, we do not expect such pathologies, but double- or triple-counting may still arise because of the presence of associated representations and modification rules (see below). The analysis of universal characters for groups of this type is incomplete and has only been pursued in a case-by-case manner \cite{FauserJarvisKing2006nbr}. 
We can now formulate\\ \noindent \textbf{Theorem 2: Counting LSL invariants for bipartite mixed systems}\\ The number $h_n$ of linearly independent mixed state LSL invariants at degree $n$ in the density matrix is \\[-.5cm] \nopagebreak \begin{description} \item[(a) Qubits] \[ h_n = \sum_{\sigma\vdash n, \ell(\sigma) \le 4} \left.{[} \sigma/M_{(2)}{]}\cdot {[} \sigma/M_{(2)}{]} \right.\bigg|_{{[}0{]}\cdot {[}0{]}} \] \item[(b) Qutrits (conjecture)]\footnote{ A proof of these formulae (for either qubits or qutrits) is as follows. Let ${\mathcal M}$ denote the appropriate symmetric plethysm series $M_{(2)}$ or $M_{(3)}$ and ${\mathcal L}$ its inverse. Then the inverse branching rule can be written ${[}\lambda{]} = {\{}\lambda/{\mathcal L}{\}}$, from which $({[}\lambda{]}\cdot {[}\mu{]})\underline{\otimes}{\{}n{\}} = \sum_{\sigma,\tau}g^{{\{}n{\}}}{}_{\sigma,\tau} {[}(\{\lambda/{\mathcal L}\underline{\otimes}\sigma)/{\mathcal M}{]}\cdot {[}(\{\mu/{\mathcal L}\underline{\otimes}\tau)/{\mathcal M}{]}$. In the present cases $\{1/{\mathcal L}\} = \{1\}$, $\{1\}\underline{\otimes}\{\sigma\}=\{\sigma\}$ and the inner product coefficient is $\delta_{\sigma,\tau}$ for partitions of weight $n$. }\\[-.5cm] \[ h_n = \sum_{\sigma\vdash n, \ell(\sigma) \le 9} \left.{[\![} \sigma/M_{(3)}{]\!]}\cdot {[\![} \sigma/M_{(3)}{]\!]} \right.\bigg|_{{[\![}0{]\!]}\cdot {[\![}0{]\!]}} \] \mbox{}\hfill $\Box$ \end{description} In contrast to the counting of local unitary invariants, here it is not possible to give a direct formula for the required multiplicity. The notation $\mbox{} |_{{[}\cdot {]}}$ indicates that the coefficient of the respective trivial character (the one dimensional representation), ${{[}0{]}\cdot {[}0{]}}$ or ${{[\![}0{]\!]}\cdot {[\![}0{]\!]}}$, should be extracted after the summation and skew operations have been carried out. The reason is that the specification of symmetric functions of orthogonal type ${[}\lambda{]}$ contains redundant labelling, and a final stage of \emph{modification} is required to determine if non-standard characters are either equivalent to standard ones, are zero, or are equivalent formally to the negative of standard ones. The final summation therefore contains possible alternating signs and internal cancellations, which need to be taken into account. The situation for characters of type ${[\![}\lambda{]\!]}$ is similar, but the result is formulated as a conjecture since the relevant modification rules are not known (see \cite{FauserJarvisKing2006nbr}). \subsection*{$SL(2,{\mathbb C})\times SL(2,{\mathbb C})$ invariants for 2 qubit mixed systems} Implementing the above theorem in the qubit case leads, as claimed in appendix \ref{sec:TwoQubitReview} below, to the Molien series \begin{equation} \label{eq:MolienLSLqubAll} h(z) = 1+z^2 + 3z^4 + 4z^6 + 7z^8 + 9z^{10} + 14 z^{12} + \cdots = \frac{1+z^4}{(1-z^2)(1-z^4)(1-z^6)(1-z^8)}\,. \end{equation} With respect to the above discussion of character modifications, it is instructive to note how the $h_n$ coefficients are built. In the first place, only even partition weights arise because no $S$-function ${\{}\lambda{\}}$ of odd weight can skew with a plethysm of ${\{}2{\}}$ to give ${\{}0{\}}$. At weight 4 we have the obvious cases ${[}4/M_{(2)}{]}$, ${[}2^2/M_{(2)}{]}$ which skew to ${[}0{]}$ by the corresponding elements of $M_{(2)}$ given that ${\{}2{\} }\underline{\otimes}{\{}2{\}} = {\{}4{\}}+{\{}2^2{\}}$. 
However, ${[}1^4/M_{(2)}{]}$ includes ${[}1^4{]}$ itself, and this non-standard character modifies to ${[}0{]}$ giving a total coefficient of 3, reflecting the existence of the alternative quartic forms $Q_4$ and $\widetilde{Q}{}_4$ (see appendix \ref{sec:TwoQubitReview} below). At weight 6 we have similarly have the cases ${[}6/M_{(2)}{]}$, ${[}42/M_{(2)}{]}$ and ${[}2^3/M_{(2)}{]}$ which skew to ${[}0{]}$ by the corresponding elements of $M_{(2)}$; also ${[}31^3/M_{(2)}{]}$ contains ${[}1^4{]}$ after skewing by $\{2\}$, which again modifies to ${[}0{]}$ giving a total coefficient of 4, rather than 3. Thus there is one additional algebraically independent invariant at degree 6. This pattern continues at degree $8$, but at degree 10, there is now a cancellation, saturating the ${{[}0{]}\cdot {[}0{]}}$ coefficient $h_{10}$ at 9 and preventing further independent invariants at this degree or higher (see also \cite{LuqueThibon2003pi4q}). \subsection*{$SL(3,{\mathbb C})\times SL(3,{\mathbb C})$ invariants for 2 qutrit mixed systems} As mentioned already, the character theory for groups such as $H_d(8,1)$ is not developed completely, and the count of invariants given above in this case should be taken subject to confirmation by explicit constructions. The lowest terms of the Molien series resulting from Theorem 2 are taken to be \begin{equation} \label{eq:MolienHd81app} 1+z^3+2 z^6+5 z^9+12 z^{12} +\cdots = \frac{1+ \cdots}{(1-z^3)(1-z^6)(1-z^9)^3(1-z^{12})^6\cdots}\,, \end{equation} simply reflecting the multiplicities at each weight occurring in the expansion of the relevant symmetric function series \begin{align} M_{(3)} =& \, \{0\} + \{ 3\} + \{3\}\underline{\otimes}\{2\} + \{3\}\underline{\otimes}\{3\} +\{3\}\underline{\otimes}\{4\} +\cdots \nonumber \\ =& \, \{0\} + \{3\} + \big( \{6\} + \{42\}\big) + \big(\{9\} + \{72\} + \{63\} + \{52^2 \} + \{4^2 1\}\big) + \nonumber \\ & \, + \big(\{12 \} + \{10 \,2\} + \{93\} + \{84\} + \{82^2 \} + \{741\} + \{732\} + \nonumber \\ &\, + \{6^2 \} + \{642\} + \{62^3 \} + \{5421\} + \{4^3 \}\big) + \cdots\,, \nonumber \end{align} on the plausible assumption that modification rules, and hence cancellations, will not affect these lowest degree counts. As discussed in \text{\sf S} \ref{sec:SL3C} above, one invariant at each of degrees 3 and 6, three at degree 9, and 6 at degree 12 are expected. This count is partially confirmed in \text{\sf S} \ref{sec:SL3C} where the invariants at degrees 3 and 6 are given explicitly. \section{Homogeneous polynomial entanglement monotones and mixed state systems.} \label{sec:Monotones} As mentioned in \text{\sf S} \ref{sec:SL3C} above, the identification of local unitary invariants (\text{\sf S} \ref{sec:LocalUnitary}) is only the first step towards useful entanglement measures. After quantum operations such as (\ref{eq:RhoTransf}), such a quantity which is a function of the components of the density operator, say $f(\rho)$, is re-evaluated as $f(\rho')$. Since local measurements should not increase the degree of entanglement, the quantity $f$ must be an entanglement monotone, namely we must have the concavity condition \[ p_1 f(\rho'_1)+ p_2 f(\rho'_2) \le f(\rho). 
\] Given a listing of unitary invariants such as Tables \ref{tab:QuadCub}, \ref{tab:GradedCount} above in the two qutrit case, it is in general a difficult problem to assemble properly behaving entanglement monotones (a discussion of the two qubit case is given in appendix \ref{sec:TwoQubitReview} below; the local unitary invariants are provided explicitly as Table 1 of \cite{KingWelshJarvis2007}). However, for quantities which are invariant under the appropriate `relativistic' transformation group, and which are homogeneous polynomials in the (projective) components of the density operator, a well-known construction exists \cite{dur2000three} (see also \cite{eltschka2012multipartite}). We now briefly describe how it can be adapted to the qutrit mixed state case. Recall that the context of the monotonicity requirement is the quantum operation induced by local measurement operators, for example $E_1\otimes I$, $E_2\otimes I$ such that $E_1{}^\dagger E_1+ E_2{}^\dagger E_2=I$. Each $E_i$, $i=1,2$ is assumed to admit a singular value decomposition, $E_i = U_i D_i V_i$ with $U_i$, $V_i$ unitary; in fact $V_1=V_2 \equiv V$ as a consequence of the constraint condition. Thus we have for some $0 \le a^2, b^2,c^2 \le1$ \[ D_1 = \left(\begin{array}{ccc} a&0&0\\ 0&b&0 \\ 0&0&c \end{array}\right),\qquad D_2 = \left(\begin{array}{ccc} \sqrt{1-a^2} &0&0\\ 0&\sqrt{1-b^2} &0\\ 0&0&\sqrt{1-c^2} \end{array}\right) \] and in the nonsingular case $0 < a^2, b^2,c^2 < 1$ we can write $D_i \equiv (d_i)^{\frac 13} \widehat{D}_i$ with $d_i =Det(D_i)$, namely $d_1 = abc$, $d_2 = \sqrt{(1-a^2)(1-b^2)(1-c^2)}$ and $Det(\widehat{D}_i) =1$, that is, $\widehat{D}_i\in SL(3,{\mathbb C})$. Consider the variation under this measurement operation of a homogeneous polynomial $f(\rho)$ which is invariant under local $SL(3,{\mathbb C})\times SL(3,{\mathbb C})$ transformations on the density operator, with degree of homogeneity $h$. We note\footnote{For ease of writing the tensor product is omitted, thus $E_i \rightarrow E_i\otimes I$, and so on.} \begin{align} f(\rho'_i) =f\big({E_i \rho E_i{}^\dagger}/p_i\big) = &\, f\Big(\frac{{d_i}^{\frac 23}}{p_i}U_i{\widehat{D}_i}V\rho V{}^\dagger \widehat{D}_iU_i^\dagger\Big) = \left(\frac{{d_i}^{\frac 23}}{p_i}\right)^{\!\!h}f\big(U_i{\widehat{D}_i}V\rho V{}^\dagger \widehat{D}_iU_i^\dagger\big) \nonumber \end{align} Now \begin{align} \qquad f\big(U_i{\widehat{D}_i}V\rho V{}^\dagger \widehat{D}_iU_i^\dagger\big)=&\, f\big({\widehat{D}_i}V\rho V{}^\dagger \widehat{D}_i\big)=f\big(V\rho V{}^\dagger \big)=f\big(\rho\big)\, ,\nonumber \end{align} where local unitary invariance, local $SL(3,{\mathbb C})$ invariance, and again local unitary invariance, have been used in the respective simplifying steps. Thus we have \[ p_1 f(\rho'_1)+ p_2 f(\rho'_2) = \left(p_1\left(\frac{{d_1}^{\frac 23}}{p_1}\right)^{\!\!h} + p_2\left(\frac{{d_2}^{\frac 23}}{p_2}\right)^{\!\!h}\right)f(\rho). \] This will be $\le f(\rho)$ provided that $f(\rho)\ge 0$ and the prefactor is $\le 1$. A guarantee of positivity is simply to adopt the absolute value $|f(\rho)|$ as the invariant; although nonpolynomial, the homogeneity is of course unaffected.
The concavity of the final expression then depends on the evaluation of the inequality \[ p_1\left(\frac{{d_1}^{\frac 23}}{p_1}\right)^{\!\!h} + p_2\left(\frac{{d_2}^{\frac 23}}{p_2}\right)^{\!\!h} \le 1 \] which entails computing $p_1,p_2$ as weighted sums over the diagonal matrix elements of $D_1^2$, $D_2^2$ with unknowns $x,y, (1-x-y)$ arising from partial traces of products of the unitary operator $V$ with $\rho$ (compare \cite{dur2000three,eltschka2012multipartite}). The resulting inequality is a rational expression to be satisfied for all $0\le x,y,1-x-y\le 1$. It is not known at present for what range of values of $h$ this condition admits solutions. A weaker possibility, which still achieves an entanglement monotone, is to choose a power law scaling of $|f(\rho)|$ which avoids explicit evaluation of the $p_i$, namely $h=1$. This special value can of course be attained for any homogeneous polynomial invariant $f(\rho)$ of degree $h$, by taking the definitive entanglement measure to be $F(\rho) := |f(\rho)|^{\frac {1}{h}}$ from the outset. In this case, repeating the argument, we require for all $0< a^2,b^2,c^2 < 1$, \[ a^{\frac 23}b^{\frac 23}c^{\frac 23}+ (1-a^2)^{\frac 13}(1-b^2)^{\frac 13}(1-c^2)^{\frac 13} \le 1\,. \] This condition can easily be verified with elementary algebra, after simplifying via the following composition of increasing functions to remove the fractional $\textstyle{\frac 13}$ exponents: exponentiation, taking the cube, and taking the logarithm. The singular case can be handled simply as the limit, where one or more of$a^2,b^2,c^2$ $\rightarrow 0,1$. In such cases, the above condition collapses to a trivial identity. In this way the monotonicity of $F(\rho)$ is established for all $0 \le a^2, b^2,c^2 \le1$. This argument applies directly to the $SL(3,{\mathbb C})$ mixed qutrit invariants identified in \text{\sf S} \ref{sec:SL3C} above: namely, we choose $|C_3|^{\frac 13}$ and $|C_6|^{\frac 16}$, respectively. \section{Local unitary invariants for two qubit mixed states, and $SL(2,{\mathbb C})$ entanglement monotones.} \label{sec:TwoQubitReview} In order to provide a context for and contrast with our present analysis of the two qutrit system, we here present a brief review of the two qubit mixed system, based on our earlier analysis \cite{KingWelshJarvis2007}, and its extension to entanglement monotones which we report on here. In the paper \cite{KingWelshJarvis2007}, a complete count, and explicit identification, of all algebraically independent local unitary $SU(2)\times SU(2)$ polynomial invariants was presented. This work confirmed previous computations, and also extended them in the sense that the complete structure of the ring of invariants was identified, together with extensive calculations to verify auxiliary polynomial relations (syzygies). The enumeration was checked both combinatorially using character methods (see appendix \ref{sec:Schurology} above), as well as directly via Molien's theorem. The Molien series \begin{equation} \label{eq:MolienSU2SU2} h(z) = \frac{1+z^4 + z^5 + 3z^6 + 2z^7+ 2z^8 +3z^9 +z^{10} + z^{11} + z^{15} } {(1-z)(1-z^2)^3(1-z^3)^2 (1-z^4)^3 (1-z^6)} \end{equation} indicates a rich variety of invariants, consisting of 10 polynomially independent quantities, with an additional set of secondary invariants, typically having a discrete spectrum (for the complete list see Table 1 of \cite{KingWelshJarvis2007}). 
As emphasized in the main text, knowledge of local invariants is a necessary first step towards the identification of entanglement monotones. A complete algorithm for deriving all such quantities given a list of local unitary invariants is not known. Nonetheless, appeal to a higher symmetry group action, namely the local SLOCC transformation groups in the guise of $SL(2,{\mathbb C})_{\mathbb R} \cong SO(3,1)$ acting as real linear transformations on the space of projective coordinates of the density operator, allows a type of `Lorentz' singular value decomposition to be applied, and for canonical forms of quantities related to $\rho \widetilde{\rho}$ to be identified (see below). A complete diagonalization is not achievable under $SO(3,1) \times SO(3,1)$ local Lorentz transformations, with various exceptional classes in addition to the standard forms \cite{avronbiskerkenneth2007v2qu,avronkenneth2009eg2qu}. An alternative strategy, which we take up here, simply follows the line of identifying and constructing homogeneous polynomial invariants, this time of $SO(3,1) \times SO(3,1)$, and turns out to allow certain types of entanglement monotones to be constructed. Although these quantities are not as fine-grained as the canonical forms, and may not distinguish exceptional types, they have the virtue of being easily computed and do not rely on carrying out a full diagonalisation. Moreover, as we indicate in \text{\sf S} \ref{sec:SL3C} above, they generalize easily to the qutrit case once the appropriate `relativistic' transformation group is identified. The details are as follows. Recall that the single qubit density operator in the Pauli matrix presentation reads \[ \varrho = \textstyle{\frac 12}{\mathbb I}_2 + \sum_{a=1}^3 r^a \sigma_a\,. \] The role of the Lorentz group arises from the observation that under invertible operations $\varrho \rightarrow \varrho' = A \varrho A^\dagger$, we have $Det(\varrho) = Det (\varrho')$ provided $A$ itself has unit determinant, that is $A \in SL(2,{\mathbb C})$. However, the determinant is easily seen to be \[ Det(\varrho) = \textstyle{\frac 14} - \sum r^ar^a\,. \] Since $\varrho'$ no longer has unit trace, the transformation by $A$ must be accompanied by a re-scaling, and it is useful to append an additional projective coordinate ${\textsl r}^0$, \[ \varrho = {\textsl r}^0{\mathbb I}_2 + \textstyle{\sum}_{a=1}^3 {\textsl r}^a \sigma_a \,. \] The determinant is now \[ Det(\varrho) = {\textsl r}^0{\textsl r}^0-\textstyle{\sum}{\textsl r}^a{\textsl r}^a \equiv \textstyle{\sum}_{\alpha,\beta=0}^3 {\textsl r}^\alpha{\textsl r}^\beta \eta_{\alpha \beta} \] and so the group $SL(2,{\mathbb C})$ is seen to act as a matrix group on the 4-dimensional space of projective coordinates of $\varrho$, preserving the bilinear form ${\textsl r}^\alpha{\textsl r}^\beta \eta_{\alpha \beta}$ which is of course nothing but the standard Lorentz metric, with the matrix transformations identified with the Lorentz group $SO(3,1)$ in this case.
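This identification is easily illustrated numerically: conjugating the extended Pauli basis by a random unit-determinant $A$ yields a real $4\times 4$ matrix which preserves $\eta = \mathrm{diag}(1,-1,-1,-1)$ and has unit determinant. A minimal Python/NumPy sketch (illustrative only) follows.
\begin{verbatim}
import numpy as np

# Extended Pauli basis sigma_0 = I_2, sigma_1..3
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = A / np.sqrt(np.linalg.det(A))          # force det A = 1

# Induced real 4x4 matrix: A sigma_beta A^dagger = sum_alpha m[alpha, beta] sigma_alpha,
# with m[alpha, beta] = (1/2) Tr(sigma_alpha A sigma_beta A^dagger)
m = np.array([[0.5 * np.trace(sigma[a] @ A @ sigma[b] @ A.conj().T).real
               for b in range(4)] for a in range(4)])

eta = np.diag([1.0, -1.0, -1.0, -1.0])
print("preserves eta:", np.allclose(m.T @ eta @ m, eta))
print("det m = +1:   ", np.isclose(np.linalg.det(m), 1.0))
\end{verbatim}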
The one- and two-qubit density operators are therefore \begin{align} \varrho = & \, \textsl{r}^0 {\mathbb I}_2 + \textstyle{\sum}_{a=1}^3 {\textsl r}^a \sigma_a \equiv \textstyle{\sum}_{\alpha=0}^3 {\textsl r}^\alpha \sigma_\alpha\,, \nonumber \\ \rho = & \, \textsl{r}^{0\underline{0}} {\mathbb I}_4 + \textstyle{\sum}_{a=1}^3 r^a \sigma_a\otimes {\mathbb I}_2 + \textstyle{\sum}_{a=1}^3 r^{\underline{a}}{\mathbb I}_2 \otimes\sigma_a + \textstyle{\sum}_{a,\underline{a}=1}^3 R^{a\underline{a}} \sigma_a\otimes \sigma_{\underline{a}} \,, \nonumber \\ \equiv &\, \textstyle{\sum}_{\alpha, \underline{\alpha} = 0}^3 r^{\alpha\underline{\alpha}}\sigma_\alpha\otimes \sigma_{\underline{\alpha}} . \nonumber \end{align} The above-mentioned Lorentz singular value decomposition proceeds with the analysis of the matrix $r (\underline{\eta}\hskip.1ex r^\top\hskip-.2ex \eta)$. This simply amounts to forming the tensor $w^\alpha{}_\beta=r^{\alpha\underline{\alpha}}\eta_{\underline{\alpha}\underline{\beta}}r^{\gamma\underline{\beta}} \eta_{\gamma\beta}$, which evidently transforms only under one local Lorentz group, by the usual rules of raising, lowering and contraction of indices (there is an equivalent tensor $w^{\underline{\alpha}}{}_{\underline{\beta}}$ which is isospectral, and transforms under the other local Lorentz group). As mentioned, a complete diagonalization is achievable in all but some exceptional cases, and the analysis is consistent with the emergence of the well-known convex roof extension entanglement measure \cite{wootters1998entanglement} $max(\lambda_1^\downarrow-\lambda_2^\downarrow-\lambda_3^\downarrow-\lambda_4^\downarrow,0)$. From the perspective of group representations, as emphasized in this paper, the problem is again the enumeration and construction of polynomial invariants of the local group $SO(3,1)\times SO(3,1)$, for the representation corresponding to $r^{\alpha\underline{\alpha}}$, namely the direct product of defining 4-dimensional representations (with character ${[}1{]}\cdot {[}1{]}$ in standard notation; see appendix \ref{sec:Schurology} above). For finite dimensional representations this can be carried out for the corresponding problem in $SO(4)\times SO(4)$, and via the isomorphism $SO(4)\cong SU(2)\times SU(2)$ the count becomes combinatorially identical to the problem of identifying local unitary invariants for the \emph{four} qubit \emph{pure state} system \cite{LuqueThibon2003pi4q} (see also \cite{levay2006geometry}). The Molien series can be readily computed directly from Molien's theorem in this case, or evaluated using character manipulations (see Theorem 2, appendix \ref{sec:Schurology} above); counting invariants up to degree 12 gives \begin{align} h(z) = & \,1+z^2 + 3z^4 + 4z^6 + 7z^8 + 9z^{10} + 14 z^{12} + \cdots \label{eq:MolienLSLqubTerms} \\ & \, = \frac{1}{(1-z^2)(1-z^4)^2(1-z^6)}\equiv \frac{1+z^4}{(1-z^2)(1-z^4)(1-z^6)(1-z^8)} \label{eq:MolienLSLqubRatl} \end{align} which is consistent with an invariant ring generated by the independent invariant traces $Q_{2p}=Tr(w^p)$ of the matrix $w$, $p=1,2,3,4$, with an additional constraint between $Q_8$ and the square of the determinant, $\widetilde{Q}_4 = Det(\rho)$, which plays the role of a secondary invariant in this case (see also \cite{KingWelshJarvis2007}). As an illustration of the method, we evaluate the lowest degree invariants, $Q_2$, $Q_4$ and $\widetilde{Q}_4$, in order to show explicitly how combinations of local unitary invariants combine in forming these quantities. 
Using the transcription between the basis $\{ r^a, r^{\underline{a}}, R^{a\underline{a}} \}$ and the Lorentz-covariant set $\{ r^{\alpha \underline{\alpha}}\}$ used above, and setting $\{r^{0\underline{0}}\}$ to its standard value $r^{0\underline{0}} = \textstyle{\frac 14}$, we compute \begin{align} Q_2=Tr\big(w\big) = & \, r^{\alpha\underline{\alpha}}\eta_{\underline{\alpha}\underline{\beta}}r^{\gamma\underline{\beta}} \eta_{\gamma\alpha} = (r^{0\underline{0}})^2 - (r^{a \underline{0}})^2 - (r^{0 \underline{\alpha}})^2 + (r^{a \underline{\alpha}})^2 \nonumber \\ \equiv & \, \textstyle{\frac{1}{16}} - r^2 -\overline{r}^2 + RR, \end{align} where the final line gives the expansion in terms of the list given in Table 1 of \cite{KingWelshJarvis2007} using the obvious notation $r^2 = \sum_a r^a r^a$, $\overline{r}^2 = \sum_{\overline{a}} {r}^{\overline{a}}{r}^{\overline{a}}$, $RR = \sum_{a,\overline{a}}R^{a\overline{a}} R^{a\overline{a}}$. As can be seen, the resulting form is \emph{inhomogeneous} in the underlying local unitary invariants because of the requirement of trace normalization. This situation continues with the remaining forms, the first of which is \begin{align} Q_4=Tr\big(w^2\big) = & \,\sum r^{\alpha \underline{\beta}}r_{\beta \underline{\beta}}r^{\beta\underline{\gamma}} r_{\alpha\underline{\gamma}} \nonumber \\ = & \, RRRR +(r^2)^2 + (\overline{r}^2)^2 - 2 r RRr -2 \overline{r}RR\overline{r} + rR\overline{r} -\textstyle{\frac{1}{8}} r^2 -\textstyle{\frac{1}{8}} \overline{r}^2 + \textstyle{\frac{1}{256}}\nonumber \\ \equiv & \, K_7 + (K_3)^2 + (K_4)^2 -2(K_8+K_9) + K_6-\textstyle{\frac{1}{8}}(K_3+K_4) + \textstyle{\frac{1}{256}}, \end{align} using the additional abbreviations $r RRr = \sum_{a,b,\overline{a}} r^a R^{a\overline{a}}R^{b \overline{a}}r^b$, $\overline{r}RR\overline{r} = \sum_{a,\overline{a}, \overline{b}} r^{\overline{a}} R^{a\overline{a}}R^{a \overline{b}}r^{\overline{b}}$, $RRRR = \sum_{a,b,\overline{a},\overline{b}}R^{a\overline{a}}R^{b\overline{a}} R^{b\overline{b}}R^{a\overline{b}}$. 
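The transcription of $Q_2$ can be spot-checked numerically. The sketch below is illustrative only: it builds an arbitrary two-qubit mixed state, extracts the Lorentz-covariant coefficients with the normalization $r^{\alpha\underline{\alpha}} = \textstyle{\frac 14} Tr\big(\rho\, \sigma_\alpha\otimes\sigma_{\underline{\alpha}}\big)$ (so that $r^{0\underline{0}}=\textstyle{\frac 14}$, as above), and compares $Tr(w)$ with $\textstyle{\frac{1}{16}} - r^2 - \overline{r}^2 + RR$.
\begin{verbatim}
# Sketch: numerical spot-check of Q_2 = Tr(w) = 1/16 - r.r - rbar.rbar + R:R
# for a randomly generated two-qubit density operator.
import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)                      # random mixed state, Tr(rho) = 1

# coefficient matrix: rho = sum_{a,b} r[a,b] sigma_a (x) sigma_b, r[0,0] = 1/4
r = np.array([[np.trace(rho @ np.kron(sigma[a], sigma[b])).real / 4
               for b in range(4)] for a in range(4)])

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Lorentz metric
w = r @ eta @ r.T @ eta
Q2_from_w = np.trace(w)

r2, rbar2, RR = np.sum(r[1:, 0]**2), np.sum(r[0, 1:]**2), np.sum(r[1:, 1:]**2)
Q2_from_invariants = 1.0 / 16.0 - r2 - rbar2 + RR

print(Q2_from_w, Q2_from_invariants)      # the two evaluations agree
\end{verbatim}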
Introducing also $r (R \times\hskip-2.6ex\times R) \overline{r} = \varepsilon^{ijk} \varepsilon^{\underline{p}\underline{q}\underline{r}} r^{i} R^{j\underline{p}} R^{k\underline{q}}r^{\underline{r}}$ we have further \begin{align} \widetilde{Q}_4 := Det(\rho) = & \, \textstyle{\frac{1}{24}}\varepsilon_{\lambda \mu\rho\sigma} \varepsilon_{\underline{\alpha}\underline{\beta}\underline{\gamma}\underline{\delta}} r^{\lambda\underline{\alpha}}r^{\mu\underline{\beta}}r^{\rho\underline{\gamma}}r^{\sigma\underline{\delta}}\nonumber \\ = & \, \textstyle{\frac 16}\varepsilon^{\underline{0}\underline{p}\underline{q}\underline{r}} \Big(\varepsilon_{0ijk} r^0{}_{\underline{0}} r^i{}_{\underline{p}} r^j{}_{\underline{q}} r^k{}_{\underline{r}} + \varepsilon_{i0jk} r^i{}_{\underline{0}} r^0{}_{\underline{p}} r^j{}_{\underline{q}} r^k{}_{\underline{r}}+ \varepsilon_{ij0k} r^i{}_{\underline{0}} r^j{}_{\underline{p}} r^0{}_{\underline{q}} r^k{}_{\underline{r}}+ \varepsilon_{ijk0} r^i{}_{\underline{0}} r^j{}_{\underline{p}} r^k{}_{\underline{q}} r^0{}_{\underline{r}} \Big)\nonumber\\ = & \, \textstyle{\frac 16}\Big( r^{0\underline{0}}\varepsilon^{ijk} \varepsilon^{\underline{p}\underline{q}\underline{r}} r^{i\underline{p}} r^{j\underline{q}} r^{k\underline{r}} +3\varepsilon^{ijk} \varepsilon^{\underline{r}\underline{p}\underline{q}} r^{i\underline{0}} r^{0\underline{r}} r^{j\underline{p}} r^{k\underline{q}} \Big)\nonumber \\ =& \, \textstyle{\frac {1}{4}}Det(R) + \textstyle{\frac 12}r (R \times\hskip-2.6ex\times R) \overline{r} \nonumber \\ \equiv & \, \textstyle{\frac {1}{24}}K_5 + \textstyle{\frac 12}U_1\, . \end{align} We do not give the corresponding expression for $Q_6$ here, but it is clear that it can be expanded in an analogous manner. Following the derivation in appendix \ref{sec:Monotones} above, these $SL(2,{\mathbb C})\times SL(2,{\mathbb C})$ invariants will provide entanglement monotones by taking appropriate powers of their absolute value, namely $|Q_2|^{\frac 12}$, $|Q_4|^{\frac 14}$, $|\widetilde{Q}_4|^{\frac 14}$ and similarly $|Q_6|^{\frac 16}$. \end{appendix}
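The elementary inequality $a^{\frac 23}b^{\frac 23}c^{\frac 23}+ (1-a^2)^{\frac 13}(1-b^2)^{\frac 13}(1-c^2)^{\frac 13} \le 1$ that underpins the monotonicity of these absolute-value powers (appendix \ref{sec:Monotones} above) can also be scanned numerically; the following brute-force grid check is a minimal sketch, not a replacement for the algebraic argument.
\begin{verbatim}
# Sketch: brute-force scan of a^(2/3) b^(2/3) c^(2/3)
#         + ((1-a^2)(1-b^2)(1-c^2))^(1/3) <= 1 over 0 <= a, b, c <= 1.
import numpy as np

grid = np.linspace(0.0, 1.0, 101)
a, b, c = np.meshgrid(grid, grid, grid, indexing="ij")
lhs = (a * b * c) ** (2.0 / 3.0) \
      + ((1 - a**2) * (1 - b**2) * (1 - c**2)) ** (1.0 / 3.0)
print("maximum over the grid:", lhs.max())  # stays <= 1; the bound is attained at the corners
\end{verbatim}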
\section{Introduction} The starting point of this note is the flow \begin{equation} \label{H1} \dot z = f(z), \end{equation} in which $f $ or its conjugate $\bar f$ is an entire function. A trajectory for (\ref{H1}) is a path $z(t)$ in the plane with $z'(t) = f(z(t)) \in \mathbb C$ for $t$ in some maximal interval $(\alpha, \beta) \subseteq \mathbb R$. By the existence-uniqueness theorem, such trajectories are either constant (with $z(t)$ a zero of $f$), periodic or injective. It was shown in \cite[Theorem 5]{kingneedham} that if $f$ is a polynomial in $z$ of degree $n \geq 2$ then there exist $n-1$ disjoint trajectories for (\ref{H1}) which tend to infinity in finite increasing time, that is, which satisfy $\beta \in \mathbb R$ and $\lim_{t \to \beta - } z(t) = \infty$.The following theorem for holomorphic flows with transcendental entire $f$ was proved in \cite[Theorem 1.1]{Latraj}. \begin{thm}[\cite{Latraj}] \label{thm0} Let the function $f$ be transcendental entire: then (\ref{H1}) has infinitely many pairwise disjoint trajectories which tend to infinity in finite increasing time. \end{thm} For meromorphic functions in general, such trajectories need not exist at all \cite{Latraj}, but a result was also proved in \cite{Latraj} for the case where $f$ is transcendental and meromorphic in the plane and the inverse function $f^{-1}$ has a logarithmic singularity over $\infty$: this means that there exist $M > 0$ and a component $U$ of the set $\{ z \in \mathbb C : \, |f(z)| > M \}$ such that $U$ contains no poles of $f$ and $\log f$ maps $U$ conformally onto the half-plane $H = \{ v \in \mathbb C : \, {\rm Re } \, v > \log M \}$ \cite{BE,Nev}. In this case \cite[Theorem 1.2]{Latraj}, (\ref{H1}) has infinitely many pairwise disjoint trajectories tending to infinity in finite increasing time from within a neighbourhood $\{ z \in U : |f(z)| > M' \geq M \}$ of the singularity. On the other hand, for entire $f$ in (\ref{H1}), it seems that trajectories which tend to infinity in finite increasing time are somewhat exceptional. For the simple example $\dot z = - \exp( -z)$, it is easy to check that all trajectories satisfy $\exp(z(t)) = \exp(z(0)) -t$ and so tend to infinity as $t$ increases, but take infinite time to do so unless $\exp( z(0))$ is real and positive. It will be shown that for transcendental entire $f$ there is, in a certain sense, zero probability of landing on a trajectory of (\ref{H1}) which tends to infinity in finite time. To state the theorem, let $f$ be transcendental entire and let \begin{equation} \label{Fdef1} z_0 \in \mathbb C, \quad f(z_0) \neq 0, \quad F(z) = \int_{z_0}^z \frac{du}{f(u)} . \end{equation} Then $F(z)$ is defined near $z_0$ and is real and increasing as $z$ follows the trajectory $\zeta_{z_0} (t)$ of (\ref{H1}) starting at $z_0$. Let $\delta $ be small and positive and take the pre-image $ L_\delta(z_0)$ of the real interval $(- \delta, \delta)$ under the function $- i F(z) $; then $ L_\delta(z_0)$ is perpendicular to $\zeta_{z_0} (t)$ at $z_0$. The proof of the following result is adapted from that of the Gross star theorem \cite[p.292]{Nev}. \begin{thm} \label{thmhol} Let $f$ be a transcendental entire function and let $z_0$ and $F$ be as in (\ref{Fdef1}). For small positive $\delta$ let $Y_\delta$ be the set of $y \in (- \delta, \delta)$ such that the trajectory of (\ref{H1}) starting at $F^{-1}(iy)$ tends to infinity in finite increasing time. Then $Y_\delta$ has Lebesgue measure $0$. 
\end{thm} Theorem \ref{thmhol} seems unlikely to be best possible, but an example from \cite{Volk} (see \S \ref{uncountable}) shows that there exists a transcendental entire $f$ for which (\ref{H1}) has uncountably many trajectories tending to infinity in finite increasing time. It seems natural to ask similar questions in respect of the antiholomorphic flow \begin{equation} \label{AH} \dot z = \frac{dz}{dt} = \bar g(z), \end{equation} where $g$ is a non-constant entire function. Equation (\ref{AH}) appears widely in textbooks as a model for incompressible irrotational plane fluid flow, and is linked to (\ref{H1}) insofar as if $f = 1/g$ then (\ref{AH}) has the same trajectories as (\ref{H1}), since $\bar g = f/|f|^2$, although zeros of one of $f$ and $g$ are of course poles of the other and in general the speeds of travel differ. The trajectories of (\ref{AH}) are determined by choosing $G$ with $G'(z) = g(z)$ and writing \begin{equation} \label{transform1} v = G(z), \quad \dot v = g(z) \dot z = |g(z)|^2 \geq 0 , \end{equation} which leads to the classical fact that trajectories for (\ref{AH}) are level curves of ${\rm Im} \, G(z)$ on which ${\rm Re} \, G(z)$ increases with $t$. By the maximum principle, ${\rm Im} \, G(z)$ cannot be constant on a closed curve. Thus, apart from the countably many which tend to a zero of $G' = g$, all trajectories for (\ref{AH}) go to infinity, but this leaves open the question as to how long they take to do so. If a non-constant trajectory $\Gamma$ of (\ref{AH}) passes from $z_1 $ to $ z_2$ along an arc meeting no zeros of $g$, then ${\rm Im} \, v = \beta$ is constant on $\Gamma$ and $X = {\rm Re} \, v$ increases from $X_1 = {\rm Re} \, G(z_1) $ to $X_2 = {\rm Re} \, G(z_2)$. Thus (\ref{transform1}) implies that the transit time is \begin{equation} \int_{X_1+i\beta}^{X_2+i \beta } \frac1{|g(z)|^2} \, dv = \int_{X_1+i \beta}^{X_2+i \beta} \left| \frac{dz}{dv} \right|^2 \, dv = \int_{X_1}^{X_2} \left| \frac{dz}{dX} \right|^2 \, dX . \label{transit} \end{equation} This formula shows that a zero of $g$ cannot be reached in finite time, because if $z$ tends to a zero $z_3$ of $g$ of multiplicity $m$ as $X \to X_3$ then, with $c_j$ denoting non-zero constants, \begin{eqnarray*} X - X_3 &=& G(z)-G(z_3) \sim c_1 (z-z_3)^{m+1}, \\ \left| \frac{dz}{dX} \right|^2 &=& \frac1{ |g(z)|^2 } \sim \frac{c_2}{ |X-X_3|^{2m/(m+1)} } \geq \frac{ c_2 }{|X - X_3|} . \end{eqnarray*} Suppose now that $G' = g$ is a polynomial of degree $n \geq 1$ in (\ref{AH}), (\ref{transform1}) and (\ref{transit}). If $S \in \mathbb R$ and $R$ is sufficiently large and positive then each pre-image under $v = G(z)$ of the half-line $v = r + iS, r \geq R,$ gives a trajectory of (\ref{AH}) which tends to infinity, on which (\ref{transform1}) delivers $$\frac{dt}{dv} = \frac1{|g(z)|^2} \sim \frac{c_3 }{ |z|^{2n}} \sim \frac{c_4}{ |v|^{2n/(n+1)}} .$$ Hence (\ref{transit}) implies that the transit time to infinity is finite for $n \geq 2$ and infinite for $n=1$. Thus, if $g$ is a non-linear polynomial, (\ref{AH}) always has uncountably many trajectories tending to infinity in finite increasing time, but this need not be the case for transcendental entire $g$. \begin{thm} \label{thmbbh} There exists a transcendental entire function $g$ such that (\ref{AH}) has no trajectories tending to infinity in finite increasing time. 
\end{thm} Theorem \ref{thmbbh} also marks a sharp contrast with Theorem~\ref{thm0}, and its proof rests on the following immediate consequence of a result of Barth, Brannan and Hayman \cite[Theorem 2]{BBH}. \begin{thm}[\cite{BBH}] \label{BBHthm} There exists a transcendental entire function $G$ such that any unbounded connected plane set contains a sequence $(w_n)$ tending to infinity on which $U = {\rm Re} \, G$ satisfies $(-1)^n U(w_n) \leq |w_n|^{1/2 } $. \end{thm} To establish Theorem \ref{BBHthm}, it is only necessary to take the plane harmonic function $v$ constructed in \cite[Theorem 2]{BBH}, with the choice of $\psi(r)$ given by \cite[p.364]{BBH}. With $U = v$, and $V$ a harmonic conjugate of $U$, elementary considerations show that the resulting entire function $G = U+iV$ cannot be a polynomial. On the other hand, in the presence of a logarithmic singularity of the inverse function over infinity, trajectories of (\ref{AH}) tending to infinity in finite increasing time exist in abundance. \begin{thm} \label{thm2} Let $g$ and $G$ be transcendental meromorphic functions in the plane such that $G'=g$ and either $G^{-1}$ or $g^{-1}$ has a logarithmic singularity over $\infty$. Then in each neighbourhood of the singularity the flow (\ref{AH}) has a family of pairwise disjoint trajectories $\gamma_Y, Y \in \mathbb R$, each of which tends to infinity in finite increasing time. \end{thm} Theorem \ref{thm2} applies in particular if $g$ or its antiderivative $G$ is a transcendental entire function and belongs to the Eremenko-Lyubich class $\mathcal{B}$, which plays a salient role in complex dynamics \cite{Ber4,EL,sixsmithEL} and is defined by the property that $F \in \mathcal{B}$ if the finite critical and asymptotic values of $F$ form a bounded set, from which it follows that if $F \in \mathcal{B}$ is transcendental entire then $F^{-1}$ automatically has a logarithmic singularity over $\infty$. A specific function to which Theorem \ref{thm2} may be applied is $g(z) = e^{-z} + 1$; here $g$ is in $\mathcal{B}$, but its antiderivative $G$ is not, and this example also gives uncountably many trajectories of (\ref{AH}) taking infinite time to reach infinity through the right half-plane. Theorem \ref{thm2} is quite straightforward to prove when the inverse of $G$ has a logarithmic singularity over infinity, but the method turns out to have a bearing on the following question of Rubel \cite[pp.595-6]{Linear}: if $f$ is a transcendental entire function, must there exist a path tending to infinity on which $f$ and its derivative $f'$ both have asymptotic value $\infty$? This problem was motivated by the classical theorem of Iversen \cite{Nev}, which states that $\infty$ is an asymptotic value of every non-constant entire function. For transcendental entire $f$ of finite order, a strongly affirmative answer to Rubel's question was provided by the following result \cite[Theorem 1.5]{Larubel}. \begin{thm}[\cite{Larubel}] \label{rubelthm} Let the function $f$ be transcendental and meromorphic in the plane, of finite order of growth, and with finitely many poles. Then there exists a path $\gamma$ tending to infinity such that, for each non-negative integer $m$ and each positive real number $c$, \begin{equation} \lim_{z \to \infty, z \in \gamma } \frac{ \log |f^{(m)}(z)|}{ \log |z|} = + \infty \quad \hbox{and} \quad \int_\gamma |f^{(m)}(z)|^{-c} |dz| < + \infty . 
\label{rr3} \end{equation} \end{thm} For functions of infinite order, Rubel's question appears to be difficult, although a path satisfying (\ref{rr3}) for $m=0$ is known to exist for any transcendental entire function $f$ \cite{LRW}. However, a direct analogue of Theorem \ref{rubelthm} goes through relatively straightforwardly for transcendental entire functions $f$ in the Eremenko-Lyubich class $\mathcal{B}$. \begin{thm} \label{thm1} Let $f$ be a transcendental meromorphic function in the plane such that $f^{-1}$ has a logarithmic singularity over $\infty$, and let $D \in \mathbb R$. Then there exists a path $\gamma$ tending to infinity in a neighbourhood of the singularity, such that $f(z) -iD$ is real, positive and increasing on $\gamma$ and (\ref{rr3}) holds for each integer $m \geq 0 $ and real $c > 0$. \end{thm} This paper is organised as follows: Theorem \ref{thmhol} is proved in \S\ref{pfthmhol}, followed by an example in \S\ref{uncountable} and the proof of Theorem \ref{thmbbh} in \S\ref{pfthmbbh}. It is then convenient to give the proof of Theorem \ref{thm1} in \S\ref{pfthm1}, prior to that of Theorem \ref{thm2} in \S\ref{pfthm2}. \section{Proof of Theorem \ref{thmhol}}\label{pfthmhol} Let $f$, $F$, $z_0$ and $\delta$ be as in the statement of Theorem \ref{thmhol}. For $y \in (- \delta, \delta)$ let $g(y) = F^{-1}(iy)$ and let $T(y)$ be the supremum of $s > 0$ such that the trajectory $\zeta_{g(y)}(t)$ of (\ref{H1}) with $\zeta_{g(y)}(0) = g(y)$ is defined and injective for $0 \leq t < s$. If the trajectory $\zeta_{g(y)}(t)$ is periodic with minimal period $S_y$ then $T(y) = S_y$ and $\zeta_{g(y')}(t)$ has the same period for $y'$ close to $y$ \cite{brickman}. Furthermore, if $\zeta_{g(y)}(t)$ tends to infinity in finite time then $T(y) < + \infty$, while if $T(y)$ is finite but $\zeta_{g(y)}(t)$ is not periodic then $\lim_{t \uparrow T(y)} \zeta_{g(y)}(t) = \infty$ \cite[Lemma 2.1]{Latraj}. Set $$ A = \{ iy + t: \, \, y \in (- \delta, \delta), \, 0 < t < T(y) \} , \quad B = \{ \zeta_{g(y)} (t) : \, y \in (- \delta, \delta), \, 0 < t < T(y) \}. $$ Then $G( iy + t ) = \zeta_{g(y)} (t) $ is a bijection from $A$ to $B$. For $u = \zeta_{g(y)} (t)$, where $y \in (- \delta, \delta)$ and $0 < t < T(y)$, let $\sigma_u$ be the subarc of $ L_\delta(z_0)$ from $z_0$ to $g(y)$ followed by the sub-trajectory of (\ref{H1}) from $g(y)$ to $u$, and define $F$ by (\ref{Fdef1}) on a simply connected neighbourhood $D_u$ of $\sigma_u$. Then $F$ maps $\sigma_u$ bijectively to the line segment $[0, iy]$ followed by the line segment $[iy, iy+t]$, and taking a sub-domain if necessary makes it possible to assume that $F$ is univalent on $D_u$, with inverse function defined on a neighbourhood of $[iy, iy+t]$. Let $y'$ and $t'$ be real and close to $y$ and $t$ respectively. Then the image under $F^{-1}$ of the line segment $[iy', iy' + t']$ is an injective sub-trajectory of (\ref{H1}) joining $g(y') \in L_\delta(z_0)$ to $F^{-1}(iy' +t') = \zeta_{g(y')} (t') = G(iy'+t')$, and so $T(y') \geq t'$. Thus $y \rightarrow T(y)$ is lower semi-continuous and $A$ is a domain, while $G: A \to B$ is analytic. Moreover, $A$ is simply connected, because its complement in $\mathbb C \cup \{ \infty \}$ is connected, and so is $B$. Furthermore, $F$ extends to be analytic on $B$, by (\ref{Fdef1}) and the fact that $f \neq 0$ on $B$, and $F \circ G$ is the identity on $A$ because $F(G(t)) = t$ for small positive $t$. 
For $N \in (0, + \infty ) $, let $M_N $ be the set of all $ y$ in $ (- \delta, \delta)$ such that $\zeta_{g(y)} (t)$ tends to infinity and $T(y) < N $. To prove Theorem \ref{thmhol}, it suffices to show that each such $M_N$ has measure $0$, and the subsequent steps will be adapted from the proof of the Gross star theorem \cite[p.292]{Nev} and its extensions due to Kaplan \cite{Kaplan}. Let $\Lambda_N \subseteq B$ be the image of $\Omega_N = \{ w \in A : \, {\rm Re} \, w < N \}$ under $G$, let $r$ be large and positive and denote the circle $|z| = r$ by $S(0, r)$. Then $S(0, r) \cap \Lambda_N$ is a union of countably many open arcs $\Sigma_r$. If $y \in M_N$ then $T(y) < N$ and as $t \to T(y)$ the image $z = G(iy+t) $ tends to infinity in $\Lambda_N$ and so crosses $S(0, r)$, and hence there exists $\zeta $ in some $ \Sigma_r$ with ${\rm Im} \, F(\zeta) = y$, since $F: B \to A$ is the inverse of $G$. Thus the measure $\mu_N$ of $M_N$ is at most the total length $s(r)$ of the arcs $F(\Sigma_r)$. It follows from the Cauchy-Schwarz inequality that, as $t \to + \infty$, \begin{eqnarray*} \mu_N^2 &\leq& s(t)^2 = \left( \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )| t \, d \phi \, \right)^2 \\ &\leq& \left( \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )|^2 t \, d \phi \, \right) \left( \int_{t e^{i \phi } \in \Lambda_N } t \, d \phi \, \right) \leq 2 \pi t \left( \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )|^2 t \, d \phi \, \right) . \end{eqnarray*} Thus $\mu_N = 0$, since dividing by $2 \pi t$ and integrating from $r$ to $r^2$ yields, as $r \to + \infty$, \begin{eqnarray*} \frac{ \mu_N^2 \log r }{2 \pi} &\leq& \int_r^{r^2} \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )|^2 \, t \, d \phi \, dt \leq \int_{\Lambda_N} |F'(t e^{i \phi } )|^2 \, t \, d \phi \, dt = \hbox{area $(\Omega_N)$} \leq 2 \delta N . \end{eqnarray*} \hfill$\Box$ \vspace{.1in} \section{An example}\label{uncountable} Suppose that $G$ is a locally univalent meromorphic function in the plane, whose set of asymptotic values is an uncountable subset $E$ of the unit circle $\mathbb T$. Suppose further that there exists a simply connected plane domain $D$, mapped univalently onto the unit disc $\Delta$ by $G$, such that the branch $\phi$ of $G^{-1}$ mapping $\Delta$ to $D$ has no analytic extension to a neighbourhood of any $\beta \in E$. Let $F = S(G)$, where $S$ is a M\"obius transformation mapping $\Delta$ onto $\{ w \in \mathbb C : \, {\rm Re} \, w < 0 \}$, and for $\beta \in E$ let $\alpha = S(\beta)$ and let $L$ be the half-open line segment $[\alpha -1, \alpha)$. Then $M = S^{-1}(L)$ is a line segment or circular arc in $\Delta$ which meets $\mathbb T$ orthogonally at $\beta$. Moreover, $\phi(M)$ is a level curve of ${\rm Im} \, F$ in $D$, which cannot tend to a simple $\beta$-point of $G$ in $\mathbb C$ because this would imply that $\phi$ extends to a neighbourhood of $\beta$. Hence $\phi(M)$ is a path tending to infinity in $D$, on which ${\rm Im} \, F(z)$ is constant and $F(z)$ tends to $\alpha$. Since $G$ and $F$ are locally univalent, $f = 1/F'$ is entire. As $t \to 0-$ write, on $\phi(M)$, $$ F(z) = \alpha + t, \quad \quad \frac{dt}{dz} = F'(z) = \frac1{f(z)}, \quad \frac{dz}{dt} = f(z), $$ so that $\phi(M)$ is a trajectory of (\ref{H1}) which tends to infinity in finite increasing time, and there exists one of these for every $\beta$ in the uncountable set $E$. 
A suitable $G$ is furnished by a construction of Volkovyskii \cite{Ermich,Volk}, in which $\mathbb T \setminus E$ is a union of disjoint open circular arcs $I_k = (a_k, b_k)$, oriented counter-clockwise. For each $k$, take the multi-sheeted Riemann surface onto which $(a_k - b_k e^z)/(1-e^z)$ maps the plane, cut it along a curve which projects to $I_k$, and glue to $\Delta$ that half which lies to the right as $I_k$ is followed counter-clockwise. This forms a simply connected Riemann surface $R$ with no algebraic branch points. By \cite[Theorem 17, p.71]{Volk} (see also \cite[p.6]{Ermich}), the $I_k$ can be chosen so that $R$ is parabolic and is thereby the image surface of a locally univalent meromorphic function $G$ in the plane. \hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{thmbbh}}\label{pfthmbbh} Following the notation of the introduction, suppose that $v=G(z)$ is a transcendental entire function with derivative $g$ in (\ref{AH}), (\ref{transform1}) and (\ref{transit}). \begin{prop} \label{propbbh} Let $\Gamma$ be a level curve tending to infinity on which $Y = {\rm Im} \, G(z) = \beta \in \mathbb R $ and $X = {\rm Re} \, G(z) $ increases, with $X \geq \alpha \in \mathbb R $, and assume that $\Gamma$ meets no zero of $g$. Suppose that $(z_n)$ is a sequence tending to infinity on $\Gamma$ such that $v_n = G(z_n ) = X_n + i \beta $ satisfies $v_n = o( |z_n|)^2 $. Then the trajectory of (\ref{AH}) which follows $\Gamma$ takes infinite time in tending to infinity. \end{prop} Here it is not assumed or required that $X \to + \infty$ as $z \to \infty$ on $\Gamma$. \\ \\ \textit{Proof of Proposition \ref{propbbh}.} It may be assumed that $\Gamma$ starts at $z^*$ and $G(z^*) = \alpha + i \beta$. Denote positive constants, independent of $n$, by $C_j$. Then the Cauchy-Schwarz inequality gives, as $n $ and $z_n$ tend to infinity, \begin{eqnarray*} |z_n|^2 &\leq& \left( C_1 + \int_\alpha^{X_n} \left| \frac{dz}{dX} \right| \, dX \right)^2 \\ &\leq& 2 \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right| \, dX \right)^2 \\ &\leq& 2 \left( \int_\alpha^{X_n} \, dX \right) \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right|^2 \, dX \right) \\ &\leq& 2 \left( |v_n| + C_2 \right) \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right|^2 \, dX \right) \\ &\leq& o \left( |z_n|^2 \right) \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right|^2 \, dX \right) . \end{eqnarray*} Thus (\ref{transit}) shows that the transit time from $z^*$ to $z_n$ tends to infinity with $n$. \hfill$\Box$ \vspace{.1in} \textit{Proof of Theorem \ref{thmbbh}.} Let $G$ be the entire function given by Theorem \ref{BBHthm}, and set $g = G'$. As noted in the introduction, no trajectory of (\ref{AH}) can pass through a zero of $g$, and in any case it takes infinite time for a trajectory to approach a zero of $g$. Furthermore, if $\Gamma$ is a level curve, starting at $z^*$ say, on which ${\rm Im} \, G(z)$ is constant and $U(z) = {\rm Re} \, G(z)$ increases, and on which $g$ has no zeros, then there exists a sequence $z_n = w_{2n}$ which tends to infinity on $\Gamma$ and satisfies $$U(z^*) \leq U(z_n) \leq |z_n|^{1/2} , \quad |G(z_n)| \leq |U(z_n)| + O(1) \leq |z_n|^{1/2} + O(1).$$ Hence $\Gamma$ satisfies the hypotheses of Proposition \ref{propbbh}. It now follows that (\ref{AH}) has no trajectories tending to infinity in finite increasing time. Since time can be reversed for these flows by setting $s = -t$ and $dz/ds = - \bar g(z)$, the same example has no trajectories tending to infinity in finite decreasing time either. 
\hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{thm1}}\label{pfthm1} Let $f$ be as in the hypotheses. Then there exist $M > 0$ and a component $U$ of $\{ z \in \mathbb C : \, |f(z)| > M \} $ such that $v = \log f(z)$ is a conformal bijection from $U$ to the half-plane $H$ given by ${\rm Re} \, v > N = \log M$; it may be assumed that $0 \not \in U$. Let $\phi: H \to U$ be the inverse function. If $u \in H$ then $\phi$ and $\log \phi$ are univalent on the disc $|w-u| < {\rm Re} \, u - N$ and so Bieberbach's theorem and Koebe's quarter theorem \cite[Chapter 1]{Hay9} imply that \begin{equation} \label{h3} \left| \frac{\phi''(u)}{\phi'(u)} \right| \leq \frac4{{\rm Re} \, u -N } , \quad \left| \frac{\phi'(u)}{\phi(u)} \right| \leq \frac{4 \pi}{{\rm Re} \, u -N } . \end{equation} \begin{lem} \label{lem1} Let $v_0 $ be large and positive and for $0 \leq k \in \mathbb Z$ write \begin{equation} \label{rub1} V_k = \left\{ v_0 + t e^{i \theta} : \, t \geq 0, \, - \, \frac{\pi}{2^{k+2}} \leq \theta \leq \frac{\pi}{2^{k+2}} \right\}, \quad G_k(v) = \frac{f^{(k)}(z)}{f(z)}, \quad z = \phi(v). \end{equation} Then there exist positive constants $d$ and $c_k $ such that $| \log \phi'(v) | \leq d \log ( {\rm Re} \, v )$ as $v \to \infty$ in $V_1$ and $| \log |G_k(v)| | \leq c_k \log ( {\rm Re} \, v )$ as $v \to \infty$ in $V_k$. \end{lem} \textit{Proof.} For $v \in V_1$, parametrise the straight line segment from $v_0$ to $v$ with respect to $s = {\rm Re} \, u$. Then (\ref{h3}) and the simple estimate $|du| \leq \sqrt{2} ds $ yield $| \log \phi'(v) | = O( \log ( {\rm Re} \, v ))$ as $v \to \infty$ in $V_1$. Next, the assertion for $G_k$ is trivially true for $k=0$, so assume that it holds for some $k \geq 0$ and write \begin{eqnarray*} G_{k+1}(v) &=& \frac{f^{(k+1)}(z)}{f(z)} = \frac{f^{(k)}(z)}{f(z)} \cdot \frac{f'(z)}{f(z)} +\frac{d}{dz} \left( \frac{f^{(k)}(z)}{f(z)} \right) \nonumber \\ &=& G_k(v) G_1(v) + \frac{G_k'(v)}{\phi'(v)} = \frac{G_k(v)}{\phi'(v)} \left( 1 + \frac{G_k'(v)}{G_k(v)} \right). \end{eqnarray*} Thus it suffices to show that $G_k'(v)/G_k(v) \to 0$ as as $v \to \infty$ in $V_{k+1}$. By (\ref{rub1}) there exists a small positive $d_1$ such that if $v \in V_{k+1}$ is large then the circle $|u - v| = r_v = d_1 {\rm Re} \, v$ lies in $V_k$, and the differentiated Poisson-Jensen formula \cite[p.22]{Hay2} delivers $$ \frac{G_k'(v)}{G_k(v)} = \frac1{ \pi} \int_0^{2 \pi} \, \frac{ \log | G_k(v + r_v e^{i \theta } )|}{r_v e^{i \theta}} \, d \theta = O \left( \frac{ \log ( {\rm Re} \, v ) }{ {\rm Re} \, v } \right) \to 0 $$ as $v \to \infty$ in $V_{k+1}$. This proves the lemma. \hfill$\Box$ \vspace{.1in} To establish Theorem \ref{thm1}, take any $D \in \mathbb R$. Then there exist $v_1 \in [1, + \infty) $ and a path $$\Gamma \subseteq \{ v \in \mathbb C : \, {\rm Re} \, v > N , \, | {\rm Im} \, v | < \pi/4 \} \subseteq H$$ which is mapped by $e^v$ to the half-line $\{ t + iD : \, t \geq v_1 \}$. Thus $f(z) -i D = e^v -i D $ is real and positive for $z$ on $\gamma = \phi (\Gamma)$, and $\Gamma \setminus V_k $ is bounded for each $k \geq 0$. Now write, on $\Gamma$, $$ e^v = t+iD, \quad \frac{dv}{dt} = \frac1{t+iD}, \quad s = {\rm Re} \, v = \frac12 \ln (t^2+D^2) .$$ Hence, for any non-negative integers $k, m$, Lemma \ref{lem1} gives, as $v \to \infty$ on $\Gamma $, $$ \left| \frac{f^{(k)}(z)}{z^m} \right| = \left| \frac{f(z) G_k(v)}{z^m} \right| = \left| \frac{e^v G_k(v)}{\phi(v)^m} \right| \geq \frac{e^s }{ s^{c_k+md} } \geq e^{s/2} \to \infty . 
$$ It then follows that, for $c > 0$, \begin{eqnarray*} \int_\gamma |f^{(k)}(z)|^{-c} \, |dz| &\leq & O(1) + \int_\Gamma e^{- cs/2} |\phi'(v)| \, |dv| \\ &\leq& O(1) + \int_\Gamma e^{- cs/4} \, |dv| \\ &=& O(1) + \int_{v_1}^{+\infty} \frac1{(t^2+D^2)^{1/2+c/8}} \, dt < + \infty . \end{eqnarray*} \hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{thm2}}\label{pfthm2} Suppose first that the inverse function of the antiderivative $G$ of $g$ has a logarithmic singularity over infinity, and take $D \in \mathbb R$. Then Theorem \ref{thm1} may be applied with $f = G$ and $m= c = 1$, giving a level curve $\gamma = \gamma_D$, lying in a neighbourhood of the singularity, on which ${\rm Im} \, G(z) = D$ and ${\rm Re} \, G(z) $ increases. This curve is a trajectory for (\ref{AH}), traversed in time $$ \int_\gamma \frac1{\bar g(z)} \, dz \leq \int_\gamma |G'(z)|^{-1} \, |dz| < + \infty ,$$ which completes the proof in this case. For the proof of the following lemma the reader is referred to the statement and proof of \cite[Lemma 3.1]{blnewqc}. \begin{lem}[\cite{blnewqc}] \label{lemfirstest} Let the function $\phi : H \to \mathbb C \setminus \{ 0 \}$ be analytic and univalent, where $H = \{ v \in \mathbb C : \, {\rm Re} \, v > 0 \}$, and for $v, v_1 \in H$ define $Z(v) = Z(v, v_1)$ by \begin{equation} \label{h1} Z(v, v_1) = \int_{v_1}^v e^{u/2} \phi'(u) \, du = 2 e^{v/2} \phi'(v) - 2 e^{v_1/2} \phi'(v_1) - 2 \int_{v_1}^v e^{u/2} \phi''(u) \, du . \end{equation} Let $\varepsilon $ be a small positive real number. Then there exists a large positive real number $N_0$, depending on $\varepsilon$ but not on $\phi$, with the following property. Let $v_0 \in H$ be such that $S_0 = {\rm Re} \, v_0 \geq N_0$, and define $v_1, v_2, v_3, K_2$ and $ K_3$ by \begin{equation*} \label{vjdef} v_j = \frac{2^j S_0}{128} + i T_0, \quad T_0 = {\rm Im} \, v_0, \quad K_j = \left\{ v_j + r e^{i \theta} : \, r \geq 0, \, - \frac{\pi}{2^j} \leq \theta \leq \frac{\pi}{2^j} \right\}. \end{equation*} Then the following two conclusions both hold:\\ (i) $Z = Z(v, v_1)$ satisfies, for $v \in K_2$, \begin{equation} \label{h2} Z(v, v_1 ) = \int_{v_1}^v e^{u/2} \phi'(u) \, du = 2 e^{v/2} \phi'(v) (1 + \delta (v) ), \quad | \delta (v) | < \varepsilon . \end{equation} (ii) $\psi = \psi (v, v_1) = \log Z(v, v_1) $ is univalent on a domain $H_1$, with $v_0 \in H_1 \subseteq K_3$, and $\psi(H_1)$ contains the strip \begin{equation} \label{Omegaimage} \left\{ \psi (v_0) + \sigma+ i \tau : \, \sigma \geq \log \frac18 \, , \, - 2 \pi \leq \tau \leq 2 \pi \right\} . \end{equation} \end{lem} \hfill$\Box$ \vspace{.1in} Assume henceforth that $g$ is as in the hypotheses of Theorem \ref{thm2} and the inverse function of $g$ has a logarithmic singularity over infinity. This time there exist $M > 0$ and a component $C$ of $\{ z \in \mathbb C : \, |g(z)| > M \} $ such that $\zeta = \log g(z)$ is a conformal mapping of $C$ onto the half-plane given by ${\rm Re} \, \zeta > \log M$. Since (\ref{AH}) may be re-scaled via $z = Mw$ and $g(z) = Mh(w) $, it may be assumed that $M = 1$ and $0 \not \in C$. In order to apply Lemma \ref{lemfirstest}, let $\phi: H \to C$ be the inverse function $z = \phi(v)$ of the mapping from $C$ onto $H$ given by $$ v = 2 \zeta = 2 \log g(z), \quad g(z) = e^{v/2} , $$ As in the proof of Theorem \ref{thm1}, (\ref{h3}) holds for $u \in H$, with $N = 0$. 
By (\ref{Omegaimage}) there exists $X_0 > 0$ such that $ Z(v, v_1)$ maps a domain $H_2 \subseteq H_1 \subseteq K_3 \subseteq H$ univalently onto a half-plane ${\rm Re} \, Z > X_0$. Hence, for any $Y_0 \in \mathbb R$, there exists a path $\Gamma $ which tends to infinity in $ H_1 \subseteq K_3$ and is mapped by $ Z(v, v_1)$ onto the half-line $L_0 = \{ X + i Y_0, \, X \geq X_0 + 1 \}$. Consider the flow in $H_2$ given by \begin{equation} \label{vflow} \phi'(v) \dot v = \overline{e^{v/2}} ; \end{equation} by (\ref{h2}) this transforms under $Z = Z(v, v_1)$ to \begin{equation} \label{wflow} \dot Z = \frac{dZ}{dv} \, \dot v = e^{v/2} \phi'(v) \dot v = | e^{v} | . \end{equation} Combining (\ref{h3}) and (\ref{h2}) shows that $ | e^{v} | \geq |Z(v)|^{3/2} $ for large $v$ on $\Gamma$. Hence there exists a trajectory of (\ref{wflow}) which starts at $X_0+1 + iY_0$ and tends to infinity along $L_0$ in time $$ T_0 \leq \int_{X_0+1}^\infty \left| \frac{dt}{dX} \right| \, dX \leq O(1) + \int_{X_0+1}^\infty (X^2 + Y_0^2)^{-3/4} \, dX < + \infty . $$ This gives a trajectory of (\ref{vflow}) tending to infinity along $\Gamma$ and taking finite time to do so, and hence a trajectory $\gamma$ of (\ref{AH}) in $C$, tending to infinity in finite increasing time. Since $Y_0 \in \mathbb R$ may be chosen at will, this proves Theorem \ref{thm2}. \hfill$\Box$ \vspace{.1in}
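As a concrete numerical illustration of the dichotomy noted in the introduction for polynomial $g$ in (\ref{AH}) (a sketch for orientation only, not part of the proofs above): along the positive real axis the flow $\dot z = \bar g(z)$ reaches any prescribed large modulus in a time that stays bounded when $\deg g \ge 2$, but grows without bound, like the logarithm of the target modulus, when $\deg g = 1$.
\begin{verbatim}
# Sketch: escape times of dz/dt = conj(g(z)) from z0 = 1 along the positive
# real axis, for g(z) = z (degree 1) and g(z) = z^2 (degree 2).
def escape_time(g, radius, z0=1.0 + 0.0j, dt=1e-4, t_max=30.0):
    z, t = z0, 0.0
    while abs(z) < radius and t < t_max:
        # classical fourth-order Runge-Kutta step for dz/dt = conj(g(z))
        k1 = g(z).conjugate()
        k2 = g(z + 0.5 * dt * k1).conjugate()
        k3 = g(z + 0.5 * dt * k2).conjugate()
        k4 = g(z + dt * k3).conjugate()
        z += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return t

for radius in (1e3, 1e6, 1e9):
    t_deg1 = escape_time(lambda z: z, radius)      # grows like log(radius)
    t_deg2 = escape_time(lambda z: z * z, radius)  # stays near 1 = 1/z0
    print(f"radius {radius:.0e}: deg 1 -> {t_deg1:.2f}, deg 2 -> {t_deg2:.4f}")
\end{verbatim}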
\section{ Introduction} Quantum entanglement plays an important role in the study of fundamental principles of quantum mechanics\cite{sakurai}. It is also the most important resource in quantum information processing\cite{nielsen,decoy}. Among all types of quantum entanglement, polarization entangled photon-pairs are particularly useful because of their easy manipulation and transmission. There are many mature techniques to produce such entangled pairs \emph{probabilistically}\cite{PRL_93_KiessTE, PhysRevLett.75.4337, Nature_04_EdamatsuK, PRL_04_FattalD}, while an \emph{on-demand} entangled photon-pair source is essential for many tasks in quantum information processing. Recently, an on-demand entangled photon-pair source was proposed\cite{PRL_00_OliverB} and realized in a semiconductor quantum dot system\cite{Nature_06_StevensonRM, AkopianN_PRL_06, NatPhon_ShieldsAJ_07}. However, because of the fine-structure splitting (FSS) there, the relative phase of the entangled state is randomized, so that only classical correlations can be detected by traditional time-integrated measurements\cite{stace2003,PRL_07_HudsonAJ,PRL_08_StevensonRM,guo}. So far, many methods have been proposed to recover this ``hidden entanglement''\cite{stace2003,PRL_07_HudsonAJ,PRL_08_StevensonRM,guo, NJP_06_YoungRJ,NJP_07_HafenbrakR,he:157405, PRL_09_YoungRJ,jones}, for example, reducing the FSS\cite{NJP_06_YoungRJ,NJP_07_HafenbrakR,he:157405, PRL_09_YoungRJ}, spectral filtering\cite{AkopianN_PRL_06}, time-resolved post-selection\cite{PRL_08_StevensonRM}, and so on. Up to now, the smallest FSS realized in experiment is about $0.3\ \mu eV$, and the non-classical nature of the radiation field has been verified by directly observing violation of the Bell inequality\cite{PRL_09_YoungRJ}. However, the entanglement quality is considerably degraded even by a very small FSS, and reducing the FSS further is very difficult in experiment. Furthermore, the severe restriction on the FSS greatly limits the selection range of quantum dot systems. Certain quantum dots with a large FSS cannot be used even if they have distinct advantages, such as emitting photons at frequencies within the low-loss transmission windows of free space or optical fiber. Also, post-selection in the frequency domain or time domain significantly decreases the photon collection efficiency. To overcome all these drawbacks, Stace et al.\ proposed using cavities to control the frequencies. Later, Jones and Stace proposed a simpler, downstream solution: applying a polarization-dependent frequency shift to the photons with an acousto-optic modulator (AOM)\cite{jones}. Basically, there are three steps in the circuit: 1) split the two polarization modes of each photon; 2) shift the frequency of the vertical polarization mode with the AOM; 3) combine the split beams on a polarization beam splitter. However, the efficiency of a normal commercially available AOM is fairly low. The efficiency of a very good AOM for a single photon is about $80\%$. The joint efficiency of two photons in the scheme in Ref.\cite{jones} is not larger than $64\%$. Moreover, in the proposed set-up\cite{jones}, special care has to be taken to keep the two optical paths stable. For example, a fluctuation of half a micrometer in either optical path will entirely destroy the result. Here we propose another solution based on an electro-optic modulator (EOM). The EOM is a mature technology which has been demonstrated in many experiments. In particular, two-photon interference has been experimentally observed\cite{np} very recently. 
In our proposed set-up, we use a dichroic mirror to separate the two photons, and then remove the position-dependent phase difference by using Pockels cells under a ramping voltage. Since a Pockels cell itself applies different phase shifts to the two polarization modes, we do not have to separate the polarization modes as was proposed in Ref.\cite{jones}. Instead, we only need to separate the two photons. In this way, compared with the existing proposal\cite{jones}, our method has the advantage of being robust to fluctuations of the optical paths. As calculated later, a fluctuation of 1 mm causes only a phase fluctuation of order $10^{-3}$. Moreover, compared with Ref.\cite{jones}, our scheme seems to have a significantly higher efficiency, since a commercially available Pockels cell has almost no loss. \section{ The problem} The energy levels of the quantum dot used for photon-pair generation are shown in Fig.~\ref{fig:levels}. After exciting a single quantum dot into the biexciton state (XX), two photons are emitted sequentially as the dot decays in a cascade process. \begin{figure} \includegraphics{levels} \caption{\label{fig:levels} Energy levels of the semiconductor quantum dot used to generate polarization entangled photons. The biexciton state (XX) is a zero-spin state formed by two electrons and two heavy holes. When the dot decays, two photons are emitted sequentially, and their polarization is determined by the ``decay path''. Usually an FSS $S$ exists between the two excitons ($X_H$) and ($X_V$).} \end{figure} Because the two exciton states ($X_H$ and $X_V$) are not degenerate\cite{PhysRevLett.76.3005,PhysRevB.65.195315}, the two photons are actually entangled in the composite space of both polarization and frequency \begin{equation} \label{eq:stateFre} \begin{split} |\Psi\rangle =& \frac{1}{\sqrt{2}} \bigg[ \iint_{-\infty}^{\infty} d\omega_1 d\omega_2 \Phi_H(\omega_1,\omega_2) |H_1H_2;\omega_1,\omega_2\rangle\\ & + \iint_{-\infty}^{\infty} d\omega_1 d\omega_2 \Phi_V(\omega_1,\omega_2) |V_1V_2;\omega_1,\omega_2\rangle \bigg]. \end{split} \end{equation} The spectral functions for the two decay paths of the quantum dot system can be written as\cite{AkopianN_PRL_06, scully} \begin{subequations} \label{eq:13} \begin{align} \Phi_{H}(\omega_1,\omega_2) =& \frac{\sqrt{2}\Gamma}{2\pi} \frac{1}{\omega_1 + \omega_2 - \omega_{0} + i\Gamma}\nonumber\\ & \times\frac{1}{\omega_2 - \omega_{H_2} + i\Gamma/2},\label{eq:14} \\ \Phi_{V}(\omega_1,\omega_2) =& \frac{\sqrt{2}\Gamma}{2\pi} \frac{1}{\omega_1 + \omega_2 - \omega_{0} + i\Gamma}\nonumber\\ & \times\frac{1}{\omega_2 - \omega_{V_2} + i\Gamma/2}.\label{eq:20} \end{align} \end{subequations} Here, as shown in Fig.~\ref{fig:levels}, $\omega_{H_2}=\omega_{X_H}-\omega_{GS}$, $\omega_{V_2}=\omega_{X_V} -\omega_{GS}$, and $\omega_0 = \omega_{XX}-\omega_{GS}$, where $\hbar\omega_{XX}$, $\hbar\omega_{X_H}$, $\hbar\omega_{X_V}$, $\hbar\omega_{GS}$ are the eigenenergies of the levels $XX$, $X_H$, $X_V$, and $GS$, respectively, and $\Gamma$ is the decay rate of the four transitions $XX\rightarrow X_H$, $XX\rightarrow X_V$, $X_H\rightarrow GS$, and $X_V\rightarrow GS$\cite{AkopianN_PRL_06}. Therefore, the state is actually inseparable in the composite space of both polarization and frequency, which hides the entanglement when only the polarization degree of freedom is observed. \section{ Phase modulation with Pockels cells} A Pockels cell contains a crystal whose refractive index along a certain optical axis changes linearly with the applied voltage, due to the so-called linear electro-optic effect. 
Consider a Pockels cell with a time-dependent voltage $V(t)$ and a wave packet passing through it, as shown in Fig.~\ref{fig:modu}. For simplicity, we assume that any non-trivial phase modulation only happens to the vertical polarization of the incident light. Suppose initially the wave function of a wave train in vertical polarization is $e^{ik_Vx}$, with a certain reference original point $O(0)$, at the left side of the crystal. We shall always use the reference framework of the flying wave train itself, i.e., the reference point $O(t)$ propagates with the wave train, in the same speed. Suppose at time $t_0$, the distance between the reference point $O(t_0)$ and the left side surface of the crystal is $L$. The ramping voltage $V(t)$ applied to the crystal is a linear function of time $t$, $V(t)=a+bt$. Suppose the position of any point $X(t_0)$ is $x(t_0)$ at this reference framework. At a later time $t=\tau$, the wave train is at the right side of the crystal and the phase of original points $O(t_0),\;X(t_0)$ have now propagated to points $O(\tau),\;X(\tau)$, respectively. At time $\tau$, we take $O(\tau)$ as the reference original point and denote $x(\tau)$ as the position of $X(\tau)$ in the new reference framework of $O(\tau)$. To see the phase modulation after the wave train passes the crystal, we study the relation between $x(t_0)$ and $x(\tau)$. The refraction index of the crystal is linearly dependent on the applied voltage. At any time $t$, the vertical-polarization-mode light speed inside the crystal is \begin{equation} v(t)=\frac{v_0}{1+\eta V(t)} \end{equation} and $v_0$ is the light speed inside the crystal when there is no applied voltage, $\eta$ is a constant parameter which is dependent on the crystal property itself. Suppose the crystal thickness is $s$. At time $t_0$, the original phase at point $O(t_0)$ is $\varphi_{OV}$ or $\varphi_{OH}$, for vertical polarization wave train or horizontal polarization wave train, respectively. {\em Frequency shift.} Consider the vertical polarization case first. Suppose it takes time $\Delta t(X)$ for point $X$ to pass through the crystal. Explicitly, \begin{equation}\label{dt} \int_{t_{in}(X)}^{t_{in}(X)+\Delta t(X)} v(t)dt = s \end{equation} where $t_{in}(X)=\frac{-x}{c}+L/c$ is the time point that point $X$ in the original wave train reaches the left side of the crystal. For a linearly rising voltage $V(t)=a+bt$, Eq.(\ref{dt}) gives rise to \begin{equation}\label{time} \Delta t (X) = \frac{1+\eta(a+bL/c - b x/c)}{\eta b} \left(e^{\eta bs/v_0}-1\right). \end{equation} At time point $\tau$, the phases of points $O(t_0),\;X(t_0)$ have propagated to points $O(\tau),X(\tau)$, respectively. Using the formula above we find that the position of $X(\tau)$ in the new reference framework $O(\tau)$ is \begin{equation}\label{xtau} x(\tau)=e^{\eta bs/v_0}x. \end{equation} This is to say, after passing through the crystal, the spatial phase function (with reference original point $O(\tau)$) is changed into \begin{equation} \varphi_{out} (x(\tau))=\varphi_{out}(e^{\eta bs/v_0}x(t_0)) = \varphi_{in}(x(t_0)) \end{equation} where $\varphi_{in}(x)=k_Vx$ is the spatial phase function of the wave train before passing through the crystal, with reference original point $O(t_0)$. Therefore, the spatial phase for the wave train after passing through the crystal is \begin{equation} \varphi_{out} (x) = e^{-\eta bs/v_0}kx, \end{equation} where the reference original point is $O(\tau)$. (For simplicity, here we set all initial phases at reference point to be 0.) 
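The closed form (\ref{time}) and the rescaling (\ref{xtau}) can be verified directly by numerical integration. The sketch below is illustrative only; all parameter values ($v_0$, $\eta$, $a$, $b$, $s$, $L$ and the test position $x$) are arbitrary placeholders rather than properties of any particular crystal.
\begin{verbatim}
# Sketch: check the closed-form transit time Delta t(X) and the rescaling
# x(tau) = exp(eta*b*s/v0) * x by direct numerical integration.
# All numbers are arbitrary placeholders chosen only for this test.
import math

c   = 3.0e8           # speed of light in vacuum [m/s]
v0  = 2.0e8           # speed in the crystal at zero voltage [m/s]
eta = 1.0e-4          # coefficient in v(t) = v0/(1 + eta*V(t)) [1/V]
a, b = 50.0, 3.0e10   # ramp V(t) = a + b*t  [V], [V/s]
s   = 0.02            # crystal thickness [m]
L   = 0.30            # distance from O(t0) to the crystal entrance [m]

def transit_numeric(x, dt=1e-15):
    """Integrate dp/dt = v0/(1 + eta*V(t)) from the entry time t_in(X) = (L-x)/c
    until the accumulated path length inside the crystal reaches s."""
    t, p = (L - x) / c, 0.0
    while p < s:
        p += v0 / (1.0 + eta * (a + b * t)) * dt
        t += dt
    return t - (L - x) / c

def transit_formula(x):
    return (1.0 + eta * (a + b * L / c - b * x / c)) / (eta * b) \
           * (math.exp(eta * b * s / v0) - 1.0)

x = -0.05                                 # a point trailing the reference O by 5 cm
for point in (0.0, x):
    print(transit_numeric(point), transit_formula(point))

# exit-time difference between the two points reproduces the stretched separation
delta_exit = (transit_numeric(x) - x / c) - transit_numeric(0.0)
print(c * delta_exit, -x * math.exp(eta * b * s / v0))
\end{verbatim}
The printed pairs agree to high accuracy, confirming Eq.~(\ref{time}); the last line confirms the exponential stretching of the coordinate and hence the rescaled spatial phase $\varphi_{out} (x) = e^{-\eta bs/v_0}k_Vx$.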
This is actually a frequency shift to the wave train. The crystal under voltage ramping of $V(t)=a+ bt$ will transform the original frequency $\omega_V$ (or wave vector $k_V$ ) of the wave train at the left side of the crystal into the new frequency $\omega_V'$ (or wavevector $k_V'$ ) of the wave train at the right side of the crystal by the following formula: \begin{equation}\label{vip} \frac{\omega_V'}{\omega_V}=\frac{k_V'}{k_V}=e^{-\eta bs/v_0}=f(b). \end{equation} {\em Phase change.} Eq.(\ref{vip}) is the spatial phase modulation of vertical polarization only. If the original wave train is in horizontal polarization mode, it takes time \begin{equation} \Delta t_H =s/v_0 \end{equation} for any point $X(t_0)$ in the original wave train to pass through the crystal. At time $\tau$, the original reference point $O(t_0)$ propagates to the new reference point $O_H(\tau)$. In this new reference point, the spatial phase function is \begin{equation} \varphi_H (x_H(\tau)) =k_H x_H(\tau), \end{equation} where $k_H$ is the wave vector of the horizontal polarized mode. At the reference framework of $O(\tau)$ (reference point of vertical polarization), the position of $O_H(\tau)$ is \begin{equation}\label{dis}\begin{split} d(a,b,L) &= c(\tau -\frac{s}{v_0}) - c(\tau - \Delta t(O))\\&=\left(\frac{c}{\eta b}+\frac{ac}{b}+L\right)\left(e^{\eta bs/v_0}-1\right)-\frac{cs}{v_0}\end{split} \end{equation} where $\Delta t(O)$ is given by Eq.(\ref{time}) with $x=0$. Therefore, in the same reference framework $O(\tau)$, the spatial phase of horizontal polarization at time $\tau$ is \begin{equation}\label{ph} \varphi_H (x) =k_H (x-d(a,b)) \end{equation} where $k_H$ is the wave vector of the horizontal polarization mode and $d(a,b) $ is given by Eq.(\ref{dis}). The spatial phase difference of two polarizations in reference framework $O(\tau)$ is \begin{equation}\label{phout} \Delta \varphi(x) = \left(e^{-\eta bs/v_0}k_V -k_H\right) x + k_H d(a,b). \end{equation} From this we can see that the crystal under a time dependent voltage not only changes the frequency, but also offers a position-independent phase difference between the two polarization modes of the outcome wave train. This phase is dependent on the parameters $a,\,b$ in the linear function $V(t)$. In the derivation above, we have ignored the possible small dispersion of the crystal. Obviously, our result can be directly extended to this case that the crystal's refraction index is dependent on the frequency of the incident light by setting $v_0$ and $\eta$ frequency dependent. For simplicity, we shall only consider the case without dispersion hereafter. In such a case, taking a very similar derivation, we have the following wavefunction transform formulas for arbitrary wavefunction by its polarization mode: \begin{equation}\label{tr}\begin{split} &\psi_V(x) \longrightarrow e^{-\eta bs/(2 v_0)}\psi_V (e^{-\eta bs/v_0}x); \\ &\psi_H(x)\longrightarrow \psi_H(x-d(a,b)). \end{split} \end{equation} \begin{figure} \includegraphics[width=8cm]{modu1} \caption{\label{fig:modu} Phase modulation. After the wave train passed the crystal (the square box in the middle), the point $X(t_0)$ propagated to $X(\tau)$. The relationship of the positions of $X$ and $X'$ is given in Eq.~(\ref{xtau}). } \end{figure} \section{Our scheme} We propose two schemes here. As shown in Fig.\ref{fig:setup}, scheme 1 contains two EOM phase modulators with ramping voltage of $V_1(t)=a_1+b_1t$ and $V_2(t)=a_2+b_2t$. 
The two photons are separated by their frequencies, and each then passes through its own modulator. Each voltage ramp covers the time during which the corresponding photon passes through its modulator, as shown in Fig.~\ref{fig:Torder}. Scheme 1 places no restriction on the frequencies of the two photons, whether they are close together or far apart. However, as shown below, we need the difference between the starting times of the two voltage ramps to be controlled to much less than 1 ns. If the frequency difference of the two photons is small compared with the frequency of each photon, we can use scheme 2. In scheme 2, the problem of controlling the starting-time difference is circumvented. Scheme 2 contains only one EOM phase modulator under a voltage ramp $V(t)=a+bt$, as shown in Fig.~\ref{sm}. The two photons are separated and pass through the modulator along different paths. The polarization of photon 1 is flipped before it reaches the modulator. Also, the voltage ramp covers the time during which both photons pass through the modulator. \begin{figure} \includegraphics{Setup} \caption{\label{fig:setup} Proposed scheme 1. The first photon and second photon are separated by a dichroic mirror (DM). The two Pockels cells start to run before the photons arrive, and they apply reverse phase modulations. } \end{figure} \begin{figure} \includegraphics[width=7cm]{Torder} \caption{\label{fig:Torder} Voltage ramping in scheme 1. } \end{figure} \begin{figure} \includegraphics[width=6cm]{sch} \caption{\label{sm} Proposed scheme 2. The first photon and second photon are separated by a dichroic mirror (D). The polarization of photon 1 is flipped by a flipper (F) before it enters the Pockels cell P. Here both photons enter the same Pockels cell. } \end{figure} \subsection{ Voltage ramping of scheme 1} There are two photons emitted from the quantum dot. If we transform Eq.~(\ref{eq:stateFre}) from frequency space to position space, the state of the field can be rewritten as \begin{equation} \label{eq:evo} \begin{split} |\Psi_{in}\rangle =& \frac{\Gamma}{c} \iint_{0>x_1>x_2} dx_1 dx_2 e^{\frac{\Gamma}{2c}(x_2+x_1)} \\ & \times\big(e^{i(k_{H_1} x_1 + k_{H_2} x_2)} |H_1H_2\rangle + e^{i(k_{V_1} x_1 + k_{V_2} x_2)} |V_1V_2\rangle\big), \end{split} \end{equation} where $x_1$ and $x_2$ refer to the positions of the first and second photons, respectively, with a common reference point, the right end of the wave packet. Eq.~\eqref{eq:evo} is equivalent to the result given in Ref.~\cite{PRL_07_HudsonAJ}. Suppose at a certain time $t_0$, the reference point $O(t_0)$ arrives at the dichroic mirror. According to Eq.(\ref{tr}), the outcome state is \begin{equation} \begin{split} |\Psi_{out}\rangle =& \frac{\Gamma}{c} \iint_{0>x_1-d_1>x_2-d_2} dx_1 dx_2 A_{H}|H_1H_2\rangle\\ & \\ & + \frac{\Gamma}{c}\sqrt{f_1f_2} \iint_{0>f_1x_1>f_2x_2} dx_1 dx_2 \\ &\times A_V e^{i\Delta\varphi_1+i\Delta\varphi_2} |V_1V_2\rangle, \end{split} \end{equation} where $f_i=e^{-\eta b_is/v_0}$, $\Delta\varphi_i$ is given by Eq.(\ref{phout}), with parameters $a=a_i,\;b=b_i,\;L=L_i$ there; \begin{equation}\begin{split}& A_H=e^{\frac{\Gamma}{2c}(x_1 + x_2 - d_1 - d_2)}\\& A_V=e^{\frac{\Gamma}{2c}( f_2x_2+f_1x_1)} \end{split} \end{equation} and $d_i$ is given by Eq.(\ref{dis}) with parameters $a=a_i,b=b_i$. 
If we set \begin{equation}\label{vol} \begin{split} b_1=\frac{v_0}{\eta s}\ln \left( \frac{k_{V_1}}{k_{H_1}} \right),\\ b_2=\frac{v_0}{\eta s}\ln \left( \frac{k_{V_2}}{k_{H_2}}\right), \end{split} \end{equation} we find that \begin{equation}\label{phase}\begin{split} \Delta \varphi_{1} +\Delta\varphi_2& = k_{H_1}\left(\frac{c}{\eta b_1}+\frac{ca_1}{b_1}+L_1\right)\left(e^{\eta b_1s/v_0}-1\right)\\ &+ k_{H_2}\left(\frac{c}{\eta b_2}+\frac{ca_2}{b_2}+L_2\right)\left(e^{\eta b_2s/v_0}-1\right) -k_0cs/v_0, \end{split} \end{equation} where $L_1,\; L_2$ are the optical path lengths from $O(t_0)$ to the two Pockels cells and $k_0 = k_{H_1} + k_{H_2} $ is a constant (i.e., position independent). Therefore the value above is a constant phase, independent of the positions $x_1,\;x_2$. Also, we find that \begin{equation} \sqrt{f_1f_2}A_V/A_H=\sqrt{f_1f_2}\exp{\left[\frac{\Gamma}{2c}( f_2x_2+f_1x_1)-\frac{\Gamma}{2c}(x_1+x_2-d_1 -d_2)\right]}=1+\epsilon, \end{equation} where $\epsilon$ is of order $10^{-6}$, given the coherence length of the wave train $L\approx 0.3$ m and $\Gamma\approx 10^{9}\ {\rm s}^{-1}$. Therefore, with the setting of Eq.(\ref{vol}), our scheme 1 can produce high quality polarization entangled photon pairs. In our schemes, we only split the two photons by their frequency difference instead of splitting the polarization modes. Even if the optical path of each photon fluctuates significantly, the result changes only negligibly. This is different from the AOM-based scheme in Ref.\cite{jones}, which separates the two polarization modes. A fluctuation of amount $\delta l_i$ in the optical path of the $i$'th photon will cause a fluctuation of amount \begin{equation} \delta\varphi_i = k_{H_i}\delta l_i(e^{\eta b_i s/v_0}-1). \end{equation} This means that even a fluctuation of 1 mm in one of the optical paths will cause only a phase-difference fluctuation of order $10^{-3}$ in our scheme. Since $a_1,a_2$ play no role in our scheme, we can ramp the voltage from 0, i.e., set $a_1=a_2=0$. We can also consider the consequence of non-simultaneous voltage ramping of the two modulators. Suppose the starting times of the ramps are $t_1$ and $t_2$, respectively. To be sure that the voltage ramp covers the incident wave train, we need $t_1\le t_0$ and $t_2\le t_0$. This is equivalent to setting $a_1=b_1(t_0-t_1)$ and $a_2=b_2(t_0-t_2)$ and starting the voltage ramps exactly at $t=t_0$. The fluctuation in the value of Eq.(\ref{phase}) is now \begin{equation}\begin{split} \delta(\varphi) &\approx \frac{c\eta s}{v_0}[k_{H_1} b_1 (t_0- t_1)+k_{H_2}b_2(t_0-t_2)] \approx c k_S \delta t, \end{split} \end{equation} where $ k_S =k_{V_1}-k_{H_1}$ corresponds to the FSS, and $\delta t = t_2-t_1$. We see that our result depends only on the time difference $\delta t$, not on the absolute time. Therefore, the trigger-time uncertainty of the quantum dot does not affect the result here. To obtain high quality entanglement, we need \begin{equation} \delta t \ll \frac{1}{ck_S}. \end{equation} Given an FSS of 1 GHz, we need the ramping time difference to be much smaller than 1 ns. The consequence of the time difference $\delta t$ here is equivalent to that of the time-window post-selection scheme with time-resolving detection\cite{PRL_08_StevensonRM}. However, in our scheme there is almost no photon loss. In scheme 1, we need to control the starting-time difference of the two voltage ramps within a rather small range (much less than 1 ns). 
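These estimates can be reproduced with a few lines of arithmetic. In the sketch below, the photon wavelength ($\sim 885$ nm, a representative assumed value for InAs quantum dots) and the FSS ($1\ \mu eV$) only set the scales; under Eq.~(\ref{vol}) one has $e^{\eta b_i s/v_0}-1 = k_{V_i}/k_{H_i}-1 \approx S/E_{\rm photon}$.
\begin{verbatim}
# Sketch: order-of-magnitude checks for scheme 1.  Assumed inputs (not taken
# from the text): photon wavelength ~ 885 nm, fine-structure splitting ~ 1 ueV.
import math

hbar = 1.055e-34            # [J s]
h    = 6.626e-34            # [J s]
c    = 2.998e8              # [m/s]
eV   = 1.602e-19            # [J]

lam  = 885e-9               # assumed photon wavelength [m]
S    = 1e-6 * eV            # assumed fine-structure splitting [J]

k_H  = 2.0 * math.pi / lam  # photon wave vector [1/m]
E_ph = h * c / lam          # photon energy [J]  (about 1.4 eV here)

# (i) phase fluctuation from a 1 mm path fluctuation:
#     delta_phi_i = k_H * delta_l * (exp(eta*b*s/v0) - 1) ~ k_H * delta_l * S/E_ph
delta_l   = 1e-3
delta_phi = k_H * delta_l * (S / E_ph)
print(f"1 mm path fluctuation -> phase fluctuation ~ {delta_phi:.1e} rad")

# (ii) ramp synchronization: delta_t must be << 1/(c*k_S), k_S = (omega_V - omega_H)/c
k_S = (S / hbar) / c
print(f"1/(c*k_S) ~ {1e9 / (c * k_S):.2f} ns")
\end{verbatim}
Both numbers are consistent with the estimates quoted above.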
As shown below, our scheme 2 has an intrinsic fault-tolerance property: the technical problem of simultaneous ramping is bypassed. \subsection{Robustness of scheme 2} In our scheme 2, as shown in Fig.(\ref{sm}), we can set $V(t)=bt$ with \begin{equation} \label{b} b=\frac{v_0}{\eta s}\ln\left(\frac{k_{H_1}}{k_{V_1}}\right). \end{equation} Before the photons pass through the crystal, the state is \begin{equation} \begin{split} |\Psi_{in}\rangle =& \frac{\Gamma}{c} \iint_{0>x_1>x_2} dx_1 dx_2 e^{\frac{\Gamma}{2c}(x_2+x_1)} \\ & \times\big(e^{i(k_{H_1} x_1 + k_{H_2} x_2)} |V_1H_2\rangle + e^{i(k_{V_1} x_1 + k_{V_2} x_2)} |H_1V_2\rangle\big), \end{split} \end{equation} After the two photons pass through the same Pockels cell under voltage ramping, the state is \begin{equation} \label{single} \begin{split} |\Psi_{out}\rangle =& \frac{\Gamma}{c} \sqrt{f}\iint_{0>f x_1>x_2-d_2} dx_1 dx_2 A_{H}|V_1H_2\rangle\\ & + \frac{\Gamma}{c} \sqrt{f} \iint_{0>x_1-d_1>f x_2} dx_1 dx_2 \\ &\times A_V e^{-i\Delta\varphi_1+i\Delta\varphi_2} |H_1V_2\rangle, \end{split} \end{equation} where \begin{equation}\begin{split}& \Delta \varphi_1 = \left(f k_{H_1} - k_{V_1}\right) x_1 + k_{V_1}d_1,\\ & \Delta \varphi_2 = \left(f k_{V_2} - k_{H_2}\right) x_2 + k_{H_2}d_2,\\ \end{split} \end{equation} with $f = e^{-\eta b s/v_0}$; \begin{equation}\begin{split}& A_H=e^{\frac{\Gamma}{2c}(fx_1 + x_2 -d_2)}\\& A_V=e^{\frac{\Gamma}{2c}( fx_2+x_1-d_1)} \end{split} \end{equation} As defined in Eq.(\ref{b}), here $b_1=b_2=b$ and $a_1=a_2=0$. According to Eq.(\ref{dis}), $d_i$ here is \begin{equation}\begin{split} d(a=0,b,L_i) =\left(\frac{c}{\eta b}+L_i\right)\left(e^{\eta bs/v_0}-1\right)-\frac{cs}{v_0}.\end{split} \end{equation} Direct calculations show that $|A_V/A_H|-1$ is of the order of $10^{-6}$. As shown earlier, the effect of optical path fluctuations on the final outcome is negligible; we therefore assume zero fluctuation in the optical paths. Also, since both photons enter the same Pockels cell, there is no starting-time difference of the ramping voltage. A direct calculation shows that the only non-constant (position-dependent) term in $\Delta\varphi_2 - \Delta\varphi_1$ is \begin{equation} \epsilon_2 \approx \frac{k_S\Delta k}{k_{H_1}}x_2, \end{equation} where $\Delta k = k_{V_2}-k_{H_1}$. In the set-up of Ref.\cite{Nature_06_StevensonRM}, the coherence length of the whole wave train is only about $0.3$ m, so the maximal value of $k_S x_2$ is around 1. Moreover, $\Delta k/k_{H_1}$ is of the order of $10^{-3}$, so the position-dependent term $\epsilon_2$ is around $10^{-3}$ and hence negligible. \subsection{Feasibility} Similar EOM phase modulation has been used in laser spectroscopy, for example in Pound-Drever-Hall laser frequency stabilization\cite{black:79}. Technically, such phase modulation can be accomplished by a commercially available optical device, such as a Pockels cell, which introduces a phase shift to the vertically polarized mode. In passing through the Pockels cell, a photon acquires an additional phase shift $\alpha V$ for an applied voltage $V$, where $\alpha$ is the phase sensitivity of the Pockels cell. The parameter $\alpha$ is related to the parameters used in our earlier calculations by \begin{equation} n_0 \eta s =\frac{\alpha \lambda }{2\pi}, \end{equation} where $\lambda$ is the wavelength of the incident light and $n_0 = c/v_0$. 
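Combining Eq.~(\ref{vol}) with the relation above, and using $\ln(k_{V_1}/k_{H_1})\approx k_S/k_{H_1}$ for a small splitting, the required ramp rate reduces to $b\approx \omega_S/\alpha$, independent of the wavelength, where $\omega_S$ is the FSS expressed as an angular frequency. The sketch below evaluates this with the representative figures quoted in the next paragraph ($\alpha\approx 52$~mrad/V, FSS $\approx 1~\mu$eV, a ramp lasting a few ns); these inputs are illustrative assumptions, not values fixed by the derivation.
\begin{verbatim}
# Sketch of the feasibility estimate: ramp rate b ~ omega_S / alpha.
# Inputs are representative figures only.
hbar_eVs = 6.582e-16
alpha    = 52e-3                 # Pockels-cell phase sensitivity, rad/V
omega_S  = 1e-6 / hbar_eVs       # FSS of 1 micro-eV as an angular frequency, rad/s

b = omega_S / alpha              # required ramp rate, V/s
print("ramp rate b ~ %.0f V/ns" % (b * 1e-9))    # ~29 V/ns, i.e. "around 30 V/ns"

ramp_ns = 5.0                    # the emission lasts only a few nanoseconds
print("peak voltage over a %.0f ns ramp ~ %.0f V"
      % (ramp_ns, b * ramp_ns * 1e-9))           # ~150 V, "a few hundred volts"
\end{verbatim}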
As far as we know, the phase sensitivity $\alpha$ of commercially available Pockels cells can be up to $52~\mathrm{mrad/V}$ at $830~\mathrm{nm}$\cite{conoptics}. In order to compensate an FSS of $1\ \mu eV = 2 \pi \times 254.6 \ \mathrm{MHz}$, we only need to set the ramp rates $b_1$ and $b_2$ to around $30~\mathrm{V/ns}$ according to Eq.~(\ref{vol}). This is clearly achievable with existing technology. Because the duration of the emitted radiation is only a few nanoseconds\cite{PRL_08_StevensonRM}, the voltage ramp needs to last only a few nanoseconds. Therefore, the maximal voltage required is only a few hundred volts according to Eq.~\eqref{vol}, which is easily accessible. Moreover, we can also arrange several Pockels cells in series along one photon's path to compensate a larger FSS. \section{Concluding remark} We have shown how to compensate the position-dependent phase in the entangled photon pair generated by the biexciton cascade decay of a single semiconductor quantum dot with FSS. The EOM phase modulation is realized by voltage ramping on a Pockels cell. Our proposed schemes are shown to be robust with respect to imperfections such as optical path fluctuations. With our scheme, the quality of the entangled photon pairs can be improved to an almost perfect level. \begin{acknowledgments} {\em Acknowledgments---} We would like to thank H. P. Zeng, C.Z. Peng, S. Jiang, Jia-Zhong Hu and Ming Gao for helpful discussions. This work was supported in part by the National Basic Research Program of China grant nos 2007CB907900 and 2007CB807901, NSFC grant number 60725416, and China Hi-Tech program grant no. 2006AA01Z420. \end{acknowledgments}
\section{Introduction} Lascoux-Leclerc-Thibon (\cite{LLT}) conjectured that the irreducible representations of Hecke algebras of type $A$ are controlled by the upper global basis (\cite{Kash91,Kash93}) (or dual canonical basis (\cite{Lus93})) of the basic representation of the affine quantum group $U_q(A^{(1)}_\ell)$. Then Ariki (\cite{A}) proved this conjecture by generalizing it to cyclotomic Hecke algebras. The crucial ingredient in his proof was the fact that the cyclotomic Hecke algebras categorify the irreducible highest weight representations of $U(A^{(1)}_\ell)$. Because of the lack of grading on the cyclotomic Hecke algebras, these algebras do not categorify the representation of the quantum group. Then Khovanov-Lauda and Rouquier introduced independently a new family of {\em graded} algebras, a generalization of affine Hecke algebras of type $A$, in order to categorify arbitrary quantum groups (\cite{KL09, KL08, R08}). These algebras are called {\em Khovanov-Lauda-Rouquier algebras} or {\em quiver Hecke algebras.} Let $U_q(\g)$ be the quantum group associated with a symmetrizable Cartan datum and let $\{R(\beta)\}_{\beta \in \rootl^{+}} $ be the corresponding \KLRs. Then it was shown in \cite{KL09, KL08} that there exists an algebra isomorphism $$U_{\A}^{-}(\g) \simeq \bigoplus_{\beta \in \rootl^{+}}\K{R(\beta)\proj},$$ where $U_{\A}^-(\g)$ is the integral form of the half $U_q^-(\g)$ of the quantum group $U_{q}(\g)$ with $\A = \Z[q, q^{-1}]$, and $\K{R(\beta)\proj}$ is the Grothendieck group of the category $R(\beta)\proj$ of finitely generated projective graded $R(\beta)$-modules. The positive root lattice is denoted by $\rootl^+$. By the duality, we have \eq U_{\A}^{-}(\g)^* \simeq \bigoplus_{\beta \in \rootl^{+}}\K{R(\beta)\gmod},\label{eq:gmod}\eneq where $U_{\A}^-(\g)^*$ is the direct sum of the dual of the weight space $U_{\A}^{-}(\g)_{-\beta}$ of $U_\A^-(\g)$, and $R(\beta)\gmod$ is the abelian category of graded $R(\beta)$-modules which are finite-dimensional over the base field $\cor$. When the generalized Cartan matrix is a symmetric matrix, Varagnolo and Vasserot (\cite{VV09}) and Rouquier (\cite{R11}) proved that the {\em upper global basis} introduced by the author or Lusztig's {\it dual canonical basis} corresponds to the isomorphism classes of simple $R(\beta)$-modules via the isomorphism \eqref{eq:gmod}. However, for a given generalized Cartan matrix, associated \KLRs\ are not unique and depend on the parameters $c$. Varagnolo-Vasserot and Rouquier have proved the above results for a very special choice $c_0$ of parameters (see \eqref{eq:Q}). Let $R(\beta)_{c_0}$ denote the \KLR\ with the choice $c_0$, and $R(\beta)_\cg$ the \KLR\ with a generic choice $\cg$ of parameters. When a simple $R(\beta)_\cg$-module is specialized at the special parameter $c_0$, it may be a reducible $R(\beta)_{c_0}$-module. The purpose of this note is to prove that the specialization of any simple $R(\beta)_\cg$-module at $c_0$ remains a simple $R(\beta)_{c_0}$-module. In other words, the set of isomorphism classes of simple $R(\beta)_\cg$-modules also corresponds to the upper global basis. \noi {\it Acknowledgements.} We thank Shunsuke Tsuchioka for helpful discussions. \section{Review on global bases and \KLRs} \label{sec:R} \subsection{Global bases} Let $I$ be a finite index set. 
An integral square matrix $A=(a_{i,j})_{i,j \in I}$ is called a {\em symmetrizable generalized Cartan matrix} if it satisfies (i) $a_{i,i} = 2$ $(i \in I)$, (ii) $a_{i,j} \le 0$ $(i \neq j)$, (iii) $a_{i,j}=0$ if $a_{j,i}=0$ $(i,j \in I)$, (iv) there is a diagonal matrix $D=\text{diag} (d_i \in \Z_{> 0} \mid i \in I)$ such that $DA$ is symmetric. A \emph{Cartan datum} $(A,P, \Pi,P^{\vee},\Pi^{\vee})$ consists of \begin{enumerate}[(1)] \item a symmetrizable generalized Cartan matrix $A$, \item a free abelian group $P$ of finite rank, called the \emph{weight lattice}, \item $P^{\vee}\seteq\Hom(P, \Z)$, called the \emph{co-weight lattice}, \item $\Pi= \set{\alpha_i }{i \in I}\subset P$, called the set of \emph{simple roots}, \item $\Pi^{\vee}= \set{h_i}{i \in I}\subset P^{\vee}$, called the set of \emph{simple coroots}, \end{enumerate} satisfying the condition: $\langle h_i,\alpha_j \rangle = a_{i,j}$ for all $i,j \in I$. Since ${A}$ is symmetrizable, there is a symmetric bilinear form $( \ \mid \ )$ on $P$ satisfying $$(\alpha_i | \alpha_j)= d_i a_{i,j} \quad \text{ and } \quad (\alpha_i | \lambda) = d_i \langle h_i, \lambda \rangle \quad \text{ for all } i,j \in I, \ \lambda \in P.$$ The free abelian group $\rootl= \soplus_{i \in I} \Z \alpha_i$ is called the \emph{root lattice}. Set $\rootl^{+}= \sum_{i \in I} \Z_{\ge 0} \alpha_i\subset\rootl$ and $\rootl^{-}= \sum_{i \in I} \Z_{\le0} \alpha_i\subset\rootl$. For $\beta=\sum_{i\in I}m_i\al_i\in\rootl$, we set $\haut(\beta)=\sum_{i\in I}|m_i|$. Let $q$ be an indeterminate. Set $q_i = q^{d_i}$ for $i\in I$ and we define $[n]_i =(q^n_{i} - q^{-n}_{i})(q_{i} - q^{-1}_{i} )^{-1}$ and $[n]_i! = \prod^{n}_{k=1} [k]_i$ for $n\in\Z_{\ge0}$. \Def\label{Def: GKM} The {\em quantum algebra} $U_q(\g)$ associated with a Cartan datum $({A},{P},\Pi,\Pi^{\vee})$ is the algebra over $\Q(q)$ generated by $e_i,f_i$ $(i \in I)$ and $q^{h}$ $(h \in {P}^{\vee})$ satisfying following relations: \bnum \item $q^0=1$, $q^{h} q^{h'}=q^{h+h'} $ for $ h,h' \in {P}^{\vee},$ \item $q^{h}e_i q^{-h}= q^{\langle h, \alpha_i \rangle} e_i$, $q^{h}f_i q^{-h} = q^{-\langle h, \alpha_i \rangle }f_i$ for $h \in {P}^{\vee}, i \in I$, \item $e_if_j - f_je_i = \delta_{i,j} \dfrac{K_i -K^{-1}_i}{q_i- q^{-1}_i }, \ \ \mbox{ where } K_i=q_i^{ h_i},$ \item $\displaystyle \sum^{1-a_{i,j}}_{r=0} (-1)^re^{(1-a_{i,j}-r)}_i e_j e^{(r)}_i =0 \quad \text{ if $i \ne j$,} $ where $e_i^{(n)}=e_i^n/[n]_i!$, \item $\displaystyle \sum^{1-a_{i,j}}_{r=0} (-1)^rf^{(1-a_{i,j}-r)}_if_jf^{(r)}_i=0 \quad \text{ if $i\not=j$,}$ where $f_i^{(n)}=f_i^n/[n]_i!$. \end{enumerate} \enDef Let $U_q^{-}(\g)$ be the $\Q(q)$-subalgebra of $U_q(\g)$ generated by the elements $f_i$. We define the endomorphisms $e_i'$ and $e_i''$ of $U_q^{-}(\g)$ by $$[e_i,a]=(q_i-q_i^{-1})^{-1}(K_ie_i''a-K_i^{-1}e_i'a) \quad\text{for $a\in U_q^{-}(\g)$.}$$ Then $e_i'$ and the left multiplication of $f_j$ satisfy the $q$-boson commutation relations $$e'_if_j-q_i^{-a_{i,j}}f_je_i'=\delta_{i,j}.$$ Set $\A=\Z[q,q^{-1}]$ and let $U_\A^-(\g)$ be the $\A$-subalgebra of $U_q^{-}(\g)$ generated by the elements $f_i^{(n)}$. Then $U_\A^-(\g)$ has a weight decomposition $U_\A^-(\g)=\soplus_{\beta\in \rootl^-}U_\A^-(\g)_{\beta}$ where $U_\A^-(\g)_{\beta}\seteq\set{a\in U_\A^-(\g)}{q^haq^{-h}=q^{\lan h_i,\beta\ran}a}$. Set $U_\A^-(\g)^*=\soplus_{\beta\in \rootl^-}\Hom_\A(U_\A^-(\g)_{\beta},\A)$ and let $e_i$, $f_i'\in\End_\A(U_\A^-(\g)^*)$ be the transposes of $f_i, e_i'\in\End_\A(U_\A^-(\g))$, respectively. 
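As a small supplementary illustration of the $q$-integers entering the definition above, the following snippet (using SymPy; purely illustrative and not part of the original argument) expands $[n]_i$ and $[n]_i!$ and checks that $[n]_i$ is invariant under $q\mapsto q^{-1}$.
\begin{verbatim}
# Supplementary check: [n]_i is a (q -> 1/q)-invariant Laurent polynomial,
# and [n]_i! = [1]_i [2]_i ... [n]_i.
import sympy as sp

q = sp.symbols('q')

def qint(n, d=1):
    """[n]_i with q_i = q**d (n a positive integer)."""
    qi = q**d
    return sp.cancel((qi**n - qi**(-n)) / (qi - qi**(-1)))

def qfact(n, d=1):
    return sp.cancel(sp.prod([qint(k, d) for k in range(1, n + 1)]))

for n in range(1, 5):
    x = sp.expand(qint(n))
    assert sp.simplify(x - x.subs(q, 1/q)) == 0  # invariance under q -> 1/q
    print(n, x)            # [n] = q^(n-1) + q^(n-3) + ... + q^(1-n)
print("[3]! =", sp.expand(qfact(3)))             # q^3 + 2q + 2/q + q^-3
\end{verbatim}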
Note that $U_\A^-(\g)_0$ is a free $\A$-module with a basis $1$, and hence $U_\A^-(\g)^*_0$ is a free $\A$-module generated by the dual basis of $1$, which is denoted by $\vac$. \Prop[\cite{Kash91,Kash93}]\label{Prop:glbal} There exists a unique basis $\{\G(b)\}_{b\in B}$ of the $\A$-module $U_\A^-(\g)^*$, called {\em the upper global basis}, which satisfies the following conditions: \bnum \item $\vac\in\set{\G(b)}{b\in B}$, \item for any $b\in B$, $G(b)$ belongs to $\bl U_\A^-(\g)_\beta\br^*$ for some $\beta\in\rootl^-$, which is denoted by $\wt(b)$, \item Set $\eps_i(b)=\max\set{n\in\Z_{\ge0}}{e_i^n\G(b)\not=0}$. Then for any $b\in B$ and $i\in I$, there exists $\tf_ib\in B$ such that, when writing $$f_i'\G(b)=\sum_{b'\in B}F^i_{b,b'}\G(b')\quad \text{with $F^i_{b,b'}\in \A$,}$$ we have \bna \item $F^i_{b,\tf_ib}=q_i^{-\eps_i(b)}$,\label{prt1} \item $\eps_i(\tf_ib)=\eps_i(b)+1$, \item $F^i_{b,b'}=0$ if $b'\not=\tf_ib$ and $\eps_i(b')\ge\eps_i(b)+1$, \item $F^i_{b,b'}\in qq_i^{-\eps_i(b)}\Z[q]$ for $b'\not=\tf_ib$.\label{prt2} \ee \item for $b\in B$ such that $\eps_i(b)>0$, there exists $\te_ib\in B$ such that, when writing $$e_i\G(b)=\sum_{b'\in B}E^i_{b,b'}\G(b')\quad \text{with $E^i_{b,b'}\in \A$,} $$ we have \bna \item $E^i_{b,\te_ib}=[\eps_i(b)]_i$, \item $\eps_i(\te_ib)=\eps_i(b)-1$, \item $E^i_{b,b'}=0$ if $b'\not=\te_ib$ and $\eps_i(b')\ge\eps_i(b)-1$, \item any $E_{b,b'}^i$ is invariant under the automorphism $q\mapsto q^{-1}$, \item $E^i_{b,b'}\in qq_i^{1-\eps_i(b)}\Z[q]$ for $b'\not=\te_ib$. \ee \item $\tf_i\te_ib=b$ if $\eps_i(b)>0$, and $\te_i\tf_ib=b$. \ee \enprop Note that $B$ has the weight decomposition $$B=\bigsqcup\nolimits_{\beta\in\rootl^-}B_{\beta}\quad\text{ with $B_{\beta}\seteq\set{b\in B}{\wt(b)=\beta}$.} $$ There exists a unique involution (called the {\em bar involution}) $-\cl U_\A^-(\g)^*\to U_\A^-(\g)^*$ such that \eq&& \hs{-30ex}\parbox{40ex}{\bna \item $(qu)^-=q^{-1}\ol{u}$\quad for any $u\in U_\A^-(\g)^*$, \item $-\circ e_i=e_i\circ -$\quad for any $i$, \item $\ol{\vac}=\vac$. \ee} \label{char:bar} \eneq We have $$\ol{\G(b)}=\G(b)\quad\text{for any $b\in B$.}$$ \subsection{Quiver Hecke algebras} Let $(A, P, \Pi, P^{\vee}, \Pi^{\vee})$ be a Cartan datum. In this subsection, we recall the construction of the \KLRs\ associated with $(A, P, \Pi, P^{\vee}, \Pi^{\vee})$. For $i,j\in I$ such that $i\not=j$, set $$S_{i,j}=\set{(p,q)\in\Z_{\ge0}^2}{(\al_i,\al_i)p+(\al_j,\al_j)q=-2(\al_i,\al_j)}. $$ Let $\cora$ be the commutative $\Z$-algebra generated by indeterminates $\{t_{i,j;p,q}\}$ and the inverse of $t_{i,j;-a_{i,j},0}$ where $i,j\in I$ such that $i\not=j$ and $(p,q)\in S_{i,j}$. They are subject to the defining relations: $$t_{i,j;p,q} = t_{j,i;q,p}.$$ Let us define the polynomials $(Q_{ij})_{i,j\in I}$ in $\cora[u,v]$ by \begin{equation} Q_{ij}(u,v) = \begin{cases}\hs{5ex} 0 \ \ & \text{if $i=j$,} \\ \sum\limits_{(p,q)\in S_{i,j}} t_{i,j;p,q} u^p v^q\quad& \text{if $i \neq j$.} \end{cases} \end{equation} They satisfy $Q_{i,j}(u,v)=Q_{j,i}(v,u)$. We denote by $S_{n} = \langle s_1, \ldots, s_{n-1} \rangle$ the symmetric group on $n$ letters, where $s_i\seteq (i, i+1)$ is the transposition of $i$ and $i+1$. Then $S_n$ acts on $I^n$. 
\Def[\cite{KL09,{R08}}] \label{def:KLRalg} The {\em \KLR}\ $R(n)$ of degree $n$ associated with a Cartan datum $(A, P, \Pi, P^{\vee}, \Pi^{\vee})$ is the $\Z$-graded algebra over $\cora$ generated by $e(\nu)$ $(\nu \in I^{n})$, $x_k$ $(1 \le k \le n)$, $\tau_l$ $(1 \le l \le n-1)$ satisfying the following defining relations: {\allowdisplaybreaks \begin{align*} & e(\nu) e(\nu') = \delta_{\nu, \nu'} e(\nu), \ \ \sum\nolimits_{\nu \in I^{n}} e(\nu) = 1, \\ & x_{k} x_{l} = x_{l} x_{k}, \ \ x_{k} e(\nu) = e(\nu) x_{k}, \\ & \tau_{l} e(\nu) = e(s_{l}(\nu)) \tau_{l}, \ \ \tau_{k} \tau_{l} = \tau_{l} \tau_{k} \ \ \text{if} \ |k-l|>1, \\ & \tau_{k}^2 e(\nu) = Q_{\nu_{k}, \nu_{k+1}} (x_{k}, x_{k+1}) e(\nu), \\ & (\tau_{k} x_{l} - x_{s_k(l)} \tau_{k}) e(\nu) = \begin{cases} -e(\nu) \ \ & \text{if} \ l=k, \nu_{k} = \nu_{k+1}, \\ e(\nu) \ \ & \text{if} \ l=k+1, \nu_{k}=\nu_{k+1}, \\ 0 \ \ & \text{otherwise}, \end{cases} \displaybreak[3]\\[.5ex] & (\tau_{k+1} \tau_{k} \tau_{k+1}-\tau_{k} \tau_{k+1} \tau_{k}) e(\nu)\displaybreak[0]\\ &\hs{8ex} =\begin{cases} \dfrac{Q_{\nu_{k}, \nu_{k+1}}(x_{k}, x_{k+1}) - Q_{\nu_{k}, \nu_{k+1}}(x_{k+2}, x_{k+1})} {x_{k} - x_{k+2}}e(\nu) \ \ & \text{if} \ \nu_{k} = \nu_{k+2}, \\ 0 \ \ & \text{otherwise}. \end{cases} \end{align*} } The $\Z$-grading on $R(n)$ is given by \begin{equation} \label{eq:Z-grading} \deg e(\nu) =0, \quad \deg\; x_{k} e(\nu) = (\alpha_{\nu_k} | \alpha_{\nu_k}), \quad\deg\; \tau_{l} e(\nu) = - (\alpha_{\nu_l} | \alpha_{\nu_{l+1}}). \end{equation} \enDef \noindent Note that $R(n)$ has an anti-involution $\psi$ that fixes the generators $x_k$, $\tau_l$ and $e(\nu)$. For $n\in \Z_{\ge 0}$ and $\beta \in \rootl^{+}$ such that $\haut(\beta)=n$, we set $$I^{\beta} = \set{ \nu = (\nu_1, \ldots, \nu_n) \in I^n {\alpha_{\nu_1} + \cdots + \alpha_{\nu_n} = \beta }.$$ We define \eq &&\ba{l} e(\beta) = \sum_{\nu \in I^{\beta}} e(\nu), \\[1ex] R(\beta) = R(n) e(\beta)=\soplus_{\nu\in I^\beta}R(n)e(\nu). \ea\eneq The algebra $R(\beta)$ is called the {\it {\KLR} at $\beta$}. Similarly, for $\beta,\gamma\in \rootl^+$ with $m=\haut(\beta)$ and $n=\haut(\gamma)$ \eq &&\ba{l} e(\beta, \gamma) = \sum_{\nu}e(\nu)\in R(m+n)\\[1ex] \hs{10ex}\parbox{50ex}{where $\nu$ ranges over the set of $\nu\in I^{m+n}$ such that $\sum_{k=1}^m\alpha_{\nu_k}=\beta$ and $\sum_{k=m+1}^{m+n}\al_{\nu_k}=\gamma$.} \ea\eneq Then $R(m+n)e(\beta,\gamma)$ is a graded $(R(\beta+\gamma), R(\beta)\otimes R(\gamma))$-bimodule. For a graded $R(\beta)$-module $M$ and a graded $R(\gamma)$-module $N$, we define their convolution $M\circ N$ by $$M\circ N=R(\beta+\gamma)e(\beta,\gamma) \tens_{R(\beta)\otimes R(\gamma)}(M\otimes N). $$ For $\ell\in\Z_{\ge0}$, We define the graded $R(\ell\al_i)$-module $\Ln(i^\ell)$ by $$\Ln(i^\ell)=q_i^{\ell(\ell-1)/2}\Bigl( R(\ell\al_i)/\bigl(\ssum_{k=1}^\ell R(\ell\al_i)x_k\bigr)\Bigr).$$ Here $q\cl \Mod\bl R(\beta)\br\to \Mod\bl R(\beta)\br$ is the grade-shift functor: \eq (qM)_k=M_{k-1},\eneq and $q_i=q^{(\al_i\vert\al_i)/2}$. For a commutative ring $\cor$ and a ring homomorphism $c\cl \cora \to \cor$, we denote by $R(\beta)_\cor$ the algebra $\cor\otimes_{\cora}R(\beta)$. Let us denote by $\Par$ the scheme $\mathrm{Spec}(\cora)$. For $x\in \Par$, let us denote by $\cor(x)$ the residue field of the local ring $(\sho_\Par)_x$ and denote by $R(\beta)_x$ the $\cor(x)$-algebra $\cor(x)\otimes_\cora R(\beta)$. Let us take a commutative field $\cor$ and a homomorphism $\cor(x)\to\cor$. 
For $\beta\in\rootl^+$, let us denote by $R(\beta)_\cor\gmod$ the abelian category of graded $R(\beta)_\cor$-modules finite-dimensional over $\cor$. Then the set of isomorphism classes of simple objects in $R(\beta)_\cor\gmod$ is isomorphic to the one for $R(\beta)_{x}\gmod$ by $S\mapsto \cor\tens_{\cor(x)}S$ (see \cite[Corollary 3.19]{KL09}). For $i\in I$ and $x\in \Par$ we have functors \eqn \xymatrix{R(\beta)_x\gmod\ar@<.5ex>[r]^-{F_i}&R(\beta+\al_i)_x\gmod \ar@<.5ex>[l]^-{E_i}.} \eneqn Here these functors are defined by \eqn &&F_iM= M\circ\Ln(i)\simeq\bl R(\beta+\al_i)_xe(\beta,\al_i)/R(\beta+\al_i)_xe(\beta,\al_i)x_{n+1} \br \otimes_{R(\beta)_x}M,\\ &&E_iN=e(\beta,\al_i)N \simeq \Hom_{R(\beta+\al_i)_x}\bl R(\beta+\al_i)_xe(\beta,\al_i),N\br\\ &&\hs{5ex}\simeq e(\beta,\al_i)R(\beta+\al_i)_x\otimes _{R(\beta+\al_i)_x}N \eneqn for $M\in R(\beta)_x\gmod$ and $N\in R(\beta+\al_i)_x\gmod$. Then we have \eqn &&E_i F_i\simeq q^{-(\al_i,\al_i)}F_i E_i\soplus \id,\\ &&E_i F_j\simeq q^{-(\al_i,\al_j)}F_j E_i\quad\text{for $i\not=j$,} \eneqn which immediately follows from \cite[Theorem 3.6]{KK}. Let $\K{R(\beta)_x\gmod}$ denote the Grothendieck group of the abelian category $R(\beta)_x\gmod$. Then, it has a structure of a $\Z[q,q^{-1}]$-module induced by the grade-shift functor on $R(\beta)_x\gmod$. Then the following theorem holds. \Th[\cite{KL09}] There exists a unique $\Z[q,q^{-1}]$-linear isomorphism \eq &&\soplus_{\beta\in\rootl^+}\K{R(\beta)_x\gmod}\isoto U^-_\A(\g)^* \label{corr:main} \eneq such that \bnum \item the induced actions $[E_i]$ and $[F_i]$ by $E_i$ and $F_i$ correspond to $e_i$ and $f'_i$, \item $\vac\in U^-_\A(\g)^*$ corresponds to the regular representation of $R(0)_x$. \ee \entheorem Let $\Dual\cl R(\beta)_x\gmod\to \bl R(\beta)_x\gmod\br^\op$ be the duality functor $M\mapsto M^*$ induced by the antiautomorphism $\psi$ of $R(\beta)_x$. We can easily see by the characterization \eqref{char:bar} of the bar involution that the induced endomorphism $[\Dual]$ of $\soplus\nolimits_{\beta\in\rootl^+}\K{R(\beta)_x\gmod}$ corresponds to the bar involution $-$ of $U^-_\A(\g)^*$. The Grothendieck group $\K{R(\beta)_x\gmod}$ is a free $\Z$-module with the basis consisting of $[S]$ where $S$ ranges over the set of isomorphism classes of simple graded $R(\beta)_x$-modules. Khovanov-Lauda (\cite{KL09}) proved that for any simple graded $R(\beta)_x$-module $S$, there exists $r\in\Z$ such that $\Dual(q^rS)\simeq q^rS$. Let $\Irr(R(\beta)_x)$ be the set of isomorphism classes of simple graded $R(\beta)_x$-modules $S$ such that $\Dual(S)\simeq S$. Then $\K{R(\beta)_x\gmod}$ is a free $\Z[q,q^{-1}]$-module with $\set{[S]}{S\in\Irr(R(\beta)_x)}$ as a basis. For a simple graded module $S$, let us denote by $\eps_i(S)$ the largest integer $k$ such that $E_i^kS\not=0$. Recall that $q$ denotes the shift-functor and $q_i=q^{(\al_i\vert\al_i)/2}$. \Prop[\cite{LV09, KL09}] Let $x\in \Par$, $\beta\in \rootl^+$ and $S$ a simple graded $R(\beta)_x$-module. \bnum \item The cosocle of $F_iS$ is a simple module. Its image under $q_i^{\eps_i(S)}$ is denoted by $\tF_iS$. \item If $\eps_i(S)>0$ then the socle of $E_iS$ is simple. Its image under $q_i^{1-\eps_i(S)}$ is denoted by $\tE_iS$. \item $\tF_i\tE_iS\simeq S$ if $\eps_i(S)>0$, and $\tE_i\tF_iS\simeq S$. \item If $S$ is invariant by the duality $\Dual$, then so are $\tF_iS$ and $\tE_iS$. \item The set $\bigsqcup_{\beta\in\rootl^+}\Irr(R(\beta)_x)$ is isomorphic to $B$, and $\tE_i$ and $\tF_i$ correspond to $\te_i$ and $\tf_i$ by this isomorphism. 
\ee \enprop Hence, the cosocle of $F_iS$ is isomorphic to $q_i^{-\eps_i(S)}\tF_iS$, the socle of $E_iS$ is isomorphic to $q_i^{\eps_i(S)-1}\tE_iS$ and the cosocle of $E_iS$ is isomorphic to $q_i^{-\eps_i(S)-1}\tE_iS$. For $b\in B_{-\beta}$, let us denote by $L_x(b)$ the corresponding simple graded $R(\beta)_x$-module in $\Irr(R(\beta)_x)$. Now assume that $A$ is symmetric and consider a $\cor$-valued point $c_0$ of $\Par$ given by \eq &&Q_{i,j}(u,v)=b_{i,j}(u-v)^{-a_{i,j}}\quad\text{for $i\not=j$} \label{eq:Q} \\ &&\hs{25ex} \text{where $\cor$ is a field of characteristic $0$ and $b_{i,j}\in\cor^\times$.}\nonumber \eneq Then the following theorem is proved by Varagnolo-Vasserot (\cite{VV09}) and Rouquier (\cite{R11}). \Th Assume that the generalized Cartan matrix $A$ is symmetric. Then the basis $\{[L_{c_0}(b)]\}_{b\in B}$ corresponds to the upper global basis $\{\G(b)\}_{b\in B}$ by the isomorphism $\soplus\nolimits_{\beta\in\rootl^+}\K{R(\beta)_{c_0}\gmod}\isoto U^-_\A(\g)^*$. \entheorem For $M\in R(\beta)_x\gmod$, let us define its character $\ch(M)$ by $$\ch(M)=\sum_{\nu\in I^\beta,\;k\in\Z}\dim\bl e(\nu)M\br_k q^k e(\nu) \in\soplus_{\nu\in I^\beta}\Z[q,q^{-1}]e(\nu).$$ Then we have \eq \ch\bl L_{c_0}(b)\br=\sum_{\nu\in I^\beta}\bl e_{\nu_1}\cdots e_{\nu_n}\G(b)\br e(\nu)\quad\text{for $b\in B_{-\beta}$.} \label{eq:character} \eneq \section{Main results} \subsection{} Let $\cg$ be the generic point of $\Par$. For $\beta\in \rootl^+$ and $b\in B_{-\beta}$, let us consider the simple graded $R(\beta)_{\cg}$-module $L_\cg(b)$. \Prop The set $U_b\seteq\set{x\in \Par}{\ch \bl L_x(b)\br=\ch \bl L_\cg(b)\br}$ is a Zariski open subset of $\Par$ and there exists a graded $\sho_{U_b}\otimes_{\cora}R(\beta)$-module $\L(b)$ defined on $U_b$ such that it is locally free as an $\sho_{U_b}$-module and the stalk of $\L(b)$ at any $x\in U_b$ is isomorphic to $L_x(b)$. \enprop \Proof We shall prove it by induction on $\haut(\beta)$. We may assume $\beta\not=0$. Take an $i\in I$ such that $\ell\seteq\eps_i(b)\not=0$. Set $\beta'=\beta-\ell\al_i$ and $b'=\te_i^\ell b$. For any $x\in \Par$, the graded $R(\beta)_x$-module $L_x(b)$ is the simple cosocle of $L_x(b')\circ \Ln(i^\ell)$. Moreover the kernel of $L_x(b')\circ \Ln(i^\ell)\epi L_x(b)$ is $\set{s\in L_x(b')\circ \Ln(i^\ell)}{e(\beta',\ell\al_i) R(\beta)s=0}$. By the induction hypothesis, there exists an $\sho_{U_{b'}}\otimes_{\cora}R(\beta')$-module $\L(b')$ as above. Set $\R=\sho_{U_{b'}}\otimes_{\cora}R(\beta)$, and denote by $\M$ the $\R$-module $\L(b')\circ \Ln(i^\ell)$. Let $f$ be the composition \eqn \M&\To& \hhom[{\sho_{\Par}\vert_{U_{b'}}}] (\R,\M)\\ &\To&\hhom[{\sho_{\Par}\vert_{U_{b'}}}] (\R,\M/(1-e(\beta',\ell\al_i))\M). \eneqn Then the kernel of $f$ coincides with the sheaf $$\set{u\in \M}{e(\beta',\ell\al_i)\R u=0}.$$ The homomorphism $f$ factors through \eqn \M&\To[\ol{f}] & \hhom[{\sho_{U_{b'}}}] (\R/\R_{\ge m},\M/(1-e(\beta',\ell\al_i))\M)\\ &\mono&\hhom[{\sho_{U_{b'}}}] (\R,\M/(1-e(\beta',\ell\al_i))\M) \eneqn for a sufficiently large integer $m$. Here $\R_{\ge m}=\soplus_{k\ge m}\R_k$. Therefore $\ol{f}$ is a morphism of vector bundles on $U_{b'}$. On the other hand, $U_b$ is the set of $x\in U_{b'}$ such that the rank of $\ol{f}$ at $x$ is equal to its rank at the generic point. Hence $U_b$ is an open subset of $\Par$ and the image of $\ol{f}\vert_{U_b}$ satisfies the condition for $\L(b)$. 
\QED \subsection{} For $x\in \Par$ and $b\in B$, let us consider the condition \eq &&\text{$L_{x}(b)$ corresponds to the upper global basis $\G(b)$ by the isomorphism \eqref{corr:main}.} \label{cond:glob} \eneq In this subsection, we shall prove the following theorem. \Th\label{th:main} Let $c_0$ be a point of $\Par$ satisfying \eqref{cond:glob} for any $b\in B$. Then $c_0$ belongs to $U_b$ for any $b\in B$. Hence \eqref{cond:glob} holds also for any $x\in U_b$. \entheorem \Proof It is enough to show that $\cg$ satisfies \eqref{cond:glob}. We shall take a triple $(K, \CO,\cor)$ such that $K=\cor(\cg)$, $\CO$ is a discrete valuation ring, $K$ coincides with the fraction field of $\CO$, $\cor$ is the residue field of $\CO$, $(\sho_\Par)_{c_0}\subset\CO$ and $(\sho_\Par)_{c_0}\subset\CO\to\cor$ factors through $\cor(c_0)$. Such a triple exists (see \cite[(7.1.7)]{EGA4}). We have the reduction map $$\Res_{K,\cor}\cl \K{R(\beta)_K}\To \K{R(\beta)_\cor}$$ by assigning $[K\otimes_\CO L] \in \K{R(\beta)_K}$ to $[\cor\otimes_\CO L]\in\K{R(\beta)_\cor}$ for a graded $R(\beta)_\CO$-module $L$ that is finitely generated and torsion-free as an $\CO$-module. The homomorphism $\Res_{K,\cor}$ commutes with the duality $\Dual$. Also it is compatible with the correspondence \eqref{corr:main}, namely we have a commutative diagram: $$\xymatrix@R=2ex@C=6ex{ \soplus_{\beta\in\rootl^+}\K{R(\beta)_K\gmod} \ar[rr]^{\Res_{K,\cor}}\ar[dr]^-\sim&& \soplus_{\beta\in\rootl^+}\K{R(\beta)_\cor\gmod}\ar[dl]^-\sim\\ &U^-_\A(\g)^*} $$ For $b\in B$, set $L(b)_K\seteq L_\cg(b)$ and $L(b)_\cor\seteq \cor\otimes_{\cor(c_0)}L_{c_0}(b)$. Take $b\in B_{-\beta}$, and let $L(b)_\CO$ be an $R(\beta)_\CO$-lattice of $L(b)_K$, i.e., a finitely generated graded $R(\beta)_\CO$-submodule $L(b)_\CO$ of $L(b)_K$ such that $K\otimes_\CO L(b)_\CO=L(b)_K$. In order to prove the theorem, it is enough to show that $\cor\otimes_\CO L(b)_\CO\simeq L(b)_\cor$. We shall prove it by induction on $\haut(\beta)$. Take an $i\in I$ such that $\eps_i(b)>0$ and set $b'=\te_i b$. Then $[L(b')_K]$ corresponds to $\G(b')$ by the induction hypothesis. We take an $R(\beta')_\CO$-lattice $L(b')_\CO$ of $L(b')_K$. Then by the induction hypothesis, we have $L(b')_\cor\simeq\cor\otimes_{\CO}L(b')_\CO$. The image of $q_i^{\eps_i(b')}F_iL(b')_\CO$ by $q_i^{\eps_i(b')}F_iL(b')_K\epi L(b)_K$ is an $R(\beta)_\CO$-lattice of $L(b)_K$, and we can take it as $L(b)_\CO$. Since $q_i^{\eps_i(b')}F_iL(b')_\cor\simeq q_i^{\eps_i(b')}\cor\otimes_\CO F_i L(b')_\CO\epi\cor\otimes_\CO L(b)_\CO$, the simple subquotients in a Jordan-H\"older series of $\cor\otimes_\CO L(b)_\CO$ appear in that of $q_i^{\eps_i(b')}F_iL(b')_\cor$. Now assume that $q^rL(b_1)_\cor$ appears in $\Res_{K,\cor}L(b)_K =[\cor\otimes_{\CO}L(b)_\CO]$ for $r\in \Z$ and $b_1\in B_{-\beta}$. Then $q^r\G(b_1)$ appears in $q_i^{\eps_i(b')}f_i'\G(b')$ by the assumption that $c_0$ satisfies \eqref{cond:glob}. In particular, $L(b)_\cor$ appears in $[\cor\otimes_{\CO}L(b)_\CO]$ exactly once by \eqref{prt1} in Proposition~\ref{Prop:glbal}. Now assume that $(r,b_1)\not=(0,b)$. Then \eqref{prt1} and \eqref{prt2} in Proposition~\ref{Prop:glbal} imply that $r>0$. Since $L(b)_K$ is stable by the duality functor $\Dual$, $q^{-r}L(b_1)_\cor\simeq\Dual \bl q^rL(b_1)_\cor\br$ also appears in $\Res_{K,\cor}L(b)_K$. Hence $-r>0$, a contradiction. This shows the desired result: $\cor\otimes_\CO L(b)_\CO\simeq L(b)_\cor$. This completes the proof of Theorem~\ref{th:main}. 
\QED \Ex Let us give an example of a simple $R(\beta)$-module which does not correspond to any element in the upper global basis. Let $\g=A^{(1)}_1$ with $I=\{0,1\}$, $(\al_0\vert\al_0)=(\al_1\vert\al_1)=-(\al_0\vert\al_1)=2$, and $Q_{0,1}(u,v)=u^2+auv+v^2$. Here $\cor$ is an arbitrary field and $a\in\cor$. Set $\delta=\al_0+\al_1$, $b'=\tf_1\tf_0\vac$ and $N=L(b')$. Then $N=\cor v$ with $x_1v=x_2v=\tau_1v=0$ and $v=e(01)v$. Set $M=N\circ N$, and $u=v\otimes v\in M$. Then $\ch(M)=2e(0101)+[2]^2e(0011)$. Here $e(0101)M=\cor u\oplus\cor w$ with $w\seteq\tau_2\tau_3\tau_1\tau_2u$. By the weight consideration, $\tau_ke(0101)M=0$ for $k=1,3$ and $x_ke(0101)M=0$ for $1\le k\le 4$. Easy calculations show that $\tau_2w=-a\tau_2u$. Hence $y\seteq w+au$ is annihilated by all $x_k$'s and $\tau_k$'s and $\cor y$ is an $R(2\delta)$-submodule of $M$. Set $M_0=M/\cor y$. Then $[M_0]$ corresponds to $\G(b)$ with $b\seteq\tf_1^2\tf_0^2\vac$. It is easy to see that $M_0$ is a simple $R(2\delta)$-module if $a\not=0$. When $a=0$, $e(0011)M_0$ is a simple $R(2\delta)$-submodule of $M_0$ and $L(b)=e(0011)M_0$. Note that the case \eqref{eq:Q} is when $a=\pm2$. \enEx \Ex Let us give another example of a simple $R(\beta)$-module which does not correspond to any element in the upper global basis. Let $\g=A^{(1)}_2$ with $I=\Z/3\Z=\{0,1,2\}$ with $(\al_i|\al_i)=2$ and $(\al_i|\al_j)=-1$ for $i\not=j$ and $Q_{i,i+1}(u,v)=a_iu+b_{i+1}v$ ($i\in I$) with $a_i,b_i\in\cor^\times$, where $\cor$ is an arbitrary field. Set $\delta=\al_0+\al_1+\al_2$, $b'=\tf_2\tf_1\tf_0\vac$ and $N=L(b')$. Then $N=\cor v$ with $x_kv=\tau_\ell v=0$ and $v=e(012)v$. Set $M=N\circ N$ and $u=v\otimes v\in M$. Then $\ch(M)=2e(012012)+[2]^3e(001122) +[2]^2e(001212)+[2]^2e(010122)+[2]e(010212)$. Here $e(012012)M=\cor u\oplus\cor w$ with $w\seteq\tau_3\tau_4\tau_5\tau_2\tau_3\tau_4\tau_1\tau_2\tau_3u$. By the weight consideration $\tau_ke(012012)M=0$ for $k\not=3$ and $x_ke(012012)M=0$ for $1\le k\le 6$. By calculations, we have $\tau_3w=-\gamma\tau_3u$ where $\gamma=a_0a_1a_2-b_0b_1b_2$. Hence $y\seteq w+\gamma u$ is annihilated by all $x_k$'s and $\tau_k$'s and $\cor y$ is an $R(2\delta)$-submodule of $M$. Set $M_0=M/\cor y$. Then $[M_0]$ corresponds to $\G(b)$ with $b\seteq\tf_2^2\tf_1^2\tf_0^2\vac$. It is easy to see that $M_0$ is a simple $R(2\delta)$-module if $\gamma\not=0$. When $\gamma=0$, $S\seteq \bl 1-e(012012)\br M_0=R(2\delta)\tau_3u$ is a simple $R(2\delta)$-module and $L(b)=S$ and $\ch(M_0/S)=e(012012)$. Note that the case \eqref{eq:Q} corresponds to $a_0a_1a_2+b_0b_1b_2=0$. \enEx \Rem If we assume \eq \text{the simple modules of $R(\beta)_\cg$ correspond to the upper global basis,}\label{cond:global} \eneq then $\G(b)\in \sum_{S\in \Irr(R(\beta)_x)}\Z_{\ge0}[q,q^{-1}][S]$ for any $x\in \Par$ and $b\in B$. We can ask if this positivity assertion still holds without the assumption \eqref{cond:global}. \enRem \bibliographystyle{amsplain}
\section{Introduction} The potential of the double-image lens B0218+357 for determining the Hubble constant, $H_{0}$, was recognized to be very high shortly after its discovery (Patnaik et al. 1993). This is because of the accurately measured value for the time delay between the images, \mbox{(10.5 $\pm$ 0.4) d} (Biggs et al. 1999), and the wealth of data coming from numerous radio and optical observations of this source at various frequencies and epochs that provide constraints for the lens model. Rightly so, at times it is described as the \textit{`Golden Lens'}. Yet this system presents a few `glitches'. One of them is the steady and systematic decline in the radio image flux density ratio with decreasing frequency. One of the possible explanations is a frequency-dependent source structure (the background source is conjectured to be a blazar), combined with a magnification ratio that changes significantly over the extent of the structure. Such changing magnification is perhaps likely, given that the system has the smallest image separation, $\sim$ 330 mas, amongst the known galactic lenses. In the model derived by Wucknitz (2002) using LENSCLEAN, a shift of $\sim$ 15 mas in the position of a point-source image can produce a change in relative magnification from 4 to 2.5 (Fig. 1). Furthermore, it is indeed common for the radio spectra of AGN jets to steepen with distance from the nucleus, and for the position of the radio peak at the jet base to change with frequency -- the ``core shift''. Although such a core shift should, in general, show up as a change with frequency of the separation between the two different core images, this effect is insensitive to core shifts in some directions. An unambiguous registration of the VLBI structures of the radio images at different frequencies can show whether this effect is present, and can only be made using the technique of phase referencing. \begin{figure}[h] \resizebox{\hsize}{!}{\includegraphics{rupal1.eps}} \caption{\small The curves indicate constant relative magnifications (A/B) for the best-fitting lens model with the lens position at \mbox{$x_o = 260$ mas} and $y_o = 117.5$ mas (Wucknitz, 2002).} \end{figure} \section{Observations} The \textit{phase-referencing} technique is used to correct interferometer phase errors (geometric, atmospheric/ionospheric, and instrumental). These errors are first determined by observation of a strong, nearby \textit{phase (position) reference} calibrator source, which is preferably point-like with a frequency-independent position, and are then interpolated to the times at which the (usually weaker) \textit{target source} is observed, in order to determine its structure and position relative to the reference. In this way the derived relative geometrical offset between the lens and the position reference remains constant only if the respective brightest points maintain their positions as the frequency varies. In the case of B0218+357, the target source (the lens) is sufficiently strong ($\sim$ 1 Jy) that the roles can be inverted and the lens itself used as the phase reference, thereby leading to ``Inverse Phase Referencing''. The observations were taken on the 13th and 14th of Jan. 2002 using the VLBA (Very Long Baseline Array) and Effelsberg (Eb) at five frequencies, namely \mbox{15.35 GHz}, \mbox{8.40 GHz}, 4.96 GHz, 2.25 GHz and 1.65 GHz. In addition to the lens, three position-reference sources and a fringe finder were observed. The data were correlated at the VLBA correlator and further processed in AIPS. 
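To illustrate the interpolation step of the phase-referencing technique described above, the following toy script (with an entirely invented scan schedule and error model, not the actual observing setup or AIPS processing) interpolates calibrator phase solutions to the target scan times and subtracts them.
\begin{verbatim}
# Toy sketch of phase referencing: calibrator phases interpolated to target times.
import numpy as np

rng = np.random.default_rng(1)

def atmos_phase(t_min):
    """Slowly varying atmospheric/instrumental phase error (rad), toy model."""
    return 1.5 * np.sin(2 * np.pi * t_min / 40.0) \
         + 0.4 * np.sin(2 * np.pi * t_min / 7.0)

t_cal = np.arange(0.0, 60.0, 3.0)     # calibrator scans every 3 minutes (invented)
t_tgt = t_cal + 1.5                   # target scans interleaved between them

phi_cal = atmos_phase(t_cal) + 0.02 * rng.standard_normal(t_cal.size)
phi_tgt_err = atmos_phase(t_tgt)      # true phase error at the target times

phi_corr = np.interp(t_tgt, t_cal, phi_cal)   # interpolate calibrator solution
residual = phi_tgt_err - phi_corr             # corrected (referenced) phases

print("rms phase error before referencing: %.2f rad" % np.std(phi_tgt_err))
print("rms residual after referencing:     %.2f rad" % np.std(residual))
\end{verbatim}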
\begin{figure*}[t] \centering \includegraphics[width=17cm,height=4.8cm]{rupal2.eps} \hfill \caption{\small{ Image A (top) and image B (bottom) at 1.65 GHz, 2.25 GHz, 4.96 GHz, 8.40 GHz, 15.35 GHz (from left to right) plotted with the same beam size, with each side of the square measuring 80 mas. } } \end{figure*} \section{Maps of B0218+357} \textit{Hybrid maps} (using phase self-calibration techniques) of the lens were made by cleaning the two sub-fields containing the two images A and B (separated by $\sim$ 334 mas) simultaneously. The images clearly manifest all the previously observed lensing characteristics, such as image A being tangentially stretched at a PA $\sim$ $-40\,^{\circ}$ (Fig. 2). At 8.4 GHz and higher frequencies, the images are resolved further into two sub-components, separated by about 1.4 mas, representing the core-jet morphology of the background source. \section{Phase Referencing} For the phase-reference analysis only one source, 0215+364, was chosen as the most appropriate \textit{position reference}, based on its flat spectrum compared with the others and on its brightness being suitable for an unambiguous determination of the brightest component. The hybrid maps of the images A and B were used to investigate the change in their positions with respect to 0215+364 as a function of frequency. Fig. 3 indicates a shift of only $\le 2$ mas in the peaks of the image radio emission between 15.35 GHz and \mbox{1.65 GHz}, comparable to the separation between the two sub-components seen in both images at 15.35 GHz. Over this distance the change in relative magnification is expected to be small. \begin{figure} \centering \includegraphics[width=6.5cm,height=5.8cm]{rupal3.eps} \includegraphics[width=6.5cm,height=5.8cm]{rupal4.eps} \caption{\small The top panel shows the change in position of the peak in image A, relative to the position-reference 0215+364, with frequency. The bottom panel shows the same for image B. } \end{figure} \section{Discussion} Since there is no measured shift with frequency of either image peak position large enough to account for the anomalous flux density ratios, this effect may be due to the frequency-dependent source size as seen in the two images. The different image sizes at varying frequencies could also result from scattering in the lens galaxy (see Biggs et al. 2002 for further discussion). At 1.65 GHz there is a relatively large amount of low-brightness emission that extends out to $\sim$ 30 mas, in comparison to \mbox{15.35 GHz} where the emission is dominated by the compact sub-components with a separation of $\sim$ 1.4 mas. Since at lower frequencies the (larger) images extend over regions where lens models predict significant changes in the relative magnification, the image flux densities (and their ratio) do not result from the integral of their radio brightness over the entire structure with a constant magnification. We are therefore attempting to use the optimal lens mass model derived from LENSCLEAN (Wucknitz 2002) to calculate the magnifications for discrete regions in the image plane. We can then compare the ratio of the predicted magnification averaged over image A to that over image B, for each of the frequencies, with the observed ratio of the image flux densities. Alternatively, sub-structure in the lens galaxy might cause larger gradients in the image magnifications than are found for a smooth lens model (Fig. 1). \bibliographystyle{aa}
\section{Introduction.} {\bf \emph{Introduction.}} The discovery of a doubly charm meson~\cite{LHCb:2021vvq,LHCb:2021auc}, as well as the theoretical consensus on the existence of a doubly bottom counterpart~\cite{Bicudo:2012qt,Karliner:2013dqa,Francis:2016hui,Eichten:2017ffp,Leskovec:2019ioa}, is moving the spotlight on heavy-light $QQ\bar q \bar q$ tetraquarks. Since they cannot mix with ordinary charmonia, they turn out to be the simplest exotic system to study, see~\cite{Esposito:2013fma}. Given the separation of masses $M_Q\gg m_q$, one finds a situation similar to that encountered in the hydrogen molecule. The fast motion of the light quarks in the field of the heavy color sources generates an effective potential, dependent on the relative distance $R$ separating the $QQ$ pair. The potential, in turn, regulates the slower motion of the heavy quarks. Such an effective potential, known as the Born-Oppenheimer potential (BO), is obtained by solving the eigenvalue equation for the light particles at fixed values of the coordinates of the heavy particles~(see e.g. \cite{Braaten:2014qka,Brambilla:2017uyf,Bicudo:2017szl,Giron:2019bcs,Prelovsek:2019ywc}). The energy ${\cal E}$ will be a function of the relative distance $R$ between heavy particles and corresponds to the core of the full BO potential, which includes the direct interaction between the sources. When solving the Schr\"odinger equation of the heavy particles, one neglects the momentum of the heavy particles computed as the gradient of the eigenfunction related to ${\cal E}$. This is the content of the {\it Born-Oppenheimer approximation}, illustrated in detail for QED in~\cite{weinbergQM,pauling}. Recently, we have applied the Born-Oppenheimer approximation to calculate the mass of the lowest lying doubly heavy tetraquarks, ${\cal T}_{cc}$ and ${\cal T}_{bb}$~\cite{Maiani:2019lpu}. In synthesis, the calculation gave a mass of ${\cal T}_{cc}$ close to the $D D$ threshold and a mass for ${\cal T}_{bb}$ considerably below the $ \bar B\bar B$ threshold, deep in the stability region against weak and electromagnetic decays. Previous calculations based on constituent quark model~\cite{Karliner:2017qjm,Eichten:2017ffp,Luo:2017eub} had rather indicated a ${\cal T}_{cc}$ mass close to the $D^* D$ threshold and, for ${\cal T}_{bb}$, a $Q$-value well inside the stability region. The observation of ${\cal T}_{cc}(3875)^+$ at the $D^* D$ threshold calls for a closer examination of our calculation~\cite{Maiani:2019lpu}. We find room for improvement with respect to the use of the hyperfine $\kappa[(u d)_{\bf {\bar 3}}]$ coupling taken from baryon spectrum, the coupling which regulates the mass splitting of $\Sigma_Q$--$\Lambda_Q$ baryons. As already demonstrated in previous cases,\footnote{See, Ref.~\cite{Maiani:2014aja} for the suppression in $Z_c(3900)$,~$Z_c'(4020)$ mass spectrum of $\kappa[(u \bar u)_{\bf{ 1}}]$ hyperfine coupling, dominant in meson spectra.} the extension to tetraquarks of hyperfine couplings taken from meson and baryon spectra is, in fact, a weak assumption. Hyperfine couplings depend crucially from the overlap probability of the quark pair involved, which, in tetraquarks cannot be {\it a priori} assumed to be equal to the overlap probabilities of the same pair in mesons and baryons. 
Within the Born-Oppenheimer scheme, we can improve our calculation in two ways: \begin{itemize} \item {\bf{Method 1: scaling baryon and mesons hyperfine couplings with the dimensions of the BO bound state.} }We use the spin-independent BO formalism to evaluate the average separations of light quarks and of heavy quarks to obtain realistic estimates of the corresponding hyperfine couplings by scaling with respect to the separations in baryons (for $\bar q\bar q^\prime$) and in charmonium/bottomonium (for $QQ$). \item {{\bf Method 2: QCD approach.}} We start from the hyperfine quark-quark QCD interaction~\cite{DeRujula:1975qlm,Godfrey:1985xj,Capstick:1986ter}. Its first order effect on the energy of the light quark system depends on the separation of the heavy sources, $R$, and it adds a contribution to the Born-Oppenheimer potential, which depends on the light quark spin $S_{\bar q\bar q}$ and on the total angular momentum $J$ of the tetraquark, taking fully into account the effect of light-to-light and light-to-heavy quarks hyperfine interactions\footnote{This method is followed in lattice calculations, where the computed Born-Oppenheimer potential takes full account of flavor and spin properties of the light quarks, see e.g.~\cite{Bicudo:2021qxj}.}. The effect of the remaining heavy-to-heavy hyperfine interaction can be evaluated from the same formula applied to the final wave function of the heavy quarks. \end{itemize} This calculation leads to three new results. \begin{enumerate} \item For the $I=0,~J^P=1^+$ state, the two methods give remarkably similar values, close to the observed mass of ${\cal T}_{cc}(3875)^+$. \item With Method~2, we compute the masses of the remaining, double charm states with $I=S_{\bar q\bar q}=1$ and $J^P=0^+,~1^+,~2^+$. Unlike the familiar $ \Lambda_Q,~ \Sigma_Q$ cases, the doubly heavy, $I=1,~J^P=1^+$ tetraquark is predicted {\it {to be lighter than the $ I=0$ tetraquark}} by $15$--$20\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace$, which may be compatible with it not having been seen by LHCb yet. \item Concerning the $[bb\bar q\bar q],~I=0$ tetraquark, the new evaluation gives a mass below the $ \bar B \bar B$ threshold but rather close to it, not allowing a definite decision about the issue of stability against strong decays. \end{enumerate} {\bf \emph{Color couplings.}} In pursuing the analogy with the treatment of the hydrogen molecule, the coulombic potential terms are rescaled by the appropriate color factors. Quarks are treated as non-relativistic and weakly interacting so that the determination of color factors is done in the one-gluon-exchange approximation. In~\cite{Maiani:2019lpu} we have considered doubly flavored $bb$~and~$cc$ tetraquarks, assuming the doubly heavy pair in color $ \bm{\bar{3}}$. The lowest energy state corresponds to $QQ$ in spin one and light antiquarks in spin and isospin zero. The tetraquark state is $|T\rangle=\left|(QQ)_{\bar {\bm 3}}, (\bar q\bar q)_{ {\bm 3}} \right\rangle_{\bm 1}$. 
From the Fierz identity \begin{equation} |T\rangle=\sqrt{\frac{1}{3}}\left|(\bar q Q)_{\bm 1},(\bar q Q)_{\bm 1}\right\rangle _{\bm 1}-\sqrt{\frac{2}{3}}\left|(\bar q Q)_{\bm 8},(\bar q Q)_{\bm 8}\right\rangle _{\bm 1}\label{tetra3} \end{equation} weighting with the squared amplitudes in~\eqref{tetra3}, one derives the attractive color factors \footnote{We use the rule based on quadratic Casimir coefficients $\lambda_{12}=1/2(C(\bm S)-C(\bm R_1)-C(\bm R_2))$ where $\bm S$ is one of the representations contained in the Kronecker product $\bm R_1\otimes \bm R_2$. $C(\bm 3)=C(\bar{\bm 3})=4/3$, $C(\bm 6)=10/3$ and $C(\bm 8)=3$. } \begin{align} \lambda_{QQ}&=\lambda_{\bar q \bar q}=-\frac{2}{3}\alpha_s\notag\\ \lambda_{Q\bar q}&=\left[\frac{1}{3}\times\frac{1}{2}\left(-\frac{8}{3}\right)+\frac{2}{3}\times\frac{1}{2}\left(3-\frac{8}{3}\right)\right]\alpha_s=-\frac{1}{3}\alpha_s \label{bqbar3} \end{align} We shall add to the Coulombic, QCD, potential a linearly rising, confining, potential, $V=k_{Q\bar q}\, r$. The string tension $k$, in the $Q\bar q$ orbital, can be taken as \begin{equation} k_{Q\bar q}=\frac{3}{4\alpha_s}|\lambda_{Q\bar q}|\, k =\frac{1}{4} k \label{kbqbar3} \end{equation} where $k=0.15\ensuremath{{\mathrm{\,Ge\kern -0.1em V}^2}}\xspace$ is the string tension derived from quarkonium spectrum (in color singlet $|\lambda_{Q\bar Q}|=4/3$ so that $k_{Q\bar Q}=k$), according to the so-called `Casimir scaling'~\cite{Bali:2000gf}. However, as shown in~\eqref{tetra3}, $Q\bar q$ is in a superposition of color singlet and color octet. The charge of $(\bar q Q)_{\bm 8}$ is represented by an $SU(3)$ tensor $v^i_j$, traceless. In the QCD vacuum this charge might be neutralized by soft gluons, as in $A^j_iv^i_j$: therefore only the singlet component matters, and $k_{Q\bar q}=k$. We call this possibility `triality scaling'.\footnote{Consider a generic color charge described by a SU(3) tensor $v^{i_i\cdots i_n}_{j_1\cdots j_m}$, having triality ${\cal T}=n-m-3\lfloor (n-m)/3\rfloor$. It can be lowered to $v^{i_1\cdots i_{n-m}}$ by repeated contraction with soft gluons $A^{j_m}_{i_n}$. If $n-m=1$ we get a $\bm 3$ tensor. If $n-m=2$ we get a $\bm 6$. If $n-m\geq 3$, $v^{i_1\cdots i_{n-m}}$ can be further reduced by contraction with the $\overline{\bm{10}}$ tensors $A^r_{i_1}A^s_{i_2}\epsilon_{i_3 r s}$ ($i_1, i_2, i_3$ symmetrized) to finally get either one of $\bm 1,\bm 3,\bm 6$. Therefore the product of a charge $v^{i_i\cdots i_n}_{j_1\cdots j_m}$ and its conjugate can be reduced to the non-trivial cases $\bm 3\otimes \bar {\bm 3}$ as in~\eqref{tetra3}, or $\bm 6\otimes \bar{ \bm 6}$. The Kronecker decomposition of $\bm 6\otimes \bm 8$ contains the $\bar{\bm 3}$ representation as well as $\bar{\bm 6}\otimes \bm 8$ contains the $\bm 3$. Therefore, by the effect of the contraction with gluons, also $\bm 6\otimes \bar{\bm 6}$ behaves like $\bm 3\otimes \bar {\bm 3}$ and we still might use $k$ rather than the Casimir scaled value.} We will show the results of both hypotheses for the string tension. \vspace{.5cm} {\bf \emph{Orbitals.}} We consider at first the heavy quarks as fixed color sources at a distance $R$. Light antiquarks are bound each to a heavy quark in orbitals with wave functions $\psi(\bm \xi)$ and $\phi(\bm \eta)$ and the ground state of the $\bar q \bar q$ system is assumed to be symmetric under the exchange of light quarks coordinates (the notation is defined in Fig.~\ref{fig1}). \begin{figure}[ht!] 
\centering \includegraphics[width=7truecm]{figure} \caption{The heavy quarks are separated by the vector $\bm R$. The vectors $\bm \xi$ and $\bm \eta$ have their application points at the two heavy quarks. \label{fig1}} \end{figure} \begin{equation} \Psi=\frac{\psi(\bm \xi)\phi(\bm \eta)+\psi(\bm \eta)\phi(\bm \xi )}{\sqrt{2\left[1+S^2(R)\right]}}\label{ground} \end{equation} Normalization, $( \Psi, \Psi)=1$, is obtained with the overlap function given by\footnote{Considering ground states only, we restrict $\psi$ and $\phi$ to be real functions.} \begin{equation} S(R)= \int_{\bm \xi} \psi(\bm \xi)\phi(\bm \xi) \end{equation} The wave function $\psi(\bm \xi)$ gives the amplitude of $\bar q$ at a distance $\bm \xi$ from $Q$, as represented in Fig.~\ref{fig1}. The wave function $\phi(\bm \eta)$ is the amplitude of the other light quark $\bar q$ at a distance $\bm \eta$ from the second heavy quark (which is at distance $\bm R$ from the former). The vectors $\bm \xi, \bm \eta$ have their application points at the positions of the two heavy quarks, respectively. The $\psi$ and $\phi$ wave functions are written in terms of the radial function ${\mathcal R}=R_{00}/\sqrt{4\pi}$ in the following way \begin{align} \psi(\bm \xi)&= {\cal R}(|\bm \xi|) & \psi(\bm \eta) &= {\cal R}(|\bm R+\bm \eta| )\notag \\ \phi(\bm \eta)&={\cal R}(|\bm \eta|) & \phi(\bm \xi)&={\cal R}(|\bm \xi -\bm R|) \label{cinque} \end{align} ${\cal R}(r)$ is the radial wave function obtained by solving variationally the Schr\"odinger equation of the heavy quark--light antiquark system with the potential, \begin{align} V(r)&= \frac{\lambda_{Q\bar q}}{r} + k_{Q\bar q} \,r + V_0 = -\frac{1}{3}\frac{\alpha_s}{r} +\frac{1}{4}k \, r +V_0 \label{potorb}\\ {\cal R}(r)&=\frac{A^{3/2}}{\sqrt{\pi}}e^{-Ar} \label{sei} \end{align} We have included a constant $V_0$, to be discussed below, that defines the offset of the energy for confined systems. The determination of $A$ comes from the minimization of $({\cal R},H\,{\cal R})=\langle H\rangle$: the value of $A$ used in computations corresponds to $\langle H\rangle_{\rm min}$. The light-quark energy, to zeroth order, i.e.\ restricting to the interactions that define the orbitals, is \begin{equation} {\cal E}_0=2(\langle H\rangle_{\rm min} +V_0 ) \label{eps0} \end{equation} where $\langle H\rangle_{\rm min}$ is the orbital energy eigenvalue (and the minimum of the Schr\"odinger functional). In Ref.~\cite{Maiani:2019lpu} and in the following, we use the numerical values: \begin{align} \alpha_s(2M_c)&=0.30 & \alpha_s(2M_b)&=0.21 & k&=0.15\ensuremath{{\mathrm{\,Ge\kern -0.1em V}^2}}\xspace \label{bb&bc} \end{align} {\bf \emph{Determination of the BO potential.}} We include in a perturbation Hamiltonian the interactions left out from the construction of the orbitals, namely the interaction of each light quark with the other heavy quark and the interaction between the light quarks. Following Fig.~\ref{fig1} \begin{equation} \delta H=\lambda_{Q\bar q} \left(\frac{1}{|\bm \xi- \bm R|}+\frac{1}{|\bm \eta +\bm R|}\right) +\frac{\lambda_{\bar q\bar q}}{|\bm \xi- \bm R-\bm \eta |} \label{nove} \end{equation} with color factors taken from~\eqref{bqbar3}. 
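For orientation, the variational step that fixes $A$ can be sketched explicitly: with the exponential trial function of Eq.~(\ref{sei}) one has $\langle 1/r\rangle=A$, $\langle r\rangle=3/(2A)$ and a kinetic term $A^2/(2m_q)$ (units $\hbar=1$), treating the heavy quark as a static source. The snippet below minimizes $\langle H\rangle(A)$ with the charm-case inputs of Eq.~(\ref{bb&bc}) and Tab.~\ref{mas1}; it is an illustrative reconstruction and may differ in detail from the computation of Ref.~\cite{Maiani:2019lpu} (the constant $V_0$ is omitted, since it does not affect $A$).
\begin{verbatim}
# Sketch of the variational determination of A for the Q-qbar orbital.
from scipy.optimize import minimize_scalar

alpha_s, k, m_q = 0.30, 0.15, 0.308     # GeV units, charm-case couplings

def H_expect(A):
    kinetic = A**2 / (2.0 * m_q)             # <p^2>/(2 m_q) for exp(-A r)
    coulomb = -(alpha_s / 3.0) * A           # lambda_{Q qbar} <1/r>
    confine = (k / 4.0) * 3.0 / (2.0 * A)    # k_{Q qbar} <r>, Casimir-scaled string
    return kinetic + coulomb + confine

res = minimize_scalar(H_expect, bounds=(0.05, 2.0), method='bounded')
print("variational A ~ %.3f GeV (orbital size 1/A ~ %.2f GeV^-1)"
      % (res.x, 1.0 / res.x))
print("<H>_min ~ %.3f GeV" % res.fun)
\end{verbatim}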
We compute the total energy of the light system in the presence of fixed sources, ${\cal E}(R)$, to first order in $\delta H$ \begin{align} {\cal E}(R)&={\cal E}_0 + \Delta E(R)\notag \\ \Delta E(R)&=(\Psi, \delta H \Psi)=\frac{1}{1+S^2(R)}\left[ -\frac{1}{3}\alpha_s ( 2I_1(R)+2S(R) I_2(R))-\frac{2}{3}\alpha_s(I_4(R) + I_6(R))\right] \label{dieci} \end{align} The $I_i(R)$ are integrals over the orbital wave functions, defined and computed in~\cite{Maiani:2019lpu},\footnote{When computing e.g. $I_1$, the angle between $\bm \xi$ and $\bm R$ corresponds to the polar angle $\theta$ in the $\bm \xi$ integration. The distance between the light quarks $|\bm \xi-\bm R-\bm \eta|=d_{\bar q\bar q}$, occurring in $I_{4,6}$, can be computed by shifting along $x$ or $y$ as in $$d_{\bar q \bar q}= \sqrt{(\xi \sin (\theta_\xi) \cos (\phi_\xi)-\eta \sin (\theta_\eta ) \cos (\phi_\eta ))^2+(\xi \cos (\theta_\xi)-\eta \cos (\theta_\eta ))^2+(-\eta \sin (\theta_\eta ) \sin (\phi_\eta )+\xi \sin (\theta_\xi) \sin (\phi_\xi)-R)^2}$$ where the polar and azimuthal angles are related to $\bm \xi$ and $\bm \eta$. } \begin{align} I_1(R)&\equiv\int_{\bm \xi }\psi(\bm \xi)^2\frac{1}{|\bm \xi-\bm R|}=\int_{\bm \eta}\phi(\bm \eta)^2\frac{1}{|\bm \eta+\bm R|}\notag \\ I_2(R)&\equiv\int_{\bm \xi }\psi(\bm \xi) \phi(\bm \xi) \frac{1}{|\bm \xi-\bm R|}=\int_{\bm \eta }\psi(\bm \eta) \phi(\bm \eta) \frac{1}{|\bm \eta+\bm R|}\notag \\ I_4(R)&\equiv\int_{\bm \xi,\bm \eta }\psi(\bm \xi)^2 \phi(\bm \eta)^2 \frac{1}{|\bm \xi-\bm R-\bm \eta|} = \int_{\bm \xi,\bm \eta }\psi(\bm \eta)^2 \phi(\bm \xi)^2 \frac{1}{|\bm \xi-\bm R-\bm \eta|} \notag \\ I_6(R)&\equiv\int_{\bm \xi,\bm \eta }\psi(\bm \xi)\phi(\bm \xi) \psi(\bm \eta) \phi(\bm \eta) \frac{1}{|\bm \xi-\bm R-\bm \eta|} \end{align} Results in the first three lines are derived from the symmetry transformation $\bm\xi\to \bm\eta$, $\bm R\to -\bm R$, $\psi\to\phi$. With these definitions at hand the result~\eqref{dieci} for $\Delta E(R)$ is readily derived from the definition~\eqref{nove} of $\delta H$. The Born-Oppenheimer potential, to be used in the Schr\"odinger equation of the heavy quarks, is then \begin{equation} V_{\rm BO}(R)=-\frac{2}{3}\alpha_s\frac{1}{R}+{\cal E}(R)\label{bopot} \end{equation} At large separations $V_{\rm BO}(R)$ tends to the constant value \begin{equation} V_{\rm BO}(R)\to {\cal E}_0=2\left( \langle H\rangle_{\rm min} +V_0\right)\qquad \text{for } R\to \infty. \end{equation} As noted in~\cite{Maiani:2019lpu}, at infinity the two orbitals tend to a superposition of color ${\bf 8}$--${\bf 8}$ and color ${\bf 1}$--${\bf 1}$. The color of a triality-zero pair can be screened by soft gluons from the vacuum, as first noticed in~\cite{Bali:2000gf} and supported by lattice QCD calculations (see~\cite{Bicudo:2021qxj} for recent results). The upshot is that, including the constituent quark rest masses taken from the meson spectrum, Tab.~\ref{mas1}, the limit $V_{\rm BO}(\infty)+2(M_Q + M_q)$ must coincide with the mass of a pair of non-interacting beauty (charmed) mesons with spin-spin interaction subtracted, which is just $2(M_Q + M_q)$. Thus, we derive the boundary condition \begin{equation} \langle H\rangle_{\rm min} +V_0=0 \label{noconf} \end{equation} which fixes $V_0$. \bigskip {\bf \emph{Tetraquark spectrum and $Q$-values.}} The negative eigenvalue $E$ of the Schr\"odinger equation with $V_{\rm BO}(R)$ (including the condition on $V_0$ just found) is the binding energy associated with the BO potential. 
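As a consistency check of the ingredients entering $\Delta E(R)$, the overlap $S(R)$ of the two exponential orbitals has the standard two-center closed form $S=e^{-\rho}(1+\rho+\rho^2/3)$ with $\rho=AR$; the sketch below (not taken from Ref.~\cite{Maiani:2019lpu}) verifies it against a direct numerical quadrature for a representative value of $A$.
\begin{verbatim}
# Numerical check of the overlap S(R) of two exponential (1s-type) orbitals.
import numpy as np
from scipy.integrate import dblquad

def overlap_numeric(A, R):
    # S(R) = int d^3xi psi(xi) phi(xi), psi = sqrt(A^3/pi) exp(-A |xi|),
    # phi centered a distance R away; azimuthal integral already carried out.
    f = lambda theta, xi: (2.0 * A**3 * xi**2 * np.sin(theta)
                           * np.exp(-A * xi)
                           * np.exp(-A * np.sqrt(xi**2 + R**2
                                                 - 2.0 * xi * R * np.cos(theta))))
    val, _ = dblquad(f, 0.0, 40.0 / A, lambda _: 0.0, lambda _: np.pi)
    return val

def overlap_closed(A, R):
    rho = A * R
    return np.exp(-rho) * (1.0 + rho + rho**2 / 3.0)

A = 0.27                        # GeV, representative orbital parameter
for R in (1.0, 3.0, 6.0):       # GeV^-1
    print(R, overlap_numeric(A, R), overlap_closed(A, R))
\end{verbatim}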
With the values in \eqref{bb&bc} and in Table~\ref{mas1} we obtained~\cite{Maiani:2019lpu}: \begin{align} E&=-70~(-87)\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace\qquad\text{for}~cc, \notag \\ E&=-67~(-85)\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace\qquad\text{for}~bb.\label{eigenvalues} \end{align} Where the first result assumes the Casimir scaling for the string tension ($k_{Q \bar q}=k/4$), while the result in parenthesis assumes the triality scaling ($k_{Q\bar q}=k$). The masses of the lowest tetraquark with $[(QQ)_{S=1}(\bar q\bar q)_{S=0}]$ and of the pseudoscalar mesons $P=Q\bar q$ are \begin{align} M(T)&=2(M_Q + M_q) + E+\frac{1}{2}\kappa_{QQ}-\frac{3}{2}\kappa_{\bar q \bar q}\\ M(P)&=M_Q + M_q -\frac{3}{2}\kappa_{Q\bar q} \end{align} The resulting $Q$-values with respect to the $PP$ thresholds are \begin{equation} Q_{QQ}=M(T)-2M(P)=E+\frac{1}{2}\kappa_{QQ}-\frac{3}{2}\kappa_{\bar q \bar q}+3~\kappa_{Q\bar q}\label{eqQkarl} \end{equation} and numerically, for the $[(QQ)_{S=1}(\bar q\bar q)_{S=0}]$ state~\cite{Maiani:2019lpu}, \begin{align} Q_{cc}&= +7 \,(-10)\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace \label{Qcc} \\ Q_{bb}&= -138 \,(-156)\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace \label{Qbb} \end{align} Eq.~\eqref{Qcc} is the result mentioned in the Introduction, which needs a closer examination. \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|} \hline Flavors& $q$ & $s$ & $c$ & $b$ \\ \hline $M$(\ensuremath{{\mathrm{Me\kern -0.1em V}}}\xspace) & $308$ & $484$ & $1667$ & $5005$ \\ \hline \end{tabular} \caption{\footnotesize {Constituent quark masses from $S$-wave mesons~\cite{Ali:2019roi}, with $q=u,d$.}} \label{mas1} \end{table} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Mesons & $(q\bar q)_1$&$(q\bar s)_1$& $(q\bar c)_1$& $(s\bar c)_1$ & $(q \bar b)_1$& $(c\bar c)_1$ & $(b\bar b)_1$ \\ \hline $\kappa$ (\ensuremath{{\mathrm{Me\kern -0.1em V}}}\xspace) & $318$ & $200$ & $70$ & $72$ & $23$ & 56 & 30\\ \hline\hline Baryons & $(qq)_{\bar 3}$ & $(q s)_{\bar 3}$ & $(q c)_{\bar 3}$ & $(s c)_{\bar 3}$& $(q b)_{\bar 3}$& $(c c)_3$ & $(b b)_3$ \\ \hline $\kappa$ (\ensuremath{{\mathrm{Me\kern -0.1em V}}}\xspace) & $98$ &$59$ & $15$ & $50$ & $2.5$ & 28& 15 \\ \hline \hline Ratio $\frac{\kappa_{MES}}{\kappa_{BAR}}$ & 3.2& 3.4 & 4.7 &1.6 & 9.2 & -- &-- \\ \hline \end{tabular} \caption{\footnotesize {$S$-wave Mesons and Baryons: spin-spin interactions of the lightest quarks with the heavier flavours~\cite{Ali:2019roi}. Values for $\kappa[(Q\bar Q)_1]$ are taken from the mass differences of ortho- and para-quarkonia. Following the one-gluon exchange prescription one then takes $\kappa[(QQ)_3]=1/2\kappa[(QQ)_1]$. }} \label{spin} \end{table} To obtain \eqref{Qcc} and \eqref{Qbb} one has used values of quark masses and hyperfine couplings obtained from meson and baryon spectra and reported in Tabs.~\ref{mas1} and~\ref{spin}~\cite{Ali:2019roi}. However, as mentioned in the Introduction, hyperfine couplings depend crucially from the overlap probability of the quark pair involved, which, in tetraquarks cannot be {\it a priori} assumed to be equal to the overlap probabilities of the same pair in mesons and baryons. Within the Born-Oppenheimer scheme we can improve the calculation following the two lines described in the Introduction. 
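The $Q$-values quoted in Eqs.~(\ref{Qcc}) and (\ref{Qbb}) follow from Eq.~(\ref{eqQkarl}) by simple arithmetic with the couplings of Tab.~\ref{spin}; the short check below reproduces them (values in MeV, triality-scaling results in parentheses).
\begin{verbatim}
# Arithmetic check of Q = E + kappa_QQ/2 - (3/2) kappa_qq + 3 kappa_Qq, Eq. (eqQkarl),
# with the hyperfine couplings of Tab. 2 and the BO eigenvalues quoted above (MeV).
def Q_value(E, kappa_QQ, kappa_qq, kappa_Qq):
    return E + 0.5 * kappa_QQ - 1.5 * kappa_qq + 3.0 * kappa_Qq

# charm: E = -70 (-87); kappa[(cc)_3] = 28, kappa[(qq)_3bar] = 98, kappa[(q cbar)_1] = 70
print("Q_cc:", Q_value(-70, 28, 98, 70), "(", Q_value(-87, 28, 98, 70), ") MeV")  # +7 (-10)

# bottom: E = -67 (-85); kappa[(bb)_3] = 15, kappa[(q bbar)_1] = 23
print("Q_bb:", Q_value(-67, 15, 98, 23), "(", Q_value(-85, 15, 98, 23), ") MeV")  # ~-138 (-156)
\end{verbatim}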
{\bf \emph{Hyperfine couplings from rescaling the overlap probabilities: Method~1.}} The average distance of the light quarks as a function of $R$, the distance between the heavy quarks, is given by the integral~\cite{Maiani:2019lpu}: \begin{equation} d_{\bar q\bar q}(R)=\left(\Psi, |{\bm \xi}-\bm R-{\bm \eta}|\, \Psi\right)= \int_{\bm \xi,\bm \eta } \frac{\psi(\bm\xi)^2\phi(\bm\eta)^2+\psi(\bm\xi)\phi(\bm\xi)\psi(\bm\eta)\phi(\bm\eta)}{1+S^2(R)}\, \left|{\bm \xi}-\bm R-{\bm \eta}\right| \label{dist} \end{equation} The average distance between light quarks in the tetraquark is then given by \begin{equation} \bar d_{\bar q\bar q}=\int dR ~\chi^2(R)\, d_{\bar q\bar q}(R) \end{equation} where $ \chi(R)$ is the normalized radial wave function of the $QQ$ pair, solution of the Schr\"odinger equation in the Born-Oppenheimer potential $V_{BO}(R)$. Correspondingly, we rescale the hyperfine coupling $\kappa_{qq}$ of Tab.~\ref{spin} with the inverse cube of $\bar d_{\bar q\bar q}$. The inverse radius of diquarks $[qq]$ in baryons is estimated in Ref.~\cite{Karliner:2017gml} from the electrostatic contributions to the isospin breaking mass differences of baryons. They quote a parameter $a$ from which the radius is derived according to \begin{equation} a=\alpha \left\langle R_{[qq]}^{-1} \right\rangle \simeq 2.83\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace\Longrightarrow R_{[qq]}\simeq 2.58\ensuremath{{\mathrm{\,Ge\kern -0.1em V}}}\xspace^{-1} \end{equation} This leads to the rescaled coupling \begin{equation} \kappa^\prime_{qq}=\kappa_{qq}\, \left(R_{[qq]}/\bar d_{\bar q\bar q}\right)^3 \end{equation} We proceed analogously for the hyperfine $QQ$ coupling in the tetraquark, defining \begin{equation} \bar d_{QQ}=\int dR \, \chi^2(R)\, R \end{equation} We scale with the quarkonium average radius $R_{Q\bar Q}$, obtained variationally from the wave function of the Cornell potential \begin{equation} V(r)=-\frac{4}{3} \frac{\alpha_s(M_Q)}{r}+ k\, r \end{equation} to obtain \begin{equation} \kappa^\prime_{QQ}=\kappa_{Q Q}~\left(R_{Q\bar Q}/\bar d_{QQ} \right)^3 \end{equation} with $\kappa_{QQ}$ from Tab.~\ref{spin}. From the treatment of charmed baryons in~\cite{Karliner:2019lau} we extract \begin{equation} R_{Qq} \simeq 2.64\ensuremath{{\mathrm{\,Ge\kern -0.1em V}}}\xspace^{-1} \end{equation} A quark pair $Q\bar q$ in $QQ\bar q\bar q$ has two alternatives: $A)$ $Q$ and $\bar q$ belong to the same orbital, and lie at an average distance $\bar d_{Q\bar q}^A$; $B)$ $Q$ and $\bar q$ belong to different orbitals, at an average relative distance $\bar d_{Q\bar q}^B$. One has to rescale the couplings by the appropriate distances, i.e.
\begin{equation} \kappa^\prime_{Q\bar q}=\frac{\kappa_{Q\bar q}}{4}~\left[\frac{1}{2} \left(R_{Qq}/{\bar d}_{Q\bar q}^A \right)^3+\frac{1}{2}\left(R_{Qq}/{\bar d}_{Q\bar q}^B \right)^3\right] \end{equation} where $\kappa_{Q\bar q}$ is taken from Tab.~\ref{spin}, $1/4$ is the color factor of $Q\bar q$ in the tetraquark with respect to the meson, and the average distances are \begin{subequations} \begin{equation} \bar d_{Q\bar q}^A=\int dR ~\chi^2(R)\int_{\bm \xi} \frac{\psi(\bm\xi)^2+\psi(\bm\xi)\phi(\bm\xi)}{1+S^2(R)}\, \left|{\bm \xi}\right| \end{equation} \label{dist2} and \begin{equation} \bar d_{Q\bar q}^B=\int dR ~\chi^2(R)\int_{\bm \xi} \frac{\psi(\bm\xi)^2+\psi(\bm\xi)\phi(\bm\xi)}{1+S^2(R)}\, \left|{\bm \xi}-\bm R\right| \end{equation} \end{subequations} The resulting $Q$-values with respect to the $PP$ thresholds are finally \begin{equation} Q_{QQ}=E+\frac{1}{2}\kappa'_{QQ}+\kappa'_{\bar q \bar q}\left[S_{\bar q \bar q}(S_{\bar q \bar q}+1)-\frac{3}{2}\right]+\kappa'_{Q \bar q}\left[J(J+1) - S_{\bar q \bar q}(S_{\bar q \bar q}+1)- 2\right]+3\kappa_{Q\bar q}\label{eqQ} \end{equation} {\bf \emph{Hyperfine couplings from QCD: Method~2.}} We start from the interaction Hamiltonian at the quark level, \begin{equation} H_{ij}=-\frac{ \lambda_{ij}}{M_iM_j}~\frac{8\pi}{3}~\bm S_i \cdot \bm S_j~\delta^{3}(\bm x_i-\bm x_j)\equiv K_{ij}~\bm S_i \cdot \bm S_j~\delta^{3}(\bm x_i-\bm x_j) \end{equation} with $\lambda_{ij}$ given in Eq.~\eqref{tetra3}. Following~\cite{Godfrey:1985xj}, the light quark interaction Hamiltonian is \begin{equation} H_{\bar q\bar q}=K_{qq}~\bm S_{\bar q} \cdot\bm S_{\bar q}~\delta^3(\bm x_1-\bm x_2)\label{hfli} \end{equation} where $\bm x_1-\bm x_2$ is the relative position of the two light quarks. Because of the $\delta^3$-function in~\eqref{hfli} we have $\bm \eta =\bm \xi -\bm R$ and \begin{equation} \eta=\sqrt{\xi^2+R^2-2R\xi \cos\theta} \end{equation} In particular we find \begin{equation} V_{\bar q\bar q}(R)=( \Psi, H_{\bar q\bar q} \Psi) = \frac{8\pi \alpha_s}{9M_q^2}~\int_{\bm \xi} \frac{\psi({\bm \xi})^2~\phi({\bm R}-{\bm \xi})^2}{1+S^2(R)}\times~\left\{ \begin{array}{c}-3~(S_{\bar q\bar q}=0)\\ \\+1~(S_{\bar q\bar q}=1)\end{array}\right. \end{equation} In the heavy-light case we have (with an obvious notation, we distinguish the two heavy quarks as $A,B$ and the light quarks as $1,2$) \begin{equation} H_{Q\bar q}=K_{Q\bar q}\Big[{\bm S}_A\cdot {\bm { S}}_1~\delta^3({\bm x}_A-{\bm x}_1)+{\bm S}_A\cdot {\bm S}_2~\delta^3({\bm x}_A-{\bm x}_2) + (A\to B)\Big]=H_{A1} +H_{A2} + (A\to B) \end{equation} Therefore \begin{equation} ( \Psi, H_{A1} \Psi)= \frac{K_{Q\bar q}}{2\left[1+S^2(R)\right]}\cdot \Big[ \psi(0)^2+\psi(R)^2+2S~\psi(0)\psi(R)\Big](\bm S_A\cdot \bm S_1) \end{equation} where we used the fact that $\bm \xi =0$ and thus $\bm \eta=-\bm R$ (and $\phi(-\bm R)=\phi(\bm R)=\psi(\bm R)$ from~\eqref{cinque} and \eqref{sei}). Adding all terms, one finds \begin{equation} V_{Q\bar q}(R)=K_{Q\bar q}~\frac{ \psi(0)^2+\psi(R)^2+2S~\psi(0)\psi(R)}{2(1+S^2)}~ {\bm S}_{QQ} \cdot {\bm S}_{\bar q \bar q} \end{equation} We have \begin{equation} V_{Q\bar q}(R)=0\quad \text{for}\quad S_{\bar q\bar q}=0 \end{equation} whereas for $S_{\bar q\bar q}=1$ we have \begin{equation} V_{Q\bar q}(R)=\frac{4\pi\alpha_s}{9M_q M_Q}~\frac{ \psi(0)^2+\psi(R)^2+2S~\psi(0)\psi(R)}{2\left[1+S^2(R)\right]}\times~\left\{\begin{array}{c}-4~(J=0)\\-2~(J=1)\\+2~(J=2) \end{array}\right.
\end{equation} Both $V_{\bar q\bar q}(R)$ and $V_{Q\bar q}(R)$ are added to $V_\text{BO}(R)$ in Eq.~\eqref{bopot} before solving the Schr\"odinger equation. Finally, the contribution of the $QQ$ interaction is added perturbatively, \begin{equation} Q_{QQ}=E+\frac{1}{2}\kappa''_{QQ}\label{eqQ2} \end{equation} where \begin{equation} \kappa''_{QQ} = \frac{K_{QQ}}{2} \int_{\bm R}\,\frac{1}{4\pi}\left(\frac{\chi(R)}{R}\right)^2 \,\delta^3\left({\bm R}\right)=\frac{2\alpha_s}{9M_Q^2} \chi'(0)^2 \end{equation} {\bf \emph{Results.}} We consider the cases $S_{QQ}=1$ and $S_{\bar q\bar q}=0,1$. {\bf{\emph{$\bm I=\bm S_{\bar q\bar q}=0$.}}} The comparison between Table~\ref{tab:karliner-sqq0-J1-x} (the case of $S_{\bar q\bar q}=0$ and total spin $J=1$ as obtained with Method~1) and Table~\ref{tab:out-sqq0-J1-x} (again $S_{\bar q\bar q}=0$ and total spin $J=1$, but obtained with Method~2) is encouraging. \begin{table}[t] \centering \begin{tabular}{||c|c|c|c|c|c|c||} \hline & $\kappa'_{\bar q\bar q}$ & $\kappa'_{QQ}$ & $\kappa'_{Q\bar q}$ & $E$ & $Q$-value & BO Mass \\ \hline $cc$ & $+1.9~(+5.0)$ & $+0.4~(+0.7)$ & $+0.7~(+2.0)$ & $-70.3~(-86.8)$ & $+137.0~(+116.1)$ & $3872~(3851)$ \\ \hline $bb$ & $+2.7~(+8.6)$ & $+0.3~(+0.4)$ & $+3.0~(+1.1)$ & $-72.5~(-91.7)$ & $-7.4~(-35.5)$ & $10553~(10525)$ \\ \hline \end{tabular} \caption{\footnotesize Scaling of couplings, $S_{\bar q\bar q}=0$, $J=1$. All units are in\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace. The numbers in parentheses correspond to the triality scaling: in Eq.~\eqref{kbqbar3} use $k$ in place of $k/4$. The $Q$-value is taken from the $PP$ meson pair threshold. } \label{tab:karliner-sqq0-J1-x} \end{table} \begin{table}[t] \centering \begin{tabular}{||c|c|c|c|c|c|c||} \hline & $\kappa''_{\bar q\bar q}$ & $\kappa''_{QQ}$ & $\kappa''_{Q\bar q}$ & $E$ & $Q$-value & BO Mass \\ \hline $cc$ & $+3.1~(+9.4)$ & $+1.2~(+2.0)$ & $+2.1~(+7.9)$ & $-74.8~(-100.2)$ & $+135.8~(+110.8)$ & $3871~(3846)$ \\ \hline $bb$ & $+3.2~(+10.7)$ & $+0.5~(+0.7)$ & $+0.6~(+2.2)$ & $-77.3~(-107.4)$ & $-8.0~(-38.0)$ & $10552~(10522)$ \\ \hline \end{tabular} \caption{\footnotesize Couplings from QCD, $S_{\bar q\bar q}=0$, $J=1$. All units are in\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace. For comparison with the other table, we also calculate the contributions to $E$ from $\kappa''_{\bar q\bar q}$ and $\kappa''_{Q\bar q}$, averaging the corresponding terms with the BO wave function. The numbers in parentheses correspond to the triality scaling: in Eq.~\eqref{kbqbar3} use $k$ in place of $k/4$.} \label{tab:out-sqq0-J1-x} \end{table} There is remarkable agreement between the two results for the ${\cal T}_{cc}$ mass, which are both consistent with the mass of the ${\cal T}_{cc}^+(3875)$ observed by LHCb~\cite{LHCb:2021vvq,LHCb:2021auc}. This allows us to provide a prediction for the ${\cal T}_{bb}$ mass, as reported in Tables~\ref{tab:karliner-sqq0-J1-x},~\ref{tab:out-sqq0-J1-x} \begin{equation} M({\cal T}_{bb})\sim 10552~{\rm MeV} \label{prediz} \end{equation} Also notice that the $Q$-value of ${\cal T}_{bb}$ with respect to the $\bar B \bar B$ threshold compares well to the recent lattice QCD determination $Q=M({\cal T}_{bb})-2M(B)=-13^{+38}_{-30}\ensuremath{{\mathrm{\,Me\kern -0.1em V}}}\xspace$~\cite{Bicudo:2021qxj}.
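Before turning to $S_{\bar q\bar q}=1$, we recall for convenience the standard spin-recoupling relations behind the factors quoted above (this remark is ours; the relations are only implicit in the text): \begin{equation} \langle \bm S_{\bar q} \cdot \bm S_{\bar q}\rangle=\frac{1}{2}\left[S_{\bar q\bar q}(S_{\bar q\bar q}+1)-\frac{3}{2}\right], \qquad \langle \bm S_{QQ} \cdot \bm S_{\bar q\bar q}\rangle=\frac{1}{2}\left[J(J+1)-S_{QQ}(S_{QQ}+1)-S_{\bar q\bar q}(S_{\bar q\bar q}+1)\right], \end{equation} which give $-3/4$ and $+1/4$ for $S_{\bar q\bar q}=0,1$ and, at $S_{QQ}=S_{\bar q\bar q}=1$, the values $-2,-1,+1$ for $J=0,1,2$; the factors quoted in $V_{\bar q\bar q}$ and $V_{Q\bar q}$ are $4\langle \bm S_{\bar q} \cdot \bm S_{\bar q}\rangle=\{-3,+1\}$ and $2\langle \bm S_{QQ} \cdot \bm S_{\bar q\bar q}\rangle=\{-4,-2,+2\}$, respectively.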
{\bf{\emph{$\bm I=\bm S_{\bar q\bar q}=1$.}}} When studying the $S_{\bar q\bar q}=1$, $J=1$ case, as well as the cases $J=0,2$ (with $S_{QQ}=1$), we find that in the BO description of the system all quarks are at larger average relative distances than, for example, in the diquark-antidiquark picture. As a consequence, the difference in mass between the ${\cal T}_{QQ}(S_{\bar q\bar q}=0,J=1)$ and ${\cal T}_{QQ}(S_{\bar q\bar q}=1,J=1)$ turns out to be negligible, following either Method~1 or Method~2. In taking $S_{\bar q\bar q}=1$ we also have states with $J=0,2$, but still with no appreciable mass differences. In giving the result~\eqref{prediz}, as well as in the discussion on the spectrum at different $J$ values, the only source of theoretical error is in the difference we get when using either the Casimir or the `triality' scaling (see the results reported in parentheses in Tables~\ref{tab:karliner-sqq0-J1-x},~\ref{tab:out-sqq0-J1-x}). Clearly the Casimir scaling of the string tension agrees better with the ${\cal T}_{cc}^+(3875)$ determination. As commented below Eq.~\eqref{sei}, the value of the characteristic distance $1/A$ used in the orbital wave functions is determined by a variational principle. However, we observe that $\langle H\rangle$ as a function of $A$ is rather flat around the minimum. We find that a $5\%$ variation of $\langle H\rangle$ around the minimum induces an error of approximately $\pm 7$~MeV on the determination of the masses. This might be compatible with a spectrum having a lighter $J=0$ state, above the $D D$ threshold, and a heavier $J=2$ state, still too light to be seen. {\bf \emph{Conclusions.}} We have presented a prediction for the double-beauty tetraquark mass, see Tables~\ref{tab:karliner-sqq0-J1-x} and~\ref{tab:out-sqq0-J1-x}, based on a picture of the tetraquark system that is well described in the Born-Oppenheimer approximation. In this scheme the mass of the ${\cal T}_{cc}$ state is found in very good agreement with data, and the prediction for the ${\cal T}_{bb}$ state agrees with some lattice studies, as commented in the text. With the approximations used we are not able at this stage to provide the fine structure of the whole spectrum of $J=0,1,2$ states, but our results are not in contradiction with a lighter $J=0$ state and a slightly heavier $J=2$ state. Within the Born-Oppenheimer scheme, we have improved our calculation in two ways: $i)$ scaling baryon and meson hyperfine couplings with the dimensions of the BO bound state and $ii)$ using the plain quark-quark QCD hyperfine interaction. In both ways we get very close numerical results, which adds to the solidity of the BO approach. \acknowledgments We acknowledge interesting discussions with Ahmed Ali, Misha Mikhasenko, Giovanni Passaleva, Marco Pappagallo and Alexis Pompili. We are indebted to Vanya Belyaev for valuable information about the current search for $I=J=1$ doubly charmed tetraquarks in LHCb. AP and ADP thank the CERN-TH Division for kind hospitality during the completion of this work. \bibliographystyle{apsrev4-2}
\section{Introduction} A well-known property of shared object implementations is \emph{linearizability}~\cite{linearizability}. Intuitively, with a linearizable object (implementation) each operation must appear as if it takes effect instantaneously at some point during the time interval that it actually spans. As pointed out by the pioneering work of Golab \emph{et al.} \cite{sl11}, however, linearizable objects are not as strong as atomic objects in the following sense: a randomized algorithm that works with atomic objects may lose some of its properties if we replace the atomic objects that it uses with objects that are only linearizable. In particular, they present a shared-memory randomized algorithm that guarantees that some random variable has \emph{expected value} 1, but if we replace the algorithm's atomic registers with linearizable registers, a \mbox{\emph{strong adversary}} can manipulate the schedule to ensure that this random variable has expected value $\frac{1}{2}$. To avoid this \mbox{weakness} of linearizability, and ``limit the additional power that a strong adversary may gain when atomic objects are replaced with implemented objects'', Golab \emph{et al.} introduced the concept of \emph{strong linearizability}~\cite{sl11}. A natural question is whether this additional power of a strong adversary also applies to \emph{termination properties}, more precisely: is there a randomized algorithm that (a) terminates with probability 1 against a strong adversary when the objects that it uses are atomic, but (b) when these objects are replaced with linearizable objects (of the same type), a strong adversary can ensure that the algorithm never terminates? To the best of our knowledge, the question whether the ``termination with probability~1'' property can be lost when atomic objects are replaced with linearizable ones is not answered by the results in~\cite{sl11}, or in subsequent papers on this subject~\cite{sl19,sl15,sl12}. This question is particularly interesting because one of the main uses of randomized algorithms in distributed computing is to achieve termination with probability 1~\cite{abrahamson1988,aspnes1993,aspnes1998,aspnes2003,aspnes1990,aspnes1992,attiya08,bracha1991,chandra1996} (e.g., to ``circumvent'' the famous FLP impossibility result~\cite{flp}). For example, consider the well-known ABD algorithm that implements linearizable shared registers in message-passing systems~\cite{abd}.\footnote{This implementation works under the assumption that fewer than half of the processes may crash.} One important use of this algorithm is to relate message-passing and shared-memory systems as follows: any algorithm that works with atomic shared registers can automatically be transformed into an algorithm for message-passing systems by replacing its atomic registers with the ABD register implementation. But can we use the ABD algorithm to automatically transform any shared-memory \emph{randomized} algorithm that terminates with probability 1 (e.g., a randomized algorithm that solves consensus) into an algorithm that works in message-passing systems? In this paper, we show that replacing atomic registers with linearizable registers can indeed affect the termination property of randomized algorithms: termination with probability 1 can be lost. 
In fact, we prove that this loss of termination is general in the following sense: every randomized algorithm~$\mathcal{A}$ has a corresponding algorithm $\mathcal{A}'$ that solves the same problem if the registers that it uses are atomic or strongly-linearizable, but does not terminate if these registers are replaced with ``merely'' linearizable ones. More precisely, we show that for every randomized algorithm $\mathcal{A}$ that solves a task $T$ (e.g., consensus) and terminates with probability 1 against a strong adversary, there is a corresponding randomized algorithm $\mathcal{A}'$ that also solves $T$ such that: (1)~in addition to the set of base objects of $\mathcal{A}$, $\mathcal{A}'$ uses only a set of shared registers; (2) if these registers are atomic or $\textrm{write-strongly}$ linearizable, then $\mathcal{A}'$ terminates with probability 1 against a strong adversary, and its expected running time is only a small constant more than the expected running time of $\mathcal{A}$; but (3) if the registers are \emph{only} linearizable, then a strong adversary can prevent the termination of $\mathcal{A}'$. It is worth noting that this result allows us to answer our previous question about the ABD register implementation, namely, whether we can use it to automatically transform any randomized algorithm that works in shared-memory systems into a randomized algorithm that works in message-passing systems. In another paper, we proved that, although the registers implemented by the ABD algorithm are linearizable, they are \emph{not} strongly linearizable~\cite{abdnotsl}. Combining this result with the result of this paper proves that, in general, using the ABD register implementation instead of atomic registers in a randomized algorithm may result in an algorithm that does not terminate. \section{Model sketch} We consider a standard asynchronous shared-memory system with \emph{atomic} registers~\cite{lam86,herlihy91} where processes are subject to crash failures. We consider register implementations that are \emph{linearizable}~\cite{linearizability} or \emph{strongly linearizable}~\cite{sl11}. For brevity, in this paper a ``linearizable [strongly-linearizable] register'' refers to an ``implemented register whose implementation is linearizable [strongly-linearizable]''. \noindent The precise definition of strong linearizability of~\cite{sl11} is reproduced here for convenience: \begin{definition}\label{SL} A set of histories $\mathcal{H}$ over a set of shared objects is strongly linearizable if there exists a function $f$ mapping histories in $close(\mathcal{H})$ to sequential histories, such that: ~~~ (L) for any $H \in close(\mathcal{H})$, $f(H)$ is a linearization of $H$, and ~~~ (P) for any $G, H \in close(\mathcal{H})$, if $G$ is a prefix of $H$, then $f(G)$ is a prefix of $f(H)$. \noindent The function $f$ is called a strong linearization function for $\mathcal{H}$.
\end{definition} \section{Result} \begin{algorithm}[!ht] \caption{Weakener algorithm} \begin{multicols}{2} \label{toyalgo} For $j = 0,1,2,...$ \begin{itemize} \item $R_1[j]$: MWMR register initialized to $\bot$ \item $C_1[j]$: SWMR register initialized to $-1$ \item $R_2[j]$: SWMR register initialized to $\textsc{false}$ \end{itemize} \vspace{0.5cm} \begin{algorithmic}[1] \STATE \textsc{Code of process $p_i$, $i\in \{0, 1\}$}: \FOR {rounds $j = 0,1,2,...$} \STATE \{* \textbf{Phase 1:} writing $R_1[j]$ *\} \STATE $R_1[j] \gets i$ \label{pwrite1} \IF{$i=0$}\label{cointosser1} \STATE \{* code executed only by $p_0$ *\} \STATE $C_1[j] \gets$ flip coin \label{pcoin1} \ENDIF \STATE \{* \textbf{Phase 2:} reading $R_2[j]$ *\} \STATE $v_1 \gets R_2[j]$ \label{v1} \IF{$v_1 = \textsc{false}$}\label{guard2} \STATE \textbf{exit for loop} \label{exit2} \ENDIF \ENDFOR \STATE \textbf{return}\label{halt0} \columnbreak ~ ~ ~ ~ ~ ~ ~ \STATE \textsc{Code of process $p_i$, \mbox{$i\in \{2, 3, \ldots, n-1\}$}:} \FOR {rounds $j = 0,1,2,...$} \STATE \{* \textbf{Phase 1:} reading $R_1[j]$ and $C_1[j]$ *\} \STATE $u_1 \gets R_1[j]$ \label{u1} \STATE $u_2 \gets R_1[j]$ \label{u2} \STATE $c_1 \gets C_1[j]$ \label{rcoin1} \IF{($u_1 \neq c_1$ \OR $u_2 \neq 1-c_1$)}\label{guard1} \STATE \textbf{exit for loop} \label{exit1} \ENDIF \STATE \{* \textbf{Phase 2:} writing $R_2[j]$ *\} \STATE $R_2[j] \gets \textsc{true}$ \label{pwrite2} \ENDFOR \STATE \textbf{return}\label{halt1} \end{algorithmic} \end{multicols} \end{algorithm} Consider Algorithm~\ref{toyalgo} for $n \ge 3$ processes $p_0, p_1, p_2, \ldots, p_{n-1}$. This algorithm uses linearizable registers $R_1[j]$, $R_2[j]$, and $C_1[j]$ for $j \ge 0$. We first show that if these registers are \emph{not} $\textrm{write-strongly}$ linearizable, then a strong adversary $\mathcal{S}$ can construct an execution of Algorithm~\ref{toyalgo} in which all the processes are \emph{correct}\footnote{A process is correct if it takes infinitely many steps. We assume that after returning from the algorithm in line~\ref{halt0} or~\ref{halt1}, processes are supposed to take NOP steps (forever).} but they loop forever without reaching a return statement in line~\ref{halt0} or~\ref{halt1} (Theorem~\ref{LinearizableIsWeak}). We then show that if these registers are $\textrm{write-strongly}$ linearizable, then all the correct processes return from the algorithm with probability $1$ (within 2 rounds in expectation) (Theorem~\ref{WSLinearizableIsStrong}).\footnote{It turns out that these results depend only on whether the registers $R_1[j]$ ($j \ge 0$) are strongly-linearizable or not; this can be easily seen from the proofs of Theorems~\ref{LinearizableIsWeak} and~\ref{WSLinearizableIsStrong}.} \begin{theorem}\label{LinearizableIsWeak} If the registers of Algorithm~\ref{toyalgo} are linearizable but not $\textrm{write-strongly}$ linearizable, a~strong adversary $\mathcal{S}$ can construct a run where all the processes execute infinitely many rounds (and therefore never return in line~\ref{halt0} or~\ref{halt1}). \end{theorem} \begin{proof} Assume that $R_1[j]$, $R_2[j]$, and $C_1[j]$ (for all $j \ge 0$) are linearizable but \emph{not} $\textrm{write-strongly}$ linearizable. 
A strong adversary $\mathcal{S}$ can construct an infinite execution of Algorithm~\ref{toyalgo} as follows (Figure~\ref{toy}): \begin{figure}[!htb] \centering \includegraphics[width=1\textwidth]{toy-algo1.pdf} \caption{Phase 1 in a single round of an infinite execution} \label{toy} \end{figure} \begin{enumerate} \item\label{first-step} \textbf{Phase 1:} At time $t_0$, process $p_0$ starts writing~$0$ into $R_1[0]$ in line~\ref{pwrite1}, process $p_1$ starts writing~$1$ into $R_1[0]$ in line~\ref{pwrite1}, and processes $p_2, p_3, \ldots, p_{n-1}$ start reading $R_1[0]$ in line~\ref{u1}. \item At time $t_1 > t_0$, process $p_0$ completes its writing of 0 into $R_1[0]$ in line~\ref{pwrite1}. \item After time $t_1$, process $p_0$ flips a coin and writes the result into $C_1[0]$ in line~\ref{pcoin1}. Let $t_c > t_1$ be the time when $p_0$ completes this write. Depending on the result of $p_0$'s coin flip (and therefore the content of $C_1[0]$), the adversary $\mathcal{S}$ continues the run it is constructing in one of the following two ways: \textbf{Case 1}: $C_1[0]=0$ at time $t_c$. The continuation of the run in this case is shown at the top of Figure~\ref{toy}. \begin{enumerate} \item At time $t_2 > t_c$, $p_1$ completes its writing of 1 into $R_1[0]$ (line~\ref{pwrite1}). Note that \emph{both} $p_0$ and $p_1$ have now completed Phase 1 of round $j=0$. \item The adversary $\mathcal{S}$ linearizes the write of 1 into $R_1[0]$ by $p_1$ \emph{after} the write of 0 into $R_1[0]$ by~$p_0$. \item Note that $p_2, p_3, \ldots, p_{n-1}$ are still reading $R_1[0]$ in line~\ref{u1}. Now the adversary linearizes these read operations \emph{between} the above write of 0 by $p_0$ and the write of 1 by~$p_1$. \item At time $t_3 > t_2$, processes $p_2, p_3, \ldots, p_{n-1}$ complete their read of $R_1[0]$ in line~\ref{u1}. By the above linearization, they read $0$, and so they set (their local variable) $u_1 = 0$ in that line. \item Then processes $p_2, p_3, \ldots, p_{n-1}$ start and complete their read of $R_1[0]$ in line~\ref{u2}. Since (1)~these reads start \emph{after} the time $t_2$ when $p_1$ completed its write of 1 into $R_1[0]$, and (2)~this write is linearized \emph{after} the write of $p_0$ into $R_1[0]$, processes $p_2, p_3, \ldots, p_{n-1}$ read~$1$. So they all set (their local variable) $u_2 = 1$ in line~\ref{u2}. Let $t_4 > t_3$ be the time when every process $p_2, p_3, \ldots, p_{n-1}$ has set $u_2 = 1$. \item After time $t_4$, processes $p_2, p_3, \ldots, p_{n-1}$ start reading $C_1[0]$ in line~\ref{rcoin1}. Since $C_1[0] =0$ at time~$t_c$ and it is not modified thereafter, $p_2, p_3, \ldots, p_{n-1}$ read 0 and set (their local variable) $c_1 = 0$ in line~\ref{rcoin1}. \item Then $p_2, p_3, \ldots, p_{n-1}$ execute line~\ref{guard1} and find that the condition of this line is \emph{not} satisfied because they have $u_1 = c_1=0$ and $u_2 = 1- c_1 = 1$. So $p_2, p_3, \ldots, p_{n-1}$ complete Phase 1 of round $j = 0$ \emph{without exiting in line~\ref{exit1}}. Recall that both $p_0$ and $p_1$ also completed Phase 1 of round $j=0$ without exiting. \end{enumerate} \textbf{Case 2}: $C_1[0]=1$ at time $t_c$. The continuation of the run in this case is shown at the bottom of Figure~\ref{toy}.
This continuation is essentially symmetric to the one for Case 1: the key difference is that the adversary $\mathcal{S}$ now linearizes the write of $p_1$ before the write of $p_0$, as we describe in detail below. \begin{enumerate} \item At time $t_2 > t_c$, $p_1$ completes its writing of 1 into $R_1[0]$ (line~\ref{pwrite1}). Note that \emph{both} $p_0$ and $p_1$ have now completed Phase 1 of round $j=0$. \item $\mathcal{S}$ linearizes the write of 1 into $R_1[0]$ by $p_1$ \emph{before} the write of 0 into $R_1[0]$ by~$p_0$. \item Note that $p_2, p_3, \ldots, p_{n-1}$ are still reading $R_1[0]$ in line~\ref{u1}. Now the adversary linearizes these read operations \emph{between} the above write of 1 by $p_1$ and the write of 0 by~$p_0$. \item At time $t_3 > t_2$, processes $p_2, p_3, \ldots, p_{n-1}$ complete their read of $R_1[0]$ in line~\ref{u1}. By the above linearization, they read $1$, and so they set (their local variable) $u_1 = 1$ in that line. \item Then processes $p_2, p_3, \ldots, p_{n-1}$ start and complete their read of $R_1[0]$ in line~\ref{u2}. Since (1)~these reads start \emph{after} the time $t_1$ when $p_0$ completed its write of 0 into $R_1[0]$, and (2)~this write is linearized \emph{after} the write of $p_1$ into $R_1[0]$, processes $p_2, p_3, \ldots, p_{n-1}$ read~$0$. So they all set (their local variable) $u_2 = 0$ in line~\ref{u2}. Let $t_4 > t_3$ be the time when every process $p_2, p_3, \ldots, p_{n-1}$ has set $u_2 = 0$. \item After time $t_4$, processes $p_2, p_3, \ldots, p_{n-1}$ start reading $C_1[0]$ in line~\ref{rcoin1}. Since $C_1[0] = 1$ at time~$t_c$ and it is not modified thereafter, $p_2, p_3, \ldots, p_{n-1}$ read 1 and set (their local variable) $c_1 = 1$ in line~\ref{rcoin1}. \item Then $p_2, p_3, \ldots, p_{n-1}$ execute line~\ref{guard1} and find that the condition of this line is \emph{not} satisfied because they have $u_1 = c_1=1$ and $u_2 = 1- c_1 = 0$. So $p_2, p_3, \ldots, p_{n-1}$ complete Phase 1 of round $j = 0$ \emph{without exiting in line~\ref{exit1}}. Recall that both $p_0$ and $p_1$ also completed Phase 1 of round $j=0$ without exiting. \end{enumerate} Thus in both cases, all $n$ processes complete Phase 1 of round $j = 0$ without exiting, and are now poised to execute Phase 2 of this round. The adversary $\mathcal{S}$ extends the run that it built so far as follows. \item\label{second-phase} \textbf{Phase 2:} Process $p_2$ writes~\textsc{true} into $R_2[0]$ in line~\ref{pwrite2}; let $t'_0$ be the time when this write operation completes. (Note that $p_2$ has now completed Phase 2 of round 0.) \item After time $t'_0$, processes $p_0$ and $p_1$ read $R_2[0]$ into $v_1$ in line~\ref{v1}; since $p_2$ completes its write of \textsc{true} into $R_2[0]$ before $p_0$ and $p_1$ start to read this register, $p_0$ and $p_1$ set $v_1 = \textsc{true}$ in line~\ref{v1}. \item Then $p_0$ and $p_1$ execute line~\ref{guard2} and find that the condition ``$v_1 = \textsc{false}$'' of this line is \emph{not} satisfied. So $p_0$ and $p_1$ complete Phase 2 of round $j = 0$ \emph{without exiting in line~\ref{exit2}}. \item Processes $p_3, \ldots, p_{n-1}$ execute line~\ref{pwrite2}, and so they also complete Phase 2 of round $j = 0$.
So all the $n$ processes $p_0, p_1, \ldots, p_{n-1}$ have completed Phase 2 of round $0$ without exiting; they are now poised to execute round $j=1$. \end{enumerate} \noindent The adversary $\mathcal{S}$ continues to build the run by repeating the above scheduling of $p_0, p_1, \ldots, p_{n-1}$ for rounds $j=1,2,\ldots$. This gives a non-terminating run of Algorithm~\ref{toyalgo} with \mbox{probability~1:} in this run, all processes are correct, i.e., each takes an infinite number of steps, but loops forever in a for loop and never reaches the return statement that follows this loop (in line~\ref{halt0} or~\ref{halt1}). \end{proof} We now prove that if the registers $R_1[j]$ for $j=0,1,2,\ldots$ are $\textrm{write-strongly}$ linearizable, then Algorithm~\ref{toyalgo} terminates with probability 1, even against a strong adversary. Roughly speaking, this is because if $R_1[j]$ is $\textrm{write-strongly}$ linearizable, then the order in which $0$ and $1$ are written into $R_1[j]$ in line~\ref{pwrite1} is already \emph{fixed} before the adversary $\mathcal{S}$ can see the result of the coin flip in line~\ref{pcoin1} of round $j$. So for every round $j\ge 0$, the adversary can\emph{not} ``retroactively'' decide on this linearization order according to the coin flip result (as it does in the proof of Theorem~\ref{LinearizableIsWeak}, where $R_1[j]$ is merely linearizable) to ensure that processes $p_i$ ($i \ge 2$) do not exit by the condition of line~\ref{guard1}. Thus, with probability 1/2, all these processes will exit in line~\ref{exit1}. And if they all exit there, then no process will write \textsc{true} in register $R_2[j]$ in line~\ref{pwrite2}, and so $p_0$ and $p_1$ will also exit in line~\ref{exit2} of round $j$. \begin{theorem}\label{WSLinearizableIsStrong} If the registers of Algorithm~\ref{toyalgo} are $\textrm{write-strongly}$ linearizable, then the algorithm terminates, even against a strong adversary: with probability 1, all the correct processes reach the return statement in line~\ref{halt0} or~\ref{halt1}; furthermore, they do so within 2 expected rounds. \end{theorem} \noindent To prove the above theorem, we first show the following two lemmas. \begin{lemma}\label{smallarestuck} For all rounds $j \ge 0$, if no process $\textrm{reaches}$ line~\ref{pwrite2} in round $j$, then neither $p_0$ nor $p_1$ enters round $j+1$. \end{lemma} \begin{proof} Suppose no process $\textrm{reaches}$ line~\ref{pwrite2} in round $j$. Then no process writes into $R_2[j]$, and so $R_2[j] = \textsc{false}$ (the initial value of $R_2[j]$) at all times. Assume, for contradiction, that some process $p_i$ with $i\in\{0,1\}$ enters round $j+1$. So $p_i$ did not exit in line~\ref{exit2} of round $j$. Thus, when $p_i$ evaluated the exit condition ``$v_1 = \textsc{false}$'' in line~\ref{guard2} of round $j$, it found that $v_1= \textsc{true}$. But $v_1$ is the value that $p_i$ read from $R_2[j]$ in line~\ref{v1} of that round, and so $v_1$ can only be $\textsc{false}$ --- a contradiction. \end{proof} \begin{lemma}\label{exitchance} For all rounds $j \ge 0$, with probability at least $1/2$, no process enters round $j+1$. \end{lemma} \begin{proof} Consider any round $j\ge0$. There are two cases: \begin{enumerate}[(I)] \item\label{noRwrite} \textbf{Process $p_0$ does not complete its write of register $R_1[j]$ in line~\ref{pwrite1} in round $j$.} Thus, $p_0$ never reaches line~\ref{pcoin1} (where it writes $C_1[j]$) in round $j$.
So $C_1[j] = -1$ (the initial value of $C_1[j]$) at all times. \begin{claim}\label{allarestuck-det} No process enters round $j+1$. \end{claim} \begin{proof} We first show that no process $\textrm{reaches}$ line~\ref{pwrite2} in round $j$, To see why, suppose, for contradiction, some process $p_i$ $\textrm{reaches}$ line~\ref{pwrite2} in round $j$. So $p_i$ did not exit in line~\ref{exit1} of round $j$. Thus, when $p_i$ evaluated the exit condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} it found the condition to be false, i.e., it found that $u_1= c_1$ and $u_2 = 1-c_1$. Note that $c_1$ is the value that $p_i$ read from $C_1[j]$ in line~\ref{rcoin1}, and so $c_1 = -1$. Thus, $p_i$ found that $u_1= -1$ and $u_2 = 2$ in line~\ref{guard1}. But $u_1$ is the value that $p_i$ read from $R_1[j]$ in line~\ref{u1}, and so $u_1$ can only be $\bot$ (the initial value of $R_1[j]$), or 0~or~1 (the values written into it by $p_0$ and $p_1$, respectively). So $u_1\neq -1$ in line~\ref{guard1} --- a contradiction. So no process $\textrm{reaches}$ line~\ref{pwrite2} in round $j$. This implies that: (i) processes $p_2, \dots, p_{n-1}$ do not enter round $j+1$, and (ii) by Lemma~\ref{smallarestuck}, neither $p_0$ nor $p_1$ enters round $j+1$. \end{proof} \item\label{Rwrite} \textbf{Process $p_0$ completes its write of register $R_1[j]$ in line~\ref{pwrite1} in round $j$.} \begin{claim}\label{allarestuck-rndm} With probability at least 1/2, no process enters round $j+1$. \end{claim} \begin{proof} Consider the set of histories $\mathcal{H}$ of Algorithm~\ref{toyalgo}; this is a set of histories over the registers $R_1[j]$, $R_2[j]$, $C_1[j]$ for $j \ge 0$. Since these registers are strongly linearizable, by Lemma 4.8 of~\cite{sl11}, $\mathcal{H}$~is~strongly linearizable, i.e., it has at least one strong linearization function that satisfies properties (L) and (P) of Definition~\ref{SL}. Let $f$ be the $\textrm{write-strong}$ linearization function that the~adversary~$\mathcal{S}$~uses. Let $G$ be an arbitrary history of the algorithm up to and including the completion of the write of $0$ into $R_1[j]$ by $p_0$ in line~\ref{pwrite1} in round $j$. Since $p_0$ completes its write of $0$ into $R_1[j]$ in $G$, this write operation appears in the $\textrm{write-strong}$ linearization $f(G)$. Now there are two cases: \begin{itemize} \item\label{Casino1} \textbf{Case A}: In $f(G)$, the write of $1$ into $R_1[j]$ by $p_1$ in line~\ref{pwrite1} in round $j$ occurs \emph{before} the write of $0$ into $R_1[j]$ by $p_0$ in line~\ref{pwrite1} in round $j$. Since $f$ is a $\textrm{write-strong}$ linearization function, for every extension $H$ of the history $G$ (i.e., for every history $H$ such that $G$ is a prefix of $H$), the write of $1$ into $R_1[j]$ occurs before the write of $0$ into $R_1[j]$ in the linearization $f(H)$. Thus, in $G$ and every extension $H$ of $G$, no process can first read $0$ from $R_1[j]$ and then read $1$ from~$R_1[j]$~($\star$). Let $\mathcal{P}$ be the set of processes in $\{p_2 , p_3, \ldots , p_{n-1} \}$ that reach line~\ref{guard1} in round $j$ and evaluate the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) of that line. Note that $u_1$ and $u_2$ are the values that the processes in $\mathcal{P}$ read from $R_1[j]$ consecutively in lines~\ref{u1} and \ref{u2}. So $u_1$ and $u_2$ are in $\{0,1,\bot\}$, and, by~($\star$), no process can have both $u_1=0$ and $u_2=1$ ($\star \star$). 
Moreover, $c_1$ is the value that the processes in $\mathcal{P}$ read from $C_1[j]$ in line~\ref{rcoin1}, and so $c_1$ is in $\{0,1,-1\}$. Let $\mathcal{P' \subseteq P}$ be the subset of processes in $\mathcal{P}$ that have $c_1 = -1$ or $c_1 = 0$ when they evaluate the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} in round $j$. \begin{cclaim}\label{Mannaggia1} ~ \begin{enumerate}[(a)] \item\label{SC1} No process in $\mathcal{P'}$ $\textrm{reaches}$ line~\ref{pwrite2} in round $j$. \item\label{SC2} If $\mathcal{P'} = \mathcal{P}$ then neither $p_0$ nor $p_1$ enters round $j+1$. \end{enumerate} \end{cclaim} \begin{proof} To see why (a) holds, note that: (i) every process $p_i$ in $\mathcal{P'}$ that has $c_1 = -1$ evaluates the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} to true because $u_1 \neq -1$; and (ii) every process $p_i$ in $\mathcal{P'}$ that has $c_1 = 0$, also evaluates the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} to true (otherwise $p_i$ would have both $u_1 = c_1 = 0$ and $u_2 = 1-c_1 = 1$, which is not possible by ($\star \star$)). Thus, no process $p_i$ in $\mathcal{P'}$ $\textrm{reaches}$ line~\ref{pwrite2} in round $j$ (it would exit in line~\ref{exit1} before reaching that line). To see why (b) holds, suppose $\mathcal{P'} = \mathcal{P}$ and consider an arbitrary process $p$. If $p \not \in \mathcal{P}$ then $p$ does not evaluate the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} in round $j$; and if $p \in \mathcal{P}$, then $p \in \mathcal{P'}$, and so from part (a), $p$ does not $\textrm{reach}$ line~\ref{pwrite2} in round $j$. So in both cases, $p$ does not $\textrm{reach}$ line~\ref{pwrite2} in round $j$. Thus no process $\textrm{reaches}$ line~\ref{pwrite2} in round $j$, and so, by Lemma~\ref{smallarestuck}, neither $p_0$ nor $p_1$ enters round $j+1$. \end{proof} Now recall that $G$ is the history of the algorithm up to and including the completion of the write of $0$ into $R_1[j]$ by $p_0$ in line~\ref{pwrite1} in round $j$. After this write, i.e., in any extension $H$ of $G$, $p_0$ is supposed to flip a coin and write the result into $C_1[j]$ in line~\ref{pcoin1}. Thus, with probability \emph{at least}~$1/2$, $p_0$~will \emph{not} invoke the operation to write $1$ into $C_1[j]$. So with probability at least~$1/2$, processes never read 1 from $C_1[j]$. Thus with probability at least~$1/2$, no process in $\mathcal{P}$ has $c_1 = 1$ when it evaluates the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} in round~$j$. Since $c_1 \in \{0,1,-1\}$, this implies that with probability at least~$1/2$, every process in $\mathcal{P}$ has $c_1 = -1$ or $c_1 = 0$ when it evaluates this condition in line~\ref{guard1} in round $j$; in other words, with probability at least~$1/2$, $\mathcal{P' = P}$. Therefore, from Claim~\ref{Mannaggia1}, with probability at least~$1/2$: \begin{enumerate}[(a)] \item\label{SC1} No process in $\mathcal{P}$ $\textrm{reaches}$ line~\ref{pwrite2} in round $j$. \item\label{SC2} Neither $p_0$ nor $p_1$ enters round $j+1$. \end{enumerate} This implies that in Case~A, with probability (at least)~$1/2$, no process enters round $j+1$. \item\label{Casino2} \textbf{Case B}: In $f(G)$, the write of $1$ into $R_1[j]$ by $p_1$ in line~\ref{pwrite1} in round $j$ does \emph{not} occur before the write of $0$ into $R_1[j]$ by $p_0$ in line~\ref{pwrite1} in round $j$. 
This case is essentially symmetric to the one for Case~A, we include it below for completeness. Since $f$ is a $\textrm{write-strong}$ linearization function, for every extension $H$ of the history $G$, the write of~$1$ into $R_1[j]$ does not occur before the write of $0$ into $R_1[j]$ in the linearization $f(H)$. Thus, in $G$ and every extension $H$ of $G$, no process can first read $1$ from $R_1[j]$ and then read $0$ from~$R_1[j]$~($\dagger$). Let $\mathcal{P}$ be the set of processes in $\{p_2 , p_3, \ldots , p_{n-1} \}$ that reach line~\ref{guard1} in round $j$ and evaluate the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) of that line. Note that $u_1$ and $u_2$ are the values that the processes in $\mathcal{P}$ read from $R_1[j]$ consecutively in lines~\ref{u1} and \ref{u2}. So $u_1$ and $u_2$ are in $\{0,1,\bot\}$, and, by ($\dagger$), no process can have both $u_1=1$ and $u_2=0$ $(\dagger \dagger)$. Moreover, $c_1$ is the value that the processes in $\mathcal{P}$ read from $C_1[j]$ in line~\ref{rcoin1}, and so $c_1$ is in $\{0,1,-1\}$. Let $\mathcal{P' \subseteq P}$ be the subset of processes in $\mathcal{P}$ that have $c_1 = -1$ or $c_1 = 1$ when they evaluate the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} in round $j$. \begin{cclaim}\label{Mannaggia2} ~ \begin{enumerate}[(a)] \item\label{SCC1} No process in $\mathcal{P'}$ $\textrm{reaches}$ line~\ref{pwrite2} in round $j$. \item\label{SCC2} If $\mathcal{P'} = \mathcal{P}$ then neither $p_0$ nor $p_1$ enters round $j+1$. \end{enumerate} \end{cclaim} \begin{proof} To see why (a) holds, note that: (i) every process $p_i$ in $\mathcal{P'}$ that has $c_1 = -1$ evaluates the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} to true because $u_1 \neq -1$; and (ii) every process $p_i$ in $\mathcal{P'}$ that has $c_1 = 1$, also evaluates the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} to true (otherwise $p_i$ would have both $u_1 = c_1 = 1$ and $u_2 = 1-c_1 = 0$, which is not possible by ($\dagger \dagger$)). Thus, no process $p_i$ in $\mathcal{P'}$ $\textrm{reaches}$ line~\ref{pwrite2} in round $j$ (it would exit in line~\ref{exit1} before reaching that line). To see why (b) holds, suppose $\mathcal{P'} = \mathcal{P}$ and consider an arbitrary process $p$. If $p \not \in \mathcal{P}$ then $p$ does not evaluate the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} in round $j$; and if $p \in \mathcal{P}$, then $p \in \mathcal{P'}$, and so from part (a), $p$ does not $\textrm{reach}$ line~\ref{pwrite2} in round $j$. So in both cases, $p$ does not $\textrm{reach}$ line~\ref{pwrite2} in round $j$. Thus no process $\textrm{reaches}$ line~\ref{pwrite2} in round $j$, and so, by Lemma~\ref{smallarestuck}, neither $p_0$ nor $p_1$ enters round $j+1$. \end{proof} Now recall that $G$ is the history of the algorithm up to and including the completion of the write of $0$ into $R_1[j]$ by $p_0$ in line~\ref{pwrite1} in round $j$. After this write, i.e., in any extension $H$ of $G$, $p_0$ is supposed to flip a coin and write the result into $C_1[j]$ in line~\ref{pcoin1}. Thus, with probability \emph{at least}~$1/2$, $p_0$~will \emph{not} invoke the operation to write $0$ into $C_1[j]$. So with probability at least~$1/2$, processes never read 0 from $C_1[j]$. Thus with probability at least~$1/2$, no process in $\mathcal{P}$ has $c_1 = 0$ when it evaluates the condition ($u_1 \neq c_1$ or $u_2 \neq 1-c_1$) in line~\ref{guard1} in round~$j$. 
Since $c_1 \in \{0,1,-1\}$, this implies that with probability at least~$1/2$, every process in $\mathcal{P}$ has $c_1 = -1$ or $c_1 = 1$ when it evaluates this condition in line~\ref{guard1} in round $j$; in other words, with probability at least~$1/2$, $\mathcal{P' = P}$. Therefore, from Claim~\ref{Mannaggia2}, with probability at least~$1/2$: \begin{enumerate}[(a)] \item\label{SC1} No process in $\mathcal{P}$ $\textrm{reaches}$ line~\ref{pwrite2} in round $j$. \item\label{SC2} Neither $p_0$ nor $p_1$ enters round $j+1$. \end{enumerate} This implies that in Case~B, with probability at least~$1/2$, no process enters round $j+1$. \end{itemize} So in both Cases A and B, with probability at least 1/2, no process enters round $j+1$. \end{proof} \end{enumerate} \noindent Therefore, from Claims~\ref{allarestuck-det} and~\ref{allarestuck-rndm} of Cases~\ref{noRwrite} and~\ref{Rwrite}, with probability at least $1/2$, no process enters round $j+1$. \end{proof} \noindent We can now complete the proof of Theorem~\ref{WSLinearizableIsStrong}, namely, that with $\textrm{write-strongly}$ linearizable registers, Algorithm~\ref{toyalgo} terminates with probability 1 in expected $2$ rounds, even against a strong adversary. \noindent Consider any round $j \ge 0$. By Lemma~\ref{exitchance}, with probability at least $1/2$, no process enters round $j+1$. Since this holds for every round $j \ge 0$, then it must be that, with probability 1, all the processes that take an infinite number of steps must exit their loop in lines~\ref{exit1} or \ref{exit2}, and reach the return statement that follows this loop; furthermore, they do so within~2 expected iterations of the loop. \begin{theorem}\label{bigbob} Let $\mathcal{A}$ be any randomized algorithm that solves a task $T$ (such as consensus) for \linebreak \mbox{$n \ge 3$} processes and terminates with probability 1 against a strong adversary. There is a corresponding randomized algorithm $\mathcal{A}'$ that solves $T$ for $n \ge 3$ processes such that: \begin{enumerate} \item $\mathcal{A}'$ uses a set $\cal{R}$ of shared registers in addition to the set of base objects of $\mathcal{A}$. \item If the registers in $\cal{R}$ are atomic or $\textrm{write-strongly}$ linearizable, then $\mathcal{A}'$ terminates with probability 1 against a strong adversary. Furthermore, the expected running time of $\mathcal{A}'$ is only a constant more than the expected running time of $\mathcal{A}$. \item If the registers in $\cal{R}$ are \emph{only} linearizable, then a strong adversary can prevent the termination~of~$\mathcal{A}'$. \end{enumerate} \end{theorem} \begin{proof} Consider any randomized algorithm $\mathcal{A}$ that solves some task $T$ for $n \ge 3$ processes $p_0, p_1, p_2, \ldots, p_{n-1}$, and terminates with probability 1 against a strong adversary. Using $\mathcal{A}$, we construct the following randomized algorithm $\mathcal{A}'$: every process $p_i$ with $i\in\{0,1,2,...,n-1\}$ first executes Algorithm~\ref{toyalgo}; if $p_i$ returns then it executes algorithm $\mathcal{A}$. Note that: \begin{enumerate} \item In addition to the set of base objects that $\mathcal{A}$ uses, the algorithm $\mathcal{A}'$ uses the set of shared registers ${\cal{R}} = \{R_1[j], R_2[j], C_1[j] ~|~ \textrm{for } j \ge 0 \}$. \item Suppose these registers are $\textrm{write-strongly}$ linearizable. 
Then, by Theorem~\ref{WSLinearizableIsStrong}, Algorithm~\ref{toyalgo} (that processes execute before executing $\mathcal{A}$) terminates with probability 1 in expected $2$ rounds against a strong adversary. Since $\mathcal{A}$ also terminates with probability 1 against a strong adversary, the algorithm $\mathcal{A}'$ also terminates with probability 1 against a strong adversary, and the expected running time of $\mathcal{A}'$ is only a constant time more than the expected running time of the given algorithm $\mathcal{A}$. Since $\mathcal{A}$ solves task $T$, it is clear that $\mathcal{A}'$ also solves $T$. \item Suppose these registers are linearizable but not $\textrm{write-strongly}$ linearizable. Then, by Theorem~\ref{LinearizableIsWeak}, a~strong adversary can construct a run of Algorithm~\ref{toyalgo} where, with probability~$1$, all the processes execute infinitely many rounds and never return. Thus, since $\mathcal{A}'$ starts by executing Algorithm~\ref{toyalgo}, it is clear that a strong adversary can prevent the termination of $\mathcal{A}'$ \emph{with probability 1}. \qedhere \end{enumerate} \end{proof} \bibliographystyle{abbrv}
\section{Introduction} The quantization of the gravitational interaction has crystallized as a considerably more challenging endeavor than the quantization of matter fields. Paired with the success of classical general relativity when it comes to the accuracy of predictions, the question has been raised whether we are on the right track, or whether the question \emph{how} to quantize gravity is misguided and a semiclassical theory, in which only matter is quantized and spacetime remains fundamentally classical, should be sought for instead. The discussion about whether or not gravity must be quantized reaches back as far as to the early days of quantum field theory. At the 1957 Chapel Hill Conference\cite{dewittRoleGravitationPhysics1957} Feynman famously introduced a thought experiment in which a spin superposition state becomes entangled with the position of a macroscopic mass, allegedly showing the need for a quantized gravitational field---at least if one is willing to ``believe in quantum mechanics up to any level''\cite{dewittRoleGravitationPhysics1957}. Similar thought experiments\cite{eppleyNecessityQuantizingGravitational1977,kibbleSemiClassicalTheoryGravity1981,pageIndirectEvidenceQuantum1981,baymTwoslitDiffractionHighly2009,belenchiaQuantumSuperpositionMassive2018} have been repeatedly brought into the discussion since, to prove the necessity of quantization, with a direct refutation\cite{kibbleSemiClassicalTheoryGravity1981,huggettWhyQuantizeGravity2001,mattinglyQuantumGravityNecessary2005,mattinglyWhyEppleyHannah2006,kieferQuantumGravity2007,albersMeasurementAnalysisQuantum2008,kentSimpleRefutationEppley2018,rydvingGedankenExperimentsCompel2021} often following on the heels. As recently as last summer, the question has been asked: \textit{``Do Gedankenexperiments compel quantization of gravity?''}\cite{rydvingGedankenExperimentsCompel2021} with the answer still the same as much more humbly and eloquently\footnote{With the greatest appreciation, I welcome any clues regarding the mode of operation of the mysterious ``Chicago machine'' transforming hogs into sausages.} stated by Rosenfeld\cite{rosenfeldQuantizationFields1963} 59 years ago: \textbf{No!} I take a closer look at these thought experiments and consistency arguments and assert that the underlying concepts of semiclassical gravity fall into three categories, ultimately related to their incorporation of the measurement process. I first analyze the implications of the sole experiment on semiclassical gravity that has actually been conducted\cite{pageIndirectEvidenceQuantum1981}, concluding that it only rules out a rather specific sub-category of semiclassical theories: the Everettian ones which are based on the quantum state without ever collapsing the wave function. I further discuss the three classes of paradoxes that are brought up as arguments against semiclassical gravity, and explain why only one of the paradoxes considered in connection with semiclassical gravity---and only in the third case of the traditional, $\psi$-ontic semiclassical models---poses a challenge for the coupling of classical gravity to quantum matter. In order to resolve it, a semiclassical model for gravity must be of a twofold nature: providing both the coupling of quantum matter to classical spacetime and a dynamical description of wave function collapse. \section{Models of semiclassical gravity}\label{sec:models} Before we begin, let us clarify the definition of some notions. 
Talking about the necessity of quantizing gravity, one faces the obvious question of what it means to \emph{quantize} gravity. For the purpose of this work, quantized gravity refers to any model in which spacetime is \emph{not classical}. By contrast, we refer to a theory with classical spacetime satisfying Einstein's equations for \emph{some} right-hand side as \textbf{semiclassical gravity}. In semiclassical gravity, matter is usually thought of as being described by quantum fields on said classical curved spacetime, with some freedom of choice regarding the backreaction through Einstein's equations and---as we will see---the characterization of measurement. The semiclassical Einstein equations\cite{mollerTheoriesRelativistesGravitation1962,rosenfeldQuantizationFields1963}, \begin{equation}\label{eqn:sce} R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \frac{8 \pi G}{c^4} \bra{\Psi} \hat{T}_{\mu\nu} \ket{\Psi} \,, \end{equation} where the left-hand side is the Einstein tensor, constructed from the scalar curvature $R$, the Ricci tensor $R_{\mu\nu}$, and the metric $g_{\mu\nu}$, and the right-hand side contains the expectation value of the stress-energy operator in the quantum state $\Psi$, are only one of potentially many possible realizations of semiclassical gravity. In order to be as precise as possible, I give a vague sketch of a definition by noting that a model of semiclassical gravity generally consists of (some of) the following ingredients: \begin{enumerate} \item[(i)] a classical spacetime, i.\,e.\ a pseudo-Riemannian 4-manifold $(\mathcal{M},g)$ with metric $g_{\mu\nu}$, \item[(ii)] a set of quantum fields described by the total state $\ket{\Psi} \in \mathcal{H}$, where the state space $\mathcal{H}$ is constructed in the spirit of quantum fields on curved spacetime\cite{waldQuantumFieldTheory1994}, \item[(iii)] a set of ``hidden variables'', i.\,e.\ classical fields $\Lambda : \mathcal{M} \to \mathbb{R}^n$ (which can be empty), \item[(iv)] the U-process\cite{penroseGravityRoleQuantum1996}, i.\,e.\ a dynamical law for the quantum states $\ket{\Psi}$, usually given in the form of a Lagrangian for the fields (e.\,g.\ the standard model of elementary particles), \item[(v)] the R-process\cite{penroseGravityRoleQuantum1996}, i.\,e.\ a dynamical law governing quantum state reduction during measurement-like situations, \item[(vi)] a classical stress-energy tensor $T_{\mu\nu}(\Psi,\Lambda,x)$ defining the right-hand side in the semiclassical Einstein equations, depending on the quantum state $\Psi$ and the hidden variables. \end{enumerate} Already the proper definition of the Hilbert space (ii) is nontrivial, as is understanding the dynamics (iv), not to mention the consistent inclusion of backreaction (vi). Nonetheless, the biggest question mark is attached to the definition of the R-process (v), of which we know only the effect, namely that measurements result---at least to good approximation---in eigenstates with Born rule probabilities. In the case that there are hidden variables, the R-process must include the dynamical laws for $\Lambda$, e.\,g.\ the guiding equation for particle coordinates in the de Broglie-Bohm theory\cite{bohmSuggestedInterpretationQuantum1952a}. Of course, U- and R-processes will generally not be strictly separable but only limiting cases of a joint dynamics for all degrees of freedom in the model: $\ket{\Psi}$, $\Lambda$, and $g_{\mu\nu}$. For the purpose of this article, I distinguish three classes of semiclassical models.
\paragraph{$\psi$-ontic semiclassical gravity} refers to all models in which $T_{\mu\nu}(\Psi,x)$ is a well defined function of the state $\ket{\Psi}$, independent of the hidden variables (regardless whether there are hidden variables governing the R-process or whether the set of hidden variables is empty). Nonrelativistically, some functional of the wave function plays the role of a mass density; e.\,g.\ $\rho(t,\vec r) = m \abs{\psi(t,\vec r)}^2$ for a single particle in the M\o{}ller-Rosenfeld model based on the semiclassical Einstein equations~\eqref{eqn:sce}. \paragraph{Hidden variable semiclassical gravity} refers to models in which $T_{\mu\nu}(\Lambda,x)$ is primarily a function of the hidden variables (although it implicitly depends on the quantum state via the R-process). One might think that this occurrence of hidden variables $\Lambda$ is a mere philosophical peculiarity, and that any such model should reduce to a $\psi$-ontic one by including $\Lambda$ as an explicit function in the definition of the right-hand side $T_{\mu\nu}(\Psi,x)$. Nonetheless, the distinction is useful if it comes to the question of wave function collapse, as the existence of $\Lambda$ allows for a $\psi$-epistemic interpretation and only the dynamics of $\Lambda$, not those of the quantum state, must be compatible with principles of general relativity. \paragraph{Stochastic semiclassical gravity,} finally, refers to models where the right-hand side of Einstein's equations depends on the density matrix $\hat{\rho}$, or rather on quantities derivable from $\hat{\rho}$ as the outcomes of local measurements. The U- and R-processes then determine $\hat{\rho}$ which allows for the usual stochastic interpretation in terms of Born rule probabilities for measurement outcomes. With this definition, the stochastic models are clearly set apart from the $\psi$-ontic ones, in which the gravitational field can depend on properties of the state that in standard quantum mechanics are not measurable by any local measurement---specifically in nonlocally entangled states. Again, the difference is not obvious as long as one disregards wave function collapse and takes the traditional point of view in which density matrices are merely a statistical tool to keep track of the dynamics of an ensemble of pure states. When it comes to the operational perspective, in which the density matrix plays the central role of describing physical reality whereas the wave function becomes a mere tool of bookkeeping of an observer's knowledge, there is, however, a crucial difference. This is also the case in collapse models\cite{bassiModelsWavefunctionCollapse2013}, where the density matrix still obeys a linear and deterministic dynamical law defined by a Lindblad type master equation, whereas the evolution of the wave function follows a stochastic differential equation. In this case of a linear master equation, the density matrix based approach ensures compatibility with quantum mechanical predictions which avoids most paradoxes; it does however raise difficult questions of interpretation. Note that the notions of $\psi$-ontic, hidden variable, and stochastic models of semiclassical gravity refer solely to the way in which quantum matter is coupled to classical spacetime. The definition of $T_{\mu\nu}$ notwithstanding, the quantum mechanical interpretation as such can be different. 
For instance, there could be hidden variables within a $\psi$-ontic model, or an objective collapse as part of the R-process in a semiclassical model based on hidden variables. Needless to say, there are other possible definitions of the notions of both semiclassical and quantized gravity which may not agree with the ones adopted here---at least not for all models. For instance, it is often presumed that quantized gravity yields perturbative quantum gravity as its low energy limit; however, for quantized gravity as defined here, this is not a requirement. Similarly, recent proposals\cite{boseSpinEntanglementWitness2017,marlettoGravitationallyInducedEntanglement2017} to detect entanglement generation via gravity have attracted some attention. As far as the definitions used here are concerned, there is no conclusive argument that in the experimental scenarios at hand \emph{all} models of quantized gravity would result in an observation of entanglement, nor is there a compelling proof for separability of the respective equations of motion---i.\,e.\ no entanglement---in \emph{all} semiclassical models\cite{reginattoEntanglingQuantumFields2019,palExperimentalLocalisationQuantum2021,carneyNewtonEntanglementGraviton2021,donerGravitationalEntanglementEvidence}.
\subsection{Examples for semiclassical gravity models} The go-to example of a $\psi$-ontic model is of course to source gravity via the expectation value of the stress-energy operator according to the semiclassical Einstein equations. This model, independently proposed by M\o{}ller\cite{mollerTheoriesRelativistesGravitation1962} and Rosenfeld\cite{rosenfeldQuantizationFields1963}, has been studied extensively, especially in the nonrelativistic limit where it yields the Schrödinger-Newton equation\cite{diosiGravitationQuantummechanicalLocalization1984,penroseQuantumComputationEntanglement1998} and makes distinctive predictions\cite{giuliniGravitationallyInducedInhibitions2011,yangMacroscopicQuantumMechanics2013,grossardtOptomechanicalTestSchrodingerNewton2016,grossardtDephasingInhibitionSpin2021}. In fact, requiring consistency with principles of general relativity and the correct classical limit puts tight constraints on the possible choices of stress-energy tensors\cite{waldQuantumFieldTheory1994}. Whether any other consistent models of this type can be defined is unclear. Hidden variable models, despite being the main target of the inconsistency arguments I will address below, are not commonly discussed. Nevertheless, at least in the Newtonian limit one can easily define such a model based on the de Broglie-Bohm theory\cite{bohmSuggestedInterpretationQuantum1952a}. There, the motion of particles with coordinates $q$ is determined by the wave function through a guiding equation $\dot{q} = f[\psi](q,t)$. One can then simply use the particle coordinate $q$ in order to source a Newtonian gravitational potential which enters the Schrödinger equation for the wave function $\psi$. Unfortunately, the naive relativistic generalization is inconsistent\cite{struyveSemiclassicalApproximationsBased2020}. As far as the stochastic models are concerned, a fully general relativistic version of such a model has recently been presented by Oppenheim\cite{oppenheimPostquantumTheoryClassical2021}, not unlike the one introduced by Albers et al.\cite{albersMeasurementAnalysisQuantum2008} for scalar gravity. Rather than a single spacetime manifold, these models describe statistical ensembles of spacetimes.
Therefore, strictly speaking, they do not qualify as models of semiclassical gravity as defined here, but could potentially be regarded as a theory of (semi-)classical statistical mechanics for such models. On the other hand, one can obtain a genuine semiclassical model, which can also be endowed with a $\psi$-ontic interpretation, starting from collapse models.\cite{tilloySourcingSemiclassicalGravity2016} The stochastic collapse of the wave function renders the evolution of the density matrix linear, making it possible to calculate the ``signal'' of the mass distribution in analogy to weak measurements. This signal is then fed back into Einstein's equations as a source of spacetime curvature. At least that is the idea; so far only a nonrelativistic version has been constructed, as the consistent definition of relativistic collapse models\cite{bedinghamCollapseModelsRelativity2020} poses serious problems. The often-cited model by Kafri et al.\cite{kafriClassicalChannelModel2014} to describe the Newtonian interaction between two masses by local operations and classical communication can be considered a prototype of the Tilloy-Di\'osi collapse-based model\cite{tilloySourcingSemiclassicalGravity2016}. By itself, it does not qualify as a model for semiclassical gravity as per the definition here, as it does not provide a meaningful notion of classical spacetime.
\section{Refutation of Everettian semiclassical gravity}\label{sec:rho} When people argue for the necessity to quantize the gravitational field, one of the most common sources in which they find support is a letter by Page and Geilker\cite{pageIndirectEvidenceQuantum1981}. One of the more interesting facts about this work is that the results of what is best described as a student lab experiment were published by no less a journal than Physical Review Letters. This is even more astonishing, considering that their experiment rules out only a very specific case of stochastic semiclassical models: the Everettian one in which the wave function never collapses and the semiclassical Einstein equations~\eqref{eqn:sce} are used to source spacetime curvature. As there is no R-process but an entirely unitary dynamics, the equivalence between the pure state density matrix $\hat{\rho} = \ket{\Psi}\bra{\Psi}$ and the global state $\ket{\Psi}$ of all matter fields allows one to interpret this model also as a special case of the $\psi$-ontic models. The advances that came with the formalism of quantum mechanics as a statistical theory, based on the density matrix of a system and its evolution via a linear master equation, have resulted in a popular point of view from which quantum mechanics is fundamentally a stochastic theory, with the density matrix playing the central role and the wave function merely being a sometimes helpful tool. Nonetheless, the way the density matrix is introduced in quantum mechanics courses is mostly still based---at least implicitly---on the $\psi$-ontic view of pure Hilbert space states taking the fundamental role of describing physical reality and the density matrix representing stochastic ensembles of such pure states. With regard to semiclassical gravity, and specifically the semiclassical Einstein equations~\eqref{eqn:sce}, this raises the question of how to deal with different mixtures that represent the same probability distribution.
For example, the density matrix for a superposition
\begin{equation}\label{eqn:superpos} \ket{\psi_0} = \frac{1}{\sqrt{2}} \left( \ket{x_1} + \ket{x_2} \right) \end{equation}
of a massive particle in two positions $x_1$, $x_2$, represented in the Hilbert subspace basis $\{\ket{x_1},\ket{x_2}\}$, will decohere like
\begin{equation}\label{eqn:density-decoherence} \hat{\rho}_0 = \ket{\psi_0} \bra{\psi_0} = \frac{1}{2}\begin{pmatrix} 1&1\\1&1 \end{pmatrix} \quad\longrightarrow\quad \hat{\rho}_t \approx \frac{1}{2}\begin{pmatrix} 1&0\\0&1 \end{pmatrix} \,, \end{equation}
when coupled with some environment for a sufficient time $t$. In the nonrelativistic limit, the stress-energy operator reduces to the mass density operator $\hat{T}_{\mu\nu} \approx \hat{m}(x) = m \ket{x}\bra{x}$. Taking the expectation value with either density matrix then results in the same mass distribution $\rho(x) = \mathrm{Tr} \hat{\rho}_0 \hat{m}(x) = \mathrm{Tr} \hat{\rho}_t \hat{m}(x)$ for the initial and the decohered state. Regardless of decoherence, according to equation~\eqref{eqn:sce} the superposition would gravitate like an equal distribution of half the total mass at both positions. This would be in obvious contradiction to observations in many everyday situations. In case there were any doubts left, it has also been experimentally ruled out by Page and Geilker\cite{pageIndirectEvidenceQuantum1981}. However, only this specific version of semiclassical gravity, in which the uncollapsed global state (i.\,e.\ the density matrix for subsystems excluding the environment) acts as the gravitational mass density, is refuted by experiment. In this no-collapse situation, there is also no physical difference between the three categories of semiclassical gravity models introduced in the previous section. Disregarding the R-process, gravitational source terms based on the wave function or the density matrix can be substituted with each other, and the dependence on hidden variables becomes trivial. If, on the other hand, $\hat{\rho}_t$ is understood as a mixture of the pure classical states $\ket{x_1}$, $\ket{x_2}$ instead, one would not expect any deviation of the gravitational field from that of a classical point mass, and the outcome of the experiment becomes trivial. This is the reason why Page and Geilker's argument applies neither to the hidden variable models nor to the $\psi$-ontic ones (in the nontrivial case with an R-process): equation \eqref{eqn:density-decoherence} merely describes the ensemble of possible states, but semiclassical gravity is sourced by the concrete representative in said ensemble. In this case, in order to end up with such a mixture starting with the superposition state \eqref{eqn:superpos}, the entire state, including the environment, must undergo a nonlinear evolution (``collapse'')
\begin{equation}\label{eqn:collapse} \ket{\Psi}_0 = \ket{\psi_0} \otimes \ket{\text{env.}} \quad\longrightarrow\quad \ket{\Psi}_{t,i} = \ket{x_i} \otimes \ket{\text{env. for particle at } x_i} \end{equation}
with probabilities given by the Born rule $P_i = \abs{\braket{x_i}{\psi_0}}^2$. The right-hand side in Einstein's equations must be compatible with the collapse dynamics described by equation~\eqref{eqn:collapse} in the nonrelativistic limit. At the same time, the continuity equation $\nabla_\mu {T^\mu}_\nu = 0$, which follows from the vanishing of the covariant divergence of the Einstein tensor, must be obeyed.
This condition puts strong constraints on the dynamical laws underlying the wave function collapse, which are usually not satisfied by nonrelativistic collapse models.\footnote{An insightful discussion of the issue of energy conservation in semiclassical gravity is presented by Maudlin et al.\cite{maudlinStatusConservationLaws2020}, as was kindly revealed to me in a prudent referee report.} One could also justify an agnostic view of collapse, as it is accepted in most textbook formulations of quantum mechanics: the state of a system does not obey the Schr{\"o}\-din\-ger\ equation during a ``measurement''. Instead, one simply \emph{postulates} the state after measurement, and the Schr{\"o}\-din\-ger\ evolution law takes over again thereafter. Analogously, postulating a collapse according to equation~\eqref{eqn:collapse} and requiring the semiclassical Einstein equations~\eqref{eqn:sce} to hold before and after but not during the collapse\cite{okonWeightCollapseDynamical2018,maudlinStatusConservationLaws2020} would circumvent the issue. Such models can certainly be considered as ``semiclassical'' in some sense of the word, although they are not semiclassical according to the definition given before, which explicitly required the validity of Einstein's equations for some right-hand side. We conclude that a consistent deterministic theory of semiclassical gravity must achieve both: provide a description of how the right-hand side of Einstein's equations is determined for quantum matter \emph{and} a description of how the coupling to a macroscopic system results in a nonlinear dynamics that produces quasi-classical pure states with Born rule probabilities. Notably, the inclusion of the wave function collapse also clarifies the outcome in the thought experiment proposed by Feynman\cite{dewittRoleGravitationPhysics1957}. Kibble\cite{kibbleSemiClassicalTheoryGravity1981}, who introduced a similar thought experiment, already points out this connection between semiclassical gravity and measurement theory. Similar thoughts apply to the stochastic models, except that the reasoning is reversed. Although equation~\eqref{eqn:density-decoherence} does apply for these models, contradictions with the experiment can be excluded by modifying the right-hand side in Einstein's equations. In the case of the collapse-based model by Tilloy and Diósi\cite{tilloySourcingSemiclassicalGravity2016}, for instance, spacetime curvature is sourced by the signal $\erw{\hat{m}(x)} + \delta m(x)$ with some noise $\delta m$. The stochastic collapse of the wave function results in a gravitational field compatible with the actual measurement outcome, despite the density matrix still having the shape of equation~\eqref{eqn:density-decoherence}.
\section{Paradoxes of hidden variable semiclassical gravity}\label{sec:hidden} With the conclusion of the previous section, that semiclassical gravity needs to be accompanied by a description of measurement, we are left with two consistent possibilities, depending on the answer to the question of whether or not gravity ``can be used [...] to `collapse the wave function [...]'{}''\cite{eppleyNecessityQuantizingGravitational1977}. With the ability to collapse the wave function comes the capability to acquire which-path information. The concept of which-path information must implicitly assume this information to be about \emph{something} more than the wave function, which does not contain any information about which of the possible states a system will collapse into.
Hence, it is evident that the $\psi$-ontic point of view does not allow the acquisition of which-path information through gravitational observations. Instead, the gravitational field will contain information about the wave function in $\psi$-ontic models. In hidden variable models, on the other hand, the common degree of freedom $\Lambda$ determines both the gravitational interaction and the outcome of wave function collapse. These models, therefore, clearly allow for the acquisition of which-path information through the gravitational interaction. There are two types of paradoxes based on the acquisition of which-path information that have been discussed. Due to the above considerations, these do not pose any threat to the $\psi$-ontic models, whereas their relevance for the stochastic ones seems to depend somewhat on the concrete realization. Be that as it may, I will show that even in the case of the hidden variable models these paradoxes are easily resolved and pose no constraints on the set of possible models for semiclassical gravity. Note that for the subsequent discussion I use Planck units with $G = c = \hbar = 1$.
\subsection{Violation of position-momentum uncertainty relation}\label{sec:suba} Assume we could scatter a classical gravitational wave off a quantum particle. As classical waves are not required to obey the de Broglie relation between wavelength and momentum, we can choose a wave with $\lambda \ll 1/p$. If the deflection angle of this wave can be detected with sufficient precision, one can infer the position of the particle with negligible change of its momentum, thereby violating the uncertainty relation $\Delta x \, \Delta p > 1$. This has been presented by Eppley and Hannah\cite{eppleyNecessityQuantizingGravitational1977} as an argument for the necessity of quantizing the gravitational field---and has been refuted many times. Huggett and Callender\cite{huggettWhyQuantizeGravity2001} as well as Kiefer\cite{kieferQuantumGravity2007} discuss the implications of Eppley and Hannah's thought experiment and the necessity to quantize the gravitational field in great detail, whereas Albers et al.\cite{albersMeasurementAnalysisQuantum2008} give an explicit counter-example for a consistent hybrid quantum-classical theory (scalar gravity with a quantized scalar field) and argue that even in a hybrid theory uncertainty of the quantum observables induces uncertainty on the classical ones. Kent\cite{kentSimpleRefutationEppley2018}, on the other hand, presents a simple refutation of the second aspect of Eppley and Hannah's argument, namely that scattering of a gravitational wave off the wave function would result in the problems with causality to be addressed in section~\ref{sec:psi}. For the discussion here, I focus on the objections raised by Mattingly\cite{mattinglyQuantumGravityNecessary2005,mattinglyWhyEppleyHannah2006}, who points out that, at least with the parameters given by Eppley and Hannah, there are some experimental obstacles that are hard to overcome even in principle---not least that for the given values their detector would lie within a black hole---and in any case, ``it may be that the uncertainty relations \emph{can} be violated [because] they haven't really been tested in this way.''\cite{mattinglyQuantumGravityNecessary2005} Combining these two lines of thought, one can repeat Mattingly's analysis in a slightly more general way that is not restricted to the specific parameters chosen by Eppley and Hannah.
Digging into the details of the detection procedure outlined by Eppley and Hannah, one finds that a gravitational wave pulse is first generated by the collision of two massive objects of size $\lambda$ with a kinetic energy $E$. The wave scattered by the particle of mass $m$ at distance $r$ from the generation event carries the energy $E_\text{sc} \sim E^2 m^2 \lambda^{-1} r^{-2}$. By comparing the energy density from the scattered gravitational wave at a distance $R$ from the particle with the local gravitational energy density one finds that the amplitude of the gravitational wave at the detector can be expressed as $A \sim E m R^{-1} r^{-1}$. Between the two ends of an oscillator of size $2L \lesssim \lambda$, mass $M$, and frequency $\omega_0 \ll \omega$, this induces the differential force\cite{misnerGravitation1973} $F(t) = M \omega^2 L A \sin\omega t$, where we denote by $\omega = 2 \pi / \lambda$ the frequency of the wave. The result is a driven oscillation with frequency $\omega$, amplitude $L A$, and an oscillation energy $E_\text{osc} \sim M \omega^2 L^2 A^2 \sim M m^2 L^2 E^2 \lambda^{-2} R^{-2} r^{-2}$. The transition probability for such an oscillator is of the order of $E_\text{osc} / \omega_0$, implying that for detection one needs $N \sim \omega_0 / E_\text{osc}$ detectors with a total mass
\begin{equation} M_\text{tot} \sim \frac{M \omega_0}{E_\text{osc}} \sim \frac{\omega_0 \lambda^2 R^2 r^2}{m^2 L^2 E^2} \gtrsim \frac{\omega_0 R^2 r^2}{m^2 E^2} \gtrsim \frac{\omega_0 R^2}{m^2} \,, \end{equation}
where we require $E \lesssim r$ in the final step, as otherwise our particle would vanish in the singularity created during the generation of the gravitational wave. Eppley and Hannah argue that despite the proposed low value of $\omega_0$, the time of measurement can be made short because it suffices to detect whether the energy of one of the oscillators increased by $\omega_0$. However, as Mattingly\cite{mattinglyWhyEppleyHannah2006} notes, this can only be achieved if the oscillators are at a temperature $T \lesssim \omega_0$; otherwise the increase would not be resolvable against thermal fluctuations. On the other hand, due to the Hawking-Unruh effect\cite{hawkingParticleCreationBlack1975,unruhNotesBlackholeEvaporation1976} equipotential surfaces emit black body radiation\cite{wangSurfacesAwayHorizons2018} at a temperature proportional to the surface gravity. Hence, the oscillator cannot be at a temperature much lower than $T \sim (m + M_\text{tot})/R^2$ and we have
\begin{equation} M_\text{tot}\gtrsim \frac{T R^2}{m^2} \sim \frac{1}{m} + \frac{M_\text{tot}}{m^2} \,. \end{equation}
Considering the cases $M_\text{tot} > m$ and $M_\text{tot} < m$ separately, one finds that in both cases this implies $m \gtrsim 1$. As a minimum requirement to violate the uncertainty relation with this type of experiment, even in principle, we need a particle mass of at least $m_P \approx \unit{2 \times 10^{-8}}{\kilogram}$. The uncertainty relation has not been confirmed experimentally for masses that large. There is also no theoretical reason to believe that the uncertainty relation must hold for all parameter regimes, especially if one attempts to fundamentally modify the principles of quantum mechanics as in most approaches to semiclassical gravity.
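For orientation, the Planck-mass threshold quoted above is easily checked numerically. The following short sketch (using approximate values for the constants, inserted here purely for illustration) evaluates $m_P = \sqrt{\hbar c / G}$:
\begin{verbatim}
# Order-of-magnitude check of the bound m >~ 1 in Planck units.
# Constant values are approximate and inserted for illustration only.
hbar = 1.055e-34   # J s
c    = 2.998e8     # m / s
G    = 6.674e-11   # m^3 / (kg s^2)

m_planck = (hbar * c / G) ** 0.5
print(f"Planck mass: {m_planck:.2e} kg")   # ~ 2.18e-08 kg
\end{verbatim}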
The deeper reason why many consider this a convincing argument against semiclassical gravity, or any classical-quantum coupling, is that it allows for a way to access quantum information, i.\,e.\ properties of a quantum state which are not measurable by any experiment in orthodox quantum mechanics. This tremendous deviation from established principles can indeed result in paradoxes that are difficult to resolve; retrieving quantum information, however, is not sufficient. As I will discuss in section \ref{sec:psi}, one needs to make use of nonlocal entanglement and attempt signalling with the retrievable quantum information in order to bring semiclassical gravity to bay. Although Eppley and Hannah were the first to present a complete idea for a thought experiment, the argument is often attributed to the work of Bohr and Rosenfeld\cite{bohrZurFrageMessbarkeit1933}, which allegedly demonstrates a consistency argument that would necessitate the quantization of the \emph{electromagnetic} field. The objection that their argument may not apply to gravity\cite{baymTwoslitDiffractionHighly2009} misses the point. As Rosenfeld himself points out\cite{rosenfeldQuantizationFields1963}, the argument does not even mandate quantization in the electromagnetic case. What Bohr and Rosenfeld actually show is that the inconsistencies arising in a rather naive---and already in its definition inconsistent---treatment that mixes classical and quantum concepts are resolved \emph{if} one treats the electromagnetic field as properly quantized. Notably, this is not an \emph{if and only if}.
\subsection{Causality-violating acquisition of which-path information}\label{sec:subb} Assume Alice wants to make use of the gravitational interaction to send a message to Bob. Alice has at her disposal a sufficiently large mass $M$ which she can move from an initial position $a_0$ into different positions, say $a_1$ and $a_2$. Bob is in possession of a test mass $m$. By monitoring the position of $m$ for some time $\tau$, Bob will be able to tell from the final position $b_1$ or $b_2$ the position of Alice's mass, opening a channel for communication. Let $a_1 < a_2 < b_1 < b_2$ all be on the $x$-axis with $2 \Delta a = a_2 - a_1$ and $d \gg \Delta a$ the average distance between Alice and Bob. A multipole expansion of the Newtonian gravitational potential $\Phi(\vec r) = -\int \ensuremath{\mathrm{d}}^3 r' \rho(\vec r') \abs{\vec r - \vec r'}^{-1}$ at Bob's position around the average position of Alice's mass yields
\begin{equation} \Phi_{1,2} = -\frac{M}{d} \pm \frac{D_a}{d^2} - \frac{Q_a}{d^3} \pm \frac{O_a}{d^4} + \dots \,, \end{equation}
where $D_a = M \Delta a$, $Q_a = M \Delta a^2$, $O_a = M \Delta a^3$ are the \emph{virtual} gravitational dipole, quadrupole, and octopole moments\footnote{Belenchia et al.\cite{belenchiaQuantumSuperpositionMassive2018} point out that, due to conservation of the center of mass, there is no dipole moment if one considers the entire system of Alice's mass and the surrounding lab. In the symmetric situation considered here, there is also no contribution to $\Delta b$ from the quadrupole\cite{rydvingGedankenExperimentsCompel2021}. For the further discussion, we assume that Alice and Bob are both located within a sufficiently large laboratory together, such that there is a dipole contribution.
The generalization to situations where the leading order contributions stem from the quadrupole or octopole moments is straightforward.} associated with Alice's possible position choices (note that there is no real multipole in the classical case). Then we find $2 \Delta b = b_2 - b_1 \approx \tau^2 D_a / d^3$. If Bob can measure $\Delta b$ with a resolution $\delta$, then he can determine the state of Alice's mass in a time shorter than the travel time of a light signal from Alice to Bob, as long as $D_a > \delta d$. Obviously, in classical physics there is no way to actually send a signal faster than light. In order to send a signal, Alice's state must \emph{change} and the consequences of this change will be transmitted to Bob in the form of gravitational waves which only travel at the speed of light. Note that Bob's role is entirely passive; he is the recipient of the signal and his actions have no influence on Alice's system in return. In quantum mechanics, we are facing a slightly different situation\cite{mariExperimentsTestingMacroscopic2016}, as it is possible to find Alice's settings $a_1$ and $a_2$ in \emph{superposition}. Bob measuring $\Delta b$, on the other hand, is gaining information about the position of Alice's particle which, according to the complementarity principle in quantum mechanics, should decohere Alice's state. Contrary to the classical situation, Bob's action now does have a backwards influence on Alice's system. This opens the possibility for Alice to determine whether or not Bob has performed his measurement and thereby allowing some message to be sent from Bob to Alice. In order to detect if her state has decohered, Alice must perform some type of interference experiment which will take some time $T$. Faster-than-light signalling is possible as long as $T + \tau < d$. Considering the finite speed at which changes in the gravitational potential propagate does not help the situation; Alice can have her state readily prepared long before Bob even starts thinking about performing his measurement. The signalling from Bob to Alice is due to the nonlocal entanglement of their respective quantum states, together with the quantum mechanical description of collapse as an instantaneous effect on the wave function everywhere. One may, of course, ask whether an instantaneous collapse is not immediately inconsistent with any relativistic theory, and in fact, a Lorentz invariant description in which the collapse happens (at least) instantaneously in every frame should have the rather odd property that the collapse propagates along the backwards light cone of a measurement event. Nonetheless, the (for all practical purposes) instantaneity of the collapse has been confirmed in experiments\cite{stefanovQuantumCorrelationsSpacelike2002}. For lack of a better alternative, we assume the usual quantum mechanical description to apply: that Bob's acquisition of which-path information decoheres Alice's state even for spacelike separations. Replacing the dynamical equations (i.\,e.\ the Schr{\"o}\-din\-ger\ equation) with corresponding relativistic dynamics (e.\,g.\ quantum field theory) does not avoid the possibility of faster-than-light signalling. Taking the dynamical character of the gravitational interaction into consideration, nevertheless, resolves the paradox. This has been detailed by Belenchia et al.\cite{belenchiaQuantumSuperpositionMassive2018} for the case of quantized gravity and I will reiterate their arguments in a more general fashion. 
Let us first look at the constraints on Bob's resolution $\delta$. In order to observe the position of his test mass, Bob must in some way interact with it through the exchange of some particle with energy $E$ and wavelength $\lambda$, for instance, a photon being scattered off the test mass and reaching Bob's eye. The resolution is limited by the wavelength, $\delta > \lambda$. On the other hand, the resolution is limited by the scattering cross section, which must be larger than the Schwarzschild radius $r_S = 2 E$ of the particle. We have $\delta \gtrsim 2 E \geq 2 p = 4 \pi / \lambda > 4 \pi / \delta$, i.\,e.\ $\delta^2 \gtrsim 4 \pi$, which implies $\delta > 2 \sqrt{\pi} > 1$. The test mass position can only be determined up to Planck length precision, and with the considerations from above we find $D_a > d$ as the condition for faster-than-light signalling. In order to perform her interference experiment, Alice must bring the two states in spatial superposition back to one location, eliminating her dipole moment in time $T$ by an acceleration $\ddot{D}_a \sim D_a/T^2$. With the Larmor formula for gravitoelectromagnetism\cite{clarkGaugeSymmetryGravitoelectromagnetism2000} one finds that this amounts to a total gravitational energy $E \sim \ddot{D}_a^2 T \sim D_a^2 / T^3$ being radiated away in the form of gravitational waves. We can in principle gain which-path information from the emitted waves, resulting in a loss of coherence before Alice has the chance to finish her experiment. However, if some quantum system is used for the detection, this is only possible if the energy exceeds the threshold given by the time-energy uncertainty relation, $E T > 1$. Hence, Alice will be able to successfully perform her experiment as long as $D_a < T$. In order to send a signal faster than light, one then requires $D_a < T < d$, in contradiction to the requirement $D_a > d$ from the previous paragraph. We conclude that there is no possibility for faster-than-light signalling. Note that we did not require explicitly that the gravitational field be quantized. We only need it to be capable of carrying which-path information. If, in a semiclassical theory, gravity carries no such information, Alice's state will not decohere, neither from the emission of radiation \emph{nor} from Bob's measurement. In this case, there is no problem of signalling in the first place. Belenchia et al.\cite{belenchiaQuantumSuperpositionMassive2018} present the above argument only for the concrete example of perturbatively quantized gravity in analogy to quantum electrodynamics. The Planck length limit for Bob's resolution is then understood as a limit from vacuum fluctuations of the curvature tensor. The condition $D_a < T$, on the other hand, can be phrased as the emission of not even a single graviton of wavelength $T$, which is not an essentially different criterion from the one given above based on the emission of classical gravitational waves. A different point of view is the requirement that interference fringes should be at least a Planck length apart in order to be detectable\cite{rydvingGedankenExperimentsCompel2021}, which results in similar restrictions. The definiteness of the above analysis with regard to the impossibility of faster-than-light signalling can be doubted on the grounds that many of the relations are only approximate. Hence, one may ask whether, in settings where they are close to being satisfied, there could be possibilities to send a signal just above the speed of light, which would suffice for a claim of inconsistency.
There are also ideas\cite{bekensteinTabletopSearchPlanck2012} which would allow the mass center of a solid body to be detected with a precision $\delta < 1$, at least in principle, invalidating the arguments above, which assume that Bob's test mass is a point-like particle\footnote{Of course, one can simply postulate the Planck length as a fundamental limit for position measurements\cite{rydvingGedankenExperimentsCompel2021}.}. The essence of the argument, however, remains that it does not matter whether one quantizes the gravitational field or not; the limitations from vacuum fluctuations of curvature\cite{belenchiaQuantumSuperpositionMassive2018} are approximate and rely on the point particle property just as much as the classical arguments presented here. To the degree to which the former is evidence for consistency of quantized gravity, the latter should be regarded as evidence for the consistency of semiclassical gravity. The thought experiment \emph{did} require both an instantaneous collapse of Alice's state upon Bob's measurement and the validity of the complementarity principle. Had we found some violation of causality, we could have attempted to amend it by allowing for deviations from either or both principles. The outcome of the analysis shows, however, that the thought experiment is perfectly causal without the need for any such fundamental changes.
\section{Causality paradox in psi-ontic semiclassical gravity}\label{sec:psi} In the previous section, we learned that two of the commonly discussed paradoxes regarding semiclassical gravity are not paradoxical after all. First, they do not apply to the orthodox, $\psi$-ontic models; second, even in the cases where they do apply, specifically to the hidden variable models, they are easily resolved. We now address a more serious issue which arises (at least) for the $\psi$-ontic models. Assume we have a pair of entangled spin-$\tfrac12$ particles, with Alice and Bob each in possession of one of these particles, hence having access to half of the entangled state
\begin{equation} \ket{\Psi} = \alpha \ket{\uparrow}_a \otimes \ket{\downarrow}_b + \beta \ket{\downarrow}_a \otimes \ket{\uparrow}_b \,. \end{equation}
If Alice performs a spin measurement, this state collapses with probabilities $\abs{\alpha}^2$ and $\abs{\beta}^2$, respectively, into one of the two summands, resulting in an ensemble with density matrix
\begin{equation}\label{eqn:rhoc} \hat{\rho}_c = \abs{\alpha}^2 \ket{\uparrow \downarrow} \bra{\uparrow \downarrow\,} + \abs{\beta}^2 \ket{\downarrow \uparrow} \bra{\downarrow \uparrow\,} \,, \end{equation}
where we write $\ket{\uparrow \downarrow} = \ket{\uparrow}_a \otimes \ket{\downarrow}_b$ etc. If Alice does not perform the measurement, on the other hand, we find the density matrix of the pure state to be
\begin{equation}\label{eqn:rhop} \hat{\rho}_p = \hat{\rho}_c + \alpha \beta^* \ket{\uparrow \downarrow} \bra{\downarrow \uparrow\,} + \alpha^* \beta \ket{\downarrow \uparrow} \bra{\uparrow \downarrow\,} \,. \end{equation}
Bob, only being able to perform a measurement on his part of the state, must trace out Alice's degrees of freedom. The interference terms in the pure density matrix \eqref{eqn:rhop} then vanish and one ends up with the same reduced density matrix
\begin{equation} \hat{\rho}_b = \abs{\alpha}^2 \ket{\downarrow}_b \bra{\downarrow\,}_b + \abs{\beta}^2 \ket{\uparrow}_b \bra{\uparrow\,}_b \end{equation}
regardless of Alice's decision to (not) perform a measurement.
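The equality of Bob's reduced density matrix in the two cases can also be checked numerically. The following short sketch (a toy verification with arbitrarily chosen amplitudes, added here for illustration only) builds $\hat{\rho}_p$ and $\hat{\rho}_c$ in the product basis $\{\ket{\uparrow\uparrow}, \ket{\uparrow\downarrow}, \ket{\downarrow\uparrow}, \ket{\downarrow\downarrow}\}$ and compares their partial traces over Alice's spin:
\begin{verbatim}
import numpy as np

# Basis ordering |Alice, Bob>: |uu>, |ud>, |du>, |dd>
alpha, beta = 0.6, 0.8j          # arbitrary amplitudes with |a|^2 + |b|^2 = 1
psi = np.array([0, alpha, beta, 0], dtype=complex)   # a|ud> + b|du>

rho_p = np.outer(psi, psi.conj())                    # pure (uncollapsed) state
rho_c = (abs(alpha)**2 * np.diag([0, 1, 0, 0])       # collapsed ensemble
         + abs(beta)**2 * np.diag([0, 0, 1, 0]))

def trace_out_alice(rho):
    """Partial trace over Alice's spin (first factor of the tensor product)."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

print(np.allclose(trace_out_alice(rho_p), trace_out_alice(rho_c)))  # True
\end{verbatim}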
Introducing the basis labeling $\{\ket{i}\}_{i \in 1\dots 4} = \{\ket{\uparrow\uparrow}, \ket{\uparrow\downarrow}, \ket{\downarrow\uparrow}, \ket{\downarrow\downarrow}\}$, we can express any unitary time evolution by a unitary matrix such that $\ket{i} \to \sum_j U_{ij}(t) \ket{j}$, which induces an evolution law for the density matrices \eqref{eqn:rhoc} and \eqref{eqn:rhop}:
\begin{align} \hat{\rho}_c &\to \sum_{i,j} \left(\abs{\alpha}^2 U_{2i} U_{2j}^* + \abs{\beta}^2 U_{3i} U_{3j}^* \right) \ket{i}\bra{j} \\ \hat{\rho}_p &\to \sum_{i,j} \left(\alpha U_{2i} + \beta U_{3i}\right)\left(\alpha^* U_{2j}^* + \beta^* U_{3j}^*\right) \ket{i}\bra{j} \,. \end{align}
The difference between the reduced density matrices becomes
\begin{equation} \delta \hat{\rho}_b(t) = \mathrm{Tr}_a \left(\hat{\rho}_p - \hat{\rho}_c\right) = \alpha \beta^* \begin{pmatrix} U_{21} U_{31}^* + U_{23} U_{33}^* & U_{21} U_{32}^* + U_{23} U_{34}^* \\ U_{22} U_{31}^* + U_{24} U_{33}^* & U_{22} U_{32}^* + U_{24} U_{34}^* \end{pmatrix} + \text{adj.} \end{equation}
This expression is generally nonzero. However, if the evolution affects only Bob's state and leaves Alice's unaltered, the matrix $U$ becomes block diagonal and we find $\delta \hat{\rho}_b(t) = 0$. Hence, Bob is unable to distinguish between the collapsed ensemble $\hat{\rho}_c$ and the pure state $\hat{\rho}_p$ by any local experiment. Let us be more concrete and assume that Bob determines the spin by performing a Stern-Gerlach experiment, i.\,e.\ he subjects his particle to a magnetic field gradient for some short time $\tau_\text{acc}$ with a sign change after half that time, such that the spin state becomes entangled with the particle's position, $\ket{x_1}$ for the spin state $\ket{\uparrow}_b$ and $\ket{x_2}$ for the spin state $\ket{\downarrow}_b$. A local experiment at Bob's position then results in a time evolution
\begin{subequations}\begin{align} \ket{\uparrow x_2} &\to \int \ensuremath{\mathrm{d}} x \, a_2(x) \ket{\uparrow x} \\ \ket{\downarrow x_1} &\to \int \ensuremath{\mathrm{d}} x \, a_1(x) \ket{\downarrow x} \\ \ket{\Psi} = \alpha \ket{\uparrow x_2} + \beta \ket{\downarrow x_1} &\to \int \ensuremath{\mathrm{d}} x \, \left(\alpha \widetilde{a}_2(x) \ket{\uparrow x} + \beta \widetilde{a}_1(x) \ket{\downarrow x} \right) \,. \end{align}\end{subequations}
Bob's reduced density matrix in position space turns out to be
\begin{align} \hat{\rho}_{c,b}(x,y) &= \abs{\alpha}^2 a_2(x) a_2^*(y) + \abs{\beta}^2 a_1(x) a_1^*(y) \label{eqn:red-density-cb}\\ \hat{\rho}_{p,b}(x,y) &= \abs{\alpha}^2 \widetilde{a}_2(x) \widetilde{a}_2^*(y) + \abs{\beta}^2 \widetilde{a}_1(x) \widetilde{a}_1^*(y) \,, \label{eqn:red-density-pb} \end{align}
for the case of a collapsed wave function and the pure state $\ket{\Psi}$, respectively. In standard quantum mechanics, the linearity of the time evolution law requires $\widetilde{a}_{1,2} = a_{1,2}$ and, hence, ensures that the density matrices \eqref{eqn:red-density-cb} and \eqref{eqn:red-density-pb} are identical. In the orthodox semiclassical approach~\eqref{eqn:sce}, a generic state
\begin{equation} \ket{\chi} = \int \ensuremath{\mathrm{d}} x \, \left(\alpha \chi_\uparrow(x) \ket{\uparrow x} + \beta \chi_\downarrow(x) \ket{\downarrow x}\right) \end{equation}
results in the mass density distribution \begin{equation} \bra{\chi} \hat{m}(x) \ket{\chi} = m \braket{\chi}{x}\braket{x}{\chi} = m \abs{\alpha}^2 \abs{\chi_\uparrow(x)}^2 + m \abs{\beta}^2 \abs{\chi_\downarrow(x)}^2 \,.
\end{equation}
For the situation of interest, we find $\abs{\chi_\uparrow(x)}^2 \sim \delta(x-x_2)$ and $\abs{\chi_\downarrow(x)}^2 \sim \delta(x-x_1)$ and hence Newtonian potentials
\begin{subequations}\begin{align} V_2(x) &= -\frac{m^2}{\abs{x-x_2}} \\ V_3(x) &= -\frac{m^2}{\abs{x-x_1}} \\ V_\Psi(x) &= \abs{\alpha}^2 V_2(x) + \abs{\beta}^2 V_3(x) \end{align}\end{subequations}
for the states $\ket{2} = \ket{\uparrow\downarrow} = \ket{\uparrow x_2}$, $\ket{3} = \ket{\downarrow\uparrow}= \ket{\downarrow x_1}$, and $\ket{\Psi} = \alpha \ket{2} + \beta \ket{3}$, respectively. Ignoring the free spreading of the wave function as well as the self-gravitational effects of these potentials on the wave function, the states $\ket{2}$ and $\ket{3}$ are unaffected, whereas the superposition state experiences a shift
\begin{equation} \ket{\Psi} \quad \to \quad \ket{\Psi}_t = \alpha \ket{\uparrow \widetilde{x}_2} + \beta \ket{\downarrow \widetilde{x}_1} \end{equation}
with
\begin{equation}\label{eqn:deltax} \widetilde{x}_1 = x_1 - \abs{\alpha}^2 \delta x \,,\quad \widetilde{x}_2 = x_2 + \abs{\beta}^2 \delta x \,,\quad \delta x \approx \frac{m t^2}{2 \Delta x^2} \,, \end{equation}
for $\Delta x = x_1 - x_2 > 0$, without loss of generality, and assuming $\delta x \ll \Delta x$. Hence, we have $a_i(x) \approx \delta(x - x_i) \neq \widetilde{a}_i(x) \approx \delta(x - \widetilde{x}_i)$ and the density matrices \eqref{eqn:red-density-cb} and \eqref{eqn:red-density-pb} become distinguishable. They predict different outcomes for position measurements at Bob's particle: $x_1$ or $x_2$ versus $\widetilde{x}_1$ or $\widetilde{x}_2$ (in both cases with probabilities $\abs{\alpha}^2$ and $\abs{\beta}^2$). Although we focus on the result of the semiclassical Einstein equations here, other $\psi$-ontic models face the same issue. The gravitational potential is determined by the wave function, rendering the Schrödinger evolution nonlinear in the wave function. It has been argued\cite{gisinStochasticQuantumDynamics1989,bahramiSchrodingerNewtonEquationIts2014} that this distinguishability can be exploited to violate causality, because the reduction from \eqref{eqn:red-density-pb} to \eqref{eqn:red-density-cb} happens instantaneously upon measurement in standard quantum mechanics. In fact, this argument, that causality requires a linear evolution of the density matrix, is the very basis upon which the theoretical formalism of collapse models is founded. There are good reasons to question the conclusiveness of this claim. For instance, Kent\cite{kentNonlinearitySuperluminality2005,kentTestingQuantumGravity2021} has shown that a description of measurement based on the ``local state'' of a particle, i.\,e.\ its reduced density matrix conditioned on all the measurement outcomes in the past light cone, allows for nonlinear evolution without the possibility to signal faster than light. On the other hand, even if one \emph{does} believe that distinguishability between \eqref{eqn:red-density-cb} and \eqref{eqn:red-density-pb} poses a problem, one may ask whether the difference can ever be observed, even in principle, in any sort of experiment that would allow for faster-than-light signalling. Note that the possibility to signal is not (at least not entirely) due to the nonlinear gravitational interaction but due to the projective spin measurement and the induced instantaneous, nonlocal collapse of the wave function. Hence, the details of the collapse mechanism are likely to matter.
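To get a rough feeling for the size of this effect, one can restore SI units in equation~\eqref{eqn:deltax}, $\delta x \approx G m t^2 / (2 \Delta x^2)$, and insert some numbers. The parameter values in the following sketch are assumptions chosen purely for orientation and are not part of the original argument:
\begin{verbatim}
# Illustrative size of the semiclassical position shift, eq. (deltax)
# with G restored:  delta_x ~ G * m * t**2 / (2 * dx**2).
# All parameter values below are assumed for illustration only.
G  = 6.674e-11   # gravitational constant, m^3 / (kg s^2)
m  = 1e-15       # kg, roughly the mass scale discussed below
dx = 1e-6        # m, separation of the superposition
t  = 1.0         # s, duration of the experiment

delta_x = G * m * t**2 / (2 * dx**2)
print(f"delta_x ~ {delta_x:.1e} m")   # ~ 3e-14 m
\end{verbatim}
Even for these rather favorable numbers the shift is only a few tens of femtometres, which already indicates why the collapse dynamics discussed next becomes decisive.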
In order to actually resolve the position shift $\delta x$, it must be larger than the free spreading of the wave function due to position-momentum uncertainty: \begin{equation}\label{eqn:inequal-pos-mom} \delta x^4 \gtrsim \frac{t^2}{m^2} \sim \frac{\delta x \, \Delta x^2}{m^3} \quad \Rightarrow \quad m^3 \gtrsim \frac{\Delta x^2}{\delta x^3} \gg \frac{1}{\delta x} \,. \end{equation} This criterion can of course be satisfied simply by choosing a sufficiently large mass. However, if we account for a dynamical collapse of the wave function, a larger mass usually implies a faster collapse and we must take care that the superposition is maintained throughout the entire time $t$ of the experiment. Instead of a precise dynamical law, we consider some prototype of a collapse dynamics which resembles the ideas of Di\'osi\cite{diosiModelsUniversalReduction1989} and Penrose\cite{penroseGravityRoleQuantum1996}: whenever the wave function becomes wider than some collapse radius $r_c$, it collapses towards a position eigenstate at a rate determined by the time-energy uncertainty, $\tau E \sim 1$, where $E \sim m^2/r_c$ is the gravitational self-energy of the superposition state of size $r_c$. The radius $r_c$ is to be considered a free parameter---in Di\'osi's model it is a cut-off required to avoid divergences from localized mass densities, although we can also simply take it as a proportionality constant between the collapse rate and the squared mass, or even a function of mass itself. The collapse time $\tau$, on the other hand, follows from Penrose's argument that the uncertainty for the generators of time translation between the two spacetimes belonging to two classical states in superposition can be associated with the gravitational self-energy in precisely this way. If, then, we require that the superposition must be maintained throughout the experiment, i.\,e.\ $t < \tau$, we have \begin{equation}\label{eqn:rc-limit} r_c \sim m^2 \tau \gtrsim m^2 t \sim \sqrt{m^3 \delta x \, \Delta x^2} \gtrsim \frac{\Delta x^2}{\delta x} \gg \delta x \,, \end{equation} where we used the inequality \eqref{eqn:inequal-pos-mom} in the second to last and $\Delta x \gg \delta x$ in the last step. In conclusion, we can only observe a shift $\delta x$ that is \emph{below} the collapse radius $r_c$. How small can $r_c$ be? The strongest experimental constraints stem from levitated nanoparticles\cite{delicCoolingLevitatedNanoparticle2020} which require $r_c \gtrsim \unit{10^{-15}}{\meter}$ for masses of $m \sim \unit{10^{-17}}{\kilogram}$. If $r_c$ was in fact of this order of magnitude, according to the position-momentum uncertainty relation~\eqref{eqn:inequal-pos-mom} we would require a mass of at least $\unit{10^{-15}}{\kilogram}$ whose center-of-mass position we would need to resolve with femtometer precision. \begin{figure} \centering \includegraphics[scale=0.6]{plot.pdf} \caption{Exclusion plot for possible values of $r_c$. The dotted black line shows the curve $r_c = m^{-3}$, below which no detection of $\delta x$ is possible. The dashed blue line shows the curve $r_c^9 = m^{13}/\rho^4$ for the densest known element, osmium, below which no detection is possible for superpositions smaller than the particle radius, i.\,e.\ below the purple line. Any---possibly mass dependent---value of $r_c$ below the green shaded area would not allow detection before collapse and prevent faster-than-light signalling. 
The red dots show lower limits on $r_c$ from atomic fountain\cite{sugarbakerEnhancedAtomInterferometer2013}, matter wave\cite{eibenbergerMatterWaveInterference2013}, nanoparticle\cite{delicCoolingLevitatedNanoparticle2020}, and mechanical resonator\cite{oconnellQuantumGroundState2010} experiments, respectively. Neutron interferometry experiments\cite{colellaObservationGravitationallyInduced1975} would be in the far bottom left corner, excluded for better legibility. The red dotted line shows the recent limit on $r_c$ in the Di\'osi-Penrose collapse model\cite{donadiUndergroundTestGravityrelated2021}.} \label{fig:plot} \end{figure} Although the experiment is obviously difficult, we are not interested in a concrete realization. What matters is whether it is possible \emph{in principle}. Comparing equations \eqref{eqn:inequal-pos-mom} and \eqref{eqn:rc-limit}, we find that $r_c \gg 1/m^3$ must hold, in order to be able to acquire the necessary position shift due to semiclassical gravity before the wave function collapses. This implies that one needs a large mass for faster-than-light signalling, yet, quantum superpositions of large masses have been demonstrated and, in fact, mechanical resonators\cite{oconnellQuantumGroundState2010} achieve values which would lie above the $r_c \sim 1/m^3$ threshold. However, for massive particles the superposition size will generally be within the particle radius, $\Delta x < R$, and we must modify equation \eqref{eqn:deltax} to reflect the gravitational force between two overlapping spheres: $\delta x \sim \rho \,\Delta x \,t^2$ for mass density $\rho$. Equations \eqref{eqn:inequal-pos-mom} and \eqref{eqn:rc-limit} then read \begin{align} \delta x^9 &\gtrsim \frac{t^6}{m^6 \, \delta x^3} \sim \frac{1}{m^6 \, \rho^3 \, \Delta x^3} > \frac{1}{m^6 \, \rho^3 \, R^3} \sim \frac{1}{m^7 \rho^2}\\ r_c^9 &\sim m^{18} \tau^9 \gtrsim \sqrt{\frac{m^{36} \delta x^9}{\rho^9 \Delta x^9}} > \sqrt{\frac{m^{33} \delta x^9}{\rho^6}} > \frac{m^{13}}{\rho^4} \,.\label{eqn:rc-massive} \end{align} The two conditions \eqref{eqn:rc-limit} and \eqref{eqn:rc-massive} intersect at the particle radius $r_c = R$. The picture we are left with is illustrated in figure~\ref{fig:plot} as an exclusion plot for $r_c$ in terms of the mass. The dotted purple line shows the values for which $r_c$ equals the particle radius for an osmium\footnote{We choose osmium because it is the densest known element. Note that the effect of the density on the result is marginal, as long as one does not consider densities many orders of magnitude above those naturally occurring in solid bodies.} sphere of a given mass. For values above that line, the detection of $\delta x$ is limited by the criterion \eqref{eqn:rc-limit}, corresponding to the dotted black line; for values below, it is limited by \eqref{eqn:rc-massive}, the dashed blue line. Since the condition \eqref{eqn:rc-massive} poses a stricter criterion than $r_c = R$, the actual limitation on detection of $\delta x$ is set by the latter. 
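For orientation, the dotted black line $r_c = m^{-3}$ can be translated back into SI units as $m \gtrsim m_P\,(\ell_P / r_c)^{1/3}$. The following rough sketch (an order-of-magnitude check only, with approximate values for the Planck units) recovers the mass scale of roughly $10^{-15}\,$kg quoted above for $r_c \sim 10^{-15}\,$m:
\begin{verbatim}
# Rough SI translation of the threshold r_c >~ 1/m^3 (Planck units),
# i.e. m >~ m_P * (l_P / r_c)**(1/3).  Order-of-magnitude check only.
m_P = 2.18e-8    # Planck mass in kg (approximate)
l_P = 1.62e-35   # Planck length in m (approximate)

r_c = 1e-15      # m, the nanoparticle-scale value discussed above
m_min = m_P * (l_P / r_c) ** (1 / 3)
print(f"minimal mass for a detectable shift: {m_min:.1e} kg")  # ~ 5e-15 kg
\end{verbatim}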
Detecting the shift $\delta x$ and, therefore, being able to send a faster-than-light signal is possible only for values of $r_c$ in the green shaded area.\footnote{Note that, although the criteria \eqref{eqn:rc-limit} and \eqref{eqn:rc-massive} are satisfied in the area between the dashed blue and dotted purple lines for small masses below the dotted black line, in this area one would require a contradictory spatial resolution $\delta x > \Delta x$.} The plot also shows the lower limits put on $r_c$ by certain experiments. Any value $r_c$ or function $r_c(m)$ between the red dots and the green shaded area induces a collapse that is fast enough to prevent faster-than-light signalling and is compatible with observation. Levitated nanoparticles are possibly the preferable choice of experiment to exclude $r_c$ values that can avoid faster-than-light signalling. The recent limit\cite{donadiUndergroundTestGravityrelated2021} on the free parameter $r_c \gtrsim 0.54 \times 10^{-10}~$m of the Di\'osi-Penrose model\cite{diosiModelsUniversalReduction1989} from underground tests of radiation emission is plotted as a dotted red line. As a limit on the type of collapse proposed here, it must be taken with caution, because the experiment did not involve actual spatial superposition states of that size; and even this result cannot exclude non-signalling semiclassical gravity. The arguments presented here were specifically tailored to the gravitational coupling via the semiclassical Einstein equations~\eqref{eqn:sce}; one would need to repeat a similar analysis for other models to ensure their consistency. It may also be possible to construct experiments other than the one described here in order to exploit the nonlinear evolution for signalling. Be that as it may, based on the current state of observation, there is no reason to believe that semiclassical gravity \emph{necessarily} violates causality as long as it is accompanied by some collapse mechanism---as mandated by the Page and Geilker experiment.
\section{Summary and discussion}\label{sec:discussion} I have reviewed the consistency arguments that are most commonly raised against semiclassical gravity and have shown how all of them can be avoided if one accepts that a semiclassical theory of gravitation not only requires a coupling mechanism for quantum fields to spacetime curvature but must also provide a dynamical description of wave function collapse. The arguments for consistency presented here were not based on any elaborate model for wave function collapse but rather on ad-hoc expectations about some basic features of such a model. Whether a consistent, fully relativistic model compatible with general relativity and the semiclassical coupling exists remains an open problem. Nonetheless, the discussion shows that there is nothing preventing a fully consistent theory in principle. With the necessity of collapse in mind, there are three approaches one may take in order to define a quantum matter-gravity coupling: the $\psi$-ontic one, taking the wave function as an element of physical reality responsible for sourcing spacetime curvature; the hidden variable approach, postulating some novel, non-quantum degree of freedom; or stochastic models aiming at some ``minimally invasive'' way of modifying quantum mechanics.
In that last category of stochastic semiclassical gravity, the model by Tilloy and Di\'osi\cite{tilloySourcingSemiclassicalGravity2016} represents a natural way to source gravity \emph{if} one believes in the presence of a stochastic collapse with a linear master equation as in collapse models\cite{bassiModelsWavefunctionCollapse2013}. Besides the somewhat uncalled-for occurrence of said collapse, which assumes the introduction of some non-quantum stochastic field, the main challenge is the generalization to a relativistic model. Oppenheim's suggestion\cite{oppenheimPostquantumTheoryClassical2021} for a ``post-quantum'' semiclassical theory is formulated in a fully general relativistic fashion. However, as a stochastic model that describes ensembles of 3-manifolds rather than a single classical spacetime, it raises the question of whether there is an underlying microscopic theory---as in classical statistical physics. It is also not entirely clear whether it can avoid the paradoxes of semiclassical gravity (the full equations of motion are nonlinear and become linear for matter only after tracing out gravitational degrees of freedom), or the difficulties with self-energy and renormalization at high energies which render perturbative quantum gravity inconsistent. In the light of these arguments, the $\psi$-ontic models---first and foremost the M\o{}ller-Rosenfeld model---remain an interesting possibility despite breaking with many established concepts of quantum theory. One question to devote oneself to is the cause of the necessary dynamical wave function collapse. Although the collapse could simply be caused by an additional, external mechanism, more convincing would be an explanation within semiclassical gravity itself. The ingredients needed for a collapse are nonlinearity and stochasticity, of which the former is readily included in semiclassical gravity. The equations of motion of a semiclassical theory, on the other hand, are by default deterministic. The crucial question, therefore, is whether internal sources of randomness---for instance a random distribution of dark matter or a stochastic gravitational wave background---can provide boundary conditions that would result in stochastic behavior compatible with Born rule probabilities.
\begin{acknowledgments} I gratefully acknowledge funding by the Volkswagen Foundation. \end{acknowledgments}
\section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{Author Declarations} The author has no conflicts to disclose.
\section{Introduction}\label{sec:introduction}} \IEEEPARstart{S}{ince} the invention of writing over 5000 years ago, people have been recording a great variety of things in written form. Up until the 20\textsuperscript{th} century, writing was the primary way of storing information across generations. Consequently, manuscripts are a vital source of knowledge about the past, and for that reason of great interest to contemporary historical research. Once digitized, manuscripts are typically collected in archives \cite{vaticanLibrary, old_bailey, GenderAndWork}. Some are also transcribed \cite{old_bailey}, allowing for text-based searching and processing \cite{Hitchcock_Turkel_2016}. Others are collections of published data used in research \cite{GenderAndWork, pettersson2016histsearch}, though they are most commonly simply photographed \cite{vaticanLibrary}. Research in historical documents often consists of manually searching for small and scattered pieces of information in large amounts of digitized manuscripts. Finding where to look, or even which book to examine, can be time-consuming, as it is commonplace for a researcher to spend several months with a single book of a few hundred pages. It is a matter of looking for a black cat in a coal cellar. A need for a scalable solution exists, as a large part of the writings produced throughout history is yet to be studied.
\begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{kamnarsratten_vasteras2_witnet_cropped.png} \end{center} \caption{A search for the word "witnet" (eng. "the witness") on one page of the Snevringe court records dataset, displaying the top 25 results. The greener the bounding box, the closer it is to the search query. Note that the true number of occurrences of the word "witnet" is 6 and that many of the other results are parts of words that are similar to the query.} \label{fig:example_search} \end{figure}
Word spotting \cite{WordSpotting1996, giotis2017survey} is a way to address the problem of finding where to look. The task consists of locating and retrieving images of words given a user-supplied query, much like a regular text word search found in common text processors. Since its introduction, two principal modes of variation on the task have been proposed. The first is the input data format, where images can either be of full manuscript pages, known as \emph{segmentation-free} word spotting, or of pre-cropped individual word images, known as \emph{segmentation-based} word spotting. The second variation is the type of query. It may either be a cropped word image, also called \emph{Query-by-Example} (QbE), or a text string, also called \emph{Query-by-String} (QbS) or out-of-vocabulary word spotting. All else being equal, QbS is preferred over QbE as the user is not required to find an instance of the query before being able to search. In real-life practical settings, segmentation-based word spotting is not applicable: first, because the automatic segmentation of a manuscript into words is non-trivial in most handwritten documents; second, because the manual work of segmenting individual words is very time-consuming, almost to the same extent as transcribing a manuscript. An alternative to word spotting that has been successfully applied to manuscripts is crowdsourcing the transcription of a collection of manuscripts \cite{causer2012building, oldweather, AmazonTurk2011, BH2M2014}.
Crowdsourcing typically provides a good-quality transcription of a source material, enabling the use of text-based search and analysis tools. Nevertheless, there are some drawbacks with crowdsourcing, in particular with regard to historical manuscripts. They are often written in esoteric languages that require specialized skills to transcribe. Another drawback is a possible difficulty in attracting volunteers when working with unknown, prosaic source material that lacks the fame, prestige, and historical importance of successful crowdsourcing projects \cite{causer2012building, oldweather, BH2M2014}. Furthermore, crowdsourcing does not scale well compared to word spotting, and requires a priori identification of interesting manuscripts to select for a crowdsourcing project. Another related technology that one might naively try, but which falls short, is optical character recognition (OCR) \cite{MoriOCR1992}. As OCR was developed primarily for reading machine-printed text, it makes many limiting assumptions that are unrealistic for historical handwritten manuscripts. These include easily separated letters, minor variation in writing style, and a canonical orthography, none of which necessarily hold for historical manuscripts, and all of which limit which manuscripts can be studied. Some of the shortcomings of OCR technologies have been addressed with Handwritten Text Recognition (HTR), where many of the assumptions of OCR software have either been relaxed, or removed altogether. Much work has been done on HTR for handwritten manuscripts \cite{marti2001using, graves2009novel, pham2014dropout, bluche2017scan}, though HTR methods come with their own set of assumptions and restrictions. First, compared to word spotting, one typically needs a relatively large amount of training data to learn a model good enough to perform accurate recognition. Second, current HTR methods typically take text lines as input, requiring an initial text line segmentation step that potentially introduces uncorrectable errors. Considering the messy and compressed layout of many historical manuscripts, segmenting the lines is a challenging task. Third, HTR methods typically rely heavily on language models, which in turn require a large amount of training data that might not be available for the uncommon languages that many manuscripts are written in. Moreover, for the application detailed in this paper, a line-by-line transcription is not desirable, as the methodology is based on searching for certain keywords and then reading and interpreting the surrounding context. In this case, transcribing the text would only serve to enable keyword search, which is what word spotting does directly. \subsection{Contributions} The contributions of this paper include: \begin{enumerate} \item Two models for segmentation-free query-by-string word spotting are introduced: an end-to-end trainable model based on Faster R-CNN \cite{ren2017faster} and previous work \cite{wilkinson2015novel, wilkinson2016semantic}; and a simplified version that performs equally well or better in certain situations. \item Two novel data augmentation strategies for full manuscript pages, crucial for preventing model overfitting. \item Ablation studies that evaluate a set of model and training choices. \item An investigation into the performance of the Region Proposal Network \cite{ren2017faster} as applied to manuscript images. \item State-of-the-art results on four datasets, including some very limited data settings.
\item A case study conducted in collaboration with historians, where we apply our model to a collection of 64 volumes of court records used for contemporary historical research. The result is an increase in processing speed and data size by orders of magnitude compared to manual work. \end{enumerate} This paper is an extension of \cite{wilkinson2017neural}, where an initial version of this work was presented. Compared to the previous incarnation, we have: introduced a simplified model (Section \ref{sec:model_mini}); carried out an ablation study (Section \ref{sec:ablation}); improved upon previous results and extended experiments to two new datasets (Section \ref{sec:sota}); and extended the previous case study from 1 volume of court records to 64 (Section \ref{sec:case_study}), significantly increasing its scope, historical value, and complexity. \section{Related Work} The work in this paper is based on, and related to, a few different fields that we briefly review in this section. \subsection{Word spotting} \label{sec:ws_related_work} Since its introduction over 20 years ago, word spotting \cite{WordSpotting1996} in manuscript images has come a long way. The initial approach used template matching, with the image itself as the feature descriptor. Subsequent work introduced the idea of viewing the images as sequences of column features and applying sequence matching methods, in particular Dynamic Time Warping (DTW) \cite{kolcz2000line, RathManmantha2003, wahlberg2011data}, Hidden Markov Models (HMMs) \cite{rodriguez2009handwritten, fischer2012lexicon, rothacker2013bag}, and to a lesser extent, Recurrent Neural Networks (RNNs) \cite{frinken2012novel}. However, with the size of data ever increasing, the inefficiencies of sequence-based methods became prohibitive. As a result, there was renewed interest in compact, fixed-length representations that allow for fast Euclidean distance calculations. The authors of \cite{rusinol2011browsing} build Bag-of-Visual-Words (BoVW) \cite{csurka2004visual} features on top of HOG descriptors, whereas \cite{almazan2014segmentation, LSA_embedding} use SIFT descriptors. In \cite{almazan2014word}, Fisher Vectors \cite{perronnin2007fisher} were built on top of SIFT descriptors. Moreover, \cite{almazan2014word} and \cite{LSA_embedding} both allow for QbS word spotting using similar approaches. In \cite{LSA_embedding}, the authors create a textual descriptor based on character n-grams and use Latent Semantic Analysis \cite{deerwester1990indexing} to perform multi-modal fusion, mapping their visual BoVW and textual representations to the same space. In a similar vein, \cite{almazan2014word} use an attribute representation \cite{lampert2014attribute, farhadi2009describing} called the Pyramidal Histogram of Characters (PHOC) as the textual descriptor. They use Canonical Correlation Analysis (CCA) to learn a common subspace for the textual PHOC descriptor and the visual Fisher Vectors. As it turns out, the approach in \cite{almazan2014word} proved to be the stronger system, and the PHOC attribute representation has been widely adopted by the word spotting community; it has even been put to good use in lexicon-based text recognition \cite{poznanski2016cnn}. Since then, many methods extending it have been proposed. In \cite{ghosh2015query, Ghosh_word_spotting}, the approach in \cite{almazan2014word} is extended to the segmentation-free setting.
The method proposed in \cite{KrishnanDeepFeatureEmbedding} replaces the Fisher Vectors with a convolutional neural network (CNN), and the PHOC representation is learned using SVMs. Two methods were simultaneously proposed in which the two-step Fisher Vector and CCA approach is replaced with end-to-end trainable CNNs \cite{sudholdtPhocnet, wilkinson2016semantic}. This strand of work has been consolidated and improved upon in \cite{sudholt2017attribute, krishnan2018word}. A majority of the proposed handwritten word spotting methods assume that the words or text lines have been segmented, or that this is easily achieved. It turns out that for many manuscripts, especially historical ones, this is not a valid assumption. To remedy this, an increasing number of segmentation-free\footnote{Note that segmentation-free has, in the earlier OCR literature, referred to not segmenting the word into characters, rather than the manuscript into words.} word spotting methods have been proposed \cite{leydier2007text, rusinol2011browsing, rothacker2013bag, almazan2014segmentation, kovalchuk2014simple, Ghosh_word_spotting, rothacker2015segmentation}. \begin{figure*}[t!] \begin{center} \includegraphics[width=0.99\linewidth]{system_overview_v7.pdf} \end{center} \caption{The Ctrl-F-Net model at test time. Given an input image, it is fed through the first CNN of the model and Dilated Text Proposals (DTP) are extracted. These are then fed into the localization layer, where additional text proposals are extracted using a Region Proposal Network (RPN), followed by non-max suppression. The RPN proposals are added to the DTP proposals and fed through the Bilinear Interpolation layer, giving fixed-length descriptors for each proposal. The proposals are fed through a second CNN, and each box's coordinates are fine-tuned, a wordness score is assigned, and a descriptor is extracted. Finally, a second non-max suppression is applied, resulting in a large number of region proposals, typically 2-3 times the number of ground truth boxes.} \label{fig:ctrlf_net} \end{figure*} Segmentation-free word spotting approaches can typically be placed into two broad categories based on how they generate regions from within an image. The first category consists of methods based on a sliding window approach \cite{rusinol2011browsing, almazan2014segmentation, Ghosh_word_spotting, rothacker2015segmentation, rothacker2013bag}, where regions are generated at positions along a regular grid over a manuscript page. The grid is usually either on the pixel level or on top of densely extracted features. This approach is common in the QbE setting, as the size of the query image in pixels is known, allowing for constraints on the sizes of generated regions \cite{rusinol2011browsing, almazan2014segmentation, Ghosh_word_spotting, rothacker2013bag}. For QbS, the width and height of the region need to be estimated given the query \cite{rothacker2015segmentation}. The main drawback of these methods is the large number of regions generated, resulting in false positives and long processing times \cite{kovalchuk2014simple}. Additionally, due to a lack of attention, sliding window techniques are not robust against small shifts of the input, as a shift leads to a misalignment of regions compared to those extracted from the un-shifted input. The second category consists of methods using connected components \cite{leydier2007text, ghosh2015query, kovalchuk2014simple, rothacker2017word, ghosh2018text}.
In \cite{leydier2007text}, connected components in the shape of vertical strokes are extracted by performing mathematical morphology to separate characters. A popular approach is to binarize the image, extract connected components, group them in a bottom-up fashion using heuristics, and finally extract bounding boxes \cite{kovalchuk2014simple, ghosh2015query}. A similar approach is used in \cite{krishnan2016matching} for matching entire documents using distributions of word images. A combination of the two approaches is used in \cite{rothacker2017word}. Here, extremal regions are extracted on top of text presence scores computed using a sliding window. A related approach is applied to word segmentation in \cite{wilkinson2015novel}, where morphological closing with a variety of kernel sizes is used to connect characters into words and extract bounding boxes. These methods still over-segment the manuscript, but the number of proposals is typically smaller than for sliding window based methods. They are also more robust to small input shifts, as the connected components provide a kind of attention. The downsides include a sensitivity to physical degradations like ink blotches and difficulty with densely written manuscripts. Two object detection approaches were recently evaluated for the task of word segmentation in historical manuscripts \cite{moysset2018learning}. The first was YOLO \cite{redmon2016you}, which did not yield any successful results; this was attributed to its assumption of detecting only one object per spatial cell. This is a reasonable assumption for natural images with several objects, but not for manuscript images with hundreds of words to detect. The second object detector was Multibox \cite{erhan2014scalable}, which managed to generate decent results, but was outperformed by the method proposed in \cite{moysset2018learning}. A recent approach that can be seen as a hybrid between sliding windows and connected components is presented in \cite{axler2018toward}. Here, the authors use a ResNet encoder-decoder network to produce a heatmap denoting each pixel's probability of being inside, outside, or near a bounding box. The heatmap is smoothed using a smoother network before being fed to a proposal generation network that produces bounding box coordinates, which are subsequently filtered using a proposal filter network. \subsection{Attribute Representations and Label Embeddings} Modern word spotting methods are closely related to two common tasks in computer vision. The widely adopted PHOC representation \cite{almazan2014word} is an attribute representation where each dimension corresponds to the presence or absence of a character in a part of a word. In the case of word spotting, the attributes are then used to retrieve words with similar attributes. Attribute representations have been successfully used in zero-shot learning \cite{lampert2014attribute, farhadi2009describing}. The recently introduced DCToW embedding for word spotting \cite{wilkinson2016semantic} is similar to PHOC in that it is hand-engineered, but instead of binary attributes it is a low-frequency, real-valued representation of a text string. It shares a greater similarity with the Spatial Pyramid of Characters introduced for text recognition in \cite{rodriguez2015label}, which is similar to PHOC except that character occurrences are counted rather than encoded as a binary presence/absence.
The attribute and label embedding representations allow seamless retrieval of words not present in the training data, known as zero-shot learning. The same technique is used for multi-label image classification \cite{chollet2016information} and text recognition \cite{rodriguez2015label}. Moreover, word spotting shares similarities with approaches for multi-modal embeddings for zero-shot learning \cite{frome2013devise, socher2013zero}, with the difference that the text modality has a fixed embedding. \subsection{Scene Text Recognition} The model proposed in this paper is similar to work in end-to-end scene text detection and recognition, which has been receiving increasing attention \cite{jaderberg2016reading, neumann2016real, li2017towards, busta2017deep, liu2018fots}. In \cite{jaderberg2016reading}, an end-to-end system for text localization, recognition and retrieval based on region proposals and CNNs is proposed. Another approach is taken in \cite{neumann2016real}, where characters are detected and grouped together in a bottom-up fashion to build words and text lines. Two similar approaches \cite{busta2017deep, li2017towards} use region proposal networks (RPNs) to generate candidates for text regions and then transcribe them. The difference lies in that \cite{busta2017deep} uses the Connectionist Temporal Classification (CTC) loss \cite{graves2006connectionist} to decode a region, whereas \cite{li2017towards} employs a Recurrent Neural Network (RNN). Similarly, \cite{liu2018fots} makes use of an RPN and a CTC loss, but with a novel ROIRotate operation that maps arbitrarily oriented region proposals to axis-aligned feature maps. Our proposed models are similar to the RPN-based scene text recognition models, the greatest difference being that we learn to embed word images in a word embedding space, whereas they perform text recognition. \section{The Models} In this section we introduce the two models used in this paper. The first is the Ctrl-F-Net that we introduced in the previous version of this paper \cite{wilkinson2017neural}. The second model is a simplified version of Ctrl-F-Net that we call Ctrl-F-Mini. In Section \ref{sec:ablation}, we provide results from a series of ablation studies investigating the performance of the two models. \subsection{Ctrl-F-Net} The model we propose is a deep convolutional neural network inspired by previous work on object detection \cite{ren2017faster}, dense image captioning \cite{densecap} and segmentation-based word spotting \cite{wilkinson2016semantic}. We call it Ctrl-F-Net, named after the well-known shortcut for word search in many word processors. It is a model that simultaneously proposes and scores word candidate region proposals and embeds them into a word embedding space, wherein a search can be performed. The input to the model is a full manuscript page. The output is a set of bounding box region proposals, scores that correspond to the probability of each proposal containing a word, and an embedding for each proposal. Optional external region proposals \cite{wilkinson2015novel} can be added as inputs and used during both training and testing. It turns out that this increases performance, see Section \ref{sec:experiments}. A total of five loss functions are used, two in the middle of the model and three towards the end, which lets the model learn all the tasks at hand. Figure \ref{fig:ctrlf_net} contains an overview of the model. A grayscale input image of size $H \times W$ is first resized so that $\max(H, W) = 1720$, while keeping the aspect ratio intact.
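For concreteness, a minimal sketch of this preprocessing step is given below. The function name and the use of PIL are illustrative assumptions only, not details of the released implementation.

\begin{verbatim}
from PIL import Image

def preprocess_page(path, max_side=1720):
    """Load a page as grayscale and resize its longest side
    to max_side pixels while keeping the aspect ratio."""
    img = Image.open(path).convert("L")  # grayscale
    w, h = img.size
    scale = max_side / float(max(w, h))
    size = (int(round(w * scale)), int(round(h * scale)))
    return img.resize(size, Image.BILINEAR), scale
\end{verbatim}

The returned scale factor can later be used to map predicted bounding boxes back to the original image resolution.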
The resized image is then fed through several layers of a CNN, until it has been spatially downsampled by a factor of 8. As the input is a full manuscript page, a 34-layer pre-activation ResNet \cite{he2016identity} is used as the CNN architecture due to its small memory footprint while still achieving high performance. The feature maps are then fed through a \emph{localization module}, which consists of: i) an RPN that generates region proposals and corresponding scores; ii) a non-max suppression (NMS) step to remove redundant, overlapping proposals; iii) a resizing layer that produces fixed-size outputs from variable-size inputs; and iv) loss functions for the region proposal coordinates and scores. \begin{figure*}[t!] \begin{center} \includegraphics[width=0.99\linewidth]{ctrflnet_mini.pdf} \end{center} \caption{Ctrl-F-Mini at test time. Given an input image, it is fed through the first CNN of the model and Dilated Text Proposals (DTP) are extracted. These are then fed through the bilinear interpolation layer, which resizes boxes to a fixed size. The proposals are fed through the rest of the CNN, and a wordness score and a descriptor are extracted. Finally, non-max suppression is applied.} \label{fig:ctrlf_net_mini} \end{figure*} The input feature maps are first fed to an RPN, with one branch that regresses $K=15$ anchor boxes using a convolutional layer with $4K$ output channels. To accommodate the varying aspect ratios of words, the anchor boxes have the sizes $\{20, 40, 60\} \times \{ 30, 90, 150, 210, 300\}$, in pixels. A second parallel branch predicts \emph{wordness} scores for each box, which represent the probability that a box is situated atop a word. The boxes and scores are then combined and NMS is applied. The RPN can be seen as falling into the category of sliding window based segmentation-free word spotting methods. Proposals with an Intersection-over-Union (IoU) overlap greater than 0.75 with a ground truth bounding box are considered positives, and proposals with an IoU lower than 0.4 are considered negatives. Proposals that fall in between are ignored. A total of 256 proposals, 128 positives and 128 negatives, are sampled and used to calculate the mid-network wordness and regression losses. The boxes of varying size sampled from the RPN are then resized using Bilinear Interpolation \cite{jaderberg2015spatial, densecap} to a fixed output size of $8 \times 20$ pixels. They are then fed through the rest of the CNN and used as input to three parallel branches. The first branch is a fully connected (FC) layer with 4 outputs that refines the box coordinates by regressing them once again. Similarly, the second branch is an FC layer with a single output that predicts the final wordness score. The third branch is a small FC embedding network with 2 hidden layers that computes the final embedding. For the mid-network and output wordness scores, we use a binary logistic loss. The bounding boxes are parameterized according to \cite{girshick2015fast}, both for the anchor box regression and the output box regression. The boxes are represented as quadruples $(x_c, y_c, w, h)$, where $x_c$ and $y_c$ are the center of a box and $w$ and $h$ are its width and height. The functions to learn are normalized translation offsets for $x$ and $y$ and log-space scaling factors for $w$ and $h$.
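To make the anchor construction and the box parameterization concrete, the following sketch generates the $K=15$ anchors on the downsampled grid and encodes a ground truth box relative to an anchor. It is an illustration under our own assumptions---in particular, that the first set of sizes are heights and the second widths, and that anchors are centered on feature map cells---and not the code of the released implementation.

\begin{verbatim}
import numpy as np

HEIGHTS = [20, 40, 60]            # assumed anchor heights (pixels)
WIDTHS = [30, 90, 150, 210, 300]  # assumed anchor widths (pixels)

def make_anchors(feat_h, feat_w, stride=8):
    """Place all 15 anchor shapes at every cell of the feature map
    (downsampled by a factor of 8), as (xc, yc, w, h) in image pixels."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            xc, yc = (x + 0.5) * stride, (y + 0.5) * stride
            for h in HEIGHTS:
                for w in WIDTHS:
                    anchors.append((xc, yc, w, h))
    return np.array(anchors, dtype=np.float32)

def encode_box(gt, anchor):
    """Regression targets: normalized center offsets and
    log-space scaling factors, following Girshick et al."""
    gx, gy, gw, gh = gt
    ax, ay, aw, ah = anchor
    return np.array([(gx - ax) / aw, (gy - ay) / ah,
                     np.log(gw / aw), np.log(gh / ah)], dtype=np.float32)
\end{verbatim}

The regression branches predict these four targets per box, and the predicted offsets are inverted at test time to obtain refined box coordinates.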
The loss is a smooth $l_1$ loss, a special case of the Huber loss \cite{huber1964robust} \begin{equation} L_{reg}(x_i, t_i) = \left\{ \begin{array}{ll} \frac{1}{2} (x_i - t_i)^2 &\textnormal{if } |x_i - t_i| < 1 \\[5pt] |x_i - t_i| - \frac{1}{2} &\textnormal{if } |x_i - t_i| \geq 1 \\ \end{array} \right. \end{equation} where $x_i$ is one of $\{x_c, y_c, w, h\}$ and $t_i$ is its corresponding target. The embedding branch is a fully-connected network with two hidden layers of size 4096, with batch normalization \cite{ioffe2015batch} after each layer followed by the hyperbolic tangent activation function. The final layer is an $l^2$-normalization layer. It only receives the regions labelled as positive as input. As the loss function for the embedding network, we use the Cosine Embedding loss that has successfully been used in segmentation-based word spotting \cite{wilkinson2016semantic}. It is defined as \begin{equation}\label{eq:cosine_embedding_loss} L_{emb}(\mathbf{u}, \mathbf{v}, y) = \left\{ \begin{array}{ll} 1 - \frac{\mathbf{u}^\mathsf{T}\mathbf{v} }{||\mathbf{u}|| \cdot ||\mathbf{v}||} &\textnormal{if } y = 1 \\[6pt] \max(0, \frac{\mathbf{u}^\mathsf{T}\mathbf{v} }{||\mathbf{u}|| \cdot ||\mathbf{v}||} - \gamma) &\textnormal{if } y = 0 \end{array} \right. \end{equation} where $\mathbf{v}$ is the embedding of a positive region proposal, $\mathbf{u}$ is a ground truth embedding, and $\gamma$ is a margin. If $y=1$, $\mathbf{v}$ and $\mathbf{u}$ match, and they are moved closer together. If $y=0$, they do not match and $\mathbf{v}$ and $\mathbf{u}$ are moved further apart. The total loss function is a weighted linear combination of the five losses \begin{align} \begin{split} L_{tot} = 10^{-2} &\cdot (L_{rpn\_reg} + L_{rpn\_score}) + \\[5pt] 10^{-1} &\cdot (L_{reg} + L_{score}) + 3 \cdot L_{emb} \end{split} \end{align} \subsection{Ctrl-F-Mini}\label{sec:model_mini} In order to evaluate different model choices, in particular the source of region proposals, we introduce the Ctrl-F-Mini model. The main difference to the full model is the removal of the region proposal network, which in turn renders other parts of the model unnecessary. From the localization module, only the bilinear interpolation remains. Towards the end of the network, the final box scoring and embedding branches are kept. Ctrl-F-Mini is trained using external region proposals. The mid-network box scoring and regression losses and the final box regression loss are removed. The reduced model can be seen in Figure \ref{fig:ctrlf_net_mini}. This results in a greatly simplified model, which takes about a quarter of the time to train. Furthermore, inference becomes a lot faster as fewer region proposals are used, and the computations related to the RPN (in particular the non-max suppression over several hundred thousand proposals) are not performed. \subsection{Querying} \label{sec:querying} During inference, a manuscript page and $N_1$ optional external region proposals are fed through the model, which outputs: an $N \times 4$ matrix of region proposals; an $N \times D$ matrix of descriptors, where $D$ is the dimensionality of the word embedding that is used; and an $N\textnormal{-dimensional}$ vector of wordness scores, where $N = N_1 + N_2$ for Ctrl-F-Net. We typically set $N_2 = N_1$ on a page-by-page basis ($N = N_1$ for Ctrl-F-Mini). We then threshold the wordness scores, only keeping proposals with a score $>t_s$, followed by an NMS step using an overlap threshold $t_{nms}$.
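The score thresholding and non-max suppression just described can be sketched as follows. This is a standard greedy NMS written for illustration; the function names, the corner-format box convention, and the default thresholds are our own placeholders (the actual values of $t_s$ and $t_{nms}$ are selected by grid search, as described in the evaluation).

\begin{verbatim}
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def filter_proposals(boxes, scores, t_s=0.5, t_nms=0.4):
    """Keep proposals with wordness score > t_s, then greedy NMS at t_nms."""
    keep = scores > t_s
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(-scores)
    selected = []
    while order.size > 0:
        best = order[0]
        selected.append(best)
        overlaps = iou(boxes[best], boxes[order[1:]])
        order = order[1:][overlaps <= t_nms]
    return boxes[selected], scores[selected]
\end{verbatim}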
Once a query is selected, either by cropping a part of an image for QbE or by providing a text string for QbS, it is first transformed to the word embedding space. Then the cosine distance is used to compare the query to each region proposal, and the proposals are sorted w.r.t. their similarity to the query. Using the similarity to the query as a score, we perform a final NMS step with the overlap threshold set to 0. \subsection{Word Embeddings} In the recent word spotting literature based on word embeddings, two embeddings have been the most successful, see Figure \ref{fig:embeddings}. The first, and by far the most popular, is the Pyramidal Histogram of Characters, or PHOC \cite{almazan2014word, krishnan2018word, sudholt2017attribute, ghosh2018text}. Provided a text string, the number of pyramid levels, and an alphabet of length $K=36$ (we use the digits 0-9 and lower-case letters for all experiments), construct a binary occurrence vector for each sub-word in each level of the pyramid (we use pyramid levels 1-5), and concatenate them. The resulting vector $\mathbf{u}_p$ is a $36 \cdot (1 + 2 + 3 + 4 + 5) = 540$ dimensional binary vector, i.e., $\mathbf{u}_p \in \{0, 1\}^{540}$. The earliest papers using PHOC augmented the alphabet with the most common bi-grams for a particular language. Subsequent work has achieved better results without them while keeping the embedding language agnostic, and we do the same. The second embedding is the Discrete Cosine Transform of Words (DCToW), recently introduced in \cite{wilkinson2016semantic}. It is a low-frequency, distributed representation of a word that has recently achieved state-of-the-art results in segmentation-based word spotting. Given a word of length $m$ and an alphabet of length $K$, first build an $m \times K$ one-hot matrix representation. Then apply the Discrete Cosine Transform (DCT) along the word-length dimension, i.e., to each column of the matrix. Finally, keep the first $r$ DCT components of each column, and flatten the resulting $r \times K$ matrix into an $r \cdot K$ dimensional vector, $\mathbf{u}_d$. Following \cite{wilkinson2016semantic}, we set $r=3$, making $\mathbf{u}_d \in {\rm I\!R}^{108}$. Words that are shorter than $r$ characters are padded with zeros to get the correct length. \begin{figure}[t!] \begin{center} \includegraphics[height=0.37\linewidth]{dctow.pdf} \\ \vspace{0.5cm} \includegraphics[height=0.37\linewidth]{phoc2.pdf} \end{center} \caption{The two word embeddings evaluated in this paper, DCToW (top) and PHOC (bottom). Note that we only show 3 of the 5 levels of the PHOC embedding here.} \label{fig:embeddings} \end{figure} \subsection{Dilated Text Proposals} The ideal case for a segmentation-free word spotting system is maximizing the recall while keeping the number of proposals as low as possible. Referring back to the distinctions made in Section \ref{sec:ws_related_work}, the RPN falls into the category of sliding window approaches. As such, a likely improvement in recall can be achieved by using a complementary external region proposal method based on connected components. We use the approach from \cite{wilkinson2015novel}, which we call Dilated Text Proposals (DTP) for the sake of clarity. Given a grayscale image, DTP first creates a set of $j$ binary images by thresholding at $j$ different multiples of the image mean value. It then applies morphological closing to each binary image using a set of $l$ generated rectangular kernels. For each of the $j \cdot l$ resulting images, the connected components are found, a bounding box is extracted for each connected component, and duplicate boxes are removed.
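A minimal sketch of the DTP procedure is given below, assuming OpenCV. The threshold multiples and kernel sizes are illustrative placeholders only, not the values used in \cite{wilkinson2015novel}, and we assume dark ink on a lighter background.

\begin{verbatim}
import cv2
import numpy as np

def dilated_text_proposals(gray,
                           multiples=(0.7, 0.8, 0.9),            # j illustrative thresholds
                           kernels=((5, 3), (11, 5), (21, 7))):  # l illustrative (w, h) kernels
    """Sketch of DTP: threshold at multiples of the image mean,
    close with rectangular kernels, collect component boxes."""
    boxes = set()
    mean = gray.mean()
    for m in multiples:
        binary = (gray < m * mean).astype(np.uint8)  # assumes dark ink
        for (kw, kh) in kernels:
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kw, kh))
            closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
            n, _, stats, _ = cv2.connectedComponentsWithStats(closed)
            for i in range(1, n):  # label 0 is the background
                x, y, w, h = stats[i, :4]
                boxes.add((int(x), int(y), int(x + w), int(y + h)))
    return sorted(boxes)
\end{verbatim}

Each returned tuple is a candidate box in $(x_1, y_1, x_2, y_2)$ form; deduplication here is exact, while near-duplicates are later handled by the NMS steps described above.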
\section{Data Augmentation} As we are operating in a small data setting (as few as 5 manuscript pages for training), data augmentation is crucial to prevent severe overfitting on the training data. We propose two complementary ways of augmenting entire manuscript pages that we call \emph{full-page} and \emph{in-place} augmentation. The two techniques are visually compared in Figure \ref{fig:data_aug}. Full-page augmentation allows us to control the distribution of word classes, which is important for learning a discriminative word embedding. It works by uniformly sampling word images from the training set, augmenting them, and placing them row-by-row on a background canvas. We adopt the affine and grayscale morphology augmentation from \cite{wilkinson2016semantic}. The canvas is created by uniformly sampling a background colour from an interval centered on the median of all images in the training set and adding Gaussian noise. The finished augmented page looks like a left-aligned manuscript of randomly sampled word images. In-place augmentation is designed to keep the overall look of the page intact, while still providing some useful variation in writing style. Ideally, this helps the model generate and score region proposals, while still providing variation for learning the word embeddings, although without control over class distributions. For a given manuscript page, we iterate through the ground truth bounding boxes and augment each word image in-place. We apply a shearing transform followed by grayscale morphological dilation or erosion, while ensuring that the output has the same size as the input so that it can be slotted back into place. \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\linewidth]{in_place_augmentation.png} \hspace{0.2cm} \includegraphics[width=0.45\linewidth]{full_page_augmentation.png} \end{center} \caption{A visual comparison between in-place (left) and full-page augmentation (right). In-place augmentation provides style variation, while full-page augmentation allows us to control word class distributions.} \label{fig:data_aug} \end{figure} \section{Experiments}\label{sec:experiments} Here, we perform the main quantitative evaluation of our models, including an ablation study, on four widely used benchmarks for word spotting. \textbf{The George Washington (GW) Dataset \cite{lavrenko2004holistic}} was written in English in the middle of the 18\textsuperscript{th} century by George Washington and his secretaries. It consists of 20 pages, or 4860 words. We follow the evaluation procedure used in \cite{rothacker2015segmentation}, using two different splits of the pages into training, validation and test sets. The first split, called GW 15-5, has a training set of 15 pages and 5 pages for testing. The second is a 5-15 split with 5 training and 15 test pages. In both cases, we use 1 page as a validation set. The reported results are the average of four cross-validation runs. The bounding boxes for the GW dataset are manually annotated, resulting in a large amount of extra space around 1-2 character words, causing a significant decrease in recall for higher overlap thresholds. To counteract this annotation issue, we pad the DTP proposals with 10 pixels for this dataset. \textbf{The IAM Offline Handwriting Dataset \cite{marti2002iam}} is a modern cursive dataset consisting of 1539 pages, or 115320 words, written by 657 writers.
We use the official train/val/test split for \emph{writer independent text line recognition}, where there is no writer overlap between the different splits. Following standard protocol, we remove stop words from the set of queries, and in line with \cite{almazan2014word}, queries that come from lines that are marked as containing segmentation errors are removed. Ground truth boxes that are so small that they collapse to a width or height of zero when downsampled by a factor of 8 are also removed. \textbf{The Botany and Konzilsprotokolle Datasets} were introduced in the ICFHR 2016 Handwritten Keyword Spotting competition \cite{pratikakis2016icfhr2016}. The collections are from the 19\textsuperscript{th} and 17\textsuperscript{th} centuries respectively. Following \cite{rothacker2017word}, we use the official \texttt{Train III} set for training. It contains 114 and 45 images for the respective datasets. We select 5 and 1 images, respectively, as validation sets. The test set for each dataset consists of 20 pages. We use the set of queries defined for the competition as well as the official software to calculate results. \begin{table*}[t!] \renewcommand{\arraystretch}{1.3} \caption{Ablation results for different model variants on the GW 15-5 and IAM datasets. Recall is calculated based on the proposals left after the final NMS stage.} \label{tab:ablation} \centering \begin{tabular}{llcccccccccccc} & & \multicolumn{6}{c}{GW 15-5} & \multicolumn{6}{c}{IAM} \\\cmidrule(r){3-8} \cmidrule(l){9-14} & & \multicolumn{2}{c}{MAP 50\%} & \multicolumn{2}{c}{MAP 25\%} & \multicolumn{2}{c}{Recall} & \multicolumn{2}{c}{MAP 50\%} & \multicolumn{2}{c}{MAP 25\%}& \multicolumn{2}{c}{Recall} \\ \cmidrule(r){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}\cmidrule(lr){9-10}\cmidrule(lr){11-12}\cmidrule(l){13-14} Model Variant & Embedding & QbE & QbS & QbE & QbS & 50\% & 25\% & QbE & QbS& QbE & QbS & 50\% & 25\% \\ \thickhline \addlinespace[1.5pt] \multicolumn{14}{c}{Baselines from \cite{wilkinson2017neural}}\\ \addlinespace[1.5pt] \hline Ctrl-F-Net & DCToW & 90.5 & 91.0 & 97.0 & 95.2 & \textbf{99.4} & 99.9 & 72.0 & 80.3 & 74.1 & 82.5 & 98.1 & 98.9 \\ Ctrl-F-Net & PHOC & 90.9 & 90.1 & 96.7 & 93.9 & - & - & 71.5 & 78.8 & 73.7 & 80.8 & - & - \\ \hline \addlinespace[1.5pt] \multicolumn{14}{c}{Baselines}\\ \addlinespace[1.5pt] \hline Ctrl-F-Net & DCToW & 91.9 & 92.9 & 96.8 & 95.7 & 93.8 & 99.5 & 73.9 & 81.9 & 77.2 & 85.3 & 97.6 & 98.5\\ Ctrl-F-Mini & DCToW & \textbf{92.5} & 93.4 & 96.9 & 96.2 & 95.6 & 99.0 & 73.9 & 83.2 & 75.8 & 85.1 & 93.2 & 94.3\\ Ctrl-F-Net & PHOC & 90.8 & 90.5 & 97.1 & 95.3 & 96.7 & 99.8 & 74.6 & 83.0 & 78.6 & 87.4 & 98.0 & 98.9 \\ Ctrl-F-Mini & PHOC & 92.2 & 92.9 & 97.1 & 96.1 & 96.1 & 99.2 & 74.6 & 84.9 & 77.0 & 87.0 & 93.1 & 94.2 \\ \hline \addlinespace[1.5pt] \multicolumn{14}{c}{Cosine loss}\\ \addlinespace[1.5pt] \hline Ctrl-F-Net & DCToW & 91.8 & 92.3 & 96.9 & 95.4 & 95.8 & 99.6 & 74.7 & 83.7 & 77.7 & 86.7 & 97.2 & 98.2 \\ Ctrl-F-Mini & DCToW & \textbf{92.5} & \textbf{93.5} & 97.2 & 96.3 & 96.1 & 99.0 & 74.9 & 84.9 & 77.2 & 86.8 & 92.5 & 93.7 \\ Ctrl-F-Net & PHOC & 91.7 & 92.3 & 96.6 & 95.1 & 95.1 & 99.4 & 74.0 & 83.0 & 78.3 & 86.8 & 98.5 & 99.3 \\ Ctrl-F-Mini & PHOC & 91.6 & 91.8 & 97.0 & 96.2 & 96.5 & 99.2 & 75.7 & \textbf{86.1} & 77.8 & 87.9 & 91.7 & 92.9 \\ \hline \addlinespace[1.5pt] \multicolumn{14}{c}{Use DTP proposals during training}\\ \addlinespace[1.5pt] \hline Ctrl-F-Net & DCToW & 89.7 & 89.2 & \textbf{97.6} & \textbf{96.8} & 98.2 & 99.8 & 73.4 & 83.4 & 76.4 & 85.8 & 92.0 & 94.2 \\
Ctrl-F-Net & PHOC & 91.4 & 93.2 & 96.9 & 96.0 & 94.5 & 99.3 & 72.4 & 82.0 & 78.3 & \textbf{88.1} & 92.3 & 93.6\\ \hline \addlinespace[1.5pt] \multicolumn{14}{c}{RPN proposals only}\\ \addlinespace[1.5pt] \hline Ctrl-F-Net & DCToW & 80.5 & 79.4 & 94.0 & 90.6 & 96.5 & 99.6 & 49.7 & 56.3 & 61.3 & 67.7 & 64.39 & 79.10 \\ Ctrl-F-Net & PHOC & 80.8 & 79.7 & 94.1 & 90.8 & 97.2 & \textbf{100.0} & 53.8 & 62.7 & 69.6 & 77.3 & 64.04 & 81.90 \\ \hline \addlinespace[1.5pt] \multicolumn{14}{c}{Binary Cross Entropy loss}\\ \addlinespace[1.5pt] \hline Ctrl-F-Net & PHOC & 90.9 & 86.2 & 95.7 & 88.3 & 94.8 & 99.3 & \textbf{79.1} & 72.1 & \textbf{81.5} & 74.5 & \textbf{98.7} & \textbf{99.4} \\ Ctrl-F-Mini & PHOC & 91.8 & 86.6 & 96.2 & 88.9 & 94.9 & 98.5 & 73.3 & 69.5 & 75.1 & 70.8 & 92.8 & 93.9 \\ \thickhline \end{tabular} \end{table*} \subsection{Training} \label{sec:training} The models are trained in a single phase. We first train a model (with weights initialized randomly using \cite{he2015delving} for convolutional layers, and a zero-mean Gaussian with a standard deviation of 0.01 for fully connected layers) using the synthetic IIIT-HWS-10k dataset \cite{krishnan2016matching}. Since it only consists of word images, we use the full-page augmentation technique to create 3000 synthetic document images. This model was used to initialize all other models. For the other datasets, we create 5000 augmented images, split evenly between in-place and full-page augmentation, and add them to the original data. The input image is rescaled such that its longest side is 1720. We train each model for a maximum of 25000 iterations, and measure the performance on a held-out validation set every 1000 iterations. The model with the highest validation MAP score is used for testing. The learning rate is initially set to $10^{-3}$ for all models (except for Ctrl-F-Net on IAM, which starts at $2\cdot10^{-4}$) and is multiplied by 0.1 every 10000 iterations. We use ADAM \cite{kingma2014adam} to update the weights. Our implementation\footnote{\url{https://github.com/tomfalainen/neural-word-search}} is in PyTorch \cite{paszke2017automatic} and training time is approximately 10 hours for Ctrl-F-Net and 3 hours for Ctrl-F-Mini on an NVIDIA Titan GTX. \subsection{Evaluation} For the GW and IAM datasets, we evaluate our model using the standard metric used for word spotting, Mean Average Precision (MAP), where the Average Precision for a collection of size $N$ is defined as \begin{equation} AP = \frac{\sum_{k=1}^{N} P_k \cdot r_k}{R} \end{equation} where $P_k$ is the precision measured at cut-off $k$ in the returned list, $R$ is the number of relevant results, and $r_k$ is an indicator function that is 1 if the result returned at rank $k$ is relevant, and 0 otherwise. A retrieved word is considered relevant if its IoU overlap with a ground truth box is greater than a threshold $t_o \in \{0.25, 0.5\}$ and the label matches the query. The MAP score is the mean of the AP over the set of queries \begin{equation} MAP = \frac{\sum_{q=1}^{Q} AP(q)}{Q} \end{equation} where $Q$ is the number of queries. Unless stated otherwise, for QbE evaluation all the ground truth segmented word images in the test set are used. For QbS, all unique ground truth labels are used. Some methods, in particular \cite{ghosh2015query, Ghosh_word_spotting, ghosh2018text}, use a slightly different protocol for the GW dataset. Here all word instances in the dataset are used as queries for QbE, and all unique labels for QbS. The search is performed in all 20 pages.
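For clarity, the AP and MAP computation can be written as the following short sketch. It is our own illustrative implementation of the two equations above, not the official evaluation software.

\begin{verbatim}
import numpy as np

def average_precision(relevant, n_relevant=None):
    """AP for one query. `relevant` is a boolean array over the ranked
    retrieval list (True where the result at rank k has sufficient IoU
    and a matching label); `n_relevant` is R, the total number of
    relevant instances (defaults to those present in the list)."""
    relevant = np.asarray(relevant, dtype=bool)
    R = int(relevant.sum()) if n_relevant is None else n_relevant
    if R == 0:
        return 0.0
    precision_at_k = np.cumsum(relevant) / np.arange(1, len(relevant) + 1)
    return float((precision_at_k * relevant).sum() / R)

def mean_average_precision(per_query_relevance):
    """MAP: mean of the per-query APs."""
    return float(np.mean([average_precision(r) for r in per_query_relevance]))

# Example: relevant results at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2
print(average_precision([True, False, True]))  # 0.833...
\end{verbatim}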
We perform a grid search over the score NMS overlap threshold $t_{nms}$, the score threshold $t_s$, and, when applicable, the RPN score NMS overlap threshold. \subsection{Ablation and analysis}\label{sec:ablation} To perform the ablation study, we adopt the methodology of fixing a baseline model configuration and changing one setting at a time, always comparing with the baseline. We investigate the quantitative performance of various model choices, with the most significant being the source of region proposals (RPN, DTP, or both). Other experiments include comparing the PHOC and DCToW embeddings and three embedding loss functions. Recent work \cite{sudholt2017evaluating} argues that the Cosine loss \cite{chollet2016information} has outperformed other common loss functions for segmentation-based word spotting, including the Cosine Embedding loss. We evaluate this loss in the segmentation-free setting. The Cosine loss is defined as \begin{equation} L(\mathbf{u}, \mathbf{v}) = 1 - \frac{\mathbf{u}^\mathsf{T}\mathbf{v} }{||\mathbf{u}|| \cdot ||\mathbf{v}||} \end{equation} where $\mathbf{v}$ is an embedding of a positive region and $\mathbf{u}$ is the embedding of the ground truth label. It is the part of the Cosine Embedding loss (Equation \ref{eq:cosine_embedding_loss}) where $y=1$. We also evaluate the Binary Cross Entropy (BCE) loss, which models the embedding as a multi-label binary classification problem, and is a common choice of loss function for the PHOC embedding. As the BCE loss requires a binary embedding, it is not applicable to the DCToW. We use the GW 15-5 and IAM datasets to evaluate the different model choices. \begin{table} \renewcommand{\arraystretch}{1.3} \caption{Recall comparison in \%, averaged over pages, between the region proposal network and dilated text proposals using Ctrl-F-Net with the DCToW embedding. Filtered refers to the score thresholding and non-max suppression steps described in Section \ref{sec:querying}.} \label{tab:recall} \centering \begin{tabular}{lccccccc} &&\multicolumn{2}{c}{GW 15-5} & \multicolumn{2}{c}{GW 5-15} & \multicolumn{2}{c}{IAM} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} Method & Filtered & 50\%& 25\%& 50\%& 25\%& 50\%& 25\% \\ \thickhline RPN & & 98.3 & 100.0 & 98.6 & 99.9 & 69.1 & 86.5 \\ DTP & & 98.8 & 99.9 & 98.8 & 99.9 & 98.6 & 99.4 \\ Combined & & 99.9 & 100.0 & 99.9 & 100.0 & 99.0 & 99.7 \\ RPN & \checkmark & 4.2 & 6.1 & 3.1 & 8.7 & 46.9 & 62.8 \\ DTP & \checkmark & 89.6 & 94.4 & 91.8 & 96.7 & 97.0 & 98.1 \\ Combined & \checkmark & 93.8 & 99.5 & 94.9 & 99.7 & 97.6 & 98.5 \\ \thickhline \end{tabular} \end{table} The top section of Table \ref{tab:ablation} contains the results from \cite{wilkinson2017neural}, which are a bit different from the new baselines. Since \cite{wilkinson2017neural} was published, we discovered a few mostly small bugs in our code; the most notable ones are that the margin $\gamma$ for the Cosine Embedding loss was actually 0.2 instead of the reported 0.1, and that the learning rate was accidentally multiplied by 0.1 in the first iteration of training, making the initial learning rate $2\cdot10^{-4}$ instead of $2\cdot10^{-3}$. The first variant we evaluated is using the Cosine loss instead of the Cosine Embedding loss. Although the results are not unanimous, the trend suggests that the Cosine loss is superior. This corroborates the findings in \cite{sudholt2017evaluating} that the Cosine loss is better for word spotting, and coupled with the fact that it is simpler, this makes the Cosine loss a better choice than the Cosine Embedding loss.
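For reference, the Cosine Embedding loss of Equation \ref{eq:cosine_embedding_loss} and the Cosine loss can be written per pair as follows. This is a PyTorch sketch for illustration only; the default margin follows the corrected value of $\gamma = 0.2$ mentioned above, and batching and reduction details are omitted.

\begin{verbatim}
import torch
import torch.nn.functional as F

def cosine_embedding_loss(u, v, y, gamma=0.2):
    """Cosine Embedding loss: u, v are (N, D) tensors, y is an (N,)
    tensor of {0, 1} labels. Matching pairs (y = 1) are pulled together,
    non-matching pairs (y = 0) are pushed apart beyond the margin gamma."""
    cos = F.cosine_similarity(u, v, dim=-1)
    return torch.where(y == 1, 1.0 - cos,
                       torch.clamp(cos - gamma, min=0.0))

def cosine_loss(u, v):
    """The Cosine loss keeps only the matching term, 1 - cos(u, v)."""
    return 1.0 - F.cosine_similarity(u, v, dim=-1)
\end{verbatim}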
From our experiments, the Binary Cross Entropy loss works very well in the QbE setting, notably getting the highest MAP score on IAM by a good margin, around 3\% for 50\% IoU. However, it underperforms when it comes to QbS. The second set of experiments in Table \ref{tab:ablation} involves using proposals from both the RPN and the DTP when training the Ctrl-F-Net. This is implemented by sampling positive and negative boxes from the DTP and RPN proposal pools separately, and concatenating them before continuing the forward pass. Towards the end of the model, both sets of proposals are used for the box scoring and embedding losses, but only the RPN proposals are used for the final box regression loss. Although the results are mixed, this modification seems to give an improvement in MAP at 25\% IoU. To investigate the quality of the proposals w.r.t. MAP score, we only use RPN proposals to retrieve words from the manuscripts. There is a noticeable drop in performance for both the GW and IAM datasets with this setup. Comparing proposal quality, the DTP-only model, i.e., Ctrl-F-Mini, clearly outperforms the RPN-only model. Similarly, adding DTP proposals to the Ctrl-F-Net increases the MAP score by a good amount. Across all experiments, the DCToW and the PHOC seem to work best on the GW and IAM datasets respectively. This suggests that it is best to evaluate both embeddings for the task at hand. All else being equal, the DCToW is preferable due to its smaller dimensionality, approximately a fifth of the PHOC. Finally, analysing the Ctrl-F-Net and Ctrl-F-Mini, the latter performs at a similar if not higher level compared to the former over all experiments. This comes at the cost of a slightly lower recall, most notably on the IAM dataset. The small difference in recall between the Ctrl-F-Net (both RPN and DTP) and Ctrl-F-Mini (only DTP) models merits further investigation into the contributions of the RPN and DTP proposals to the recall of Ctrl-F-Net. Table \ref{tab:recall} shows the recall rates of the two sources of region proposals, the RPN and the DTP, and their union, using the baseline DCToW model on the GW 15-5 and 5-15 splits and the IAM dataset. The recall is the average over pages, and the number of proposals for each method is held the same. We evaluate the recall before and after filtering, which refers to the score thresholding and non-max suppression from Section \ref{sec:querying}. Before the filtering step, the recall is practically equal on the GW dataset. For the IAM dataset, on the other hand, there is a noticeable gap between the two sources of region proposals. Post filtering, the gap widened considerably for the IAM dataset and a chasm appeared between the RPN and DTP on the GW dataset. Considering that it is the model that scores the proposals from the two sources, and that the model was only trained on RPN proposals, the fact that the model heavily favours the DTP suggests that the DTP generates far superior proposals. This is further corroborated by the MAP results in Table \ref{tab:ablation}, where there is a significant drop in score when using RPN proposals only, and the Ctrl-F-Mini performs on a similar level to Ctrl-F-Net.
\begin{table} \renewcommand{\arraystretch}{1.3} \caption{MAP and recall comparison in \% on the Uppsala petitions dataset.} \label{tab:petitions_dataset} \centering \begin{tabular}{lcccccc} &\multicolumn{6}{c}{Uppsala Petitions} \\ \cmidrule(lr){2-7} & \multicolumn{2}{c}{MAP 50\%} & \multicolumn{2}{c}{MAP 25\%} & \multicolumn{2}{c}{Recall} \\ \cmidrule(r){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7} Method & QbE & QbS & QbE & QbS & 50\% & 25\% \\ \thickhline Ctrl-F-Net & 47.4 & 33.0 & 56.8 & 37.8 & 86.9 & 93.4 \\ Ctrl-F-Mini & 41.1 & 31.6 & 48.0 & 35.1 & 71.3 & 79.8 \\ \thickhline \end{tabular} \end{table} \subsubsection{When to use the Ctrl-F-Net} From the results of the ablation study, one may wonder why one should use the Ctrl-F-Net at all when the Ctrl-F-Mini performs equally well but is faster and simpler. This question can also be formulated as: when should the region proposal network be included? The key issue is how densely written, and therefore how easily segmented, the words of a given manuscript collection are. All the public benchmarks used for evaluation in this paper are relatively easily segmented. In fact, this is an issue with all widely used public word spotting benchmarks. This makes it difficult to evaluate the performance in terms of segmentation in a fair way. They therefore fail to encompass the important scenario of densely written, messy, crossed-out text, which occurs frequently in historical manuscripts. It is in this context that the Ctrl-F-Net with its region proposal network can give a significant boost in recall over the Ctrl-F-Mini. The early modern court records in Section \ref{sec:case_study} contain many such pages. The DTP underperforms in terms of recall for these difficult pages because it has great trouble separating intersecting text. We would argue that this is an issue for many (if not all) region proposal methods that are based on connected components (or detecting the ink on the page), such as the MSER-based approach in \cite{rothacker2017word} or the approach from \cite{ghosh2018text}, where connected components are extracted from a thresholded image. The RPN has no such issues because it uses a sliding window, which is the main motivation for including it in the Ctrl-F-Net\footnote{Note that this is only one way of mitigating the problem of these difficult-to-segment pages. There are undoubtedly other ways to tackle this issue.}. We conducted an additional experiment on a soon-to-be-released dataset of historical handwriting, which consists of source material from a similar time period and geographical area, written in the same style of writing. The main difference lies in their purpose: the dataset is made up of petitions to regional and national government in the form of letters. The dataset consists of 45 pages for training and 28 pages for testing. We train a Ctrl-F-Net and a Ctrl-F-Mini (initialised with the model trained on the IIIT-HWS dataset), and evaluate their respective performance in terms of recall and MAP in Table \ref{tab:petitions_dataset}. We observe that in terms of MAP, the Ctrl-F-Net slightly but consistently outperforms the Ctrl-F-Mini. The same trend holds for recall, though the gap is much more pronounced. In light of these results, we would recommend using the Ctrl-F-Net (i.e., adding the RPN) when working with large and heterogeneous manuscript collections.
For the public benchmarks, the largest discrepancies between the Ctrl-F-Net and Ctrl-F-Mini lie at the 50\% overlap threshold. However, for the application we show in Section \ref{sec:case_study}, where computation time is not crucial and results are manually reviewed by a user, the performance at 25\% overlap matters more. Here there is no discernible difference in MAP between the Ctrl-F-Net and Ctrl-F-Mini. \begin{table} \renewcommand{\arraystretch}{1.3} \caption{Inference (inf) and search times, averaged over pages and queries respectively, in seconds for each dataset, and storage space requirements in megabytes.} \label{tab:query_time} \centering \begin{tabular}{lcccccc} &\multicolumn{3}{c}{Ctrl-F-Net} & \multicolumn{3}{c}{Ctrl-F-Mini}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} Dataset & inf & search & space & inf & search & space \\ \thickhline GW 15-5 & 12.45 & 0.06 & 0.96 & 3.69 & 0.12 & 1.4 \\ GW 5-15 & 12.46 & 0.44 & 2.7 & 3.67 & 0.79 & 5.3 \\ IAM & 1.75 & 5.09 & 24 & 0.50 & 7.69 & 19 \\ Konz & 15.69 & 1.11 & 8.1 & 7.39 & 2.27 & 13 \\ Botany & 11.75 & 3.53 & 16 & 3.42 & 3.23 & 12 \\ \thickhline \end{tabular} \end{table} \begin{table*}[t!] \renewcommand{\arraystretch}{1.3} \caption{MAP comparison in \% with state-of-the-art segmentation-free methods on the GW and IAM datasets. The Ctrl-F-Net results marked with an asterisk use the evaluation protocol from \cite{Ghosh_word_spotting,ghosh2015query, ghosh2018text} (only relevant for GW 15-5).} \label{tab:sota} \centering \begin{tabular}{llcccccccccccc} & & \multicolumn{4}{c}{GW 15-5} & \multicolumn{4}{c}{GW 5-15} & \multicolumn{4}{c}{IAM}\\\cmidrule(r){3-6} \cmidrule(l){7-10} \cmidrule(l){11-14} & & \multicolumn{2}{c}{MAP 50\%} & \multicolumn{2}{c}{MAP 25\%} & \multicolumn{2}{c}{MAP 50\%} & \multicolumn{2}{c}{MAP 25\%} & \multicolumn{2}{c}{MAP 50\%}& \multicolumn{2}{c}{MAP 25\%} \\ \cmidrule(r){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}\cmidrule(lr){9-10}\cmidrule(lr){11-12}\cmidrule(l){13-14} Method & Embedding & QbE & QbS & QbE & QbS & QbE & QbS & QbE & QbS& QbE & QbS & QbE & QbS \\ \thickhline Ctrl-F-Net & DCToW & 90.9 & 91.7 & 97.0 & 96.2 & 84.1 & 76.2 & 94.0 & 83.8 & 75.4 & 85.5 & 77.4 & 87.0 \\ Ctrl-F-Net & PHOC & 90.7 & 92.0 & \textbf{97.3} & \textbf{96.4} & 87.5 & 79.9 & 93.4 & 82.4 & 72.1 & 81.9 & 78.0 & \textbf{88.4} \\ Ctrl-F-Mini & DCToW & \textbf{92.5} & \textbf{93.5} & 97.2 & 96.3 & 87.6 & 80.8 & \textbf{94.7} & 86.0 & 74.9 & 84.9 & 77.2 & 86.8 \\ Ctrl-F-Mini & PHOC & 91.6 & 91.8 & 97.0 & 96.2 & \textbf{87.9} & \textbf{81.4} & 94.3 & \textbf{86.1} & 75.7 & \textbf{86.1} & 77.8 & 87.9 \\ BoF HMMs \cite{rothacker2015segmentation} & n/a & - & 76.5 & - & 80.1 & - & 54.6 & - & 58.1 & - & - & - & -\\ $\textnormal{AAM+SIFT}_{\textnormal{quant}}$ \cite{rothacker2017word} & PHOC & 81.6 & 84.6 & 92.0 & 90.6 & - & - & - & - & - & - & - & -\\ Encoder-Decoder Net \cite{axler2018toward} & PHOC & - & - & - & - & - & - & - & - & - & 85.4 & - & 85.6\\ Hwnet v2 Ctrl-F-Mini \cite{krishnan2019hwnet} & Hwnet & 92.0 & - & 96.7 & - & - & - & - & - & \textbf{82.0} & - & \textbf{82.4} & -\\ \thickhline \addlinespace[1.5pt] \multicolumn{14}{c}{Alternate Evaluation Protocol}\\ \addlinespace[1.5pt] \thickhline \addlinespace[2pt] Ctrl-F-Net* & DCToW & \textbf{83.1} & \textbf{84.7} & \textbf{97.1} & \textbf{94.5} & - & - & - & - & - & - & - & -\\ SW \cite{Ghosh_word_spotting} & PHOC & 67.7 & - & - & - & - & - & - & - & 42.1 & - & - & -\\ BG index \cite{ghosh2015query} & PHOC & - & 73.3 & - & - & - & - & - & - & - & 48.6 & - & -\\ SVM Fisher Vectors
\cite{ghosh2018text} & PHOC & 77.2 & 69.9 & - & - & - & - & - & - & 38.7 & 44.7 & - & -\\ \thickhline \end{tabular} \end{table*} \begin{table*}[h] \renewcommand{\arraystretch}{1.3} \caption{MAP comparison in \% with state-of-the-art segmentation-free methods on the Botany and Konzilsprotokolle datasets. Results marked with an asterisk use a smaller training data split.} \label{tab:sota_konz_botany} \centering \begin{tabular}{llcccccccc} & & \multicolumn{4}{c}{Botany} & \multicolumn{4}{c}{Konzilsprotokolle} \\\cmidrule(r){3-6} \cmidrule(l){7-10} & & \multicolumn{2}{c}{MAP 50\%} & \multicolumn{2}{c}{MAP 25\%} & \multicolumn{2}{c}{MAP 50\%} & \multicolumn{2}{c}{MAP 25\%} \\ \cmidrule(r){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}\cmidrule(lr){9-10} Method & Embedding & QbE & QbS & QbE & QbS & QbE & QbS & QbE & QbS \\ \thickhline Ctrl-F-Net & DCToW & 75.3 & 78.0 & 90.9 & 94.9 & 74.9 & 79.9 & 96.7 & 97.9 \\ Ctrl-F-Net & PHOC & 73.9 & 77.2 & 91.0 & 94.5 & 67.9 & 74.7 & 95.8 & 97.8 \\ Ctrl-F-Mini & DCToW & \textbf{81.2} & \textbf{83.8} & \textbf{92.1} & \textbf{95.2} & 86.2 & \textbf{90.4} & \textbf{97.0} & \textbf{98.3} \\ Ctrl-F-Mini & PHOC & 79.1 & 81.5 & 90.3 & 93.9 & 85.7 & 89.1 & 95.8 & 97.9 \\ TAU* \cite{pratikakis2016icfhr2016} & n/a & 37.5 & - & - & - & 61.8 & - & - & -\\ $\textnormal{LRC+SIFT}_{\textnormal{quant}}$ \cite{rothacker2017word} & PHOC & 74.5 & 78.8 & 80.4 & 85.3 & \textbf{91.1} & 89.9 & 95.6 & 95.3\\ $\textnormal{AAM+SIFT}_{\textnormal{quant}}$ \cite{rothacker2017word} & PHOC & 69.4 & 74.0 & 75.9 & 80.3 & 89.6 & 88.9 & 96.2 & 96.0\\ Encoder-Decoder Net \cite{axler2018toward} & PHOC & - & 79.0 & - & 78.7 & - & - & - & -\\ \thickhline \end{tabular} \end{table*} \subsubsection{Computational and Storage Requirements} We have computed inference and search times as well as storage requirements for the pre-processed images in Table \ref{tab:query_time} for all datasets. The inference and search times are averaged over pages and queries respectively. Across the board we see that the Ctrl-F-Net has slower inference times than the Ctrl-F-Mini, but surprisingly enough the search times are lower, even when storage space is larger (that is, when there are more proposals to search through). This is most likely due to the non-max suppression step in the querying removing more proposals for the Ctrl-F-Net, which leads to fewer proposals to sort. The same process is most likely in effect in the cases where the space requirements of the Ctrl-F-Mini are higher than those of the Ctrl-F-Net, even though it has only one source of region proposals. In these cases, the wordness-score-based non-max suppression removes enough DTP proposals (suppressed by RPN proposals) that the total number of proposals goes down. \subsection{State of the art comparison} \label{sec:sota} In this section, we compare the best-performing models determined from the ablation study in the previous section to the state of the art in segmentation-free word spotting for all four public datasets, and to segmentation-based word spotting for the GW and IAM datasets. We adopt the Cosine loss instead of the Cosine Embedding loss, and for Ctrl-F-Net we make use of DTP proposals during training. Table \ref{tab:sota} shows the results of the top Ctrl-F-Net and Ctrl-F-Mini models with the DCToW and PHOC embeddings, and contrasts them with the state of the art on the GW and IAM datasets. For the GW dataset, we outperform the other methods by a large margin, in both QbE and QbS and across the 25\% and 50\% overlap thresholds.
The difference in MAP score between the GW 5-15 and GW 15-5 datasets is relatively small, considering the number of training and validation pages (5 vs 15) and test pages (15 vs 5). Our results that are marked with an asterisk use the evaluation protocol of \cite{Ghosh_word_spotting, ghosh2015query, ghosh2018text}. A small caveat is needed for the Washington dataset regarding its annotation quality. As the dataset was manually annotated, there can be a relatively large amount of empty space around the words. This has little effect for longer words, but for single-letter words it can cause perfectly segmented words to have less than 50\% overlap with the ground truth. The issue is, however, much smaller for the 25\% overlap threshold. For the IAM dataset, the recently introduced model in \cite{axler2018toward} has very high performance, beating the previous version of this work. However, the changes introduced in this paper led us to outperform their model. Since the first version of this paper, the Ctrl-F-Mini has been adopted in \cite{krishnan2019hwnet} with their own learned embedding to achieve the highest MAP for QbE on IAM. We note that the results in \cite{Ghosh_word_spotting} and \cite{ghosh2015query} on the IAM dataset are not directly comparable, as they do line spotting where whole lines are retrieved: they perform their search in the annotated text lines, not the full pages, and their distance between a query and a text line is the shortest distance between the query and the word candidates of that line. According to the results presented in \cite{almazan2014word}, this is a slightly easier task. In Table \ref{tab:sota_konz_botany} we compare our models on the Botany and Konzilsprotokolle datasets and note that in all but one setting (QbE at 50\% overlap) we outperform existing methods, often by a large margin. For example, with QbS using 25\% overlap on the Botany dataset we improve by over 10 percentage points, reducing the error by 67\%. We observe a similar result for Konzilsprotokolle, where the error is reduced by 68\%. In Table \ref{tab:sota_seg_comp}, we compare the best segmentation-free setup with a 25\% overlap threshold with state-of-the-art methods for segmentation-based word spotting that use the same evaluation protocol. We observe that we have competitive results for the GW 15-5 split in both QbE and QbS, whereas for IAM the QbS performance is quite close to the best segmentation-based methods, even though they depend on manually segmented bounding boxes. We also include some of the top methods for line-level word spotting for the IAM dataset, as it is reasonably easy to segment into lines. While the numbers are not as directly comparable as those of the word-level methods (due to a different query set and vocabulary, additional data for language models, and retrieval of lines rather than words, to name a few), we include them for additional context. \begin{table} \renewcommand{\arraystretch}{1.3} \caption{MAP comparison in \% with state-of-the-art segmentation-based methods using a 25\% overlap threshold and the GW 15-5 split. Methods marked with $^\dagger$ use ground truth segmented word images, but otherwise use a similar evaluation method.
}\label{tab:sota_seg_comp} \centering \begin{tabular}{lcccc} &\multicolumn{2}{c}{GW} & \multicolumn{2}{c}{IAM} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} Method & QbE & QbS & QbE & QbS \\ \thickhline Ctrl-F-Net PHOC & 97.3 & 96.4 & 78.0 & 88.4 \\ Embed attributes$^\dagger$ \cite{almazan2014word} & 93.0 & 91.3 & 55.7 & 73.7 \\ DCToW$^\dagger$ \cite{wilkinson2016semantic} & 98.0 & 93.7 & 77.0 & 85.3 \\ TPP-PHOCNet (CPS)$^\dagger$ \cite{sudholt2017attribute} & 98.0 & 97.9 & 82.7 & 93.4 \\ DeepEmbed$^\dagger$ \cite{krishnan2018word} & 98.0 & \textbf{98.9} & 90.4 & \textbf{94.0} \\ Hwnet v2$^\dagger$ \cite{krishnan2019hwnet} & \textbf{98.2} & - & \textbf{92.4} & - \\ \thickhline \end{tabular} \end{table} \subsection{Discussion} We make several observations from the experiments. The first is that we get an increase in the MAP score across the board from lowering the overlap threshold to 25\%. This suggests that there is further performance to be gained from more accurate proposals. Another possible explanation is that this effect is due to the tightness of the annotation for a dataset. This means that for single letter words like ``I'', the amount of space surrounding the ground truth box can cause a region proposal that tightly fits the ink to overlap less than 50\% with the ground truth. This effect is greater for manually labelled datasets like the GW datasets, and less so for the IAM dataset, which is reflected in the results. However, for the application of word search in manuscripts as a way to assist scholars (as considered in the following Section), 25\% overlap is more than enough as the user would in any case manually inspect each result. The results in Table \ref{tab:sota_seg_comp} show a minor difference in performance between the segmentation-free and segmentation-based word spotting methods. The segmentation-based method most similar to this work is \cite{wilkinson2016semantic}, and the overall performance of the Ctrl-F-Net is higher. This suggests that there might be upsides to using a segmentation-free approach when learning a representation for a word image. For example, the increased consistency of automatic region proposal methods compared to manual labeling could be beneficial for learning. A result that is of great significance for practical adoption of this work in historical research is the high performance on the GW 5-15 dataset, where we train on 4 pages, validate on 1 page and test on 15 pages. This suggests that the model is learning efficiently with respect to the amount of training data. This is crucial when working with historical manuscripts, which are very expensive to annotate due to the expert knowledge required. An important aspect of training on such small data is extensive use of data augmentation. To that end we have introduced two complementary data augmentation techniques for full page manuscripts to facilitate the learning of the proposed models. From the experiments presented in the previous section, we conclude that using the RPN as the main source of region proposals without any external region proposals is a suboptimal approach. While there has been some work adapting the RPN for text proposals \cite{liao2018textboxes++}, two of the suggested improvements relate to handling arbitrarily oriented text and increasing recall. Neither of these is an issue here; in fact, the RPN has higher recall than the DTP. A third adaptation of the RPN is changing the aspect ratio of the anchor boxes, which has been adopted in our formulation.
Instead, the results suggest that the RPN, while generic in its current form, could be improved upon or specialized for sub-domains of computer vision. One example relevant for this work would be how to generate proposals that work well for word spotting. Compared to the natural image object detectors (Multibox \cite{erhan2014scalable} and YOLO \cite{redmon2016you}) that were evaluated in \cite{moysset2018learning}, the RPN is a capable source of region proposals for manuscript images. Because of its implementation as a convolution (i.e., a sliding window) where a set of anchor boxes is regressed at every position on the feature map, it is able to detect hundreds of objects per image. While YOLO fails to work at all for manuscripts in \cite{moysset2018learning}, Multibox does reasonably well but is outperformed by their proposed approach. A possible reason why the RPN works well in terms of recall is the large number of initial proposals, which are then reduced via score thresholding and non-max suppression. \begin{figure}[t!] \begin{center} \includegraphics[width=0.99\linewidth]{court_records.pdf} \end{center} \caption{Parts of 4 sample images drawn from the Snevringe dataset. Going from top left to bottom right, they are written in 1719, 1735, 1797, and 1879.} \label{fig:dombok} \end{figure} Furthermore, the ablation study showed how the Ctrl-F-Mini performed on par with the full Ctrl-F-Net, and in some instances even outperformed the full model. This held true also in the small data setting of GW 5-15. Considering the reduced model complexity and the decreased training and inference times that Ctrl-F-Mini provides over Ctrl-F-Net, it is a recommended alternative to the full model when it comes to easily segmented data. However, the Ctrl-F-Mini with DTP proposals still suffers from the known limitations of connected component based methods for segmentation. So for more densely written, noisy manuscripts where document binarization is difficult and large parts of the text form one connected component, the full Ctrl-F-Net with its sliding window RPN would be recommended. \section{Case study: early modern court records}\label{sec:case_study} Court records are often used in historical research. Their usefulness stretches far beyond the study of crime and judicial systems as they offer insights into practices and mentalities of ordinary people that very few sources do. In the words of a well-known introduction to historical methodology, ``court records are probably the single most important source we have for social history of the medieval and early modern periods.'' \cite{tosh2015pursuit}. Not surprisingly, several of the most famous and influential historical studies are based on court records \cite{ladurie2013montaillou, ginzburg1992cheese, zemon1983return}. At the same time, the richness in information and the variety of subjects dealt with in a single volume make the records difficult to work with. Swedish court records mix criminal and civil cases and they often lack even simple search tools such as indexes. Finding relevant information is extremely time-consuming and researchers often need to restrict their empirical research to a very limited number of volumes. \textbf{The Snevringe Court Records} consist of 64 volumes of newly digitized court records from the magistrate court of the Snevringe judicial district, written between 1719 and 1880. The 64 volumes consist of 55k images, each of which contains 2 pages. Figure \ref{fig:dombok} shows a sample of four images of the dataset.
The court records provide several challenges: unlike modern text, there is no standardized spelling; said spelling evolves over time, compounding the problem; and there are hundreds of different writers, adding their personal variation. A final peculiarity of the court records is that they are written during a time in Sweden when a change of script took place. Earlier volumes use Kurrent script (or German cursive), with particular words written in Latin cursive. A gradual, non-linear change of scripts to Latin cursive occurs over the time span that the court records were written. This dual script provides interesting challenges where certain characters are written in two completely different ways in the same dataset. The queries we evaluate are chosen according to their relevance to contemporary historical research. We have manually annotated 11 pages for training, where 3 are from the Snevringe set of court records (these pages are removed when searching). The rest are from another set of court records from adjacent judicial districts and a nearby town. We use the Ctrl-F-Net, and training is done as detailed in Section \ref{sec:training}, except that the model is initialized from a model trained on the IAM dataset. \begin{table} \renewcommand{\arraystretch}{1.3} \caption{Quantitative results for the queries used for the Snevringe dataset. $P(k)$ is the precision at rank $k$. OOV denotes out-of-vocabulary, meaning not present in the training vocabulary. } \label{tab:snevringe} \centering \begin{tabular}{lccccc} Queries & Translation & OOV & P(1) & P(10) & P(50)\\ \hline str{\"o}msholm & Str{\"o}msholm & & 1.00 & 1.00 & 0.98 \\ wester{\aa}s & V{\"a}ster{\aa}s & & 1.00 & 1.00 & 1.00 \\ madame & madame & & 1.00 & 0.70 & 0.20 \\ stalldr{\"a}ng & stableman & & 1.00 & 0.70 & 0.52 \\ vidk{\"a}ndt & acknowledged & & 1.00 & 0.80 & 0.58 \\ l{\"a}nsmannen & county sheriff & & 1.00 & 1.00 & 0.80 \\ februari & February & \checkmark & 1.00 & 0.80 & 0.58 \\ informator & tutor & \checkmark & 0.00 & 0.10 & 0.02 \\ gifta & married & \checkmark & 0.00 & 0.70 & 0.48 \\ sala & Sala & \checkmark & 0.00 & 0.00 & 0.04 \\ \hline Average & n/a & & 0.70 & 0.68 & 0.52 \\ \end{tabular} \end{table} Because we are working with an unexplored collection of manuscripts, doing an exhaustive evaluation is not possible. Instead we adopt common web-scale metrics that do not require knowledge of $R$, the number of relevant instances for a query. We manually annotate the top-50 results for each query and calculate $P(k)$ for each query at $k=\{1, 10, 50\}$. Contrary to the MAP, this metric does not measure the ordering of the results, only the precision of the top-$k$ results. Quantitative results are presented in Table \ref{tab:snevringe}. We show the performance of 10 queries relevant for contemporary historical research. As expected, the words that are present in the training vocabulary perform the best on average. The out-of-vocabulary (OOV) words seem to perform worse, which is not so surprising as we are doing zero-shot retrieval. An interesting exception is the query ``gifta'' (eng. married), for which the results were surprisingly good. Another noteworthy aspect of the query ``informator'' (eng. tutor) is that while it is most commonly written using Latin script, the model seems to be searching for the query using Kurrent script, illustrating some of the difficulties in working with this data. This is likely due to the Kurrent script being heavily overrepresented in the training data.
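The precision-at-$k$ values reported in Table \ref{tab:snevringe} follow directly from the manual top-50 annotations. As a minimal illustration (the helper below is hypothetical and not part of the released code), $P(k)$ is simply the fraction of correct results among the first $k$ retrievals:
\begin{verbatim}
def precision_at_k(ranked_hits, k):
    # ranked_hits: list of booleans, True if the i-th retrieved
    # region is a correct occurrence of the query word
    return sum(ranked_hits[:k]) / float(k)

# Example consistent with the "madame" row of the table: 1 correct result
# in the top 1, 7 in the top 10 and 10 in the top 50 give
# P(1) = 1.00, P(10) = 0.70 and P(50) = 0.20.
\end{verbatim}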
We further provide qualitative results in Figure \ref{fig:qualitative_results} to showcase some of the variability of the writing styles used in the court records. With the Snevringe court records we provide results on a set of queries being investigated in contemporary historical research. In essence, we are testing our model in the wild, i.e., directly evaluating our proposed approach in a setting where it is designed to be deployed. The data is unexplored, uncurated and noisy. There are blank pages, images of book covers, and extremely messy pages with lots of stricken-out text, faded ink, and extensive notes between text lines. This, together with its size, makes the court records more difficult to work with than any publicly available word spotting dataset. The value of word spotting for research in historical manuscripts cannot be overstated. The work often entails manually locating small pieces of information scattered throughout large amounts of text, and merely finding where to look can be very time-consuming. In effect, limited sets of elusive data are what a historian's interpretations are based on, and a limiting factor when it comes to which inquiries can be conducted at all. Speeding up the process of identifying relevant sections in handwritten texts would not only make it possible to gather more data, but also make way for new questions to be researched. \section{Conclusion} We have introduced Ctrl-F-Net, a model for segmentation-free query-by-string word spotting. It simultaneously produces region proposals and embeds them into a word embedding space in which searches are performed. Using an ablation study, we investigate several model choices, most notably the source of region proposals. The ablation leads us to propose the simplified Ctrl-F-Mini, a model suited to manuscripts that are easily segmented into words. Our models outperform the previous state-of-the-art approaches for segmentation-free word spotting, in some cases by a large margin. Moreover, in a case study applying the Ctrl-F-Net to a collection of 64 volumes of court records, spanning most of the 18\textsuperscript{th} and 19\textsuperscript{th} centuries, we enable a historical study using orders of magnitude more data than would otherwise be possible. \begin{figure} \begin{center} \includegraphics[height=0.78\linewidth]{stalldrang_colour.png} \includegraphics[height=0.78\linewidth]{stromsholm_colour.png} \includegraphics[height=0.78\linewidth]{madame_colour.png} \includegraphics[height=0.78\linewidth]{gifta_colour.png} \end{center} \caption{Qualitative search results. The figure depicts the top 10 results, starting from the top, for the four queries ``Stalldr{\"a}ng'' (stableman), ``Str{\"o}msholm'', ``madame'', and ``gifta'' (married). Incorrect retrievals are highlighted in red.} \label{fig:qualitative_results} \end{figure} \ifCLASSOPTIONcompsoc \section*{Acknowledgments} This project is a part of q2b, From quill to bytes, which is a digital humanities initiative sponsored by the Swedish Research Council (Dnr 2012-5743), Riksbankens Jubileumsfond (NHS14-2068:1) and Uppsala University. \else \section*{Acknowledgment} \fi \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
1,116,691,497,434
arxiv
\section{Introduction} Interactions between a gaseous disk and embedded proto-planetary cores could be decisive for understanding the distribution of semi-major axes, eccentricities and masses of extra-solar giant planets (Perryman 2000, Schneider 2004). Exchanges of angular momentum can efficiently operate through the excitation of spiral density waves at the sites of Lindblad resonances, leading to inward migration of solid bodies (Goldreich \& Tremaine 1979 \& 1980, Ward 1997). For planets with mass less than about a Jovian mass, the interaction between the disk and the planet is mostly linear (type-I migration), in contrast with the non-linear type-II migration where massive planets open a gap. In any case, the drift time-scale is much shorter than the time required to complete the formation of a (giant) planet. Several works have recently focused on means to slow down or stop migration. The presence of a toroidal magnetic field (Terquem 2003), three-dimensional effects (Tanaka, Takeuchi \& Ward 2002) or corotation torques (D'Angelo, Henning \& Kley 2002) could act in this way. From numerical simulations, Nelson \& Benz (2003a \& b) have shown that disk self-gravity can noticeably affect the drift velocity (even for low mass disks). They suggested that even very weak changes of the rotation curve induced by disk gravity significantly modify the location of Lindblad resonances, and subsequently the total differential torque. Their conclusions are, however, strongly resolution-dependent. In this short communication, we clarify the influence of disk gravity on type-I migration by a semi-analytical approach. In particular, we determine analytically for the first time the location of Lindblad resonances modified by disk gravity, and compute the corresponding gravitational torques. Analytical techniques generally provide reliable diagnostic tools and powerful predictions as they implicitly correspond to an infinite numerical resolution. In Sect. \ref{sec:location}, we derive and discuss a general expression for the location of Lindblad resonances as a function of the disk surface density profile, the orbit of the planet, the relative disk mass and the disk edges. In Sect. \ref{sec:influence}, we successfully check this expression in the simple case of radially homogeneous disks. The effect of the disk mass on Lindblad torques is then analyzed. Finally, we consider the case of disks with power-law surface density profiles. We conclude in Sect. \ref{sec:conc}. \section{Location of Lindblad resonances} \label{sec:location} \subsection{Background} A planet embedded in a gaseous disk exerts gravitational torques at the sites of Lindblad resonances. For low mass bodies, the disk response to the perturbing point mass potential can be considered as linear, so that torques can be determined explicitly (Goldreich \& Tremaine 1979, Artymowicz 1993). The nominal positions $R_{\rm L}(m)$ of the inner (ILRs) and outer (OLRs) Lindblad resonances associated with the $m$-th order Fourier pattern are found from the equation $D(R_{\rm L})=0$ with \begin{equation} D = \kappa^2-m^2(\Omega -\Omega_{\rm pl})^2, \label{eq:d} \end{equation} where $\kappa$ is the epicyclic frequency defined by $\kappa^2=4\Omega^2+2R\Omega d\Omega/dR$, $\Omega$ is the fluid angular velocity and $\Omega_{\rm pl}$ is the angular velocity of the planet. Generally, $R_{\rm L}(m)$ differs from the effective locations of Lindblad resonances (hereafter $R_*$) where the waves become evanescent.
Effective resonances are found from the equation $D_*(R_*)=0$ with (Artymowicz 1993, Ward 1997) \begin{equation} D_* = D +\frac{m^2c_{\rm s}^2}{R^2}, \label{eq:dstar} \end{equation} where $c_{\rm s}$ is the sound speed. If the disk aspect ratio $H/r \equiv \eta$ is a constant, we then have $c_{\rm s} = \Omega_{\rm K} \eta R$, and so \begin{equation} R_*=R_{\rm pl} \left(1+\epsilon \frac{f}{m}\right)^{2/3}, \label{eq:reff} \end{equation} where $\epsilon=-1$ stands for the ILRs, $\epsilon=+1$ is for the OLRs, and $f=\sqrt{1+m^2\eta^2}$. This expression shows that nominal and effective resonances coincide only for low $m$ values. \subsection{Effects of disk gravity} The gas in the disk, whatever its mass relative to the central mass, is a source of gravity. It modifies the positions of the Lindblad resonances in two ways. First, it changes the dynamics of both the gas component itself (i.e. disk self-gravity) and the planet according to \begin{equation} \begin{cases} \Omega^2 = \Omega_{\rm K}^2+\Omega_{\rm d}^2,\\ {\Omega_{\rm pl}'}^2= \Omega_{\rm pl}^2+\omega^2,\\ \end{cases} \end{equation} where $\Omega_{\rm d}^2$ is the contribution due to the disk, and $\omega^2 = \Omega_{\rm d}^2(R_{\rm pl})$. Second, it shifts the location of effective resonances closer to the planet (density waves can propagate between Lindblad resonances). According to Nelson \& Benz (2003b), the new locations $R_{\rm dg}$ of effective ILRs and OLRs modified by the disk gravity are determined by the equation $D_{\rm dg}(R_{\rm dg})=0$ where \begin{equation} D_{\rm dg} = D_* -\frac{2\pi G\Sigma m}{R}, \label{eq:dsg} \end{equation} and $\Sigma$ is the disk local surface density. \subsection{Second-order differential method} Provided the relative shift $\delta R \equiv R_{\rm dg} - R_*$ is small compared to $R_*$, we can derive a reliable analytical expression for $R_{\rm dg}$ by a low order expansion of $D_{\rm dg}(R_* + \delta R)$. Keeping terms of second order in $\Omega_{\rm d}/\Omega_{\rm K}$ only, we find after some algebra \begin{equation} \label{eq:totalshift} \delta R = \delta R_1+ \delta R_2 + \delta R_3 \end{equation} where \begin{equation} \begin{cases} \frac{\delta R_1}{R_*} = \frac{\Omega_{\rm d}^2 \xi }{3\Omega_{\rm K}^2}\left( 4 + \gamma + m \epsilon f\right)\\ \frac{\delta R_2}{R_*} = - \frac{2 \pi G m \Sigma \xi }{3 R \Omega_{\rm K}^2}\\ \frac{\delta R_3}{R_*} = - \frac{\omega^2 \xi f m^2}{3 \Omega_{\rm K}^2 (\epsilon m + f)}, \label{eq:shifts} \end{cases} \end{equation} with \begin{equation} \begin{cases} \gamma &= \frac{d \ln |\Omega_{\rm d}^2|}{d \ln R},\\ \xi &= \frac{1}{1+m \epsilon f + m^2 \eta^2}. \end{cases} \end{equation} All quantities in Eq. (\ref{eq:shifts}) are to be expressed at $R_*$. The first two terms in the right-hand side of Eq.(\ref{eq:totalshift}) are due to the disk self-gravity, whereas the third one corresponds to the change of the planet angular velocity due to the disk mass. 
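For completeness, Eqs.~(\ref{eq:reff}) and (\ref{eq:shifts}) translate directly into a short numerical routine. The Python sketch below is only illustrative: the profiles $\Omega_{\rm d}^2(R)$, $\Sigma(R)$, the slope $\gamma$ and $\omega^2$ must be supplied by the user in any consistent unit system, and the function names are ours.
\begin{verbatim}
import numpy as np

def effective_resonance(R_pl, m, eta, eps):
    # effective Lindblad resonance radius R_* (eps = -1: ILR, +1: OLR)
    f = np.sqrt(1.0 + (m * eta) ** 2)
    return R_pl * (1.0 + eps * f / m) ** (2.0 / 3.0)

def resonance_shifts(R_star, m, eta, eps, Omega_K2, Omega_d2,
                     gamma, Sigma, omega2, G=6.674e-11):
    # the three shift contributions, with all quantities evaluated at R_*;
    # omega2 is Omega_d^2 taken at the planet radius
    f = np.sqrt(1.0 + (m * eta) ** 2)
    xi = 1.0 / (1.0 + m * eps * f + (m * eta) ** 2)
    dR1 = R_star * Omega_d2 * xi * (4.0 + gamma + m * eps * f) / (3.0 * Omega_K2)
    dR2 = -2.0 * np.pi * G * m * Sigma * xi / (3.0 * Omega_K2)
    dR3 = -R_star * omega2 * xi * f * m ** 2 / (3.0 * Omega_K2 * (eps * m + f))
    return dR1, dR2, dR3
\end{verbatim}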
\begin{table} \begin{center} \begin{tabular}{lcccc} shift & ILRs & OLRs & contribution & inward migration\\ \hline $\delta R_1$ & $>0$ & $>0$ & outwards & slowed down \\ $\delta R_2 $ & $>0$ & $<0$ & inwards & accelerated\\ $\delta R_3$ & $<0$ & $<0$ & inwards & accelerated \\ total & $>0$ & $<0$ & {\bf inwards} & {\bf accelerated} \\ \hline \end{tabular} \end{center} \caption{Expected sign of the three individual shifts, and corresponding prediction about planetary migration.} \label{tab:droverr} \end{table} The ``efficiency'' of migration can qualitatively be deduced from the shifts $\delta R/R_*$ of the ILRs and OLRs since, in general, the amplitude of gravitational torques depends strongly on the location of the resonances. Further, these torques are the largest for intermediate $m$ values $\sim 10-20$ ($\sim 1/\eta$). In these conditions, we have $f \sim 1$ and $\xi \sim 1/(m \epsilon)$. Using the (crude) monopole approximation for $\Omega_{\rm d}^2$ (Mineshige \& Umemura 1996) and assuming a power law surface density profile, we find \begin{equation} \label{eq:simple} \begin{pmatrix} \delta R_1\\ \delta R_2\\ \delta R_3 \end{pmatrix} \sim R_* \times \frac{\Omega_{\rm d}^2}{3\Omega_{\rm K}^2} \begin{pmatrix} 1+\frac{(4+\gamma)\epsilon}{m}\\ - \epsilon \\ - 1+ \frac{\epsilon}{m}\\ \end{pmatrix}. \end{equation} Note that $|\gamma|$ is of the order of unity in most cases. To conclude on the influence of disk gravity, one must compare the terms in Eq.(\ref{eq:simple}) with one another. Table \ref{tab:droverr} summarizes the sign of each shift, and the consequence on migration. We see that if self-gravity is neglected (i.e. assuming $\delta R\approx \delta R_3$), then inward migration is accelerated. This effect has been clearly observed in the hydrodynamical simulations by Nelson \& Benz (2003a \& b). On the contrary, self-gravity alone (i.e. $\delta R \approx \delta R_1 + \delta R_2$) slows down migration due to an asymmetrical shift of the resonances. Further, since \begin{equation} \left| \frac{\delta R_1 + \delta R_3}{\delta R_2} \right| \sim \frac{(5+\gamma)}{m} < 1, \end{equation} we conclude that the total shift is dominated by the term $\delta R_2$. From Eq.(\ref{eq:simple}), we have \begin{flalign} \left.\frac{\delta R}{R}\right|_{\rm OLR} + \left.\frac{\delta R}{R}\right|_{\rm ILR} & \sim - \left(1-\frac{5+\gamma}{m}\right) \\ & \qquad \qquad \times \frac{4 R_{\rm pl}}{9 m} \frac{d}{dR} \left( \frac{\Omega_{\rm d}^2}{\Omega_{\rm K}^2} \right) < 0. \nonumber \end{flalign} For most disks of astrophysical interest, we expect $\Omega_{\rm d}^2/\Omega_{\rm K}^2$ to be an increasing function of the radius. It then follows that the OLRs should be shifted more than the ILRs, meaning that {\it migration tends to proceed inward more rapidly when disk gravity is included}. \section{Influence of the disk gravity on low mass planet migration} \label{sec:influence} \subsection{Gravitational acceleration in a homogeneous disk} As Eq.(\ref{eq:shifts}) shows, resonance shifts due to the disk mass can be determined once the radial gravity field $g_R = - \Omega_{\rm d}^2 R$ is known. Since there is no reliable formula for potential/density pairs in flat disks, we shall consider a simple case which allows some analytics, namely a thin disk with uniform surface density $\Sigma$ (a case often considered in simulations). Then, the radial field $g_R$ inside the disk is exactly given by (e.g.
Durand 1964) \begin{equation} g_R=4G\Sigma\left[\frac{{\mathbf E}(v_{\rm out})-{\mathbf K}(v_{\rm out})}{v_{\rm out}}+{\mathbf K}(u_{\rm in})-{\mathbf E}(u_{\rm in})\right] \label{eq:gr} \end{equation} where ${\mathbf K}$ and ${\mathbf E}$ are the complete elliptic integrals of the first and second kinds, respectively, $u_{\rm in} =a_{\rm in}/R \le 1$, $v_{\rm out} = R/a_{\rm out} \le 1$, where $a_{\rm in} \ge 0$ is the disk inner edge and $a_{\rm out} > a_{\rm in}$ is the outer edge. A more tractable expression for $g_R$ can be derived from truncated expansions of the complete elliptic integrals. For instance, with a classical second-order expansion in the modulus $x < 1$ (e.g. Gradshteyn \& Ryzhik 1994), namely \begin{equation} \begin{cases} {\mathbf K}(x)&= \frac{\pi}{2} \left( 1 + \frac{1}{4}x^2 \right) + {\cal O}(x^4),\\ {\mathbf E}(x)&= \frac{\pi}{2} \left( 1 - \frac{1}{4}x^2 \right) + {\cal O}(x^4) \end{cases} \label{eq:kseries} \end{equation} the field far from the edges is given to a good approximation by \begin{equation} g_R \approx -\pi G \Sigma \left(v_{\rm out} - u_{\rm in}^2 \right). \label{eq:grapprox} \end{equation} Note that, contrary to Eq.(\ref{eq:gr}), that formula remains finite at the edges, and is thus more realistic although approximate. Since the gravitational potential of the disk is minimum very close to the inner edge (like in most astrophysical disks), we can simplify Eq.(\ref{eq:grapprox}) by considering only the term linear in $R$, and so \begin{equation} \Omega_{\rm d}^2 \sim \frac{\pi G \Sigma}{a_{\rm out}}. \end{equation} \subsection{Shifts and torques at the Lindblad resonances} In the homogeneous disk model, we thus have \begin{equation} \frac{\Omega_{\rm d}^2}{\Omega_{\rm K}^2} \sim \frac{\mu R^3}{a_{\rm out}^3}, \end{equation} where $\mu=M_{\rm d}/M$, $M_{\rm d}$ being the disk mass, and assuming $a_{\rm in}^2 \ll a_{\rm out}^2$. Hence, $\gamma=0$. Using Eq.(\ref{eq:reff}), our formula for the relative shift becomes \begin{equation} \frac{\delta R}{R_*} = \frac{\mu R_{\rm pl}^3}{3a_{\rm out}^3} \left(\frac{m+\epsilon f}{m}\right)^2\left(\frac{ 4 + \frac{mf^2}{m+\epsilon f}- 2m \frac{a_{\rm out}}{R_*}}{ 1+m \epsilon f + m^2 \eta^2 }\right). \label{eq:shifthomo} \end{equation} We see that the relative shift is strongly sensitive to the location of the planet with respect to the disk outer edge. Figure \ref{fig:anavsnum.eps} shows $\delta R/R_*$ as predicted by Eq.(\ref{eq:shifthomo}) for typical parameters. The agreement between the above formula and the values determined numerically from Eq.(\ref{eq:dsg}) by standard root finding techniques is excellent. Both Lindblad resonances get closer to the planet. The OLRs are slightly more shifted than the ILRs, especially for small $m$ (depending on the disk aspect ratio). This can be understood by computing the relative displacement \begin{equation} \left.\frac{\delta R}{R}\right|_{\rm OLR} + \left.\frac{\delta R}{R}\right|_{\rm ILR} \approx \frac{8\mu R_{\rm pl}^3}{3m^2a_{\rm out}^3} \left(1 - \frac{m a_{\rm out}}{2 R_*} \right) \lesssim 0. \label{eq:shiftdiff} \end{equation} Figure \ref{fig:torques.eps} displays the Lindblad torques computed following the procedure described in Ward (1997). It confirms that the outer torques have a larger amplitude than the inner ones when the disk gravity is accounted for. The differential torques are larger, and inward migration should be accelerated.
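As an aside, Eqs.~(\ref{eq:gr}) and (\ref{eq:grapprox}) are straightforward to evaluate numerically. The sketch below is purely illustrative; note that SciPy's elliptic integral routines take the parameter $m=x^2$ rather than the modulus $x$, and the value of $G$ is a placeholder for the unit system in use.
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe  # argument is the parameter m = x^2

def g_R_exact(R, Sigma, a_in, a_out, G=6.674e-11):
    # exact radial field inside the uniform disk (complete elliptic integrals)
    u_in = a_in / R       # <= 1 inside the disk
    v_out = R / a_out     # <= 1 inside the disk
    term_out = (ellipe(v_out ** 2) - ellipk(v_out ** 2)) / v_out
    return 4.0 * G * Sigma * (term_out + ellipk(u_in ** 2) - ellipe(u_in ** 2))

def g_R_approx(R, Sigma, a_in, a_out, G=6.674e-11):
    # second-order approximation, which remains finite at the edges
    return -np.pi * G * Sigma * (R / a_out - (a_in / R) ** 2)
\end{verbatim}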
\begin{figure} \includegraphics[width=8.75cm]{anavsnum.eps} \caption{Relative shift $\delta R/R_*$ of the effective ILRs and OLRs due to the disk mass, for the following parameters: $a_{\rm in}=0.01$, $a_{\rm out}=1$, $\mu=0.05$ and $R_{\rm pl}/a_{\rm out}=~0.5$.} \label{fig:anavsnum.eps} \end{figure} \begin{figure} \includegraphics[width=8.7cm]{torque005.eps} \caption{Inner and outer Lindblad torques ({\it left}) and differential torques ({\it right}) when the disk mass is accounted for, compared to the case without disk gravity. The conditions are the same as for Fig. \ref{fig:anavsnum.eps}.} \label{fig:torques.eps} \end{figure} \begin{figure} \includegraphics[width=8.8cm]{vss.xfig.eps} \caption{Relative shifts of the OLRs and ILRs ({\it left}) and differential Lindblad torques ({\it right}) for different power law surface density profiles (see text). The conditions are the same as for Fig. \ref{fig:anavsnum.eps}.} \label{fig:vss.xfig.eps} \end{figure} \subsection{The case of non-homogeneous disks} We have computed the resonance shifts and associated Lindblad torques for disks with power law surface density profiles (i.e. $\Sigma \propto R^{-s}$), typical of disk models and observations. The disk gravity field $g_R$ has been determined numerically from the splitting method described in Hur\'e \& Pierens (2004). As shown in Fig. \ref{fig:vss.xfig.eps}, the resonance shifts are weakly affected by the surface density profile for $s=1/2$ and $1$. For steeper profiles, however (like $s=3/2$), the shifts are smaller. Figure \ref{fig:vss.xfig.eps} also displays the differential torques. These conserve the same shape as in the homogeneous case and are always larger than in the case without disk gravity. Their magnitude decreases as the surface density profile gets steeper, especially for low $m$ values. Changes are minor for high Fourier modes. Globally, the conclusions established in the homogeneous disk model still hold: inward migration should proceed faster due to the disk gravity. For large $s$-values, the effect of disk gravity on migration is predicted to be less and less efficient. \begin{figure}[h] \includegraphics[width=8.7cm]{mig.xfig.eps} \caption{Total differential torque versus the planet position relative to the outer edge for $\mu=0.05$ ({\it left}), and versus the relative disk mass $\mu$ for $R_{\rm pl}=0.5$ ({\it right}), for different power law surface density profiles in the disk (see text).} \label{fig:mig.xfig.eps} \end{figure} Figure \ref{fig:mig.xfig.eps} displays the differential torque as a function of $R_{\rm pl}$ and $\mu$ for various exponents $s$ in the power law surface density. We see that the torques increase as the planet orbits closer to the outer edge, and as the disk mass rises. Both effects are due to the fact that the resonances get closer to the planet as $\mu$ or $R_{\rm pl}/a_{\rm out}$ increases, however with a slightly larger shift of the OLRs with respect to the ILRs (for reasons explained above). \section{Concluding remarks} \label{sec:conc} In this paper, we have reported a general expression for the shift of the Lindblad resonances due to the disk gravity, whatever the surface density profile, and computed the associated torques. In contrast with current numerical simulations (Nelson \& Benz 2003a \& b), our analysis is not resolution-dependent, thereby enabling reliable predictions about the migration mechanism of low mass embedded objects. We have considered the effect of the disk gravity i) on the planet dynamics, and ii) on the disk itself (i.e. self-gravity).
Both effects are important and act in opposite ways. We confirm that disk gravity plays an important role in type-I migration, even for low mass disks. We find that the positions of the resonances are significantly modified and get closer to the planet when the disk mass is taken into account. The differential Lindblad torques are stronger than in the case where the disk mass is neglected (Ward 1997). Our results are also compatible with the recent simulations by Nelson \& Benz (2003a \& b): migration is accelerated when the disk gravity is accounted for in the motion of the planet only, and slowed down when self-gravity is added, although this does not stop it (assuming that all Fourier modes exist). Regarding extrasolar planets, our conclusions reinforce the necessity to seek mechanisms able to cancel inward migration. We note that the possible suppression of low $m$-modes of the OLRs (for instance if the planet evolves too close to the disk outer edge) could decrease the total torque exerted on the planet and change its drift. It would be of great interest i) to seek a general expression of $\delta R$ as a function of the surface density profile (for instance as an explicit function of the $s$-exponent), and ii) to compare the location of Lindblad resonances and associated torques as predicted here with those obtained directly by numerical simulations of fully self-gravitating disks. This would require very high numerical resolutions.
1,116,691,497,435
arxiv
\section{Introduction} \vspace{-1mm} \label{introduction} Video deblurring is a fundamental yet challenging task in low-level computer vision and graphics communities. It aims to restore the latent frames from a blurry video sequence. Serving as a preprocessing technique, video deblurring has wide applications such as video stabilization~\cite{matsushita2006full}, tracking~\cite{track}, autonomous driving~\cite{3d_det}, \emph{etc.} Hand-held devices are more and more popular in capturing videos of dynamic scenes, where prevalent depth variations, abrupt camera shakes, and high-speed object movements lead to undesirable blur in videos. To alleviate the effect of motion blur, researchers have put a lot of efforts into video deblurring. Conventional methods are mainly based on hand-crafted priors and assumptions, which limits the model capacity. Besides, the assumptions on motion blur and latent frames usually lead to complex energy functions that are difficult to solve. Also, the inaccurately estimated motion blur kernel with hand-crafted priors may easily result in severe artifacts. In the past decade, video deblurring has witnessed significant progresses with the development of deep learning. Convolutional neural network (CNN) applies a powerful model to learn the mapping from blurry videos to sharp videos under the supervision of a large-scale dataset of blurry-sharp video pairs. CNN-based methods yield impressive performance but show limitations in modeling long-range spatial dependencies and capturing non-local self-similarity. Recently, the emergence of Transformer provides an alternative to alleviate the constraints of CNN-based methods. \textbf{Firstly}, Transformer excels at modeling long-range spatial dependencies. The contextual information and spatial correlations are critical to restoring the motion blur. \textbf{Secondly}, similar and sharper scene patches from neighboring frames provide crucial cues for video deblurring. Fortunately, the self-attention module in Transformer is dedicated to calculating the correlations among pixels and capturing the self-similarity along the temporal sequence. Thus, Transformer inherently resonates with the goal of learning similar information from spatio-temporal neighborhoods. \textbf{Nevertheless}, directly using existing Transformers for video deblurring has two issues. \textbf{On one hand}, when the standard global Transformer~\cite{global_msa} is utilized, the computational cost is quadratic to the spatio-temporal dimensions. This burden is nontrivial and sometimes unaffordable. Meanwhile, the global Transformer attends to redundant $key$ elements, which may easily cause non-convergence issue~\cite{de_detr} and over-smoothing results~\cite{xiangtl_gald}. \textbf{On the other hand}, when the local window-based Transformer~\cite{liu2021swin} is used, the self-attention is calculated within position-specific windows, causing limited receptive fields. The model may neglect some $key$ elements of similar and sharper scene patches in the spatio-temporal neighborhood when fast motions are present. We summarize the main reason for the above problems, \emph{i.e.}, previous Transformers lack the guidance of motion information, when calculating self-attention. We note that the motion information can be estimated by optical flow. Exploiting an optical flow estimator to capture motion information and align neighboring frames is a common strategy in video restoration~\cite{makansi2017end,Su,xue2019video,tsp}. 
Previous flow-based methods mainly adopt the pre-warping strategy. Specifically, they employ an optical flow estimator to produce motion offsets, warp neighboring frames, and align regions corresponding to the same object but misaligned in neighboring image or feature domains. This scheme suffers from the following issues: \textbf{(i)} The interpolating operations in the warping module modify the original image information. As a result, some critical image priors such as self-similarity and sharp textures may be sacrificed. Undesirable artifacts may be introduced to the restored video and the deblurring performance may degrade. \textbf{(ii)} The frame alignment and subsequent representation aggregation are separated. This paradigm is inflexible and does not make full use of optical flow. Besides, the deblurring results are easily affected by the performance of the optical flow estimator. The robustness of this scheme can be further improved. \begin{figure*}[t] \begin{center} \begin{tabular}[t]{c} \hspace{-2mm} \includegraphics[width=0.95\textwidth]{Images/pipeline_new_v2.pdf} \end{tabular} \end{center} \vspace*{-6.5mm} \caption{\small The architecture of FGST. (a) FGST consists of an encoder, a bottleneck, and a decoder. FGST is built up by FGABs. (b) FGAB is composed of a layer normalization, an FGSW-MSA, and a feed-forward network. (c) RE aggregates the output of the last frame and the input of the current frame. Some intermediate steps between FGABs are omitted. (d) The components of residual block.} \label{fig:pipeline} \vspace{-1mm} \end{figure*} This work aims to cope with the above problems. We propose a novel method, Flow-Guided Sparse Transformer (FGST), for video deblurring. \textbf{Firstly}, we adopt Transformer instead of CNN as the deblurring model because of its advantages of capturing long-range spatial dependencies and non-local self-similarity. \textbf{Secondly}, to alleviate the limitations of previous Transformers and the pre-warping strategy, we customize Flow-Guided Sparse Multi-head Self-Attention (FGS-MSA) as shown in Fig.~\ref{fig:teaser} (a). For each $query$ element on the reference frame, FGS-MSA guided by an optical flow estimator globally samples spatially sparse $key$ elements corresponding to the same scene patch but misaligned in the neighboring frames. These sampled $key$ elements provide self-similar and highly related image prior information, which is critical to restoring motion blur. Different from original global and local Transformers, our FGST neither blindly samples redundant $key$ elements nor suffers from limited receptive fields. Meanwhile, our alignment scheme is different from the pre-warping operation mainly used by previous flow-based methods. Instead of warping the neighboring frames, our FGST samples $key$ elements in consecutive frames to calculate the self-attention. Thus, the original image prior information can be preserved. \textbf{Thirdly}, we promote FGS-MSA to Flow-Guided Sparse Window-based Multi-head Self-Attention (FGSW-MSA) as shown in Fig.~\ref{fig:teaser} (b). The feature maps are split into non-overlapping windows. Instead of sampling a single $key$ element on each neighboring frame for a single $query$ element, FGSW-MSA samples $key$ elements assigned by the optical flow corresponding to all the $query$ elements of the window on the reference frame. Thus, FGSW-MSA is more robust to accommodate pixel-level flow offset prediction deviations. 
\textbf{Finally}, our FGSW-MSA is calculated within a short temporal sequence reducing the computational cost. Hence, the receptive field of FGSW-MSA is spatially global but temporally local. Motivated by RNN-based methods~\cite{Nah,RNN_3}, we propose Recurrent Embedding (RE) to transfer information of past frames and capture long-range temporal dependencies. Our contributions can be summarized as follows: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parsep}{1pt} \setlength{\parskip}{1pt} \vspace{-4mm} \item We propose a new method, FGST, for video deblurring. To the best of our knowledge, it's the first attempt to explore the potential of Transformer in this task. \item We customize a novel self-attention mechanism, FGS-MSA, and its improved version, FGSW-MSA. \item We design an embedding scheme, RE, to transfer frame information and capture temporal dependencies. \item Our FGST outperforms SOTA methods on DVD and GOPRO datasets by a large margin and yields more visually pleasing results in real-world video deblurring. \end{itemize} \section{Related Work} \label{related_work} \subsection{Video Deblurring} In recent years, the deblurring research focus is shifting from single image deblurring~\cite{zoran2011learning,chakrabarti2016neural,purohit2020region} to the more challenging video deblurring~\cite{real_blur,matsushita2006full}. Traditional methods~\cite{li2010generating,zhang2013multi} are based on hand-crafted image priors and assumptions, which lead to limited generality and representing capacity. With the development of deep learning, recent methods are mainly CNN-based or RNN-based. \cite{dblrnet} employ 3D convolutions to model spatio-temporal relations of frames. \cite{hyun2017online} and \cite{Nah} use RNN-based models to restore the latent frames. However, CNN-based methods show limitations in capturing long-range dependencies while RNN-based methods are not sensitive to patch-level spatial correlation and motion information. \subsection{Vision Transformer} Transformer is firstly proposed by \cite{vaswani2017attention} for machine translation. Recently, Transformer has been introduced to high-level~\cite{global_msa,liu2021swin,de_detr,SETR,xcit,to_1,prtr,tc_2,tc_3,rsn} and low-level vision~\cite{ipt,cai2021mask,uformer,vsrt,cai2021learning,hu2021pseudo}. \cite{arnab2021vivit} factorize the spatial and temporal dimensions of the input video and propose a Transformer model for video classification. \cite{ipt} present a large model IPT pre-trained on large-scale datasets with a multi-task learning scheme. \cite{vsrt} propose VSR-Transformer that uses the self-attention mechanism for better feature fusion in video super-resolution, but image features are still extracted from CNN. \cite{uformer} use Swin Transformer~\cite{liu2021swin} blocks to build up a U-shaped structure for single image restoration. In \cite{vaswani2021scaling,cao2021swin,liu2021swin}, window-based local self-attention is adopted to replace the global self-attention module of the standard Transformer. However, directly using previous global or local Transformers for video deblurring leads to unaffordable computational cost or limited receptive fields. \vspace{-1mm} \subsection{Flow-based Video Restoration} \vspace{-1mm} Optical flow estimators are widely used in video restoration tasks~\cite{gast2019deep,xue2019video,gong2017motion,sun2015learning,makansi2017end,Su,tsp} to align highly related but mis-aligned frames. 
Previous flow-based video deblurring methods~\cite{xue2019video,makansi2017end,Su,tsp,gast2019deep} mainly adopt the pre-warping strategy, which firstly estimates the optical flow and then warps the neighboring frames. For example, \cite{Su} experiments with pre-warping input images based on classic optical flow methods to register them to the reference frame. Nonetheless, this flow-based pre-warping scheme separates the frame alignment and subsequent information aggregation. The original frame information is sacrificed and the guidance effect of optical flow is not fully explored. \vspace{-1.5mm} \section{Method} \vspace{-0.5mm} \subsection{Overall Architecture} Figure~\ref{fig:pipeline} (a) shows the architecture of FGST that adopts the widely used U-shaped structure, consisting of an encoder, a bottleneck, and a decoder. Figure~\ref{fig:pipeline} (b) depicts the basic unit of FGST, $i.e.$, Flow-Guided Attention Block (FGAB). The input is a blurry video $\mathbf{V}\in \mathbb{R}^{T\times 3\times H \times W}$, where $T$ denotes the sequence length, $H$ and $W$ denote the width and height of the frame. \textbf{Firstly}, FGST exploits 5 residual blocks to map $\mathbf{V}$ into tokens $\mathbf{X}_0\in \mathbb{R}^{T\times C\times H \times W}$, where $C$ denotes the channel number. The details of residual block are shown in Fig.~\ref{fig:pipeline} (d). \textbf{Secondly}, $\mathbf{X}_0$ passes through two FGABs and patch merging layers to generate hierarchical features. The patch merging layer is a strided 4$\times$4 convolution that downsamples the feature maps and doubles the channels. Thus, the tokens of the $i_{th}$ layer in the encoder are denoted as $\mathbf{X}_i\in \mathbb{R}^{T\times 2^{i}C\times \frac{H}{2^{i}} \times \frac{W}{2^{i}}}$. \textbf{Thirdly}, $\mathbf{X}_2$ passes through the bottleneck, which consists of two FGABs. \textbf{Subsequently}, following the spirit of U-Net~\cite{unet}, we customize a symmetrical decoder, which is composed of two FGABs and patch expanding layers. The patch expanding layer is a strided 2$\times$2 deconvolution that upsamples the feature maps. To alleviate the information loss caused by downsampling, skip connections are used for feature fusion between the encoder and decoder. After undergoing the decoder, the feature maps pass through 5 residual blocks to generate a residual frame sequence $\mathbf{R}\in \mathbb{R}^{T\times 3\times H \times W}$. \textbf{Finally}, the deblurred video $\mathbf{V'}\in \mathbb{R}^{T\times 3\times H \times W}$ can be derived by $\mathbf{V'} = \mathbf{V} + \mathbf{R}$. \vspace{-1.5mm} \subsection{Flow-Guided Attention Block} \vspace{-1mm} As analyzed in Sec.~\ref{introduction}, the standard global Transformer brings quadratic computational complexity with respect to the token number and easily leads to non-convergence issue and over-smoothing results. The previous window-based local Transformers suffer from the limited receptive fields. To address these problems, we propose to use optical flow as the guidance to sample $key$ elements from spatio-temporal neighborhoods when calculating the self-attention. Based on this motivation, we customize the basic unit, FGAB as shown in Fig.~\ref{fig:pipeline} (b). FGAB consists of a layer normalization (LN), a Flow-Guided Sparse Window-based Multi-head Self-Attention (FGSW-MSA), a feed-forward network (FFN), and two identity mappings. The FFN is composed of 5 consecutive residual blocks. 
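Before detailing the attention mechanism itself, the block structure just described can be summarized in a few lines of PyTorch-style code. This is only a schematic sketch: tokens are assumed to be flattened to shape (batch, tokens, channels), the residual block is an assumed convolutional variant of the one in Fig.~\ref{fig:pipeline} (d), and the FGSW-MSA module is injected as a placeholder whose definition follows in the next part.
\begin{verbatim}
import torch.nn as nn

class ResidualBlock(nn.Module):
    # assumed form (conv-ReLU-conv with a skip); the exact design may differ
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Conv1d(dim, dim, 3, padding=1),
                                  nn.ReLU(inplace=True),
                                  nn.Conv1d(dim, dim, 3, padding=1))
    def forward(self, x):  # x: (batch, tokens, channels)
        return x + self.body(x.transpose(1, 2)).transpose(1, 2)

class FGAB(nn.Module):
    # LayerNorm -> FGSW-MSA -> identity mapping, then FFN -> identity mapping
    def __init__(self, dim, fgsw_msa, num_res_blocks=5):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = fgsw_msa  # flow-guided sparse window attention (placeholder)
        self.ffn = nn.Sequential(*[ResidualBlock(dim)
                                   for _ in range(num_res_blocks)])
    def forward(self, x, flows, neighbors):
        x = x + self.attn(self.norm(x), flows, neighbors)
        return x + self.ffn(x)
\end{verbatim}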
In this part, we first introduce Flow-Guided Sparse Multi-head Self-Attention (FGS-MSA) and then its improved version, FGSW-MSA. \noindent\textbf{FGS-MSA.} The details of FGS-MSA are shown in Fig.~\ref{fig:teaser} (a). Given the $\boldsymbol{t}_{th}$ input blurry video frame $\boldsymbol{v}_t \in \mathbb{R}^{3\times H \times W}$ as the reference frame, $\boldsymbol{q}_{i,j}^t$ and $\boldsymbol{k}_{i,j}^t \in \mathbb{R}^{C}$ respectively denote the \emph{query} and \emph{key} elements at the position ($i$,$j$) on $\boldsymbol{v}_t$. FGS-MSA aims to model long-range spatial dependencies and capture non-local self-similarity. To this end, FGS-MSA produces $key$s from the $key$ elements of similar and sharper scene patches in the spatio-temporal neighborhood of $\boldsymbol{v}_t$. The $key$ sampling is directed by the motion information that is predicted by an optical flow estimator. This set of $key$ elements is corresponding to $\boldsymbol{q}_{i,j}^t$ and we denote it as \begin{equation} \small \mathbf{\Omega}_{i,j}^t = \{\boldsymbol{k}^f_{i+{\Delta x_f},j+{\Delta y_f}}\bigm||f-t| \leq r\}, \label{eq:Omega_k} \end{equation} where $r$ represents the temporal radius of the neighboring frames. $({\Delta x_f}, {\Delta y_f})$ denotes the value at position ($i, j$) of the estimated motion offset map, which is predicted from the reference frame $\boldsymbol{v}_t$ to the neighboring frame $\boldsymbol{v}_f$: \begin{equation} \small ({\Delta x_f}, {\Delta y_f}) = [F_o(\boldsymbol{v}_t, \boldsymbol{v}_f)~(i,j)], \label{eq:flow} \end{equation} where $F_o$ denotes the mapping function of the optical flow estimator and [$\cdot$] refers to the rounding operation. Subsequently, FGS-MSA can be formulated as \vspace{-0.5mm} \begin{equation} \small \text{FGS-MSA}(\boldsymbol{q}_{i,j}^t,\mathbf{\Omega}_{i,j}^t) = \sum_{n=1}^{N} \mathbf{W}_n \sum_{\boldsymbol{k}\in\mathbf{\Omega}_{i,j}^t} \mathbf{A}_{n\boldsymbol{q}_{i,j}^t\boldsymbol{k}}~ \mathbf{W'}_n ~ \boldsymbol{k}, \label{eq:OFGMSA} \end{equation} where $N$ is the number of the attention heads. $\mathbf{W}_n \in \mathbb{R}^{C\times d}$ and $\mathbf{W'}_n \in \mathbb{R}^{d\times C}$ are learnable parameters, where $d = \frac{C}{N}$ denotes the representation dimension per head. $\mathbf{A}_{n\boldsymbol{q}_{i,j}^t\boldsymbol{k}}$ is the self-attention of the $n_{th}$ head, which is formulated as \vspace{-0.5mm} \begin{equation} \mathbf{A}_{n\boldsymbol{q}_{i,j}^t\boldsymbol{k}} = \underset{\boldsymbol{k}\in\mathbf{\Omega}_{i,j}^t}{\text{softmax}} (\frac{(\boldsymbol{q}_{i,j}^t)^T\mathbf{U}_n^T\mathbf{V}_n\boldsymbol{k}}{\sqrt{d}}), \label{eq:ScaledDotProductAttn} \end{equation} where $\mathbf{U}_n$ and $\mathbf{V}_n \in \mathbb{R}^{d\times C}$ are learnable parameters. Given an input $\mathbf{V} \in \mathbb{R}^{T\times 3 \times H \times W}$, the computational cost of the global MSA~\cite{global_msa} and FGS-MSA are \vspace{-1mm} \begin{equation} \small \begin{aligned} O{(\text{global MSA})}&=4(THW)C^2 + 2(THW)^2C,\\ O{(\text{FGS-MSA})}&= 2(THW)C\big(2(r+1)C+2r+1\big).\\ \label{eq:complexity} \end{aligned} \vspace{-6mm} \end{equation} The standard global MSA leads to quadratic ($(THW)^2$) computational complexity while our proposed FGS-MSA contributes to much cheaper linear computational cost with respect to the token number $(THW)$. Detailed analysis are provided in the supplementary material (SM). \noindent\textbf{FGSW-MSA.} For each neighboring frame, FGS-MSA only samples a single $key$ element. 
When the optical flow estimation is inaccurate, the deblurring performance may be easily affected. To further improve the robustness and reliability of our method, we promote FGS-MSA to FGSW-MSA. As shown in Fig.~\ref{fig:teaser} (b), the feature maps are split into non-overlapping windows. The spatial size of each window is $M\times M$. $\mathbf{\Phi}_{i,j}^t$ denotes the set of $query$ elements in the window centered at position $(i,j)$ of the $t_{th}$ frame: \vspace{-1mm} \begin{equation} \small \mathbf{\Phi}_{i,j}^t = \{\boldsymbol{q}_{m,n}^t \big|~|m-i| \leq M/2, |n-j| \leq M/2\}. \label{eq:FGSWMSA_1} \vspace{-1mm} \end{equation} For each $\boldsymbol{q}_{m,n}^t \in \mathbf{\Phi}_{i,j}^t$, FGSW-MSA samples not only its corresponding $key$ elements in $\mathbf{\Omega}_{m,n}^t$ (Eq.~\eqref{eq:Omega_k}) assigned by the flow offsets but also the $key$ elements corresponding to the other $query$ elements in $\mathbf{\Phi}_{i,j}^t$. We denote the set of these $key$ elements as $\mathbf{\Psi}_{i,j}^t$, which can be formulated as \vspace{-1mm} \begin{equation} \small \mathbf{\Psi}_{i,j}^t = \underset{\footnotesize |m-i| \leq M/2,~|n-j| \leq M/2}{\bigcup~~\mathbf{\Omega}_{m,n}^t}. \label{eq:FGSWMSA_2} \vspace{-1mm} \end{equation} Instead of attending to a single $key$ element on each neighboring frame for a single $query$, FGSW-MSA pays attention to the $key$ elements from similar and sharper scene patches corresponding to all $query$ elements in $\mathbf{\Phi}_{i,j}^t$. The attending region is enlarged from a pixel to a window. Thus, FGSW-MSA is more robust in accommodating pixel-level flow prediction deviations. FGSW-MSA can be formulated as \vspace{-0.5mm} \begin{equation} \small \text{FGSW-MSA}(\mathbf{\Phi}_{i,j}^t,\mathbf{\Psi}_{i,j}^t) = \{ \text{FGS-MSA}(\boldsymbol{q},\mathbf{\Psi}_{i,j}^t) | \boldsymbol{q} \in \mathbf{\Phi}_{i,j}^t \}. \vspace{-0.5mm} \end{equation} Given the input $\mathbf{V}$, the computational complexity is \vspace{-0.5mm} \begin{equation} \small O(\text{FGSW-MSA}) = 2(THW)C\big(C+(2r+1)(C+M^2)\big). \label{eq:complexity_2} \vspace{-0.5mm} \end{equation} The computational cost of FGSW-MSA is linear with respect to the number of tokens ($THW$). Eqs.~\eqref{eq:complexity} and \eqref{eq:complexity_2} reveal the high efficiency and resource economy of our FGST. Please refer to the SM for more detailed analyses. \begin{figure}[h] \begin{center} \begin{tabular}[t]{c} \hspace{-3mm} \includegraphics[width=0.48\textwidth]{Images/flow_align_gopro_new.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\small The pre-warping strategy mainly adopted by previous video deblurring methods sacrifices the input image information.} \label{fig:pre_warping} \vspace{-3mm} \end{figure} \begin{figure*}[t] \begin{center} \begin{tabular}[t]{c} \hspace{-2mm} \includegraphics[width=1.0\textwidth]{Images/dvd_compare.pdf} \end{tabular} \end{center} \vspace*{-5mm} \caption{\small Visual comparisons between FGST and SOTA methods on DVD dataset~\cite{Su}.
Please zoom in for a better view.} \label{fig:dvd} \vspace{-3mm} \end{figure*} \begin{table*}[t] \begin{center} \setlength{\tabcolsep}{2.5pt} \scalebox{0.62}{ \begin{tabular}{l c c c c c c c c c c} \toprule \rowcolor{color3} Method & Kim and Lee & Gong \emph{et al.} & Su \emph{et al.} & Kim \emph{et al.} & STFAN & Xiang \emph{et al.} & TSP & Suin \emph{et al.} & ARVo & \textbf{FGST} \\ \rowcolor{color3} &\cite{Kim} &\cite{gong2017motion} &\cite{Su} &\cite{hyun2017online} &\cite{stfan} &\cite{Xiang} &\cite{tsp} &\cite{Suin} &\cite{arvo} &\textbf{(Ours)} \\ \midrule PSNR~$\textcolor{black}{\uparrow}$ &26.94 &28.27 &30.01 &29.95 &31.15 &31.68 &32.13 &32.53 &32.80 &\textbf{33.36}\\ SSIM~$\textcolor{black}{\uparrow}$ &0.816 &0.846 &0.888 &0.869 &0.905 &0.916 &0.927 &0.947 &0.935 &\textbf{0.950}\\ \bottomrule \end{tabular}} \vspace{-3mm} \caption{\small Video deblurring results compared with other methods on the DVD benchmark \cite{Su}. FGST achieves SOTA results.} \label{tab:dvd} \end{center}\vspace{-4mm} \end{table*} \begin{figure*}[h] \begin{center} \begin{tabular}[t]{c} \hspace{-2.5mm} \includegraphics[width=\textwidth]{Images/gopro_compare_new.pdf} \end{tabular} \end{center} \vspace*{-4mm} \caption{\small Visual comparisons between our FGST and SOTA methods on GOPRO dataset~\cite{GoPro}. Zoom in for a better view.} \label{fig:gopro} \vspace{-3mm} \end{figure*} \begin{table*}[t] \begin{center} \setlength{\tabcolsep}{2.5pt} \scalebox{0.625}{ \begin{tabular}{l c c c c c c c c c c c} \toprule \rowcolor{color3} Method & Gong \emph{et al.} & Kim \emph{et al.} &EDVR &Su \emph{et al.} & STFAN &Nah \emph{et al.} &Tao \emph{et al.} &TSP & Suin \emph{et al.} & \textbf{FGST} \\ \rowcolor{color3} &\cite{gong2017motion} &\cite{hyun2017online} &\cite{edvr} &\cite{Su} &\cite{stfan} &\cite{Nah} &\cite{Tao} &\cite{tsp} &\cite{Suin} &\textbf{(Ours)} \\ \midrule PSNR~$\textcolor{black}{\uparrow}$ &26.06 &26.82 &26.83 &27.31 &28.59 &29.97 &30.29 &31.67 &32.10 &\textbf{32.90}\\ SSIM~$\textcolor{black}{\uparrow}$ &0.863 &0.825 &0.843 &0.826 &0.861 &0.895 &0.901 &0.928 &0.960 &\textbf{0.961}\\ \bottomrule \end{tabular}} \vspace{-4mm} \caption{\small Video deblurring results compared with other methods on the GOPRO dataset \cite{GoPro}. FGST achieves SOTA results.} \label{tab:gopro} \end{center}\vspace{-5.5mm} \end{table*} \begin{figure}[h] \begin{center} \begin{tabular}[t]{c} \hspace{-2mm} \includegraphics[width=0.48\textwidth]{Images/fea_compare_new_small_new.pdf} \end{tabular} \end{center} \vspace{-6mm} \caption{\small We visualize the last feature maps of the deblurring models with and without FGSW-MSA. The model using our FGSW-MSA pays more attention to similar but misaligned scene patches.} \label{fig:fea} \vspace{-5mm} \end{figure} \noindent\textbf{Discussion.} \textbf{(i)} Our FGSW-MSA enjoys much larger receptive fields than W-MSA~\cite{liu2021swin}. Specifically, according to Eq.~\eqref{eq:Omega_k}, \eqref{eq:flow}, \eqref{eq:FGSWMSA_1}, and \eqref{eq:FGSWMSA_2}, the receptive field of FGSW-MSA can cover the whole input feature map when the estimated flow offset is large enough. In practice, the motion offset predicted by the optical flow estimator between two adjacent frames can reach 40 and 38 pixels on GOPRO and DVD datasets. The input spatial size is 256$\times$256. $M$ is set to 3. Thus, the receptive field of FGSW-MSA can reach 83$\times$83 (83 = 40$\times$2+3) and 79$\times$79 while that of W-MSA is still 3$\times$3. 
\textbf{(ii)} Unlike previous flow-based methods that adopt the pre-warping operation sacrificing the original image information as shown in Fig.~\ref{fig:pre_warping}, our FGST combines motion cues with self-attention calculation. Thus, the original image information can be preserved and the guidance effect of the optical flow can be further explored. In addition, our flow-guided scheme enjoys higher flexibility and robustness because adjacent FGABs sample contents independently. Please refer to the SM for detailed discussions. \subsection{Recurrent Embedding} Our FGSW-MSA is calculated within a short temporal sequence for the computational complexity consideration (approximately linear to the temporal radius $r$ in Eq.~\eqref{eq:complexity_2} ). Therefore, the receptive field of FGSW-MSA is temporally local and overlooking the distant frames limits the video deblurring performance. To further capture more robust long-range temporal dependencies, we propose Recurrent Embedding (RE) mechanism. RE is motivated by Recurrent Neural Network (RNN). More specifically, as shown in Fig.~\ref{fig:pipeline} (c), we exploit RE in each Transformer layer to transfer information from past frames and establish long-range temporal correlations. With RE, the FGAB is calculated in a recurrent manner for $T$ time steps. $\boldsymbol{y}^l_t$, $\boldsymbol{e}^l_t, \boldsymbol{q}^l_t$, $\boldsymbol{k}^l_t$ respectively denote the output, RE, $query$ elements, and $key$ elements of the $l_{th}$ FGAB in the $t_{th}$ time step. We have \vspace{-1.5mm} \begin{equation} \small \begin{aligned} \boldsymbol{e}_t^l &= f_w(\boldsymbol{y}_{t-1}^{l}), ~~\boldsymbol{q}_t^l = f_c([\boldsymbol{e}_t^l, \boldsymbol{y}_t^{l-1}]), \\ \boldsymbol{k}_t^l &= \underset{\footnotesize |j-t| \leq r}{\bigcup~~\boldsymbol{y}_j^{l-1}}, ~~\boldsymbol{y}_t^l = \text{FGAB}(\boldsymbol{q}_t^l, \boldsymbol{k}_t^l), \\ \label{eq:RE_1} \end{aligned} \vspace{-7.5mm} \end{equation} where $f_w(\cdot)$ represents the spatial warping that align the feature map at $t$ and ${t-1}$ time step, [$\cdot$,$\cdot$] is the concatenating operation, $f_c(\cdot)$ denotes 3$\times$3 convolution to aggregate the recurrent embedding $\boldsymbol{e}_t^l$ and the output from last FGAB layer $\boldsymbol{y}_t^{l-1}$, and $\boldsymbol{y}_t^l = \text{FGAB}(\boldsymbol{q}_t^l, \boldsymbol{k}_t^l)$ is formulated in details as \vspace{-1.5mm} \begin{equation} \small \begin{aligned} \boldsymbol{o}_t^l &= \text{FGSW-MSA}(\text{LN}(\boldsymbol{q}_t^l), \text{LN}(\boldsymbol{k}_t^l) )+ \boldsymbol{q}_t^l, \\ \boldsymbol{y}_t^l &= \text{FFN}(\boldsymbol{o}_t^l)+\boldsymbol{o}_t^l, \\ \label{eq:RE_2} \end{aligned} \vspace{-5.5mm} \end{equation} where LN denotes the layer normalization and FFN refers to the Feed Forward Network. Our RE sequentially propagates the information from the first frame to the last frame, thus capturing reliable long-range temporal dependencies. \section{Experiment} \vspace{0mm} \subsection{Datasets} \textbf{DVD.} The DVD~\cite{Su} dataset consists of 71 videos with 6,708 blurry-sharp image pairs. It is divided into \texttt{train/test} subsets with 61 videos (5,708 image pairs) and 10 videos (1,000 image pairs). DVD is captured with mobile phones and DSLR at a frame rate of 240 fps. \noindent\textbf{GOPRO.} The GOPRO~\cite{GoPro} benchmark is composed of over 3,300 blurry-sharp image pairs of dynamic scenes. It is obtained by a high-speed camera. The training and testing subsets are split in proportional to 2:1. 
\noindent\textbf{Real Blurry Videos.} To validate the generality of FGST, we evaluate models on real blurry datasets collected by \cite{real_blur}. Because the ground truth (GT) is inaccessible, we only compare the visual results of FGST and other methods. \vspace{-2mm} \subsection{Implementation Details} We implement FGST in PyTorch. We adopt a pre-trained SPyNet~\cite{spynet} as the optical flow estimator. All the modules are trained with the Adam~\cite{adam} optimizer ($\beta_1$ = 0.9 and $\beta_2$ = 0.999) for 600 epochs. The initial learning rate is set to 2$\times$10$^{-4}$ and 2.5$\times$10$^{-5}$ respectively for the deblurring model and the optical flow estimator. The learning rate is halved every 200 epochs during the training procedure. Patches of size 256$\times$256 cropped from training frames are fed into the models. The batch size is 8. The temporal radius $r$ of the neighboring frames is set to 1. The sequence length $T$ is set to 9 in training and to the whole video length in testing. Horizontal and vertical flips are performed for data augmentation. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM)~\cite{ssim} are adopted as the evaluation metrics. The models are trained with 8 V100 GPUs. The $\mathcal{L}_1$ loss between the restored and GT videos is used for supervision. \begin{figure*}[t] \begin{center} \begin{tabular}[t]{c} \hspace{-2mm} \includegraphics[width=0.98\textwidth]{Images/real_compare.pdf} \end{tabular} \end{center} \vspace*{-5mm} \caption{\small Visual results of FGST and SOTA methods on the real blurry videos of \cite{real_blur}. Please zoom in for a better view.} \label{fig:real} \vspace{-2mm} \end{figure*} \subsection{Quantitative Results} \vspace{-1mm} The comparisons between FGST and other SOTA methods are listed in Tabs.~\ref{tab:dvd}, \ref{tab:gopro}, and \ref{tab:efficiency}. As can be observed: \textbf{(i)} Our FGST outperforms SOTA methods by a large margin on the two benchmarks. Specifically, as shown in Tab.~\ref{tab:dvd}, our FGST surpasses the recent best algorithm ARVo~\cite{arvo} by 0.56 dB on DVD. As reported in Tab.~\ref{tab:gopro}, our method exceeds Suin \emph{et al.}~\cite{Suin} and TSP~\cite{tsp} by 0.80 dB and 1.23 dB respectively on GOPRO. These results demonstrate the effectiveness of our method. \textbf{(ii)} Tab.~\ref{tab:efficiency} exhibits efficiency comparisons of different algorithms on GOPRO. The FLOPS are measured at an input size of 1$\times$3$\times$240$\times$240. The running time per frame is measured at a spatial size of 1,280$\times$720 on the same RTX 2080 GPU. Our FGST is more cost-effective and achieves a better trade-off between PSNR, Params, FLOPS, and inference speed. For instance, when compared to TSP~\cite{tsp}, FGST only requires 59.9\% (9.70 / 16.19) of the Params and 36.8\% (131.6 / 357.9) of the FLOPS while achieving a 1.23 dB improvement and a 2.34$\times$ (579.7 / 247.8) speed-up. This evidence suggests the promising efficiency advantage of our proposed FGST. \vspace{-1mm} \subsection{Qualitative Results} \vspace{-1mm} We provide visual comparisons on DVD, GOPRO, and real blurry videos as shown in Figs.~\ref{fig:dvd}, \ref{fig:gopro}, and \ref{fig:real}. Previous methods are less effective at restoring abrupt motion blur. They either yield over-smoothed images that sacrifice fine textural details and structural contents, or introduce redundant blotchy textures and chromatic artifacts when fast motion exists.
In contrast, our FGST excels at modeling long-range dependencies and exploits motion information to guide the self-attention module to capture non-local self-similarity in spatio-temporal neighborhoods. As a result, FGST is capable of restoring structural contents and textural details while preserving spatial smoothness of the homogeneous regions. Supplementary file provides more visual results \begin{table*}[t]\vspace{-3mm} \subfloat[Break-down ablation study toward better performance. \label{tab:breakdown}]{ \tablestyle{4pt}{1.05}\scalebox{0.67}{ \begin{tabular}{c c c c c} \toprule \rowcolor{color3}~~Baseline~~ &~~RE~~ &~~FGSW-MSA~~~~~~ &~~~~PSNR~~$\textcolor{black}{\uparrow}$~~~ &~~~~SSIM~~$\textcolor{black}{\uparrow}$~~~\\ \midrule $\checkmark$ & & &31.18~\colorbox{color3}{(+0.00\%)} &0.924~\colorbox{color3}{(+0.00\%)}\\ $\checkmark$ &$\checkmark$ & &32.34~\colorbox{color3}{(+3.72\%)} &0.943~\colorbox{color3}{(+2.06\%)}\\ $\checkmark$ & &$\checkmark$ &32.84~\colorbox{color3}{(+5.32\%)} &0.957~\colorbox{color3}{(+3.57\%)}\\ $\checkmark$ &$\checkmark$ &$\checkmark$ &\bf 32.90~\colorbox{color3}{(+5.52\%)} &\bf 0.961~\colorbox{color3}{(+4.00\%)}\\ \bottomrule \end{tabular}}}\hspace{3.7mm}\vspace{-1mm} \subfloat[\small Ablation study of using different self-attention mechanisms.\label{tab:attention}]{ \tablestyle{4pt}{1.05}\scalebox{0.78}{ \begin{tabular}{l c c c c c} \toprule \rowcolor{color3} Method & ~~Baseline~~ & ~~Global MSA~~ &~~Local W-MSA~~ & ~~FGS-MSA~~ & ~~FGSW-MSA~~ \\ \midrule PSNR &31.18 &29.20 &31.71 &\bf 32.48 &\bf 32.84\\ SSIM &0.924 &0.880 &0.938 &\bf 0.944 &\bf 0.957\\ Params &5.15 &64.40 &8.26 &9.69 &9.70\\ FLOPS &43.93 &138.68 &108.09 &125.08 &125.67\\ \bottomrule \end{tabular}}}\vspace{-6mm} \subfloat[\small Efficiency comparisons with SOTA CNN-based methods. \label{tab:efficiency}]{ \tablestyle{2.5pt}{1.05}\scalebox{0.78}{\begin{tabular}{l c c c c c} \toprule \rowcolor{color3}Method &~~~~EDVR~~~~ &~~~Su \emph{et al.}~~~ &~~~STFAN~~~ &~~~~TSP~~~~ &~FGST (Ours)~ \\ \midrule PSNR &26.83 & 27.31 & 28.59 &31.67 &32.90 \\ Params (M) &23.60 &15.30 &5.37 &16.19 &9.70\\ FLOPS (G) &159.2 &38.7 &35.4 &357.9 &131.6\\ Time (ms/f) &268.5 &133.2 &145.9 &579.7 &247.8\\ \bottomrule \end{tabular}}}\hspace{3.2mm} \subfloat[\small FGSW-MSA \emph{v.s.} FGDeConv and DeConv on GOPRO dataset. \label{tab:deformable}]{ \tablestyle{2.5pt}{1.05}\scalebox{0.78}{\begin{tabular}{l c c c c} \toprule \rowcolor{color3}Method~~~~ & ~~~~Baseline~~~~ & ~~~~+ DeConv~~~~ &~~~~+ FGDeConv~~~~ &~+ FGSW-MSA~ \\ \midrule PSNR &31.18 &32.35 &32.59 &\bf 32.84\\ SSIM &0.924 &0.941 &0.954 &\bf 0.957 \\ Params (M) &5.15 &8.34 &9.78 &9.70\\ FLOPS (G) &43.93 &108.38 &125.96 &125.67\\ \bottomrule \end{tabular}}}\vspace{-6.5mm} \subfloat[\small Pre-warping \emph{v.s.} our FGSW-MSA. 
\label{tab:model_size}]{ \tablestyle{2.5pt}{1.05}\scalebox{0.73}{\begin{tabular}{l c c c} \toprule \rowcolor{color3}Method~ &~Local W-MSA~ &~pre-warping~ &~FGSW-MSA~ \\ \midrule PSNR &31.71 &32.54 &\textbf{32.84}\\ SSIM &0.938 &0.953 &\textbf{0.957}\\ \bottomrule \end{tabular}}}\hspace{3.7mm} \subfloat[\small Ablation study of window sizes.\label{tab:win_size}]{ \tablestyle{4.8pt}{1.05}\scalebox{0.73}{ \begin{tabular}{c c c c c c } \toprule \rowcolor{color3} ~Win Size~ &~1$\times$1~ &~2$\times$2~ &~3$\times$3~ &~4$\times$4~ &~5$\times$5~ \\ \midrule PSNR &32.48 &32.62 &\bf 32.90 &32.71 &32.66 \\ SSIM &0.944 &0.955 &\bf 0.961 &0.955 &0.957 \\ \bottomrule \end{tabular}}}\hspace{3.7mm} \subfloat[\small Ablation study of optical flow estimators.\label{tab:flow}]{ \tablestyle{2.2pt}{1.05}\scalebox{0.73}{ \begin{tabular}{l c c c c c} \toprule \rowcolor{color3}Method~&~~Baseline~~ &~~FlowNet~~ &~~SPyNet~~ &~~PWC-Net~~ \\ \midrule PSNR~$\textcolor{black}{\uparrow}$ &31.18 &32.85 &32.90 &\bf 33.03\\ SSIM~~$\textcolor{black}{\uparrow}$ &0.924 &0.960 &0.961 &\bf 0.964\\ \bottomrule \end{tabular}}}\vspace{-3mm} \caption{\small Ablation studies. The models are trained and tested on GOPRO. PSNR, SSIM, Params, FLOPS, and inference time are reported.} \label{tab:ablations}\vspace{-5mm} \end{table*} \vspace{-10mm} \subsection{Ablation Study} \vspace{-1mm} In this part, we conduct ablation studies on the GOPRO dataset. The baseline model is derived by directly removing all the proposed RE and FGSW-MSA modules from our FGST. \noindent\textbf{Break-down Ablation.} We first conduct a break-down ablation to investigate the effect of each component toward better performance. The results are reported in Tab.~\ref{tab:breakdown}. The baseline model yields 31.18 dB. After applying RE and FGSW-MSA respectively, the deblurring model achieves 1.16 dB and 1.66 dB improvements. When using both RE and FGSW-MSA modules, the model gains 1.72 dB. The results suggest the effectiveness of RE and FGSW-MSA. \noindent\textbf{Self-Attention Mechanism.} We compare our self-attention mechanisms with other competitors in Tab.~\ref{tab:attention}. The baseline model yields 31.18 dB while costing 5.15M Params and 43.93G FLOPS. \textbf{(i)} When using global MSA~\cite{global_msa}, the feature maps are downsampled to $\frac{1}{4}$ of the size and the channel dimension is increased by 4 times to avoid out-of-memory errors and information loss. The deblurring model degrades by 1.98 dB while costing 12.5$\times$ Params and 3.2$\times$ FLOPS. This is mainly because global MSA attends to too many redundant $key$ elements, requiring a large amount of computation and memory resources while leading to ambiguous gradients for input features~\cite{de_detr} and thus convergence problems. Meanwhile, features from global aggregation tend to over-smooth the predictions of small patterns~\cite{xiangtl_gald}. \textbf{(ii)} When using local W-MSA~\cite{liu2021swin}, the model gains only 0.53 dB while adding 3.11M Params and 64.16G FLOPS. The improvement is limited while the additional burden is nontrivial. That is because W-MSA calculates self-attention within position-specific windows, whose receptive field is limited. \textbf{(iii)} Our FGS-MSA exploits the optical flow as guidance to sample spatially sparse $key$ elements from similar and sharper regions in the spatio-temporal neighborhood for each $query$ on the reference frame. Compared to global MSA, the $key$ elements of FGST are fewer but highly related to the selected $query$.
Thus, when using FGS-MSA, the model gains 1.30 dB while adding 4.54M Params and 81.15G FLOPS. These results show that FGS-MSA requires fewer resources but achieves better performance than global MSA. When exploiting FGSW-MSA, the model yields an improvement of 1.72 dB while adding 4.55M Params and 87.69G FLOPS. This evidence suggests: \textbf{(a)} FGSW-MSA is more effective than W-MSA in fast motion blur restoration. \textbf{(b)} FGSW-MSA is more reliable than FGS-MSA and achieves better deblurring performance. In addition, we conduct a visual analysis on three adjacent frames by visualizing the last feature map of models with and without (w/o) FGSW-MSA in Fig.~\ref{fig:fea}. Deeper color indicates larger weights. It can be observed that the model without FGSW-MSA responds weakly to similar regions in the neighboring frames. In contrast, the model equipped with FGSW-MSA generates much stronger responses to highly related but misaligned scene patches. Moreover, FGST pays more attention to the regions with fast motion blur. These results demonstrate the effectiveness of FGSW-MSA in capturing non-local self-similarity in dynamic scenes. \noindent\textbf{Flow-Guided Deformable Convolution.} We compare our FGSW-MSA with deformable convolution (DeConv)~\cite{edvr} and the recent flow-guided deformable convolution (FGDeConv)~\cite{chan2021basicvsr++} in Tab.~\ref{tab:deformable}. Our proposed FGSW-MSA achieves the most significant improvement. This mainly stems from the fact that FGSW-MSA excels at capturing non-local similarity and long-range dependencies, which are limitations of CNN-based methods. \noindent\textbf{Pre-warping Strategy.} We compare our FGSW-MSA with the pre-warping strategy mainly adopted by previous methods in Tab.~\ref{tab:model_size}. We start from the baseline model equipped with W-MSA. It can be observed that using FGSW-MSA is 0.30 dB higher in PSNR and 0.004 higher in SSIM than using the pre-warping operation. This performance gap is mainly because the model using our FGSW-MSA can learn from non-corrupted representations of the input video and further explore the guidance effect of the optical flow. \noindent\textbf{Window Size.} We change the window size of FGSW-MSA to study its effect. The results are listed in Tab.~\ref{tab:win_size}. We start by setting the window size at 1$\times$1 and then gradually increase it. The performance achieves its maximum when the window size is 3$\times$3. Thus, the optimal setting is 3$\times$3. \noindent\textbf{Optical Flow Estimator.} We adopt three representative optical flow estimators (FlowNet~\cite{flownet}, SPyNet~\cite{spynet}, and PWC-Net~\cite{pwcnet}) to investigate their effects in Tab.~\ref{tab:flow}. \textbf{(i)} No matter what flow estimator is used, FGST reliably outperforms the baseline model, suggesting the robustness and generality of our method. \textbf{(ii)} The performance of FGST can be further improved by using a better flow estimator. To be specific, when equipped with PWC-Net, FGST is 0.18 dB and 0.13 dB higher than when using FlowNet and SPyNet, respectively. These results demonstrate that FGST can directly and conveniently enjoy the benefits of SOTA optical flow estimators. \vspace{-2mm} \section{Conclusion} \vspace{-1mm} In this paper, we propose a novel Transformer-based method, FGST, for video deblurring. In FGST, we customize a self-attention mechanism, FGS-MSA, and then promote it to FGSW-MSA.
Guided by an optical flow estimator, FGSW-MSA samples spatially sparse but highly related $key$ elements corresponding to similar and sharper scene patches in the spatio-temporal neighborhoods. In addition, we present an embedding scheme, RE, to transfer information from past frames and capture long-range temporal dependencies. Comprehensive experiments demonstrate that our FGST significantly surpasses SOTA methods and generates more visually pleasant results in real video deblurring. \textbf{Acknowledgements:} This work is partially supported by the NSFC fund (61831014) and the Shenzhen Science and Technology Project under Grants (CJGJZD20200617102601004, JSGG20210802153150005).
\section*{Introduction} The enormous amount of data produced by Next-Generation Sequencing (NGS) technologies opens the way for a more comprehensive characterization of mechanisms at the molecular level, which are at the basis of cellular life and may have a role in the occurrence and progression of disorders and diseases. This may help to answer fundamental questions for biological and clinical research, such as how the interactions between cellular components and the chromatin structure may affect gene activity, or to what extent complex diseases such as diabetes or cancer may involve specific (epi)genomic traits. Indexing NGS data is an important problem in this context \cite{ceri}. In particular, an index is a data structure that enables efficient retrieval of stored objects. Indexing strategies used in NGS allow space-efficient storage of biological sequences in a {\it full-text index} that enables fast querying, in order to return exact or approximate string matches. Popular full-text index data structures include variants of suffix arrays \cite{AbouelhodaKO04}, the FM-index, based on the Burrows–Wheeler transform (BWT) and some auxiliary tables \cite{FerraginaM05}, and hash tables \cite{pone}. The choice of a specific index structure is often a trade-off between query speed and memory consumption. For example, hash tables can be very fast but their memory footprint is sometimes prohibitive for large string collections \cite{schmit17}. Here we address the problem of computing the BWT in a distributed setting, exploiting Big Data technologies such as Apache Spark \cite{spark}. In particular, previous research has addressed the BWT computation in a MapReduce \cite{DeanG10} fashion based on Apache Hadoop \cite{menon11}. The use of Spark and Hadoop together, as proposed here, has been shown to notably improve performance in several application contexts, due to the optimal exploitation of both memory and cloud resources. Another available tool relying on Hadoop and BWT computation is BigBWA \cite{DAbuinPPA15}. However, the BigBWA parallelism is intended only to split the input sequences and then apply another existing framework, i.e., BWA \cite{LiD09}, in order to align them via BWT. Therefore, in BigBWA the BWT computation itself is not based on Big Data technologies. We propose two algorithms for the BWT computation that fully exploit the parallelism afforded by a cloud computing environment, combining the advantages of the MapReduce paradigm and Spark Resilient Distributed Datasets (RDDs). Validation results obtained on real biological datasets, including genomic and proteomic data, are provided, showing that our approach improves the performance of BWT computation with respect to its competitors. \subsection*{Preliminaries} Let $S$ be a string of $n$ characters defined on the alphabet $\Sigma$. We denote by $S(i)$ the $i$-th character in $S$ and by $S_i$ its $i$-th suffix. We recall the following basic notions. \paragraph{BWT} The Burrows-Wheeler transform of $S$ is useful in order to rearrange it into runs of similar characters. This may have advantages both for indexing and for compressing $S$ more efficiently. The BWT applied to $S$ returns: \begin{itemize} \item a permutation $bwt(S)$ of $S$, obtained by sorting all its circular shifts in lexicographic order, and then extracting the last column; \item the index (0-based) $I$ of the row containing the original string $S$. \end{itemize} \noindent Among the most important properties of the BWT is that it is reversible.
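As a concrete (non-distributed) reference, the definition above can be turned directly into a few lines of Python. This naive construction, which sorts all rotations explicitly, is given only to fix ideas; it is of course not the approach pursued in this work.
\begin{verbatim}
# Naive reference construction of the BWT: sort all circular shifts of S
# and read off the last column.  S is assumed to end with the sentinel '$'
# (which also sorts before uppercase letters in ASCII).
def bwt_naive(S: str):
    n = len(S)
    rotations = sorted(S[i:] + S[:i] for i in range(n))
    last_column = "".join(rot[-1] for rot in rotations)
    I = rotations.index(S)     # 0-based row holding the original string
    return last_column, I
\end{verbatim}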
Figure \ref{fig::bwt} shows an example of BWT for the string $S$=$BANANA\$$. In particular, $bwt(S)=BNN\$AAA$, and $I=3$. \paragraph{Suffix Array} The suffix array $SA$ of $S$ is defined as an array of integers providing the starting positions of the suffixes of $S$ in lexicographical order. Therefore, an entry $SA[i]$ contains the starting position of the $i$-th suffix of $S$ in lexicographic order. Figure \ref{fig::sa} shows the Suffix Array for the same example of BWT. \paragraph{Inverse Suffix Array} The Inverse Suffix Array $ISA$ of $S$ is such that $ISA[i] = j$ means that the rank of the suffix $i$ is $j$, i.e., $SA[j] = i$. \section*{Implementation} We use the notation $[i,j]$ for denoting the set $\{i,\dots,j\}$. Let $S \in \Sigma^*$ be a string of length $n$. For $i \in [0, n-1]$, let $S_i$ denote the suffix of $S$ starting in position $i$ and let $S_{i,j}$ denote the sub-string $S[i]S[i+1]\dots S[j]$. In order for the BWT calculation to be easily reversed, we assume that the string $S$ always ends with a \$ sentinel character, i.e., the smallest character in $\Sigma$. In addition, let us assume that $S[i]= \$$ for $i > n$. \noindent In the following, the two algorithms proposed here are described in detail. \subsection*{Sorting based MapReduce (SMR)} The first phase of the algorithm SMR is suffix partitioning. The goal is to partition the set of possible suffixes into sub-sets $K_{1},..., K_{r},$ where $r$ is a positive integer representing the desired number of partitions (i.e., the number of nodes within the cluster). The partitioning has to comply with the following property. \noindent \textit{{\bf Property 1} For each pair $K_{i},K_{j}$, the suffixes in $K_{i}$ are either all smaller or all greater in lexicographical order than those in $K_{j}$.} To this aim, suffixes are discriminated by their first $k$ characters (i.e., $k$-mers in the following). Here, $k$ is a positive integer chosen in advance. The algorithm SMR maps each suffix $S_{i}$ of $S$ into a key-value tuple {\it (k-mer, i)} (\textsc{Procedure 1}). Therefore, suffixes are partitioned according to the key, and then the tuples are sorted with respect to this key to maintain the order of the partitions. \paragraph{Partitioning.} A fairly important issue is how to implement partitioning in practice. An efficient technique is to sample the set of keys and then determine ranges based on the desired number of partitions. Then the set of keys is partitioned according to the determined ranges, so that the partitions are balanced. In this way, it is possible to partition and sort at the same time. In \cite{menon11} this technique is used and optimized for the case of genomic sequences. It is also provided by the Spark framework \cite{zaharia2012resilient} through the RangePartitioner functionality \footnote{\texttt{https://spark.apache.org/docs/2.3.0/api/java/org/apache/spark/RangePartitioner.html}}.
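To illustrate this partitioning-plus-sorting step, a minimal PySpark sketch is reported below. The function and variable names are ours and the snippet is only meant to show how \texttt{sortByKey} (which relies on a RangePartitioner) realizes Property 1; it does not reproduce the actual implementation.
\begin{verbatim}
# Schematic PySpark sketch of the SMR partitioning step (Procedure 1).
from pyspark import SparkContext

def smr_partition(sc, S, k, r):
    n = len(S)
    pad = S + "$" * k                    # sentinel padding, $ is the smallest
    suffixes = sc.parallelize(range(n))  # one record per suffix index
    # map each suffix to (k-mer, index), as in Procedure 1
    tuples = suffixes.map(lambda i: (pad[i:i + k], i))
    # sortByKey uses a RangePartitioner, so the r partitions are balanced
    # and ordered with respect to each other (Property 1)
    return tuples.sortByKey(numPartitions=r)

# Example usage (local mode):
#   sc = SparkContext(master="local[*]", appName="smr")
#   parts = smr_partition(sc, "CATTATTAGGA", k=3, r=4)
\end{verbatim}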
\noindent \\ \textit{Example:}\\ \-\hspace{1.5cm}$S = $\texttt{ C\quad A\quad T\quad T\quad A\quad T\quad T\quad A\quad G\quad G\quad A} \\ \-\hspace{2.1cm}\texttt{ 0\quad 1\quad 2\quad 3\quad 4\quad 5\quad 6\quad 7\quad 8\quad 9\quad 10}\\ \-\hspace{2.1cm}\texttt{ \quad\quad\quad *\quad * \quad\quad *\quad * }\\\\ \-\hspace{1.0cm} For $k = 3$ \\ \-\hspace{1.2cm} \begin{tabular}{rr} \rule[-2ex]{0pt}{4ex} & Sorted partitions\\ (\texttt{CAT},0) & \{(\texttt{\$\$\$},11),(\texttt{A\$\$},10) \} \\ (\texttt{ATT},1) & \{(\texttt{ATT},1),(\texttt{ATT},4),(\texttt{AGG},7) \} \\ (\texttt{TTA},2) & \{(\texttt{CAT},0),(\texttt{GA\$},9),(\texttt{GGA},8) \} \\ (\texttt{TAT},3) & \{(\texttt{TTA},2),(\texttt{TAT},3),(\texttt{TTA},5) \} \\ (\texttt{ATT},4) \\ (\texttt{TTA},5) \\ (\texttt{TAG},6) \\ (\texttt{AGG},7) \\ (\texttt{GGA},8) \\ (\texttt{GA\$},9) \\ (\texttt{A\$\$},10) \\ (\texttt{\$\$\$},11) \\ \end{tabular} \begin{algorithm}[H] \caption{Preparation for Partitioning} \begin{algorithmic}[1] \STATE\textbf{procedure }{\textsc{MAP}}{(\textit{$S_{i}$})} \STATE \textbf{return} ($k$-mer of $S_{i}$, $i$) \end{algorithmic} \label{alg:alg_iterative_radix_sort_partitioning} \end{algorithm} The second stage consists of completing the work by ordering suffixes partition by partition (see \textsc{Procedure 2}). The task is therefore to collect the suffix indexes $p$ in each partition and to produce, by a single procedure, what is here called the \textit{Partial SA}. The idea is to reduce the problem of calculating the partial SA $SA_{p}$ to the calculation of another SA, $SA_{t}$, that refers to a new string $T$ built from $p$. The order of suffixes in $T$ implicitly defines the order of the suffixes indexed by $p$ in $S$. This idea underlies the recursive step used in the DC3 \cite{karkkainen2006linear} algorithm. \begin{algorithm}[H] \caption{Partial Suffix Arrays Computation} \begin{algorithmic}[1] \STATE\textbf{procedure }{\textsc{CALCULATE PARTIAL SA}}{(\textit{p})} \STATE Calculate $l_{max}$, the maximum distance between two elements in $p$ \\ \STATE Generate $\mathcal{L}$, the list of substrings $S_{p[i],\,p[i]+l_{max}}$ for each index $i$ of $p$ \\ \STATE Sort $\mathcal{L}$ using \textsc{AlgorithmX} \\ \STATE \textbf{return} \textit{$SA_{p}$} \end{algorithmic} \label{alg:alg_iterative_radix_sort} \end{algorithm} Two different variants of SMR are considered: SMR$_r$, in which \textsc{AlgorithmX} is Radix Sort, and SMR$_t$, in which it is Timsort. \subsection*{Prefix Doubling Algorithm (PDA)} The most crucial aspect of the BWT computation considered here is the calculation of the Suffix Array of the input string. Indeed, the BWT can be calculated from the Suffix Array in a MapReduce fashion via a join operation. Therefore, the algorithm proposed for the computation of the Suffix Array, based on the idea of \textit{prefix doubling} inspired by \cite{FlickA15}, is described below (see \textsc{Procedures 3} and \textsc{4}). \paragraph{Input:} Let $S$ be a string of length $n$; the input tuple set is: $$\text{Input} = \{ (\texttt{null}, S(i)) : i = 1,\dots,n\}$$ \paragraph{Output:} A set of tuples of the form $(i, r)$, where $i$ is the index of a suffix in $S$ and $r$ is its rank (i.e., its position in the list of the sorted suffixes). In the literature this is referred to as the ISA. For our purpose, the resulting output is inverted in order to obtain the Suffix Array of $S$ and, then, its BWT.
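To make this last remark concrete, the post-processing that turns the computed ISA into the Suffix Array and then into the BWT can be sketched sequentially as follows. In the actual pipeline this step is performed in a MapReduce fashion via a join, as noted above; the snippet below is a local illustration only and assumes that $S$ ends with the \$ sentinel.
\begin{verbatim}
# Illustrative post-processing: invert the ISA tuples (i, r) into the SA,
# then derive the BWT as the character preceding each sorted suffix
# (wrapping around for the suffix starting at position 0).
def isa_to_bwt(S: str, isa_tuples):
    n = len(S)
    sa = [0] * n
    for i, r in isa_tuples:      # rank r of suffix i  =>  SA[r] = i
        sa[r] = i
    bwt = "".join(S[(sa[r] - 1) % n] for r in range(n))
    I = sa.index(0)              # row corresponding to the original string
    return bwt, I
\end{verbatim}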
\begin{algorithm}[H] \caption{Sketch of the Iterative Algorithm with Prefix Doubling} \begin{algorithmic}[1] \STATE\textbf{procedure }{\textsc{CALCULATEISA}}{(\textit{S})}{ \STATE $Input = \{(null, S(i)): i = 1,...,n\}$ \\ \STATE Initialize set \textit{ISA} with \textit{Input} \\ \FOR{($k \gets 0$\ to\ $\lceil \log_{2}n \rceil$)}{ \STATE Apply the operation of \textit{Shifting} to \textit{ISA} obtaining two sets\\ \STATE Join the two sets obtained with the operation of \textit{Pairing} \STATE Update \textit{ISA} by calling \textsc{RE-RANKING} } \ENDFOR \\ \STATE \textbf{return} \textit{ISA} } \end{algorithmic} \label{alg:alg_iterative_prefix_doubling} \end{algorithm} \begin{algorithm}[H] \caption{Re-ranking} \begin{algorithmic}[1] \STATE\textbf{procedure }{\textsc{RE-RANKING}}{(\textit{Pairs})}{ \STATE Sort the tuples in \textit{Pairs} by value \\ \FORALL{($(i,(r_{a1}, r_{a2})) \in Pairs$)}{ \STATE Let $(j,(r_{b1}, r_{b2}))$ be the pair preceding $(i,(r_{a1}, r_{a2}))$\\ \IF{$(r_{a1}, r_{a2}) = (r_{b1}, r_{b2})$} \STATE Assign to $r_{new}$ the new rank already assigned to the previous pair \ELSE \STATE Assign to $r_{new}$ the position in the sorted set \textit{Pairs} of $(i,(r_{a1}, r_{a2}))$ \ENDIF \STATE Update tuple $(i,r)$ in \textit{ISA} with tuple $(i,r_{new})$ } \ENDFOR \\ } \end{algorithmic} \label{alg:re_ranking} \end{algorithm} \paragraph{Initialization.} The first step starts from the Input set and initializes the set of tuples $(i, r)$, as described in the previous paragraph. In this phase, the rank is calculated from the first character of the suffix. In particular, let $Occ(c)$ be the number of occurrences of characters lexicographically smaller than $c$ in the string $S$; then the rank of the suffix $i$ can be determined as $Occ(S(i))$.\\ In a MapReduce fashion, this can be accomplished by first counting the occurrences of each character in $S$, and then computing the cumulative sum $Occ$ on the sorted counts. The map and reduce steps are: $$\texttt{map: } (\texttt{null}, S(i)) \rightarrow (S(i), 1)$$ $$\texttt{reduce: } (c, \texttt{list}[1,1,\dots,1]) \rightarrow (c, \text{sum of ones}) $$ From this, $Occ$ is calculated locally by collecting the result.\\ The $ISA$ set can then be initialized with the following map step: $$\texttt{map: } (\texttt{null}, S(i)) \rightarrow (i, Occ(S(i)))$$ \paragraph{ISA Extending.} The next step is to extend each rank contained in the initialized $ISA$ until it reflects the whole suffix. Here we use a technique called \textit{Prefix Doubling} which is based on the following statement: \begin{center} \textit{Given that the suffixes of a string are already sorted by their prefix of length $h$, we can deduce their ordering by their prefix of length $2h$.} \end{center} Given two suffixes $S_i$ and $S_j$ with an identical prefix of length $h$, we can deduce their sorting by comparing the order of the suffixes $S_{i+h}$ and $S_{j+h}$. Thus the idea is to pair, for each suffix $S_i$, its rank with the rank of the suffix $S_{i+h}$ (i.e., $(ISA[i], ISA[i+h])$) and sort all these pairs in order to obtain the sorting by the prefix of length $2h$. Indeed, each iteration doubles the prefix length; since the longest suffix has size $n$, all suffixes will be sorted after at most $\log_2(n)$ iterations. \paragraph{Shifting and Pairing.} To implement the above idea in a MapReduce fashion, we apply the two following map steps to the latest calculated $ISA$ to obtain two different sets: $$\texttt{map: } (i, r) \rightarrow (i, (r, 0))$$ $$\texttt{map: } (i, r) \rightarrow (i - 2^k, (-r, 0))$$ where $k$ is the iteration number minus one (i.e., $k$ starts from 0).
The rank indices are shifted in this way, and then the ranks are paired by a reduce step. It is worth noticing that a negative number is used to denote a shifted rank, and the value is mapped as a tuple with a zero term in order to handle the shifted ranks that overflow the string length.\\ The union of the two obtained sets is considered and all tuples with a negative key are discarded (the corresponding ranks do not pair with any other rank in the set). The following reduce step is applied to the union: $$\texttt{reduce: } (i, \texttt{list}[(r1,0), (r2,0)]) \rightarrow (i, (r1, -r2))$$ where $r2$ is the shifted rank. Some ranks may not be reduced because their key is unique; these ranks overflow the length of $S$ and remain paired with zero. We denote the final set derived from this phase by $Pairs$. \paragraph{Re-Ranking.} Our purpose is to extend the previous rank with a new rank, obtained by considering the doubled prefix. Therefore, we compute the new rank according to the tuples in $Pairs$ as follows: first we sort all tuples by value, then we compare each tuple at position $i$ (after sorting) with the one in position $i-1$. If they are equal, the new rank is equal to the rank of the previous tuple, otherwise the new rank is $i$. Finally, a new ISA set with extended ranks is obtained, and the procedure is iterated on it again. All operations described above can also be carried out in a distributed manner: \begin{itemize} \item For the sorting operation, the elements of the set can be split by range into a certain number of roughly equal partitions (the ranges can be determined by sampling the data). Then a sorting algorithm is applied to sort each partition locally. This is readily provided by the Apache Spark framework. \item In order to compute the new rank, the partition identified previously is considered and the procedure above is applied locally, as described before, using the length of the partition and the offset (i.e., the number of elements in the preceding partitions) to compute the global position of the tuples. \end{itemize} \subsubsection*{Example} Let S = \textit{BANANA\$} be the input string of length $n = 7$. The input pairs are: \begin{equation*} \begin{gathered} \text{Input} = \{ (\texttt{null}, B), (\texttt{null}, A), (\texttt{null}, N),\\ (\texttt{null}, A), (\texttt{null}, N), (\texttt{null}, A), (\texttt{null}, \$) \} \end{gathered} \end{equation*} As for $Occ(c)$, it is shown in Table \ref{tab::occ}. \noindent After the initialization, the initial ISA set is: \begin{equation*} \begin{gathered} \text{ISA} = \{ (0, 3), (1, 0), (2, 4), (3, 0),\\ (4, 4), (5, 0),(6, 6)\} \end{gathered} \end{equation*} After the first iteration, the shifted tuples are: \begin{equation*} \begin{gathered} \text{Shifted} = \{(-1, (-3, 0)), (0, (0, 0)), (1, (-4, 0)),\\ (2, (0, 0)), (3, (-4, 0)), (4, (0, 0)),(5, (-6, 0))\} \end{gathered} \end{equation*} After the pairing we obtain the set: \begin{equation*} \begin{gathered} \text{Pairs=}\{(0, (3, 0)), (1, (0, 4)), (2, (4, 0)), (3, (0, 4)),\\ (4, (4, 0)), (5, (0, 6)), (6, (6, 0))\} \end{gathered} \end{equation*} Finally, we sort by value and we re-rank the indices.
Then the new ISA is: \begin{equation*} \begin{gathered} \text{ISA} = \{ (0, 3), (1, 1), (2, 4), (3, 1),\\ (4, 4), (5, 0),(6, 6)\} \end{gathered} \end{equation*} We observe that the ranks updated in this iteration are those of indices $1$ and $3$; indeed, shifting by $1$ makes it possible to distinguish the prefix $A\$$ (suffix $S_5$) from the prefixes $AN$ (suffixes $S_1$ and $S_3$). \section*{Results} The presented algorithms have been evaluated on real datasets taken from the Pizza$\&$Chili website \cite{pizzachili}, where a set of text collections of various types and sizes is available to experimentally test compressed indexes. In particular, the text collections stored on this website have been selected to form a representative sample of different applications where indexed text searching might be useful. From this collection, we have chosen the following three datasets: \begin{itemize} \item PROTEINS, containing a sequence of newline-separated protein sequences obtained from the Swissprot database. \item DNA, a sequence of newline-separated gene DNA sequences obtained from files of the Gutenberg Project. \item ENGLISH, the concatenation of English text files selected from collections of the Gutenberg Project. \end{itemize} We have implemented in Apache Spark the algorithms described here and the basic approach proposed in \cite{menon11}, and we have run them on the GARR Cloud Platform. In particular, we have configured the cluster with $1$ master and $33$ slave nodes, each node with $6$ VCores, $32$ GB of RAM and $200$ GB of disk. We have used Apache Hadoop $3.1.3$ and Spark $2.3.4$. Results are shown in Table \ref{tab:time-result} (running times larger than $10$ hours are not reported). For the PROTEINS dataset, we have considered only the first $25$ MB, only the first $100$ MB, and the full dataset. \subsection*{Discussion} From the results of the experiments it is evident that the SMR version with Radix-Sort presents very long and impractical processing times. Although the implementation is realized in C in order to optimize the computation as much as possible, it is still very expensive even for input files of modest size. From a theoretical point of view this is easily explained by the fact that the algorithm has a computational complexity equal to $O(|p| \cdot l_{max})$, where we recall that $p$ is the partition identified and $l_{max}$ is the maximum distance between two indices in $p$, considering also the final index. For partitions where the indices are not uniformly distributed, $l_{max}$ may become very large, causing very slow processing times. The SMR version with Timsort has better performance. However, it is not able to process very large files. In contrast with the two SMR variants, PDA is able to process all the datasets. This is what we expected, due to the fact that it fully introduces parallelism in the computation of the BWT, allowing it to fully benefit from cloud computing. \section*{Conclusion} Two MapReduce algorithms for the implementation of a full-text index, that is, the Burrows-Wheeler transform, have been proposed here. The algorithms have been implemented in Apache Spark and they have been validated on real datasets.
Among the various applications where an efficient and distributed implementation of BWT may be useful (e.g., data compression, pattern matching, etc.), we mention that searching for a suitable combination of Indexing and Machine Learning techniques has recently proved to be a promising research direction \cite{GRAHAM,raff2019new,FerraginaV20}. Therefore, we plan to focus our future studies in this direction. \section*{Funding} PRIN research project ``Multicriteria Data Structures and Algorithms: from compressed to learned indexes, and beyond'', grant n. 2017WR7SHH, funded by MIUR. \noindent GNCS 2020 research project ``Algorithms, methods and software tools for knowledge discovery in the context of Precision Medicine'', funded by INDAM. \section*{Availability of data and materials} \texttt{https://github.com/MR6996/spark-bwt} \section*{Competing interests} The authors declare that they have no competing interests. \bibliographystyle{bmc-mathphys}
\section{Introduction} \label{Intro} Ferromagnetic two-dimensional (2D) structures are of utmost importance in spintronics applications~\cite{Cinchetti2017_NatMat}. Such 2D magnetic systems can either be 2D Van der Waals (vdW) ferromagnets~\cite{gong2017_Nat, huang2017_Nat}, or ultrathin magnetic overlayers~\cite{Gambardella2020_book}. Both types of systems present quantum and topological phases~\cite{Burch2018_Nature}, but achieving new exotic properties requires the design and investigation of novel materials and architectures. In thin magnetic overlayers, the surface contains transition and/or rare-earth metals. The interest in such 2D ferromagnetic systems is prompted by the reduced atomic-scale size and the diversity of magnetic states that arise. The synthesis of 2D vdW ferromagnets is significantly different compared to the direct epitaxy of magnetic layers. 2D vdW systems are exfoliated from crystals, which in turn are grown by ex-situ chemical vapor or atomic layer deposition~\cite{Geim2013_Nature,Gibertini2019_NatNanotech,Gong2019_Science}. Magnetic overlayers, however, are in-situ grown by physical vapor deposition in vacuum, a process that does not guarantee a sharp 2D system that withstands harsh conditions. Achieving the latter demands a detailed in-situ, atomic-scale investigation, comprising structure and electronic states, as well as chemical stability in ambient conditions. 2D magnetic alloys are quite reactive in air, losing or changing their magnetic properties. Protection of such surfaces can be achieved by means of ceramic coatings, polymer protection films or deposition of non-reactive metals. Nevertheless, most of these protecting overlayers influence the magnetism of the surfaces, leading to a variation of the desired properties. Recently, protection of the surfaces by graphene (Gr)~\cite{Coraux_JPCL2012,Liu2013_NatComm,Martin2015_APL,Weatherup2015_JACS,Cattelan2015_Nanoscale,Naganuma_APL2020,sutter2010_JACS, sokolov2020_MatHor,Anderson2017_PRM}, hexagonal boron nitride (hBN) \cite{Liu2013_NatComm,Caneva2017_ACS_ApplMat,Jiang2017_NanoRes,Tang2021_ACSApplNanoMat,Holler2019_2DMat,Zihlmann2016_2DMat,Ma2022_Nature} and a mixture of both materials~\cite{Pis2018_Carbon} is being considered. In this context, the use of a hBN protecting layer is very appealing, since it would provide close contact of the ferromagnetic material with a wide-gap semiconductor, enabling charge injection. Therefore, the question that arises is whether we can achieve a sufficiently protective hBN layer that preserves the magnetic properties of the 2D compound in ambient conditions. Here, we study a hBN-protected ferromagnetic Eu-Pt surface alloy. The Eu-Pt compound is formed after Eu intercalation under the hBN film previously grown on a Pt crystal surface. Metal intercalation below a Gr or hBN overlayer has been extensively studied over the last two decades~\cite{sutter2010_JACS, sokolov2020_MatHor, Anderson2017_PRM, Schumacher2014_PRB, Scardamaglia2021_Carbon}. The purpose in the majority of the works was to separate the 2D overlayer from the substrate~\cite{Daukiya2019_ProgSS,Liu2021_JPCC}. Most often this is done by the intercalation of noble metal atoms like Au, Ag, or Cu~\cite{Daukiya2019_ProgSS}. Conversely, in order to force a stronger 2D material interface interaction, one can proceed with the intercalation of alkali~\cite{Demiroglu2019_JPCL} or alkaline-earth metals~\cite{Grubisic2021_ASS,Kotsakidis2021_AdvMatInter,Kotsakidis2020_ChemMat}.
If a too strongly interacting substrate--2D overlayer interface is obtained, additional intercalation of oxygen lifts the 2D layer again and re-establishes the original 2D material properties~\cite{sutter2010_JACS}. However, oxygen exposure may result in the oxidation of the protecting hBN layer~\cite{Makarova2019_JPCC}. Eu intercalation has been less investigated~\cite{Schumacher2014_PRB,schroder2016_2DMat,sokolov2021_JAlloyComp,Anderson2017_PRM,sokolov2020_MatHor} despite its interesting magnetic properties, mainly due to the strong reactivity of this rare earth metal. All Eu intercalation studies have been carried out on graphite or graphene epilayers, but no experiments exist using hBN. Among the different rare earth metal compounds, europium alloys are particularly interesting due to the various valence states of the Eu atoms. Eu may adopt a di-valent Eu$^{2+}$, a tri-valent Eu$^{3+}$, or even a mixed-valent state. For trivalent Eu$^{3+}$, Eu has a 4$f^6$ configuration with $S$ = $L$ = 3 and $J$ = 0. The ground state $J$-multiplet level (in $^{2S+1}L_J$ configuration) is $^7F_0$, a non-magnetic singlet. This situation differs from divalent Eu$^{2+}$ with 4$f^7$ configuration, $S$ = 7/2, $L$ = 0 and $J$ = 7/2, leading to a $^8S_{7/2}$ ground state. In the latter case, Eu$^{2+}$ is able to form ferromagnetic compounds, e.g., europium chalcogenides~\cite{McGuire1964_JAP}. The different valence states of Eu are found to depend on several factors: the surrounding material, the lattice pressure, the number and type of nearest neighbors, etc. In the particular case of Eu-Pt compounds there is a smooth valence transition when changing the stoichiometry from EuPt$_5$ (completely tri-valent) to EuPt$_2$ (Eu atoms in a di-valent state)~\cite{Wickman1968_JPhysChemSol,DeGraaf1980_PhysicaB,Sauer1997_JAllComp}. Additionally, valence instabilities can be induced in EuPt$_3$ by high pressure~\cite{Ebd-Elmeguid1981_JPC}. Valence changes may also happen at the surface due to a reduced coordination~\cite{johansson79}. Such transitions from trivalent to divalent configuration have also been observed for Eu-Ni or Eu-Pd compounds~\cite{Wieling2002_PRB,Wieling1998_PRB}. Here we present a Eu-Pt surface alloy formed after intercalation of Eu at the hBN/Pt interface. First, we perform a structural analysis of the interface, followed by a detailed electronic and magnetic characterization of the Eu-Pt compound. We demonstrate that the topmost layer under the hBN coat is a 2D EuPt$_2$ surface alloy, with di-valent Eu atoms that reveal ferromagnetic behavior at low temperature. Next, we check the efficiency of the hBN layer protection at ambient pressure, by analyzing the electronic properties prior to and after air exposure. The sample is a platinum crystal curved around the (111) direction (c-Pt). This provides a smooth variation of the crystallographic orientation across the (macroscopic) surface, allowing us to extend the analysis of the EuPt$_2$ alloy to vicinal Pt crystal planes, characterized by a high density of atomic steps. By scanning our different experimental (electron, photon) probes on top, we can rigorously study the influence of steps and terraces on the structural, magnetic, and electronic properties of the EuPt$_2$ surface alloy, as well as the protecting quality of the hBN layer.
\section{Experimental details} \label{Experimental} The growth and electronic properties were mainly investigated at the Nanophysics laboratory in San Sebastian, Spain, using a combined system containing scanning tunneling microscopy (STM), low energy electron diffraction (LEED), X-ray photoemission (XPS) and angle-resolved photoemission spectroscopy (ARPES). Part of the electronic structure investigations has been carried out at the BACH beamline of the Elettra synchrotron (Trieste, Italy). The XPS setup in the laboratory is equipped with a Specs Al $K_\alpha$ $\mu$-FOCUS 600 monochromator while the ultraviolet light source consists of a Specs UVS-300 discharge lamp with monochromator (Specs TMM 304) tuned to HeII$\alpha$ light with h$\nu$ = 40.8eV. At the Elettra synchrotron, $p$-polarized light was applied at a photon energy of 272eV. All measurements were taken with the sample at room temperature. STM experiments were carried out in an Omicron VT-setup by holding the sample at room temperature and scanning with a W tip. The analysis of the STM images has been performed with the WSXM software~\cite{Horcas2007_RevSciInst}. The magnetic properties were investigated at ID 32 of the European Synchrotron Radiation Facility (ESRF) by means of X-ray magnetic circular dichroism (XMCD). For this purpose the sample was placed normal or grazing (70$^\circ$) with respect to the incoming photon beam and field. The field was ramped between +6 and -6 T with the sample held at $T$ = 7K. Horizontal, left and right circularly polarized light (99\% polarization) was used for photon energies around the Eu M$_{4,5}$ X-ray absorption edge. As a substrate material, a cylindrical sector of a Pt (c-Pt) single crystal was used whose cylinder axis is along a [1$\bar 1$0] direction. The centre of the curved surface points towards the [111] direction, while the borders are oriented $\pm$15$^\circ$ with respect to the (111) center (Fig.~\ref{fig:STM}). This curved Pt surface was cleaned by Ar ion sputtering (room temperature) and temperature annealing (1000K) as well as by occasional oxygen heating (2$\times$10$^{-8}$ mbar O$_2$, 950K) followed by a flash in UHV to 1050K. This standard procedure, as described elsewhere~\cite{Walter2015_NatComm,GarciaMartinez2020_AngeChem}, leads to sharp LEED patterns where the typical step splitting was observed. hBN was grown by a chemical vapor deposition (CVD) process from a borazine precursor (KATCHEM spol. s r.o.). For this purpose the curved Pt crystal was held at 1020K while borazine was dosed for 20 minutes at 2$\times$10$^{-7}$ mbar. As can be observed in Fig.~\ref{fig:LEED}, this growth produces a sharp and well ordered Moire pattern in LEED at the Pt(111) position of the curved crystal. On the other substrate positions, corresponding to the vicinal surfaces in the mentioned $\pm$15$^\circ$ range around (111), the LEED reveals less ordered structures with line-like features pointing to a multi-facet structure. Eu was deposited in a third step on top of this hBN/c-Pt substrate while the sample was held at an elevated temperature to allow Eu intercalation below hBN. As pointed out earlier~\cite{Schumacher2014_PRB}, the high temperature is quite important to immediately protect the Eu from oxidation. We used substrate temperatures between 570 and 870K. The deposition process was carried out in UHV systems with a base pressure prior to Eu deposition below 1$\times$10$^{-9}$ mbar, not surpassing 3$\times$10$^{-9}$ mbar during deposition.
For lower substrate temperatures incomplete intercalation takes place and part of the Eu stays on top of the hBN; see the supplementary material for details. The oxidation protection experiments consisted of exposing the sample to ambient pressure conditions (6 hours, room temperature, 80\% humidity). \section{Results and Discussion} \label{Results} \subsection{Formation of Eu-Pt surface alloy below hBN} \subsubsection{Eu intercalation in the hBN/Pt(111) interface} \begin{figure*}[b!] \centerline{\includegraphics[width=1.0\columnwidth]{Fig1.pdf}} \caption{\textbf{LEED images along the preparation process.} (a): Pt(111), (b) after borazine exposure at 1020K producing a (9$\times$9) Moire pattern, (c) additional Eu deposition/intercalation at $T_{sample}$ = 870 K leading to a EuPt$_2$ layer below hBN and a different Moire pattern, (d) after 6 hours room temperature air exposure and subsequent 570 K vacuum annealing (LEED kinetic energy 70eV).} \label{fig:LEED} \end{figure*} The structural evolution of the hBN/Eu/Pt intercalated system can readily be monitored with low energy electron diffraction (LEED) experiments. LEED patterns in Figure~\ref{fig:LEED} correspond to the (111) plane on the Pt curved crystal. The clean Pt(111) pattern is shown in Fig.~\ref{fig:LEED}(a), which transforms, after borazine dosing at $T$ = 1020K, into the characteristic hBN (9$\times$9) Moire of Fig.~\ref{fig:LEED}(b)~\cite{PREOBRAJENSKI2007_PRB}. For successful Eu intercalation, there is a threshold temperature of the substrate of $T$ = 770K. Below this temperature the majority of the rare-earth material stays on top of the hBN coat, being subject to rapid contamination/oxidation (see Supplementary Information). At the lower range of the intercalation temperatures, right above 770K, the LEED pattern only shows the progressive extinction of the hBN Moire. When raising the temperature to $T$ = 870K a new $\approx$($\sqrt{3}\times\sqrt{3})$ R30$^\circ$ pattern emerges [Fig.~\ref{fig:LEED}(c)], with some weak satellite spots. A detailed inspection of the $\approx$($\sqrt{3}\times\sqrt{3})$ structure reveals a 10/11$\cdot$($\sqrt{3}\times\sqrt{3})$ R30$^\circ$ geometry with respect to Pt(111). This pattern reflects the presence of a EuPt$_2$/Pt(111) Moire-like coincidence lattice, similar to those found in rare earth RE-Au and RE-Ag surface alloys with RE-Au$_2$ and RE-Ag$_2$ composition~\cite{Corso2010_PRL,Corso2010_ACSNano,Ormaza_PRB2013,Fernandez2020_Nanoscale,Xu2020_PCCP,Que2020_JPCL,Ormaza_NanoLett2016}. The ($\sqrt{3}\times\sqrt{3}$) ordering arises from the 1:2 Eu:Pt stoichiometry of the alloy at the local atomic scale. The Moire emerges from the lattice mismatch of the 2D RE-noble metal alloy layer and the substrate. The hBN/Eu/Pt(111) system is different from the Gr/Eu/Ir(111) one~\cite{Schumacher2014_PRB}, where the superstructure LEED spots belong to the graphene diffracted beams. In that case it was proposed that Eu forms a floating layer between the Ir(111) substrate and the graphene layer. In the hBN/Eu/Pt system considered here, however, the strongest LEED spots correspond to the EuPt$_2$ layer at the Pt interface. As indicated in Fig.~\ref{fig:LEED}(c), we can still detect extra satellite spots around the 10/11$\cdot$($\sqrt{3}\times\sqrt{3})$ R30$^\circ$ structure, which arise from the coincidence lattice defined by the mismatched hBN/EuPt$_2$ interface. With all LEED structures properly identified, one can calculate real-space lattice parameters from the pattern.
Taking into account the Pt lattice constant of $a_{Pt}$ = 3.92\AA, we obtain the EuPt$_2$ coincidence lattice constant of 11$\cdot$$a_{Pt}$/$\sqrt{2}$ = 30.5\AA, from which we deduce the EuPt$_2$ lattice parameter $a_{EuPt_2}$ = $a_{Pt}$/$\sqrt{2}\cdot$11/10$\cdot\sqrt{3}$ = 5.29\AA~, with a nearest-neighbor distance of 3.05\AA. The lattice mismatch of the EuPt$_2$ layer with the hBN lattice on top (2.504\AA) is quite large, but explains the 4.6$\times$4.6 weak superstructure spots, marked by blue arrows in Fig.~\ref{fig:LEED}(c). Bulk EuPt$_2$ exists and crystallizes in a MgCu$_2$ Laves phase structure, as shown in Fig.~\ref{fig:Structure}(a). A bulk lattice constant $a_{Laves}$ between 7.64 and 7.73 \AA\ has been reported~\cite{Wickman1968_JPhysChemSol,Erdmann1973SolStChem,DeGraaf1980_PhysicaB}. However, in such a bulk EuPt$_2$ structure, and along the [111] direction, one cannot find any stoichiometric EuPt$_2$ plane. Contiguous (111) planes contain either solely Pt or solely Eu, as shown in Fig.~\ref{fig:Structure}(b). The plane containing the Eu atoms is shifted by 3/8 with respect to the densely packed Pt planes (approx. 0.8 \AA). The fundamental Pt-containing (111) plane is formed by a Kagom{\'e} lattice [blue layer in Fig.~\ref{fig:Structure}(a)], with the Eu atoms sitting above and below each Pt hexagon of the Kagom{\'e} lattice. Considering the Eu-Pt bilayer, this defines a (2$\times$2) superstructure with a EuPt$_3$ composition. The EuPt$_2$ structure found here below hBN is therefore not related to bulk EuPt$_2$ and can be understood by the simple incorporation of Eu atoms in the uppermost Pt(111) surface plane. Due to the larger size of the Eu atom, the interatomic distance at the surface increases, and a mismatch with the Pt(111) substrate underneath arises, leading to the 10/11$\cdot$($\sqrt{3}\times\sqrt{3})$ R30$^\circ$ coincidence lattice. \begin{figure}[tb!] \centerline{\includegraphics[width=1.0\columnwidth]{Fig2.pdf}} \caption{\textbf{Structural Model of the EuPt$_2$ surface.} (a): Laves phase (C15) of bulk EuPt$_2$ in the MgCu$_2$ structure, (b) central Pt Kagom{\'e} (blue layer in (a)) and neighboring Eu (111) layer stacking inside bulk EuPt$_2$ that would result in a EuPt$_3$ composition with (2$\times$2) superstructure after Eu incorporation, (c) ($\sqrt{3}\times\sqrt{3}$) R30$^\circ$ monolayer surface alloy structure below a 2D hBN layer.} \label{fig:Structure} \end{figure} The chemical characterization of the Eu intercalation process is carried out by means of X-ray photoemission spectroscopy. Results are shown in Fig.~\ref{fig:XPS_Eu3d}. In the bulk EuPt$_2$ compound Eu atoms are in a di-valent Eu$^{2+}$ configuration, while Eu atoms in compounds with higher Pt content become mixed-valent or completely tri-valent~\cite{DeGraaf1980_PhysicaB}. The latter situation would be the case for Eu interstitial atoms, e.g., those that diffuse into the bulk and are surrounded by Pt completely. Fig.~\ref{fig:XPS_Eu3d} reveals the Eu 3d core level for sub- and monolayer preparations at different temperatures. Submonolayer Eu deposition and intercalation at low substrate temperature ($T$ = 570K) leads to a (nearly) complete divalent configuration while preparations at higher $T$ result in the appearance of an additional Eu$^{3+}$ signal. We interpret these observations as follows: at lower temperature only a partial Eu intercalation takes place, leading to the formation of EuPt$_2$ patches.
For higher temperature, the Eu intercalation is complete, but together with the Eu$^{2+}$ species at the EuPt$_2$ interface the Eu$^{3+}$ component arises, which can be ascribed to Eu interstitials in the Pt bulk, or simply to the buildup, under the topmost EuPt$_2$ patches, of EuPt$_n$ ($n>$ 2) alloys, giving rise to a tri-valent or a mixed-valent situation. At higher coverage ($>$3 \AA), preparations at both low and high temperatures already force extra Eu atoms to diffuse below the completed EuPt$_2$ layer, leading to similar di- and tri-valent contributions in the Eu 3d XPS spectra, as shown in the top part of Fig.~\ref{fig:XPS_Eu3d}. Increasing the temperature at high coverage enhances the bulk diffusion, and leads to even stronger Eu$^{3+}$ emission compared to Eu$^{2+}$. \begin{figure}[tb!] \centerline{\includegraphics[width=0.5\columnwidth]{Fig3.pdf}} \caption{\textbf{Eu 3d photoemission signal.} X-ray photoemission spectra for the Eu 3d edge taken at h$\nu$ = 1486.6eV (Al K$_\alpha$) for sub- and monolayer preparations at sample temperatures $T$ = 570K and 870K, respectively.} \label{fig:XPS_Eu3d} \end{figure} \subsubsection{Eu intercalation in vicinal hBN/Pt(111) interfaces} \begin{figure*}[tb!] \centerline{\includegraphics[width=1\columnwidth]{Fig4.pdf}} \caption{\textbf{STM images of the (Eu)/hBN/c-Pt(111) system.} Large scale STM images of a hBN monolayer prior to and after Eu intercalation collected at three different positions on the curved c-Pt(111) substrate (scanning parameters: I=0.13 nA; U=0.5 V). The line scans were taken close to the top of the images corresponding to the Eu intercalated systems. (111) and side facets are indicated by different colors.} \label{fig:STM} \end{figure*} After characterizing the Eu intercalation below the hBN monolayer on Pt(111), we focus on surfaces vicinal to the Pt(111) plane, investigated with the curved sample sketched in Fig.~\ref{fig:STM}. The negative sign of the vicinal angle $\alpha$ corresponds to surfaces with A-type steps (\{100\} microfacets), and the positive to B-type step arrays (\{111\} microfacets)~\cite{Walter2015_NatComm}. STM images in Fig.~\ref{fig:STM} correspond to three representative points of the curved substrate, namely the Pt(223) position ($\alpha$=-11.5$^\circ$), a low vicinal angle ($\alpha$=-2.2$^\circ$) close to Pt(111), and the Pt(554) surface ($\alpha$=5.8$^\circ$). Prior to hBN growth, all vicinal surfaces exhibit well-ordered 1D step arrays, both at low and high vicinal angles~\cite{Walter2015_NatComm}. However, the hBN monolayer induces drastic structural changes, leading to a more complex nanoscale landscape. Close to the (111) position, large hBN/Pt(111) areas develop, which alternate with densely bunched steps. At larger vicinal angles the step bunching process remains, and the surface becomes a faceted structure. At the (554) position one observes a rather well ordered structure consisting of (111) terraces and side facets tilted at approx. 6$^\circ$. At the (223) position one does not observe clear long-range order. The latter situation is similar to stepped Ni crystals covered by hBN~\cite{Fernandez2019_2DMat}, while the ordered structures are rather close to the Rh case, where hBN growth on stepped surfaces leads to periodically arranged nanofacets~\cite{Ali2021_SciAdv}. One interesting question is whether the hBN forms a continuous, defect-free coat over the hill-and-valley structure underneath, since this requires ``bending'' of the hBN layer over Pt substrate facets.
In general terms, however, one expects an increasing number of defects for an increasing presence of facet/step boundaries at large vicinal angles. Eu deposition and intercalation on the vicinal surfaces changes the facet periodicity, size and inclination, as observed in the STM images. At the A-type step Pt(223) position, the rather disordered hBN/Pt(223) structure transforms into a well ordered array after Eu intercalation. On B-type steps, however, the Eu intercalation leads to smaller facets. A statistical analysis of the STM images reveals an increasing average facet density [(111)+side facet] of 20 facets/$\mu m$ close to (111), 25 facets/$\mu m$ at (223), and 65 facets/$\mu m$ at (554). This means that at A-type steps the surface rugosity decreases, contrary to B-type steps. On the other hand, XPS spectra (see below) reveal a similar Eu$^{2+}$/Eu$^{3+}$ ratio on the different crystal positions, with a slightly higher di-valent amount at (111) compared to stepped surface planes. \subsection{Magnetism of the hBN/EuPt$_2$ system in the (111) plane} \begin{figure}[tb!] \centerline{\includegraphics[width=0.5\columnwidth]{Fig5.pdf}} \caption{\textbf{Magnetic properties of intercalated Eu below hBN/Pt(111).} (a) X-ray absorption spectrum (total electron yield - TEY) of horizontally (top, XAS) and circularly polarized light with opposite sign and its resulting difference spectrum shown below (XMCD) of 3\AA~Eu intercalated below hBN/Pt(111). The XAS spectrum was fitted with several Gaussian profiles (black, yellow, green) and a linear background (grey) for each of the two main Eu$^{2+}$ and Eu$^{3+}$ contributions. (b) Magnetization curve taken at the maximum of the Eu M$_{5}$ XMCD signal for a variable applied field from +6 T to -6 T (blue) and in the opposite direction (orange), respectively. The sample was oriented with an angle of 70$^\circ$ with respect to the applied field at temperature $T$ = 7K. The inset displays the Arrott plot analysis (see text) confirming a ferromagnetic state. The black line is a linear fit to the high field values.} \label{magnetism} \end{figure} Divalent Eu (4$f^7$, $J$ = 7/2) has a low-temperature ferromagnetic state in bulk EuPt$_2$ and similar compounds~\cite{Nakamura2016_JPhysSocJap}. We measured the magnetic properties of our hBN-protected EuPt$_2$ surface alloy with X-ray magnetic circular dichroism (XMCD) at the (111) position of the curved crystal. Results are shown in Fig.~\ref{magnetism}. The X-ray absorption (XAS) spectrum in part (a) reveals a mixture of Eu$^{2+}$ and Eu$^{3+}$ contributions after Eu intercalation below the hBN/Pt(111) surface. This observation confirms the coexistence of the two Eu valences and it is consistent with the XPS results, namely di-valent Eu atoms at the EuPt$_2$ surface and tri-valent Eu atoms diffused into the Pt bulk. The XMCD signal results from the difference of the absorption spectra of left and right circularly polarized light and is shown in the bottom of Fig.~\ref{magnetism}(a). At the applied field of 6 T, it shows the typical lineshape of pure di-valent Eu~\cite{Blanco2022_PRRes}. This is expected since, as stated before, Eu$^{3+}$ has a 4$f^6$ configuration with $S$ = $L$ = 3 and $J$ = 0, hence the Eu$^{3+}$ signal should not contribute significantly to the dichroism. Ferromagnetism can be probed by measuring the Eu XMCD signal while varying the applied magnetic field. The XMCD signal is proportional to the magnetization $M$ in the system.
The resulting magnetization curve is shown in Fig.~\ref{magnetism}(b). It reveals a pronounced ``S'' shape. The Arrott plot analysis~\cite{Arrott1957_PR} shown in the inset confirms the ferromagnetic state of the EuPt$_2$ surface alloy. In such an analysis, the square of the magnetization $M^2$ is plotted against the applied field divided by the magnetization $\mu_0 H/M$. A linear fit of the high field (high $\mu_0 H/M$) values indicates a ferromagnetic state if the extrapolated line crosses the ordinate at a positive $M^2$ value, and paramagnetism if the ordinate crossing is negative. Therefore, the Arrott plot reveals ferromagnetic behavior of the intercalated hBN/Eu-Pt(111) system. Nevertheless, the magnetization curve shows that both remanence and coercive field are too small to be detectable at the measurement temperature of 7K. For the out-of-plane geometry, with the sample surface normal parallel to the magnetic field and light incidence, the magnetization curves are slightly more rectangular at small fields, pointing to an out-of-plane easy axis (see supplementary material for details). \subsection{Exposure of the hBN/Eu/Pt system to air} \begin{figure}[tb!] \centerline{\includegraphics[width=0.5\columnwidth]{Fig6.pdf}} \caption{\textbf{XPS analysis of Eu intercalation below hBN on the curved Pt crystal.} (a) N 1s, B 1s, Eu 3d, O 1s, and Pt 4p$_{3/2}$ X-ray photoemission spectra taken at h$\nu$=1486.6 eV (Al K$_\alpha$) for the Eu 4 \AA~preparation before and after exposure to ambient conditions at Pt(111). (b) Comparative Eu 3d, O 1s, and Pt 4p$_{3/2}$ core levels at three positions of the curved Pt substrate, at the Pt(332), Pt(111) and Pt(335) positions, respectively.} \label{fig:XPS_allPt111} \end{figure} Fig.~\ref{fig:XPS_allPt111}(a) displays the complete evolution of the XPS spectra during the intercalation of 4-\AA-Eu in the hBN/Pt(111) interface and after exposure to air. Prior to Eu intercalation, we obtain the characteristic shape and positions of the B 1s and N 1s core levels for the weakly-coupled hBN/Pt(111) system~\cite{PREOBRAJENSKI2007_PRB}. After Eu intercalation, the B 1s and N 1s core levels notably change their shape and energy, reflecting the fact that the hBN contact interface is now different (EuPt$_2$/hBN), with a stronger hBN-EuPt$_2$ interaction. The Eu 3d core level, as mentioned above, proves that Eu atoms are present in two configurations, di-valent Eu$^{2+}$ for the 2D EuPt$_2$ alloy and tri-valent Eu$^{3+}$ for Eu atoms incorporated into the Pt bulk below the EuPt$_2$ layer. After exposing the sample to ambient conditions (6 hours, room temperature, 80\% humidity) and a vacuum annealing to 770K to remove impurities from the air exposure, the Eu$^{2+}$/Eu$^{3+}$ ratio drops to one third. The O 1s core level is detected at the higher binding energy side of the Pt 4p$_{3/2}$ spectrum. A detailed view (inset) indicates a double peak with energy positions of 530.5 eV and 532 eV, respectively. The former is in good agreement with the binding energy in the tri-valent Eu$_2$O$_3$ oxide~\cite{Mercier2006_JElecSpecRelPhen,Baltrus2019_SurfSciSpec}. The 532 eV emission is interpreted as due to hydroxide -OH, typical of Eu samples exposed to air. On the other hand, the B 1s and N 1s core levels partially recover the shape and energy they had prior to Eu intercalation. This indicates that the hBN interaction with the partially oxidized EuPt$_2$ patches underneath is weaker. In contrast, oxidation of the hBN layer is not observed.
The hBN protection of the intercalated EuPt$_2$ alloy on vicinal Pt substrates is examined in Fig.~\ref{fig:XPS_allPt111}(b), in direct comparison with the Pt(111) plane. Here we show the Eu 3d, Pt 4p$_{3/2}$ and O 1s core levels at the Pt(332) and Pt(223) surfaces. At the Pt(332) plane, no di-valent signal remains in the Eu 3d spectrum after exposure to air. At the Pt(223) position the Eu$^{2+}$/Eu$^{3+}$ ratio is strongly reduced, but the Eu$^{2+}$ peak is still visible. Again, the O 1s spectrum suggests that the strong reduction of the number of di-valent Eu atoms is due to the formation of tri-valent Eu oxides and hydroxides. The overall O 1s intensity increases as the Eu$^{2+}$ peak decreases at the stepped surfaces, although a more detailed peak analysis indicates that the intensity of the Eu-OH peak at 532 eV is similar in all three cases, and it is the 530.5 eV peak from Eu$_2$O$_3$ that scales inversely with the Eu$^{2+}$/Eu$^{3+}$ ratio. The complete oxidation of the B-type (332) surface correlates with the high facet density observed in the STM analysis of Fig.~\ref{fig:STM}, since this allows a higher number of hBN bends or breaks at facet borders. The rather small divalent signal that remains in Pt(223) may reflect the presence of larger (111) and side facets where the intercalated EuPt$_2$ alloy remains better protected under ambient conditions. \begin{figure}[tb!] \centerline{\includegraphics[width=0.8\columnwidth]{Fig7.pdf}} \caption{\textbf{Angle-resolved photoemission analysis of the hBN layer.} He IIa (h$\nu$ = 40.8eV) photoemission intensity maps along the $\bar\Gamma\bar K$ direction of the hBN band structure for (a) hBN/Pt(111), (b) after intercalation of 1 \AA~of Eu at $T$ = 770K, (c) after exposure of that sample to air, and (d) after another annealing to 770K to desorb air contamination, respectively. The most intense features correspond to the $\pi$-band of hBN at the indicated interfaces.} \label{fig:air_exposure_bands} \end{figure} Finally, we analyze the impact of the air exposure on the hBN protecting layer. For this purpose, angle-resolved photoemission spectroscopy (ARPES) is particularly appropriate, since it is highly sensitive to the hBN valence band, as shown in Fig.~\ref{fig:air_exposure_bands}. The photoemission intensity map of Fig.~\ref{fig:air_exposure_bands}(a) corresponds to the hBN/Pt(111) interface (centre of the curved sample), and has been measured using the He II excitation energy (h$\nu$ = 40.8eV). The strongly dispersing, intense features are the hBN $\pi$ bands, with their minimum at the $\bar\Gamma$ point of the Brillouin zone and their maximum at $\bar K$. The emission between approx. 2 and 8 eV binding energy entirely belongs to hBN, while the Pt valence band features appear closer to the Fermi level. After intercalation of a very small amount of Eu (1 \AA), the main $\pi$ band appears unaltered, although a replica of this band emerges below, at 2 eV higher binding energy (Fig.~\ref{fig:air_exposure_bands}(b)). The unaltered band corresponds to pure Pt areas without Eu, while the shifted band arises in areas where the Eu intercalates. The band shift is explained by the enhanced interaction of the hBN with the EuPt$_2$ substrate, with a net electron transfer from Eu atoms to the hBN layer. After air exposure all bands become quite blurry due to surface contamination. The dominating hBN $\pi$ band is still visible, slightly shifted to higher binding energies.
Interestingly, both the pure hBN/Pt and the Eu-intercalated bands can be recovered by annealing the sample again to 770K, which removes adsorbates from the surface. This indicates that the oxygen intercalation does not affect (oxidize) the hBN layer, which remains intact. The photoemission intensity mapping still includes the higher binding energy replica, indicating that strongly interacting hBN/EuPt$_2$ patches remain, despite the partial oxidation of the EuPt$_2$ layer. \section{Conclusions} We have investigated the structural, magnetic and electronic properties of Eu after intercalation between a Pt substrate and a hBN monolayer. We used a Pt sample curved around the (111) direction in order to additionally assess the role of substrate steps. Our LEED analysis of the (111) interface shows a $\sim$($\sqrt{3}\times\sqrt{3}$)R30$^\circ$ pattern, revealing the presence of the EuPt$_{2}$ surface alloy under the hBN layer. We find that Eu atoms in this EuPt$_{2}$ layer are divalent, while Eu atoms that have diffused further inside the Pt bulk during the intercalation process are trivalent. Interestingly, the Eu$^{2+}$/Eu$^{3+}$ ratio is not affected by the presence of steps at the Pt substrate. XMCD magnetization curves of the di-valent Eu atoms reveal ferromagnetic behavior. Upon air exposure of the sample, the magnetic, divalent Eu atoms are only partially protected at the (111) plane, while at vicinal surfaces the protecting role of the hBN layer is less efficient, as reflected in the larger attenuation of the divalent Eu state. Such incomplete protection of vicinal planes may be related to a larger number of defects and domain boundaries in a more discontinuous hBN layer, since this covers a much rougher hill-and-valley faceted structure. This facilitates oxygen diffusion, intercalation and the oxidation of the EuPt$_{2}$ alloy. In contrast, the hBN layer itself remains intact upon both Eu intercalation and air exposure. \section*{Supplementary material} The supplementary material contains information on the electron spectroscopy analysis for Eu intercalation at insufficient substrate temperatures. Furthermore, the magnetic easy-axis direction is investigated by the XMCD technique. \ack We acknowledge financial support from grants PID2020-116093RB-C44 funded by the Spanish MCIN/AEI/ 10.13039/501100011033 and the Basque Government (Grant IT-1591-22). We acknowledge the European Synchrotron Radiation Facility for provision of beam time on ID32. ESRF access was provided through proposal MA-5454~\cite{ESRF_Ma5454}. Part of the research leading to these results has been supported by the project CALIPSOplus under Grant Agreement 730872 from the EU Framework Programme for Research and Innovation HORIZON 2020. Y. H. appreciates the support of the Japan Society for the Promotion of Science (JSPS) Overseas Research Fellowships, and I.P. and F.B. acknowledge financial support from the EUROFEL project (RoadMap Esfri). \pagebreak \section{Spectroscopy analysis of insufficient temperature for Eu intercalation} Eu does not completely intercalate below a hBN layer on Pt if the substrate temperature is not sufficiently high. Here we show near-edge X-ray absorption fine structure (NEXAFS) and X-ray photoelectron spectroscopy (XPS) data for such preparations. Data were taken at the ID32 and BACH beamlines of the ESRF and Elettra synchrotrons, respectively. \begin{figure}[b!]
\includegraphics[width=0.75\textwidth,angle=0,clip]{FigS1.pdf} \caption{\textbf{X-ray absorption spectra at the Eu M$_{4,5}$ absorption edge for Eu deposition on hBN/Pt(111) at $T$ = 610K and successive air exposure.}} \label{XAS} \end{figure} X-ray absorption spectroscopy (sample normal parallel to the field and light incidence) for an approx. 1 ML thick Eu film deposited on a hBN/Pt(111) substrate held at $T$ = 610K, and after subsequent exposure to air, is shown in Fig.~\ref{XAS}. The air exposure was carried out with the sample at room temperature for five minutes. After air exposure a strong transformation of the spectral shape towards the tri-valent Eu configuration took place. Nevertheless, even after the air exposure experiment a small di-valent part is still present. In order to remove remaining air contamination, the sample was post-annealed in UHV to $T$ = 470K for 10 minutes. \begin{figure}[t!] \includegraphics[width=0.75\textwidth,angle=0,clip]{FigS2.pdf} \caption{\textbf{XPS analysis for sub-monolayer Eu deposition on hBN/Pt(111) at $T$ = 470K.} XPS survey, B 1s and Pt 4f emissions of 0.7 and 1.5\AA~Eu coverage on hBN/Pt(111), respectively, taken for h$\nu$ = 272eV. In the case of Pt 4f, the clean Pt emission is also included to better observe the surface and bulk contributions (h$\nu$ = 190eV). After low binding energy normalization, the areas $A$ below the emissions were extracted, and from them the overlayer amount was calculated by taking into account the electron mean free path from the standard curve~\cite{dench79}. From this analysis we observe that not all Eu is intercalated below the hBN layer; part of it rather stays on top.} \label{FigS1} \end{figure} Fig.~\ref{FigS1} shows X-ray photoemission data from the BACH beamline taken at a photon energy of $h\nu$ = 272eV. The substrate temperature during Eu deposition for the data shown in Fig.~\ref{FigS1} was $T$ = 470K. The data set includes measurements of hBN/Pt(111) and two successive Eu depositions, each for 10 min at a very low rate. The total Eu thickness was calculated from the Pt 4f intensity loss taking into account the mean free path of the electrons. The latter was extracted from the universal curve of the electron mean free path~\cite{dench79} and resulted in Eu thicknesses of 0.7 and 1.5\AA\ for 10 and 20 min of Eu deposition, respectively. In Fig.~\ref{FigS1} one observes the spin-orbit split 4f$_{7/2}$ and 4f$_{5/2}$ components of the Pt 4f core level. These are further split into the surface (S) and bulk (B) emissions indicated in the figure. It is \begin{figure}[tbph!] \includegraphics[width=0.75\textwidth,angle=0,clip]{FigS3.pdf} \caption{\textbf{Resonant photoemission at the Eu 4d$\rightarrow$4f absorption edge.} Normal-emission photoemission spectra of the valence band region of 1.5\AA~Eu deposited onto a hBN/Pt(111) surface held at 470K, for three photon energies corresponding to off-resonant, on-resonant Eu$^{2+}$, and on-resonant Eu$^{3+}$ conditions at h$\nu$ = 132, 140, and 144.5eV, respectively. Shown are spectra prior to and after exposure of the sample to ambient conditions (5 min, room temperature).} \label{FigS2} \end{figure} interesting to note that the surface emission can still be observed for hBN/Pt(111). This is possible due to the weak interaction of hBN and Pt, especially at the (111) face, which hosts a hBN moir\'e lattice. After Eu deposition the surface component shrinks much more than the bulk component, revealing that partial Eu intercalation occurred.
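The attenuation analysis used here and for the B 1s level in the next paragraph can be sketched in a few lines, assuming a simple exponential damping of a core-level area by the overlayer, $A = A_0\exp[-d/(\lambda_{\mathrm{mfp}}\cos\theta)]$; the mean free path and the area ratios below are placeholder numbers chosen only to illustrate the procedure, not the measured ones:
\begin{verbatim}
import numpy as np

def overlayer_thickness(area_ratio, lam_mfp, theta_deg=0.0):
    """Overlayer thickness d from the damping A/A0 of a core-level area,
    assuming A = A0 * exp(-d / (lam_mfp * cos(theta)))."""
    return -lam_mfp * np.cos(np.radians(theta_deg)) * np.log(area_ratio)

lam_mfp = 5.0                      # electron mean free path in angstrom (placeholder)
for area_ratio in (0.87, 0.74):    # hypothetical Pt 4f area ratios A/A0
    d = overlayer_thickness(area_ratio, lam_mfp)
    print(f"A/A0 = {area_ratio:.2f} -> d = {d:.2f} angstrom")
\end{verbatim}
Applying the same relation to the B 1s area gives the amount of Eu remaining on top of the hBN layer, and the difference from the total deposited amount is the intercalated fraction discussed next.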
We again calculate the Eu overlayer thickness, but now from the B 1s core level, which gives us the amount of Eu on top of the hBN layer. The thicknesses that result from the area diminution are 0.35 and 0.9\AA, respectively, for the two evaporations. This means that during the first deposition 0.35\AA~intercalated and after the second deposition 0.6\AA~out of the 1.5\AA~Eu intercalated. We also observe the doping effect of Eu on the hBN layer, which shifts the B 1s core level to higher binding energies in a similar way as for complete intercalation, see Fig.~6(a) of the main text. The analysis of the valency has been carried out with resonant photoemission applying photon energies around the 4d$\rightarrow$4f absorption edge. The results can be found in Fig.~\ref{FigS2}. In off-resonant photoemission (h$\nu$ = 132eV) the Eu 4f emission is strongly suppressed, and the valence band is therefore dominated by Pt 5d emission. Nevertheless, there is already some Eu 4f emission. The 4f emissions are much better observed for the resonant photon energies h$\nu$ = 140eV and 144.5eV corresponding to the Eu$^{2+}$ and Eu$^{3+}$ resonances, respectively~\cite{Schneider1983_RPB28,ColonSantana2012_JPCM,Banik2012_PRB}. The di-valent emission is located at approx. 1.1eV binding energy, while the tri-valent multiplet is observed between 4 and 12eV. Due to the unknown cross-section effects around the resonances an exact Eu$^{2+}$/Eu$^{3+}$ ratio cannot be extracted. The tri-valent contribution arises from the part of the intercalated Eu atoms that diffuses further inside the Pt bulk, but it can also arise from oxidation of the Eu atoms on top of the hBN layer to tri-valent Eu in Eu$_2$O$_3$. The latter oxidation is very difficult to avoid due to the strong reactivity of Eu even under very good ultra-high vacuum conditions~\cite{Schumacher2014_PRB}. After air exposure, the situation changes drastically. There is still a small di-valent signal, but now at a binding energy of 2.3eV. This Eu$^{2+}$ emission cannot arise from di-valent Eu in a metallic environment, whose binding energy is always lower than 2eV. An emission at 2.3eV rather arises from divalent Eu in EuO at surfaces~\cite{Caspers2011_PRB,ColonSantana2012_JPCM}. This oxide is usually unstable under ambient conditions but seems to be able to form and remain stable under the experimental conditions used here. EuO is created either on the surface or at the interface under hBN (and stabilized thanks to the interface). Further investigations would be necessary to distinguish between both possibilities. In any case, we do not observe a divalent EuPt$_2$ structure below the hBN after the air exposure process for Eu deposition at 470K substrate temperature. \begin{figure}[b!] \includegraphics[width=0.75\textwidth,angle=0,clip]{FigS4.pdf} \caption{\textbf{Eu magnetization curves at different geometries.} Eu magnetization curves measured with the sample normal and at $\theta$ = 70$^\circ$ with respect to the field. The inset shows a close-up at small fields to better distinguish the two curves. One clearly observes the more rectangular shape for the out-of-plane geometry, revealing an out-of-plane easy axis.} \label{mag_loops} \end{figure} \section{Magnetic anisotropy determination} The magnetic anisotropy of the intercalated Eu can be determined from the two different geometries applied. The magnetization curves for the sample perpendicular and at an angle of 70$^\circ$ with respect to the applied field are shown in Fig.~\ref{mag_loops}.
The close-up at small fields reveals a more rectangular shape of the out-of-plane magnetization loop, pointing to this direction as the easy axis of magnetization. The XMCD values close to zero field have been taken from individual complete XMCD measurements to avoid the typical zero-field artifacts of XMCD measurements. \pagebreak
\section{Introduction} Weakly bound nuclei near the drip-line have properties which are not seen in strongly bound stable nuclei. The neutron halo is a typical example~\cite{Tanihata1985,Tanihata2013}. Apart from quantal penetration caused by the small separation energy, the neutron pairing correlation plays crucial roles here, for example, in determining the binding of two-neutron halo nuclei~\cite{Hansen1987,Bertsch1991,Meng1996,Barranco2001,Myo2002}. Note, however, that the pairing correlation in weakly bound nuclei is different from that in stable nuclei since it causes configuration mixing involving both bound and unbound (continuum) single-particle orbits, and this continuum coupling brings about novel features~\cite{Meng1996,Meng2006,Dobaczewski1996,Dobaczewski2007,Bennaceur2000,Hamamoto2003,Hamamoto2004,Matsuo2005,Matsuo2010,Zhang2011,Zhang2014}. For example, the pairing correlation persists in drip-line nuclei only with the continuum coupling, allowing the binding of a two-neutron halo~\cite{Meng1996,Meng2006}. The continuum coupling is necessary also for the di-neutron correlation, a characteristic spatial correlation in neutron-rich nuclei~\cite{Matsuo2005,Hagino2005,Pillet2007,Zhang2014}. On the other hand, the continuum coupling also provides a seemingly opposite mechanism that suppresses the development of the halo radius, called the pairing anti-halo effect~\cite{Bennaceur2000,Hamamoto2004,Chen2014}. Another interesting example is the possible manifestation of a new type of resonance generated by the pairing correlation and the continuum coupling, called the quasi-particle resonance~\cite{Belyaev1987,Bulgac1980}. If one describes a single-particle scattering problem within the scheme of Bogoliubov's quasi-particle theory, even a scattering state becomes a quasi-particle state which has both `particle' and `hole' components. In other words, an unbound nucleon couples to a Cooper pair and a bound hole orbit, and then forms a resonance. This quasi-particle resonance is expected also to exhibit new features in weakly bound nuclei since the continuum coupling becomes stronger as the separation energy decreases. In the case of well bound stable nuclei, the depth of the Fermi surface is around 8 MeV. Therefore, quasi-particle resonances, which emerge above the separation energy, have excitation energies larger than 8 MeV, and hence they correspond to deep hole orbits. The excitation energy $E^{\mathrm{stable}}_{x}$ of such a quasi-particle resonance is much larger than the pair gap $\Delta$: $E^{\mathrm{stable}}_{x}\gg\Delta$. In this case, the effect of the pairing correlation is treated in a perturbative way~\cite{Belyaev1987,Bulgac1980}. The resonance width $\Gamma$, for example, is evaluated on the basis of Fermi's golden rule. The width $\Gamma$ is predicted to be proportional to the square of the pair gap $|\Delta_{\mathrm{average}}|^{2}$, and $\Gamma$ is estimated to be small (i.e. of the order of 1-100 keV)~\cite{Belyaev1987}, much smaller than the experimentally known typical width (several MeV) of the deep-hole resonances~\cite{RingSchuck}. Experimental identification of the pairing effect on the deep-hole resonances is not very promising in this respect~\cite{Dobaczewski1996}. In the case of small separation energy, in particular in neutron-rich nuclei, the properties of the quasi-particle resonance may be different from those in stable nuclei. A neutron-rich nucleus has a shallow Fermi energy, with extreme depths smaller than 1 MeV realized in neutron drip-line nuclei.
In this case the excitation energy of a quasi-particle resonance might be comparable with or smaller than the pair gap: $E^{\mathrm{unstable}}_{x}\lesssim\Delta$. The pairing correlation may cause strong configuration mixing between weakly bound orbits and low-lying continuum orbits, since both are located near the Fermi surface. The perturbative description may not be applicable, and we expect a yet-undisclosed relation between the quasi-particle resonance and the pairing correlation. The small neutron separation energy provides another merit in studying the quasi-particle resonance. In this case the quasi-particle resonance appears also in the low-lying region where the level density is low. Other mechanisms beyond the mean-field approximation, for instance, the fragmentation due to coupling to complex configurations~\cite{Bertsch1983}, are expected to be suppressed. This might increase the possibility to observe the quasi-particle resonance directly. There exist several theoretical works that have studied quasi-particle resonances in nuclei near the neutron drip-line~\cite{Hamamoto2003,Hamamoto2004,Zhang2011,Grasso2000,Fayans2000,Michel2008,Pei2011,Zhang2012,Oba2009,Zhang2013,Sandulescu2000,Betan2006,Sandulescu2005}. Many of them employ the selfconsistent Hartree-Fock-Bogoliubov (HFB) scheme~\cite{Zhang2011,Grasso2000,Fayans2000,Michel2008,Pei2011,Zhang2012}, or its variation in which the Hartree-Fock potential is replaced with the Woods-Saxon potential~\cite{Hamamoto2003,Hamamoto2004}. The quasi-particle resonance in deformed nuclei is also discussed~\cite{Oba2009,Zhang2013}. Approximate schemes using the Hartree-Fock+BCS theory are also adopted both in non-relativistic and relativistic frameworks~\cite{Sandulescu2000,Betan2006,Sandulescu2005}. Despite these previous studies, effects of the pairing correlation on the low-lying quasi-particle resonance in weakly bound nuclei have not been revealed yet. We shall discuss this subject in order to understand the behavior of the pairing correlation in drip-line nuclei and unbound nuclei. In the present study, we particularly aim to reveal effects of the pairing correlation on the width of low-lying quasi-particle resonances in drip-line nuclei. We focus on neutron resonances, in particular in the $p$ wave, having small excitation energy $E_{x}\lesssim$ a few MeV. The continuum coupling is expected to be influential for neutrons in low angular momentum partial waves, i.e. in the $s$ and $p$ waves, because of the absence of a Coulomb barrier and the small (or vanishing) centrifugal barrier. Neutrons in these partial waves also play an important role in the neutron halo. Furthermore, a scattering neutron in low angular momentum waves is a major contributor to low-energy neutron capture phenomena~\cite{Raman1985}, which are important for astrophysical applications. In the present work, we discuss the $p$ wave quasi-particle resonance as a first step of a series of studies. The case of the $s$ wave, which involves a virtual state, will be discussed separately in a future publication. It is not appropriate to treat effects of the pairing correlation as a perturbation in the calculation of the resonance width in weakly bound nuclei. We therefore describe the continuum quasi-particle states by solving numerically the Hartree-Fock-Bogoliubov equation (equivalent to the Bogoliubov-de Gennes equation) in the coordinate space~\cite{Dobaczewski1996,Dobaczewski1984,Belyaev1987,Bulgac1980} to obtain the wave function of a neutron quasi-particle in the continuum.
We impose the scattering boundary condition~\cite{Belyaev1987,Hamamoto2003,Hamamoto2004,Grasso2000}. In this way, we calculate the phase shift for the continuum quasi-particle state and the elastic cross section for a neutron scattered by the superfluid nucleus. Then the resonance width and the resonance energy are extracted from the obtained phase shift. As a concrete example, we describe ${}^{46}$Si and an impinging neutron, in other words, a quasi-particle resonance in ${}^{47}$Si. Hartree-Fock-Bogoliubov calculations predict that this nucleus is located at or close to the neutron drip-line~\cite{Stoitsov2003}. Also, the neutron $2p$ orbits in ${}^{46}$Si are expected to be weakly bound or located just above the threshold energy. This paper is organized as follows: In Sect.~2, we explain the HFB theory in the coordinate space, the scattering boundary condition of the Bogoliubov quasi-particle and some details of the adopted model. In Sect.~3, we show the results of the numerical analysis performed for the ${}^{46}$Si + n system. We also discuss effects of the pairing correlation on the resonance width using systematic calculations with various pairing strengths and nuclear potential depths. Finally, we draw conclusions in Sect.~4. \section{Theoretical Framework} \subsection{The Hartree-Fock-Bogoliubov equation in the coordinate space with the scattering boundary condition} We introduce the wave function of the Bogoliubov quasi-particle state in the notation of Refs.~\cite{Dobaczewski1984,Matsuo2001}. It has two components: \begin{equation} \phi_{i}(\vec{r}\sigma)= \left( \begin{array}{c} \varphi_{1,i}(\vec{r}\sigma) \\ \varphi_{2,i}(\vec{r}\sigma) \end{array} \right). \end{equation} Here $\vec{r}$ is the spatial coordinate and $\sigma$ represents the spin variable. Assuming that the system has spherical symmetry, we write the Bogoliubov quasi-particle wave function as \begin{equation} \varphi_{1,i}(\vec{r}\sigma)=\frac{u_{lj}(r)}{r}[Y_{l}(\theta ,\varphi)\chi_{\frac{1}{2}}(\sigma)]_{jm},\quad \varphi_{2,i}(\vec{r}\sigma)=\frac{v_{lj}(r)}{r}[Y_{l}(\theta ,\varphi)\chi_{\frac{1}{2}}(\sigma)]_{jm}, \end{equation} where $l$, $j$ and $m$ are the angular momentum quantum numbers of the quasi-particle state, with $Y$ and $\chi$ being the spherical harmonics and the spin wave function. We also assume that the HF potential and the pair potential $\Delta (\vec{r})$ are local and real; then the Hartree-Fock-Bogoliubov equation in the coordinate space is written as \begin{equation} \left( \begin{array}{cc} -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dr^{2}}+U_{lj}(r)-\lambda & \Delta (r) \\ \Delta (r) &\frac{\hbar^{2}}{2m}\frac{d^{2}}{dr^{2}}-U_{lj}(r)+\lambda \end{array} \right) \left( \begin{array}{c} u_{lj}(r) \\ v_{lj}(r) \end{array} \right) =E \left( \begin{array}{c} u_{lj}(r) \\ v_{lj}(r) \end{array} \right), \label{hfbeq2_sec4} \end{equation} where $\lambda (<0)$ and $E$ are the Fermi energy and the quasi-particle energy, respectively. Here the upper component $u_{lj}(r)$ of the quasi-particle wave function represents the amplitude of the quasi-particle having particle character, hereafter called the `particle' component for short. The lower component $v_{lj}(r)$ represents the `hole' component. $U_{lj}(r)$ is the mean-field potential and $m$ is the neutron mass. The quasi-particle spectrum consists of discrete states with $E<|\lambda|$ and continuum states with $E>|\lambda|$~\cite{Dobaczewski1984}.
We intend to describe a system consisting of a superfluid nucleus and an impinging neutron, which in principle should be treated as a many-body unbound state. However, we adopt an approximation in which the neutron is treated as an unbound quasi-particle state, governed by Eq.~(3), built on the pair-correlated even-even nucleus. In other words, we neglect the selfconsistent effect of the unbound neutron on the mean field and the pair correlation. Under this assumption, we focus on continuum quasi-particle states with $E>|\lambda|$, which correspond to unbound single-particle states with positive neutron kinetic energy. We impose the scattering boundary condition on the Bogoliubov quasi-particle at distances far outside the nucleus as \begin{equation} \frac{1}{r}\left( \begin{array}{c} u_{lj}(r)\\ v_{lj}(r) \end{array} \right)= C\left( \begin{array}{c} \cos \delta_{lj}j_{l}(k_{1}r)-\sin \delta_{lj}n_{l}(k_{1}r)\\ Dh^{(1)}_{l}(i\kappa_{2}r) \end{array} \right)\xrightarrow[r\to\infty]{} C\left( \begin{array}{c} \frac{\sin\left( k_{1}r-\frac{l\pi}{2}+\delta_{lj} \right)}{k_{1}r}\\ 0 \end{array} \right) \label{bogoscatt} \end{equation} where $k_{1}=\sqrt{2m(\lambda+E)}/\hbar$ and $\kappa_{2}=\sqrt{-2m(\lambda-E)}/\hbar$~\cite{Belyaev1987,Bulgac1980,Dobaczewski1984,Grasso2000,Hamamoto2003}. The normalization factor $C$ is $C=\sqrt{2mk_{1}/\hbar^{2}\pi}$ to satisfy $\sum_{\sigma}\int d\vec{r}\phi^{\dagger} (\vec{r}\sigma,E)\phi (\vec{r}\sigma,E^{\prime})=\delta(E-E^{\prime})$. Here $\delta_{lj}$, $j_{l}(z)$, $n_{l}(z)$ and $h^{(1)}_{l}(z)$ are the phase shift, the spherical Bessel function, the spherical Neumann function and the spherical Hankel function of the first kind, respectively. The quasi-particle resonance can be seen in the elastic scattering of a neutron, and the elastic cross section $\sigma_{lj}$ associated with each partial wave is \begin{eqnarray} \sigma_{lj}=\frac{4\pi}{k^{2}_{1}} \left( j+\frac{1}{2} \right) \sin ^{2}\delta_{lj}. \end{eqnarray} \subsection{Details of numerical calculation} We solve the radial HFB equation~(\ref{hfbeq2_sec4}) in the coordinate space under the scattering boundary condition~(\ref{bogoscatt}) of the Bogoliubov quasi-particle. In the present study, we simplify the HF mean field by replacing it with the Woods-Saxon potential in a standard form: \begin{equation} U_{lj}(r)=\left[ V_{0}+ (\vec{l}\cdot\vec{s})V_{\mathrm{SO}}\frac{r^{2}_{0}}{r}\frac{d}{dr} \right] f_{\mathrm{WS}}(r)+\frac{\hbar^{2}l(l+1)}{2mr^{2}}, \quad f_{\mathrm{WS}}(r)=\left[ 1+\exp \left( \frac{r-R}{a} \right) \right]^{-1}. \label{hamil} \end{equation} Although the selfconsistency of the mean fields is neglected, an advantage of this treatment is that we can easily change the parameters of the potentials, facilitating systematic numerical analysis. On the other hand, effects of weak binding on the potential, for instance, a large diffuseness and a long tail, are not taken into account in the present calculation. We also assume that the pair potential $\Delta (r)$ has the Woods-Saxon shape: \begin{equation} \Delta(r)=\Delta_{0}f_{\mathrm{WS}}(r), \end{equation} following Ref.~\cite{Hamamoto2003}. The magnitude of the pair potential $\Delta_{0}$ is controlled by the average pair strength $\bar{\Delta}$~\cite{Hamamoto2003}: \begin{equation} \bar{\Delta}=\frac{\int^{\infty}_{0}r^{2}\Delta (r)f_{\mathrm{WS}}(r)dr}{\int^{\infty}_{0}r^{2}f_{\mathrm{WS}}(r)dr}=0.0 - 3.0~\mathrm{MeV}.
\end{equation} We change the strength $\bar{\Delta}$ from 0.0 MeV to 3.0 MeV in this study, considering the empirical systematics of the pair gap $\Delta\sim 12.0/\sqrt{A}$ MeV~\cite{BohrMottelson} ($\Delta\sim 1.7$ MeV for ${}^{46}$Si). The parameters of the Woods-Saxon potential are taken from Ref.~\cite{BohrMottelson}. The radial wave function is numerically solved up to $r_{\mathrm{max}}=40$ fm, where it is connected to the asymptotic forms of Eq.~(4). We consider the ${}^{46}$Si + n system for the following reasons. First, ${}^{46}$Si is predicted to be the drip-line nucleus of the Si isotopes and the deformation of this nucleus is small according to HFB calculations (see, for instance, Refs.~\cite{Stoitsov2003,Werner1996,Terasaki1997}). It may therefore be reasonable to assume that ${}^{46}$Si has a spherical shape in the present calculation. Second, the neutron $2p_{3/2}$ and $2p_{1/2}$ orbits are expected to be either weakly bound or slightly unbound, and hence they are expected to form low-lying quasi-particle resonances. Note that ${}^{46}$Si has not been observed experimentally yet~\cite{Thoennessen2012}. The neutron single-particle energies around the Fermi energy for ${}^{46}$Si in the Woods-Saxon potential are shown in Table~1. Both $2p$ orbits are very weakly bound for the original parameter set. In particular, the binding energy of the $2p_{1/2}$ orbit is very small: $e_{\mathrm{sp}}=-0.056$ MeV. For the Fermi energy $\lambda$, we use a fixed value $\lambda=-0.269$ MeV, which is obtained from a Woods-Saxon-Bogoliubov calculation~\cite{Oba2009}. \begin{table}[t] \centering \begin{tabular}{ccc} \hline Single-particle orbit && Single-particle energy $e_{\mathrm{sp}}$ [MeV] \\ \hline $2p_{1/2}$ && -0.056 \\ $2p_{3/2}$ && -1.068 \\ $1f_{7/2}$ && -2.821 \\ \hline \end{tabular} \caption{Neutron single-particle orbits in the Woods-Saxon potential of ${}^{46}$Si, obtained with the standard Woods-Saxon parameters~\cite{BohrMottelson}.} \label{spene} \end{table} \section{Results and discussion} \subsection{Cross section and phase shift of neutron elastic scattering} Figure~1 shows the calculated elastic cross sections obtained (a) without the pairing correlation ($\bar{\Delta}=0.0$ MeV) and (b) with the pairing correlation ($\bar{\Delta}=1.0$ MeV). In the case of $\bar{\Delta}=0.0$ MeV, single-particle potential resonances are found in the $f_{5/2}$ and $g_{9/2}$ waves, corresponding to the $1f_{5/2}$ and $1g_{9/2}$ orbits trapped by the centrifugal barrier. Note that configurations with the last neutron occupying the $2p_{3/2}$ or $2p_{1/2}$ orbits are bound states, and are not seen in Fig.~1~(a). \begin{figure} \begin{center} \includegraphics[scale=0.59]{fig1a.eps} \includegraphics[scale=0.59]{fig1b.eps} \caption{(a) Elastic cross sections $\sigma_{lj}$ for various partial waves in the case of $\bar{\Delta}=0.0$ MeV. (b) The same as (a), but in the case of $\bar{\Delta}=1.0$ MeV.} \end{center} \end{figure} On the other hand, in the case of $\bar{\Delta}$=1.0 MeV, we see narrow low-lying peaks in the $p_{1/2}$, $p_{3/2}$ and $f_{7/2}$ waves, which do not exist in the case of $\bar{\Delta}$=0.0 MeV. These peaks are not potential resonances caused by the centrifugal barrier. These characteristic resonances are the quasi-particle resonances which are caused by the pairing correlation. They are associated with the weakly bound single-particle orbits $2p_{1/2}$, $2p_{3/2}$ and $1f_{7/2}$.
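The calculation behind these cross-section curves can be illustrated with a minimal numerical sketch: integrate the coupled radial equations~(\ref{hfbeq2_sec4}) outward for a given neutron kinetic energy, select the combination of regular solutions whose hole component decays at large distance, and read the phase shift of Eq.~(\ref{bogoscatt}) from the particle component. The Woods-Saxon and spin-orbit parameter values below follow the standard parameterization of Ref.~\cite{BohrMottelson} and are our assumptions (they need not reproduce Table~1 exactly); all names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import spherical_jn, spherical_yn, spherical_kn

HB2M   = 20.736                    # hbar^2/2m of the neutron [MeV fm^2]
A, Z   = 46, 14                    # 46Si
R0, AD = 1.27, 0.67                # Woods-Saxon r0 and diffuseness a [fm] (assumed)
R      = R0 * A**(1.0/3.0)         # ~4.55 fm
V0     = -51.0 + 33.0*(A - 2*Z)/A  # Bohr-Mottelson depth [MeV] (assumed)
VSO    = -0.44 * V0                # spin-orbit strength [MeV] (assumed)
LAM    = -0.269                    # Fermi energy [MeV], fixed as in the text

f_ws  = lambda r: 1.0/(1.0 + np.exp((r - R)/AD))
df_ws = lambda r: -np.exp((r - R)/AD)/(AD*(1.0 + np.exp((r - R)/AD))**2)

def U_lj(r, l, j):                 # mean field + spin-orbit + centrifugal, Eq. (6)
    ls = 0.5*(j*(j + 1) - l*(l + 1) - 0.75)
    return V0*f_ws(r) + ls*VSO*(R0**2/r)*df_ws(r) + HB2M*l*(l + 1)/r**2

def delta0(dbar):                  # Delta_0 from the average gap, Eqs. (7)-(8)
    rr = np.linspace(1e-3, 20.0, 4000)
    return dbar*np.sum(rr**2*f_ws(rr))/np.sum(rr**2*f_ws(rr)**2)

def phase_shift(e_kin, l, j, dbar, rm=25.0):
    """Phase shift delta_lj for neutron kinetic energy e_kin [MeV]."""
    E, d0 = e_kin - LAM, delta0(dbar)      # quasi-particle energy E = e + |lambda|
    k1, kap2 = np.sqrt(e_kin/HB2M), np.sqrt((E - LAM)/HB2M)

    def rhs(r, y):                         # y = (u, u', v, v'), Eq. (3)
        u, du, v, dv = y
        Ur, Dr = U_lj(r, l, j), d0*f_ws(r)
        return [du, ((Ur - LAM - E)*u + Dr*v)/HB2M,
                dv, ((Ur - LAM + E)*v - Dr*u)/HB2M]

    ra = 0.1                               # start near the origin, u,v ~ r^(l+1)
    s1 = solve_ivp(rhs, (ra, rm), [ra**(l+1), (l+1)*ra**l, 0, 0],
                   rtol=1e-9, atol=1e-12).y[:, -1]
    s2 = solve_ivp(rhs, (ra, rm), [0, 0, ra**(l+1), (l+1)*ra**l],
                   rtol=1e-9, atol=1e-12).y[:, -1]

    # hole component must match the decaying solution w(r) = r k_l(kappa r)
    x  = kap2*rm
    Lw = (spherical_kn(l, x) + x*spherical_kn(l, x, derivative=True)) \
         / (rm*spherical_kn(l, x))
    a, b = (s2[3] - Lw*s2[2]), -(s1[3] - Lw*s1[2])
    u, du = a*s1[0] + b*s2[0], a*s1[1] + b*s2[1]

    # particle component matched to r [ j_l cos(delta) - n_l sin(delta) ], Eq. (4)
    kr = k1*rm
    s,  c  = rm*spherical_jn(l, kr), -rm*spherical_yn(l, kr)
    ds = spherical_jn(l, kr) + kr*spherical_jn(l, kr, derivative=True)
    dc = -spherical_yn(l, kr) - kr*spherical_yn(l, kr, derivative=True)
    L = du/u
    return np.arctan2(L*s - ds, dc - L*c)

l, j = 1, 0.5                              # p1/2 partial wave
for dbar in (0.0, 1.0):
    for e in (0.2, 0.4, 0.6):              # neutron kinetic energy [MeV]
        d = phase_shift(e, l, j, dbar)
        sig = 4*np.pi/(e/HB2M)*(j + 0.5)*np.sin(d)**2    # Eq. (5), in fm^2
        print(f"dbar={dbar:.1f}  e={e:.1f} MeV  sigma_p1/2 = {sig:8.1f} fm^2")
\end{verbatim}
Scanning such phase shifts over energy and partial waves produces curves of the type shown in Figs.~1 and 2, from which the resonance parameters are then extracted as described below.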
With $\bar{\Delta}=1.0$ MeV, the quasi-particle states corresponding to the $2p_{3/2}$ and $2p_{1/2}$ orbits become unbound resonances, seen as the low-lying peaks in Fig.~1~(b). It is noted that the $2p_{1/2}$ resonance energy is lower than that of the $2p_{3/2}$, with the ordering opposite to that of the standard single-particle states. In the following discussion, we focus on the low-lying 2$p_{1/2}$ resonance. Figure~2 shows the elastic cross sections and the phase shifts of the 2$p_{1/2}$ resonance which are obtained for various values of the pairing strength $\bar{\Delta}$. It is seen in these figures that the resonance is influenced significantly by the pairing strength $\bar{\Delta}$. \begin{figure} \begin{center} \includegraphics[scale=0.59]{fig2a.eps} \includegraphics[scale=0.59]{fig2b.eps} \caption{(a) Elastic cross section $\sigma_{p1/2}$ of the partial wave $p_{1/2}$ for various values of $\bar{\Delta}$. (b) Elastic phase shift $\delta_{p1/2}$ of the partial wave $p_{1/2}$ for various values of $\bar{\Delta}$.} \end{center} \end{figure} For $\bar{\Delta}=0.0$ MeV, no single-particle resonance is seen in the $p_{1/2}$ wave since the $2p_{1/2}$ orbit is bound with the single-particle energy $e_{2p1/2}=-0.056$ MeV and the corresponding quasi-particle energy $E_{2p1/2}=|e_{2p1/2}-\lambda|=0.213$ MeV is smaller than the threshold $|\lambda|=0.269$ MeV. As $\bar{\Delta}$ increases ($\bar{\Delta}\sim 0.5$ MeV), the $2p_{1/2}$ quasi-particle state acquires a quasi-particle energy $E$ larger than $|\lambda|$, and then appears in the continuum region as a resonance. Upon further increasing $\bar{\Delta}\gtrsim 1$ MeV, both the resonance width and the resonance energy are found to increase. The increase of the resonance energy may be anticipated qualitatively as the conventional BCS expression for the quasi-particle energy $E=\sqrt{(e_{\mathrm{sp}}-\lambda)^{2}+\Delta^{2}}$ suggests. The increase of the width $\Gamma$ as a function of the pair potential ($\propto |\bar{\Delta}|^{2}$) is suggested by the perturbative analysis~\cite{Belyaev1987,Bulgac1980}. However, we find that non-trivial pairing effects are involved here, as we discuss below. \subsection{Resonance width and resonance energy} We evaluate the resonance width and the resonance energy in order to investigate quantitatively the effects of the pairing correlation on these values. We extract the resonance width and the resonance energy from the calculated phase shift using a fitting method. We employ the following fit function: \begin{equation} \delta(e)=\arctan \left( \frac{2(e-e_{R})}{\Gamma} \right)+a(e-e_{R})+b \label{fiteq_sec6} \end{equation} where $e$, $\Gamma$ and $e_{R}$ are the kinetic energy of the scattering neutron, the resonance width (defined as the full width at half maximum (FWHM)) and the resonance energy, respectively, with the constants $a$ and $b$ representing a smooth background. We perform the fitting in the following two steps. First, we introduce a tentative energy interval and perform a fit. Next, using the zero-th order values $e^{(0)}_{R}$ and $\Gamma^{(0)}$ obtained in this way, we perform a second fit in the interval $\max(e^{(0)}_{R}-\Gamma^{(0)}, 0)\le e\le e^{(0)}_{R}+\Gamma^{(0)}$. Figure~3 shows the resonance width $\Gamma$ and the resonance energy $e_{R}$ for various values of $\bar{\Delta}$ corresponding to Fig.~2~(b). The vertical axis is the resonance width $\Gamma$ and the horizontal axis is the resonance energy $e_{R}$.
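A minimal sketch of this two-step fit, assuming the phase shift has been tabulated on an energy grid (the synthetic data and the function names below are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def delta_model(e, e_r, gamma, a, b):
    """Fit function of Eq. (9): arctan resonance term plus linear background."""
    return np.arctan(2.0*(e - e_r)/gamma) + a*(e - e_r) + b

def fit_resonance(e, delta):
    """Two-step fit: a first fit over the full interval gives zero-th order
    values; the second fit is restricted to [max(e_R-Gamma,0), e_R+Gamma]."""
    p0 = [e[np.argmax(np.gradient(delta, e))], 0.2, 0.0, 0.0]
    p1, _ = curve_fit(delta_model, e, delta, p0=p0)
    e_r0, g0 = p1[0], abs(p1[1])
    sel = (e >= max(e_r0 - g0, 0.0)) & (e <= e_r0 + g0)
    p2, _ = curve_fit(delta_model, e[sel], delta[sel], p0=p1)
    return p2[0], abs(p2[1])     # resonance energy e_R and width Gamma (FWHM)

# synthetic phase shift, only to illustrate the procedure
e = np.linspace(0.01, 2.0, 400)
rng = np.random.default_rng(0)
delta = delta_model(e, 0.45, 0.30, 0.05, 1.2) + 0.01*rng.normal(size=e.size)
print("e_R, Gamma =", fit_resonance(e, delta))
\end{verbatim}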
Both the resonance width $\Gamma$ and the resonance energy $e_{R}$ increase as the pairing strength $\bar{\Delta}$ increases. Although the resonance width $\Gamma$ becomes larger than the resonance energy $e_{R}$ for $\bar{\Delta}\geq 2.0$ MeV, we regard it as a meaningful resonance since the fit is of as good quality as in the cases of $\bar{\Delta}<2.0$ MeV. \begin{figure}[t] \begin{center} \includegraphics[scale=0.59]{fig3.eps} \caption{The $e_{R}$-$\Gamma$ relation of the $2p_{1/2}$ quasi-particle resonance for various values of $\bar{\Delta}$. The vertical axis is the resonance width $\Gamma$ and the horizontal axis is the resonance energy $e_{R}$.} \end{center} \end{figure} To systematically investigate the influence of the position of the single-particle orbit on the resonance, we change not only the pairing strength $\bar{\Delta}$ but also the single-particle energy of the $2p_{1/2}$ orbit. We vary the depth of the Woods-Saxon potential $V_{0}$ to change the single-particle energy. The variation from the original value is denoted by $\Delta V_{0}$. Figure~4~(a) shows the $2p_{1/2}$ single-particle energy as a function of $\Delta V_{0}$. The length of the vertical bars in the figure represents the resonance width (FWHM). It is seen that the $2p_{1/2}$ orbit enters the continuum as the potential is made shallower by $\Delta V_{0}\sim 0.5$ MeV. The resonance width (vertical bars) grows as the potential is made even shallower. The height of the centrifugal barrier $E_{\mathrm{barrier}}$ for the $p_{1/2}$ wave (the dotted curve in Fig.~4~(a)) is $\sim$0.5 MeV, approximately independent of $\Delta V_{0}$. Figure~4~(b) shows the $e_{R}$-$\Gamma$ relation of the single-particle potential resonance corresponding to Fig.~4~(a). For $\Delta V_{0}\gtrsim 4.0$ MeV, the resonance width is very broad, $\Gamma\gtrsim 2e_{R}$, as expected from $e_{R}\gtrsim E_{\mathrm{barrier}}$. \begin{figure} \begin{center} \includegraphics[scale=0.59]{fig4a.eps} \includegraphics[scale=0.59]{fig4b.eps} \caption{(a) The single-particle energy of the neutron $2p_{1/2}$ orbit for various depths of the Woods-Saxon potential. The vertical axis is the neutron single-particle energy and the horizontal axis is the variation of the potential depth $\Delta V_{0}$. A positive single-particle energy represents the resonance energy, and the length of the attached vertical bar represents the resonance width (FWHM). The dotted line indicates the height of the centrifugal barrier. (b) The $e_{R}$-$\Gamma$ relation of the $2p_{1/2}$ single-particle potential resonance for various potential depths $\Delta V_{0}$.} \end{center} \end{figure} The resonance width and the resonance energy evaluated for various $\bar{\Delta}$ and $\Delta V_{0}$ are plotted in the $e_{R}$-$\Gamma$ plane in Fig.~5. As a reference, the $e_{R}$-$\Gamma$ relation of the single-particle potential resonance (Fig.~4~(b)) is also shown. \begin{figure} \begin{center} \includegraphics[scale=0.59]{fig5a.eps} \includegraphics[scale=0.59]{fig5b.eps} \caption{The $e_{R}$-$\Gamma$ relation of the $2p_{1/2}$ quasi-particle resonance with various values of $\bar{\Delta}$ and $\Delta V_{0}$. (a) The $e_{R}$-$\Gamma$ relation for given values of $\Delta V_{0}$ with varying $\bar{\Delta}$ from $0.0$ to $3.0$ MeV. (b) The $e_{R}$-$\Gamma$ relation for given values of $\bar{\Delta}$ with varying $\Delta V_{0}$ from $-6.0$ to $4.0$ MeV.
The curve with $\bar{\Delta}=0.0$ MeV is the $e_{R}$-$\Gamma$ relation of the $2p_{1/2}$ single-particle resonance, shown in Fig.~4~(b).} \end{center} \end{figure} Figure~5~(a) displays the dependence of $\Gamma$ on $\bar{\Delta}$ for fixed values of $\Delta V_{0}$. We see that both the resonance width and the resonance energy increase with increasing $\bar{\Delta}$ for all the values of $\Delta V_{0}$. Figure~5~(b) is another plot showing the dependence on $\Delta V_{0}$ for fixed values of $\bar{\Delta}$. A distinctive feature seen in Fig.~5 is that the quasi-particle resonance exists even at energies $e_{R}$ higher than the barrier height $E_{\mathrm{barrier}}\sim 0.5$ MeV. It is also seen that the $e_{R}$-$\Gamma$ relation displays two different features. One is seen in the bottom-right region of Fig.~5~(b), where the resonance width changes only slightly with the resonance energy. The other, seen in the upper-left region, is that the resonance width increases sensitively as the resonance energy changes. This difference in the $e_{R}$-$\Gamma$ relation is related to whether the $2p_{1/2}$ orbit is located above or below the Fermi energy. In other words, the difference originates from whether the original $2p_{1/2}$ orbit is particle-like or hole-like. More precisely, the $2p_{1/2}$ orbit is particle-like (hole-like) for $\Delta V_{0}>-0.854$ MeV ($\Delta V_{0}\leq -0.854$ MeV). The boundary $\Delta V_{0}=-0.854$ MeV is plotted in Fig.~5~(b) with open circles. In the following discussion, we call the former a particle-like quasi-particle resonance, and the latter a hole-like quasi-particle resonance. Concerning the hole-like quasi-particle resonance, the resonance width is approximately independent of the resonance energy $e_{R}$. A deviation from this simple behavior is seen for $e_{R}\lesssim 1.0$ MeV. As for the particle-like quasi-particle resonance, the behavior is much more complicated and non-trivial. We shall examine these points in the following subsections. \subsection{Pairing effect on the hole-like quasi-particle resonance} Let us first analyze the hole-like quasi-particle resonances, i.e. the case of $e_{{\rm sp}} < \lambda$. As already seen in connection with Fig.~5~(b), the dependence of the resonance width $\Gamma$ on the average pairing gap $\bar{\Delta}$ appears rather simple: $\Gamma$ increases monotonically with $\bar{\Delta}$ while $\Gamma$ depends only weakly on the resonance energy $e_R$ or the single-particle energy $e_{{\rm sp}}$. We shall now analyze the pairing dependence of the resonance width $\Gamma$ by comparing it with the analytical expression~\cite{Belyaev1987,Bulgac1980} derived for the hole-like quasi-particle resonance on the basis of perturbation theory with respect to the pairing gap or the pair potential. The perturbative evaluation assumes that a single-hole state with energy $e_{{\rm sp}}$ and wave function $\varphi_i(\vec{r}\sigma)$ couples to unbound single-particle states $\varphi_{e}(\vec{r}\sigma)$ only weakly via the pair potential $\Delta(\vec{r})$.
This leads to the expression \begin{equation} \Gamma_{i}=2\pi\left| \sum_{\sigma}\int d\vec{r}\varphi^{\dagger}_{i}(\vec{r\sigma})\Delta(\vec{r})\varphi_{e}(\vec{r}\sigma) \right|^2 \propto \left|\Delta_{\mathrm{average}}\right|^{2} \label{qpreswidth} \end{equation} where the wave function of the unbound single-particle orbit at energy $e$ is normalized as \begin{equation} \sum_{\sigma}\int d\vec{r}\varphi^{\dagger}_{e}(\vec{r}\sigma)\varphi_{e^{\prime}}(\vec{r}\sigma)=\delta (e - e^{\prime}). \label{kikaku} \end{equation} The resonance energy in the zero-th order is $e_R^0= |e_i - \lambda| + \lambda=|e_i|- 2|\lambda|$, corresponding to the quasi-particle energy $E_i^0 = |e_i - \lambda| $ of the hole state. We shall now compare the resonance width $\Gamma$ obtained from the numerical fit to the phase shift with that from the perturbative evaluation, Eq.~(\ref{qpreswidth}). The results are shown in Fig.~6, which plots the evaluated widths as functions of the average pairing gap $\bar{\Delta}$. The perturbative calculation using Eq.~(\ref{qpreswidth}) is performed in two different ways, and they are plotted with the upward and downward triangles in Fig.~6. The curve with upward triangles is the case where the wave functions $\varphi_{i}$ and $\varphi_{e}$ of the hole and continuum orbits are fixed, and only $\Delta (r)$ is changed. For the energy of $\varphi_{e}$, we use the zero-th order resonance energy $e^{0}_{R}=|e_{2p1/2}|-2|\lambda|$. This scheme is named ``Fermi's golden rule 1'' hereafter. In the calculation for the curve with downward triangles, we fix the single-particle wave function of the bound orbit $\varphi_{i}$, but choose the energy $e$ of $\varphi_{e}$ so that it reproduces the resonance energy $e_{R}(\bar{\Delta})$ obtained from the phase shift for each $\bar{\Delta}$ (called ``Fermi's golden rule 2''). Figure~6~(a) shows the $\bar{\Delta}$-dependence of the resonance width $\Gamma$ for the resonance arising from the $2p_{1/2}$ hole state at $e_{\mathrm{sp}}=-4.127$ MeV ($\Delta V_{0}=-10.0$ MeV). Figures~6~(b) and (c) are the same as (a), but for the $2p_{1/2}$ hole orbits at $e_{\mathrm{sp}}=-1.347$ MeV ($\Delta V_{0}=-4.0$ MeV) and $e_{\mathrm{sp}}=-0.618$ MeV ($\Delta V_{0}=-2.0$ MeV), respectively. Figure~6~(a) is the case where the single-particle energy of the hole orbit is smaller than the Fermi energy $\lambda =-0.269$ MeV by about 4 MeV. This is a typical hole-like quasi-particle resonance since the resonance width $\Gamma$ evaluated with the perturbative calculations reproduces the non-perturbative evaluation. Deviations from the perturbative expression are seen in Figs.~6~(b) and (c): the difference between the perturbative and the non-perturbative evaluation becomes large as the single-particle energy $e_{\mathrm{sp}}$ approaches the Fermi energy $\lambda$ and as the pair potential grows.
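For spherical states the matrix element of Eq.~(\ref{qpreswidth}) reduces to a one-dimensional radial integral, $\Gamma=2\pi\left|\int_0^\infty u_{\mathrm{hole}}(r)\Delta(r)u_{e}(r)\,dr\right|^{2}$, which is straightforward to evaluate numerically. The sketch below uses schematic placeholder wave functions (not the actual HFB solutions) merely to illustrate the quadrature and the energy normalization of the continuum wave:
\begin{verbatim}
import numpy as np

HB2M = 20.736                      # hbar^2/2m of the neutron [MeV fm^2]

def golden_rule_width(r, u_hole, u_cont, delta):
    """Gamma = 2*pi * | int dr u_hole(r) Delta(r) u_cont(r) |^2  (Eq. (10)),
    with u_hole normalized to one and u_cont normalized to delta(e - e')."""
    y = u_hole*delta*u_cont
    return 2.0*np.pi*np.abs(np.sum(0.5*(y[1:] + y[:-1])*np.diff(r)))**2

r = np.linspace(1e-3, 30.0, 3000)
dr = r[1] - r[0]
u_hole = r*np.exp(-r/3.0)                         # toy bound 2p-like radial function
u_hole /= np.sqrt(np.sum(u_hole**2)*dr)           # normalized to unity
e = 0.5                                           # neutron kinetic energy [MeV]
k = np.sqrt(e/HB2M)
u_cont = np.sqrt(1.0/(np.pi*HB2M*k))*np.sin(k*r)  # schematic, energy normalized
delta = 1.0/(1.0 + np.exp((r - 4.55)/0.67))       # pair potential, Delta_0 = 1 MeV
print(f"Gamma ~ {1e3*golden_rule_width(r, u_hole, u_cont, delta):.0f} keV (toy numbers)")
\end{verbatim}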
\begin{figure}[t] \centering \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig6a.eps} \end{center} \end{minipage} \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig6b.eps} \end{center} \end{minipage} \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig6c.eps} \end{center} \end{minipage} \caption{Comparison of the perturbative evaluations of the resonance width $\Gamma$ obtained with Eq.~(10) (plotted with triangles) and the width $\Gamma$ obtained from the phase shift (plotted with circles), for $2p_{1/2}$ hole-like quasi-particle resonance, corresponding to the single-particle energies $e_{\mathrm{sp}}=-4.127$ MeV ($\Delta V_{0}=-10.0$ MeV) [panel (a)], $-1.347$ MeV ($\Delta V_{0}=-4.0$ MeV) [panel (b)] and $-0.618$ MeV ($\Delta V_{0}=-2.0$ MeV) [panel (c)]. The horizontal axis is the average pairing potential $\bar{\Delta}$. The upward triangle is the perturbative width $\Gamma$ in the scheme ``Fermi's golden rule 1'', while the downward triangle is that in the scheme ``Fermi's golden rule 2'' (see text).} \end{figure} \begin{figure}[t] \centering \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig7a.eps} \end{center} \end{minipage} \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig7b.eps} \end{center} \end{minipage} \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig7c.eps} \end{center} \end{minipage} \caption{Probability distribution $|u(r)|^{2}+|v(r)|^{2}$ of the $2p_{1/2}$ quasi-particle resonance, corresponding to (a) $e_{\mathrm{sp}}=-4.127$ MeV ($\Delta V_{0}=-10.0$ MeV), (b) $e_{\mathrm{sp}}=-1.347$ MeV ($\Delta V_{0}=-4.0$ MeV) and (c) $e_{\mathrm{sp}}=-0.618$ MeV ($\Delta V_{0}=-2.0$ MeV). The pairing strength is commonly $\bar{\Delta}=2.0$ MeV. Partial probabilities $|u(r)|^{2}$ and $|v(r)|^{2}$ associated with the particle- and hole-components, respectively, are also plotted. The Woods-Saxon radius $R=4.550$ fm is indicated with an arrow. The wave functions $u(r)$ and $v(r)$ are normalized so that $u(r)$ has a common asymptotic amplitude 1.} \end{figure} Figure~7 shows the probability distributions $|v(r)|^2$ and $|u(r)|^2$ of the three examples of the hole-like quasiparticle resonance. The panels (a), (b) and (c) correspond to Fig. 6~(a), (b) and (c), respectively (for $\bar{\Delta}=2.0$ MeV). Note that $|u(r)|^{2}$ is the probability distribution of the particle-component while $|v(r)|^{2}$ is that of the hole-component, and $|u(r)|^{2}+|v(r)|^{2}$ is the total probability to find the quasi-particle at position $r$. As expected, the probability $|u(r)|^{2}$ of the particle-component is much smaller than the probability $|v(r)|^{2}$ of the main hole-component in the case (a) where the perturbation works well. Contrarily, in the case (c) where the perturbation breaks down, $|u(r)|^{2}$ is comparable to the probability $|v(r)|^{2}$ of the main hole-component indicating strong mixing of the particle-component. For more quantitative argument we evaluate the probability distributions $|v(r)|^2$ and $|u(r)|^2$ integrated within the nuclear surface: $\bar{u}^{2}=\int^{R}_{0}|u(r)|^{2}dr$ and $\bar{v}^{2}=\int^{R}_{0}|v(r)|^{2}dr$, and evaluate the ratio $\bar{u}^{2}/\bar{v}^{2}$. The ratio is $0.021$ and $0.254$ for the case (a) and (c), respectively. In the case (b), corresponding to the boundary region for the breaking down of the perturbation, the ratio is $0.091$. 
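The interior probabilities quoted above can be obtained by a simple quadrature of the two radial components up to the Woods-Saxon radius; a minimal sketch, with placeholder arrays standing in for the numerically obtained $u(r)$ and $v(r)$:
\begin{verbatim}
import numpy as np

def interior_ratio(r, u, v, R=4.55):
    """ubar^2 / vbar^2: particle and hole probabilities integrated up to r = R."""
    sel, dr = r <= R, r[1] - r[0]
    return np.sum(u[sel]**2)*dr / (np.sum(v[sel]**2)*dr)

# placeholder radial components of a quasi-particle wave function
r = np.linspace(0.01, 25.0, 2500)
u = 0.15*r*np.exp(-0.30*r)        # schematic particle component
v = r*np.exp(-0.50*r)             # schematic hole component
print(f"ubar^2/vbar^2 = {interior_ratio(r, u, v):.3f}")
\end{verbatim}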
We have examined the applicability of the perturbative evaluation, Eq.~(\ref{qpreswidth}), systematically for all the combinations of $\bar{\Delta}$ and $\Delta V_{0}$ shown in Fig.~5. We adopt the criterion that both evaluations of Eq.~(\ref{qpreswidth}), with the two different choices of $\varphi_{e}$, agree with the non-perturbative numerical evaluation of the resonance width within a 10\% error. We then find that the applicability of Eq.~(\ref{qpreswidth}) is represented in terms of the single-particle energy $e_{\mathrm{sp}}$, the Fermi energy $\lambda$ and the pair gap $\bar{\Delta}$ as \begin{equation} e_{\mathrm{sp}}\lesssim \lambda -0.5\bar{\Delta}. \end{equation} We also examined the validity of Eq.~(10) in terms of the ratio $\bar{u}^{2}/\bar{v}^{2}$. It is found that the applicability of Eq.~(10) is represented also by \begin{equation} \bar{u}^{2}/\bar{v}^{2}\lesssim 0.1. \label{uv} \end{equation} The above analysis indicates that the perturbative evaluation works not only for the quasi-particle resonances associated with a deeply bound hole orbit, which has been considered previously~\cite{Belyaev1987,Bulgac1980}, but also for quasi-particle resonances arising from a shallowly bound hole orbit, for instance, one with $e_{\mathrm{sp}}\sim\lambda-0.5\bar{\Delta}$. Even in the latter case, the mixing of the particle-component into the main hole-component is small, $\bar{u}^{2}\lesssim 0.1\bar{v}^{2}$. This is probably the reason why the perturbation works in such a broad range of situations. On the contrary, it is natural that the perturbative evaluation, Eq.~(\ref{qpreswidth}), breaks down in the case of $e_{\mathrm{sp}}>\lambda$, where the dominant component of the quasi-particle state is not the hole-component $v(r)$, but the particle-component $u(r)$. A quite different, probably non-perturbative, mechanism of the pairing effect on the resonance width is expected in this case. \subsection{Pairing effect on the particle-like quasi-particle resonance} We then analyze the particle-like quasi-particle resonances, i.e. those in the case of $e_{{\rm sp}} \geq \lambda$. As typical examples, we examine two cases with $e_{2p1/2}=-0.056$ MeV ($\Delta V_{0}=0.0$ MeV) and with $e_{2p1/2}=0.251$ MeV ($\Delta V_{0}=2.0$ MeV). Note $e_{2p1/2}>\lambda$ in both cases. Curves in Fig.~5~(a) corresponding to these cases are shown in Fig.~8. The $e_R$-$\Gamma$ relation of the single-particle potential resonance is also shown as a reference. \begin{figure}[t] \begin{center} \includegraphics[width=75mm,angle=0]{fig8.eps} \end{center} \caption{The $e_{R}$-$\Gamma$ relation of the $2p_{1/2}$ quasi-particle resonance in the case of particle-like single-particle energies $e_{{\rm sp}}=-0.056$ MeV ($\Delta V_{0}=0.0$ MeV) and $e_{{\rm sp}}=0.251$ MeV ($\Delta V_{0}=2.0$ MeV) (dashed and dotted curves), obtained by varying the average pairing gap $\bar{\Delta}=0.0 - 3.0$ MeV. The $e_{R}$-$\Gamma$ relation of the $2p_{1/2}$ single-particle potential resonance is also shown (solid curve).} \label{sienewid_p12} \end{figure} As seen in Fig.~8 (and also in Fig.~5~(a)), an increase of the pairing potential increases monotonically both the resonance width $\Gamma$ and the resonance energy $e_{R}$, displaying a trend similar to that of the hole-like quasi-particle resonance. However, Fig.~5~(b) indicates also that an increase of the resonance energy at a fixed value of the pair potential leads to an increase of the resonance width in the particle-like case. We therefore suppose that two mechanisms are involved here.
One is a kinematical effect: due to the increase of the resonance energy, the penetrability of the centrifugal barrier increases, and this consequently leads to an increase of $\Gamma$. The other is a direct pairing effect, originating from the mixing between the particle- and hole-components caused by the pair potential. In order to extract the latter mixing effect, we compare these three curves at the same resonance energy. As an example, we make a comparison at $e_{R}=0.45$ MeV. We then find that the resonance width for $\bar{\Delta}=1.634$ MeV is narrower than that for $\bar{\Delta}=0.0$ MeV and the width for $\bar{\Delta}=1.897$ MeV is the smallest among the three cases. The resonance widths for these three cases are listed in Table~2, together with other examples compared at $e_{R}=0.300$ and $0.375$ MeV. It shows that the pairing correlation has an effect to {\it reduce} the resonance width if the comparison is made at the same resonance energy. \begin{table}[t] \centering \begin{tabular}{cccccccccccc} \hline $e_{R}$ [MeV] & \multicolumn{3}{c}{0.300} && \multicolumn{3}{c}{0.375} && \multicolumn{3}{c}{0.450}\\ \hline $\bar{\Delta}$ [MeV] & 0.0 & 0.728 & 1.477 && 0.0 & 1.246 & 1.688 && 0.0 & 1.634 & 1.897 \\ $\Gamma$ [MeV] & 0.387 & 0.361 & 0.244 && 0.582 & 0.500 & 0.338 && 0.854 & 0.652 & 0.453 \\ $e_{\mathrm{sp}}$ [MeV] & 0.300 & 0.251 & -0.056 && 0.375 & 0.251 & -0.056 && 0.450 & 0.250 & -0.056 \\ \hline \end{tabular} \caption{Resonance width $\Gamma$ of the $2p_{1/2}$ quasi-particle and single-particle resonances which have $e_{R}=0.300, 0.375$ and 0.450 MeV for three different values of $\bar{\Delta}$. The single-particle resonance energy (or bound single-particle energy) $e_{\mathrm{sp}}$ is also listed.} \label{enewid_comp} \end{table} To examine the mechanism of the reduced resonance width, we look into the wave functions of the three resonances with $e_{R}=0.450$ MeV. Figure~9 shows the probability distribution of the resonant quasi-particle states with $e_{R}=0.450$ MeV. In the case of $\bar{\Delta}=0.0$ MeV, the hole-component $v(r)$ vanishes and $u(r)$ coincides with the single-particle wave function of the $2p_{1/2}$ potential resonance. For finite values of $\bar{\Delta}$, and with increasing $\bar{\Delta}$, the probability $|u(r)|^{2}+|v(r)|^{2}$ within the surface of the nucleus ($r\lesssim R$) becomes larger. This is consistent with our finding that the resonance width becomes narrower for a larger pair potential. In particular, it is seen that the increase of the probability inside the nucleus originates mainly from the increase of the hole-component $v(r)$.
\begin{figure}[t] \centering \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig9a.eps} \end{center} \end{minipage} \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig9b.eps} \end{center} \end{minipage} \begin{minipage}{0.3\hsize} \begin{center} \includegraphics[width=50mm,angle=0]{fig9c.eps} \end{center} \end{minipage} \caption{Probability distribution of the $2p_{1/2}$ resonances with the common resonance energy $e_{R}=0.45$ MeV, but for different pairing strengths: (a) $\bar{\Delta}=0.0$ MeV, (b) $\bar{\Delta}=1.634$ MeV and (c) $\bar{\Delta}=1.897$ MeV.} \end{figure} \begin{table}[t] \centering \begin{tabular}{cccccccccccc} \hline $e_{R}$ [MeV] & \multicolumn{3}{c}{0.300} && \multicolumn{3}{c}{0.375} && \multicolumn{3}{c}{0.450}\\ \hline $\bar{\Delta}$ [MeV] & 0.0 & 0.728 & 1.477 && 0.0 & 1.246 & 1.688 && 0.0 & 1.634 & 1.897 \\ $\bar{v}^{2}/\bar{u}^{2}$ & 0.0 & 0.069 & 0.891 && 0.0 & 0.187 & 1.003 && 0.0 & 0.297 & 1.107 \\ $v^{2}_{\mathrm{BCS}}/u^{2}_{\mathrm{BCS}}$ & 0.0 & 0.045 & 0.456 && 0.0 & 0.107 & 0.503 && 0.0 & 0.161 & 0.543 \\ \hline \end{tabular} \caption{The ratio $\bar{v}^{2}/\bar{u}^{2}$ of the probability distributions of the hole- and particle-components of the quasi-particle wave functions of the $2p_{1/2}$ resonance, evaluated for different values of $\bar{\Delta}$, but for the common resonance energy $e_{R}$. The ratio $v^{2}_{\mathrm{BCS}}/u^{2}_{\mathrm{BCS}}$ based on the BCS formula is also listed. See text for details.} \label{prob_part_comp} \end{table} The increase of the hole-component $v(r)$ is a natural consequence of the pairing correlation. Here we recall the simple BCS formula for the $u$ and $v$ factors: the amplitudes of the particle- and hole-components are \begin{equation} v^{2}_{\mathrm{BCS}}=\frac{1}{2}\left( 1-\frac{e-\lambda}{E} \right),\quad u^{2}_{\mathrm{BCS}}=\frac{1}{2}\left( 1+\frac{e-\lambda}{E} \right), \label{bcs} \end{equation} respectively, with the quasi-particle energy $E=\sqrt{(e-\lambda)^{2}+\Delta^{2}}$. The hole-probability $v^{2}_{\mathrm{BCS}}$, which vanishes for $\Delta =0$, increases with increasing $\Delta$ since the pair potential causes the mixing between the particle- and hole-components. We consider that a similar mixing mechanism takes place in the present case. We show in Table~4 the ratio $\bar{v}^{2}/\bar{u}^{2}$ of the hole- and particle-components obtained from the HFB calculation, together with $v^{2}_{\mathrm{BCS}}/u^{2}_{\mathrm{BCS}}$ evaluated by using the BCS formula (\ref{bcs}). Here the quasi-particle energy $E$ is related to the resonance energy $e_{R}$ as $E=|\lambda|+e_{R}$. It is seen that the increasing trend of $\bar{v}^{2}/\bar{u}^{2}$ is consistent with that of the BCS formula, except for a difference by a factor of $\sim0.5$. The consistency is also seen in the examples at the other resonance energies. The above observation leads to the following interpretation. The amplitude $v(r)$ of the hole-component increases due to the mixing of the hole- and particle-components via the pair potential. Since the hole-component $v(r)$ is localized inside and around the nuclear surface, the increase of $v(r)$ leads to the increase of the probability distribution $|u(r)|^{2}+|v(r)|^{2}$ inside the nuclear radius $r\lesssim R$. This brings about the decrease of the resonance width. As a secondary mechanism, we find that the particle-component $u(r)$ inside and around the surface increases with $\bar{\Delta}$. This also contributes to the increase of $|u(r)|^{2}+|v(r)|^{2}$.
We leave the analysis of this mechanism for a forthcoming paper since this contribution is small compared with that from the hole-component. \section{Conclusion} The quasi-particle resonance is predicted in Bogoliubov's quasi-particle theory as an unbound single-particle mode of excitation caused by the pair correlation in nuclei. Expecting a strong influence of the pair correlation, we have studied in the present paper the properties of the quasi-particle resonance emerging in nuclei near the neutron drip-line. We focused on the resonance in the $p$-wave neutron with low kinetic energy in the ${}^{46}$Si + n system, and analyzed in detail how the pair correlation controls the width of the quasi-particle resonance. By solving numerically the Hartree-Fock-Bogoliubov equation in the coordinate space to obtain the quasi-particle wave function satisfying the scattering boundary condition, we calculated the phase shift of the neutron elastic scattering and then extracted the resonance energy and the resonance width. Analyses were performed systematically for various strengths of the average pairing gap, and for different situations concerning whether the quasi-particle state is particle-like or hole-like, i.e. whether the single-particle orbit being the origin of the resonance is located above or below the Fermi energy. We have shown that the pairing effect on the width of the particle-like quasi-particle resonance is very different from that of the hole-like quasi-particle resonance, for which a perturbative treatment~\cite{Belyaev1987,Bulgac1980} of the pair potential is known. A peculiar feature of the particle-like quasi-particle resonance is that the resonance width for a strong pairing is smaller than that for a weaker pairing if the comparison is made at the same resonance energy: the pairing correlation acts to {\it reduce} the resonance width. This is opposite to the pairing effect on the width of the hole-like quasi-particle resonance. In the hole-like case, the pair potential causes a coupling of the hole state to the scattering neutron states, leading to a decay of the hole state. In the particle-like case, in contrast, the pair potential causes the scattering state, represented by the particle-component $u(r)$ of the quasi-particle wave function, to mix with the hole-component $v(r)$, which is however confined inside and around the nuclear surface. Therefore, as the strength of the pair potential increases, the probability of the quasi-particle state inside the nucleus increases, and hence the width (decay probability) decreases. Concerning the hole-like quasi-particle resonances, we have examined the applicability of the perturbative evaluation~\cite{Belyaev1987,Bulgac1980} of the resonance width. It is found that the perturbation can be applied not only to the quasi-particle resonances associated with a deeply bound hole state, as known previously, but also to hole-like quasi-particle resonances whose corresponding hole energy is close to the Fermi energy $\lambda$. More precisely, the applicability condition is evaluated to be $e_{{\rm sp}} \lesssim \lambda - 0.5\bar{\Delta}$. \begin{acknowledgments} We thank Kenichi Yoshida for useful discussions. This work is supported by a Grant-in-Aid for Research Fellowships of the Japan Society for the Promotion of Science (JSPS) for Young Scientists. It is also supported by Grants-in-Aid for Scientific Research from the Japan Society for the Promotion of Science, No. 23540294, No. 24105008 and No. 26400268. \end{acknowledgments}
\section{Introduction}\label{Introduction} In homological algebra the projective and injective modules play a central role. The analogues in Gorenstein homological algebra are the Gorenstein projective and Gorenstein injective modules. These were defined by Auslander and Bridger in \cite{AB69} for a two-sided Noetherian ring, and were later extended to a general ring in \cite{EJ95}. Nowadays, the field of Gorenstein homological algebra has turned into a well-developed subject and an active area of research, see \cite{EJ11,EJ11a}. Some examples of other papers are \cite{AM02,Bel05,Bel11,BK08,BR07,BM07,EEG08,Hol04,J07,JZ00}. It has also found applications in other areas, see for example \cite{DSS17}. In particular, the Gorenstein projective modules are used when categorifying cluster algebras \cite{JKS16,NCha17,Pre17a}, and being able to describe them is therefore important. Let $k$ be a commutative ring, let ${\mathcal B}$ be a $k$-linear abelian category with enough projectives, and let ${\mathcal C}$ be a small $k$-linear category. Furthermore, let ${\mathcal B}^{{\mathcal C}}$ denote the category of $k$-linear functors from ${\mathcal C}$ to ${\mathcal B}$. \begin{Example}\label{Example:I1} Let ${\mathcal C}=k\mathbb{A}_2$ where $k\mathbb{A}_2$ is the $k$-linearization of the category $\bullet \to \bullet$. The category ${\mathcal B}^{k\mathbb{A}_2}$ can then be identified with the morphism category $\operatorname{Mor}({\mathcal B})$ of ${\mathcal B}$. Since $\operatorname{Mor}({\mathcal B})$ is abelian and has enough projectives, it also has Gorenstein projective objects. By Corollary 3.6 in \cite{JK11}, a morphism $B_1\xrightarrow{f}B_2$ is Gorenstein projective in $\operatorname{Mor}({\mathcal B})$ if and only if $f$ is a monomorphism and $\operatorname{Coker} f$, $B_1$, and $B_2$ are Gorenstein projective in ${\mathcal B}$. Since Gorenstein projective objects are closed under kernels of epimorphisms, this is equivalent to only requiring $\operatorname{Coker} f$ and $B_2$ to be Gorenstein projective. \end{Example} Motivated by this example, one can hope to describe the Gorenstein projective objects in ${\mathcal B}^{{\mathcal C}}$ more generally. Several authors \cite{EEG09,EHS13,HLXZ17, LZ13,LZ17,She16} have studied this problem. However, their descriptions only hold in special cases. In \cite{HLXZ17,LZ13,LZ17,She16} they assume $k$ is a field and ${\mathcal C}$ is either $kQ$ where $Q$ is a finite acyclic quiver, $kQ/I$ where $I$ is generated by monomial relations, or a finite-dimensional Iwanaga-Gorenstein algebra, while in \cite{EEG09,EHS13} they assume $k=\mathbb{Z}$ and ${\mathcal C}=\mathbb{Z} Q$ for a left rooted quiver $Q$. The latter results have motivated Holm and J\o rgensen to give a description of cotorsion pairs in ${\mathcal B}^{\mathbb{Z} Q}$ from cotorsion pairs in ${\mathcal B}$, see \cite{HJ16}. We give a more systematic description of the Gorenstein projective objects in ${\mathcal B}^{{\mathcal C}}$, which works for any commutative base ring $k$. Since $({\mathcal B}^{{\mathcal C}})\ensuremath{^{\mathrm{op}}}=({\mathcal B}\ensuremath{^{\mathrm{op}}})^{{\mathcal C}\ensuremath{^{\mathrm{op}}}}$, the dual results for Gorenstein injective objects are obtained by considering the opposite category. We leave the explicit statements of these results to the reader. The first step is to give a suitable generalization of what it means for $f$ to be a monomorphism in Example \ref{Example:I1}.
For this we need to assume that ${\mathcal C}$ is a locally bounded and Hom-finite category, see Definition \ref{Definition:12,5}. The evaluation functor \[ i^*\colon {\mathcal B}^{{\mathcal C}} \to \prod_{c\in {\mathcal C}}{\mathcal B} \quad F\to (F(c))_{c\in{\mathcal C}} \] then has a left adjoint $i_!\colon \prod_{c\in {\mathcal C}}{\mathcal B}\to {\mathcal B}^{{\mathcal C}}$. In \cite{Kva17} it was shown that there exists a \emphbf{Nakayama functor} $\nu\colon {\mathcal B}^{{\mathcal C}}\to {\mathcal B}^{{\mathcal C}}$ \emphbf{relative to} $i_!\dashv i^*$, see Definition \ref{Nakayama functor for adjoint pair}. This means that the following holds: \begin{enumerate} \item $\nu$ has a right adjoint $\nu^-$; \item The composite $\nu\circ i_!$ is right adjoint to $i^*$; \item The unit $\lambda$ of the adjunction $\nu\dashv \nu^-$ induces an isomorphism \[ \lambda_{i_!((B_c)_{c\in {\mathcal C}})}\colon i_!((B_c)_{c\in {\mathcal C}}) \to \nu^-\nu i_!((B_c)_{c\in {\mathcal C}}) \] for all objects $(B_c)_{c\in {\mathcal C}}\in \prod_{c\in {\mathcal C}}{\mathcal B}$. \end{enumerate} Explicitly, the Nakayama functor is given by the weighted colimit $\nu(F)= \operatorname{Hom}_k({\mathcal C},k)\otimes_{{\mathcal C}}F$, and in Example \ref{Example:I1} it is just the cokernel functor $\nu(B_1\xrightarrow{f}B_2)= B_2\to \operatorname{Coker} f$. We give another example to illustrate this definition. \begin{Example}[Example 3.2.6 in \cite{Kva17}]\label{Example introduction} Let $k$ be a commutative ring, let $\Lambda_1$ be a $k$-algebra which is finitely generated projective as a $k$-module, and let $\Lambda_2$ be a $k$-algebra. If we consider $\Lambda_1$ as a $k$-linear category with one object and with endomorphism ring $\Lambda_1$, we get the identification \[ (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod} = (\Lambda_2\text{-}\operatorname{Mod})^{\Lambda_1}. \] In particular, we have an adjoint pair $i_!\dashv i^*$ on $(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod}$ and a Nakayama functor $\nu$ relative to $i_!\dashv i^*$. Explicitly, \begin{align*} & i^*\colon (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod}\to \Lambda_2\text{-}\operatorname{Mod} \quad i^*(M)={}_{\Lambda_2}M \\ & i_!\colon \Lambda_2\text{-}\operatorname{Mod}\to (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod} \quad i_!(M)=\Lambda_1\otimes_k M \\ & \nu\colon (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod}\to (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod} \quad \nu(M)=\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1}M \end{align*} Note that if $k$ is a field and $\Lambda_2=k$, then we just obtain the classical Nakayama functor for a finite-dimensional algebra. \end{Example} We can now apply the machinery developed in \cite{Kva17}. In particular, we can define the category $\relGproj{P}{{\mathcal B}^{{\mathcal C}}}$ of Gorenstein $P$-projective objects where $P=i_!\circ i^*$. Explicitly, $A\in \relGproj{P}{{\mathcal B}^{{\mathcal C}}}$ if and only if \begin{enumerate} \item The $i$th left derived functor $L_i\nu(A)$ is $0$ for all $i>0$; \item The $i$th right derived functor $R^i\nu^-(\nu(A))$ is $0$ for all $i>0$; \item the unit $\lambda_A\colon A\to \nu^-\nu(A)$ of the adjunction $\nu\dashv \nu^-$ is an isomorphism on $A$. \end{enumerate} See Definition \ref{Definition:7} and Theorem \ref{Theorem:2.5}. 
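The following two remarks are not needed in what follows and are recorded only for orientation; they are straightforward to check from the definitions above. First, in Example \ref{Example:I1} the right adjoint $\nu^-$ is the kernel functor $\nu^-(C_1\xrightarrow{g}C_2)= \operatorname{Ker} g\to C_1$, so the unit $\lambda_A$ appearing in the last condition is the canonical morphism $(B_1\xrightarrow{f}B_2)\to (\operatorname{Im} f\to B_2)$ induced by $f$, which is invertible precisely when $f$ is a monomorphism. Second, in the setting of Example \ref{Example introduction} with $k$ a field, the left derived functors appearing in the first condition are the ordinary torsion groups $L_i\nu(M)\cong \operatorname{Tor}^{\Lambda_1}_i(\operatorname{Hom}_k(\Lambda_1,k),M)$.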
In Example \ref{Example introduction} with $k$ a field and $\Lambda_2=k$, the objects in $\relGproj{P}{{\mathcal B}^{{\mathcal C}}}$ are precisely the ordinary Gorenstein projective modules. Also, it turns out that for ${\mathcal C}=k\mathbb{A}_2$ the Gorenstein $P$-projective objects are precisely the monomorphisms. More generally, for ${\mathcal C}=kQ$ where $Q$ is a locally bounded acyclic quiver, the Gorenstein $P$-projective objects are precisely the monic representations, see Definition \ref{Definition:13} and Proposition \ref{Proposition:9} part \ref{Proposition:9,2}. The next step is to generalize the requirement in Example \ref{Example:I1} that $B_2$ and $\operatorname{Coker} f$ are Gorenstein projective. Since $i^* \nu(B_1\xrightarrow{f} B_2) = (B_2,\operatorname{Coker} f)$, a natural guess would be that the image of $i^*\circ \nu$ must be Gorenstein projective, i.e. that we should consider the category \[ \{F\in {\mathcal B}^{{\mathcal C}}\mid F\in \relGproj{P}{{\mathcal B}^{{\mathcal C}}} \text{ and } i^* \nu(F)\in \prod_{c\in {\mathcal C}}\cG\cP({\mathcal B})\} \] which we denote by $\cG\cP(\relGproj{P}{{\mathcal B}^{{\mathcal C}}})$. We obtain the following result for this subcategory. \begin{Theorem}[Theorem \ref{Theorem:3}]\label{Theorem:I2} Assume ${\mathcal B}$ is a $k$-linear abelian category with enough projectives and ${\mathcal C}$ is a small, $k$-linear, locally bounded, and Hom-finite category. Then the subcategory $\cG\cP(\relGproj{P}{{\mathcal B}^{{\mathcal C}}})$ is an admissible subcategory of $\cG\cP({\mathcal B}^{{\mathcal C}})$. \end{Theorem} We refer to Definition \ref{Definition:1.5} for our definition of admissible subcategory. It implies that \[ \cG\cP(\relGproj{P}{{\mathcal B}^{{\mathcal C}}})\subset \cG\cP({\mathcal B}^{{\mathcal C}}) \] where $\cG\cP({\mathcal B}^{{\mathcal C}})$ denotes the category of Gorenstein projective objects in ${\mathcal B}^{{\mathcal C}}$. It also implies that $\cG\cP(\relGproj{P}{{\mathcal B}^{{\mathcal C}}})$ is a Frobenius exact subcategory of ${\mathcal B}^{{\mathcal C}}$. In fact, Theorem \ref{Theorem:3} holds more generally for any admissible subcategory of $\prod_{c\in {\mathcal C}}\cG\cP({\mathcal B})$ and any $P$-admissible subcategory of $\relGproj{P}{{\mathcal B}^{{\mathcal C}}}$, see Definition \ref{Definition:10}. This gives examples of other Frobenius exact categories, see Examples \ref{Example:7} and \ref{Example:8}. It remains to determine when $\cG\cP(\relGproj{P}{{\mathcal B}^{{\mathcal C}}})=\cG\cP({\mathcal B}^{{\mathcal C}})$. In general, this is not true, see Example \ref{Example:11}. However, under some mild conditions the equality holds. \begin{Theorem}[Theorem \ref{Theorem:4}]\label{Theorem:I3} Assume ${\mathcal B}$ is a $k$-linear abelian category with enough projectives and ${\mathcal C}$ is a small, $k$-linear, locally bounded and Hom-finite category. If either of the following conditions holds, then $\cG\cP(\relGproj{P}{{\mathcal B}^{{\mathcal C}}})= \cG\cP({\mathcal B}^{{\mathcal C}})$: \begin{enumerate} \item\label{Theorem:I3,1} For any long exact sequence in ${\mathcal B}^{{\mathcal C}}$ \[ 0\to K\to Q_0\to Q_1\to \cdots \] with $Q_i$ projective for $i\geq 0$, we have $K\in \relGproj{P}{{\mathcal B}^{{\mathcal C}}}$; \item\label{Theorem:I3,2} If $B\in {\mathcal B}$ satisfies $\operatorname{Ext}^1_{{\mathcal B}}(B,B')=0$ for all $B'$ of finite projective dimension, then $B\in \cG\cP({\mathcal B})$.
\end{enumerate} \end{Theorem} Condition \ref{Theorem:I3,1} holds when $P$ is Iwanaga-Gorenstein, see Definition \ref{Definition:9} and Corollary \ref{Gorenstein adjoint pairs lifts Gorenstein projectives}. In this case \[ A\in \relGproj{P}{{\mathcal B}^{{\mathcal C}}} \quad \quad \text{if and only if} \quad \quad L_i\nu(A)=0 \text{ for all }i>0 \] and $\relGproj{P}{{\mathcal B}^{{\mathcal C}}}$ is therefore particularly easy to compute. \begin{Example}\label{Example:I1 computation} Consider ${\mathcal C}=k\mathbb{A}_2$ as in Example \ref{Example:I1}. In this case, $P$ is Iwanaga-Gorenstein of dimension $1$. This implies that $L_i\nu(A)=0$ for $i>1$, and hence $A\in \relGproj{P}{{\mathcal B}^{k\mathbb{A}_2}}$ if and only if $L_1\nu(A)=0$. If we let $A=(B_1\xrightarrow{f}B_2)$, then a simple computation shows that \[ L_1\nu(B_1\xrightarrow{f}B_2)= 0\to \ker f. \] In particular, $(B_1\xrightarrow{f}B_2)\in \relGproj{P}{{\mathcal B}^{k\mathbb{A}_2}}$ if and only if $f$ is a monomorphism. Since $\nu(B_1\xrightarrow{f}B_2)=B_2\to \operatorname{Coker} f$, we recover the description in Example \ref{Example:I1}. \end{Example} More generally, for any locally bounded quiver, $P$ is Iwanaga-Gorenstein of dimension less than or equal to $1$. Using this, we recover the description in \cite{EHS13} and \cite{LZ13}, see Proposition \ref{Proposition:10}. We also illustrate how to compute the Gorenstein projectives for quivers with relations in Examples \ref{Example:12}, \ref{Example:13} and \ref{Example:14}. Finally, note that Condition \ref{Theorem:I3,2} of Theorem \ref{Theorem:I3} holds when $\operatorname{G.pdim} B<\infty$ for all $B\in {\mathcal B}$, see Lemma \ref{Proj Gorenstein}. In particular, $\cG\cP(\relGproj{P}{{\mathcal B}^{{\mathcal C}}})= \cG\cP({\mathcal B}^{{\mathcal C}})$ if ${\mathcal B}=\operatorname{mod}\text{-}\Lambda$ or $\operatorname{Mod}\text{-}\Lambda$ for an Iwanaga-Gorenstein algebra $\Lambda$. Applying Theorem \ref{Theorem:I3} to Example \ref{Example introduction} with $k$ a field, we obtain the following result. \begin{Theorem}[Example \ref{Example:10}]\label{Theorem:I4} Let $k$ be a field, let $\Lambda_1$ be a finite-dimensional $k$-algebra, and let $\Lambda_2$ be a $k$-algebra. If $\Omega^{\infty}(\Lambda_1\text{-}\operatorname{Mod})\subset \cG\cP (\Lambda_1\text{-}\operatorname{Mod})$ or \begin{multline*}\cG\cP(\Lambda_2\text{-}\operatorname{Mod})=\{M\in \Lambda_2\text{-}\operatorname{Mod}\mid \operatorname{Ext}^1_{\Lambda_2}(M,M')=0 \\ \text{ for all } M' \text{ of finite projective dimension} \} \end{multline*} then \begin{align*} \cG\cP((\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod}) = & \{M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod} \mid \text{ } _{\Lambda_1}M\in \cG\cP(\Lambda_1\text{-}\operatorname{Mod}) \\ &\text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{Mod})\}. \end{align*} Hence, this equality holds in particular if $\Lambda_1$ or $\Lambda_2$ is Iwanaga-Gorenstein. \end{Theorem} We have an analogous statement for finitely presented modules, see Example \ref{Example:9}. Finally, using the explicit description of the Gorenstein projective objects in Theorem \ref{Theorem:I3}, we also obtain a partial generalization of \cite[Theorem 4.6]{DSS17}, see Theorem \ref{Theorem:6} and Remark \ref{Remark:3}. The paper is organized as follows.
In Section \ref{Section Preliminaries} we recall the notion of Nakayama functors relative to adjoint pairs and the necessary notions in Gorenstein homological algebra. We introduce $P$-admissible subcategories of $\relGproj{P}{{\mathcal A}}$ in Subsection \ref{Admissible subcategories}. In Subsection \ref{Lifting admissible subcategories} we show that adjoint pairs with a Nakayama functor lift admissible subcategories of Gorenstein projectives, see Theorem \ref{Theorem:3}. In Subsection \ref{Lifting Gorenstein projectives} we use Theorem \ref{Theorem:3} to lift Gorenstein projective objects, and we provide sufficient criteria for when all Gorenstein projective objects are obtained, see Theorem \ref{Theorem:4}. In Section \ref{Application to functor categories} we study the functor category ${\mathcal B}^{{\mathcal C}}$ in detail. In Subsection \ref{Monic representations of a quiver} we use Theorem \ref{Theorem:4} to recover the known description of $\cG\cP({\mathcal B}^{kQ})$ for $Q$ a finite acyclic quiver, and in Subsection \ref{More examples} we compute $\cG\cP({\mathcal B}^{{\mathcal C}})$ for other examples of ${\mathcal C}$. \subsection{Conventions}\label{Coventions} For a ring $\Lambda$ we let $\Lambda\text{-}\operatorname{Mod}$ ($\Lambda\text{-}\operatorname{mod}$) denote the category of (finitely presented) left $\Lambda$-modules. We fix $k$ to be a commutative ring. All categories are assumed to be preadditive and all functors are assumed to be additive. ${\mathcal A}$ and ${\mathcal B}$ always denote abelian categories, and ${\mathcal D}$ and ${\mathcal E}$ always denote additive categories. We let $\operatorname{Proj}({\mathcal A})$ denote the category of projective objects in ${\mathcal A}$. The projective dimension of an object $A\in {\mathcal A}$ is denoted by $\operatorname{pdim} A$. If ${\mathcal B}$ and ${\mathcal C}$ are $k$-linear categories, then ${\mathcal B}^{{\mathcal C}}$ denotes the category of $k$-linear functors from ${\mathcal C}$ to ${\mathcal B}$. We write $F\dashv G\colon {\mathcal D}\to {\mathcal E}$ to denote that we have a functor $F\colon {\mathcal D}\to {\mathcal E}$ with right adjoint $G\colon {\mathcal E}\to {\mathcal D}$. In this case we let $\unit{F}{G}$ and $\counit{F}{G}$ denote the unit and counit of the adjunction, respectively. Furthermore, $\adjiso{F}{G}\colon {\mathcal E}(F(D),E)\to {\mathcal D}(D,G(E))$ denotes the adjunction isomorphism. If $\sigma\colon F_1\to F_2$ is a natural transformation, then $\sigma_G\colon F_1\circ G\to F_2\circ G$ denotes the natural transformation obtained by precomposing with $G$. \section{Preliminaries}\label{Section Preliminaries} \subsection{Gorenstein projective objects}\label{Gorenstein projective objects} Let ${\mathcal A}$ be an abelian category. We say that ${\mathcal A}$ has \emphbf{enough projectives} if for any object $A\in {\mathcal A}$ there exists an object $Q\in \operatorname{Proj}({\mathcal A})$ and an epimorphism $Q\to A$. \begin{Definition}\label{Definition:1} Assume ${\mathcal A}$ has enough projectives: \begin{enumerate} \item An acyclic complex of projective objects in ${\mathcal A}$ \[ Q_{\bullet} = \cdots \xrightarrow{f_2} Q_1\xrightarrow{f_1} Q_0 \xrightarrow{f_{0}} \cdots \] is called \emphbf{totally acyclic} if the complex \[ {\mathcal A}(Q_{\bullet},Q) = \cdots \xrightarrow{-\circ f_0} {\mathcal A}(Q_0,Q)\xrightarrow{-\circ f_1} {\mathcal A}(Q_1,Q)\xrightarrow{-\circ f_2} \cdots \] is acyclic for all $Q\in \operatorname{Proj} ({\mathcal A})$.
\item An object $A\in {\mathcal A}$ is called \emphbf{Gorenstein projective} if there exists a totally acyclic complex $Q_{\bullet}$ with $A=Z_0(Q_{\bullet})=\operatorname{Ker} f_0$. We denote the full subcategory of Gorenstein projective objects in ${\mathcal A}$ by $\cG\cP({\mathcal A})$. \end{enumerate} \end{Definition} \begin{Lemma}\label{Lemma:0.1} If ${\mathcal A}$ has enough projectives, then the subcategory $\cG\cP ({\mathcal A})$ is closed under extensions and direct summands. \end{Lemma} \begin{proof} The fact that $\cG\cP({\mathcal A})$ is closed under direct summands follows from Theorem 1.4(2) in \cite{Hua13}. The fact that $\cG\cP({\mathcal A})$ is closed under extensions follows from Proposition 2.13 (1) in \cite{Bel00}. \end{proof} \begin{Definition}\label{Definition:1.5} Assume ${\mathcal A}$ has enough projectives. A full subcategory ${\mathcal F}\subset {\mathcal A}$ is called an \emphbf{admissible subcategory of} $\cG\cP({\mathcal A})$ if it is closed under extensions and direct summands, and satisfies the following properties: \begin{enumerate} \item\label{Definition:1.5,1} ${\mathcal F}$ contains the projective objects in ${\mathcal A}$; \item\label{Definition:1.5,2} $\operatorname{Ext}^1(A,Q)=0$ for all $A\in {\mathcal F}$ and $Q\in \operatorname{Proj}({\mathcal A})$; \item \label{Definition:1.5,3} For all $A\in {\mathcal F}$ there exists an exact sequence $0\to A'\to Q\to A\to 0$ with $A'\in {\mathcal F}$ and $Q\in \operatorname{Proj}({\mathcal A})$; \item \label{Definition:1.5,4} For all $A\in {\mathcal F}$ there exists an exact sequence $0\to A\to Q\to A'\to 0$ with $A'\in {\mathcal F}$ and $Q\in \operatorname{Proj}({\mathcal A})$. \end{enumerate} \end{Definition} Assume ${\mathcal F}$ is an admissible subcategory of $\cG\cP({\mathcal A})$. Since ${\mathcal F}$ is closed under extensions, it inherits an exact structure from ${\mathcal A}$ (see \cite{Bue10} for the theory of exact categories). In fact, under this exact structure ${\mathcal F}$ becomes a Frobenius exact category, and the projective objects in ${\mathcal F}$ are precisely the projective objects in ${\mathcal A}$. The following result is immediate from the definition. \begin{Proposition}\label{Proposition:2} Assume ${\mathcal A}$ has enough projectives. The following holds: \begin{enumerate} \item\label{Proposition:2,1} $\cG\cP({\mathcal A})$ is an admissible subcategory of $\cG\cP({\mathcal A})$; \item $\operatorname{Proj}({\mathcal A})$ is an admissible subcategory of $\cG\cP({\mathcal A})$; \item\label{Proposition:2,2} Assume ${\mathcal F}$ is an admissible subcategory of $\cG\cP({\mathcal A})$. Then ${\mathcal F}\subset \cG\cP({\mathcal A})$. \end{enumerate} \end{Proposition} Recall that a full subcategory ${\mathcal X} \subset{\mathcal A}$ is called \emphbf{generating} if for any $A\in {\mathcal A}$ there exists an object $X\in {\mathcal X}$ and an epimorphism $X\to A$. A full subcategory ${\mathcal X} \subset{\mathcal A}$ is called \emphbf{resolving} if it is generating and closed under direct summands, extensions, and kernels of epimorphisms. Here we follow the same conventions as in \cite{Sto14}. Note that a resolving subcategory contains all the projective objects in ${\mathcal A}$. \begin{Lemma}\label{Lemma:1} Assume ${\mathcal A}$ has enough projectives, and let ${\mathcal F}$ be an admissible subcategory of $\cG\cP({\mathcal A})$. Then ${\mathcal F}$ is a resolving subcategory of ${\mathcal A}$. \end{Lemma} \begin{proof} We only need to check that ${\mathcal F}$ is closed under kernels of epimorphisms.
Let $0\to A_3\xrightarrow{f} A_2\xrightarrow{g} A_1\to 0$ be an exact sequence in ${\mathcal A}$ with $A_2\in {\mathcal F}$ and $A_1\in {\mathcal F}$. Choose an exact sequence $0\to A\xrightarrow{i} Q\xrightarrow{p} A_1\to 0$ in ${\mathcal A}$ with $Q$ projective and $A\in {\mathcal F}$. Since $Q$ is projective, there exists a morphism $s\colon Q\to A_2$ satisfying $g\circ s = p$. This gives a commutative diagram \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=2.5em,column sep=5.0em,text height=1.5ex, text depth=0.25ex] {0 & A & Q & A_1 & 0\\ 0 & A_3 & A_2 & A_1 & 0 \\}; \path[->] (m-1-1) edge node[auto] {$$} (m-1-2) (m-1-2) edge node[auto] {$i$} (m-1-3) (m-1-3) edge node[auto] {$p$} (m-1-4) (m-1-4) edge node[auto] {$$} (m-1-5) (m-2-1) edge node[auto] {$$} (m-2-2) (m-2-2) edge node[auto] {$f$} (m-2-3) (m-2-3) edge node[auto] {$g$} (m-2-4) (m-2-4) edge node[auto] {$$} (m-2-5) (m-1-2) edge node[auto] {$$} (m-2-2) (m-1-3) edge node[auto] {$s$} (m-2-3) (m-1-4) edge node[auto] {$1_{A_1}$} (m-2-4); \end{tikzpicture} \] with exact rows, where the morphism $A\to A_3$ is induced from the commutativity of the right square. By Lemma 5.2 in \cite{Pop73} the left square is a pushforward and a pullback square, and hence we get an exact sequence \[ 0\to A\to A_3\oplus Q\to A_2\to 0. \] Since ${\mathcal F}$ is closed under extensions and direct summands, it follows that $A_3\in {\mathcal F}$. \end{proof} In particular, it follows that $\cG\cP({\mathcal A})$ is a resolving subcategory of ${\mathcal A}$. One can define the \emphbf{resolution dimension} $\dim_{{\mathcal X}}(A)$ of any object $A\in {\mathcal A}$ with respect to a resolving subcategory ${\mathcal X}$ of ${\mathcal A}$, see \cite{Sto14}. It is the smallest integer $n\geq 0$ such that there exists an exact sequence \[ 0\to X_n\to \cdots X_1\to X_0\to A\to 0 \] where $X_i\in {\mathcal X}$ for $0\leq i\leq n$. In this case, if \[ 0\to X'_n\to \cdots X'_1\to X'_0\to A\to 0 \] is another exact sequence with $X_i'\in {\mathcal X}$ for all $0\leq i\leq n-1$, then $X'_n\in {\mathcal X}$, see \cite[Proposition 2.3]{Sto14}. We write $\dim_{{\mathcal X}}(A)=\infty$ if there doesn't exist such an $n$. The \emphbf{global resolution dimension} $\dim_{{\mathcal X}}({\mathcal A})$ of ${\mathcal A}$ with respect to ${\mathcal X}$ is the supremum of $\dim_{{\mathcal X}}(A)$ over all $A\in {\mathcal A}$. Putting ${\mathcal X}=\cG\cP({\mathcal A})$ we get the \emphbf{Gorenstein projective dimension} \[ \operatorname{G.pdim} (A):= \dim_{\cG\cP({\mathcal A})}(A) \] and the \emphbf{global Gorenstein projective dimension} \[ \operatorname{gl.Gpdim} ({\mathcal A}):= \dim_{\cG\cP({\mathcal A})}({\mathcal A}). \] We need the following lemma later. \begin{Lemma}\label{Lemma:2} Let $A_2\xrightarrow{f}A_1$ be a morphism in ${\mathcal A}$ with $A_2\in \cG\cP({\mathcal A} )$. Assume \[ {\mathcal A}(A_1,Q)\xrightarrow{-\circ f} {\mathcal A}(A_2,Q) \] is an epimorphism for all projective objects $Q\in {\mathcal A}$. Then $f$ is a monomorphism. \end{Lemma} \begin{proof} Let $A_2\xrightarrow{i} Q$ be a monomorphism into a projective object $Q$. By assumption, there exists a morphism $h\colon A_1\to Q$ such that $i=h\circ f$. This implies that $f$ is a monomorphism, and we are done. 
\end{proof} \subsection{Derived functors}\label{Derived functors} For a functor $F\colon {\mathcal D}\to {\mathcal E}$, we let $\operatorname{im} F$ denote the full subcategory of ${\mathcal E}$ consisting of the objects $F(D)$ for $D\in {\mathcal D}$. \begin{Proposition}[Proposition 3.1.4 in \cite{Kva17}]\label{Right and left derived functors} Let ${\mathcal A}$ and ${\mathcal B}$ be abelian categories, and let $G\colon {\mathcal A}\to {\mathcal B}$ be a functor. \begin{enumerate} \item\label{Right and left derived functors:1} Assume $G$ is left exact, $L\dashv R\colon {\mathcal A}\to {\mathcal D}$ is an adjunction, and $\operatorname{im} R$ is a cogenerating subcategory of ${\mathcal A}$. If $R\circ L$ and $G\circ R\circ L$ are exact functors, then the $i$th right derived functor $R^iG$ of G exists for all $i>0$, and $R^iG(X)=0$ for all $i>0$ and $X\in \operatorname{im} R$; \item\label{Right and left derived functors:2} Assume $G$ is right exact, $L'\dashv R'\colon {\mathcal D}\to {\mathcal A}$ is an adjunction, and $\operatorname{im} L'$ is a generating subcategory of ${\mathcal A}$. If $L'\circ R'$ and $G\circ L'\circ R'$ are exact functors, then the $i$th left derived functor $L_iG$ of G exists for all $i>0$, and $L_iG(X)=0$ for all $i>0$ and $X\in \operatorname{im} L'$. \end{enumerate} \end{Proposition} We say that $R$ \emphbf{is adapted to} $G$ or $L'$ \emphbf{is adapted to} $G$ in these two cases, respectively. Note that $\operatorname{im} R$ is cogenerating if and only if the unit of the adjunction $L\dashv R$ is a monomorphism. By the dual of \cite[Theorem IV.3.1]{MLan98} this is equivalent to $L$ being faithful. Dually, $\operatorname{im} L'$ is generating if and only if the counit of $L'\dashv R'$ is an epimorphism, and by \cite[Theorem IV.3.1]{MLan98} this is equivalent to $R'$ being faithful. We need the following result. \begin{Lemma}\label{Lemma:5**} Let ${\mathcal A}$ and ${\mathcal B}$ be abelian categories, and let $\eta\colon G_1\to G_2$ and $\epsilon\colon G_2\to G_3$ be two natural transformations between functors ${\mathcal A}\to {\mathcal B}$. \begin{enumerate} \item\label{Lemma:5**:1} Assume $G_i$ is left exact for all $i$. Furthermore, assume there exists an adjunction $L\dashv R\colon {\mathcal A}\to {\mathcal D}$ such that $R$ is adapted to $G_i$ for all $i$. If the sequence \[ 0\to G_1\circ R\circ L \xrightarrow{\eta_{R\circ L}}G_2\circ R\circ L \xrightarrow{\epsilon_{R\circ L}}G_3\circ R\circ L \to 0 \] is exact, then there exists a long exact sequence \begin{multline*} 0\to G_1 \xrightarrow{\eta} G_2 \xrightarrow{\epsilon} G_3\to R^1G_1\to R^1G_2\to R^1G_3\to R^2G_1\to \cdots \end{multline*} \item\label{Lemma:5**:2} Assume $G_i$ is right exact for all $i$. Furthermore, assume there exists an adjunction $L'\dashv R'\colon {\mathcal D}\to {\mathcal A}$ such that $L'$ is adapted to $G_i$ for $1\leq i\leq 3$. If the sequence \[ 0\to G_1\circ L'\circ R' \xrightarrow{\eta_{L'\circ R'}}G_2\circ L'\circ R' \xrightarrow{\epsilon_{L'\circ R'}}G_3\circ L'\circ R' \to 0 \] is exact, then there exists a long exact sequence \begin{multline*} \cdots \to L_2G_3\to L_1G_1\to L_1G_2\to L_1G_3 \to G_1\xrightarrow{\eta}G_2\xrightarrow{\epsilon}G_3\to 0 \end{multline*} \end{enumerate} \end{Lemma} \begin{proof} We prove part \ref{Lemma:5**:2}; part \ref{Lemma:5**:1} follows dually. Let $\mathsf{S}$ be the induced comonad on ${\mathcal A}$ from the adjunction $L'\dashv R'$.
There is an obvious natural isomorphism $L_nG_i\cong H_n(-,G_i)$ and $G_i\cong H_0(-,G_i)$ where $H_i(-,G_i)$ is the comonad homology relative to $\mathsf{S}$ as defined in Section 1 in \cite{BB69}. The claim follows now from Section 3.2 in \cite{BB69}. \end{proof} \subsection{Nakayama functor} We need the notion of Nakayama functor relative to adjoint pairs which was introduced in \cite{Kva17}. \begin{Definition}\label{Nakayama functor for adjoint pair} Let $f^*\colon {\mathcal A}\to {\mathcal D}$ be a faithful functor with left adjoint $f_!\colon {\mathcal D}\to {\mathcal A}$. A \emphbf{Nakayama functor} relative to $f_!\dashv f^*$ is a functor $\nu\colon \mathcal{A}\to \mathcal{A}$ with a right adjoint $\nu^-$ satisfying: \begin{enumerate} \item $\nu\circ f_!$ is right adjoint to $f^*$; \item The unit of $\nu\dashv\nu^-$ induces an isomorphism $f_!\xrightarrow{\cong} \nu^-\circ \nu \circ f_!$ when precomposed with $f_!$. \end{enumerate} \end{Definition} We let $\lambda\colon 1_{{\mathcal A}}\to \nu^-\circ \nu$ and $\sigma\colon \nu\circ \nu^-\to 1_{{\mathcal A}}$ denote the unit and counit of the adjunction $\nu\dashv \nu^-$. We also fix the notation $f_*:=\nu\circ f_!$, $P:=f_!\circ f^*$ and $I:=f_*\circ f^*$. Note that we have adjunctions \[ f^*\circ \nu \dashv f_!\dashv f^*\dashv f_*\dashv f^*\circ \nu^-. \] We call summands of objects $P(A)$ for $P$-\emphbf{projective} and summands of objects $I(A)$ for $I$-\emphbf{injective}. By the triangle identities the $P$-projectives and $I$-injectives are precisely the summands of objects of the form $f_!(D)$ and $f_*(D)$ for $D\in {\mathcal D}$, respectively. Since $P$, $\nu\circ P=I$, and $\nu^-\circ I\cong P$ are exact, it follows from Proposition \ref{Right and left derived functors} that $f_!$ is adapted to $\nu$ and $f_*$ is adapted to $\nu^-$. In particular, the derived functors $L_i\nu$ and $R^i\nu^-$ exist for all $i>0$. \begin{Definition}[Definition 4.1.1 in \cite{Kva17}]\label{Definition:7} Assume $\nu$ is a Nakayama functor relative to $f_!\dashv f^*\colon {\mathcal D}\to {\mathcal A}$. An object $X\in {\mathcal A}$ is called \emphbf{Gorenstein} $P$-\emphbf{projective} if there exists an exact sequence \[ A_{\bullet}=\cdots \xrightarrow{f_{2}} A_{1}\xrightarrow{f_{1}} A_0\xrightarrow{f_0} A_{-1}\xrightarrow{f_{-1}} \cdots \] with $A_i\in {\mathcal A}$ being $P$-projective for all $i\in \mathbb{Z}$, such that the sequence \[ \nu(A_{\bullet})=\cdots \xrightarrow{\nu(f_{2})} \nu(A_{1})\xrightarrow{\nu(f_{1})} \nu(A_0)\xrightarrow{\nu(f_0)} \nu(A_{-1})\xrightarrow{\nu(f_{-1})} \cdots \] is exact, and with $Z_0(A_{\bullet})=\operatorname{Ker} f_0=X$. The subcategory of ${\mathcal A}$ consisting of all Gorenstein $P$-projective objects is denoted by $\relGproj{P}{{\mathcal A}}$. \end{Definition} \begin{Proposition}\label{Closure of Gorenstein P-projective} Assume $\nu$ is a Nakayama functor relative to the adjunction $f_!\dashv f^*\colon {\mathcal D}\to {\mathcal A}$. The following holds: \begin{enumerate} \item $\relGproj{P}{{\mathcal A}}$ is a resolving subcategory of ${\mathcal A}$; \item Assume $i\colon A_2\to A_1$ is a morphism such that $\nu(i)$ is a monomorphism and $A_1,A_2\in \relGproj{P}{{\mathcal A}}$. Then $i$ is a monomorphism and $\operatorname{Coker} i\in \relGproj{P}{{\mathcal A}}$. \end{enumerate} \end{Proposition} \begin{proof} This follows from \cite[Proposition 4.1.5]{Kva17} and \cite[Lemma 4.1.6]{Kva17}. 
\end{proof} \begin{Definition}[Definition 4.2.1 in \cite{Kva17}]\label{Definition:9} Assume $\nu$ is a Nakayama functor relative to $f_!\dashv f^*\colon {\mathcal D}\to {\mathcal A}$. We say that $P$ is \emphbf{Iwanaga-Gorenstein} if there exists an integer $n\geq 0$ such that $L_i\nu=0$ and $R^i\nu^-=0$ for all $i>n$. \end{Definition} \begin{Theorem}[Theorem 4.2.6 in \cite{Kva17}]\label{Theorem:2} Assume $\nu$ is a Nakayama functor relative to $f_!\dashv f^*\colon {\mathcal D}\to {\mathcal A}$, and that $P$ is Iwanaga-Gorenstein. Then the following numbers coincide: \begin{enumerate} \item\label{Theorem:2,1} $\dim_{\relGproj{P}{{\mathcal A}}}({\mathcal A})$; \item\label{Theorem:2,2} The smallest integer $r$ such that $L_i\nu=0$ for all $i>r$; \item\label{Theorem:2,3} The smallest integer $s$ such that $R^i\nu^-=0$ for all $i>s$. \end{enumerate} \end{Theorem} If this common number is $n$ we say that $P$ is $n$\emphbf{-Gorenstein}. We also say that $n$ is the Gorenstein dimension of $P$. The following theorem is useful for computing examples. \begin{Theorem}\label{Theorem:2.5} Assume $\nu$ is a Nakayama functor relative to $f_!\dashv f^*\colon {\mathcal D}\to {\mathcal A}$. The following holds: \begin{enumerate} \item\label{Theorem:2.5:1} $A\in \relGproj{P}{{\mathcal A}}$ if and only if \begin{enumerate} \item $L_i\nu(A)=0$ for all $i>0$; \item $R^i\nu^-(\nu(A))=0$ for all $i>0$; \item $\lambda_A\colon A\to \nu^-\nu(A)$ is an isomorphism. \end{enumerate} \item\label{Theorem:2.5:2} If $P$ is Iwanaga-Gorenstein, then \[ \relGproj{P}{{\mathcal A}}=\{A\in {\mathcal A} \mid L_i\nu(A)=0 \text{ for all }i>0\}. \] \end{enumerate} \end{Theorem} \begin{proof} This is \cite[Proposition 4.1.3]{Kva17} and \cite[Theorem 4.2.2]{Kva17}. \end{proof} \begin{Example}[Example 3.2.6 in \cite{Kva17}]\label{Example:3} Let $k$ be a commutative ring, and let $\Lambda_1$ and $\Lambda_2$ be $k$-algebras. Consider the adjoint pair $f_!\dashv f^*$ where $f^*$ is the restriction functor \[ f^*:= \operatorname{res}^{\Lambda_1\otimes_k \Lambda_2}_{\Lambda_2}\colon (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{Mod} \to \Lambda_2 \text{-}\operatorname{Mod} \] and $f_!:=\Lambda_1\otimes_k -\colon \Lambda_2 \text{-}\operatorname{Mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{Mod}$. If $\Lambda_1$ is finitely generated projective as a $k$-module, then the functor \[ \nu:=\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} -\colon (\Lambda_1\otimes_k \Lambda_2)\text{-} \operatorname{Mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-} \operatorname{Mod} \] is a Nakayama functor relative to $f_!\dashv f^*$. \end{Example} \begin{Example}\label{finitely presented} Assume $k$, $\Lambda_1$ and $\Lambda_2$ are as in Example \ref{Example:3}. If in addition $\Lambda_2$ is left coherent, then the categories $\Lambda_2 \text{-}\operatorname{mod}$ and $(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}$ of finitely presented left modules are abelian. In this case $f^*$, $f_!$ and $\nu$ restrict to functors \begin{align*} & f^*:=\operatorname{res}^{\Lambda_1\otimes_k \Lambda_2}_{\Lambda_2}\colon (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{mod} \to \Lambda_2 \text{-}\operatorname{mod} \\ & f_!:=\Lambda_1\otimes_k -\colon \Lambda_2 \text{-}\operatorname{mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{mod} \\ & \nu:=\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} -\colon (\Lambda_1\otimes_k \Lambda_2)\text{-} \operatorname{mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-} \operatorname{mod}
\end{align*} and $\nu$ is still a Nakayama functor relative to $f_!\dashv f^*$. \end{Example} \section{Lifting Frobenius exact subcategories}\label{Lifting Frobenius exact subcategories} In this section we fix abelian categories ${\mathcal A}$ and ${\mathcal B}$, a faithful functor $f^*\colon {\mathcal A}\to {\mathcal B}$ with left adjoint $f_!\colon {\mathcal B}\to {\mathcal A}$, and we assume $f_!\dashv f^*$ has a Nakayama functor $\nu\colon {\mathcal A}\to {\mathcal A}$. Our goal is to investigate when the subcategory $(f^*\circ \nu)^{-1}(\cG\cP({\mathcal B}))\cap \relGproj{P}{{\mathcal A}}$ is equal to $\cG\cP({\mathcal A})$ if ${\mathcal A}$ and ${\mathcal B}$ have enough projectives. In the first part we show that $(f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ is an admissible subcategory of $\cG\cP({\mathcal A})$ if ${\mathcal X}$ is a $P$-admissible subcategory of $\relGproj{P}{{\mathcal A}}$ and ${\mathcal F}$ is an admissible subcategory of $\cG\cP({\mathcal B})$. \subsection{\texorpdfstring{$P$}{}-admissible subcategories of \texorpdfstring{$\relGproj{P}{{\mathcal A}}$}{}}\label{Admissible subcategories} \begin{Definition}\label{Definition:10} A full subcategory ${\mathcal X}\subset {\mathcal A}$ is called a $P$-\emphbf{admissible subcategory of} $\relGproj{P}{{\mathcal A}}$ if it is closed under extensions and direct summands, and satisfies the following properties: \begin{enumerate} \item \label{Definition:10,1}${\mathcal X}$ contains all the $P$-projective objects of ${\mathcal A}$; \item \label{Definition:10,2}$L_1\nu(X)=0$ for all $X\in {\mathcal X}$; \item \label{Definition:10,3}For all $X\in {\mathcal X}$ there exists a short exact sequence $0\to X'\xrightarrow{} A\xrightarrow{} X\to 0$ with $A$ being $P$-projective and $X'\in {\mathcal X}$; \item \label{Definition:10,4}For all $X\in {\mathcal X}$ there exists a short exact sequence $0\to X\xrightarrow{} A\xrightarrow{} X'\to 0$ with $A$ being $P$-projective and $X'\in {\mathcal X}$. \end{enumerate} \end{Definition} The following result is immediate from the definition. \begin{Proposition}\label{Proposition:2*} The following hold: \begin{enumerate} \item\label{Proposition:2,1*} $\relGproj{P}{{\mathcal A}}$ is a $P$-admissible subcategory of $\relGproj{P}{{\mathcal A}}$; \item\label{Proposition:2,2*} Assume ${\mathcal X}$ is a $P$-admissible subcategory of $\relGproj{P}{{\mathcal A}}$. Then ${\mathcal X}\subset \relGproj{P}{{\mathcal A}}$. \end{enumerate} \end{Proposition} \begin{Example}\label{Example:4} Let $\Lambda$ be a finite-dimensional algebra over a field $k$. Furthermore, let $g^*\colon \Lambda\text{-}\operatorname{Mod}\to k\text{-}\operatorname{Mod}$ be the restriction functor and $g_!=\Lambda \otimes_k -\colon k\text{-}\operatorname{Mod}\to \Lambda\text{-}\operatorname{Mod}$ its left adjoint. As stated in Example \ref{Example:3}, the adjoint pair $g_!\dashv g^*$ has a Nakayama functor \[ \nu'=\operatorname{Hom}_k(\Lambda,k)\otimes_{\Lambda}-\colon \Lambda\text{-}\operatorname{Mod}\to \Lambda\text{-}\operatorname{Mod}. \] In this case the $P'$-projective objects are just the projective $\Lambda$-modules, where $P':= g_!\circ g^*$. Also, $L_1\nu'(M)= \operatorname{Tor}^{\Lambda}_1(\operatorname{Hom}_k(\Lambda,k),M)=0$ if and only if \[ \operatorname{Hom}_k(\operatorname{Tor}^{\Lambda}_1(\operatorname{Hom}_k(\Lambda,k),M),k)\cong \operatorname{Ext}^1_{\Lambda}(M,\Lambda )=0. \] If this is the case, then also $\operatorname{Ext}^1_{\Lambda}(M,\prod\Lambda )\cong \prod \operatorname{Ext}^1_{\Lambda}(M,\Lambda ) = 0$.
Since any projective $\Lambda$-module is a direct summand of a product $\prod \Lambda$ when $\Lambda$ is finite-dimensional, it follows that $L_1\nu'(M)=0$ if and only if $\operatorname{Ext}^1_{\Lambda}(M,Q)=0$ for any $Q\in \operatorname{Proj} (\Lambda\text{-}\operatorname{Mod})$. Hence, the $P'$-admissible subcategories of $\relGproj{P'}{\Lambda\text{-}\operatorname{Mod}}$ are precisely the admissible subcategories of $\cG\cP(\Lambda\text{-}\operatorname{Mod})$. In particular, it follows that \[ \cG\cP(\Lambda\text{-}\operatorname{Mod})=\relGproj{P'}{\Lambda\text{-}\operatorname{Mod}}. \] \end{Example} In the following we consider the adjunctions \begin{align*} & \adjiso{f^*\circ \nu}{f_!}\colon {\mathcal B}(f^*\nu(A),B)\xrightarrow{\cong}{\mathcal A}(A,f_!(B)) \\ & \adjiso{f_!}{f^*}\colon {\mathcal A}(f_!(B),A)\xrightarrow{\cong}{\mathcal B}(B,f^*(A)) \end{align*} with units and counits \begin{align*} & \unit{f^*\circ \nu}{f_!}\colon 1_{{\mathcal A}}\to f_!\circ f^*\circ \nu \quad \counit{f^*\circ \nu}{f_!}\colon f^*\circ \nu\circ f_! \to 1_{{\mathcal B}} \\ & \unit{f_!}{f^*}\colon 1_{{\mathcal B}}\to f^*\circ f_! \quad \counit{f_!}{f^*}\colon f_!\circ f^* \to 1_{{\mathcal A}} \end{align*} Since $f^*$ is faithful, it follows that $\counit{f_!}{f^*}$ is an epimorphism. \begin{Lemma}\label{Lemma:6} Let ${\mathcal X}$ be a $P$-admissible subcategory of $\relGproj{P}{{\mathcal A}}$, and let $X\in {\mathcal X}$. The following holds: \begin{enumerate} \item\label{Lemma:6,1} $\unit{f^*\circ \nu}{f_!}_X$ is a monomorphism and $\operatorname{Coker} \unit{f^*\circ \nu}{f_!}_X\in {\mathcal X}$; \item\label{Lemma:6,2}$\operatorname{Ker} \counit{f_!}{f^*}_X\in {\mathcal X}$. \end{enumerate} \end{Lemma} \begin{proof} We prove \ref{Lemma:6,1}. Since $X\in {\mathcal X}$ there exists an exact sequence $0\to X\xrightarrow{i} f_!(B)\xrightarrow{} X'\to 0$ with $X'\in {\mathcal X}$ and $B\in {\mathcal B}$. Since $i=f_!((\adjiso{f^*\circ \nu}{f_!})^{-1}(i))\circ \unit{f^*\circ \nu}{f_!}_X$, it follows that $\unit{f^*\circ \nu}{f_!}_X$ is a monomorphism. We therefore have a commutative diagram \begin{equation*} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=3.5em,column sep=5.0em,text height=1.5ex, text depth=0.25ex] { X & f_!f^*\nu(X) & \operatorname{Coker} \unit{f^*\circ \nu}{f_!}_X \\ X & f_!(B) & X' \\}; \path[->] (m-1-1) edge node[auto] {$\unit{f^*\circ \nu}{f_!}_X$} (m-1-2) (m-1-2) edge node[auto] {$$} (m-1-3) (m-2-1) edge node[auto] {$i$} (m-2-2) (m-2-2) edge node[auto] {$$} (m-2-3) (m-1-1) edge node[auto] {$1_X$} (m-2-1) (m-1-2) edge node[auto] {$f_!((\adjiso{f^*\circ \nu}{f_!})^{-1}(i))$} (m-2-2) (m-1-3) edge node[auto] {$$} (m-2-3); \end{tikzpicture} \end{equation*} where the rows are short exact sequences. By the dual of Lemma 5.2 in \cite{Pop73} it follows that the right square is a pushforward and a pullback square. Hence we get a short exact sequence \[ 0\to f_!f^*\nu(X)\to f_!(B)\oplus \operatorname{Coker} \unit{f^*\circ \nu}{f_!}_X \to X'\to 0. \] Since ${\mathcal X}$ is closed under extensions and direct summands, it follows that $\operatorname{Coker} \unit{f^*\circ \nu}{f_!}_X\in {\mathcal X}$. For \ref{Lemma:6,2}, choose an exact sequence $0\to X''\xrightarrow{} f_!(B')\xrightarrow{p} X\to 0$ with $X''\in {\mathcal X}$ and $B'\in {\mathcal B}$. 
We then get a commutative diagram \begin{equation*} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=3.5em,column sep=5.0em,text height=1.5ex, text depth=0.25ex] { \operatorname{Ker} \counit{f_!}{f^*}_X & f_!f^*(X) & X \\ X'' & f_!(B') & X \\}; \path[->] (m-1-1) edge node[auto] {$$} (m-1-2) (m-1-2) edge node[auto] {$\counit{f_!}{f^*}_X$} (m-1-3) (m-2-1) edge node[auto] {$$} (m-2-2) (m-2-2) edge node[auto] {$p$} (m-2-3) (m-2-1) edge node[auto] {$$} (m-1-1) (m-2-2) edge node[auto] {$f_!(\adjiso{f_!}{f^*}(p))$} (m-1-2) (m-2-3) edge node[auto] {$1_X$} (m-1-3); \end{tikzpicture} \end{equation*} where the rows are short exact sequences. The left square is a pushforward and a pullback square, and therefore gives rise to an exact sequence \[ 0\to X''\to f_!(B')\oplus \operatorname{Ker} \counit{f_!}{f^*}_X \to f_!f^*(X)\to 0. \] Since ${\mathcal X}$ is closed under extensions and direct summands, it follows that $\operatorname{Ker} \counit{f_!}{f^*}_X\in {\mathcal X}$. \end{proof} \begin{Lemma}\label{Lemma:10} Let ${\mathcal X}$ be a $P$-admissible subcategory of $\relGproj{P}{{\mathcal A}}$. The following holds: \begin{enumerate} \item\label{Lemma:10:1} Let $s\colon X\to f_!(B)$ be a morphism in ${\mathcal A}$ with $X\in {\mathcal X}$ and $B\in {\mathcal B}$. Assume $(\adjiso{f^*\circ \nu}{f_!})^{-1}(s)\colon f^*\nu(X)\to B$ is a monomorphism. Then $s$ is a monomorphism and $\operatorname{Coker} s\in {\mathcal X}$; \item\label{Lemma:10:2} Let $s'\colon f_!(B)\to X$ be a morphism in ${\mathcal A}$ with $X\in {\mathcal X}$ and $B\in {\mathcal B}$. Assume that $\adjiso{f_!}{f^*}(s')\colon B\to f^*(X)$ is an epimorphism. Then $s'$ is an epimorphism and $\operatorname{Ker} s'\in {\mathcal X}$. \end{enumerate} \end{Lemma} \begin{proof} We only prove part \ref{Lemma:10:1}, part \ref{Lemma:10:2} is proved dually. Consider the commutative diagram \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=3.5em,column sep=5.0em,text height=1.5ex, text depth=0.25ex] { X & f_!f^*\nu(X) & \operatorname{Coker} \unit{f^*\circ \nu}{f_!}_X \\ X & f_!(B) & \operatorname{Coker} s \\}; \path[->] (m-1-1) edge node[auto] {$\unit{f^*\circ \nu}{f_!}_X$} (m-1-2) (m-1-2) edge node[auto] {$$} (m-1-3) (m-2-1) edge node[auto] {$s$} (m-2-2) (m-2-2) edge node[auto] {$$} (m-2-3) (m-1-1) edge node[auto] {$1_X$} (m-2-1) (m-1-2) edge node[auto] {$f_!((\adjiso{f^*\circ \nu}{f_!})^{-1}(s))$} (m-2-2) (m-1-3) edge node[auto] {$t$} (m-2-3); \end{tikzpicture} \] where $t$ is induced from the commutativity of the left square. Since $\unit{f^*\circ \nu}{f_!}_X$ is a monomorphism by Lemma \ref{Lemma:6}, we get that $s$ is a monomorphism. Hence, the upper and lower row are short exact sequences. Therefore, by the snake lemma $t$ is a monomorphism and \[ \operatorname{Coker} t \cong \operatorname{Coker} f_!((\adjiso{f^*\circ \nu}{f_!})^{-1}(s)) \cong f_! (\operatorname{Coker} (\adjiso{f^*\circ \nu}{f_!})^{-1}(s)) \] Hence, we get an exact sequence \[ 0\to \operatorname{Coker} \unit{f^*\circ \nu}{f_!}_X\xrightarrow{t} \operatorname{Coker} s\to f_! (\operatorname{Coker} (\adjiso{f^*\circ \nu}{f_!})^{-1}(s))\to 0. \] Since ${\mathcal X}$ is closed under extensions, $f_! (\operatorname{Coker} (\adjiso{f^*\circ \nu}{f_!})^{-1}(s))$ is $P$-projective, and $\operatorname{Coker} \unit{f^*\circ \nu}{f_!}_X\in {\mathcal X}$ by Lemma \ref{Lemma:6}, we get that $\operatorname{Coker} s\in {\mathcal X}$. 
\end{proof} \begin{Example}\label{Example:5} Let $k$ be a field, let $\Lambda_1$ be a finite-dimensional $k$-algebra, and let $\Lambda_2$ be a $k$-algebra which is left coherent. Let $f_!\dashv f^*$ be the adjoint pair with Nakayama functor $\nu$ as in Example \ref{finitely presented}. Let ${\mathcal F}\subset \cG\cP(\Lambda_1\text{-}\operatorname{Mod})$ be an admissible subcategory. We claim that the category \[ {\mathcal X} =\{ M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod} \mid\text{ } {}_{\Lambda_1}M\in {\mathcal F}\} \] is a $P$-admissible subcategory of $\relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}$, where $P:=f_!\circ f^*$. Indeed, the $P$-projective objects are summands of modules of the form $\Lambda_1\otimes_k M$. Since they are projective when restricted to $\Lambda_1\text{-}\operatorname{Mod}$, they are contained in ${\mathcal X}$, which shows \ref{Definition:10,1}. Furthermore, for $M\in {\mathcal X}$ we have $L_1\nu(M)= \operatorname{Tor}^{\Lambda_1}_1(\operatorname{Hom}_k(\Lambda_1,k),M)$, and this is $0$ since ${}_{\Lambda_1}M\in {\mathcal F} \subset \cG\cP(\Lambda_1\text{-}\operatorname{Mod})$ and $\operatorname{Hom}_k(\operatorname{Tor}^{\Lambda_1}_1(\operatorname{Hom}_k(\Lambda_1,k),M),k)\cong \operatorname{Ext}^1_{\Lambda_1}(M,\Lambda_1)$. This shows \ref{Definition:10,2}. Also, ${\mathcal X}$ is closed under kernels of epimorphisms since ${\mathcal F}$ is resolving by Lemma \ref{Lemma:1}, and hence it satisfies \ref{Definition:10,3}. It only remains to show \ref{Definition:10,4}. By Example \ref{Example:4} we know that ${\mathcal F}$ is a $P'$-admissible subcategory of $\relGproj{P'}{\Lambda_1\text{-}\operatorname{Mod}}$, where $P'=g_!\circ g^*$ and $g^*\colon \Lambda_1\text{-}\operatorname{Mod}\to k\text{-}\operatorname{Mod}$ is the restriction with left adjoint $g_!=\Lambda_1\otimes_k-\colon k\text{-}\operatorname{Mod}\to \Lambda_1\text{-}\operatorname{Mod}$. Consider the exact sequence \[ 0\to M\xrightarrow{\unit{f^*\circ \nu}{f_!}_M} f_!f^*\nu (M) \to \operatorname{Coker} \unit{f^*\circ \nu}{f_!}_M \to 0 \] of $\Lambda_1\otimes_k \Lambda_2$-modules. Restricting to $\Lambda_1\text{-}\operatorname{Mod}$ gives the exact sequence \[ 0\to {}_{\Lambda_1}M\xrightarrow{\unit{g^*\circ \nu'}{g_!}_{{}_{\Lambda_1}M}} g_!g^*\nu' ({}_{\Lambda_1}M) \to \operatorname{Coker} \unit{g^*\circ \nu'}{g_!}_{{}_{\Lambda_1}M} \to 0. \] It follows from Lemma \ref{Lemma:6} that $\operatorname{Coker} \unit{g^*\circ \nu'}{g_!}_{{}_{\Lambda_1}M}\in {\mathcal F}$. Therefore, we have that $\operatorname{Coker} \unit{f^*\circ \nu}{f_!}_M\in {\mathcal X}$. This implies that ${\mathcal X}$ satisfies \ref{Definition:10,4}, which proves the claim. In particular, ${\mathcal X}$ is a $P$-admissible subcategory of $\relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}$ when ${\mathcal F} =\cG\cP (\Lambda_1 \text{-}\operatorname{Mod})$ or ${\mathcal F}= \operatorname{Proj} (\Lambda_1 \text{-}\operatorname{Mod} )$. Now assume ${\mathcal F} =\cG\cP (\Lambda_1 \text{-}\operatorname{Mod})$. We claim that ${\mathcal X}=\relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}$. By the argument above we know that ${\mathcal X}\subset \relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}$, so we only need to show the other inclusion.
Assume $M\in \relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}$, and let $A_{\bullet}$ be an exact sequence in $(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}$ with $Z_0(A_{\bullet})=M$ as in Definition \ref{Definition:7}. Note that the components of ${}_{\Lambda_1}A_{\bullet}$ are projective $\Lambda_1$-modules. Furthermore, since the sequence $\nu(A_{\bullet})$ is exact, the sequence \[ \operatorname{Hom}_k(\nu(A_{\bullet}),k)=\operatorname{Hom}_k(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1}A_{\bullet},k) \cong \operatorname{Hom}_{\Lambda_1}(A_{\bullet},\Lambda_1) \] is exact. Since any projective $\Lambda_1$-module is a summand of a product of copies of $\Lambda_1$, and $\operatorname{Hom}_{\Lambda_1}(A_{\bullet},\prod \Lambda_1)\cong \prod \operatorname{Hom}_{\Lambda_1}(A_{\bullet},\Lambda_1)$ is exact, it follows that ${}_{\Lambda_1}A_{\bullet}$ is a totally acyclic complex of $\Lambda_1$-modules. This shows that ${}_{\Lambda_1}M\in \cG\cP(\Lambda_1\text{-}\operatorname{Mod})$, and the claim follows. \end{Example} \begin{Example}\label{Example:6} Let $k$ be a field, let $\Lambda_1$ be a finite-dimensional $k$-algebra, and let $\Lambda_2$ be a $k$-algebra. Let $f_!\dashv f^*$ be the adjoint pair with Nakayama functor $\nu$ as in Example \ref{Example:3}. By a similar argument as in Example \ref{Example:5} we get that if ${\mathcal F}\subset \Lambda_1\text{-}\operatorname{Mod}$ is an admissible subcategory of $\cG\cP(\Lambda_1\text{-}\operatorname{Mod})$, then \[ {\mathcal X} =\{ M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod} \mid \text{ } {}_{\Lambda_1}M\in {\mathcal F}\} \] is a $P$-admissible subcategory of $\relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod}}$, where $P=f_!\circ f^*$. Also, we get that \[ \relGproj{P}{(\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{Mod}} =\{ M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod} \mid \text{ } {}_{\Lambda_1}M\in \cG\cP(\Lambda_1\text{-}\operatorname{Mod})\}. \] \end{Example} \subsection{Lifting admissible subcategories}\label{Lifting admissible subcategories} Note that $f_!$ preserves projective objects since it has an exact right adjoint. In fact, we have the following result. \begin{Lemma} Assume ${\mathcal B}$ has enough projectives. Then the full subcategory \[ f_!(\operatorname{Proj}({\mathcal B})):= \{f_!(Q)\mid Q\in \operatorname{Proj}({\mathcal B})\} \] is generating in ${\mathcal A}$. In particular, ${\mathcal A}$ has enough projectives. \end{Lemma} \begin{proof} For $A\in {\mathcal A}$ choose an epimorphism $Q\xrightarrow{p} f^*(A)$ in ${\mathcal B}$ with $Q$ projective. The composition $f_!(Q)\xrightarrow{f_!(p)} f_! f^*(A)\xrightarrow{\counit{f_!}{f^*}_A} A$ is then an epimorphism in ${\mathcal A}$. This proves the claim. \end{proof} For the remainder of this section we assume ${\mathcal B}$ has enough projective objects. Furthermore, we fix a $P$-admissible subcategory ${\mathcal X}$ of $\relGproj{P}{{\mathcal A}}$ and an admissible subcategory ${\mathcal F}$ of $\cG\cP({\mathcal B})$. Let \[ (f^*\circ \nu)^{-1}({\mathcal F}):= \{ A\in {\mathcal A} \mid f^*\nu(A)\in {\mathcal F}\}. \] Our goal is to show that $(f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ is an admissible subcategory of $\cG\cP({\mathcal A})$. \begin{Lemma}\label{Lemma:12} The category $(f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ is closed under extensions and direct summands in ${\mathcal A}$.
\end{Lemma} \begin{proof} It is immediate that $(f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ is closed under direct summands. We show that it is closed under extensions. Let $0\to A_1\xrightarrow{s} A_2\xrightarrow{t} A_3\to 0$ be an exact sequence in ${\mathcal A}$ with $A_1,A_3\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$. Since ${\mathcal X}$ is closed under extensions, it follows that $A_2\in {\mathcal X}$. Also, since $L_1\nu(A_3)=0$, we have an exact sequence \[ 0\to f^*\nu(A_1)\xrightarrow{f^*\nu(s)} f^*\nu(A_2)\xrightarrow{f^*\nu(t)} f^*\nu(A_3)\to 0 \] in ${\mathcal B}$. Since ${\mathcal F}$ is closed under extensions, it follows that $f^*\nu(A_2)\in {\mathcal F}$. This proves the claim. \end{proof} Since $(f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ is closed under extensions, it inherits an exact structure from ${\mathcal A}$. \begin{Lemma}\label{Lemma:13} The category $(f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ contains the projective objects in ${\mathcal A}$. \end{Lemma} \begin{proof} Let $Q\in {\mathcal B}$ be projective. Then $f^*\nu f_! (Q)$ is projective since the functor ${\mathcal B}(f^*\nu f_! (Q),-)\cong {\mathcal B}(Q,f^*f_!(-))$ is exact. Since ${\mathcal X}$ contains all the $P$-projective objects of ${\mathcal A}$ and ${\mathcal F}$ contains all the projective objects of ${\mathcal B}$, it follows that $f_!(Q)\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$. Since any projective object in ${\mathcal A}$ is a summand of an object of the form $f_!(Q)$, the claim follows. \end{proof} \begin{Lemma}\label{Lemma:15} We have $\operatorname{Ext}^{i}_{{\mathcal A}}(A,Q)=0$ for all $i>0$, $A\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$, and $Q\in {\mathcal A}$ projective. \end{Lemma} \begin{proof} We only need to show the statement for $Q=f_!(Q')$ where $Q'\in {\mathcal B}$ is projective. Note first that any exact sequence $0\to f_!(Q')\to \cdots \to A\to 0$ stays exact under the functor $f^*\circ \nu$ since $L_i\nu(A)=0$ for all $i>0$ and as $f^*$ is exact. Since we have an adjunction $f^*\circ \nu\dashv f_!$ and the functor $f_!$ is exact it follows that $\operatorname{Ext}^i_{{\mathcal A}}(A,f_!(Q'))\cong \operatorname{Ext}^i_{{\mathcal B}}(f^*\nu(A),Q')$ by Lemma 6.1 in \cite{HJ16}. Since the latter is $0$ by the assumption on $A$, the claim follows. \end{proof} \begin{Lemma}\label{Lemma:14} If $A\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$, then there exists a projective object $Q\in {\mathcal A}$ and an epimorphism $p\colon Q\to A$ such that $\operatorname{Ker} p\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$. \end{Lemma} \begin{proof} Let $A\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ be arbitrary, and choose an epimorphism $q\colon Q'\to f^*(A)$ in ${\mathcal B}$ with $Q'$ projective. By Lemma \ref{Lemma:10} part \ref{Lemma:10:2} the morphism $(\adjiso{f_!}{f^*})^{-1}(q)\colon f_!(Q')\to A$ is an epimorphism and $\operatorname{Ker} (\adjiso{f_!}{f^*})^{-1}(q)\in {\mathcal X}$. Since $f_!(Q')$ is projective, it only remains to show $\operatorname{Ker} (\adjiso{f_!}{f^*})^{-1}(q) \in (f^*\circ \nu)^{-1}({\mathcal F})$. To this end, note that applying $f^*\circ \nu$ to \[ 0\to \operatorname{Ker} (\adjiso{f_!}{f^*})^{-1}(q)\to f_!(Q')\xrightarrow{(\adjiso{f_!}{f^*})^{-1}(q)}A\to 0 \] gives an exact sequence \[ 0\to f^*\nu(\operatorname{Ker} (\adjiso{f_!}{f^*})^{-1}(q))\to f^*\nu f_!(Q')\xrightarrow{f^*\nu((\adjiso{f_!}{f^*})^{-1}(q))}f^*\nu(A)\to 0 \] in ${\mathcal B}$ since $L_1\nu(A)=0$. 
By Lemma \ref{Lemma:1} we have that ${\mathcal F}$ is resolving, and therefore $f^*\nu(\operatorname{Ker} (\adjiso{f_!}{f^*})^{-1}(q))\in {\mathcal F}$. This proves the claim. \end{proof} \begin{Lemma}\label{Lemma:16} If $A\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$, then there exists a projective object $Q\in {\mathcal A}$ and a monomorphism $j\colon A\to Q$ such that $\operatorname{Coker} j\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$. \end{Lemma} \begin{proof} Let $A\in (f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ be arbitrary. Choose a projective object $Q'\in {\mathcal B}$ and an exact sequence \[ 0\to f^*\nu(A)\xrightarrow{i} Q' \xrightarrow{p} B\to 0 \] with $B\in {\mathcal F}$. By Lemma \ref{Lemma:10} we get that $j:=\adjiso{f^*\circ \nu}{f_!}(i)\colon A\to f_!(Q')$ is a monomorphism and $\operatorname{Coker} j\in {\mathcal X}$. Since $f_!(Q')$ is projective, it only remains to show that $\operatorname{Coker} j\in (f^*\circ \nu)^{-1}({\mathcal F})$. To this end, note that we have a commutative diagram \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=2.5em,column sep=4em,text height=1.5ex, text depth=0.25ex] { f^*\nu(A) & f^*\nu f_!(Q') & f^*\nu(\operatorname{Coker} j) \\ f^*\nu (A) & Q' & B \\}; \path[->] (m-1-1) edge node[auto] {$f^*\nu(j)$} (m-1-2) (m-1-2) edge node[auto] {$$} (m-1-3) (m-2-1) edge node[auto] {$i$} (m-2-2) (m-2-2) edge node[auto] {$$} (m-2-3) (m-1-1) edge node[auto] {$1_{f^*\nu (A)}$} (m-2-1) (m-1-2) edge node[auto] {$\counit{f^*\circ \nu}{f_!}_{Q'}$} (m-2-2) (m-1-3) edge node[auto] {$$} (m-2-3); \end{tikzpicture} \] where the rows are short exact sequences. Hence, the right square is a pullback and a pushout square. Therefore, we get an exact sequence \[ 0\to f^*\nu f_! (Q')\to f^*\nu (\operatorname{Coker} j) \oplus Q'\to B\to 0. \] We know that $B\in {\mathcal F}$, $f^*\nu f_!(Q')$ is projective, and ${\mathcal F}$ is closed under extensions and direct summands. Therefore, it follows that $f^*\nu(\operatorname{Coker} j)\in {\mathcal F}$. This proves the claim. \end{proof} \begin{Theorem}\label{Theorem:3} The category $(f^*\circ \nu)^{-1}({\mathcal F})\cap {\mathcal X}$ is an admissible subcategory of $\cG\cP({\mathcal A})$. \end{Theorem} \begin{proof} This follows from Lemma \ref{Lemma:12}, \ref{Lemma:13}, \ref{Lemma:15}, \ref{Lemma:14} and \ref{Lemma:16}. \end{proof} \begin{Example}\label{Example:7} Let $k$ be a field, let $\Lambda_1$ be a finite-dimensional algebra over $k$, and let $\Lambda_2$ be a left coherent $k$-algebra. 
Theorem \ref{Theorem:3} together with Example \ref{Example:5} show that the categories \begin{enumerate} \item $\{M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{mod} \mid \text{ } _{\Lambda_1}M\in\cG\cP(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{mod} )\}$ \item $\{ M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{mod} \mid \text{ } _{\Lambda_1}M\in\cG\cP(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \operatorname{Proj}(\Lambda_2\text{-}\operatorname{mod} )\}$ \item $\{ M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{mod} \mid \text{ } _{\Lambda_1}M\in\operatorname{Proj}(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{mod} )\}$ \item $\{ M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{mod} \mid \text{ } _{\Lambda_1}M\in\operatorname{Proj}(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \operatorname{Proj}(\Lambda_2\text{-}\operatorname{mod} )\}$ \end{enumerate} are admissible subcategories of $\cG\cP((\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod})$. \end{Example} \begin{Example}\label{Example:8} Let $k$ be a field, let $\Lambda_1$ be a finite-dimensional algebra over $k$, and let $\Lambda_2$ be a $k$-algebra. Example \ref{Example:6} together with Theorem \ref{Theorem:3} show that the categories \begin{enumerate} \item $\{M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{Mod} \mid \text{ } _{\Lambda_1}M\in\cG\cP(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{Mod} )\}$ \item $\{ M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{Mod} \mid\text{ } _{\Lambda_1}M\in\cG\cP(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \operatorname{Proj}(\Lambda_2\text{-}\operatorname{Mod} )\}$ \item $\{ M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{Mod} \mid\text{ } _{\Lambda_1}M\in\operatorname{Proj}(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{Mod} )\}$ \item $\{ M\in (\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{Mod} \mid\text{ } _{\Lambda_1}M\in\operatorname{Proj}(\Lambda_1\text{-}\operatorname{Mod} ) \\ \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \operatorname{Proj}(\Lambda_2\text{-}\operatorname{Mod} )\}$ \end{enumerate} are admissible subcategories of $\cG\cP((\Lambda_1\otimes_k\Lambda_2)\text{-}\operatorname{Mod})$. \end{Example} \subsection{Lifting Gorenstein projectives}\label{Lifting Gorenstein projectives} Now assume ${\mathcal X}=\relGproj{P}{{\mathcal A}}$ and ${\mathcal F}=\cG\cP({\mathcal B})$. We define \[ \cG\cP(\relGproj{P}{{\mathcal A}}):=(f^*\circ \nu)^{-1}(\cG\cP({\mathcal B}))\cap \relGproj{P}{{\mathcal A}}. \] By Theorem \ref{Theorem:3} we know that $\cG\cP(\relGproj{P}{{\mathcal A}})$ is an admissible subcategory of $\cG\cP({\mathcal A})$, and therefore \[ \cG\cP(\relGproj{P}{{\mathcal A}})\subset \cG\cP({\mathcal A}). \] We want to investigate when this inclusion is an equality. 
We first give a different description of the objects in $\cG\cP(\relGproj{P}{{\mathcal A}})$. \begin{Proposition}\label{Proposition:3} Let $A\in {\mathcal A}$ be arbitrary. Then $A\in \cG\cP(\relGproj{P}{{\mathcal A}})$ if and only if there exists a totally acyclic complex \[ Q_{\bullet} = \cdots \xrightarrow{s_{2}} Q_{1}\xrightarrow{s_{1}} Q_0\xrightarrow{s_0} Q_{-1}\xrightarrow{s_{-1}} \cdots \] in ${\mathcal A}$, such that $Z_i(Q_{\bullet})\in \relGproj{P}{{\mathcal A}}$ for all $i\in \mathbb{Z}$, and such that $Z_0(Q_{\bullet})=A$. \end{Proposition} \begin{proof} Assume $A\in \cG\cP(\relGproj{P}{{\mathcal A}})$. Since $\cG\cP(\relGproj{P}{{\mathcal A}})$ is an admissible subcategory of $\cG\cP({\mathcal A})$, we can find a long exact sequence \[ Q_{\bullet} = \cdots \to Q_{1}\to Q_0\to Q_{-1}\to \cdots \] with $Q_i\in {\mathcal A}$ projective, $Z_0(Q_{\bullet})=A$, and $Z_i(Q_{\bullet})\in \cG\cP(\relGproj{P}{{\mathcal A}})$ for all $i\in \mathbb{Z}$. Furthermore, $\operatorname{Ext}^1_{{\mathcal A}}(A',Q')=0$ for all $A'\in \cG\cP(\relGproj{P}{{\mathcal A}})$ and $Q'\in \operatorname{Proj}({\mathcal A})$ since $\cG\cP(\relGproj{P}{{\mathcal A}})$ is admissible. This shows that $Q_{\bullet}$ is totally acyclic. For the converse, assume $Q_{\bullet}$ is totally acyclic, $Z_i(Q_{\bullet})\in \relGproj{P}{{\mathcal A}}$ for all $i\in \mathbb{Z}$, and $A=Z_0(Q_{\bullet})$. The sequence \[ f^*\nu(Q_{\bullet})=\cdots \xrightarrow{f^*\nu(s_{2})}f^*\nu(Q_{1})\xrightarrow{f^*\nu(s_{1})} f^*\nu(Q_0)\xrightarrow{f^*\nu (s_0)} \cdots \] is then exact since $L_1\nu(A')=0$ for all $A'\in \relGproj{P}{{\mathcal A}}$. Furthermore, the objects $f^*\nu(Q_i)\in {\mathcal B}$ are projective since $f^*\circ \nu$ preserves projectives. Applying ${\mathcal B}(-,Q)$ for $Q\in {\mathcal B}$ projective and using the isomorphism ${\mathcal B} (f^*\nu(Q_i),Q)\cong {\mathcal A}(Q_i,f_!(Q))$ gives us the sequence \[ \cdots \xrightarrow{-\circ s_{-1}}{\mathcal A}(Q_{-1},f_!(Q))\xrightarrow{-\circ s_0} {\mathcal A}(Q_{0},f_!(Q))\xrightarrow{-\circ s_{1}} {\mathcal A}(Q_{1},f_! (Q))\xrightarrow{-\circ s_{2}} \cdots \] which is exact since $Q_{\bullet}$ is totally acyclic. Hence, $f^*\nu(Q_{\bullet})$ is totally acyclic, and therefore $f^*\nu (A) = Z_0(f^*\nu(Q_{\bullet}))\in \cG\cP({\mathcal B} )$. This shows that $A\in \cG\cP(\relGproj{P}{{\mathcal A}})$, and we are done. \end{proof} \begin{Remark}\label{Remark:1} Proposition \ref{Proposition:3} shows that $A\in \cG\cP(\relGproj{P}{{\mathcal A}})$ if and only if $A$ is Gorenstein projective inside the exact category $\relGproj{P}{{\mathcal A}}$. This is the reason for the notation $\cG\cP(\relGproj{P}{{\mathcal A}})$. \end{Remark} \begin{Proposition}\label{Proposition:4} The following statements are equivalent: \begin{enumerate} \item\label{Proposition:4,1} $\cG\cP(\relGproj{P}{{\mathcal A}}) = \cG\cP({\mathcal A})$; \item\label{Proposition:4,2} $\cG\cP({\mathcal A})\subset \relGproj{P}{{\mathcal A}}$; \item\label{Proposition:4,3} $f^*\circ \nu\colon {\mathcal A}\to {\mathcal B}$ preserves Gorenstein projectives. \end{enumerate} \end{Proposition} \begin{proof} Obviously, \ref{Proposition:4,1} $\implies$ \ref{Proposition:4,2} and \ref{Proposition:4,1}$\implies$ \ref{Proposition:4,3}. Also, if \ref{Proposition:4,2} holds then any totally acyclic complex satisfies the assumptions in Proposition \ref{Proposition:3}, and therefore \ref{Proposition:4,1} holds. We show the implication \ref{Proposition:4,3}$\implies$ \ref{Proposition:4,1}. 
Assume $f^*\circ \nu$ preserves Gorenstein projectives, and let $A\in \cG\cP({\mathcal A})$ be arbitrary. We only need to show that $L_1\nu(A)=0$ since this implies that if $Q_{\bullet}$ is totally acyclic, then $\nu(Q_{\bullet})$ is exact, and hence $Z_0(Q_{\bullet})\in \relGproj{P}{{\mathcal A}}$ by definition since projective objects are $P$-projective. Let \[ 0\to A'\xrightarrow{s} Q\xrightarrow{t} A\to 0 \] be an exact sequence in ${\mathcal A}$ with $Q$ projective and $A'\in \cG\cP({\mathcal A})$. Applying $\nu$ gives an exact sequence \[ 0\to L_1\nu(A)\to \nu(A')\xrightarrow{\nu(s)} \nu (Q)\xrightarrow{\nu(t)} \nu (A)\to 0. \] Hence, $L_1\nu(A)=0$ if and only if $\nu(s)$ is a monomorphism. Let $Q'\in {\mathcal B}$ be a projective object. We know that the map ${\mathcal A}(Q,f_!(Q'))\xrightarrow{-\circ s} {\mathcal A}(A', f_!(Q'))$ is an epimorphism since $\operatorname{Ext}^1_{{\mathcal A}}(A,f_!(Q'))=0$. Hence, from the adjunction $f^*\circ \nu\dashv f_!$ we get that \[ {\mathcal B}(f^*\nu(Q), Q')\xrightarrow{-\circ f^*\nu (s)} {\mathcal B}(f^*\nu(A'), Q') \] is an epimorphism. It follows therefore from Lemma \ref{Lemma:2} that $f^*\nu(s)$ is a monomorphism. Since $f^*$ is faithful, we get that $\nu(s)$ is a monomorphism. This proves the claim. \end{proof} The following result gives sufficient criteria for when $\cG\cP(\relGproj{P}{{\mathcal A}})= \cG\cP({\mathcal A})$. \begin{Theorem}\label{Theorem:4} We have that $\cG\cP(\relGproj{P}{{\mathcal A}})= \cG\cP({\mathcal A})$ if either of the following conditions holds: \begin{enumerate} \item\label{Theorem:4,1} For any long exact sequence \[ 0\to K\to Q_0\to Q_{-1}\to \cdots \] with $Q_i\in {\mathcal A}$ projective for $i\leq 0$, we have $K\in \relGproj{P}{{\mathcal A}}$; \item\label{Theorem:4,2} If $B\in {\mathcal B}$ satisfies $\operatorname{Ext}^1_{{\mathcal B}}(B,B')=0$ for all $B'$ with $\operatorname{pdim} B'< \infty$, then $B\in \cG\cP({\mathcal B})$. \end{enumerate} \end{Theorem} \begin{proof} Proposition \ref{Proposition:4} part \ref{Proposition:4,2} shows that condition \ref{Theorem:4,1} is sufficient. Assume condition \ref{Theorem:4,2} holds. By Proposition \ref{Proposition:4} part \ref{Proposition:4,3} it is sufficient to show that $f^*\nu(A)\in \cG\cP({\mathcal B})$ for all $A\in \cG\cP({\mathcal A})$. Fix $A\in \cG\cP({\mathcal A})$, and let $0\to A'\xrightarrow{s} Q\xrightarrow{t} A\to 0$ be an exact sequence in ${\mathcal A}$ with $Q\in \operatorname{Proj}({\mathcal A})$. Applying $f^*\circ \nu$ gives an exact sequence $ f^*\nu(A')\xrightarrow{f^*\nu(s)} f^*\nu(Q)\xrightarrow{f^*\nu(t)} f^*\nu(A)\to 0$ in ${\mathcal B}$. Let $i\colon K \to f^*\nu(Q)$ be the inclusion of the kernel of $f^*\nu(t)$, let $p\colon f^*\nu(A')\to K$ be the surjection induced from $f^*\nu(s)$, and let $B'\in {\mathcal B}$ be an arbitrary object. Applying ${\mathcal B}(-,B')$ gives an exact sequence \begin{multline*} 0\to {\mathcal B}(f^*\nu(A),B')\xrightarrow{-\circ f^*\nu(t)} {\mathcal B}(f^*\nu(Q),B')\xrightarrow{-\circ i} {\mathcal B}(K,B') \\ \to \operatorname{Ext}^1_{{\mathcal B}}(f^*\nu(A),B')\to 0 \end{multline*} where $\operatorname{Ext}^1_{{\mathcal B}}(f^*\nu(Q),B')=0$ since $f^*\nu$ preserves projective objects. Hence, we only need to show that $-\circ i\colon {\mathcal B}(f^*\nu(Q),B')\to {\mathcal B}(K,B')$ is an epimorphism if $\operatorname{pdim} B'<\infty$. 
To this end, note that $\operatorname{Ext}^1_{{\mathcal A}}(A,f_!(B'))=0$ if $\operatorname{pdim} B'<\infty$ since $A\in \cG\cP({\mathcal A})$ and $f_!$ preserves objects of finite projective dimension. Therefore, we have an exact sequence \[ 0\to {\mathcal A}(A,f_!(B'))\xrightarrow{-\circ t} {\mathcal A}(Q,f_!(B'))\xrightarrow{-\circ s} {\mathcal A}(A',f_!(B'))\to 0. \] Via the adjunction isomorphism ${\mathcal A}(-,f_!(-))\cong {\mathcal B}(f^*\nu(-),-)$ the map \[ {\mathcal A}(Q,f_!(B'))\xrightarrow{-\circ s} {\mathcal A}(A',f_!(B')) \] corresponds to \[ {\mathcal B}(f^*\nu(Q),B')\xrightarrow{-\circ f^*\nu(s)} {\mathcal B}(f^*\nu(A'),B') \] which is therefore also an epimorphism. But $-\circ f^*\nu(s)$ factors as \[ {\mathcal B}(f^*\nu(Q),B')\xrightarrow{-\circ i} {\mathcal B}(K,B') \xrightarrow{-\circ p} {\mathcal B}(f^*\nu(A'),B'). \] Since ${\mathcal B}(K,B') \xrightarrow{-\circ p} {\mathcal B}(f^*\nu(A'),B')$ is a monomorphism, it follows that \[ {\mathcal B}(f^*\nu(Q),B')\xrightarrow{-\circ i} {\mathcal B}(K,B') \] is an epimorphism. This proves the claim. \end{proof} \begin{Corollary}\label{Gorenstein adjoint pairs lifts Gorenstein projectives} If $P$ is Iwanaga-Gorenstein, then \[ \cG\cP(\relGproj{P}{{\mathcal A}})= \cG\cP({\mathcal A}). \] \end{Corollary} \begin{proof} This follows from condition \ref{Theorem:4,1} in Theorem \ref{Theorem:4} and the fact that $\dim_{\relGproj{P}{{\mathcal A}}}({\mathcal A})<\infty$ when $P$ is Iwanaga-Gorenstein. \end{proof} Recall that ${\mathcal B}$ is $\operatorname{Proj}({\mathcal B})$\emphbf{-Gorenstein} if $\operatorname{G.pdim} (B)<\infty$ for all $B\in {\mathcal B}$ \cite[Corollary 4.13]{Bel00}. \begin{Lemma}\label{Proj Gorenstein} If ${\mathcal B}$ is $\operatorname{Proj}({\mathcal B})$-Gorenstein, then \[ \cG\cP({\mathcal B})= \{B\in {\mathcal B}\mid \operatorname{Ext}^1_{{\mathcal B}}(B,B')=0 \text{ for all } B' \text{ satisfying } \operatorname{pdim} B'<\infty \}. \] \end{Lemma} \begin{proof} The inclusion from left to right always holds, since $\operatorname{Ext}^i_{{\mathcal B}}(B,B')$ vanishes for all $i>0$ whenever $B$ is Gorenstein projective and $\operatorname{pdim} B'<\infty$. For the other inclusion, assume $\operatorname{Ext}^1_{{\mathcal B}}(B,B')=0$ for all $B'$ satisfying $\operatorname{pdim} B'<\infty$. Since $\operatorname{G.pdim} (B)<\infty$, there exists an exact sequence $0\to B_2\to B_1\to B\to 0$ such that $B_1\in \cG\cP ({\mathcal B})$ and $\operatorname{pdim} B_2<\infty$ by \cite[Theorem 1.1]{AB89}. Since $\operatorname{Ext}^1_{{\mathcal B}}(B,B_2)=0$ by assumption, the sequence is split. Hence, $B$ is a direct summand of $B_1$, and therefore $B\in \cG\cP({\mathcal B})$. This proves the claim. \end{proof} \begin{Corollary} If ${\mathcal B}$ is $\operatorname{Proj}({\mathcal B})$-Gorenstein, then \[ \cG\cP(\relGproj{P}{{\mathcal A}})= \cG\cP({\mathcal A}). \] In particular, this holds if ${\mathcal B}=\Lambda\text{-}\operatorname{mod}$ or ${\mathcal B}=\Lambda\text{-}\operatorname{Mod}$ for an Iwanaga-Gorenstein ring $\Lambda$. \end{Corollary} For an abelian category ${\mathcal A}$ we let $\Omega^{\infty}({\mathcal A})$ denote the collection of objects $A\in {\mathcal A}$ such that there exists an exact sequence $0\to A\to Q_0\to Q_{-1}\to \cdots$ with $Q_i\in {\mathcal A}$ projective for all $i\leq 0$. \begin{Example}\label{Example:9} Let $k$ be a field, let $\Lambda_1$ be a finite-dimensional algebra over $k$, and let $\Lambda_2$ be a left coherent $k$-algebra. 
From Example \ref{Example:5} we have that \begin{align*} & \cG\cP(\relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}) = \{M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod} \mid \\ & \text{ } _{\Lambda_1}M\in \cG\cP(\Lambda_1\text{-}\operatorname{Mod}) \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{mod})\}. \end{align*} If $\Omega^{\infty}(\Lambda_1\text{-}\operatorname{Mod})\subset \cG\cP(\Lambda_1\text{-}\operatorname{Mod})$ or \begin{multline*} \cG\cP(\Lambda_2\text{-}\operatorname{mod})=\{M\in \Lambda_2\text{-}\operatorname{mod} \mid \operatorname{Ext}^1_{\Lambda_2}(M,M')=0 \\ \text{ for all } M' \text{ satisfying }\operatorname{pdim} M'<\infty \}, \end{multline*} then by Theorem \ref{Theorem:4} we have \[ \cG\cP((\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod})= \cG\cP(\relGproj{P}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}). \] In particular, the equality holds if $\Lambda_1$ or $\Lambda_2$ is Iwanaga-Gorenstein. This description of $\cG\cP((\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod})$ has previously been obtained in \cite{She16}, but it was only shown to hold under the assumption that $\Lambda_1$ is Iwanaga-Gorenstein. \end{Example} \begin{Example}\label{Example:10} Let $k$ be a field, let $\Lambda_1$ be a finite-dimensional algebra over $k$, and let $\Lambda_2$ be a $k$-algebra. From Example \ref{Example:6} we get that if $\Omega^{\infty}(\Lambda_1\text{-}\operatorname{Mod})\subset \cG\cP(\Lambda_1\text{-}\operatorname{Mod})$ or \begin{multline*} \cG\cP(\Lambda_2\text{-}\operatorname{Mod})=\{M\in \Lambda_2\text{-}\operatorname{Mod}\mid \operatorname{Ext}^1_{\Lambda_2}(M,M')=0 \\ \text{ for all } M' \text{ satisfying }\operatorname{pdim} M'<\infty \}, \end{multline*} then one of the criteria in Theorem \ref{Theorem:4} holds, and therefore \begin{align*} \cG\cP((\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod}) = & \{M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{Mod} \mid \text{ } _{\Lambda_1}M\in \cG\cP(\Lambda_1\text{-}\operatorname{Mod}) \\ &\text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{Mod})\}. \end{align*} In particular, this equality holds if $\Lambda_1$ or $\Lambda_2$ is Iwanaga-Gorenstein. \end{Example} Since $\cG\cP(\relGproj{P}{{\mathcal A}})$ is closed under direct summands and contains all the projective objects, the projectively stable category $\underline{\cG\cP(\relGproj{P}{{\mathcal A}})}$ is a thick triangulated subcategory of $\underline{\cG\cP({\mathcal A} )}$. \begin{Definition}\label{Definition:12} We define the \emphbf{Gorenstein discrepancy category} of $P$ to be the Verdier quotient $\Discr{P}{{\mathcal A}}= \underline{\cG\cP({\mathcal A} )}/\underline{\cG\cP(\relGproj{P}{{\mathcal A}})}$. \end{Definition} The triangulated category $\Discr{P}{{\mathcal A}}$ measures how far $\cG\cP(\relGproj{P}{{\mathcal A}})$ is from $\cG\cP({\mathcal A})$. The following example shows that the Gorenstein discrepancy category can be nonzero. 
\begin{Example}\label{Example:11} Let $k$ be a field, and let $\Lambda_1$ be the path algebra of the quiver \begin{equation}\label{eq:12} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=2.5em,column sep=2.0em,text height=1.5ex, text depth=0.25ex] { 1 & 2 \\ }; \path[->] (m-1-1) edge node[auto] {$\alpha$} (m-1-2) (m-1-2) edge[loop above] node[auto] {$\beta$} (m-1-2); \end{tikzpicture} \end{equation} with relations $\beta^2 = \beta \circ \alpha = 0$. Let $e_1$ and $e_2$ be the two primitive idempotents of $\Lambda_1$. Note that $\cG\cP(\Lambda_1 \text{-}\operatorname{mod})=\operatorname{Proj}(\Lambda_1 \text{-}\operatorname{mod})$. In fact, up to isomorphism the only indecomposable $\Lambda_1$-modules are the two simple modules $S_1$ and $S_2$ concentrated in vertex $1$ and $2$, the two projective modules $\Lambda_1 e_1$ and $\Lambda_1 e_2$, and the two injective modules $I_1=\operatorname{Hom}_k(e_1\Lambda_1,k)$ and $I_2=\operatorname{Hom}_k(e_2\Lambda_1,k)$. Furthermore, we have an equality $I_1=S_1$. Now since $I_1$ and $I_2$ are injective but not projective, they cannot be Gorenstein projective. Also, $S_2$ is not Gorenstein projective since there exists a non-split exact sequence \[ 0\to \Lambda_1 e_1\to I_2\to S_2\to 0, \] so that $\operatorname{Ext}^1_{\Lambda_1}(S_2,\Lambda_1 e_1)\neq 0$, whereas $\operatorname{Ext}^1_{\Lambda_1}(G,Q)=0$ for any Gorenstein projective module $G$ and any projective module $Q$. This shows that $\cG\cP(\Lambda_1 \text{-}\operatorname{mod})=\operatorname{Proj}(\Lambda_1 \text{-}\operatorname{mod})$. Now let $\Lambda_2$ be a finite-dimensional $k$-algebra. A module $M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}$ can be identified with a representation \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=2.5em,column sep=2.0em,text height=1.5ex, text depth=0.25ex] { M_1 & M_2 \\ }; \path[->] (m-1-1) edge node[auto] {$u$} (m-1-2) (m-1-2) edge[loop above] node[auto] {$v$} (m-1-2); \end{tikzpicture} \] where $M_1,M_2\in \Lambda_2\text{-}\operatorname{mod}$ and $u,v$ are morphisms of $\Lambda_2$-modules satisfying $v^2=0$ and $v\circ u=0$. Let \begin{align*} & f^*:=\operatorname{res}^{\Lambda_1\otimes_k \Lambda_2}_{\Lambda_2}\colon (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{mod} \to \Lambda_2 \text{-}\operatorname{mod} \\ & f_!:=\Lambda_1\otimes_k -\colon \Lambda_2 \text{-}\operatorname{mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{mod} \\ & \nu_1:=\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} -\colon (\Lambda_1\otimes_k \Lambda_2)\text{-} \operatorname{mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-} \operatorname{mod} \end{align*} and \begin{align*} & g^*:=\operatorname{res}^{\Lambda_1\otimes_k \Lambda_2}_{\Lambda_1}\colon (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{mod} \to \Lambda_1 \text{-}\operatorname{mod} \\ & g_!:=\Lambda_2\otimes_k -\colon \Lambda_1 \text{-}\operatorname{mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{mod} \\ & \nu_2:=\operatorname{Hom}_k(\Lambda_2,k)\otimes_{\Lambda_2} -\colon (\Lambda_1\otimes_k \Lambda_2)\text{-} \operatorname{mod} \to (\Lambda_1\otimes_k \Lambda_2) \text{-} \operatorname{mod} \end{align*} be two adjoint pairs with Nakayama functors as in Example \ref{finitely presented}. Let $P_1:=f_!\circ f^*$ and $P_2:=g_!\circ g^*$. 
We have that \begin{multline*} \cG\cP(\relGproj{P_1}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}) =\{M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod} \mid \\ \text{ } _{\Lambda_1}M\in \cG\cP(\Lambda_1\text{-}\operatorname{mod}) \text{ and } _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)\in \cG\cP(\Lambda_2\text{-}\operatorname{mod})\} \end{multline*} and \begin{multline*} \cG\cP(\relGproj{P_2}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}}) =\{M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod} \mid \\ \text{ } _{\Lambda_2}M\in \cG\cP(\Lambda_2\text{-}\operatorname{mod}) \text{ and } _{\Lambda_1}(\operatorname{Hom}_k(\Lambda_2,k)\otimes_{\Lambda_2} M)\in \cG\cP(\Lambda_1\text{-}\operatorname{mod})\} \end{multline*} as in Example \ref{Example:9}. Note that $_{\Lambda_1}M\in \cG\cP(\Lambda_1\text{-}\operatorname{mod})=\operatorname{Proj}(\Lambda_1\text{-}\operatorname{mod})$ if and only if the following holds: \begin{enumerate} \item $u$ is a monomorphism; \item $\operatorname{im} u \cap \operatorname{im} v = (0)$; \item $\operatorname{im} u \oplus \operatorname{im} v = \operatorname{Ker} v$. \end{enumerate} Also, a simple computation shows that \[ _{\Lambda_2}(\operatorname{Hom}_k(\Lambda_1,k)\otimes_{\Lambda_1} M)= \operatorname{Coker} u \oplus \operatorname{Coker} v. \] Hence, $M\in \cG\cP(\relGproj{P_1}{(\Lambda_1\otimes_k \Lambda_2) \text{-}\operatorname{mod}})$ if and only if the following holds: \begin{enumerate} \item $u\colon M_1\to M_2$ is a monomorphism; \item $\operatorname{im} u\cap \operatorname{im} v=(0)$; \item $\operatorname{im} u\oplus \operatorname{im} v = \operatorname{Ker} v$; \item $\operatorname{Coker} u, \operatorname{Coker} v\in \cG\cP(\Lambda_2\text{-}\operatorname{mod})$. \end{enumerate} Also, $M\in \cG\cP(\relGproj{P_2}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}})$ if and only if the following holds: \begin{enumerate} \item $M_1,M_2\in \cG\cP(\Lambda_2\text{-}\operatorname{mod})$; \item $1\otimes_{}u$ is a monomorphism; \item $\operatorname{im} (1\otimes_{}u)\cap \operatorname{im} (1\otimes_{}v) = (0)$; \item $\operatorname{im} (1\otimes_{}u)\oplus \operatorname{im} (1\otimes_{}v) = \operatorname{Ker} (1\otimes_{}v)$. \end{enumerate} where \begin{align*} & 1\otimes_{}u\colon \operatorname{Hom}_k(\Lambda_2,k)\otimes_{\Lambda_2}M_1\to \operatorname{Hom}_k(\Lambda_2,k)\otimes_{\Lambda_2}M_2 \\ & 1\otimes v\colon \operatorname{Hom}_k(\Lambda_2,k)\otimes_{\Lambda_2}M_2\to \operatorname{Hom}_k(\Lambda_2,k)\otimes_{\Lambda_2}M_2. \end{align*} Now set $\Lambda_2:=\Lambda_1\ensuremath{^{\mathrm{op}}}$, and let $Q_2 = \Lambda_2e_2$ and $J_2= \operatorname{Hom}_k(e_2\Lambda_2,k)$ be the projective and injective left $\Lambda_2$-modules corresponding to vertex 2. Furthermore, let $s\colon Q_2\to Q_2$ be a nonzero morphism satisfying $s^2=0$ (there exists a unique one up to scalars). Let $M\in (\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}$ be given by $M_1=0$, $M_2= Q_2$ and $v=s$. Under the isomorphism $\operatorname{Hom}_k(\Lambda_2,k)\otimes_{\Lambda_2}Q_2\cong J_2$ the map $s$ corresponds to a nonzero map $t\colon J_2\to J_2$ satisfying $t^2=0$. There exists a unique such map up to scalars, and it also satisfies $\operatorname{im} t = \operatorname{Ker} t$. This shows that $M\in \cG\cP(\relGproj{P_2}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}})$, and $M$ is therefore Gorenstein projective in $(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}$. 
On the other hand, we have that $\operatorname{im} s \neq \operatorname{Ker} s$, and hence $M\notin \cG\cP(\relGproj{P_1}{(\Lambda_1\otimes_k \Lambda_2)\text{-}\operatorname{mod}})$. This shows that the discrepancy category corresponding to $P_1$ is nonzero. \end{Example} We end this section with a result on the Gorenstein projective dimension of ${\mathcal A}$. \begin{Proposition}\label{Proposition:5} We have the inequality \[ \operatorname{gl.Gpdim} {\mathcal A} \leq \operatorname{gl.Gpdim} {\mathcal B} + \dim_{\relGproj{P}{{\mathcal A}}}{\mathcal A}. \] \end{Proposition} \begin{proof} It is obviously true if $\operatorname{gl.Gpdim} {\mathcal B}=\infty$ or $\dim_{\relGproj{P}{{\mathcal A}}}{\mathcal A}=\infty$. We therefore assume $\operatorname{gl.Gpdim} {\mathcal B} =n<\infty$ and $\dim_{\relGproj{P}{{\mathcal A}}}{\mathcal A} = m<\infty$. Let $A\in {\mathcal A}$ be arbitrary, and let \[ 0\to K\xrightarrow{i} Q_{n+m}\xrightarrow{s_{n+m}}Q_{n+m-1}\xrightarrow{s_{n+m-1}}\cdots \xrightarrow{s_2}Q_1\xrightarrow{s_1}A\to 0 \] be an exact sequence in ${\mathcal A}$ with $Q_j$ projective for $1\leq j\leq n+m$. Since $Q_j$ is in $\relGproj{P}{{\mathcal A}}$ and $\dim_{\relGproj{P}{{\mathcal A}}}A\leq m$, we get that $\operatorname{Ker} s_j\in \relGproj{P}{{\mathcal A}}$ for $j\geq m$. In particular, this implies that the sequence \begin{multline*} 0\to f^*\nu(K)\xrightarrow{f^*\nu (i)}f^*\nu (Q_{n+m})\xrightarrow{f^*\nu(s_{n+m})}\cdots \\ \xrightarrow{f^*\nu(s_{m+2})}f^*\nu (Q_{m+1})\xrightarrow{}f^*\nu (\operatorname{Ker} s_m)\to 0 \end{multline*} is exact. Since $f^*\nu (Q_j)$ is projective in ${\mathcal B}$ and $\operatorname{G.pdim} f^*\nu (\operatorname{Ker} s_m)\leq n$, we get that $f^*\nu (K)\in \cG\cP({\mathcal B} )$. Hence, $K\in \cG\cP(\relGproj{P}{{\mathcal A}}) = \cG\cP({\mathcal A} )$, and the claim follows. \end{proof} \section{Application to functor categories}\label{Application to functor categories} Our goal in this section is to compute $\cG\cP({\mathcal B}^{{\mathcal C}})$ in examples using the theory we have developed. \subsection{Preliminaries}\label{Subsection Preliminaries} Let $k$ be a commutative ring, and let ${\mathcal C}$ be a small $k$-linear category. Recall that a right ${\mathcal C}$-module is a $k$-linear functor ${\mathcal C}\ensuremath{^{\mathrm{op}}}\to k\text{-}\operatorname{Mod}$. We let $\operatorname{Mod}\text{-}{\mathcal C}$ denote the category of right ${\mathcal C}$-modules. A right ${\mathcal C}$-module $M$ is called \emphbf{finitely presented} if there exists an exact sequence \[ \oplus_{i=1}^m{\mathcal C}(-,c_i) \to \oplus_{j=1}^n{\mathcal C}(-,d_j)\to M\to 0 \] in $\operatorname{Mod} \text{-}{\mathcal C}$ for objects $c_i, d_j\in {\mathcal C}$. The category of finitely presented right ${\mathcal C}$-modules is denoted by $\operatorname{mod} \text{-}{\mathcal C}$. Dually, the categories of left ${\mathcal C}$-modules and of finitely presented left ${\mathcal C}$-modules are denoted by $\operatorname{Mod} \text{-}{\mathcal C}\ensuremath{^{\mathrm{op}}}$ and $\operatorname{mod} \text{-}{\mathcal C}\ensuremath{^{\mathrm{op}}}$, respectively. Let ${\mathcal B}$ be a $k$-linear abelian category, and let ${\mathcal B}^{{\mathcal C}}$ denote the category of $k$-linear functors from ${\mathcal C}$ to ${\mathcal B}$. 
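For orientation, it may help to keep in mind the simplest special case (stated here only as an illustration and not needed in what follows): if ${\mathcal C}$ has a single object $*$ with endomorphism algebra ${\mathcal C}(*,*)=\Lambda$, then a right ${\mathcal C}$-module is the same thing as a right $\Lambda$-module, so that \[ \operatorname{Mod}\text{-}{\mathcal C}\simeq \operatorname{Mod}\text{-}\Lambda \quad \text{and} \quad \operatorname{mod}\text{-}{\mathcal C}\simeq \operatorname{mod}\text{-}\Lambda, \] while an object of ${\mathcal B}^{{\mathcal C}}$ amounts to an object $B\in {\mathcal B}$ together with a $k$-algebra homomorphism $\Lambda\to \operatorname{End}_{{\mathcal B}}(B)$. 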
Up to isomorphism there exists a unique functor \begin{align*} - \otimes_{{\mathcal C}}- \colon (\operatorname{mod}\text{-} {\mathcal C}) \otimes {\mathcal B}^{{\mathcal C}} \to {\mathcal B} \end{align*} such that ${\mathcal C}(c,-)\otimes_{{\mathcal C}}F=F(c)$ and the induced functor $-\otimes_{{\mathcal C}} F\colon\operatorname{mod}\text{-} {\mathcal C} \to {\mathcal B}$ is right exact for all $F\in {\mathcal B}^{{\mathcal C}}$, see chapter 3 in \cite{Kel05} or \cite{OR70} for details. If ${\mathcal C}=k$ we get a functor \[ -\otimes_{k}- \colon (k\text{-}\operatorname{mod}) \otimes {\mathcal B} \to {\mathcal B} \] For $N\in {\mathcal C}\text{-}\operatorname{mod}$ and $B\in {\mathcal B}$ we have a functor $N\otimes_k B\in {\mathcal B}^{{\mathcal C}}$ given by $c\mapsto N(c)\otimes_k B$. If furthermore $M\in \operatorname{mod}\text{-}{\mathcal C}$ then we get a natural isomorphism \[ M\otimes_{{\mathcal C}}(N\otimes_k B) \cong (M\otimes_{{\mathcal C}}N)\otimes_k B \] see (3.23) in \cite{Kel05}. We use the same terminology as in \cite{DSS17} in the following definition. \begin{Definition}\label{Definition:12,5} Let ${\mathcal C}$ be a small $k$-linear category. \begin{enumerate} \item ${\mathcal C}$ is \emphbf{locally bounded} if for any object $c\in {\mathcal C}$ there are only finitely many objects in ${\mathcal C}$ mapping nontrivially in and out of $c$. This means that for each $c\in {\mathcal C}$ we have \begin{align*} & {\mathcal C}(c,c')\neq 0 \quad \text{for only finitely many } c'\in {\mathcal C} \\ & {\mathcal C}(c'',c)\neq 0 \quad \text{for only finitely many } c''\in {\mathcal C}. \end{align*} \item ${\mathcal C}$ is \emphbf{Hom-finite} if ${\mathcal C}(c,c')$ is a finitely generated projective $k$-module for all $c,c'\in {\mathcal C}$. \end{enumerate} \end{Definition} If ${\mathcal C}$ is locally bounded and Hom-finite, and $M\in \operatorname{Mod}\text{-}{\mathcal C}$ satisfies \begin{enumerate} \item $M(c)$ is a finitely generated projective $k$-module for all $c\in {\mathcal C}$ \item $M(c)\neq 0$ for only finitely many $c\in {\mathcal C}$ \end{enumerate} then it follows from \cite[Lemma 5.2.2]{Kva17} that $M\in \operatorname{mod}\text{-}{\mathcal C}$. Let $k(\operatorname{ob}\text{-} {\mathcal C})$ be the category with the same objects as ${\mathcal C}$, and with morphisms \[ k(\operatorname{ob} \text{-}{\mathcal C})(c_1,c_2) = \begin{cases} 0 & \text{if $c_1\neq c_2$},\\ k & \text{if $c_1=c_2$}. \end{cases} \] The functor category ${\mathcal B}^{k(\operatorname{ob}\text{-} {\mathcal C})}$ is just a product of copies of ${\mathcal B}$, indexed over the objects of ${\mathcal C}$. Let $i\colon k(\operatorname{ob}\text{-} {\mathcal C})\to {\mathcal C}$ be the inclusion. We have functors \begin{align*} & i_!\colon {\mathcal B}^{k(\operatorname{ob}\text{-} {\mathcal C})}\to {\mathcal B}^{{\mathcal C}} \quad i_!((B^c)_{c\in {\mathcal C}})= \bigoplus_{c\in {\mathcal C}}{\mathcal C}(c,-)\otimes_k B^c \\ & i^*\colon {\mathcal B}^{{\mathcal C}}\to {\mathcal B}^{k(\operatorname{ob}\text{-} {\mathcal C})} \quad \quad i^*(F)= (F(c))_{c\in {\mathcal C}} \\ & \nu\colon {\mathcal B}^{{\mathcal C}}\to {\mathcal B}^{{\mathcal C}} \quad \nu(F)= D({\mathcal C})\otimes_{{\mathcal C}}F \end{align*} where $D=\operatorname{Hom}_k(-,k)$ and $(D({\mathcal C})\otimes_{{\mathcal C}}F)(c)= D({\mathcal C}(c,-))\otimes_{{\mathcal C}}F$, see Subsection 5.3 in \cite{Kva17} for details. 
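In the one-object case ${\mathcal C}(*,*)=\Lambda$ considered above (with $\Lambda$ a finitely generated projective $k$-module, so that ${\mathcal C}$ is Hom-finite and trivially locally bounded), these formulas specialize to \[ i_!(B)=\Lambda\otimes_k B, \qquad i^*(F)=F(*), \qquad \nu(F)=D(\Lambda)\otimes_{\Lambda}F(*), \] so that for ${\mathcal B}=k\text{-}\operatorname{Mod}$ the functor $\nu$ is the classical Nakayama functor $D(\Lambda)\otimes_{\Lambda}-$ on $\Lambda\text{-}\operatorname{Mod}$; again, this special case is only meant to illustrate the constructions and is not used below. 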
\begin{Theorem}[Theorem 5.3.3 in \cite{Kva17}]\label{Nakayama functor on functor categories} Let ${\mathcal C}$ be a small, $k$-linear, locally bounded and Hom-finite category, let ${\mathcal B}$ be a $k$-linear abelian category, and let $i_!$, $i^*$ and $\nu$ be as above. Then $\nu$ is a Nakayama functor relative to $i_!\dashv i^*$. \end{Theorem} \begin{Theorem}[Theorem 5.3.4 in \cite{Kva17}]\label{Gorenstein locally bounded Hom-finite} Let ${\mathcal C}$ be a small, $k$-linear, locally bounded, and Hom-finite category, and let $i_!$, $i^*$ and $\nu$ be as above with ${\mathcal B}=k\text{-}\operatorname{Mod}$. Then \[ \sup_{c\in {\mathcal C}}(\operatorname{pdim} D({\mathcal C}(-,c)))<\infty \quad \text{and} \quad \sup_{c\in {\mathcal C}}(\operatorname{pdim} D({\mathcal C}(c,-)))<\infty \] if and only if the endofunctor $P=i_!\circ i^*\colon {\mathcal C}\text{-}\operatorname{Mod}\to {\mathcal C}\text{-}\operatorname{Mod}$ is Iwanaga-Gorenstein. In this case we have that \[ \sup_{c\in {\mathcal C}}(\operatorname{pdim} D({\mathcal C}(-,c))) = \sup_{c\in {\mathcal C}}(\operatorname{pdim} D({\mathcal C}(c,-))) \] and this number is equal to the Gorenstein dimension of the functor $P$. \end{Theorem} \subsection{Properties of locally bounded and Hom-finite categories} In this subsection we fix a small, $k$-linear, locally bounded and Hom-finite category ${\mathcal C}$ and a $k$-linear abelian category ${\mathcal B}$. Let $M$ be a finitely presented right ${\mathcal C}$-module. Since \[ ((M\otimes_{{\mathcal C}}-)\circ i_!\circ i^*)(F)\cong \bigoplus_{c\in {\mathcal C}}M(c)\otimes_k F(c) \] it follows that the functor $(M\otimes_{{\mathcal C}}-)\circ i_!\circ i^*\colon {\mathcal B}^{{\mathcal C}}\to {\mathcal B}$ is exact if $M(c)$ is a finitely generated projective $k$-module for all $c\in {\mathcal C}$. By Proposition \ref{Right and left derived functors} part \ref{Right and left derived functors:2} we have that $i_!$ is adapted to $M\otimes_{{\mathcal C}}-$, and hence the left derived functor \[ \operatorname{Tor}^{{\mathcal C}}_n(M,-):=L_n(M\otimes_{{\mathcal C}}-) \] exists. \begin{Lemma}\label{Lemma:23} Let $0\to M_3 \xrightarrow{f} M_2\xrightarrow{g} M_1\to 0$ be an exact sequence of finitely presented right ${\mathcal C}$-modules, and assume $M_i(c)$ is a finitely generated projective $k$-module for all $i$ and all $c\in {\mathcal C}$. Then there exists a long exact sequence of functors \begin{multline*} \cdots \to \operatorname{Tor}_{i+1}^{{\mathcal C}}(M_1,-)\to \operatorname{Tor}_i^{{\mathcal C}}(M_3,-)\to \operatorname{Tor}_i^{{\mathcal C}}(M_2,-)\to \operatorname{Tor}_{i}^{{\mathcal C}}(M_1,-)\to \\ \cdots \to \operatorname{Tor}_1^{{\mathcal C}}(M_1,-)\to (M_3\otimes_{{\mathcal C}}-) \xrightarrow{f\otimes 1} (M_2\otimes_{{\mathcal C}}-) \xrightarrow{g\otimes 1} (M_1\otimes_{{\mathcal C}}-)\to 0. \end{multline*} \end{Lemma} \begin{proof} Consider the sequence \[ (M_3\otimes_{{\mathcal C}}-)\xrightarrow{f\otimes 1} (M_2\otimes_{{\mathcal C}}-) \xrightarrow{g\otimes 1} (M_1\otimes_{{\mathcal C}}-) \] of functors. Evaluating at the object $i_!i^*(F)= \bigoplus_{c\in {\mathcal C}}{\mathcal C}(c,-)\otimes_k F(c)$ gives the exact sequence \[ 0\to \bigoplus_{c\in {\mathcal C}}M_3(c)\otimes_k F(c) \xrightarrow{f\otimes 1} \bigoplus_{c\in {\mathcal C}}M_2(c)\otimes_k F(c)\xrightarrow{g\otimes 1} \bigoplus_{c\in {\mathcal C}}M_1(c)\otimes_k F(c)\to 0. \] The claim follows therefore by Lemma \ref{Lemma:5**}. 
\end{proof} From now on we let $P_{{\mathcal B}^{{\mathcal C}}}=i_!\circ i^*$ denote the endofunctor on ${\mathcal B}^{{\mathcal C}}$ and $P_{{\mathcal C}\text{-}\operatorname{Mod}}$ the endofunctor on ${\mathcal C}\text{-}\operatorname{Mod}$ in Theorem \ref{Gorenstein locally bounded Hom-finite}. \begin{Lemma}\label{Lemma:24} Assume $P_{{\mathcal C}\text{-}\operatorname{Mod}}$ is $n$-Gorenstein. Then $P_{{\mathcal B}^{{\mathcal C}}}$ is $m$-Gorenstein where $m\leq n$. \end{Lemma} \begin{proof} Let $c\in {\mathcal C}$ be arbitrary. By Theorem \ref{Gorenstein locally bounded Hom-finite} there exists an exact sequence \[ 0\to M_n\to M_{n-1}\to \cdots \to M_1\to M_0\to D({\mathcal C}(c,-))\to 0 \] in $\operatorname{mod}\text{-} {\mathcal C}$ where $M_i$ are projective. By Lemma \ref{Lemma:23} and dimension shifting we get that \[ \operatorname{Tor}_j^{{\mathcal C}}(D({\mathcal C}(c,-)),-)\colon {\mathcal B}^{{\mathcal C}}\to {\mathcal B} \] is $0$ for all $j\geq n+1$. Since $c\in {\mathcal C}$ was arbitrary we get that \[ L_j\nu= \operatorname{Tor}_j^{{\mathcal C}}(D({\mathcal C}),-)\colon {\mathcal B}^{{\mathcal C}}\to {\mathcal B}^{{\mathcal C}} \] is $0$ for $j\geq n+1$. Dually, we also have that $R^j\nu^-=0$ for $j\geq n+1$. The claim follows. \end{proof} A small $k$-linear category ${\mathcal C}'$ is called \emphbf{left Gorenstein} if \[ \operatorname{gl.Gpdim}{\mathcal C}'\text{-}\operatorname{Mod}< \infty. \] Note that by \cite[Theorem 4.16]{Bel00} the category ${\mathcal C}'$ is left Gorenstein if and only if $\operatorname{gl.Gidim} {\mathcal C}'\text{-}\operatorname{Mod} <\infty$, where $\operatorname{gl.Gidim} {\mathcal C}'\text{-}\operatorname{Mod}$ is the global Gorenstein injective dimension of ${\mathcal C}'\text{-}\operatorname{Mod}$. Furthermore, if ${\mathcal C}'$ is left Gorenstein then \[ \operatorname{gl.Gpdim}{\mathcal C}'\text{-}\operatorname{Mod}= \operatorname{gl.Gidim} {\mathcal C}'\text{-}\operatorname{Mod} \] and ${\mathcal C}'$ is called \emphbf{left} $m$\emphbf{-Gorenstein} if this common number is $m$. \begin{Theorem}\label{Theorem:6} Let ${\mathcal C}'$ be a small $k$-linear category, and assume ${\mathcal C}'$ is left $m$-Gorenstein. Furthermore, assume the endofunctor $P_{{\mathcal C}\text{-}\operatorname{Mod}}$ is $n$-Gorenstein. Then the category ${\mathcal C}'\otimes {\mathcal C}$ is left $p$-Gorenstein where $p\leq m + n$. \end{Theorem} \begin{proof} This follows from Proposition \ref{Proposition:5}, Theorem \ref{Nakayama functor on functor categories}, and Lemma \ref{Lemma:24} applied to $({\mathcal C}'\otimes_k {\mathcal C})\text{-}\operatorname{Mod}= (\operatorname{Mod}\text{-}{\mathcal C}')^{{\mathcal C}}$. \end{proof} It would be interesting to know when the equality $p=m+n$ in Theorem \ref{Theorem:6} holds. \begin{Remark}\label{Remark:3} Following the conventions in \cite{DSS17}, we say that the category ${\mathcal C}$ has a Serre functor relative to $k$ if there exists an equivalence $S\colon {\mathcal C}\to {\mathcal C}$ together with a natural isomorphism \[ {\mathcal C}(c_1,c_2)\cong D({\mathcal C}(c_2, S(c_1))) \] for all $c_1,c_2\in {\mathcal C}$. This implies in particular that $P_{{\mathcal C}\text{-}\operatorname{Mod}}$ is $0$-Gorenstein. Theorem \ref{Theorem:6} therefore gives a partial generalization of \cite[Theorem 4.6]{DSS17}. 
\end{Remark} \subsection{Monic representations of a quiver}\label{Monic representations of a quiver} Let $Q=(Q_0,Q_1,s,t)$ be a quiver (not necessarily finite) such that for each vertex $i\in Q_0$ there are only finitely many paths starting in $i$ and only finitely many paths ending in $i$. Let ${\mathcal C}=kQ$ be the $k$-linearization of $Q$. Obviously, $kQ$ is a Hom-finite and locally bounded category. An object $F\in {\mathcal B}^{kQ}$ is a representation of $Q$ over ${\mathcal B}$, given by the datum $F=(F(i),f_{\alpha}, i\in Q_0, \alpha\in Q_1)$, where $F(i)\in {\mathcal B}$ and $f_{\alpha}\colon F(s(\alpha ))\to F(t(\alpha ))$ are morphisms in ${\mathcal B}$. A morphism \[ \phi\colon (F(i),f_{\alpha}, i\in Q_0, \alpha\in Q_1)\to (F'(i),g_{\alpha}, i\in Q_0, \alpha\in Q_1) \] is given by morphisms $\phi_i\colon F(i)\to F'(i)$ for each $i\in Q_0$, such that the diagram \begin{equation*} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=2.5em,column sep=5.0em,text height=1.5ex, text depth=0.25ex] { F(s(\alpha )) & F(t(\alpha )) \\ F'(s(\alpha )) & F'(t(\alpha )) \\}; \path[->] (m-1-1) edge node[auto] {$f_{\alpha}$} (m-1-2) (m-2-1) edge node[auto] {$g_{\alpha}$} (m-2-2) (m-1-1) edge node[auto] {$\phi_{s(\alpha )}$} (m-2-1) (m-1-2) edge node[auto] {$\phi_{t(\alpha )}$} (m-2-2); \end{tikzpicture} \end{equation*} commutes for each $\alpha\in Q_1$. We let $kQe_i$ and $e_ikQ$ denote the representable functors $kQ(i,-)$ and $kQ(-,i)$. \begin{Definition}\label{Definition:13} A representation $F=(F(i),f_{\alpha}, i\in Q_0, \alpha\in Q_1)$ is \emphbf{monic} if \[ (f_{\alpha})_{t(\alpha)=i}\colon \bigoplus_{t(\alpha)=i}F(s(\alpha))\to F(i) \] is a monomorphism for all $i\in Q_0$. \end{Definition} Let $\operatorname{Mon} (Q,{\mathcal B} )$ denote the subcategory of ${\mathcal B}^{kQ}$ consisting of the monic representations. It was considered in \cite{LZ13} for $Q$ a finite acyclic quiver, $k$ a field, and ${\mathcal B} =\Lambda\text{-}\operatorname{mod}$ the category of finite dimensional modules over a finite dimensional algebra $\Lambda$. It was also considered in \cite{EHS13} for $Q$ a left rooted quiver and ${\mathcal B} = \Lambda\text{-}\operatorname{Mod}$ for $\Lambda$ an arbitrary ring. In both cases it is used to give a description of the Gorenstein projective objects in ${\mathcal B}^{kQ}$. We recover this description using the theory we have developed. \begin{Proposition}\label{Proposition:9} The following holds: \begin{enumerate} \item\label{Proposition:9,1} The endofunctor $P_{kQ\text{-}\operatorname{Mod}}$ is $m$-Gorenstein where $m\leq 1$; \item\label{Proposition:9,2} A representation $F\in {\mathcal B}^{kQ}$ is monic if and only if it is Gorenstein $P_{{\mathcal B}^{kQ}}$-projective. \end{enumerate} \end{Proposition} \begin{proof} Fix a vertex $i\in Q_0$, and let $S_i\in \operatorname{Mod}\text{-}kQ$ be the representation $$ S_i(j)= \begin{cases} k & \text{if } i=j \\ 0 & \text{if } i\neq j. \end{cases} $$ We have a projective resolution of $S_i$ given by \begin{equation}\label{Equation:5} 0\to \bigoplus_{t(\alpha) =i}e_{s(\alpha )}kQ\to e_i kQ\to S_i\to 0 \end{equation} where the morphism $e_{s(\alpha )}kQ\to e_i kQ$ is induced from $\alpha\colon s(\alpha)\to i$. This shows that $\operatorname{pdim} S_i\leq 1$ for all $i\in Q_0$. 
Also, $D(kQe_i)$ has a filtration \begin{equation}\label{Equation:6} 0=M_0\subset M_1\subset \cdots \subset M_n=D(kQe_i) \end{equation} in $\operatorname{mod}\text{-} kQ$ such that $M_{i+1}/M_i\cong S_{j_i}$ for vertices $j_0,j_1,\cdots j_{n-1}\in Q_0$. Therefore, we get that $\operatorname{pdim} D(kQe_i) \leq 1$ for all $i\in Q_0$. Dually, the same argument applied to $Q\ensuremath{^{\mathrm{op}}}$ shows that $\operatorname{pdim} D(e_ikQ)\leq 1$ for all $i\in Q_0$. This proves that the endofunctor $P_{kQ\text{-}\operatorname{Mod}}$ is $m$-Gorenstein where $m\leq 1$. We now describe the objects which are Gorenstein $P_{{\mathcal B}^{kQ}}$-projective. By Lemma \ref{Lemma:24} we know that $P_{{\mathcal B}^{kQ}}$ is Iwanaga-Gorenstein of dimension $0$ or $1$. Hence, by Theorem \ref{Theorem:2} and Theorem \ref{Theorem:2.5} part \ref{Theorem:2.5:2} the Gorenstein $P_{{\mathcal B}^{kQ}}$-projective functors are precisely the functors $F\in {\mathcal B}^{kQ}$ such that \[ \operatorname{Tor}^{kQ}_1(D(kQe_i),F)=0 \] for all $i\in Q_0$. Now for all $i\in Q_0$ we have an exact sequence \begin{equation}\label{Equation:7} 0\to S_i \to D(kQe_i) \to \bigoplus_{t(\alpha) =i}D(kQe_{s(\alpha )})\to 0 \end{equation} obtained by applying $D(-)$ to the sequence \eqref{Equation:5} with $Q$ replaced by $Q\ensuremath{^{\mathrm{op}}}$. Hence, we get that \begin{multline*} \operatorname{Tor}^{kQ}_1(D(kQe_i),F)=0 \text{ }\forall i\in Q_0 \implies \operatorname{Tor}^{kQ}_1(S_i,F)=0 \text{ }\forall i\in Q_0 \end{multline*} by tensoring $F$ with the sequence in \eqref{Equation:7} and using Lemma \ref{Lemma:23}. Conversely, from the filtration \eqref{Equation:6} we get that \begin{multline*} \operatorname{Tor}^{kQ}_1(S_i,F)=0 \text{ }\forall i\in Q_0 \implies \operatorname{Tor}^{kQ}_1(D(kQe_i),F)=0 \text{ }\forall i\in Q_0 \end{multline*} by repeated use of Lemma \ref{Lemma:23}. Hence, $F$ is Gorenstein $P_{{\mathcal B}^{kQ}}$-projective if and only if $\operatorname{Tor}^{kQ}_1(S_i,F)=0$ for all $i\in Q_0$. Tensoring the sequence \eqref{Equation:5} with $F$ gives the exact sequence \begin{equation}\label{Equation:8} 0\to \operatorname{Tor}^{kQ}_1(S_i,F)\to \bigoplus_{t(\alpha) =i}F(s(\alpha))\xrightarrow{(f_{\alpha})_{t(\alpha)=i}} F(i)\to S_i\otimes_{kQ}F\to 0. \end{equation} Hence, $F$ is Gorenstein $P_{{\mathcal B}^{kQ}}$-projective if and only if it is monic. \end{proof} \begin{Proposition}\label{Proposition:10} Assume ${\mathcal B}$ has enough projectives. The following holds: \begin{enumerate} \item\label{Proposition:10,1} A functor $F=(F(i),f_{\alpha}, i\in Q_0, \alpha\in Q_1)\in {\mathcal B}^{kQ}$ is Gorenstein projective if and only if it is monic and the cokernel of the map \[ (f_{\alpha})_{t(\alpha)=i}\colon \bigoplus_{t(\alpha)=i}F(s(\alpha))\to F(i) \] is Gorenstein projective in ${\mathcal B}$ for all $i\in Q_0$; \item\label{Proposition:10,2} If $F$ is Gorenstein projective in ${\mathcal B}^{kQ}$, then $F(i)$ is Gorenstein projective in ${\mathcal B}$ for all $i\in Q_0$. \end{enumerate} \end{Proposition} \begin{proof} We know by Corollary \ref{Gorenstein adjoint pairs lifts Gorenstein projectives} and Proposition \ref{Proposition:9} that $F$ is Gorenstein projective if and only if it is monic and $D(kQe_i)\otimes_{kQ}F\in \cG\cP({\mathcal B})$ for all $i\in Q_0$. Assume $F$ is monic, and consider the exact sequence \eqref{Equation:7}. 
Tensoring with $F$ gives an exact sequence \[ 0\to S_i\otimes_{kQ}F \to D(kQe_i)\otimes_{kQ}F \to (\bigoplus_{t(\alpha) =i}D(kQe_{s(\alpha )}))\otimes_{kQ}F\to 0 \] since $\operatorname{Tor}^{kQ}_1(\bigoplus_{t(\alpha) =i}D(kQe_{s(\alpha )}),F) =0$. Hence, we get that \[ D(kQe_i)\otimes_{kQ}F\in \cG\cP({\mathcal B}) \text{ }\forall i\in Q_0 \implies S_i\otimes_{kQ} F\in \cG\cP({\mathcal B}) \text{ }\forall i\in Q_0 \] since $\cG\cP({\mathcal B} )$ is closed under kernels of epimorphisms. Also, from the filtration in \eqref{Equation:6} we have an exact sequence \[ 0\to M_i\to M_{i+1}\to S_{j_i}\to 0 \] for each $0\leq i\leq n-1$. Tensoring this with $F$ gives an exact sequence \[ 0\to M_i\otimes_{kQ}F\to M_{i+1}\otimes_{kQ}F\to S_{j_i}\otimes_{kQ}F\to 0 \] since $\operatorname{Tor}^{kQ}_1(S_{j_i},F)=0$. Therefore, \[ S_i\otimes_{kQ}F\in \cG\cP({\mathcal B}) \text{ }\forall i\in Q_0 \implies D(kQe_i)\otimes_{kQ} F\in \cG\cP({\mathcal B}) \text{ }\forall i\in Q_0 \] since $\cG\cP({\mathcal B} )$ is closed under extensions. Hence, a functor $F\in {\mathcal B}^{kQ}$ is Gorenstein projective if and only if it is monic and $S_i\otimes_{kQ}F\in \cG\cP({\mathcal B})$ for all $i\in Q_0$. By the exact sequence in \eqref{Equation:8} we see that $S_i\otimes_{kQ}F$ is the cokernel of the map \[ (f_{\alpha})_{t(\alpha)=i}\colon \bigoplus_{t(\alpha)=i}F(s(\alpha))\to F(i) \] and the claim follows. For statement \ref{Proposition:10,2}, note that $e_ikQ$ has a filtration $0=M_0\subset M_1\subset \cdots \subset M_{n'}=e_ikQ$ such that $M_{i+1}/M_i\cong S_{j'_i}$ for $j'_0,j'_1,\cdots, j'_{n'-1}\in Q_0$. Hence, if $F$ is Gorenstein projective, then $e_ikQ\otimes_{kQ}F\cong F(i)$ is Gorenstein projective for all $i\in Q_0$. This proves the claim. \end{proof} \subsection{More examples}\label{More examples} In this subsection we calculate the Gorenstein projective objects in examples of representations of quivers with relations over ${\mathcal B}$. \begin{Example}\label{Example:12} Let ${\mathcal C}$ be the $k$-linear category generated by the quiver \[ \cdots \xrightarrow{d_{i+2}} c_{i+1}\xrightarrow{d_{i+1}} c_{i}\xrightarrow{d_{i}} \cdots \] with vertex set $\{c_i \mid i\in \mathbb{Z}/n \mathbb{Z}\}$ and relations $d_i\circ d_{i+1}=0$. The category ${\mathcal B}^{{\mathcal C}}$ can be identified with the category of $n$-periodic complexes over ${\mathcal B}$ (for $n=0$ this is just unbounded complexes over ${\mathcal B}$). It was shown in \cite[Proposition 4.12]{DSS17} that ${\mathcal C}$ has a relative Serre functor $S$ given by $S(c_i)=c_{i-1}$ and $S(d_i)=d_{i-1}$. Therefore, the endofunctor $P_{{\mathcal C}\text{-}\operatorname{Mod}}$ is $0$-Gorenstein. Hence, by Theorem \ref{Theorem:2.5} we get that $\relGproj{P_{{\mathcal B}^{{\mathcal C}}}}{{\mathcal B}^{{\mathcal C}}} = {\mathcal B}^{{\mathcal C}}$. If ${\mathcal B}$ has enough projectives, then the Gorenstein projective objects in ${\mathcal B}^{{\mathcal C}}$ are precisely the functors $F$ such that \[ D{\mathcal C}(c_{i+1},-)\otimes_{{\mathcal C}}F \cong {\mathcal C}(-,c_{i})\otimes_{{\mathcal C}}F \cong F(c_{i})\in \cG\cP({\mathcal B} ) \] for all $c_i\in {\mathcal C}$. Note that for $n=0$ this recovers the description obtained in \cite[Theorem 2.2]{YL11}. 
Also, if we put ${\mathcal X} = \relGproj{P_{{\mathcal B}^{{\mathcal C}}}}{{\mathcal B}^{{\mathcal C}}}$ and ${\mathcal F} = \operatorname{Proj}({\mathcal B})$ in Theorem \ref{Theorem:3} we recover the result that the collection of $n$-periodic complexes over ${\mathcal B}$ with projective components forms a Frobenius exact category. \end{Example} \begin{Example}\label{Example:13} Let ${\mathcal C}$ be the $k$-linear category generated by the quiver \[ c_n\xrightarrow{d_n}c_{n-1}\xrightarrow{d_{n-1}}\cdots \xrightarrow{d_{1}}c_0 \] with relations $d_{i}\circ d_{i+1}=0$ for $1\leq i\leq n-1$. Then $D({\mathcal C}(c_i,-))\cong {\mathcal C}(-,c_{i-1})$ in $\operatorname{Mod}\text{-}{\mathcal C}$ for $1\leq i\leq n$ and $D({\mathcal C}(-,c_i))\cong {\mathcal C}(c_{i+1},-)$ in ${\mathcal C}\text{-} \operatorname{Mod}$ for $0\leq i\leq n-1$. Furthermore, we have an exact sequence \begin{equation}\label{Equation:9} 0\to {\mathcal C}(-,c_n)\to {\mathcal C}(-,c_{n-1})\to \cdots \to {\mathcal C}(-,c_{0})\to D({\mathcal C}(c_{0},-))\to 0 \end{equation} in $\operatorname{Mod}\text{-} {\mathcal C}$ and an exact sequence \[ 0\to {\mathcal C}(c_{0},-)\to {\mathcal C}(c_{1},-)\to \cdots \to {\mathcal C}(c_n,-)\to D({\mathcal C}(-,c_n))\to 0 \] in ${\mathcal C}\text{-} \operatorname{Mod}$. Hence, the endofunctor $P_{{\mathcal C}\text{-}\operatorname{Mod}}$ is $n$-Gorenstein. Let $F\in {\mathcal B}^{{\mathcal C}}$ be a functor. We can identify $F$ with a complex \[ F(c_n)\xrightarrow{f_n} F(c_{n-1})\xrightarrow{f_{n-1}} \cdots \xrightarrow{f_{1}} F(c_0) \] with $n+1$ terms. Tensoring the sequence \eqref{Equation:9} with $F$ gives a sequence \[ F(c_n)\xrightarrow{f_n} F(c_{n-1})\xrightarrow{f_{n-1}} \cdots \xrightarrow{f_{1}} F(c_0) \to D{\mathcal C}(c_{0},-)\otimes_{{\mathcal C}}F. \] By Theorem \ref{Theorem:2.5} part \ref{Theorem:2.5:2} we get that $F$ is Gorenstein $P_{{\mathcal B}^{{\mathcal C}}}$-projective if and only if $\operatorname{Tor}^{{\mathcal C}}_j(D{\mathcal C}(c_{0},-),F)=0$ for all $1\leq j\leq n$. Since \begin{align*} & \operatorname{Tor}^{{\mathcal C}}_j(D{\mathcal C}(c_{0},-),F)= \operatorname{Ker} f_{j}/\operatorname{im} f_{j+1} \quad \text{for} \quad 1\leq j\leq n-1 \\ & \operatorname{Tor}^{{\mathcal C}}_n(D{\mathcal C}(c_{0},-),F)=\operatorname{Ker} f_n \end{align*} it follows that $F$ is Gorenstein $P_{{\mathcal B}^{{\mathcal C}}}$-projective if and only if the sequence \begin{equation}\label{Equation:10} 0\to F(c_n)\xrightarrow{f_n} F(c_{n-1})\xrightarrow{f_{n-1}} \cdots \xrightarrow{f_{1}} F(c_0) \end{equation} is exact. Now assume ${\mathcal B}$ has enough projectives. Then $\cG\cP(\relGproj{P_{{\mathcal B}^{{\mathcal C}}}}{{\mathcal B}^{{\mathcal C}}}) = \cG\cP({\mathcal B}^{{\mathcal C}})$ by Corollary \ref{Gorenstein adjoint pairs lifts Gorenstein projectives}. Therefore, the Gorenstein projective objects in ${\mathcal B}^{{\mathcal C}}$ are precisely the functors $F$ such that the sequence \eqref{Equation:10} is exact and \begin{align*} & D({\mathcal C}(c_i,-))\otimes_{{\mathcal C}}F\cong F(c_{i-1})\in \cG\cP ({\mathcal B}) \quad \text{for } 1\leq i \leq n \\ & D({\mathcal C}(c_0,-))\otimes_{{\mathcal C}}F \cong \operatorname{Coker} f_{1}\in \cG\cP({\mathcal B}). 
\end{align*} \end{Example} \begin{Example}\label{Example:14} Let ${\mathcal C}$ be the $k$-linear category generated by the quiver \begin{equation*} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=2.5em,column sep=5.0em,text height=1.5ex, text depth=0.25ex] { c_1 & c_2 \\ c_3 & c_4 \\}; \path[->] (m-1-1) edge node[auto] {$\alpha$} (m-1-2) (m-2-1) edge node[auto] {$\gamma$} (m-2-2) (m-1-1) edge node[auto] {$\mu$} (m-2-1) (m-1-2) edge node[auto] {$\beta$} (m-2-2); \end{tikzpicture} \end{equation*} with relations $\beta \circ \alpha = \gamma \circ \mu$. A functor $F\in {\mathcal B}^{{\mathcal C}}$ is just a commutative diagram in ${\mathcal B}$. Note that ${\mathcal C}(-,c_4)\cong D{\mathcal C}(c_1,-)$. Also, there are exact sequences \begin{align*} & 0\to {\mathcal C}(-,c_3)\xrightarrow{\gamma\circ -} {\mathcal C}(-,c_4)\to D{\mathcal C}(c_2,-)\to 0 \\ & 0\to {\mathcal C}(-,c_2)\xrightarrow{\beta\circ -} {\mathcal C}(-,c_4)\to D{\mathcal C}(c_3,-)\to 0 \end{align*} and \begin{multline*} 0\to {\mathcal C}(-,c_1)\xrightarrow{\begin{bmatrix}-(\alpha\circ -) \\ \mu\circ -\end{bmatrix}} {\mathcal C}(-,c_2)\oplus {\mathcal C}(-,c_3)\xrightarrow{\begin{bmatrix}\beta\circ -& \gamma\circ -\end{bmatrix}} {\mathcal C}(-,c_4) \\ \to D{\mathcal C}(c_4,-)\to 0 \end{multline*} in $\operatorname{Mod}\text{-} {\mathcal C}$. Since ${\mathcal C}$ is isomorphic to ${\mathcal C}\ensuremath{^{\mathrm{op}}}$ the same holds for ${\mathcal C}\ensuremath{^{\mathrm{op}}}$. Hence, the endofunctor $P_{{\mathcal C}\text{-}\operatorname{Mod}}$ is $2$-Gorenstein. By Theorem \ref{Theorem:2.5} part \ref{Theorem:2.5:2} we get that $F\in {\mathcal B}^{{\mathcal C}}$ is Gorenstein $P_{{\mathcal B}^{{\mathcal C}}}$-projective if and only if $\operatorname{Tor}^{{\mathcal C}}_j(D({\mathcal C}(c_i,-)),F)=0$ for $1\leq j\leq 2$ and $1\leq i\leq 4$. Tensoring $F$ with the exact sequences above shows that $F\in {\mathcal B}^{{\mathcal C}}$ is Gorenstein $P_{{\mathcal B}^{{\mathcal C}}}$-projective if and only if \[ F(c_3)\xrightarrow{F(\gamma)} F(c_4) \quad \text{and} \quad F(c_2)\xrightarrow{F(\beta)} F(c_4) \] are monomorphisms and the diagram \begin{equation*} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix(m) [matrix of math nodes,row sep=2.5em,column sep=5.0em,text height=1.5ex, text depth=0.25ex] { F(c_1) & F(c_2) \\ F(c_3) & F(c_4) \\}; \path[->] (m-1-1) edge node[auto] {$F(\alpha)$} (m-1-2) (m-2-1) edge node[auto] {$F(\gamma)$} (m-2-2) (m-1-1) edge node[auto] {$F(\mu)$} (m-2-1) (m-1-2) edge node[auto] {$F(\beta)$} (m-2-2); \end{tikzpicture} \end{equation*} is a pullback square. If ${\mathcal B}$ has enough projectives, then a functor $F\in {\mathcal B}^{{\mathcal C}}$ is Gorenstein projective if and only if it is Gorenstein $P_{{\mathcal B}^{{\mathcal C}}}$-projective and \begin{align*} & D{\mathcal C}(c_1,-)\otimes_{{\mathcal C}}F \cong F(c_4)\in \cG\cP({\mathcal B}) \\ & D{\mathcal C}(c_2,-)\otimes_{{\mathcal C}}F\cong \operatorname{Coker} (F(c_3)\xrightarrow{F(\gamma)}F(c_4))\in \cG\cP({\mathcal B}) \\ & D{\mathcal C}(c_3,-)\otimes_{{\mathcal C}}F \cong \operatorname{Coker} (F(c_2)\xrightarrow{F(\beta)}F(c_4))\in \cG\cP({\mathcal B}) \\ & D{\mathcal C}(c_4,-)\otimes_{{\mathcal C}}F \cong \operatorname{Coker} (F(c_2)\oplus F(c_3)\xrightarrow{\begin{bmatrix}F(\beta )& F(\gamma )\end{bmatrix}}F(c_4))\in \cG\cP({\mathcal B}). \end{align*} \end{Example}
\section{Introduction} \label{sec1} \newabbreviation{lbm}{LBM}{lattice Boltzmann method} \newabbreviation{lb}{LB}{lattice Boltzmann} \newabbreviation{ns}{NS}{Navier-Stokes} \newabbreviation{pde}{PDE}{partial differential equation} Mandelic acid is an aromatic alpha-hydroxy acid, with formula ${\rm C}_8{\rm H}_8{\rm O}_3$. It is a white crystalline powder that is soluble in water and most common organic solvents. It has a density of 1.3 g/cm$^3$ and a molecular weight of 152.15 g/mol. It is particularly important in the pharmaceutical industry for the organic synthesis of drug components. For instance, an ester of mandelic acid is essential to produce homatropine, used in eye drops as both a cycloplegic and a mydriatic agent. In addition, it is used in the production of face-peeling products~\cite{taylor1999summary}, urinary tract infection treatments~\cite{brittain2002mandelic}, and oral antibiotics~\cite{sharon2018mandelic}. In toxicological studies, the concentration of styrene or styrene oxide is quantified by converting it into mandelic acid. \begin{figure}[H] \centering \includegraphics[width= 0.4\textwidth]{enantiomers.pdf} \caption{Molecular structure of mandelic acid enantiomers.} \label{aass} \end{figure} Mandelic acid exists in two enantiomeric forms, (S)- and (R)-mandelic acid, as shown in Fig.~\ref{aass}. Most practical applications require the enantiopure form~\cite{brittain2002mandelic}. Amongst the different approaches to separate enantiomers, crystallization processes such as classical resolution and preferential crystallization are frequently used~\cite{lorenz2014processes}. In such separation processes, the properties of the crystalline products such as crystal size and shape are largely determined by the growth process, which in turn depends on the crystallization conditions. In the pharmaceutical industry, the resulting crystal morphology is often of great importance, since it influences the rate of dissolution and the absorption of drugs. Compressibility, hardness, and flow properties of the drug are also strongly dependent on the crystal form~\cite{higgins1994numerical}. Accurate investigations regarding crystal growth are difficult because the growth process varies greatly even under similar conditions: \emph{crystal growth dispersion} is the term used to describe the fact that crystals, although initially of the same shape and size, can rapidly grow differently even under the same growth conditions~\cite{srisanga2015crystal,ma2008crystal}. The main reason for these growth differences is probably related to minute stresses and deformations, leading in turn to minimal structural differences~\cite{hofmann2004kristallisation}. Other reasons are accidental deposits, or deposits of foreign bodies, on the growing crystals' surface, which are incorporated into the crystal and ultimately lead to different growth. A proper understanding of growth conditions and their effect on the final product is therefore essential to design and scale up production units for enantiopure substances.\\ Many experimental studies have been conducted concerning crystallization-based enantioseparation processes, including the growth kinetics of mandelic acid, e.g.~\cite{alvarez2004online,lorenz2014processes, coquerel2006preferential, gansch2021continuous, gou2012investigation, perlberg2005crystal, srisanga2015crystal,codan2013growth}. However, numerical studies regarding the crystal habit and size of enantiopure (S)-mandelic acid remain scarce.
The phase-field method has proven to be a powerful tool for modeling the structural evolution of materials and crystals. It is now widely used for modeling solidification~\cite{boettinger2002phase,nestler2002phase} and grain growth~\cite{chen1994computer,takaki2016two,karma1998quantitative,tourret2017grain}. The phase-field approach has also been used in the context of the lattice Boltzmann method, now widely recognized as an efficient alternative to classical tools, to simulate solidification processes~\cite{younsi2016anisotropy,lin2014three,wang2019brief,rojas2015phase,schiedung2020simulation,vakili2020multi,m2019non}. This approach can reproduce numerically the solid-liquid interface interactions and the hydrodynamic effects affecting the habits of growing crystals~\cite{medvedev2005lattice,Henniges2017,medvedev2006influence,sakane2018three,chakraborty2007enthalpy,tan2021modeling}.\\ In this contribution, we study the growth of a single (S)-mandelic acid crystal under different conditions (supersaturation, initial crystal size, flow rate) with a previously developed and validated lattice Boltzmann-based numerical model~\cite{tan2021modeling}. All simulations presented in this article are carried out using the in-house solver ALBORZ~\cite{hosseini2020development}. The obtained results are validated and compared with experimental data. After the numerical procedure has been validated in a standalone manner via a self-convergence test, it is used to model the growth of a single (S)-mandelic acid rhombic seed at a temperature and supersaturation corresponding to the experimental settings; this provides a further, independent validation of the numerical model. The solver is then used to investigate the effect of different parameters such as supersaturation and initial seed size on crystal growth. Finally, a detailed study of the interaction between forced convection and crystal growth is presented. Based on the analysis of the results, a simple solution is proposed to improve symmetrical growth in the presence of convection in the single-crystal cell used for all experimental investigations. \section{Numerical method} \subsection{Diffuse-interface formulation: governing equations} In the phase-field method, solid growth dynamics are expressed via a non-dimensional order parameter, $\phi$, going from (+1) in the solid to (-1) in the pure liquid phase. The space/time evolution equations are written as~\cite{jeong2001phase,beckermann1999modeling}: \begin{multline} \tau_0 a_s^2(\textbf{n}) \frac{\partial \phi}{\partial t} = W_0^2 \bm{\nabla} \cdot \left(a_s^2(\textbf{n}) \bm{\nabla} \phi\right) + W_0^2 \bm{\nabla} \cdot \left (|\bm{\nabla} \phi|^2 \frac{\partial[a_s(\textbf{n})^2]}{\partial \bm{\nabla} \phi}\right )\\ + (\phi - \phi^3) + \lambda U (1 - \phi^2)^2 , \label{a} \end{multline} and: \begin{equation} \frac{\partial U}{\partial t} + \left( \frac{1-\phi}{2}\right) \bm{u} \cdot \bm{\nabla} U =D \bm{\nabla} \cdot \left(q(\phi) \bm{\nabla} U \right) - \frac{\partial \phi}{\partial t}. \label{b} \end{equation} Here $\tau = \tau_0 a_s^2(\textbf{n})$. The coefficient $\lambda$ describes the strength of the coupling between the phase-field and the supersaturation field, $U$. The parameter $\tau_0$ denotes the characteristic time and $W_0$ the characteristic width of the diffuse interfaces. The latent heat of melting is written as $L$, and the specific heat capacity $c_p$ is assumed to be the same in the two phases (symmetric model).
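To make the structure of Eqs.~(\ref{a}) and (\ref{b}) more tangible before introducing the lattice Boltzmann discretization, the following minimal Python sketch integrates the two coupled fields with an explicit Euler/central-difference scheme in one dimension, in the isotropic limit $a_s=1$ and without flow ($\bm{u}=0$). It is only meant to illustrate how the double-well term, the coupling term $\lambda U (1-\phi^2)^2$ and the source $-\partial_t \phi$ interact; the grid, time step and parameter values below are placeholder choices for the sketch and not the ones used for the simulations reported in this work.
\begin{verbatim}
import numpy as np

# Illustrative parameters only -- not the values used in this work
nx, dx = 400, 0.02          # number of grid points and spacing
dt = 5.0e-5                 # explicit time step (below the diffusive stability limit)
nsteps = 40000
tau0, W0 = 1.0, 0.2         # characteristic time and interface width
lam, D = 3.0, 1.0           # coupling coefficient and diffusion coefficient
U0 = 0.3                    # initial supersaturation in the liquid (placeholder)

x = np.arange(nx) * dx
phi = np.tanh((2.0 - x) / W0)          # diffuse solid/liquid interface near x = 2
U = np.where(phi > 0.0, 0.0, U0)       # no supersaturation inside the solid

def lap(f):
    """Second-order central Laplacian with zero-flux boundaries."""
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    g[0], g[-1] = g[1], g[-2]
    return g

for step in range(nsteps):
    # Eq. (a) in the isotropic limit a_s = 1:
    #   tau0 * dphi/dt = W0^2 * lap(phi) + (phi - phi^3) + lam*U*(1 - phi^2)^2
    dphi_dt = (W0**2 * lap(phi) + (phi - phi**3)
               + lam * U * (1.0 - phi**2)**2) / tau0
    # Eq. (b) with u = 0 and q(phi) = 1 - phi (one-sided model):
    #   dU/dt = D * d/dx( q(phi) * dU/dx ) - dphi/dt
    q = 1.0 - phi
    dU_dt = D * np.gradient(q * np.gradient(U, dx), dx) - dphi_dt
    phi += dt * dphi_dt
    U += dt * dU_dt

print("interface position after t = %.2f s: x = %.3f"
      % (nsteps * dt, x[np.argmin(np.abs(phi))]))
\end{verbatim}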
The quantity $\textbf{n} = - \frac{\bm{\nabla} \phi}{\left| \bm{\nabla} \phi \right|}$ is the unit vector normal to the crystal interface -- pointing from solid to fluid, while $a_s(\textbf{n})$ is the surface tension anisotropy function. In the context of hexagonal mandelic acid crystal growth, this quantity is defined as: \begin{equation} a_s(\textbf{n}) = 1 + \epsilon_{xy} \cos(6 \theta), \end{equation} \newabbreviation{rhs}{RHS}{right hand side} \newabbreviation{lhs}{LHS}{left hand side} where $\theta = \arctan(n_y/n_x)$. The numerical parameter $\epsilon_{xy}$ characterizes the anisotropy strength, and is set in the present study to $\epsilon_{xy} = 0.05$~\cite{karma1996phase}. The term $(\phi - \phi^3)$ is the derivative of the double-well potential. The last term in Eq.~(\ref{a}) is a source term accounting for the coupling between supersaturation $U$ and order parameter $\phi$. There, $(1 - \phi^2)^2$ is an interpolation function minimizing the bulk potential at $\phi = \pm 1$.\\ In Eq.~(\ref{b}), $\bm{u}$ denotes the local fluid velocity while $q(\phi) = (1 - \phi)$ is a function canceling out diffusion within the solid. As a consequence, solute transport is assumed to take place only within the fluid phase (one-sided model). The parameter $D$ is the diffusion coefficient of (S)-mandelic acid in water. \subsection{Lattice Boltzmann formulation} \paragraph{Flow field solver} The flow field behavior, described by the incompressible \gls{ns} and continuity equations, is modeled using the classical isothermal \gls{lb} formulation consisting of the well-known stream-collide operators: \begin{equation} f_\alpha \left( \bm{r}+\bm{c}_\alpha \delta_t, t+\delta_t\right) - f_\alpha \left( \bm{r}, t\right) = \delta_t \Omega_\alpha\left( \bm{r}, t\right) + \delta_t F_{\alpha}, \end{equation} where $f_\alpha$ and $\bm{c}_\alpha$ are the discrete populations and corresponding particle velocity vectors, $\bm{r}$ and $t$ are the position in physical space and time, and $\delta_t$ is the time-step size. $F_\alpha$ represents contributions from external body forces defined as~\hbox{\cite{guo2002discrete}}: \begin{equation} F_\alpha = w_\alpha \left(1-\frac{\delta_t}{2\tau_f}\right)\left[\frac{\bm{F}\cdot\bm{c}_\alpha}{c_s^2} + \frac{\left(\bm{u}\otimes\bm{F}+(\bm{u}\otimes\bm{F})^\dagger\right):\left(\bm{c}_\alpha\otimes\bm{c}_\alpha - c_s^2\bm{I}\right)}{2c_s^4}\right]. \end{equation} Here $\bm{F}$ is the external body force vector, $\bm{u}$ is the local fluid velocity vector, $\bm{I}$ is the unit rank-two tensor, $c_s$ is the so-called lattice sound speed, $w_\alpha$ are the weights associated with each discrete velocity and $\tau_f$ is the relaxation time. The external body force also includes the interaction with the solid phase as~\cite{beckermann1999modeling}: \begin{equation} \bm{F}= -\frac{h\tau_f (1+\phi)^2 (1-\phi)\bm{u}}{4W_0^2}, \end{equation} where $\phi$ is the phase indicator detailed in the next paragraphs, $W_0$ is the interface thickness tied to the phase-field solver and $h$ is a dimensionless constant, chosen as $h = 2.757$ following~\cite{beckermann1999modeling}. Due to the absence of fluid velocity within the solid crystal, the fluid velocity $\bm{u}$ is updated as: \begin{equation} \bm{u^*} = \frac{(1-\phi)}{2}\bm{u}, \end{equation} where the re-defined fluid velocity $\bm{u^*}$ is used in the equilibrium distribution function~\cite{beckermann1999modeling}.
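Before specifying the collision operator, the stand-alone Python sketch below evaluates the hexagonal anisotropy function $a_s(\textbf{n})$, the interface normal, the dissipative force that damps the flow across the diffuse interface, and the masked velocity $\bm{u^*}$ introduced above. It uses synthetic data only; the relaxation time value and the test geometry are placeholders and do not correspond to the actual simulations.
\begin{verbatim}
import numpy as np

eps_xy = 0.05     # hexagonal anisotropy strength (value used in this work)
W0 = 0.05         # interface width in mm (value used in this work)
h = 2.757         # dimensionless drag constant following Beckermann et al.
tau_f = 0.8       # LB relaxation time -- placeholder value for the illustration

def a_s(n_x, n_y):
    """Hexagonal surface-tension anisotropy a_s(n) = 1 + eps_xy*cos(6*theta)."""
    theta = np.arctan2(n_y, n_x)
    return 1.0 + eps_xy * np.cos(6.0 * theta)

def interface_normal(phi, dx):
    """n = -grad(phi)/|grad(phi)|, pointing from the solid (+1) to the liquid (-1)."""
    gy, gx = np.gradient(phi, dx)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    return -gx / norm, -gy / norm

def solid_drag_force(phi, ux, uy):
    """Dissipative force F = -h*tau_f*(1+phi)^2*(1-phi)*u/(4*W0^2)."""
    pref = -h * tau_f * (1.0 + phi)**2 * (1.0 - phi) / (4.0 * W0**2)
    return pref * ux, pref * uy

def masked_velocity(phi, ux, uy):
    """Velocity used in the equilibrium distribution: u* = (1 - phi)/2 * u."""
    m = 0.5 * (1.0 - phi)
    return m * ux, m * uy

# small usage example on a synthetic circular seed
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
phi = np.tanh((0.3 - np.sqrt(X**2 + Y**2)) / W0)       # +1 inside, -1 outside
nx, ny = interface_normal(phi, x[1] - x[0])
ux, uy = np.full_like(phi, 1e-3), np.zeros_like(phi)   # uniform test velocity
print("anisotropy range:", a_s(nx, ny).min(), a_s(nx, ny).max())
print("max drag magnitude:", np.abs(solid_drag_force(phi, ux, uy)[0]).max())
print("masked velocity range:", masked_velocity(phi, ux, uy)[0].min(),
      masked_velocity(phi, ux, uy)[0].max())
\end{verbatim}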
The collision operator $\Omega_\alpha$ follows the linear Bhatnagar-Gross-Krook approximation: \begin{equation} \Omega_\alpha = \frac{1}{\tau_f}\left[f^{(eq)}_\alpha - f_\alpha\right], \end{equation} \newabbreviation{edf}{EDF}{equilibrium distribution function} where $f_\alpha^{(eq)}$ is the discrete isothermal \gls{edf} defined as: \begin{equation} f_\alpha^{(eq)} = \rho w_\alpha\sum_i \frac{1}{i! c_s^{2i}} a^{(eq)}_i(\bm{u}):\mathcal{H}_{i}(\bm{c}_\alpha), \end{equation} where $a^{(eq)}_i$ and $\mathcal{H}_{i}(\bm{c}_\alpha)$ are the corresponding multivariate Hermite coefficients and polynomials of order $i$~\cite{hosseini2020compressibility}. Further information on the expansion along with detailed expressions of the \gls{edf} can be found in~\cite{shan2006kinetic,hosseini2019extensive,hosseini2020development}. In the present work, an extended range of stability is obtained by using a central Hermite multiple relaxation time implementation; corresponding details can be found in~\cite{hosseini2021central}. The relaxation time {$\tau_f$} is tied to the fluid kinematic viscosity{, $\nu$,} as: \begin{equation} \tau_f = \frac{\nu}{c_s^2} + \frac{\delta_t}{2}. \end{equation} It must be noted that conserved variables, {i.e.}, density and momentum are defined as moments of the discrete distribution function: \begin{equation} \rho = \sum_\alpha f_\alpha, \end{equation} \begin{equation} \rho \bm{u} = \sum_\alpha \bm{c}_\alpha f_\alpha. \end{equation} \paragraph{Advection-diffusion-reaction solver for supersaturation field} The space/time evolution equation of the supersaturation field $U$ is modeled using an advection-diffusion-reaction \gls{lb}-based discrete kinetic equation. It is defined as\cite{ponce1993lattice,hosseini2020weakly,hosseini2019lattice}: \begin{equation} g_\alpha \left( \bm{r}+\bm{c}_\alpha \delta_t, t+\delta_t\right) - g_\alpha \left( \bm{r}, t\right) = \delta_t \Omega_\alpha\left( \bm{r}, t\right) + \delta_t \dot{\omega}_\alpha, \end{equation} where {$g_\alpha$ are the corresponding discrete populations and} $\dot{\omega}_\alpha$ is the source term: \begin{equation} \dot{\omega}_\alpha = - w_\alpha \frac{\partial \phi}{\partial t}. \end{equation} The collision operator $\Omega_\alpha$ for the supersaturation field is: \begin{equation} \Omega_\alpha = \frac{1}{\tau_U}\left[g^{(eq)}_\alpha - g_\alpha\right]. \end{equation} where $g_\alpha^{(eq)}$ is the \gls{edf} defined as: \begin{equation} g^{(eq)}_\alpha = w_\alpha U\left[ 1 + \frac{\bm{c}_\alpha \cdot \bm{u} }{c_s^2} \right]. \end{equation} The supersaturation is the zeroth-order moment of $g_\alpha$: \begin{equation} U = \sum_\alpha g_\alpha, \end{equation} and the relaxation coefficient is tied to the diffusion coefficient of mandelic acid {in the aqueous solution, $D$, as:} \begin{equation} \tau_U = \frac{D q(\phi)}{c_s^2} + \frac{\delta_t}{2}. 
\end{equation} \paragraph{Solver for phase-field equation} The phase-field equation is modeled using a modified \gls{lb} scheme defined as~\cite{walsh2010macroscale,cartalade2016lattice}: \begin{multline} a_s^2(\bm{n}) h_\alpha(\bm{r} + \bm{c}_\alpha \delta_t, t + \delta_t) = h_\alpha(\bm{r},t) - \left( 1 - a_s^2(\bm{n}) \right ) h_\alpha(\bm{r} + \bm{c}_\alpha \delta_t, t) - \\ \frac{1}{\tau_\phi (\bm{r},t) } \left [ h_\alpha(\bm{r},t) - h_\alpha^{eq}(\bm{r},t) \right] + w_\alpha Q_\phi (\bm{r},t)\frac{\delta_t}{\tau_0}, \label{d} \end{multline} where the scalar function $Q_\phi$ is the source term of the phase-field defined as: \begin{equation} Q_\phi = (\phi - \phi^3) + \lambda U (1 - \phi^2)^2, \end{equation} while the \gls{edf}, $h_\alpha^{eq}$, is defined as: \begin{equation} h_\alpha^{eq} = w_\alpha \left( \phi - \frac{1}{c_s^2} \bm{c}_\alpha \cdot \frac{W_0^2}{\tau_0} |\bm{\nabla} \phi|^2 \frac{\partial [a_s(\bm{n})^2]}{\partial \bm{\nabla} \phi} \frac{\delta_t}{\delta_r} \right), \label{e} \end{equation} where $\delta_r$ is the grid size. The local value of the order parameter $\phi$ is computed as: \begin{equation} \phi = \sum_{\alpha} h_\alpha, \end{equation} while the relaxation time is set to: \begin{equation} \tau_\phi = \frac{1}{c_s^2}a_s^2(\bm{n})\frac{W_0^2}{\tau_0} + \frac{\delta_t}{2}. \end{equation} \section{Experimental setup} Experimental data for the growth rates have been obtained in the single-crystal growth cell~\cite{gou2012investigation, Juan} illustrated in Fig.~\ref{ba}. The supersaturated aqueous solution of mandelic acid is pumped into a constant-temperature cylindrical crystallization cell, with solution temperatures varying between 20 and 30 $^{\circ}$C. The temperature within the cell is maintained constant via a water-based cooling/heating system connected to a Pt-100 sensor monitoring the temperature inside the cell. Vessel 2, denoted V2 in Fig.~\ref{ba}b, contains a saturated solution at temperature $T_2$, while vessel 1 (V1) is set to a lower temperature $T_1$, corresponding to the temperature of the cell. To create the supersaturated solution, the initially saturated solution in V2 is pumped into V1 and cooled down to $T_1$ before entering the growth cell. This effectively makes it possible to control the supersaturation level of the incoming solution by choosing temperature $T_1$. In the present case, the supersaturation is defined as~\cite{mullin2001crystallization}: \begin{equation} U = \frac{C_{sat,2} - C_{sat,1}}{C_{sat,1}} \end{equation} where $C_{sat,i}$ denotes the saturation concentration at temperature $T_i$ ($i=1,2$). \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{Reactorr.pdf} \caption{Single-crystal growth cell used for all experiments: (a) photograph; (b) schematic diagram of experimental arrangement for the measurement of growth rate of a single crystal~\cite{gou2012investigation, Juan}.} \label{ba} \end{figure} To start the experiment, the supersaturated solution is continuously pumped from vessel 1 to the growth cell, in which a single (S)-mandelic acid crystal is glued on the pin head of a crystal holder. Then, the solution is recycled to vessel 2 and the concentration of the solution is compensated. In that way, a stable degree of supersaturation is guaranteed during the whole process. A microscope with a camera (Stemi2000C, Carl Zeiss) is used to take pictures of the single crystal every hour. The images are afterwards post-processed using Carl Zeiss' Axio Vision software~\cite{gou2012investigation}.
A picture of the single-crystal cell is shown in Fig.~\ref{ba}.\\ \section{Simulations and analysis of the results} \label{sec3} \subsection{Validation} \subsubsection{Self-convergence of the numerical solver} Based on the experiments, enantiopure mandelic acid crystals develop habits with hexagonal symmetry. First, before going into further validation steps against experimental results, we look into the convergence behavior of the numerical scheme. To that end, growth simulations are conducted using the hexagonal anisotropy function that will be used for the remainder of this work, starting with a rhombic initial seed. The seed is placed at the center of a fully periodic rectangular domain, with a length of 31 mm and a width of 26 mm. The perimeter of the initial rhombic crystal is $6.9$ mm and the initial supersaturation is set to $U = 0.06$. Simulations are conducted using four different spatial resolutions, $\delta_r\in\{0.04, 0.025, 0.02, 0.0125\}$ mm. Since the overall size of the numerical domain is kept fixed, an improved spatial resolution automatically comes with a larger number of grid points. \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{error.pdf} \caption{Left: $\phi = 0$ iso-contour, showing the boundary of the solid crystal (only the central part of the numerical domain is shown) after 16 hours; Right: Evolution of function $\phi$ in space along the line joining the center of the domain at (0,0) and point (4 mm,0) for $U = 0.06$ at increasing spatial resolution (grids with 775 $\times$ 650, 1240 $\times$ 1040, 1550 $\times$ 1300, and 2480 $\times$ 2080 points, respectively).} \label{cc} \end{figure} The highest resolution simulation with 2480 $\times$ 2080 points is used as reference to compute relative errors at the three lower spatial resolutions. The $\mathit{l^2}$ relative error norm is calculated based on the $\phi$-profiles plotted along the $x$-direction on the centerline, starting from the center of the domain at (0,0) and extending in the positive $x$-direction up to the point (4 mm, 0). The corresponding profiles along with the crystal shape obtained after 16 hours are shown in Fig.~\ref{cc}. The $\mathit{l^2}$ norm is defined as: \begin{equation} {\rm E}_{\mathit{l^2}}=\sqrt{\frac{\sum_i \left( \phi_{i} - \phi_{ref,i} \right) ^2}{\sum_i \phi_{ref,i}^2}} \label{eew} \end{equation} where $\phi_i$ represents the profile obtained at a lower resolution and $\phi_{ref,i}$ the profile at the highest resolution (used as reference). The errors obtained from the different simulations are illustrated in Fig.~\ref{csfm}. \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{Self_convergence.pdf} \caption{Scaling of the $l^2$ norm of errors as obtained from the self-convergence study. Black markers represent error data from the simulations while the black dashed line shows the theoretical $-2$ slope.} \label{csfm} \end{figure} As observed from this plot, the numerical scheme is convergent as the error decreases with resolution. Furthermore, as expected from theoretical analyses, a second-order convergence is obtained in space. \subsubsection{Validation against experimental data} Next, to showcase the ability of the model to correctly reproduce the behavior of the real system, 2-D simulations are considered using the real reactor geometry.
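Before moving on, the following short Python sketch shows one possible way to evaluate the relative $l^2$ norm of Eq.~(\ref{eew}) between a coarse and a reference $\phi$-profile. The interpolation of the coarse profile onto the reference grid and the synthetic $\tanh$ profiles are assumptions made only to keep the example self-contained; these details are not prescribed by the procedure described above.
\begin{verbatim}
import numpy as np

def relative_l2_error(phi_coarse, x_coarse, phi_ref, x_ref):
    """Relative l2 error: the coarse profile is first interpolated onto the
    reference grid so that both sums run over the same points."""
    phi_interp = np.interp(x_ref, x_coarse, phi_coarse)
    return np.sqrt(np.sum((phi_interp - phi_ref)**2) / np.sum(phi_ref**2))

# synthetic example: tanh interface profiles at two resolutions (illustrative only)
x_ref = np.linspace(0.0, 4.0, 2480)      # "highest resolution" line cut
x_coarse = np.linspace(0.0, 4.0, 775)    # "coarse" line cut
W = 0.05
phi_ref = np.tanh((1.5 - x_ref) / W)
phi_coarse = np.tanh((1.5 - x_coarse) / W) + 1e-3 * np.random.randn(x_coarse.size)

print("E_l2 =", relative_l2_error(phi_coarse, x_coarse, phi_ref, x_ref))
\end{verbatim}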
The simplification to two-dimensional simulations is justified by the fact that, in all conditions considered here, the crystal follows a platelet growth mode indicating clear separation of scales between growth in the axial and planar directions, in connection to a symmetry of the flow field~\cite{Henniges2017}. The geometry used for the simulations is shown in Fig.~\ref{bn}. First, configurations are considered where forced convection is negligible. For all experiments presented in this section the initial seed is a rhombic crystal. Two different initial supersaturations are considered, namely $U=0.06$ and $U=0.11$ for the same temperature, $T=20^\circ$C. The diffusion coefficient of mandelic acid in the aqueous solution under the conditions considered here is $D=4.273 \times 10^{-4} {\rm mm}^2/{\rm s}$~\cite{chenyakin2017diffusion} and the other physical parameters are listed in Table~\ref{phy}. Based on the good agreement between resolutions observed in the previous section, all simulations are conducted with a spatial resolution of $\delta_r=0.025$~mm. The interface thickness is set to $W_0 = 0.05$~mm, the relaxation time to $\tau_0 = 11$~s, and the coupling coefficient $\lambda = 3$ is treated as a numerical parameter, consistently with the standard phase-field method for dendrite growth~\cite{ramirez2004phase}. At the walls of the reactor, zero-flux boundary conditions are applied to both the species and phase fields via the anti-bounce-back scheme. At the inlet a constant supersaturation is imposed. Details on the implementation can be found in~\cite{kruger2017lattice}.\\ \begin{table}[!htbp] \centering \setlength{\tabcolsep}{1.4mm}{ \caption{{Physical parameters for single (S)-mandelic acid crystal growth~\cite{zhang2006evolution,suzuki2011specific,satoh1941heat}}\label{phy}} \begin{tabular}{c c c c c} \hline Surface energy & Melting temperature & Volumetric heat capacity &Latent heat &Capillary length \\ $[\hbox{J} \hbox{m}^{-2}]$ & $[\hbox{K}]$ & $[\hbox{J}/\hbox{m}^3 \hbox{K}]$ & $[\hbox{J}/\hbox{m}^3]$ & $[\hbox{m}]$ \\ \hline 0.05 & 392 & 1.7 $\times 10^6$ & 6.6 $\times 10^7$ & 7.65 $\times 10^{-9}$ \\ \hline \end{tabular}} \end{table} Simulation results are compared to experiments and validated both qualitatively using the crystal shape, and quantitatively by comparing the growth rate. \begin{figure}[H] \centering \includegraphics[width=0.3\textwidth]{2dgeometry.pdf} \caption{Reactor geometry employed for all simulations.} \label{bn} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{side_numbering.pdf} \caption{Method used to number the crystal sides and the associated normal directions~\cite{Juan}.} \label{ccf} \end{figure} To measure the crystal growth rate in both experiments and numerical calculations, the average length quantifying the crystal size is introduced following~\cite{Juan} as illustrated in Fig.~\ref{ccf}: \begin{itemize} \item Connect opposite sides via their centers. \item Identify the crystal center as the intersection between those lines. \item Number the different sides as shown in Fig.~\ref{ccf}. \item Compute the lengths of the corresponding normal distances from the center identified in the previous step ($L_1$, $L_2$, $L_3$, $L_4$, $L_5$ and $L_6$). \item The average normal length is simply defined as $L_{\rm avg} = (L_1 + L_2 + L_3 + L_4 + L_5 + L_6)/6$. \end{itemize} A short script illustrating this measurement procedure is sketched below. The experiments are systematically conducted over a period of 12 hours.
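The sketch announced above casts the measurement procedure in code form. How the six corners are extracted from a photograph or from the $\phi=0$ contour is not addressed here, and the crystal center is approximated by the mean of the corner coordinates, which is a simplification of the construction based on connecting opposite sides.
\begin{verbatim}
import numpy as np

def average_normal_length(vertices):
    """Average crystal size following the procedure described above: the normal
    distances L_1..L_6 from the crystal center to each side are averaged.
    `vertices` is a (6, 2) array of corner coordinates ordered along the contour.
    The center is taken as the mean of the corners (a proxy for the intersection
    of the lines connecting opposite side centers)."""
    v = np.asarray(vertices, dtype=float)
    center = v.mean(axis=0)
    lengths = []
    for i in range(len(v)):
        a, b = v[i], v[(i + 1) % len(v)]          # endpoints of one side
        # distance from the center to the line through a and b
        t = ((b[0] - a[0]) * (center[1] - a[1])
             - (b[1] - a[1]) * (center[0] - a[0])) / (np.linalg.norm(b - a) + 1e-15)
        lengths.append(abs(t))
    return np.mean(lengths), lengths

# usage on a regular hexagon of circumradius 1 (normal distances = sqrt(3)/2)
angles = np.pi / 3.0 * np.arange(6)
hexagon = np.stack([np.cos(angles), np.sin(angles)], axis=1)
L_avg, L = average_normal_length(hexagon)
print(L_avg, np.sqrt(3) / 2)
\end{verbatim}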
The average length is computed every hour and subsequently fitted with a linear function to extract an average growth rate $G_{th}=L_{avg}/t$, where $t$ is the corresponding growth time during the crystallization process. The evolution of the crystal shape in both experiments and simulations is illustrated in Figs.~\ref{bk} and~\ref{bak}. \begin{figure}[H] \centering \includegraphics[width=1.0\textwidth]{contour_exp.pdf} \caption{Contours of an (S)-mandelic acid crystal vs time as obtained from (a) experiments~\cite{Juan} and (b) simulations. The supersaturation is $U=0.06$ in both cases. The spatial scale is the same in all images, enabling a direct comparison.} \label{bk} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.95\textwidth]{expp9.pdf} \caption{Contours of the (S)-mandelic acid crystal vs time as obtained from simulations for $U=0.11$.} \label{bak} \end{figure} A visual comparison regarding crystal shape and size over time for $U=0.06$ points to a good agreement between experiments and simulations. For $U=0.11$ only numerical results are shown since experimental snapshots are not available. To validate the results in a quantitative manner, the growth rates corresponding to the six different sides of the crystal for both supersaturations as obtained from experiments and simulations are compared in Table~\ref{ss}.\\ \begin{table}[!htbp] \centering \setlength{\tabcolsep}{1.4mm}{ \caption{Comparisons between experiments and simulations for supersaturation $U = 0.06$ and $U=0.11$~\cite{Juan} \label{ss}} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Cases &\multicolumn{4}{c|}{Experiments}&\multicolumn{4}{c}{Simulations}\\ \hline Number & \multicolumn{2}{c|}{I} & \multicolumn{2}{c|}{II} & \multicolumn{2}{c|}{I} & \multicolumn{2}{c}{II}\\ \hline Supersaturation (-) & \multicolumn{2}{c|}{0.06} & \multicolumn{2}{c|}{0.11}& \multicolumn{2}{c|}{0.06}& \multicolumn{2}{c}{0.11}\\ \hline Seed perimeter (mm) & \multicolumn{2}{c|}{6.9} & \multicolumn{2}{c|}{9.5} & \multicolumn{2}{c|}{6.9} & \multicolumn{2}{c}{9.5}\\ \hline Parameter & Slope & $R^2$ & Slope & $R^2$ & \multicolumn{2}{c|}{Slope}& \multicolumn{2}{c}{Slope}\\ \hline Normal 1 & 0.07 & 0.99 & 0.14 & 0.98 & \multicolumn{2}{c|}{0.08} & \multicolumn{2}{c}{0.14}\\ Normal 2 & 0.09 & 0.97 & 0.14 & 0.94 & \multicolumn{2}{c|}{0.08} & \multicolumn{2}{c}{0.14}\\ Normal 3 & 0.00 & 0.10 & 0.00 & 0.50 & \multicolumn{2}{c|}{0.01} & \multicolumn{2}{c}{0.02}\\ Normal 4 & 0.06 & 0.94 & 0.06 & 0.78 & \multicolumn{2}{c|}{0.08} & \multicolumn{2}{c}{0.14}\\ Normal 5 & 0.03 & 0.92 & 0.08 & 0.86& \multicolumn{2}{c|}{0.08} & \multicolumn{2}{c}{0.14}\\ Normal 6 & 0.00 & 0.20 & 0.09 & 0.93& \multicolumn{2}{c|}{0.01} & \multicolumn{2}{c}{0.02} \\ \hline Average growth rate & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c}{}\\ $G_{th}$ (mm/h) & \multicolumn{2}{c|}{0.06} & \multicolumn{2}{c|}{0.1} & \multicolumn{2}{c|}{0.057} & \multicolumn{2}{c}{0.10}\\ \hline \end{tabular}} \end{table} $R^2$ is the coefficient of determination shown here to characterize the reliability of the linear regression used to extract growth rates.
Representing the length of side $i$ measured at time $t$ in experiments as $L_i(t)$ and the value of the linear function as $L_i'(t)$, the coefficient is computed as: \begin{equation} R^2 = 1 - \frac{A_{\rm res}}{A_{\rm tot}}, \end{equation} where the residual sum of squares is: \begin{equation} A_{\rm res} = \sum_{t} {(L_i(t) - L_i'(t))}^2, \end{equation} and the total sum of squares is: \begin{equation} A_{\rm tot} = \sum_{t} {(L_i(t) - \overline{L_i})}^2, \end{equation} where $\overline{L_i}$ represents the average over all data points.\\ A direct comparison of the growth rates for both values of $U$ confirms the very good agreement between experimental observations and numerical simulations. For $U=0.06$, the growth rate is numerically underpredicted by less than 6\%. At $U=0.11$, the relative difference is even reduced to 2\%. This demonstrates the ability of the numerical model to capture the growth of (S)-mandelic acid in a pure aqueous environment. At the higher supersaturation, $U=0.11$, the crystal grows faster, as expected, than in the lower supersaturation case, $U=0.06$. It is interesting to now take a closer look at the effects of supersaturation on the growth rate. \subsubsection{Impact of supersaturation on growth rate} In order to better understand the effect of supersaturation on crystal growth dynamics, we keep a configuration similar to the previous one, but consider a wider range of supersaturation values, $U\in\{0.06, 0.085, 0.11, 0.15, 0.2\}$. In all simulations presented in this section, the initial seed size and geometry follow that of configuration I in the previous section, see Table~\ref{ss}. The evolution of the crystal shape over time as obtained from these simulations is shown in Fig.~\ref{san}. In Fig.~\ref{san}, the facets of the crystal start deviating from straight lines as a result of the onset of primary branching instabilities. This usually occurs when the initial value of the supersaturation is sufficiently large. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{S.pdf} \caption{Boundaries of the single crystal of (S)-mandelic acid (iso-contours of $\phi=0$) at time $t$=0 (blue), 4 (red), 8 (black), 12 hours (purple) as a function of supersaturation: (top-left) $U= 0.085$, (top-right) $U= 0.11$, (bottom-left) $U= 0.15$, and (bottom-right) $U= 0.2$.} \label{san} \end{figure} Furthermore, the corresponding average growth rates are listed in Table~\ref{ljs}. \begin{table}[!htbp] \centering \setlength{\tabcolsep}{1.4mm}{ \caption{Numerically observed averaged crystal growth rate as a function of supersaturation between $U = 0.06$ and $0.2$\label{ljs}} \begin{tabular}{c|c|c|c|c|c} \hline Supersaturation (-) & 0.06 & 0.085 &0.11 & 0.15 & 0.2\\ \hline Seed perimeter (mm) & 6.9 & 6.9 & 6.9& 6.9 & 6.9\\ \hline Average growth rate & & & & & \\ $G_{th}$ (mm/h)& 0.057 & 0.0871 & 0.1136 & 0.1553 & 0.2093\\ \hline \end{tabular}} \end{table} As expected, higher supersaturations lead to faster crystal growth. It is worth noting that the average growth rate of the crystal can be very well approximated in the considered range as a linear function of supersaturation with slope one. Another parameter that has been observed experimentally to affect growth dynamics, especially during the early phase, is the initial seed size, better quantified via its perimeter. We will look into that effect in the next section.
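Before doing so, the short sketch below makes the growth-rate extraction and the $R^2$ computation used above concrete: it fits a straight line to hourly measurements of one normal length and evaluates $R^2=1-A_{\rm res}/A_{\rm tot}$. The data in the usage example are made up for illustration and are not the measured values reported in the tables.
\begin{verbatim}
import numpy as np

def growth_rate_and_r2(t_hours, L):
    """Least-squares line L'(t) = slope*t + b through the hourly measurements,
    returning the slope (growth rate in mm/h) and the coefficient of
    determination R^2 = 1 - A_res/A_tot defined above."""
    t = np.asarray(t_hours, dtype=float)
    L = np.asarray(L, dtype=float)
    slope, b = np.polyfit(t, L, 1)
    L_fit = slope * t + b
    A_res = np.sum((L - L_fit)**2)
    A_tot = np.sum((L - L.mean())**2)
    return slope, 1.0 - A_res / A_tot

# illustrative (made-up) hourly measurements of one normal length over 12 hours
t = np.arange(13)
L1 = 0.6 + 0.07 * t + 0.01 * np.sin(t)   # roughly linear growth with small noise
print(growth_rate_and_r2(t, L1))
\end{verbatim}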
\subsubsection{Impact of initial size on growth rate} To quantify the effects of initial seed size, which is known to be sometimes important~\cite{srisanga2015crystal}, a configuration with supersaturation $U=0.06$ at crystallization temperature $T=20^\circ$C has been retained. Seeds with the same initial rhombic shape but different sizes have been simulated; the initial seed perimeters are $P\in\{6.9, 6.96, 8.4, 8.8\}$~mm, corresponding to available experimental data. The resulting growth rate data and parameters extracted from the experiments are listed in Table~\ref{la}. \begin{table}[!htbp] \centering \setlength{\tabcolsep}{1.4mm}{ \caption{Impact of initial seed size in experiments with supersaturation $U = 0.06$~\cite{Juan}\label{la}} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Experiment Number & \multicolumn{2}{c|}{I(1)} & \multicolumn{2}{c|}{I(2)} & \multicolumn{2}{c|}{I(3)} & \multicolumn{2}{c}{I(4)}\\ \hline Perimeter (mm) & \multicolumn{2}{c|}{6.9} & \multicolumn{2}{c|}{6.96} & \multicolumn{2}{c|}{8.4} & \multicolumn{2}{c}{8.8}\\ \hline Parameter & Slope & $R^2$ & Slope & $R^2$ & Slope & $R^2$ & Slope & $R^2$ \\ \hline Normal 1 & 0.07 & 0.99 & 0.09 & 0.99 & 0.09 & 0.99 & 0.10 & 0.98\\ Normal 2 & 0.09 & 0.97 & 0.09 & 0.99 & 0.12 & 0.98 & 0.10 & 0.97 \\ Normal 3 & 0.00 & 0.10 & 0.01 & 0.24 & 0.01 & 0.67 & 0.02 & 0.82 \\ Normal 4 & 0.06 & 0.94 & 0.03 & 0.92 & 0.07 & 0.94 & 0.07 & 0.85 \\ Normal 5 & 0.03 & 0.92 & 0.03 & 0.89 & 0.06 & 0.94 & 0.05 & 0.98 \\ Normal 6 & 0.00 & 0.20 & 0.01 & 0.45 & 0.03 & 0.85 & 0.00 & 0.10 \\ \hline Average growth rate& \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c}{}\\ $G_{th}$ (mm/h) & \multicolumn{2}{c|}{0.06} & \multicolumn{2}{c|}{0.06} & \multicolumn{2}{c|}{0.06} & \multicolumn{2}{c}{0.07}\\ \hline \end{tabular}} \end{table} Simulations with exactly the same configurations have been conducted. The corresponding results are listed in Table~\ref{lax}. \begin{table}[!htbp] \centering \setlength{\tabcolsep}{1.4mm}{ \caption{Impact of initial seed size in simulations with supersaturation $U = 0.06$\label{lax}} \begin{tabular}{c|c|c|c|c} \hline Simulation Number & I(1) & I(2) & I(3) & I(4)\\ \hline Perimeter (mm) & 6.9 &6.96 & 8.4 & 8.8\\ \hline Parameter & Slope & Slope & Slope & Slope \\ \hline Normal 1 & 0.08 & 0.08 & 0.082 &0.082 \\ Normal 2 & 0.08 & 0.08 & 0.082 & 0.082 \\ Normal 3 & 0.01 & 0.01 & 0.012 & 0.016 \\ Normal 4 & 0.08 & 0.08 & 0.082 & 0.082 \\ Normal 5 & 0.08 & 0.085 & 0.085 & 0.082\\ Normal 6 & 0.01 & 0.01 & 0.012 & 0.015 \\ \hline Average growth rate & & & & \\ $G_{th}$ (mm/h)& 0.0567 & 0.0575 & 0.0592& 0.0598\\ \hline \end{tabular}} \end{table} Both experiments and simulations, while in fair agreement with each other, point to the fact that the average growth rate is only slightly affected by the initial seed size. The effect appears to be somewhat stronger for a larger initial seed. The growth behavior over time is illustrated in Fig.~\ref{ms}. The results for initial perimeters $6.9$ and $6.96$ mm cannot be differentiated visually. For a larger initial seed, the differences between $8.4$ and $8.8$ mm can be recognized, but only a minute increase in the average growth rate is observed. It can be concluded that, compared to supersaturation, the influence of initial seed size is minor. Still, a larger initial seed corresponds to a slightly increased growth rate.
\begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{perim.pdf} \caption{Numerical results concerning the effect of initial seed perimeter on the growth rate.} \label{ms} \end{figure} All the previous results have been obtained while neglecting any convection effect around the crystal. However, it is known that forced convection might lead to asymmetric crystal growth. This will be explored next. \subsection{Ventilation effects} In the real single-crystal reactor, the incoming flow of (S)-mandelic acid in water might have a large impact on crystal growth rate and shape development. The aim of the present section is to check and quantify this point. \paragraph{Validation in presence of convection} First, we validate the numerical model against available experimental data taking into account the real inflow conditions used in the reactor cell. For this purpose, and following the experimental settings~\cite{Linzhu}, a hexagonal seed of perimeter $P = 3.7$~mm is used. The initial supersaturation is $U=0.045$ and the Reynolds number is Re $= 17.2$, the same value as in the experiment~\cite{Linzhu}. The results, represented by the crystal shape, are compared over time via snapshots taken every two hours over an overall growth period of 16 hours, as shown in Fig.~\ref{ta}. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{ven.pdf} \caption{Morphologies of (S)-mandelic acid crystal captured by (a) camera during the experiments~\cite{Linzhu}; (b) simulations.} \label{ta} \end{figure} It is observed that the evolutions of the crystal shape over time as obtained from experiment and simulation are in good agreement with each other; both point to a non-symmetrical growth. The constant inflow of a solution with a higher concentration hitting the inlet-facing sides of the crystal subjects them to noticeably larger gradients at the interface, as compared to the other, leeward sides; this induces lower adsorption rates at the latter. As a result, the (S)-mandelic acid crystal grows faster on the sides facing the inflow, leading to a steady increase of the aspect ratio, defined as the ratio of the horizontal size of the crystal to its vertical size. This is clearly visible in Fig.~\ref{ai} where both the velocity and supersaturation fields at $t=16$ hours are shown. \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{V0.pdf} \caption{Non-symmetric growth of an (S)-mandelic acid crystal taking into account convection as obtained from the simulation for $U=0.045$ after 16 hours. Flow is from left to right in the reactor.} \label{ai} \end{figure} To further illustrate the effects of hydrodynamics on the crystal habit, the effects of the Reynolds number and of the initial orientation of the seed are considered numerically. \paragraph{Effect of Reynolds number} For this purpose, the supersaturation is kept constant at $U=0.045$. The Reynolds number is the most important non-dimensional parameter of fluid dynamics, comparing quantitatively convective effects to dissipation by viscosity. It is defined as: \begin{equation} {\rm Re} = \frac{u_{\rm in} P}{\nu_f}, \end{equation} where $u_{\rm in}$ is the inlet velocity, $P$ is the initial seed perimeter, and $\nu_f$ is the kinematic viscosity of the solution. Two different Reynolds numbers, i.e. Re$=8.6$ and $17.2$, are considered. The obtained crystal shape and velocity fields are shown in Fig.~\ref{aib}.
\begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{V2.pdf} \caption{Convection effects on (S)-mandelic acid crystal growth after 10 hours for $U=0.045$ and two different Reynolds numbers. Left side: Re $= 17.2$; Right side: Re $= 8.6$. The white line represents the crystal boundary taking into account the flow (ventilation effect), while the grey line shows the same results in the absence of any inflow.} \label{aib} \end{figure} As expected, higher inlet velocities hitting the inflow-facing sides of the crystal result in a faster growth in that direction; the resulting asymmetry becomes more marked, leading to elongated crystals in the horizontal direction for the present setup. This is particularly clear looking at Fig.~\ref{aicb}. After 10 hours, the aspect ratio for Re$=17.2$ is already more than twice as large as for Re$=8.6$. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{aspect.pdf} \caption{Evolution of the aspect ratio vs time for Reynolds numbers Re$=8.6$ and $17.2$.} \label{aicb} \end{figure} \paragraph{Effect of the initial orientation of the seed} To show the effect of seed orientation, four simulations have been carried out with the same Reynolds number, Re$=17.2$, but with different initial orientations, namely $\theta\in\{0, \frac{\pi}{12}, \frac{\pi}{6}, \frac{\pi}{4}\}$. This choice of tilt is motivated by the six-fold symmetry of the crystal's natural habit, so that only the 0-$\pi/3$ range is relevant. The obtained crystal habits after 10 hours along with the corresponding velocity fields are illustrated in Fig.~\ref{aia}. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{V1.pdf} \caption{Effect of initial seed orientation on (S)-mandelic acid crystal growth after 10 hours for $U=0.045$. Top left: without rotation; Top right: initial rotation of $\frac{\pi}{12}$ (clockwise rotation); Bottom left: initial rotation of $\frac{\pi}{6}$; Bottom right: initial rotation of $\frac{\pi}{4}$. The white line represents the crystal boundary taking into account the flow, while the green line shows the same results in the absence of any inflow.} \label{aia} \end{figure} It is seen that the initial orientation not only affects the symmetry of the crystal, but also its average growth rate. Taking into account convective effects and initial seed orientation, the crystal habits become highly asymmetrical. It is also observed that a slight initial rotation in the clockwise direction can result in a final habit showing preferential counter-clockwise orientation, due to a strong interaction with the convective flow field. \paragraph{Improving symmetrical growth in presence of convection using a baffle} As seen from the previous simulations, the overall shape of the crystal varies considerably as a function of the Reynolds number. It was mentioned earlier that the regularity of the crystal shape is a property of high interest regarding the final product's performance. Therefore, it is desirable to find a simple geometrical modification to the single-crystal growth cell leading to isotropic growth rates and a desired final aspect ratio. For this purpose, a simple flat baffle has been placed in the simulation directly in front of the inlet in order to prevent a direct impact of the incoming flow onto the growing seed. Three different baffle configurations (differing in position and size) have been compared. The resulting configurations are illustrated in Fig.~\ref{aih}; configuration 1 is the original case, without any baffle.
\begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{2ddgeo.pdf} \caption{Proposed modifications of the geometry of the single-crystal growth cell reactor including a baffle (three different possible configurations).} \label{aih} \end{figure} To check the robustness of the proposed modification with the three different baffles (plus the original case), two different Reynolds numbers (Re$=8.6$ or $17.2$), and two different initial seed orientations ($\theta$=0 or $\pi/6$) have been considered, making up for a total of ($4\times 2\times 2=$) 16 different cases. All numerical results after 10 hours of growth are shown in Figs.~\ref{aif} (for Re$=17.2$) and \ref{aig} (for Re$=8.6$). \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{V6.pdf} \caption{Numerical prediction for the growth on (S)-mandelic acid crystal after 10 hours for $U=0.045$ and Re$= 17.2$. First column: original configuration, without baffle; second column: baffle configuration 2; third column: baffle configuration 3; fourth column: baffle configuration 4. Top line (a): without any rotation of the initial seed; Bottom line (b): with initial rotation of the seed by $\pi$/6.} \label{aif} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{V7.pdf} \caption{Numerical prediction for the growth on (S)-mandelic acid crystal after 10 hours for $U=0.045$ and Re$= 8.6$. First column: original configuration, without baffle; second column: baffle configuration 2; third column: baffle configuration 3; fourth column: baffle configuration 4. Top line (a): without any rotation of the initial seed; Bottom line (b): with initial rotation of the seed by $\pi$/6.} \label{aig} \end{figure} To quantify the effects of the baffles on the quality of the crystal, a quality parameter is defined as $Q=\max(L_i)/\min(L_i)$ where the index $i\in\{1,\dots,6\}$ runs over all sides of the resulting crystal. Parameter $Q$ quantifies non-isotropic growth, with $Q=1$ (the minimum value) corresponding to a perfectly isotropic growth, while an increasing value of $Q$ corresponds to growing non-isotropy. The values of crystal quality as obtained from all simulations after 10 hours of growth are listed in Table~\ref{zz}. \begin{table}[!htbp] \centering \setlength{\tabcolsep}{1.4mm}{ \caption{Impact of the baffle configurations shown in Fig.~\ref{aih} for $U = 0.045$, two different Reynolds numbers, and two seed orientations\label{zz}} \begin{tabular}{c|c|c|c|c} \hline Q & Re=8.6; tilt=0 &Re=8.6; tilt=$\pi$/6 & Re=17.2; tilt=0 & Re=17.2; tilt=$\pi$/6\\ \hline No Baffle & 1.28 & 1.23 & 1.625 & 1.59\\ Baffle 1 & 1.22 & 1.21 & 1.41 & 1.39 \\ Baffle 2 & 1.13 & 1.16 & 1.24 & 1.21 \\ Baffle 3& 1.05 & 1.07 & 1.14 & 1.12 \\ \hline \end{tabular}} \end{table} From Table~\ref{zz} it is clearly observed that, while all baffles contribute to reducing asymmetrical growth, the largest one, i.e. baffle 3, leads to the best crystal quality in terms of symmetry for all considered conditions. It reduces the asymmetry parameter $Q$ by about 20\% for Re$=8.6$ and 30\% for Re$=17.2$. Since the complexity of the experimental setup would not be significantly increased by adding a baffle, such a modification is recommended for further studies. \section{Conclusions and perspectives} In this work, a numerical model based on the lattice Boltzmann method has been developed and validated to describe crystal growth. It has been shown to correctly capture the dynamics of (S)-mandelic acid crystal growth.
The numerical simulations were compared to experimental data from a single-crystal growth reactor and are in very good agreement. The model was then used to investigate the effects of important parameters such as supersaturation and initial seed size on growth dynamics. It was shown that higher supersaturation levels lead to much faster growth rates; the impact of a larger initial seed crystal is far weaker, but it slightly increases the growth rate as well.\\ It was also demonstrated that hydrodynamics can have pronounced effects on both average growth rate and habit, and may lead to a clear breaking of the growth symmetry. The evolution of the crystal habit was shown to change significantly with the Reynolds number, but also with the initial orientation of the seed with respect to the incoming flow. Finally, a simple modification of the reactor geometry was proposed to minimize non-symmetrical growth. This will be tested in later experiments.\\ While the model was successfully applied in the present study to pure (S)-mandelic acid crystal growth under isothermal conditions, in the future, these simulations will be extended to more complex situations involving a mixture of both (S)- and (R)-mandelic acid and taking into account temperature changes. \section*{Acknowledgments} Q.T. would like to acknowledge the financial support by the EU-program ERDF (European Regional Development Fund) within the Research Center for Dynamic Systems (CDS). S.A.H. acknowledges the financial support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in TRR 287 (Project-ID 422037413). The authors gratefully acknowledge the computing time granted by the Universit\"at Stuttgart-H\"ochstleistungsrechenzentrum Stuttgart (HLRS); all calculations for this publication were conducted with computing resources provided under project number 44216.\\ \bibliographystyle{plain}
\section{Introduction} The field theory approach to gravity, see e.g. \cite{Feynman:1996kb}, tells us that gravity is not a gauge theory. Indeed, the carriers of force in a gauge theory (such as e.g. Maxwell electrodynamics) are spin one particles. For this reason there are two types of charged objects interacting by exchange of carriers of force: those negatively and positively charged. Like particles repel and unlike particles attract. In contrast, there is only one type of charge in gravity and everything attracts everything. Thus, gravity is not a gauge theory, see \cite{Feynman:1996kb} for a more detailed discussion. This simple argument forbids a direct gauge theory description of gravity. It says nothing, however, about less direct possible relationships. And indeed, a relation of a completely different type has recently become very popular. This has its origin in the open-closed string duality, which implies that amplitudes for closed strings are squares of those for open strings. Since the low energy limit of the closed string theory is gravity, and that for open strings is gauge theory, this implies that scattering amplitudes for gravitons must be expressible as squares of amplitudes for gluons, see e.g. a review \cite{Bern:2002kj} and/or a more recent paper \cite{Bern:2010yg} and references therein. The relationship is not direct, and it is in particular not easy to find a Lagrangian version of the correspondence. However, it has recently led to some very interesting developments on loop divergences in $N=8$ supergravity. Another example of a gauge theory/gravity relation is the AdS/CFT correspondence \cite{Witten:1998qj} of string theory. The aim of the present paper is to develop (further, see historical remarks below) yet another gravity/gauge theory correspondence. Currently there appears to be no relation between the present story and that of \cite{Bern:2002kj}. The relationship of interest for us here has its origins in the discovery of Plebanski \cite{Plebanski:1977zz} that a certain triple of self-dual two-forms can be used as the basic variables for gravity\footnote{Similar ideas had appeared in the literature much earlier, see e.g. \cite{Krasnov:2009pu} for historical remarks, but it was Plebanski who proposed to reformulate general relativity without the metric, with only two-forms as dynamical variables.}. The same "self-dual" formulation of general relativity (GR) was rediscovered a decade later by Ashtekar \cite{Ashtekar:1987gu} via a completely different path of a canonical transformation on the phase space of GR. The two discoveries were later linked in \cite{Jacobson:1988yy}, and the outcome was a realisation that gravity can be reformulated as a theory whose phase space coincides with that of an ${\rm SU}(2)$ gauge theory. This gravity/gauge theory relationship was taken one step further in \cite{Capovilla:1989ac}. Thus, it was realized that the two-form fields of the Plebanski formulation of GR \cite{Plebanski:1977zz} can be integrated out to obtain a "pure connection" formulation of general relativity, where the only dynamical field is an ${\rm SU}(2)$ connection. The result was a completely new perspective on general relativity, in which GR is reformulated as a novel type of theory of the gauge field --- a {\it diffeomorphism invariant gauge theory}.
The work on "pure connection" formulation of GR \cite{Capovilla:1989ac} has led to some further advances in that it was realized in \cite{Bengtsson:1990qg} that there is not a single diffeomorphism invariant gauge theory, but an infinite parameter class of them. All these theories share the same key properties with GR, as they have the same number of propagating degrees of freedom (DOF). Thus, for any theory in the class the phase space is that of an ${\rm SU}(2)$ gauge theory. However, in addition to the usual ${\rm SU}(2)$ gauge rotations, there are also diffeomorphisms acting on the phase space variables, which reduce the number of propagating DOF from 6 of ${\rm SU}(2)$ gauge theory to 2 of GR. Unfortunately, the new "pure connection" viewpoint on GR originating in \cite{Capovilla:1989ac} (and having its roots in Plebanski's key insight \cite{Plebanski:1977zz}) has not been significantly developed. The phase space version \cite{Ashtekar:1987gu} of this story has formed the foundation of the approach of loop quantum gravity \cite{Rovelli:2008zza}, but the pure connection formulation of GR \cite{Capovilla:1989ac} and of the infinite-parameter family \cite{Bengtsson:1990qg} of "neighbours of GR" has not had any significant applications, as far as the author is aware. The main aim of this paper is to revisit this "pure connection" formalism for gravity and develop it further. Our main motivation is a (future) application of this formalism to the perturbative quantization of gravity. However, as it will become clear below, the pure connection perspective on gravity developed here may have other uses. We motivate our interest in this formalism for (quantum) gravity with some historical remarks. Thus, the author's interest in the subject started in \cite{Krasnov:2006du} from a simple power counting argument describing how the non-renormalizability of GR manifests itself in the Plebanski formulation \cite{Plebanski:1977zz}. The outcome was an infinite-parameter family of Plebanski-like theories, where the constraint term of the Plebanski action was replaced by a "potential" term for the would-be Lagrange multipliers. Each of the new theories is just the familiar from discussions of non-renormalizability counterterm corrected GR (in disguise), and so the interpretation of the infinite number of new parameters is that they are related to coefficients in front of counterterms constructed from the curvature and its derivatives in the usual metric description of gravity. It was very quickly realized \cite{Bengtsson:2007zzd} that the new infinite-parameter family of theories \cite{Krasnov:2006du} is essentially the same as the one introduced and studied by Begtsson and collaborators a decade earlier \cite{Bengtsson:1990qg}, with the difference being that \cite{Bengtsson:1990qg} worked at the level of a "pure connection" formulation, while the theories \cite{Krasnov:2006du} are formulated as Plebanski-like theories with two-form fields as the basic variables. The class of gravity theories \cite{Bengtsson:1990qg}, \cite{Krasnov:2006du} can be thought of as summing at least some of the quantum corrections that arise in the process of renormalization of GR, and in \cite{Krasnov:2009ik} this was confirmed by directly exhibiting the familiar GR counterterms as appearing from \cite{Krasnov:2006du}. 
The work \cite{Krasnov:2006du} also conjectured that this class of gravity theories sums up {\it all} the arising quantum corrections; in other words, it was conjectured that the class \cite{Krasnov:2006du} is closed under the renormalization, and that the arising renormalization group flow is that in the space of "potential" functions defining the theory. At the time of writing \cite{Krasnov:2006du} the only motivation for this conjecture was the author's optimism --- the conjecture did not contradict anything one knew about the non-renormalizability of GR, and was the most optimistic scenario for how the divergences of GR might organise themselves. The remark \cite{Bengtsson:2007zzd} relating the Plebanski-like theories \cite{Krasnov:2006du} to the pure connection theories \cite{Bengtsson:1990qg} brought with it an additional justification. Thus, a closer look at these theories made it clear that they are just the most general diffeomorphism invariant gauge theories. The class of such theories should therefore be closed under the renormalization, because any counterterm that may be needed for cancelling the arising quantum divergences is already included in the action, see \cite{Krasnov:2009ip} for the first spell-out of this argument. One of the aims of the present paper is to set the stage for a systematic study of the quantum perturbation theory for the gravitational theories introduced in \cite{Krasnov:2006du} (and previously in \cite{Bengtsson:1990qg}). Our final goal is to settle the status of the conjecture of \cite{Krasnov:2006du} that this class of theories is closed under the renormalization, and then to compute the resulting renormalization group flow. However, it would be impractical to try to write up all the necessary calculations in a single paper. For this reason, in the present paper we develop the classical theory to the extent that the propagating degrees of freedom (gravitons) are manifest. We also take some preliminary steps necessary for the perturbative loop computations, in that the gauge fixing is discussed in detail and the propagator is obtained. It is then straightforward to start to compute loop diagrams. This is however not attempted in the present paper, and the task of developing a sufficiently economical way to study the renormalization is left to future work. Apart from just setting the stage for future quantum calculations, a somewhat unexpected outcome of this work is a completely new viewpoint on the gravitational perturbation theory. As we shall see, in the present diffeomorphism invariant gauge theoretic approach to gravity, the fundamental scale is set not by Newton's constant, which does not appear in the original formulation of the theory at all, but rather by the radius of curvature of the background that is used to expand the theory around. Thus, the natural fundamental length scale is set by the cosmological constant. This has the effect that in our theory Newton's constant becomes a derived quantity. This leads to some puzzles about the cutoff scale of our perturbation theory, to be discussed towards the end of the paper. Another point that is worth emphasizing from the outset is that our gauge-theoretic approach to gravity only works for a non-zero value of the cosmological constant $\Lambda$. As we shall see more explicitly below, the actions we work with blow up in the limit $\Lambda\to 0$. The puzzles about the behaviour of the perturbation theory that we discuss in the main text are directly related to this feature.
To summarize, the main aim of this work is to develop a new approach to the gravitational perturbation theory, for future use in particular in the quantum loop calculations. What makes this paper distinct from previous works (in particular of this author) is that here for the first time the "pure connection" formalism close in spirit to the formulation in \cite{Bengtsson:1990qg} is used as a starting point for the gravitational perturbation theory. Thus, all previous works on theories \cite{Krasnov:2006du} used the two-form formulation. The gravitational perturbation theory in the two-form formulation is similar to that in the usual metric approach, see \cite{Krasnov:2009ik}. In particular, the fundamental scale that determines the self-coupling of the gravitons and sets the scale of the strong coupling regime is, as in the usual metric case, the Planck scale. However, the number of field components one has to work with in the two-form formulation is quite large --- it is that of an ${\rm SU}(2)$ Lie algebra-valued two-form field. Moreover, there are second class constraints that require the path integral measure to be somewhat non-trivial. For all these reasons it proved to be rather difficult to set up an economical perturbation theory in the two-form formalism. At the same time, for a long time it seemed that the "pure connection" formulation is ill suited for being a starting point of a perturbative description, as it was not at all clear how one can expand the theory around the Minkowski spacetime background which corresponds to a zero connection, see e.g. remarks in \cite{TorresGomez:2009gs}. In this paper the prejudices about the "pure connection" formulation of gravity are put aside and this formulation is used as a starting point for the gravitational perturbation theory. And, as we hope to convince the reader, this formulation can be used rather effectively, in that the arising perturbation theory is reasonably economical. In particular, the linearized theory is very simple (arguably simpler than in the metric description), and the propagator can be obtained without too much difficulty. As we shall see, in the "pure connection" formalism developed here gravity becomes not too dissimilar to ${\rm SU}(2)$ gauge theory, the main difference being that a certain additional projector on diffeomorphism equivalence classes is inserted into the standard $1/k^2$ propagator of the gauge theory. This gives hope that the renormalization in this class of gravity theories will eventually become manageable. As we have already mentioned, this is left to future work. What is new in this work as compared to previous works on the "pure connection" formulation, in particular the work \cite{Bengtsson:1990qg} and works by Bengtsson and collaborators that followed, is that our treatment uses in an essential way the formulation in terms of a homogeneous potential function applied to a matrix-valued 4-form. This was developed in earlier works of the author, and first spelled out in \cite{Krasnov:2009iy} for the version of the theory that uses a two-form field, and in \cite{Krasnov:2009ip} for the pure connection formulation. This formulation renders the action principle of the theory very compact, and makes it possible to set up the perturbation theory without too much difficulty. Before we proceed with a description of the theory, there are a few things that ought to be emphasised to avoid misunderstanding.
In our gauge-theoretic approach to gravity the theory (or any of the class of theories that we study) remains as non-renormalizable as GR in the usual metric-based treatment. Thus, as we shall explicitly see below, the coupling constant of our theory has a negative mass dimension, which signals non-renormalizability by power counting. Thus, the final goal of our enterprise is not to show that the theory is renormalizable --- it is not --- but rather to show that the infinite-parameter class of theories that we study is closed under renormalization, and then to compute the arising renormalization group flow. In other words, we are not after the renormalizability in the usual sense of quantum field theory, which is that a Lagrangian with a finite number of couplings is closed under renormalization. Rather, we are after the renormalizability in the effective field theory sense of Weinberg, see e.g. \cite{Weinberg:2009bg} for a recent discussion, where any theory is renormalizable once all possible counterterms are added to the action. Our aim is then to show that in the case of gravity in four spacetime dimensions it is sufficient to consider only those counterterms (infinite in number) that can be compactly summed up into our diffeomorphism invariant gauge theory Lagrangian. Should this indeed be the case, the renormalization group flow in the infinite dimensional space of gravity theories will be just a flow in the space of defining functions, and will become manageable. Note once again, however, that the quantum theory, while being our main motivation, is not the subject of the present work. We would also like to explain at the outset how a gauge theory (with spin one excitations) can describe gravity with its spin two excitations. This is a version of the story "spin one plus spin one is spin two", of relevance for the gauge theory/gravity relationship \cite{Bern:2002kj}. There are, however, also significant differences. Thus, the main dynamical field of our theories is an ${\rm SU}(2)$ connection $A_\mu^i$, where $\mu$ is a spacetime index, and $i=1,2,3$ is a Lie algebra one. Let us recall that in the usual gauge theories in Minkowski spacetime the temporal component $A_0^i$ of the connection field becomes a Lagrange multiplier --- the generator of the gauge rotations. Then of the spatial components $A_a^i$, where $a=1,2,3$ is a spatial index, some components are pure gauge in that they can be set to any desired value by a gauge transformation. The physical propagating degrees of freedom of the theory can be described as the gauge equivalence classes of the spatial projection of the connection. In the case of gauge group ${\rm SU}(2)$, the gauge invariance removes 3 of the 9 components of the spatial connection $A_a^i$, leaving two propagating polarizations per Lie algebra generator. As we shall see below, in the case of our gravitational theories the situation is very similar, with the exception that the Lagrangian is in addition invariant under diffeomorphisms. The way this is realized in our theories is that the Lagrangian is simply independent of certain 4 combinations of the connection field $A_\mu^i$. This is where the spin two comes from. Thus, consider once again the spatial projection of the connection $A_a^i$. We shall see that (using the background) it will be possible to identify the spatial and the internal Lie algebra indices with each other. 
Once this is done, the spatial connection can be thought of as a $3\times 3$ matrix, or, in representation theoretic terms, it transforms in the tensor product of two spin one representations. This decomposes as spin two plus spin one plus spin zero. On the other hand, the temporal component of the connection $A_0^i$ forms the spin one (adjoint) representation of ${\rm SU}(2)$. The diffeomorphism invariance projects out the spin zero component of the spatial connection $A_a^i$, as well as a certain combination of the spin one component of $A_a^i$ and $A_0^i$, leaving only one of these spin one components in the game. Thus, after the projection induced by the diffeomorphisms, the Lagrangian depends only on the spin two component of $A_a^i$, as well as on the spin one set of Lagrange multipliers --- generators of ${\rm SU}(2)$ rotations. These make the 3 longitudinal components of the 5 component spin two field unphysical, leaving only 2 propagating physical modes. To summarize, in our version of the gauge theory/gravity correspondence the spin two also comes from the tensor product of two spin one representations. As in any gauge theory in Minkowski space, one of these spin one representations is supplied by the spatial projection of the connection field. The other spin one is provided by the adjoint representation of the ${\rm SU}(2)$ Lie algebra in which the connection field takes values. Our final remark here is about the issue of the reality conditions. As we shall see below, in the physically realistic case of Lorentzian signature gravity, the main dynamical field of our theories is a complex-valued ${\rm SO}(3,{\mathbb C})$ connection. Thus, appropriate reality conditions need to be imposed to select the field configurations corresponding to real Lorentzian metrics. Our strategy for dealing with these in the present paper is as follows. We shall see that at the level of the linearized theory the reality conditions are straightforward (one can easily determine them from the requirement that the linearized Hamiltonian is positive-definite). In the full theory, however, the reality conditions need to be imposed non-perturbatively. We do not yet know how to do this. However, for many applications, in particular the ones we are most interested in, this is not needed. Thus, for calculations studying the renormalization of gravity (e.g. ones done using the background field method), one performs the Wick rotation to the Riemannian signature metrics. In the latter case our gauge field is a real-valued ${\rm SO}(3)$ connection, and the reality conditions are straightforward. Similarly, for the perturbative loop computations one uses the knowledge of the linearized reality conditions to specify the physical external states. It is then only necessary to specify the contour in the space of complex connections that is used in the loop integrations. There is typically very little freedom in the choice of this contour provided one wants the integrals to converge. So, again it is possible to perform computations without specifying explicitly the reality conditions of the full non-linear theory. This state of affairs is, of course, not completely satisfactory, for one would like to have a complete control over the full theory reality conditions as well. This is, however, beyond the scope of this paper (and is never needed here). With these preparatory remarks having been made, we can proceed to describe how gravity can be reformulated as a diffeomorphism invariant gauge theory. The organization of the paper is as follows. 
In Section \ref{sec:formulation} we define an action principle for our theories, explain how their parameterization by a homogeneous function works, derive the field equations and verify the gauge invariances of the action. Section \ref{sec:backgr} studies the theory linearized around a constant curvature background. In particular, a simple action quadratic in the gauge field fluctuations is obtained, and its Hamiltonian analysis is performed. This confirms the picture outlined above of how the spin two nature of the excitations comes about. Section \ref{sec:prop} is central to our analysis. It discusses the gauge-fixing appropriate to the situation at hand, and inverts the gauge-fixed quadratic form to obtain the propagator. In Section \ref{sec:inter} we derive the (cubic and quartic) interaction terms of our theory. We conclude with a brief discussion. \section{Diffeomorphism invariant gauge theories} \label{sec:formulation} \subsection{Gravity as a gauge theory} In the pure connection formulation gravity becomes the most general diffeomorphism invariant gauge theory. In the case of a purely gravitational theory\footnote{One can also consider unified Yang-Mills-gravity theories of the same sort, see \cite{TorresGomez:2010cd}.} the gauge group is (complexified) ${\rm SU}(2)\sim{\rm SO}(3)$. The action is a functional of an ${\rm SU}(2)$ connection $A^i, i=1,2,3$ on a spacetime manifold $M$. Let $F^i=dA^i+(1/2)\epsilon^{ijk}A^j\wedge A^k$ be the curvature of $A^i$. The action is given by the following gauge and diffeomorphism invariant functional of the connection: \be\label{action} S[A]=(1/{\mathrm i}) \int_M f(F^i\wedge F^j). \ee Here ${\mathrm i}=\sqrt{-1}$ is a factor introduced for future convenience, and $f$ is a function with properties to be spelled out below. We shall refer to $f$ as the defining function of our theory. It is a holomorphic, homogeneous of degree one and gauge invariant function of its matrix (and 4-form) valued argument. Thus, let $X^{ij}\in {\mathfrak su}(2)\otimes_S{\mathfrak su}(2)$ be a matrix valued in the second symmetric power of the Lie algebra. The gauge group ${\rm SU}(2)\sim{\rm SO}(3)$ acts in the space of such matrices via $X\to g X g^T$, where $T$ is the operation of the transpose. We first consider scalar valued functions $f: {\mathfrak su}(2)\otimes_S{\mathfrak su}(2) \to {\mathbb C}$ that are holomorphic, gauge-invariant $f(g X g^T)=f(X)$ and homogeneous of degree one $f(\alpha X)=\alpha f(X)$. A parameterization of such functions that is convenient for practical computations is as follows. Consider the following 3 ${\rm SU}(2)$ invariants of $X^{ij}$: \be {\rm Tr}(X), \qquad {\rm Tr}(X^2), \qquad {\rm Tr}(X^3), \ee where the traces (and powers of $X$) are computed using the Killing metric $\delta^{ij}$ on the Lie algebra. When ${\rm Tr}(X)\not=0$ we can parameterise the defining function $f$ as follows: \be\label{f} f(X) = {\rm Tr}(X)\, \chi\left( \frac{{\rm Tr}(X^2)}{({\rm Tr}(X))^2}, \frac{{\rm Tr}(X^3)}{({\rm Tr}(X))^3}\right), \ee where $\chi$ is now an arbitrary holomorphic function of its two arguments. Given $f$ with the properties as spelled out above, e.g. one parameterised as in (\ref{f}), it can be seen that this function can be applied to a matrix valued 4-form, with the result being a 4-form. Indeed, consider $F^i\wedge F^j$, which is a ${\mathfrak su}(2)\otimes_S{\mathfrak su}(2)$ valued 4-form. Choose a reference volume form on $M$ (we assume that $M$ is orientable), and denote it by $(\rm vol)$. 
Of course, $(\rm vol)$ is only defined modulo the multiplication by a nowhere zero function. Using this reference volume form we can write $F^i\wedge F^j = X^{ij} ({\rm vol})$, where $X^{ij}$ is again defined only modulo rescalings. We can now use the homogeneity of $f$ to write \be f(F^i\wedge F^j)=({\rm vol}) f(X). \ee It is moreover clear that the result on the right-hand-side does not depend on which reference volume form is used in this argument. This is again due to the homogeneity of $f$. This shows that the integrand in (\ref{action}) is a well-defined 4-form that can be integrated to obtain the action. This finishes the formulation of our theory. We note that, as formulated, there are no dimensionful parameters in our theory. Indeed, we assume the connection field $A^i$ to have the usual mass dimension one, so that the curvature has the mass dimension two, and the matrix of the wedge products $[X]=4$. The defining function $f$ is essentially the function $\chi$ of ratios of powers of $X^{ij}$ that are dimensionless, and so does not contain any dimensionful parameters (but contains an infinite number of dimensionless "coupling constants", once expanded appropriately). Thus, due to the homogeneity of $f$, its mass dimension is the same as that of $X$ (in the parameterization (\ref{f}) the mass dimension is carried by the first term ${\rm Tr}(X)$, while the function $\chi$ is dimensionless). The function $f$ can then be integrated to produce a dimensionless action (as usual we work in the units $c=\hbar=1$). As we shall see, the fact that there are no dimensionful coupling constants in our theory has profound implications for the structure of its perturbation theory. Classically (\ref{action}) is a theory that can be shown, see e.g. \cite{Krasnov:2007cq}, to propagate two (complex for the time being, reality conditions will be discussed below) degrees of freedom. We will see a version of the argument that leads to this conclusion below when we consider the perturbation theory. One can also show that theory (\ref{action}) is a gravity theory, in spite of the fact that no metric is present anywhere. Thus, it can be reformulated explicitly as a theory of metrics via a sequence of transformations. The main idea is to note that declaring the 3 two-forms $F^i$ to span the space of (anti-) self-dual two-forms determines a conformal metric on $M$ whenever the matrix $X^{ij}$ defined from the wedge product of curvatures is non-degenerate. One can then rewrite the theory (\ref{action}) explicitly as the theory of this metric, see \cite{Krasnov:2009ik} for details. We also note that the usual general relativity (with or without the cosmological constant) can be rewritten in this language, see below. However, in this paper, we shall not need this relation to metric theories. Our plan is to study (\ref{action}) as is. We shall set the stage for its perturbative quantization and a study of its renormalization. The main justification for this undertaking is that a whole class of gravity theories (for varying defining functions $f$) can be treated in one go. Moreover, our theories are theories of a connection, and we can hope to use the expertise that was accumulated in quantum field theory for dealing with quantum gauge theories. \subsection{GR with the cosmological constant} Before we proceed with our analysis of theories (\ref{action}), we would like to state the action principle that reformulates the usual GR (with the cosmological constant) in this language. 
Consider the following action principle: \be\label{sec-GR-action} S_{\rm{GR}}[A] = \frac{1}{16\pi{\mathrm i} G\Lambda} \int \left( {\rm Tr}\sqrt{F^i\wedge F^j} \right)^2. \ee Here $ {\rm Tr}\sqrt{F^i\wedge F^j} $ is the trace of a matrix square root of the matrix $F^i\wedge F^j$. It is clear that the above action is of the general form (\ref{action}), for the scalar function that is used in the action (\ref{sec-GR-action}) is gauge-invariant and homogeneous of degree one, as required. The quantities $G$, $\Lambda$ in the denominator in front of the action are the usual Newton's constant and the cosmological constant respectively. Note that these appear in the action only in the dimensionless combination $G\Lambda$. For the currently accepted value of $\Lambda$ the value of $G\Lambda$ is exceedingly small: $G\Lambda\sim M_\Lambda^2/M_p^2\sim 10^{-120}$. Thus, the value of the dimensionless parameter in front of the GR action is very large. Below we shall see what kind of implications this has for the gravity perturbation theory. For the convenience of the reader the action (\ref{sec-GR-action}) is derived in the Appendix. \subsection{Topological action} Another prominent member of the class of theories (\ref{action}) is the topological theory: \be\label{top-action} S_{\rm top}[A]= \frac{1}{{\mathrm i} \kappa} \int {\rm Tr} (F^i\wedge F^j), \ee where $\kappa$ is some numerical parameter. It is not hard to see that the Lagrangian is a total derivative, and so the action describes a theory without propagating DOF --- a topological theory. Being topological, this theory is certainly a fixed point of the (sought) renormalization group flow in the space of theories (\ref{action}). Conjecturally, the renormalization group flow takes one from (\ref{sec-GR-action}) at low energies to (\ref{top-action}) at high energies. \subsection{A convex functional} We would also like to give an example of an action with a convex (near the point $X^{ij}\sim \delta^{ij}$) defining function. Let us consider the following theory: \be\label{joel-action} S[A] = \frac{1}{{\mathrm i}\tilde{\kappa}} \int \frac{{\rm Tr}(F^i\wedge F^j)^2} {{\rm Tr}(F^i\wedge F^j)}, \ee where $\tilde{\kappa}$ is a dimensionless parameter. The downward gradient flow for this functional of the connection has been studied in \cite{Joel}. The renormalization group flow for the class of theories (\ref{action}) should in particular explain why at low energies one flows to a concave functional (\ref{sec-GR-action}) instead of a convex functional such as (\ref{joel-action}). \subsection{First variation and field equations} The first variation of the action (\ref{action}) gives us the field equations. To write these down, let us give a parameterization of the matrix $X^{ij}$ useful for practical computations. Thus, let $\tilde{\epsilon}^{\mu\nu\rho\sigma}$ be a completely anti-symmetric rank 4 tensor density of weight one (as is indicated by the tilde over its symbol). This object exists on any orientable manifold and does not need a metric for its definition. Consider: \be\label{X} \tilde{X}^{ij}:= \frac{1}{4} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^i_{\mu\nu} F^j_{\rho\sigma}, \ee where as before $F^i_{\mu\nu}$ is the curvature two-form, with its spacetime indices now indicated explicitly. The quantity $\tilde{X}^{ij}$ is a ${\mathfrak su}(2)\otimes_S{\mathfrak su}(2)$ valued matrix, and a density of weight one. One takes the defining function $f$ to be a function of $\tilde{X}^{ij}$ given by the same expression as in (\ref{f}). 
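As a small consistency check (a one-line verification rather than anything new), note that the parameterization (\ref{f}) is indeed homogeneous of degree one: under $X^{ij}\to \alpha X^{ij}$ both arguments of $\chi$ are unchanged, \be \frac{{\rm Tr}((\alpha X)^2)}{({\rm Tr}(\alpha X))^2} = \frac{\alpha^2\, {\rm Tr}(X^2)}{\alpha^2\, ({\rm Tr}(X))^2}, \qquad \frac{{\rm Tr}((\alpha X)^3)}{({\rm Tr}(\alpha X))^3} = \frac{\alpha^3\, {\rm Tr}(X^3)}{\alpha^3\, ({\rm Tr}(X))^3}, \ee while the prefactor ${\rm Tr}(X)$ scales linearly, so that $f(\alpha X)=\alpha f(X)$. It is this homogeneity that makes $f(\tilde{X})$ a well-defined density of weight one. 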
With the convention $dx^\mu \wedge dx^\nu\wedge dx^\rho \wedge dx^\sigma=\tilde{\epsilon}^{\mu\nu\rho\sigma} d^4x$ we can write the action (\ref{action}) as \be S[A] = (1/{\mathrm i}) \int_M d^4x\, f(\tilde{X}^{ij}) \, . \ee The first variation of the action can now be easily computed and reads: \be\label{first-var} \delta S[A]=(1/{\mathrm i}) \int_M d^4x\, \frac{\partial f}{\partial \tilde{X}^{ij}} \frac{1}{2} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^i_{\mu\nu} D_{A\,\rho} \delta A^j_\sigma \, . \ee Integrating by parts, we see that the field equations for (\ref{action}) can be written as \be\label{feqs} D_A B^i =0, \ee where we have used the form notation again, and the two-form $B^i$ is defined via \be\label{B} B^i:= \frac{\partial f}{\partial \tilde{X}^{ij}} F^j. \ee We note that the matrix of first derivatives that appears on the right-hand side of this expression is a symmetric matrix, and has density weight zero (as the ratio of the density weight one function $f(\tilde{X})$ and the density weight one quantity $\tilde{X}$). Thus, (\ref{B}) is a well-defined two-form. For example, in the case of GR we get: \be B^i_{\rm GR} = \frac{{\rm Tr}\sqrt{X}}{16\pi G\Lambda} ((\sqrt{X})^{-1})^{ij} F^j. \ee Then, using the definition of $X^{ij}\sim F^i\wedge F^j$ we can easily see that in the case of GR \be B^i_{\rm GR}\wedge B^j_{\rm GR}\sim \delta^{ij}, \ee which is the usual "metricity" equation of the Plebanski formulation of GR \cite{Plebanski:1977zz}. Another example is that of the topological theory (\ref{top-action}). In this case the $B$-field is given by: \be B_{\rm top}^i = \frac{1}{\kappa} F^i, \ee and the field equation (\ref{feqs}) is satisfied automatically as a consequence of the Bianchi identity $D_A F=0$. \subsection{Symplectic structure} The computation of the first variation in the previous subsection also gives us the symplectic structure of the theory. Thus, the phase space of the theory is the space of all solutions of (\ref{feqs}), and the symplectic structure can be obtained by considering the boundary term that was neglected in passing from (\ref{first-var}) to (\ref{feqs}). The integral of the boundary term gives rise to an integral over the spatial slice $\Sigma$ of the following quantity \be \Theta := \frac{1}{2{\mathrm i}} \int_\Sigma B^i \wedge \delta A^i, \ee where $B^i$ is as in (\ref{B}). This is a one-form on the phase space of the theory. Its exterior derivative produces the symplectic two-form. We see that the significance of the quantity $B^i$ defined by (\ref{B}) is that its spatial projection plays the role of the momentum canonically conjugate to the spatial projection of the connection $A^i$. We emphasise that in the present "pure connection" formulation, the two-form $B^i$ is not independent and is a function of the connection field. A formulation that "integrates in" the two-form field as an independent variable is possible, and has been studied in previous works by the author, but will not be considered here. \subsection{Gauge invariance} Let us now verify by an explicit computation that our theory is invariant under diffeomorphisms as well as ${\rm SO}(3,{\mathbb C})$ rotations. This is of course expected, because the action was constructed in such a way that these invariances hold. However, an explicit verification of this fact will allow us to establish some identities that will be useful below. The gauge transformations act on the connection field as follows \be \delta_\xi A^i_\mu = \xi^\alpha F_{\mu\alpha}^i, \qquad \delta_\phi A^i_\mu = D_{A\, \mu}\phi^i. 
\ee The first of these transformations can be seen to be a diffeomorphism corrected by a gauge transformation, while the second one is the usual gauge rotation with the parameter $\phi^i$. It is not too difficult to prove the invariance of our action (\ref{action}) under these transformations. Let us first consider the diffeomorphisms. The variation of the action (\ref{first-var}) becomes proportional to \be\label{0-diff-inv-1} \int_M d^4x\, \frac{\partial f}{\partial \tilde{X}^{ij}} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^i_{\mu\nu} D_{A\,\rho} \xi^\alpha F^j_{\sigma\alpha}. \ee We now need some identities. First we note that one can write the Bianchi identity $D_A F^i=0$ as \be\label{bianchi} D_{A\,[\mu} F^i_{\nu]\rho}=-\frac{1}{2}D_{A\,\rho} F_{\mu\nu}^i. \ee Another identity that we need is \be\label{FF-ident} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^{(i}_{\mu\nu} F^{j)}_{\sigma\alpha}=-\frac{1}{4} \delta^\rho_\alpha \tilde{\epsilon}^{\mu\nu\gamma\delta} F^i_{\mu\nu} F^j_{\gamma\delta}=-\delta^\rho_\alpha \tilde{X}^{ij}, \ee where $\delta^\rho_\alpha$ is the Kronecker delta. Note that the symmetrisation is taken on the left hand-side. The above two identities, as well as the definition (\ref{X}) of the matrix $\tilde{X}^{ij}$, allow us to rewrite (\ref{0-diff-inv-1}) as \be\label{0-diff-inv-4} -\int_M d^4x\, \frac{\partial f}{\partial \tilde{X}^{ij}} \left( \tilde{X}^{ij} \partial_\alpha \xi^\alpha + \frac{1}{2} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^i_{\mu\nu} \xi^\alpha D_{A\,\alpha} F_{\rho\sigma}^j\right)= -\int_M d^4x\, \frac{\partial f}{\partial \tilde{X}^{ij}} D_{A\,\alpha} ( \xi^\alpha \tilde{X}^{ij}) . \ee Integrating by parts, this becomes equal to \be\label{0-diff-inv-2} \int_M d^4x\, \xi^\alpha \tilde{X}^{ij} D_{A\,\alpha} \frac{\partial f}{\partial \tilde{X}^{ij}} . \ee We should now see that the integrand here is zero. This follows from the homogeneity of the function $f$. Indeed, we have \be\label{f-hom} \tilde{X}^{ij} \frac{\partial f}{\partial \tilde{X}^{ij}} =f \ee from the fact that $f$ is a homogeneous function of degree one. Let us now apply the operator of partial derivative $\partial_\mu$ to both sides of this equation. We get \be ( \partial_\mu \tilde{X}^{ij} ) \frac{\partial f}{\partial \tilde{X}^{ij}} + \tilde{X}^{ij} \partial_\mu \frac{\partial f}{\partial \tilde{X}^{ij}} = \partial_\mu f = \frac{\partial f}{\partial \tilde{X}^{ij}} \partial_\mu \tilde{X}^{ij}. \ee Comparing the two sides we see that \be\label{X-df} \tilde{X}^{ij} \partial_\mu \frac{\partial f}{\partial \tilde{X}^{ij}} =0, \ee which is almost the integrand in (\ref{0-diff-inv-2}), except for the fact that we have the covariant derivative in (\ref{0-diff-inv-2}). Let us now consider the difference between the covariant and the usual derivatives. We have \be\label{0-diff-inv-3} \tilde{X}^{ij} (D_{A\,\mu} - \partial_\mu ) \frac{\partial f}{\partial \tilde{X}^{ij}}= 2 \tilde{X}^{ij} \epsilon^{ikl} A_\mu^k \frac{\partial f}{\partial \tilde{X}^{lj}}. \ee The quantity here is zero in view of the gauge invariance of the function $f$. Indeed, under infinitesimal gauge transformations an ${\mathfrak su}(2)\otimes_S{\mathfrak su}(2)$-valued matrix $\tilde{X}^{ij}$ transforms as \be \delta_\phi \tilde{X}^{ij} = \epsilon^{ikl} \phi^k \tilde{X}^{lj} + \epsilon^{jkl} \phi^k \tilde{X}^{il}. 
\ee Then the statement that $f$ is an ${\rm SO}(3,{\mathbb C})$ invariant function becomes \be\label{X-comm-df} \epsilon^{ikl} \tilde{X}^{kj} \frac{\partial f}{\partial \tilde{X}^{lj}} = 0, \ee which can be expressed in words by saying that the commutator of the matrix $\tilde{X}^{ij}$ with the matrix $\partial f/\partial \tilde{X}^{ij}$ of the first derivatives of the defining function is zero. The identity (\ref{X-comm-df}) immediately implies that the difference of the derivatives in (\ref{0-diff-inv-3}) is zero and thus \be\label{X-Df} \tilde{X}^{ij} D_{A\,\mu} \frac{\partial f}{\partial \tilde{X}^{ij}} =0, \ee which proves the invariance of the action (\ref{action}) under diffeomorphisms. Let us now prove the invariance of (\ref{action}) under the gauge rotations. The variation of the action in this case becomes proportional to \be \int_M d^4x\, \frac{\partial f}{\partial \tilde{X}^{ij}} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^i_{\mu\nu} D_{A\,\rho} D_{A\,\sigma} \phi^j. \ee Expressing the commutator of the covariant derivatives as the commutator with the curvature, and recalling the definition (\ref{X}) of the matrix $\tilde{X}^{ij}$ we get \be 4 \int_M d^4x\, \frac{\partial f}{\partial \tilde{X}^{ij}} \epsilon^{jkl} \tilde{X}^{ik} \phi^l, \ee which is zero in view of (\ref{X-comm-df}). This proves the invariance of the action (\ref{action}) under the ${\rm SO}(3,{\mathbb C})$ rotations. \subsection{Second variation} We can now compute the second variation, in preparation for the next section treatment. We have \be\label{sec-var} \delta^2 S[A] = (1/{\mathrm i}) \int_M d^4x\, \left( \frac{\partial^2 f}{\partial \tilde{X}^{ij}\partial \tilde{X}^{kl}} \delta \tilde{X}^{ij} \delta \tilde{X}^{kl} +\frac{\partial f}{\partial \tilde{X}^{ij}} \delta^2 \tilde{X}^{ij} \right). \ee Here the first variation of $\tilde{X}^{ij}$ was already computed above and reads \be\label{X-1} \delta \tilde{X}^{ij} =\frac{1}{2} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^{(i}_{\mu\nu} D_{A\,\rho} \delta A^{j)}_\sigma\, . \ee The second variation reads \be\label{X-2} \delta^2 \tilde{X}^{ij} =\frac{1}{2} \tilde{\epsilon}^{\mu\nu\rho\sigma} D_{A\,\mu} \delta A^i_\nu D_{A\,\rho} \delta A^j_\sigma + \frac{1}{2} \tilde{\epsilon}^{\mu\nu\rho\sigma} F^{(i}_{\mu\nu} \epsilon^{j)kl} \delta A^k_\rho \delta A^l_\sigma \, . \ee \section{Constant curvature background} \label{sec:backgr} In this and the next section, to get a better feel for our theory and also to prepare for its quantization, we consider the action (\ref{action}) expanded around a specific background connection $A^i$. \subsection{Second order action around a general background} We now write our connection as the background $A^i$ plus a fluctuation ${\mathcal A}^i$, and obtain the part of the action quadratic in ${\mathcal A}^i$ directly from (\ref{sec-var}). Thus, we divide the second variation by 2, replace $\delta A^i_\mu$ by ${\mathcal A}_\mu^i$, and get the following Lagrangian \be\label{act-2} (8{\mathrm i}) {\cal L}_{\mathcal A} = \frac{\partial^2 f}{\partial \tilde{X}^{ij}\partial \tilde{X}^{kl}} (\tilde{\epsilon}^{\mu\nu\rho\sigma} F^i_{\mu\nu} D_{A\,\rho} {\mathcal A}^j_\sigma) (\tilde{\epsilon}^{\alpha\beta\gamma\delta} F^k_{\alpha\beta} D_{A\,\gamma} {\mathcal A}^l_\delta) \\ \nonumber + 2\frac{\partial f}{\partial \tilde{X}^{ij}} \tilde{\epsilon}^{\mu\nu\rho\sigma} \left( D_{A\,\mu} {\mathcal A}^i_\nu D_{A\,\rho} {\mathcal A}^j_\sigma + F^{i}_{\mu\nu} \epsilon^{jkl} {\mathcal A}^k_\rho {\mathcal A}^l_\sigma \right) \, . 
\ee In subsequent work this action will be used for a background field method one-loop computation, but here we specialise to a particular background. \subsection{The background} The background that we take is a constant curvature one and can be defined as follows. As we have already mentioned, any connection defines a (conformal) metric, obtained by requiring the triple of curvature two-forms $F^i$ to be (anti-)self-dual. Because of this, in practice, to specify a background it is easier to start with the corresponding metric, and then construct the connection, so that the triple of curvature two-forms for this connection is (anti-)self-dual with respect to the metric one started from. This is the procedure we follow. So, we first describe the corresponding metric, and then use it to construct the background connection in question. Thus, let $ds^2$ be the interval for a constant curvature metric in 4 spacetime dimensions (de Sitter space). For our purposes it is convenient to describe it using the flat slicing so that the metric reads: \be\label{metric-backgr} ds^2 = a^2(\eta)( - d\eta^2 + \sum_{i=1}^3 (dx^i)^2), \ee where $\eta$ is the conformal time and $x^i$ are the spatial coordinates. For the de Sitter metric the function $a^2(\eta)$ is a specific one, see below. The tetrad $\theta^I, I=0,1,2,3$ associated to the above metric reads: \be\label{tetrad} \theta^0 = a d\eta, \qquad \theta^i = a dx^i, \ee so that $ds^2=\theta^I\otimes \theta^J\eta_{IJ}$, where $\eta_{IJ}={\rm diag}(-1,1,1,1)$. As is known to anyone with experience of the Plebanski formulation of General Relativity \cite{Plebanski:1977zz}, it is very convenient to define the following set of objects (two-forms): \be\label{Sigma} \Sigma^i := {\mathrm i} \theta^0 \wedge \theta^i - \frac{1}{2} \epsilon^{ijk}\theta^j \wedge \theta^k, \ee where, as before, $i=1,2,3$. Explicitly, for the metric (\ref{metric-backgr}) we have: \be\label{Sigma-backgr} \Sigma^i = a^2 \left( {\mathrm i} d\eta\wedge dx^i -\frac{1}{2} \epsilon^{ijk} dx^j \wedge dx^k\right). \ee As is not hard to check, in general the two-forms (\ref{Sigma}) are anti-self-dual \be\label{asd} \frac{{\mathrm i}}{2}\epsilon_{\mu\nu}{}^{\rho\sigma} \Sigma^i_{\rho\sigma}=\Sigma^i_{\mu\nu} \ee with respect to the Hodge star operation on two-forms defined by the metric $ds^2=\theta^I\otimes \theta^J\eta_{IJ}$. Here the object $\epsilon_{\mu\nu}{}^{\rho\sigma}$ is obtained from the volume form $\epsilon_{\mu\nu\rho\sigma}$ by raising two of its indices using the metric, and in our conventions $\epsilon^{0123}=+1$. Thus, (\ref{Sigma-backgr}) are anti-self-dual with respect to the metric (\ref{metric-backgr}). Let us now introduce our background connection. It is the ${\rm SU}(2)$ connection $A_0^i$ such that the covariant exterior derivative of $\Sigma^i$ given by (\ref{Sigma-backgr}) with respect to $A_0^i$ is zero. In other words: \be 0=D_{A_0} \Sigma^i =d\Sigma^i + \epsilon^{ijk} A_0^j \wedge \Sigma^k. \ee It is not hard to solve this equation for $A_0^i$ explicitly. We get \be\label{conn-backgr} A_0^i = {\mathrm i} {\mathcal H} dx^i, \ee where \be {\mathcal H}=\frac{a'}{a}, \ee and the prime denotes the derivative with respect to the conformal time. It is not hard to show that the connection (\ref{conn-backgr}) is just the (anti-) self-dual part of the spin connection compatible with the tetrad (\ref{tetrad}). We have not yet used (imposed) the condition that the background (\ref{metric-backgr}) is of constant curvature. 
This condition can be written as \be\label{curv-0} F^i(A_0) = M_0^2 \Sigma^i, \ee where we have introduced a dimensionful parameter $M_0$, with dimensions of mass. The equation (\ref{curv-0}) states that the curvature of $A_0^i$ is constant, with the scale set by $M_0^2$. For the connection (\ref{conn-backgr}) this gives two equations: \be {\mathcal H}'={\mathcal H}^2=a^2 M_0^2, \ee where the second equality is the familiar Friedmann equation. Its solution can be written as $a=(M_0 (\eta_{max}-\eta))^{-1}$, where $\eta_{max}$ is an integration constant. This means that the physical time $t$ (obtained from $a d\eta=dt$) is determined by the relation $(M_0 (\eta_{max}-\eta))^{-1}=e^{M_0 t}$, where a convenient choice of the integration constant was made. Thus we have $a=e^{M_0 t}$ and therefore an exponentially expanding Universe. The constant curvature metric discussed above (de Sitter space) is of course a solution of Einstein's theory. The quantity $M_0$ is then related to the cosmological constant $\Lambda$ via $M_0^2=\Lambda/3$. In the case of GR the cosmological constant is a parameter of the theory. In the case of our theories, however, there is no similar parameter in the Lagrangian, so we will not in general be able to identify $M_0$ with any $\Lambda$. However, we shall see below that a certain analog of $\Lambda$ can be defined for any of our theories by evaluating the action on the background (\ref{conn-backgr}). We thus take the constant curvature connection \be\label{A0} A_0^i= {\mathrm i} M_0 a dx^i \ee as the background for the perturbative expansion of (\ref{action}). Note that, as far as the background is concerned, the flat limit $M_0\to 0$ can be taken without any difficulty. In this limit $a\to 1$ and $A_0^i\to 0$. \subsection{Action evaluated on the background} In GR the gravitational action evaluated on the de Sitter metric is proportional to the volume of the Universe. Indeed, the Einstein-Hilbert action for the signature $(-,+,+,+)$ reads \be\label{EH} S_{\rm EH}[g]=-\frac{1}{16\pi G} \int (R-2\Lambda)\sqrt{-g}\, d^4x. \ee On a constant curvature background (in 4 dimensions) $R=4\Lambda$ and we get \be\label{SEH0} S^0_{\rm EH}= -\frac{\Lambda}{8\pi G} \int \sqrt{-g}\, d^4x. \ee Let us see what our action evaluated on (\ref{A0}) gives us. Using \be \tilde{\epsilon}^{\mu\nu\rho\sigma} \theta_\mu^0 \theta_\nu^i \theta_\rho^j\theta_\sigma^k = \sqrt{-g} \epsilon^{ijk}, \ee where $\sqrt{-g}$ is the square root of the determinant of the metric $ds^2=\theta^I\otimes \theta^J\eta_{IJ}$, we easily get \be \Sigma^i\wedge \Sigma^j = -2{\mathrm i} \sqrt{-g} \, \delta^{ij} d^4x\, , \ee where $\Sigma^i$ are the anti-self-dual forms (\ref{Sigma}). Thus, the matrix $\tilde{X}^{ij}$ at the background is equal to \be\label{X0} \tilde{X}^{ij}_0 = -2{\mathrm i} M_0^4 \sqrt{-g} \, \delta^{ij}\, , \ee i.e., is proportional to the identity matrix. Thus, the value of the action (\ref{action}) on the background is \be\label{backgr-action} S[A_0]=-2M_0^4 f_0 \int \sqrt{-g} \, d^4x, \ee where $f_0:=f(\delta)$ is the value of the defining function at the identity matrix $X^{ij}=\delta^{ij}$. Thus, for the function $f$ that corresponds to GR we expect (\ref{SEH0}) to be equal to (\ref{backgr-action}) and thus \be\label{M0-Lambda} 2M_0^4 f_0 = \frac{\Lambda}{8\pi G}. \ee As is shown in the Appendix, for the defining function of GR $f_0= 9/(16\pi G\Lambda)$, and so the relation (\ref{M0-Lambda}) holds. 
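Explicitly, this is a one-line check: with $M_0^2=\Lambda/3$ and $f_0=9/(16\pi G\Lambda)$ one has \be 2M_0^4 f_0 = 2\left(\frac{\Lambda}{3}\right)^2 \frac{9}{16\pi G\Lambda} = \frac{\Lambda}{8\pi G}, \ee in agreement with (\ref{M0-Lambda}). 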
For diffeomorphism invariant gauge theories different from GR we have neither $G$ nor $\Lambda$ parameters. The only parameters of the theory are those (dimensionless) parameters arising by expanding the defining function. The sole dimensionful parameter comes in when the background curvature parameter $M_0$ is chosen. The relation (\ref{M0-Lambda}) then shows that we have a natural analog of the ratio $\Lambda/G$ present in our theory, given by the product of $M_0^4$ and the defining function evaluated at the identity matrix. However, there is as yet no natural way to define either $\Lambda$ or $G$. Indeed, we could choose to expand a theory with given $f$ around a background with any value of $M_0$, so there is no reason for the identification $M_0^2=\Lambda/3$ as in GR. Similarly, without analyzing how our gravitons interact it is impossible to determine any analog of Newton's constant for our theory. We note, however, that since for GR we have $f_0\sim 10^{120}$, we should expect that for the defining functions of interest the value of the defining function on the identity matrix is extremely large. This knowledge will be of help when we analyze the graviton self-interactions. \subsection{Linearized action} We first check that the constant curvature background (\ref{A0}) is a solution of (\ref{feqs}) and then evaluate the second variation of the action (\ref{sec-var}) at the background. The derivatives of (\ref{f}) at the identity matrix are easily computed. Let us first write down the general expression for the first derivative. We omit the tilde from $X$ for brevity (we can always pull out the density weight factor from the function $f$ using the homogeneity). We have \be\label{der-1} \frac{\partial f}{\partial X^{ij}} = \delta^{ij} \chi(X) + {\rm Tr}(X)\chi'_1(X) \left( \frac{2X^{ij}}{({\rm Tr}(X))^2} - \frac{2{\rm Tr}(X^2)}{({\rm Tr}(X))^3} \delta^{ij} \right) \\ \nonumber + {\rm Tr}(X)\chi'_2(X) \left( \frac{3(X^2)^{ij}}{({\rm Tr}(X))^3} - \frac{3{\rm Tr}(X^3)}{({\rm Tr}(X))^4} \delta^{ij} \right), \ee where $\chi_{1,2}'(X)$ are the derivatives of the function $\chi$ with respect to the first and second arguments, evaluated at $X$. It is easy to check that for $X^{ij}_0\sim \delta^{ij}$ the second and third terms on the right are zero, and we have: \be\label{fp} \frac{\partial f}{\partial X^{ij}}\Big|_{X_0} = \delta^{ij} \chi(X_0) = \frac{f_0}{3} \delta^{ij}. \ee We note that this is $M_0$ independent. We remind the reader that the background value $X_0$ of the matrix $X$ is given by (\ref{X0}) above. Let us now compute the matrix of second derivatives of the defining function. Since the expressions in brackets in (\ref{der-1}) become zero when evaluated on $X_0$, the only way to get a non-zero result in the second derivative is to act with a derivative on these expressions. We get \be\label{fpp} \frac{\partial^2 f}{\partial X^{ij}\partial X^{kl}}\Big|_{X_0} = \frac{2(\chi_1'(X_0)+\chi_2'(X_0))}{{\rm Tr}(X_0)} P^{ij|kl} \, , \ee where \be P^{ij|kl}:= I^{ij|kl} -\frac{1}{3}\delta^{ij}\delta^{kl}, \qquad I^{ij|kl} := \frac{1}{2} \left(\delta^{ik}\delta^{jl}+\delta^{il}\delta^{jk}\right). \ee We have introduced a special notation $P^{ij|kl}$ for the matrix that appeared in (\ref{fpp}), as this is just the projector onto the symmetric traceless part: $P^{ij|kl} \delta_{ij} = 0$, and similarly for the contraction with $\delta_{kl}$. 
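For completeness, a short verification of the projector properties of $P^{ij|kl}$ (elementary, but perhaps useful to record): using $\delta^{kl}\delta_{kl}=3$ and $I^{ij|kl}\delta_{kl}=\delta^{ij}$ we have \be P^{ij|kl}\delta_{kl} = \delta^{ij} - \frac{1}{3}\delta^{ij}\, \delta^{kl}\delta_{kl} = 0, \ee while \be P^{ij|mn} P^{mn|kl} = I^{ij|kl} - \frac{1}{3}\delta^{ij}\delta^{kl} - \frac{1}{3}\delta^{ij}\delta^{kl} + \frac{1}{3}\delta^{ij}\delta^{kl} = P^{ij|kl}, \ee so $P^{ij|kl}$ is indeed the projector onto the symmetric tracefree part of a symmetric $3\times 3$ matrix. 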
Having evaluated the derivatives of the defining function at the background, we are ready to specialise (\ref{act-2}) to our constant curvature background (\ref{A0}). However, let us first check that our chosen background is indeed a solution of the field equations (\ref{feqs}). With the quantity $\chi(X_0)$ being a constant, the background $B^i_0\sim F^i(A_0)$, and thus the field equations (\ref{feqs}) are satisfied (in view of the Bianchi identity). Let us now consider the second term in (\ref{sec-var}). Since the matrix of the first derivatives is proportional to the identity matrix (\ref{fp}) with a constant proportionality coefficient we need to consider the integral of $\delta_{ij} \delta^2 \tilde{X}^{ij}$ over the manifold. Let us see that this is a total derivative. We have: \be\label{comp-sec-var-1} \int_M d^4x \, \delta_{ij} \delta^2 \tilde{X}^{ij} = \frac{1}{2} \int_M \left( D_A \delta A^i \wedge D_A \delta A^i + F^i(A) \wedge \epsilon^{ijk} \delta A^j \wedge \delta A^k \right), \ee where we wrote everything in terms of forms (our form convention is $F=(1/2)F_{\mu\nu} dx^\mu\wedge dx^\nu$). Integrating by parts (and neglecting the total derivative term), the first term becomes \be \frac{1}{2} \int_M \delta A^i \wedge D_A D_A \delta A^i = \frac{1}{2} \int_M \delta A^i \wedge \epsilon^{ijk} F^j(A) \wedge \delta A^k, \ee which is minus the second term in (\ref{comp-sec-var-1}), and so (\ref{comp-sec-var-1}) is a total derivative. We therefore only need to consider the first term in (\ref{sec-var}). Let us write this directly in terms of the two-forms $\Sigma^i$ by substituting the expression (\ref{curv-0}) for the background curvature. Using the anti-self-duality (\ref{asd}) of $\Sigma^i$ we have the following compact expression for the second variation \be\label{lin-act-1} \delta^2 S \Big|_{A_0}= - g_0 \int_M d^4x \sqrt{-g} \, P^{ij|kl} (\Sigma^{i\,\mu\nu} D_{A_0\, \mu} \delta A^j_\nu) (\Sigma^{k\,\rho\sigma} D_{A_0\, \rho} \delta A^l_\sigma), \ee where we have introduced a notation \be\label{g} g_0:= \frac{\chi_1'(X_0)+\chi_2'(X_0)}{3}. \ee Note that the factors of $M_0$ have cancelled from this result. The combination (\ref{g}) of the first derivatives of the defining function plays an important role below. Thus, we shall see that the constant determining the strength of self-interactions of our gravitons will be built from $M_0$ and $g_0$. We note that for the case of GR the quantity $g_0$ is of the same order as $f_0$ (see Appendix), and thus is very large. \subsection{High energy limit} For applications in quantum gravity one is mostly interested in the UV behaviour of the theory. Thus, we are interested in its behaviour at energies $E\gg M_0$. In this case we can neglect the fact that the background is curved, and consider an effective theory in Minkowski space. As we shall see in this subsection, this certainly works at the linearized level. At the level of interactions we shall face some puzzles (related to the fact that the GR action blows up in the limit $\Lambda\to 0$), to be discussed below. At energies $E\gg M_0$ the terms in the covariant derivative containing the usual derivative become much larger than the terms containing the background connection (the latter being of the order $M_0$). Thus, in the high energy limit we can replace the covariant derivatives with the ordinary ones, and neglect the fact that the background is curved. 
However, the field $\delta A^i$ in the linearized action (\ref{lin-act-1}) is not canonically normalized, as there is a numerical constant $g_0$ in front of the action. Absorbing this constant into the linearized fields by rescaling we obtain the following action \be\label{lin-action} S_{\rm lin}[a] = -\frac{1}{2} \int_M d^4x \, P^{ij|kl} (\Sigma^{i\,\mu\nu} \partial_\mu a^j_\nu) (\Sigma^{k\,\rho\sigma}\partial_\rho a^l_\sigma), \ee where the (rescaled) linearized field is now called $a^i_\mu=\sqrt{g_0}(\delta A_\mu^i)$, and we have divided the second variation of the action by 2 to get the correct linearized action. The two-forms $\Sigma^i_{\mu\nu}$ are now those corresponding to the Minkowski spacetime \be \Sigma^i = {\mathrm i} dt\wedge dx^i - \frac{1}{2} \epsilon^{ijk} dx^j \wedge dx^k. \ee Thus, in the high energy limit one effectively works in the Minkowski background, and the connection perturbation has been rescaled to have a canonically normalized kinetic term. The operation of absorbing $g_0$ into the connection field is not that innocuous, as $g_0$ blows up in the limit $\Lambda\to 0$. But if one is not taking this limit, and only considers connection perturbations that change on scales much smaller than the curvature scale, then it is natural to absorb the (very large) quantity $g_0$ into the connection field to make it canonically normalized. We shall now study the action (\ref{lin-action}) in some detail, to see that it does describe the usual Minkowski spacetime gravitons. After this we turn to interactions. Let us also make a quick note about the dimensions of all the fields. As we have already mentioned, we take the connection to have the mass dimension one, as is appropriate for a field that can be combined with a derivative into a covariant derivative operator. Then the curvature has mass dimension two, the matrix $X^{ij}$ has mass dimension 4, the matrix of first derivatives of the defining function is dimensionless, and the matrix of second derivatives has dimension minus 4. The two-forms $\Sigma^i$ that are constructed from the dimensionless metric are dimensionless. The constant $g_0$ introduced in (\ref{g}) is a sum of derivatives of a function of dimensionless arguments, and thus is dimensionless. Overall, we see that the mass dimension of the integrand in (\ref{lin-action}) is 4, as needed. \subsection{Symmetries} We have started from a diffeomorphism invariant action (\ref{action}) and linearized it around the constant curvature (and then zero curvature) background. We should check that the linearized action that we have obtained is still diffeomorphism invariant. As before, the diffeomorphisms can be lifted to the ${\rm SU}(2)$ bundle as follows: \be \delta_\eta A^i_\mu = \eta^\alpha F_{\mu\alpha}^i(A). \ee Here $\eta^\mu$ is the vector field (of mass dimension minus one) generating an infinitesimal diffeomorphism, and $F^i_{\mu\nu}(A)$ is the curvature of $A_\mu^i$. It can be checked that the above formula is a diffeomorphism corrected by a gauge transformation. Replacing the background curvature by its value (\ref{curv-0}) we get the following formula for an infinitesimal variation \be \delta_\eta a^i_\mu = M_0^2 \eta^\alpha \Sigma^i_{\mu\alpha}. \ee This suggests that we consider vector fields $\xi^\mu =M_0^2 \eta^\mu$ of mass dimension one that are finite in the limit $M_0 \to 0$. Thus, let us consider the following variations \be\label{diffeo} \delta_\xi a^i_\mu = \xi^\alpha \Sigma^i_{\mu\alpha}, \ee which will play the role of infinitesimal diffeomorphisms for the theory (\ref{lin-action}). 
The other set of transformations that we have to consider consists of the gauge symmetries. An infinitesimal gauge transformation is given by \be\label{gauge} \delta_\phi a^i_\mu = \partial_\mu \phi^i. \ee Let us now verify that the linearized action is invariant under (\ref{diffeo}) and (\ref{gauge}). For this we will need the following identity \be\label{ident} \Sigma^{i\,\mu\nu} \Sigma^j_{\nu\rho} = -\delta^{ij} \eta^\mu{}_\rho + \epsilon^{ijk} \Sigma^{k\,\mu}{}_\rho\, , \ee which can be verified by a direct computation. Here $\eta^{\mu\nu}$ is the Minkowski metric. Let us first consider diffeomorphisms. Thus, consider the quantity \be \Sigma^{i\,\mu\nu} \partial_\mu \delta_\xi a^j_\nu =\Sigma^{i\,\mu\nu} \partial_\mu \xi^\alpha \Sigma^j_{\nu\alpha}. \ee Using (\ref{ident}) we see that the $ij$-symmetric part of this quantity is proportional to $\delta^{ij}$. This, however, gives zero when contracted with the projector in (\ref{lin-action}). Thus, the invariance under infinitesimal changes of coordinates is established. The invariance under gauge transformations (\ref{gauge}) follows by noting that the quantity $\Sigma^{i\,\mu\nu}$ is anti-symmetric and therefore $\Sigma^{i\,\mu\nu} \partial_\mu \delta_\phi a^j_\nu=0$. Since our gauge theory action (\ref{lin-action}) is both diffeomorphism and gauge invariant we can already make a preliminary count of the number of propagating DOF. Indeed, the configuration variable of the theory should be the spatial projection of the connection. This has $3\times 3=9$ components. Subtracting 4 diffeomorphisms and 3 gauge DOF leaves us with 2 expected propagating DOF. Let us confirm this count by the Hamiltonian analysis of the linearized theory. This will also help us to see the gravitons explicitly. \subsection{Hamiltonian analysis} In this subsection we give a more detailed version of the demonstration, sketched in the introduction, of the spin two nature of our theory. To obtain the action in the Hamiltonian form let us expand the quantity that appears as the main building block of the linearized action (\ref{lin-action}). We have \be\label{ham-1} \Sigma_i^{\mu\nu} \partial_\mu a^j_\nu = {\mathrm i} \partial_i a_0^j - {\mathrm i} \dot{a}_i^j - \epsilon_i^{kl} \partial_k a_l^j. \ee Here we have identified the spatial $a$ and internal $i$ indices using e.g. the component $\delta_a^i:=\Sigma_{0a}^i$ of the background two-form, and $\partial_i$ are the partial derivatives with respect to spatial coordinates. We raise and lower spatial indices freely using the $\delta^{ij}$ metric. It is now easy to compute the conjugate momenta. Since the time derivatives that appear in the action are those of the spatial projection of the connection, it is clear that only these components can have non-zero momenta. However, since the projector is involved in (\ref{lin-action}), we see that only the symmetric tracefree part of $a_i^j$ has non-zero momenta. These are \be\label{pi-conn} \pi^{ij} = P^{ij|kl} \left( \dot{a}_{kl} - \partial_k a_{0\, l} - {\mathrm i} \epsilon_{kmn} \partial_m a_{n\, l} \right). \ee We note that the action (\ref{lin-action}) does not at all depend on the trace part of the spatial connection $a_i^j$. However, there is a dependence on the anti-symmetric (and of course symmetric) parts. Let us separate the trace, symmetric and anti-symmetric parts of $a_i^j$ and write \be\label{conn-decomp} a_{ij} = a_{ij}^s + b\delta_{ij} + \epsilon_{ijk} c_k . \ee Here $a^s_{ij}$ is the symmetric and tracefree part, and $b, c_i$ parameterise the trace and anti-symmetric parts respectively. 
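It is perhaps worth recording the simple component count behind this decomposition: the $3\times 3$ matrix $a_{ij}$ carries \be 9 = \underbrace{5}_{a^s_{ij}} + \underbrace{1}_{b} + \underbrace{3}_{c_i} \ee components, which together with the 3 components of $a_0^i$ accounts for all 12 components of $a_\mu^i$. As we shall see momentarily, $b$ is pure gauge with respect to temporal diffeomorphisms, $c_i$ and $a_{0\,i}$ enter only through a single spin one combination that plays the role of a Lagrange multiplier, and of the 5 components of $a^s_{ij}$ the 3 longitudinal ones are removed by the gauge symmetry, leaving the 2 propagating DOF counted above. 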
Let us now rewrite the expression for the momentum using this decomposition. We have \be \pi^{ij} = {\dot{a}}^{s\, ij} - {\mathrm i} \epsilon^{ikl} \partial_k a_{l}^{s\, j} + P^{ij|kl} \partial_k ({\mathrm i} c_l-a_{0\, l}). \ee We note that the second term here is automatically symmetric and tracefree. On the other hand, it is clear that the Lagrangian density in (\ref{lin-action}) is \be {\cal L}= \frac{(\pi^{ij})^2}{2}. \ee We see that the Lagrangian (density) is independent of $b$. This has a simple interpretation. Indeed, computing the infinitesimal diffeomorphism action on the temporal and spatial projections of the connection we find \be \delta_\xi a_0^i = {\mathrm i} \xi^i, \qquad \delta_\xi a^i_j = -{\mathrm i} \xi^0 \delta^i_j - \epsilon^i{}_{jk} \xi^k. \ee This in particular means that the trace part $b$ of the matrix $a_i^j$ is a pure gauge quantity that can be set to zero by a temporal diffeomorphism. We also see that the Lagrangian depends on the anti-symmetric part of the spatial connection and on the temporal components only in the combination ${\mathrm i} c_i - a_{0\, i}$. Indeed, it is easy to check that precisely this combination is invariant under spatial diffeomorphisms, as the anti-symmetric component transforms as $\delta_\xi c_i = \xi_i$. Let us denote the invariant combination by $\phi_i$. As we shall soon see, it will become a generator of infinitesimal gauge rotations in our theory. Thus, we finally rewrite the momentum as \be \pi^{ij} = {\dot{a}}^{s\, ij} - {\mathrm i} \epsilon^{ikl} \partial_k a_{l}^{s\, j} + P^{ij|kl} \partial_k \phi_l, \ee and compute the Hamiltonian density as ${\cal H}=\pi^{ij} \dot{a}_{ij}^s -{\cal L}$. We get \be {\cal H}=\frac{(\pi^{ij})^2}{2}+{\mathrm i} \pi^{ij} \epsilon_i{}^{kl}\partial_k a_{lj}+\phi_i \partial_j \pi^{ij}, \ee where we have dropped the index $s$ from $a^s_{ij}$ for brevity. Thus, now all the dynamical fields appearing in the Hamiltonian are symmetric tracefree tensors. The quantity $\phi_i$ is a Lagrange multiplier, which serves as a generator of ${\rm SU}(2)$ rotations on the connection. Indeed, the Poisson bracket of the integrated last term with the connection gives \be \delta_\phi a_{ij} = \partial_{(i} \phi_{j)}, \ee which is just the (symmetrised) gauge transformation. To see the structure of the arising Hamiltonian it is convenient to fix the gauge and require the connection to be transverse \be \partial^i a_{ij}=0. \ee The momentum is required to be transverse by the condition obtained by varying the action with respect to the Lagrange multipliers $\phi_i$. So, it is now clear that the reduced phase space of our linearized system is parameterised by two symmetric, tracefree and transverse matrices $a_{ij}$ and $\pi_{ij}$. This corresponds to two propagating DOF. Let us now see what the dynamics becomes. To unravel the structure of the arising expression for the (reduced) Hamiltonian let us further rewrite it as \be\label{ham} {\cal H}=\frac{1}{2}(\pi^{ij}+ {\mathrm i} \epsilon^{ikl}\partial_k a_l^j)^2 + \frac{1}{2}(\partial_k a_{ij})^2. \ee Up to this point no reality conditions for the fields were specified. We can now deduce the linearized theory reality conditions from the Hamiltonian (\ref{ham}). 
Indeed, declaring the symmetric tracefree transverse connection field $a_{ij}$ to be real, and defining a new real momentum field \be p^{ij}:= \pi^{ij}+ {\mathrm i} \epsilon^{ikl}\partial_k a_l^j, \qquad p^{ij}\in {\mathbb R} \ee we can rewrite the linearized Hamiltonian in an explicitly positive definite form \be {\cal H} = \frac{1}{2}(p^{ij})^2 + \frac{1}{2}(\partial_k a_{ij})^2. \ee The field equations that follow are now the usual \be \Box \, a_{ij} = 0, \ee which is just the wave equation for the two components of the connection field $a_{ij}$. This is how gravitons are described by our gauge theory approach. We note that one can recognise in the analysis of this section the linearized version of the new Hamiltonian formulation of gravity \cite{Ashtekar:1991hf}. In particular, the arising reality conditions for the phase space fields are the same as in this formulation. Thus, even though our gauge theory starting point is a bit unconventional, the linearized theory mimics constructions familiar from other formulations. What distinguishes our linearized theory (\ref{lin-action}) from the more familiar treatment in \cite{Ashtekar:1991hf} is that no diffeomorphism constraints are left in the final result. Instead, our linearized action is simply independent of certain components of the connection field, so the theory is formulated on a smaller configuration space to start with. In other words, in our pure connection approach to gravity the Hamiltonian and diffeomorphism constraints of GR that usually require so much attention are solved once and for all by projecting out certain components of the connection field. This fact about our formulation must be very important for practical applications. And indeed, we shall see below that e.g. the issue of gauge-fixing is considerably easier here than in the case of metric-based GR. \section{Propagator} \label{sec:prop} In this section we invert the quadratic form that we have obtained by expanding the theory around the Minkowski spacetime background. In doing this we must decide on the gauge fixing. \subsection{Gauge fixing} We have seen that the action (\ref{lin-action}) is invariant under both gauge and diffeomorphism transformations, but we have also seen above that this invariance is manifested very differently in the two cases. Thus, in the case of the gauge invariance the situation is completely standard in that some of the field components have zero momenta and are thus Lagrange multipliers --- generators of gauge symmetries. In the case of diffeomorphisms the situation is very different --- we have seen that the action is simply independent of some components of the field, exactly those components that can be freely changed by performing a diffeomorphism. Thus, while there is very little choice for dealing with the gauge rotations --- we have to treat them in the usual way by fixing the gauge and thus making the unphysical components of the gauge field propagate --- we will need a different procedure for dealing with those components of the connection that get affected by diffeomorphisms. A useful analogy here is as follows. Let us consider a theory of two scalar fields $\phi, \psi$ with the Lagrangian \be {\cal L} = - \frac{1}{2}(\partial_\mu(\phi-\psi))^2. \ee It is clear that the Lagrangian is invariant under a simultaneous shift of both of the fields by some function. 
The way this is realized is that the Lagrangian is simply independent of a certain combination of the fields, namely of $\phi+\psi$, being only a function of the combination $\phi-\psi$. A natural quantization strategy in this case is to introduce a new field $\phi-\psi$ and rewrite the Lagrangian in terms of the new field only. Then only this combination of the fields is a propagating field, while the other combination $\phi+\psi$ is a fiction. In the case of the simple Lagrangian above it is very easy to see what the propagating field is. In our case (\ref{lin-action}) this is much harder. In particular, we will not be able to rewrite the full Lagrangian in a way that has only diffeomorphism invariant combinations of the connection components appearing (see, however, below for an expression for the linearized Lagrangian that depends solely on the "physical" diffeomorphism invariant components of the connection). However, an appropriate strategy is as follows. We can consider the quadratic form (\ref{lin-action}) as a form on the space of diffeomorphism equivalence classes of connections $a_\mu^i$, i.e. connections related via \be\label{sim-diff} a_\mu^i \sim a_\mu^i + \xi^\nu \Sigma^i_{\mu\nu}. \ee The quadratic form in (\ref{lin-action}) is degenerate on this space because there is still the usual gauge invariance to be taken care of. However, this gauge invariance can be dealt with in the usual way, by fixing the gauge. As we shall see below, it will be possible to find a gauge-fixing condition that is invariant under (\ref{sim-diff}). After doing this we obtain a non-degenerate quadratic form on the space of diffeomorphism classes (\ref{sim-diff}). It can be inverted, to obtain a propagator on the space of diffeomorphism classes of connections. As is standard for gauge-fixing, this procedure will make the temporal and longitudinal components of the connection propagate (and will add ghosts that will offset the effect of making these components propagate). At the same time, the components of the connection that are identified in (\ref{sim-diff}) will not be propagating, as the propagator will involve a projector on the space of diffeomorphism equivalence classes. This way of dealing with the gauge symmetries of our theory is very different from the case of metric-based GR, but is quite natural given that the diffeomorphisms are realized in our theory quite differently. Having explained the logic of our procedure, it remains to find a gauge-fixing condition that is diffeomorphism invariant. After some trial and error we found the following gauge-fixing condition to be useful: \be\label{gf-cond} \partial^\mu \Pi^{\mu i| \nu j} a_{\nu j} = \frac{2}{3} \partial^\mu \left(a_\mu^i + \frac{1}{2} \epsilon^{ijk} \Sigma^k_\mu{}^\nu a_\nu^j\right)=0, \ee where \be\label{proj-P} \Pi^{\mu i| \nu j} := \eta^{\mu\nu}\delta^{ij} + \frac{1}{3} \Sigma^{i\,\mu\rho} \Sigma_\rho^j{}^\nu = \frac{2}{3} \left(\eta^{\mu\nu} \delta^{ij} +\frac{1}{2} \epsilon^{ijk}\Sigma^{k\,\mu\nu} \right) \ee is a projector whose meaning is to be clarified below. The projector property \be \Pi^{\mu i| \nu j} \Pi_{\nu j}{}^{\rho k} = \Pi^{\mu i| \rho k}, \ee can be checked by an elementary computation. It is easy to see that our gauge-fixing condition is diffeomorphism invariant. Indeed, consider \be\label{gf-1} \xi^\nu \Sigma^i_{\mu\nu} + \frac{1}{2} \epsilon^{ijk} \Sigma^k_\mu{}^\nu \xi^\rho \Sigma^j_{\nu\rho}. 
\ee Using the algebra (\ref{ident}) of $\Sigma^i_{\mu\nu}$ matrices we see that the last term here equals \be \frac{1}{2} \epsilon^{ijk} \epsilon^{kjl} \xi^\rho \Sigma^l_{\mu\rho} = - \xi^\nu \Sigma^i_{\mu\nu} . \ee Thus, the quantity in (\ref{gf-1}) is zero, and the gauge-fixing condition (\ref{gf-cond}) is diffeomorphism-invariant. It is also clear that as far as the gauge transformations are concerned the last term in (\ref{gf-cond}) is inessential, for it is zero for any $a_\mu^i$ that is a pure gauge $a_\mu^i=\partial_\mu \phi^i$. Thus, (\ref{gf-cond}) is the usual gauge theory gauge-fixing condition, corrected by a term that is inessential as far as the behaviour under the gauge transformations is concerned. Let us now confirm that the projector $\Pi^{\mu i| \nu j}$ is just that on diffeomorphism equivalence classes of connections, and so it is natural to apply it before the usual gauge-fixing condition is imposed (to make this condition diffeomorphism invariant). We compute the action of the projector on the connection $a_\nu^j$ decomposed as in the previous subsection \be a_\nu^j = a_0^j (dt)_\nu + (a_{ij}^s + b \delta_{ij} + \epsilon_{ijk} c_k)(dx^i)_\nu. \ee The result is \be \Pi^{\mu i| \nu j} a_\nu^j = \frac{2}{3}\left( \delta^{ij} \left(\frac{\partial}{\partial t}\right)^\mu + \frac{i}{2}\epsilon^{ijk} \left( \frac{\partial}{\partial x^k}\right)^\mu \right) (a_0^j-{\mathrm i} c^j) + a_{ij}^s \left( \frac{\partial}{\partial x^j}\right)^\mu. \ee We note that the quantity $b$ got projected out, and that the projected connection depends on the temporal and the anti-symmetric spatial components of the connection only in the combination $a_0^i-{\mathrm i} c^i$, as expected from the previous section. Thus, the projector $\Pi^{\mu i| \nu j}$ is indeed just that on the diffeomorphism invariant subspace, and selects the components $a_0^i-{\mathrm i} c^i$, which play the role of the generators of the Gauss constraints, as well as $a_{ij}^s$, which are the two propagating DOF plus three longitudinal modes of the connection. As usual for a gauge theory, we shall make the components that generate the Gauss constraints, as well as the longitudinal components of the connection, propagate by adding a gauge-fixing term, and then offset their effects by adding ghosts. The projector $\Pi^{\mu i|\nu j}$ can be somewhat demystified by explaining what its spinorial analog is. Readers not familiar with Penrose's spinor language \cite{Penrose:1985jw} for gravity can skip this paragraph. Using spinors one can express the connection $a_\mu^i$ as a certain rank 4 spinor. Indeed, the spacetime index gets replaced by a pair $AA'$ of unprimed and primed spinor indices. The ${\rm SU}(2)$ index $i$ gets replaced by a pair $AB$ of two unprimed indices, which is moreover symmetric in $AB$. Thus we get $a^{AB}{}_{CC'}$ as the dynamical field of our linearized theory. It is now not hard to show that the projector $\Pi^{\mu i|\nu j}$ is simply that on the component of this spinor that is completely symmetric in all its 3 unprimed indices. Thus, schematically, $(\Pi a)_{AB\, CC'} = a_{(ABC)C'}$, where the brackets denote symmetrization. The projected-out part is $a^{AB}{}_{BA'}$, and is thus a mixed rank 2 spinor and carries precisely 4 components, as is appropriate for something that can be projected out by a diffeomorphism. As we shall note below, the linearized action (\ref{lin-action}) can be written very simply in terms of the field $a_{(ABC)C'}$.
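Let us also record, for the reader's convenience, the elementary computation behind the projector property stated above. We assume here that the algebra (\ref{ident}) takes the quaternionic form $\Sigma^{i\,\mu}{}_{\alpha}\Sigma^{j\,\alpha}{}_{\nu}=-\delta^{ij}\delta^\mu_\nu+\epsilon^{ijk}\Sigma^{k\,\mu}{}_{\nu}$ (this is our rewriting of that identity, and only this form of it is used below). Writing $\Pi=\frac{2}{3}({\rm Id}+M)$ with $M^{\mu i}{}_{\nu j}:=\frac{1}{2}\epsilon^{ijk}\Sigma^{k\,\mu}{}_{\nu}$, we get \be (M^2)^{\mu i}{}_{\rho l} = \frac{1}{4}\epsilon^{ijk}\epsilon^{jlm}\left(-\delta^{km}\delta^\mu_\rho+\epsilon^{kmn}\Sigma^{n\,\mu}{}_{\rho}\right) = \frac{1}{2}\delta^{il}\delta^\mu_\rho-\frac{1}{2}M^{\mu i}{}_{\rho l}, \ee that is $M^2=({\rm Id}-M)/2$, and therefore \be \Pi^2=\frac{4}{9}\left({\rm Id}+2M+M^2\right)=\frac{4}{9}\cdot\frac{3}{2}\left({\rm Id}+M\right)=\Pi, \ee which is the projector property.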
We now add the gauge-fixing condition squared with some parameter to the Lagrangian. Thus, we consider the following gauge-fixed Lagrangian on the space of diffeomorphism equivalence classes of connections \be\label{L-gf} {\cal L}_{\rm gf} = -\frac{1}{2} P^{ij|kl} (\Sigma^{i\,\mu\nu} \partial_\mu a_\nu^j) (\Sigma^{k\,\rho\sigma} \partial_\rho a_\sigma^l) -\frac{\alpha}{2} \left( \partial^\mu a_\mu^i -\frac{1}{2} \epsilon^{ijk} \Sigma^{j\,\mu\nu} \partial_\mu a_\nu^k \right)^2, \ee where we have changed the order of indices $jk$ in the gauge-fixing term for convenience, and absorbed the $(2/3)^2$ factor into the gauge-fixing parameter $\alpha$. As in the case of Yang-Mills theory, the idea is now to select the gauge-fixing parameter $\alpha$ so that the gauge-fixed action is as simple as possible. \subsection{The algebra of gauge-fixing} In this subsection we will simplify the expression for the gauge-fixed Lagrangian and find a useful value for the gauge-fixing parameter $\alpha$. To this end, let us first write the Lagrangian in the momentum space. Omitting the argument $\pm k$ from the Fourier components $a_\mu^i(k)$ of $a_\mu^i$ for brevity we have the following expression \be {\cal L}_{\rm gf} = -\frac{1}{2} P^{ij|kl} (\Sigma^{i\,\mu\nu} k_\mu a_\nu^j) (\Sigma^{k\,\rho\sigma} k_\rho a_\sigma^l) -\frac{\alpha}{2} \left( k^\mu a_\mu^i -\frac{1}{2} \epsilon^{ijk} \Sigma^{j\,\mu\nu} k_\mu a_\nu^k \right)^2. \ee Let us expand the last term. Introducing a compact notation $(ka^i):=k^\mu a_\mu^i$ and expanding the product of two $\epsilon$'s we have \be\label{gf-2} \left( (ka^i) -\frac{1}{2} \epsilon^{ijk} \Sigma^{j\,\mu\nu} k_\mu a_\nu^k \right)^2 =(ka^i)^2 - (ka^i)\epsilon^{ijk} \Sigma^{j\,\mu\nu} k_\mu a_\nu^k \\ \nonumber +\frac{1}{4} \Sigma^{i\mu\nu} \Sigma^{i\,\rho\sigma} k_\mu k_\rho a_\nu^j a_\sigma^j - \frac{1}{4} (\Sigma^{i\mu\nu} k_\mu a_\nu^j) (\Sigma^{j\,\rho\sigma} k_\rho a_\sigma^i) . \ee Let us now expand the first term of the Lagrangian. We have \be\label{gf-3} P^{ij|kl} (\Sigma^{i\,\mu\nu} k_\mu a_\nu^j) (\Sigma^{k\,\rho\sigma} k_\rho a_\sigma^l) = \frac{1}{2} \Sigma^{i\mu\nu} \Sigma^{i\,\rho\sigma} k_\mu k_\rho a_\nu^j a_\sigma^j \\ \nonumber +\frac{1}{2} (\Sigma^{i\mu\nu} k_\mu a_\nu^j) (\Sigma^{j\,\rho\sigma} k_\rho a_\sigma^i) -\frac{1}{3} (\Sigma^{i\,\mu\nu} k_\mu a_\nu^i) (\Sigma^{j\,\rho\sigma} k_\rho a_\sigma^j). \ee We can now use the following two identities \be\label{ident-proj} \Sigma^{i\mu\nu} \Sigma^{i\,\rho\sigma} = \eta^{\mu\rho} \eta^{\nu\sigma}- \eta^{\nu\rho} \eta^{\mu\sigma}-{\mathrm i} \epsilon^{\mu\nu\rho\sigma} \ee and \be\label{ident-as} \Sigma^{i\mu\nu} \Sigma^{j\,\rho\sigma} - \Sigma^{j\mu\nu} \Sigma^{i\,\rho\sigma} = \epsilon^{ijk} \left( \Sigma^{k\, \mu\sigma} \eta^{\nu\rho} - \Sigma^{k\, \nu\sigma} \eta^{\mu\rho} - \Sigma^{k\, \mu\rho} \eta^{\nu\sigma}+\Sigma^{k\, \nu\rho} \eta^{\mu\sigma}\right). \ee We can now use the identity (\ref{ident-proj}) to rewrite the first term in (\ref{gf-3}), and the identity (\ref{ident-as}) to rewrite the last term as a multiple of the second plus some extra terms. We get \be \frac{1}{2}(k^2 (a_\mu^i)^2 - (ka^i)^2) + \frac{1}{6} (\Sigma^{i\mu\nu} k_\mu a_\nu^j) (\Sigma^{j\,\rho\sigma} k_\rho a_\sigma^i) +\frac{1}{3} \left( k^2 \epsilon^{ijk} \Sigma^{i\,\mu\nu} a_\mu^j a_\nu^k+ 2(ka^i) \epsilon^{ijk} \Sigma^{j\,\mu\nu} k_\mu a_\nu^k\right). 
\ee We now note that if we make a choice \be \alpha = \frac{2}{3} \ee then the terms $(\Sigma^{i\mu\nu} k_\mu a_\nu^j) (\Sigma^{j\,\rho\sigma} k_\rho a_\sigma^i)$, as well as $(ka^i)^2$ and $(ka^i) \epsilon^{ijk} \Sigma^{j\,\mu\nu} k_\mu a_\nu^k$ cancel out and we get the following simple gauge-fixed action \be\label{gf*} {\cal L}_{\rm gf} = -\frac{k^2}{3}\left( (a_\mu^i)^2 +\frac{1}{2} \epsilon^{ijk}\Sigma^{k\,\mu\nu} a_\mu^i a_\nu^j \right)=-\frac{k^2}{2}\Pi^{\mu i| \nu j} a_{\mu i} a_{\nu j}, \ee where $\Pi^{\mu i| \nu j}$ is the projector (\ref{proj-P}). Because the projector on diffeomorphism equivalence classes appears here explicitly, it is obvious that this action is still invariant under the diffeomorphisms (\ref{sim-diff}), and so is now a non-degenerate quadratic form on the space of diffeomorphism equivalence classes. We note that the above analysis implies that our original linearized Lagrangian (\ref{lin-action}) can be rewritten (in the momentum space) in terms of the "projected" connection $\Pi a$ schematically as follows: \be\label{Lagr-YM} {\cal L}= -\frac{1}{2}\left( k^2 (\Pi a)^2 - \frac{3}{2} (k \Pi a)^2 \right). \ee Thus, our linearized Lagrangian is {\it different} from that for Yang-Mills theory for the projected connection $\Pi a$. Indeed, in the case of Yang-Mills the numerical coefficient in front of the second term in the brackets in (\ref{Lagr-YM}) would be unity. In the case of Yang-Mills theory the value of the coefficient in front of $(ka)^2$ is fixed by the requirement of gauge invariance. The same is true in our case, and the different numeric value has to do with the fact that the projected connection $\Pi a$ transforms under the gauge transformations in a more complicated way than $\delta a_\mu^i =\partial_\mu \phi^i$. Indeed, we have: \be \delta \Pi^{\mu i|\nu j} a^j_\nu = \Pi^{\mu i|\nu j} \delta a_\nu^j = \frac{2}{3} \partial_\mu\phi^i - \frac{1}{3}\epsilon^{ijk} \Sigma_\mu^{\,\,\nu j} \partial_\nu \phi^k. \ee It is this more involved transformation law for the projected connection that is responsible for the different from the Yang-Mills case numerical factor in front of the second term in (\ref{Lagr-YM}). We can now also note that our linearized Lagrangian in (\ref{lin-action}) admits a very simple description in terms of spinors. Thus, as we have already mentioned, in the spinor notation our connection $a_\mu^i$ gets described by a rank 4 spinor $a_{ABCC'}$. The diffeomorphism classes are described by the component which is symmetric in its 3 unprimed indices, or, in other words, by the $(3/2,1/2)$ irreducible representation of the Lorentz group, where the first number denotes the representation in the space of unprimed spinors and the second one in the space of primed ones. The Lagrangian in (\ref{lin-action}) is then a multiple of \be\label{L-spinor} {\cal L}\sim (\partial^{(A}{}_{A'} a^{BCD)A'})^2, \ee where the precise numerical coefficient is convention dependent and will be spelled out elsewhere. Here $\partial_{AA'}$ is the 2-component spinor Dirac operator. In words, the Dirac operator is used to convert the representation $(3/2,1/2)$ described by the connection to the spin 2 representation $(2,0)$, and this is then squared to form the Lagrangian. The Lagrangian clearly only depends on the part $a_{(ABC)C'}$ of the connection, which makes it obvious that at least in the linearized theory the diffeomoprhisms are realized simply so that the action is independent of some of the connection components. 
The form (\ref{L-spinor}) of the Lagrangian also explains the structure of the propagator that is obtained below. \subsection{Propagator} We now invert the quadratic form in (\ref{gf*}). Thus, we add a current term to the action \be S_{\rm gf} = \int \frac{d^4 k}{(2\pi)^4}\left[ - \frac{k^2}{2} \Pi^{\mu i| \nu j} a_{\mu i}(-k) a_{\nu j}(k) + J^{\mu\,i}(-k)a_\mu^i(k)\right], \ee and then integrate the field $a_\mu^i$ out. This can be easily done in the space of diffeomorphism equivalence classes, and we immediately see that the action with the original connection field integrated out is given by \be S[J] = \int \frac{d^4 k}{(2\pi)^4} \frac{1}{2k^2} \Pi^{\mu i| \nu j} J_\mu^i(-k) J_\nu^j(k). \ee In other words, the propagator of our theory is given by \be \langle a^{\mu i}(-k) a^{\nu j}(k) \rangle = (1/{\mathrm i})^2 \frac{\delta}{\delta J_\mu^i(-k)} \frac{\delta}{\delta J_\nu^j(k)} e^{{\mathrm i} S[J]} \Big|_{J=0} = (1/{\mathrm i}) \frac{1}{k^2} \Pi^{\mu i| \nu j}, \ee which is just the usual $1/k^2$ term times the projector onto the space of diffeomorphism equivalence classes of connections, times the (convention dependent) $1/{\mathrm i}$ factor. This finishes our discussion of the free theory of gravitons on the Minkowski spacetime background (or gravitons with energy $E\gg M_0$ much greater than the energy scale of our constant curvature background). We refrain from considering ghosts that are irrelevant for our purely classical purposes in this paper. Instead, let us now consider the lowest order interactions. \section{Interactions} \label{sec:inter} In this section we consider graviton self-interactions and discuss puzzles related to the fact that the action blows up in the $\Lambda\to 0$ limit. \subsection{Third variation of the action} The third variation of the action is easily computed from (\ref{sec-var}). We get \be\label{3-var} \delta^3 S[A] = (1/{\mathrm i}) \int_M d^4x\, \left( \frac{\partial^3 f}{\partial \tilde{X}^{ij}\partial \tilde{X}^{kl}\partial \tilde{X}^{pq}} \delta \tilde{X}^{ij} \delta \tilde{X}^{kl} \delta \tilde{X}^{pq} +3 \frac{\partial^2 f}{\partial \tilde{X}^{ij}\partial \tilde{X}^{kl}} \delta^2 \tilde{X}^{ij} \delta X^{kl} + \frac{\partial f}{\partial \tilde{X}^{ij}} \delta^3 X^{ij} \right). \ee We have already computed the first and second variations of the matrix $\tilde{X}^{ij}$ in (\ref{X-1}), (\ref{X-2}). The third variation is given by \be\label{X-3} \delta^3 \tilde{X}^{ij} = \frac{3}{2}\tilde{\epsilon}^{\mu\nu\rho\sigma} D_{A\, \mu} \delta A^{(i}_\nu \epsilon^{j)kl} \delta A_\rho^k \delta A_\sigma^l. \ee We also note that the fourth variation, of relevance for higher-order interaction vertices, is zero, which follows by expanding the product of two $\epsilon$'s and noting that there is always a $\delta^{ij}$-contraction of two variations of the connection. On the other hand, spacetime indices of all 4 variations of the connection are contracted with $\tilde{\epsilon}^{\mu\nu\rho\sigma}$, and so the result is zero. \subsection{Cubic interaction} We have already computed the first and second derivatives of the defining function at the identity matrix in (\ref{fp}), (\ref{fpp}). Let us now compute the third derivative. Here we only consider a simpler case when the defining function depends on the invariant ${\rm Tr}(X^2)/({\rm Tr})^2$. The general case will be described elsewhere. 
We get: \be \frac{\partial^3 f}{\partial X^{ij} \partial X^{kl} \partial X^{pq}} \Big|_{X_0} = -\frac{2g_0}{3(-2{\mathrm i} M_0^4)^2} \left( \delta^{ij} P^{kl|pq} + \delta^{kl} P^{ij|pq} + \delta^{pq} P^{ij|kl}\right), \ee where $g_0$ is the dimensionless constant given by (\ref{g}), and $P^{ij|kl}$ is the projector on the symmetric traceless part that we already encountered above. We now compute the cubic interaction term. Let us first discuss the simpler case when all 3 gravitons are high energy $E\gg M_0$. In this case certain terms are dominant, and we are first going to describe these terms. We evaluate (\ref{3-var}) at the constant curvature background connection (\ref{A0}). The last term in (\ref{3-var}) is then seen to be a total derivative. We can also note that of the two terms coming from $\delta^2 X^{ij}$ one term is proportional to $(D\delta A)^2$, while the other is of the order $M_0^2 (\delta A)^2$. Let us first neglect the term $M_0^2 (\delta A)^2$ as compared to $(D\delta A)^2$. Then, after some rewriting we get \be \delta^3 S \Big|_{A_0} = \frac{g_0}{2 M_0^2} \int d^4x \sqrt{-g}\, P^{ij|kl} (\Sigma^{i\,\mu\nu} D_{A_0\,\mu} \delta A_\nu^j) \Big[ (\Sigma^{k\,\rho\sigma} D_{A_0\,\rho} \delta A_\sigma^l) (\Sigma^{m\, \alpha\beta}D_{A_0\,\alpha} \delta A_\beta^m) \\ \nonumber-3{\mathrm i} \, \epsilon^{\alpha\beta\gamma\delta} D_{A_0\,\alpha} \delta A_\beta^k \, D_{A_0\,\gamma} \delta A_\delta^l \Big] . \ee Now passing to the high-energy limit $E\gg M_0$ we replace the covariant derivatives by the usual coordinate ones, and then rewrite the interaction term in terms of the connection field $a_\mu^i= \sqrt{g_0} (\delta A_\mu^i)$, for which the kinetic term (\ref{lin-action}) is canonically normalised. We also need to divide the third variation by $3!$ to get the correct (leading contribution to) the cubic interaction term. We get: \be S^{(3)} = \frac{1}{12 \sqrt{g_0} M_0^2} \int d^4x \, P^{ij|kl} (\Sigma^{i\,\mu\nu} \partial_\mu a_\nu^j) \Big[ (\Sigma^{k\,\rho\sigma} \partial_\rho a_\sigma^l) (\Sigma^{m\, \alpha\beta}\partial_\alpha a_\beta^m) -3{\mathrm i} \, \epsilon^{\alpha\beta\gamma\delta} \partial_\alpha a_\beta^k \partial_\gamma a_\delta^l \Big]. \ee To summarize, schematically, the obtained leading order contribution to the cubic interaction is of the form \be\label{inter} {\cal L}^{(3)} \sim \frac{1}{\sqrt{g_0} M_0^2} (\partial a)^3. \ee We learn that our theory of gravity has a negative mass dimension coupling constant, and so is non-renormalisable in the usual sense of the word, as could be expected. We also see that in our approach the self-coupling of our gravitons described by the connection perturbation $a$ cannot be identified with the Newton's constant. Indeed, for the defining function (\ref{def-GR}) that corresponds to the cosmological constant GR we have $g_0\sim M_p^2/M_0^2$. Thus, we see that the combination that appears in the denominator of (\ref{inter}), at least for the defining function that corresponds to GR, is given by \be\label{Mp-M0} M_*^2 := \sqrt{g_0} M_0^2 \sim M_p M_0. \ee \subsection{Discussion} Some remarks on the result (\ref{inter}) are in order. First, the obtained form of the cubic graviton self-interaction is different from that in GR. Indeed, the GR Lagrangian expanded (around the Minkowski metric) starts with the cubic interaction term $\kappa h (\partial h)^2$, where $\kappa\sim \sqrt{G}\sim 1/M_p$ and $h$ is the metric perturbation. The GR cubic term is quadratic in the derivative operator, while (\ref{inter}) is cubic. 
This explains why the mass dimension of the coupling constant in (\ref{inter}) is minus two while in the cubic interaction term of GR it is minus one. Thus, both are non-renormalisable by power counting, but the form of the interaction is different. We also see that the coupling constant measuring the strength of self-interactions of gravitons in our approach is different from that in GR. This is not too surprising since in the usual metric-based approach the Newton's constant $G$ sets the strength of interaction of gravitons with the stress-energy of matter (or other gravitons). This is why it is a factor of $\sqrt{G}$ that serves as the theory's coupling constant. But the notion of the stress-energy tensor of gravitons is metric based. Indeed, the stress-energy arises as the variational derivative of the action with respect to the metric. In our approach gravitons are described in terms of a different field (connection $a$), and so the variation of the action with respect to $a$ no longer has the meaning of the stress tensor. This is why the strength of self-interaction of the connection field $a$ no longer needs to be directly identified with the Newton's constant. This raises the question of how the Newton's constant can be identified in our theory. One way to do this could be to evaluate the 4-graviton scattering amplitude, which in the usual metric based approach is proportional to $G$. We leave this calculation to future work. Another remark on (\ref{inter}) is that $M_*$ given by (\ref{Mp-M0}) is the scale at which our perturbation theory appears to become strongly coupled. Thus, it appears that, unlike in the case of GR where the cutoff scale is $M_p$, the cutoff for the gravitational perturbation theory in the "pure connection" formulation is $M_*$. For the currently accepted value of the cosmological constant this is $M_*\sim 10^{-2} eV$. Thus, it appears that our perturbation theory cannot be trusted for energies larger than $10^{-2} eV$. While this fact would not be a problem for the envisaged renormalization group calculations that are non-perturbative in nature, this apparent strong coupling arising in our theory at such a low energy scale should certainly be given an interpretation. This is in particular worrying given the fact that for a particular defining function our theory is claimed to be the usual GR, with its very different strong coupling scale. There is clearly a puzzle here. While we have not yet worked out a resolution of this puzzle in all details, we believe what happens is as follows. The first remark is that (\ref{inter}) is not the full cubic vertex, but only its part that blows up in the $M_0\to 0$ limit. It is not hard to see that we have neglected another part (which goes to zero in the $M_0\to 0$ limit) and that the full cubic interaction term is schematially \be\label{inter-full} {\cal L}^{(3)}_{\rm full} \sim \frac{1}{\sqrt{g_0} M_0^2} (\partial a)^3 + \frac{M_0}{\sqrt{g_0}} (\partial a) a^2. \ee Thus, the cubic vertex consists of two parts. One blows up in the limit $M_0\to 0$, which is not very surprising given that the action itself blows up in this limit (at least in the case of GR when we can identify $M_0^2$ with $\Lambda$). The other goes to zero in the same limit. 
The fact that there is a blowing up part seems to indicate that it is not possible to take the Minkowski spacetime limit, which would be very worrying given that we certainly would like to be able to scatter gravitons in Minkowski spacetime to be able to compare predictions of our theory to those of the usual metric based formulation. However, it can be shown that the full interaction vertex (\ref{inter-full}) actually vanishes when all 3 external legs are put on shell. This is the same result as in GR, so in spite of some off-shell blowing up terms the on-shell result is completely the same as in GR. Thus, to understand what happens one must consider higher order interactions. One finds that the quartic interaction is schematically \be {\cal L}^{(4)} \sim \frac{1}{g_1 M_0^4} (\partial a)^4 + \frac{1}{g_0 M_0^2} (\partial a)^2 a^2 + \frac{1}{g_0} a^4, \ee where $g_1$ is a new coupling constant, related to higher derivatives of the defining function computed at the identity matrix. We see that there is again a blowing up leading order term. The last term vanishes in the limit $\Lambda\to 0$, when $g_0\to \infty$. However, we see that (in the case of GR) the second term is exactly the usual $(1/M_p^2) (\partial a)^2 a^2$ second-derivative graviton interaction. Thus, we see that when the 4-graviton scattering amplitude is computed there is a blowing up contribution from the diagrams involving two cubic vertices, as well as another blowing up contribution from the quartic vertex. There are also finite contributions both from the quartic vertex as well as from the diagrams involving two cubic vertices. We believe that, when evaluated on the physical states, the blowing up contributions should cancel, while the finite pieces assemble into the usual GR result. We will not attempt such a calculation here as it requires technology (spinor helicity) that is beyond the scope of this paper. But the fact that the terms finite in the $\Lambda\to 0$ limit are precisely of the two-derivative form familiar from GR supports the picture sketched. To summarize, the structure of interactions in our gauge-theoretic description of gravity is yet to be unravelled. It is, however, clear that the theory is as non-renormalizable as the usual GR in the metric based approach. What is different about our formulation is that the limit of the cosmological constant going to zero is a non-trivial one to take, for the action of the theory blows up in this limit. This is manifested in the fact that the interaction vertices contain blowing up pieces. Naively, this suggests strong coupling at a very low energy scale. However, we believe that the issue is much more subtle and that when computed for the physical graviton states the scattering amplitudes are perfectly finite in the $\Lambda\to 0$ limit and for the case of the defining function corresponding to GR reproduce the known results. A verification of this is left to future work. \section{Conclusions} In this paper we have proposed a new approach to gravitational perturbation theory. While our main motivation was the quantum theory (renormalization), in the present paper we remained in the classical domain. We have recalled how a diffeomorphism invariant gauge theory can be formulated using a homogeneous degree one defining function, and how such a theory for the gauge group ${\rm SU}(2)$ is a gravity theory describing two propagating degrees of freedom. In particular, general relativity itself can be put in this framework, see the action (\ref{sec-GR-action}).
Our main interest here was in the perturbation theory. Hence, we expanded our general diffeomorphism invariant gauge theory Lagrangian around a constant curvature connection (\ref{A0}). The original theory does not have any dimensionful parameters, and we have seen that it is the choice of the background that brings a dimensionful quantity into the game, in our case the radius of curvature of the background. We then took a limit of the radius of curvature becoming very large (or working at energies such that the curvature of the background can be neglected). This way we obtained a theory on the Minkowski spacetime background. The linearized action (\ref{lin-action}) we obtained is quite simple, and can be seen to be a natural construct involving the linearized connection, as well as the basic (anti-) self-dual two-forms $\Sigma_{\mu\nu}^i$. Indeed, as is sometimes done in the literature, one can introduce the derivative operators $\partial^{\mu\,i} := \Sigma^{\mu\nu\, i} \partial_\nu$. The basic building block of our linearized action is then $\partial^{\mu\, i} a_\mu^j$, where this quantity is symmetrised and then its tracefree part is squared to form the action. Note that the projector $P^{ij|kl}$ on the symmetric tracefree part is just that on the spin two part of the tensor product of two spin one representations, and this is another manifestation of how the spin two appears in the game. Indeed, one could rewrite our linearized gauge theory action using the spinor notation as a multiple of $(\Sigma^{\mu\nu\,(AB}\partial_\mu a^{CD)}_\nu)^2$, where the brackets denote the symmetrisation. A completely symmetrised rank 4 spinor is the standard realisation of the spin two representation. Another, particularly clear way to rewrite our linearized Lagrangian is completely in terms of spinors, when all spacetime indices are eliminated in favour of the spinor ones. The Lagrangian then takes the extremely simple form (\ref{L-spinor}). This should be compared to a much more involved linearized Lagrangian for gravitons in the usual metric-based approach. This considerable simplification of the linearized Lagrangian is in itself a significant plus of our approach. Another very important feature of our approach is that diffeomorphism invariance can be dealt with in a very simple way. Recall that it is this gauge symmetry that is causing so much difficulty in any approach to gravity, perturbative or non-perturbative. In contrast, in our formulation diffeomorphisms can be dealt with once and for all, by simply projecting out certain components of the connection. We believe that this feature of our gauge-theoretic description of gravity is very important, to be fully appreciated with more work on this approach. In a certain sense, what replaced the usual diffeomorphisms in our approach are the ${\rm SU}(2)$ gauge rotations. We have seen that these must be gauge-fixed in the usual fashion. It is however much easier to deal with gauge rotations than with diffeomorphisms, something that can be appreciated from our derivation of the propagator of our theory. This propagator can be literally read off from the Lagrangian in its form (\ref{L-spinor}). There is no such simple derivation of the propagator for the metric-based gravitons. We have also looked at the (cubic and quartic) graviton self-interactions as described in our gauge-theoretic framework.
It was observed that the perturbation theory appears to become strongly coupled at a very low energy scale $M_*=\sqrt{M_p M_\Lambda}$, and so appears to behave quite differently from the perturbation theory based on the Einstein-Hilbert Lagrangian. However, there are indications that this strong coupling may be only apparent, and that the physical scattering amplitudes are the same as in the metric based GR. A verification that this is indeed the case is left to future work. Apart from quantum aspects, which we purposefully decided to avoid here, we did not comment much on the subtle issue of the reality conditions for our theory. Indeed, these were discussed at the linearized level, where their treatment is no different from that in the Ashtekar formulation, see \cite{Ashtekar:1991hf}. It is clear, however, that the full interacting action will require a much more sophisticated choice of the reality conditions. For the quantum calculations to be carried out with this formalism this is not much of an issue, because all loops are computed via the trick of the analytic continuation, and under this all factors of $\sqrt{-1}$ in our formulas disappear and fields become real. However, these issues do matter for the questions of the unitarity of the arising quantum theory. We expect that these subtle issues will take some time to be settled, and refrain from trying to address them in this work. To conclude, we hope to have convinced the reader that the present gauge-theoretic approach to gravity brings with itself many rather exciting opportunities that are simply unavailable, or impractical in the usual metric setting. It now seems within reach that, with the new tools developed here, the renormalization group flow for an infinite parametric class of gravity theories can be computed. Once this is achieved, ideas about the ultra-violet behaviour of gravity, e.g. the asymptotic safety conjecture \cite{Weinberg:2009bg}, can be explicitly tested. \section*{Acknowledgements} The author is grateful to J. Fine and D. Panov for making a draft of the paper \cite{Joel} available to him prior to publication. Some of its constructions made the author take the "pure connection" formulation of diffeomorphism invariant gauge theories more seriously, which resulted in the present paper. \section*{Appendix A: Defining function for GR with the cosmological constant} In this appendix we derive an expression for the defining function corresponding to general relativity with a cosmological constant. We start with the Plebanski formulation \cite{Plebanski:1977zz} of the theory, and then integrate out the two-form field, as well as the Lagrange multiplier field. A similar in spirit derivation is given in \cite{Capovilla:1991kx}. However, the final result of that calculation is erroneous, see \cite{CDJ-erratum}. Here we present the correct defining function. In the Plebanski formulation GR with a cosmological constant $\Lambda$ is described by the following action \be S[B,A,\Psi]=\frac{1}{8\pi {\mathrm i} G}\int \left[B^i\wedge F^i - \frac{1}{2} \left( \Psi^{ij}+ \frac{\Lambda}{3}\delta^{ij}\right) B^i\wedge B^j\right]. \ee Here $G,\Lambda$ are the Newton's and cosmological constant respectively, $B^i$ is a ${\mathfrak su}(2)$-valued two-form field, $A^i$ is a ${\rm SU}(2)$ connection, ${\mathrm i}=\sqrt{-1}$, and $\Psi^{ij}$ is the symmetric traceless field of Lagrange multipliers. More details on this formulation can be found in e.g. \cite{Krasnov:2009pu}. 
Integrating out the two-form field one gets the following action \be S[A,\Psi] = \frac{1}{16\pi{\mathrm i} G} \int \left( \Psi^{ij}+ \frac{\Lambda}{3}\delta^{ij}\right)^{-1} F^i\wedge F^j, \ee where it is assumed that the matrix $\left( \Psi^{ij}+ (\Lambda/3)\delta^{ij}\right)$ is invertible. It is now convenient to rescale the Lagrange multipliers field and write the action as \be S[A,\tilde{\Psi}] = \frac{1}{{\mathrm i}} \int \left(\tilde{\Psi}^{ij} + \alpha \delta^{ij} \right)^{-1} F^i\wedge F^j, \ee where \be \alpha:= \frac{16\pi G\Lambda}{3} \ee is a dimensionless quantity. Note that $\alpha\sim M_0^2/M_p^2$ and so is of the order $\alpha\sim 10^{-120}$. In the final step we integrate out the Lagrange multiplier field $\tilde{\Psi}^{ij}$. Let us drop the tilde on the symbol for brevity. We can the rewrite the above action as \be S[A,\Psi]=\frac{1}{{\mathrm i}} \int ({\rm vol}) {\rm Tr}\left((\Psi +\alpha {\rm Id})^{-1} X\right), \ee where we have introduced $F^i\wedge F^j = ({\rm vol}) X^{ij}$, and $({\rm vol})$ is an arbitrary auxiliary 4-form on our manifold. To integrate out the matrix $\Psi$ we have to solve the field equations for it, and then substitute the result back into the action. Assuming that the solution for $\Psi$ can be written as a function of the matrix $X$ that admits a representation as a series in powers of $X$, we see that $\Psi$ will be diagonal if $X$ is. Thus, we can simplify the problem of finding $\Psi$ by using an ${\rm SO}(3)$ rotation to go to a basis in which $X$ is diagonal. This is always possible at least locally. We then look for a solution in which $\Psi$ is also diagonal. Denoting by $\lambda_1,\lambda_2,\lambda_3$ the eigenvalues of $X^{ij}$, and by $a,b,-(a+b)$ the components of the diagonal matrix $\Psi$, we get the following action functional to consider \be\label{app-1} F[a,b,\lambda]= \frac{\lambda_1}{\alpha+a}+\frac{\lambda_2}{\alpha+b} + \frac{\lambda_3}{\alpha-(a+b)}. \ee We now have to vary this with respect to $a,b$ and substitute the solution back to obtain the defining function as a function of $\lambda_i$. Assuming that neither of the denominators in (\ref{app-1}) is zero we get the following two equations \be (\alpha+a)^2 \lambda_3 = (\alpha-(a+b))^2\lambda_1, \qquad (\alpha+b)^2 \lambda_3 = (\alpha-(a+b))^2\lambda_2. \ee Taking the (positive branch of the) square root and adding the results we get $a+b$, which is most conveniently written as \be \alpha-(a+b)=3\alpha \frac{\sqrt{\lambda_3}}{\sqrt{\lambda_1}+\sqrt{\lambda_2}+\sqrt{\lambda_3}}. \ee The other two combinations that appear in (\ref{app-1}) are given by similar expressions: \be \alpha+a = 3\alpha \frac{\sqrt{\lambda_1}}{\sqrt{\lambda_1}+\sqrt{\lambda_2}+\sqrt{\lambda_3}}, \qquad \alpha+b = 3\alpha \frac{\sqrt{\lambda_2}}{\sqrt{\lambda_1}+\sqrt{\lambda_2}+\sqrt{\lambda_3}}. \ee It is now clear that the defining function is \be\label{def-GR} f_{GR}(\lambda) = \frac{1}{3\alpha} \left( \sqrt{\lambda_1}+\sqrt{\lambda_2}+\sqrt{\lambda_3}\right)^2 = \frac{1}{3\alpha}\left({\rm Tr}\sqrt{X}\right)^2. \ee Thus, we learn that the action for GR with the cosmological constant can be rewritten in the form (\ref{action}) as follows \be\label{GR} S_{GR}[A] = \frac{1}{16\pi{\mathrm i} G\Lambda} \int \left( {\rm Tr}\sqrt{F^i\wedge F^j}\right)^2. \ee For the defining function (\ref{def-GR}) the value $f_0=f(\delta)$ is given by \be f_0 = \frac{3}{\alpha}, \ee which is thus of the order $f_0\sim 10^{120}$. We can also compute the constant $g_0$. 
Thus, from (\ref{fpp}) we get \be \frac{\partial^2 f}{\partial \lambda_1 \partial \lambda_1} \Big|_{\lambda_i=1} = \frac{4g_0}{3}. \ee On the other hand, evaluating the second derivative of (\ref{def-GR}) with respect to $\lambda_1$ we get \be \frac{\partial^2 f_{GR}}{\partial \lambda_1 \partial \lambda_1} \Big|_{\lambda_i=1}= -\frac{1}{3\alpha}. \ee Thus, \be g_0 = -\frac{1}{4\alpha}, \ee where the minus sign reflects the concave character of (\ref{def-GR}). Thus we have $|g_0|\sim 10^{120}$ and $f_0/g_0=-12$ for this defining function. \section*{Appendix B: Defining function for the (minimally) modified GR} In this Appendix we analyse the defining function for what can be called minimally modified general relativity. The Plebanski-like action is given by: \be S[B,A,\Psi]=\frac{1}{8\pi {\mathrm i} G}\int \left[B^i\wedge F^i - \frac{1}{2} \left( \Psi^{ij}+ \frac{\Lambda}{3}\delta^{ij}+ \frac{\tilde{g}}{2}{\rm Tr}(\Psi^2) \delta^{ij}\right) B^i\wedge B^j\right]. \ee Here $\tilde{g}$ is a constant of dimensions $\tilde{g}\sim M^{-2}$. Thus, this theory contains, in addition to $G, \Lambda$ present in GR, an additional dimensionful coupling $\tilde{g}$. As before, we now integrate out the two-form field, and then rescale all the quantities by a multiple of $16\pi G$. Thus, we introduce: \be g:= \frac{\tilde{g}}{16\pi G}, \ee which is dimensionless, and then write the action omitting the tilde from over the symbol of $\Psi$. The resulting action is: \be S[A,\Psi]=\frac{1}{{\mathrm i}} \int ({\rm vol}) {\rm Tr}\left(\left(\Psi +\alpha {\rm Id}+ \frac{g}{2} {\rm Tr}(\Psi^2) {\rm Id} \right)^{-1} X\right). \ee Since it is natural to expect that the scale of deformation set by $\tilde{g}$ is of the order of $\tilde{g}\sim M_p^{-2}$, the natural values for $g$ are order 1. The action then contains a small parameter $\alpha$, and the action with $\Psi$ integrated out can be found as an expansion in powers of this parameter. As in the previous section we will integrate out $\Psi$ by first diagonalising $X^{ij}$ and then looking for a solution for $\Psi^{ij}$ as a function of $X^{ij}$, which guarantees that it is also diagonal. Thus, we have to consider the following functional of the eigenvalues only: \be\label{app-2} F[a,b,\lambda]= \frac{\lambda_1}{\alpha+a+ g(a^2+b^2+ab)}+\frac{\lambda_2}{\alpha+b+ g(a^2+b^2+ab)} + \frac{\lambda_3}{\alpha-(a+b)+ g(a^2+b^2+ab)}, \ee where as before $\lambda_1,\lambda_2,\lambda_3$ are eigenvalues of $X^{ij}$ and $a, b, -a-b$ are those of $\Psi^{ij}$. We now differentiate with respect to $a,b$ and get the following two equations: \be \frac{\lambda_1(1+g(2a+b))}{(\alpha+a+ g(a^2+b^2+ab))^2}= \frac{\lambda_3(1-g(2a+b))}{(\alpha-a-b+ g(a^2+b^2+ab))^2}, \\ \nonumber \frac{\lambda_2(1+g(2b+a))}{(\alpha+b+ g(a^2+b^2+ab))^2}= \frac{\lambda_3(1-g(2b+a))}{(\alpha-a-b+ g(a^2+b^2+ab))^2}. \ee We now look for the solutions in the form of a series: \be a=a^{(1)}+a^{(2)}+\ldots, \qquad b=b^{(1)}+b^{(2)}+\ldots, \ee where $a^{(1)}, b^{(1)}$ is $O(\alpha)$, $a^{(2)}, b^{(2)}$ is $O(\alpha^2)$, and the dots denote higher orders in the small parameter $\alpha$. We have already found above that \be a^{(1)}= \alpha \frac{2\sqrt{\lambda_1} - \sqrt{\lambda_2}-\sqrt{\lambda_3}}{\sqrt{\lambda_1}+\sqrt{\lambda_2}+\sqrt{\lambda_3}}, \qquad b^{(1)}= \alpha \frac{2\sqrt{\lambda_2} - \sqrt{\lambda_1}-\sqrt{\lambda_3}}{\sqrt{\lambda_1}+\sqrt{\lambda_2}+\sqrt{\lambda_3}}. 
\ee Using this we get: \be (a^{(1)})^2+(b^{(1)})^2+a^{(1)} b^{(1)} = \frac{\alpha^2}{2(\sum_i\sqrt{\lambda_i})^2} \left(6\lambda_1+6\lambda_2+6\lambda_3-7\sqrt{\lambda_1\lambda_2} -7\sqrt{\lambda_1\lambda_3}-7\sqrt{\lambda_2\lambda_3}\right). \ee Let us introduce a compact notation \be \Delta:=6\lambda_1+6\lambda_2+6\lambda_3-7\sqrt{\lambda_1\lambda_2} -7\sqrt{\lambda_1\lambda_3}-7\sqrt{\lambda_2\lambda_3}. \ee The functional (\ref{app-2}) can the be rewritten as follows: \be F[a,b,\lambda]=\frac{\sum_i\sqrt{\lambda_i}}{3\alpha} \Big(\sqrt{\lambda_1}\left(1+\frac{(\sum_i\sqrt{\lambda_i})a^{(2)}}{3\alpha \sqrt{\lambda_1}} + \frac{\alpha g\Delta }{6 (\sum_i\sqrt{\lambda_i}) \sqrt{\lambda_1}}+O(\alpha^2)\right)^{-1} \\ \nonumber +\sqrt{\lambda_2}\left(1+\frac{(\sum_i\sqrt{\lambda_i})b^{(2)}}{3\alpha \sqrt{\lambda_2}} + \frac{\alpha g\Delta }{6 (\sum_i\sqrt{\lambda_i}) \sqrt{\lambda_2}}+O(\alpha^2)\right)^{-1} \\ \nonumber +\sqrt{\lambda_3}\left(1-\frac{(\sum_i\sqrt{\lambda_i})(a^{(2)}+b^{(2)})}{3\alpha \sqrt{\lambda_3}} + \frac{\alpha g\Delta }{6 (\sum_i\sqrt{\lambda_i}) \sqrt{\lambda_3}}+O(\alpha^2)\right)^{-1} \Big). \ee Expanding the denominators in a power series in $\alpha$ and keeping only the $O(\alpha)$ terms we see that the terms involving $a^{(2)}, b^{(2)}$ cancel, and so we don't need to find these quantities to this order in $\alpha$. We get the following functional \be\label{app-3} F[\lambda]=\frac{(\sum_i\sqrt{\lambda_i})^2}{3\alpha} - \frac{g\Delta}{6} + O(\alpha). \ee We can rewrite it in a more convenient form by noting that the function \be\label{app-top} F_{\rm top}[\lambda]=\sum_i \lambda_i \ee gives rise to a total derivative, and so can always be added to our action. Thus, we can neglect multiples of $F_{\rm top}[\lambda]$. It is then easy to see that the function (\ref{app-3}) modulo (\ref{app-top}) is equal to \be\label{app-4} F[\lambda]\approx \frac{(\sum_i\sqrt{\lambda_i})^2}{3\alpha}\left(1+ \frac{7\alpha g}{4} + O((\alpha g)^2) \right), \ee where $\approx$ stands for equal modulo $F_{\rm top}[\lambda]$. The fact that it is the combination $\alpha g$ whose powers appear in brackets can be seen from (\ref{app-2}). Indeed, one can rescale the variables $a\to \alpha a, b\to \alpha b$ in (\ref{app-2}) so as to take $1/\alpha$ outside of the functional. Then the denominators will contain the combination $\alpha g$, and it is clear that the function with $a, b$ integrated out can be represented as an expansion in powers of $\alpha g$. The first term in this expansion is given in (\ref{app-4}).
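As a cross-check of the extremization carried out in Appendix A, the following short symbolic computation (a Python/SymPy sketch added for the reader's convenience; it is not part of the original derivation, and the symbol names are ours) verifies that the critical point quoted there indeed extremizes (\ref{app-1}) and that the critical value reproduces the defining function (\ref{def-GR}).

import sympy as sp

alpha = sp.symbols('alpha', positive=True)
l1, l2, l3 = sp.symbols('lambda1 lambda2 lambda3', positive=True)
s1, s2, s3 = sp.sqrt(l1), sp.sqrt(l2), sp.sqrt(l3)
S = s1 + s2 + s3

a, b = sp.symbols('a b')
# the functional F[a, b, lambda] of eq. (app-1)
F = l1/(alpha + a) + l2/(alpha + b) + l3/(alpha - (a + b))

# the critical point quoted in Appendix A: alpha + a = 3 alpha sqrt(l1)/S, etc.
crit = {a: 3*alpha*s1/S - alpha, b: 3*alpha*s2/S - alpha}

# check that both partial derivatives vanish at the critical point ...
assert sp.simplify(sp.diff(F, a).subs(crit)) == 0
assert sp.simplify(sp.diff(F, b).subs(crit)) == 0
# ... and that the critical value is (sqrt(l1)+sqrt(l2)+sqrt(l3))^2/(3 alpha)
assert sp.simplify(F.subs(crit) - S**2/(3*alpha)) == 0
print("Appendix A check passed: f_GR = (Tr sqrt X)^2 / (3 alpha)")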
\section{Introduction}\label{s:intro} The stellar spectra even at modest resolution contain wealth of information on stellar parameters. In fact, most of the classification work has been done using medium/low resolution spectra. Hydrogen lines are good indicators of temperature and luminosity for a good range in spectral types; although for hotter end stars lines of neutral and ionized helium, carbon and nitrogen are used while strengths of molecular features are employed for the cool stars. A recent summary of the advances in classification can be found in Giridhar (2010). Additional features such as near IR triplet at 7771-74\AA~ and Ca II lines in 8490-8670 \AA~ region are also used for luminosity calibration. Many large telescopes are now equipped with multi-object spectrometers enabling coverage of a large number of objects per frame for stellar systems like clusters. Instruments such as 6df on the UK Schmidt telescope and AAOMEGA at the AAT can provide very large number of spectra per night. On-going and future surveys, and space missions would collect a large number of spectra for stars belonging to different components of our Galaxy. Such large volume of data can be handled only with automatic procedures which would also have the advantage of being objective and providing homogeneous data set most suited for Galactic structure and evolutionary studies. Another outcome would be detection of stellar variability and finding of peculiar objects. \section{ Automated methods for parametrization}\label{s: Automated methods} Several methods have been developed to estimate atmospheric parameters from medium-resolution stellar spectra in a fast, automatic, objective fashion. The most commonly adopted approaches are based upon the minimum distance method (MDM) and those using Artificial Neural Network or ANN. Both the approaches use reference libraries to make comparison with object spectra. Other methods use correlations between broadband colors or the strength of prominent metallic lines and the atmospheric parameters e.g. Stock and Stock (1999). \subsection{Comparison between empirical and synthetic libraries} The observed stellar spectra are assigned a given spectral type and Luminosity Class (LC) based upon the appearance of spectral features and hence these classifications are not model dependent. Synthetic spectra depend on model atmospheres mostly assuming local thermodynamic equilibrium (LTE), are affected by inadequacy of atomic and molecular database and non-LTE effects are severe for certain temperature/metallicity domain. Empirical spectra however may not have the required uniform range in the parameter space. \subsection { MDM based approaches } The basic concept is to minimize the distance metric between the reference spectrum and spectrum to be classified/parametrized. The accuracy depends upon the density of reference spectra in parameter space. We need to construct a stellar spectral template library for stars of known parameters. The software TGMET developed by Katz et al. (1998) is based upon direct comparison with a reference library of stellar spectra. Soubiran et al (2003) used this approach to estimate the T$_{\rm eff}$ , log~$g$ and [Fe/H] with very good accuracy 86 K, 0.28 dex and 0.16 dex respectively for good S/N ratio spectra of F, G and K stars. Instead of reference spectra synthetic spectra using the model atmospheres were used by Zwitter et al.(2008) and others. In SPADES (Posbic et al. 
2011) the comparison is made for specific lines, allowing abundance determination of various elements. \subsection { Artificial Neural Network } A very good account of this approach can be found in numerous papers, e.g. Bailer-Jones (2002), von Hippel (1994) and others. It is a computational method which can provide a non-linear mapping between the input vector (a spectrum, for example) and one or more outputs such as T$_{\rm eff}$, log~$g$ and [M/H]. A network needs to be trained with the help of spectra of stars of known parameters. The trained network is used to parametrize the unclassified spectra. We have used the back-propagation ANN code by Ripley (1993). The chosen configuration of ANN is described in Giridhar, Muneer and Goswami (2006). \section{ Analysis of VBT spectra} We had initiated a modest survey program for the exploration of metal-poor candidate stars from the HK survey (Beers, Shectmann and Preston 1992), the EC survey (Stobie et al. 1997) and the high proper motion list of Lee (1984). The semi-empirical approach based upon the strengths of prominent lines and line ratios adopted in Giridhar and Goswami (2002) resulted in the detection of several new metal-poor stars. We therefore chose to explore the use of ANN on a larger sample of candidate metal-poor stars. The medium-resolution spectra (R$\sim$2000) were obtained using the OMR spectrometer on the 2.3m telescope at VBO, Kavalur. The spectra cover the 3800-6000\AA~ region. Our spectral analysis, alignment procedure etc. are described in Giridhar, Muneer and Goswami (2006). A few representative spectra, arranged in order of increasing temperature, are presented in Fig. \ref{sample_spectra}. \begin{figure} \centerline{\includegraphics[height=9.8cm,width=12.5cm]{sample_spectra.eps}} \caption[]{Sample spectra arranged in temperature sequence are presented. The stars with normal metallicity are plotted as dotted lines while metal-poor stars are shown as continuous lines.} \label{sample_spectra} \end{figure} \subsection { Calibration accuracies of stellar parameters} Our training set, containing 143 stars of known atmospheric parameters, was chosen from Allende Prieto \& Lambert (1999), Gray, Grahm \& Hoyt (2001), Snider et al. (2001) and the ELODIE database (Soubiran, Katz \& Cayrel 1998). \begin{figure} \centerline{\includegraphics[height=6.8cm]{Temp_metal.eps} \quad \includegraphics[height=6.8cm]{Grav_Mv.eps}} \caption[]{The parameters estimated from ANN are compared with those from literature.} \label{Temp_metal} \end{figure} Figure 2 shows the ANN results compared with the calibrating values. Figure 2a shows the ANN [Fe/H] results for 76 calibrating stars plotted against those from the literature. For the metallicity range of $-$3.0 to $+$0.3 dex, the RMS scatter about the line of unity is 0.3 dex, which is similar to the intrinsic uncertainties in the metallicities of the calibrating stars. To avoid using the same spectra for training and testing purposes, we divided the training set into two parts and trained an ANN for each part. Then the weights for part~1 were used to estimate [Fe/H] for stars in part~2 while those of part~2 were used to estimate [Fe/H] for stars in part~1. The errors shown in Figure 2 are therefore realistic estimates. This approach of dividing the calibrating sample into separate training and testing sets has been adopted for the T$_{\rm eff}$ and log~$g$ calibrations also. We had good T$_{\rm eff}$ and log~$g$ estimates for the 143 calibration stars; among them, 110 stars had nearly solar metallicities while 33 were hard core metal-poor stars.
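The two-part training/testing procedure just described can be summarized by the following minimal sketch (illustrative Python/NumPy written for this text; it is not the Ripley (1993) back-propagation code actually used here, and the arrays `flux` and `params` are placeholder data).

import numpy as np

rng = np.random.default_rng(0)

def train_ann(X, y, n_hidden=16, lr=0.05, epochs=2000):
    """Back-propagation training of a one-hidden-layer network.
    X: (n_stars, n_pixels) normalized spectra; y: (n_stars,) parameter, e.g. [Fe/H]."""
    W1 = rng.normal(0, 0.1, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # hidden layer
        out = (h @ W2 + b2).ravel()       # linear output node
        err = out - y
        g_out = 2 * err[:, None] / len(y) # gradient of the mean squared error
        g_W2 = h.T @ g_out
        g_h = g_out @ W2.T * (1 - h**2)   # back-propagate through tanh
        g_W1 = X.T @ g_h
        W2 -= lr * g_W2; b2 -= lr * g_out.sum(0)
        W1 -= lr * g_W1; b1 -= lr * g_h.sum(0)
    return W1, b1, W2, b2

def predict(net, X):
    W1, b1, W2, b2 = net
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

# two-part split: train on part 1, test on part 2 (and vice versa in practice)
flux = rng.normal(size=(143, 500))        # placeholder spectra
params = rng.normal(size=143)             # placeholder [Fe/H] values
half = len(flux) // 2
net1 = train_ann(flux[:half], params[:half])
rms = np.sqrt(np.mean((predict(net1, flux[half:]) - params[half:])**2))
print("RMS error on the part not used for training:", rms)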
While training the networks for temperature, we found that using the same ANN for normal metallicity stars as well as metal-poor stars gave large calibration errors (250 to 300K for T$_{\rm eff}$). This is understandable, as the spectra of metal-poor stars, like those of hot stars, have weak metallic lines. To overcome this degeneracy we used separate ANNs for each metallicity subgroup for the temperature (as well as gravity) calibration. The temperatures estimated by ANN are compared with the literature values in Figure 2b. The RMS error is now reduced to 150K. Figure 2c shows the result of the gravity calibration adopting the procedure mentioned above. The RMS error is about 0.35 for the log~$g$ range of 1 to $+$4.5 dex. A large fraction of the stars observed by us have good parallax estimates (errors less than 20\%). Combining the V magnitudes with parallaxes, the distances and hence M$_{V}$ could be estimated. Most of these objects were nearby, so the effect of interstellar extinction could be assumed to be negligible. Our spectral region contains many luminosity-sensitive features such as hydrogen lines, Mg I lines at 5172-83\AA~, G bands etc. However, the same feature cannot serve the whole range of spectral types. We have divided the sample stars into two temperature groups and yet another group for metal-poor objects. The use of three separate networks helped in attaining a calibration accuracy of $\sim$$\pm$0.3 mag for M$_{V}$. The M$_{V}$ estimated by ANN are compared with those estimated from parallaxes as shown in Figure 2d. \section{ Stellar parameters for metal-poor candidate stars} A different set of ANNs for each atmospheric parameter was trained for metal-poor candidate stars. A preliminary estimation of metallicity was made using an ANN trained on the full range of metallicity. Then, we refined the measurements by using two different ANN sets; one for estimating the atmospheric parameters of stars of near solar metallicities and the other for the significantly metal-poor stars ([Fe/H] $<$ $-$0.7 dex). The (B-V) colours were available for many of them and were used to verify the T$_{\rm eff}$ estimated by ANN. In most cases the temperatures estimated using ANN were in close agreement with the colour temperatures. A sizeable fraction of the candidate stars belonged to the [Fe/H] range of $-$0.5 to $-$2.5 dex. \section{ Conclusions} We have demonstrated that using ANN we can measure atmospheric parameters with an accuracy of $\pm$0.3 dex in [Fe/H], $\pm$200K in temperature and $\pm$0.35 in log~$g$ with the help of a training set of stars of known parameters. We find that independent calibrations for near solar metallicity stars and metal-poor stars decrease the errors in T$_{\rm eff}$ and log~$g$ by a factor of two. We have extended the application of this method to the estimation of absolute magnitude using nearby stars with well-determined parallaxes. Better M$_{V}$ calibration accuracy can be obtained by using two separate ANNs for cool and warm stars. The present accuracy of the M$_{V}$ calibration is $\sim$$\pm$0.3 mag. \section {Acknowledgment} This work was partially funded by the National Science Foundation's Office of International Science and Education, Grant Number 0554111: International Research Experience for Students, and managed by the National Solar Observatory's Global Oscillation Network Group.
\section{Introduction} This paper is devoted to showing the existence and uniqueness of logarithmic models for any holomorphic foliation on $({\mathbb C}^n,0)$ of generalized hypersurface type. In the case of $n=2$, this result has been obtained by N. Corral in \cite{Cor}. The main result is Theorem \ref{teo:main} in the last section of the paper. We state it as follows: \begin{theoremmain} Every generalized hypersurface on $({\mathbb C}^n,0)$ has a logarithmic model. \end{theoremmain} A germ $\mathcal L$ of singular codimension one foliation on $({\mathbb C}^n,0)$ is {\em logarithmic} when it is given by a closed logarithmic $1$-form $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i},\quad f_i\in {\mathcal O}_{{\mathbb C}^n,0}. $$ In other words the foliation $\mathcal L$ has the multivalued first integral $f_1^{\lambda_1}f_2^{\lambda_2}\cdots f_s^{\lambda_s}$. Up to reduction of singularities of the germ of hypersurface $$ H=(f_1f_2\cdots f_s=0), $$ the transform of $\eta$ is a global closed logarithmic $1$-form and the total transform of $H$ has normal crossings. In this situation all the local holonomies are linear in terms of the coordinates given by $H$. We can say that the holonomy of $\mathcal L$ is ``globally linearizable''. Of course this picture needs to be specified, mainly by asking that we are not in a ``dicritical'' situation. Roughly speaking, a ``logarithmic model'' for a codimension one foliation $\mathcal F$ should be a logarithmic foliation $\mathcal L$ such that the local holonomies of $\mathcal L$ coincide with the linear parts of the local holonomies of $\mathcal F$. In this way, the logarithmic model is an object that can be considered as ``the linear part of the holonomy of $\mathcal F$'' or, in some sense, a ``holonomic initial part'' of $\mathcal F$. The logarithmic models in ambient dimension two may be described in a more precise way. We do it for foliations $\mathcal F$ on $({\mathbb C}^2,0)$ without saddle nodes in their reduction of singularities (hidden saddle nodes) and that are also ``non-dicritical'', in the sense that we encounter only invariant exceptional divisors in the sequence of blowing-ups desingularizing $\mathcal F$. We give the name {\em generalized curves} to such foliations, following a terminology that comes from the foundational paper of Camacho, Lins Neto and Sad \cite{Cam-LN-S}. In this situation, after desingularization, the foliation at a singular point is given by a local $1$-form $$ (\lambda+\cdots)ydx+(\mu+\cdots)xdy,\quad \lambda\mu\ne0,\;\lambda/\mu\notin {\mathbb Q}_{\leq 0}. $$ The quotient $-\lambda/\mu$ is the Camacho-Sad index of the foliation with respect to $y=0$ and it also determines the coefficient of the linear part of the holonomy. Up to multiplying the 1-form by a scalar and adapting the coordinates, a local logarithmic foliation having holonomy with the same linear part as $\mathcal F$ is locally given by $$ \lambda\frac{dx}{x}+\mu\frac{dy}{y}. $$ This is the way we approach a germ of generalized curve $\mathcal F$ on $({\mathbb C}^2,0)$ by a logarithmic foliation $\mathcal L$: \begin{quote} A logarithmic foliation $\mathcal L$ is a {\em logarithmic model} for a generalized curve $\mathcal F$ on $({\mathbb C}^2,0)$ if it has the same invariant branches as $\mathcal F$ and the same Camacho-Sad indices after reduction of singularities. \end{quote} This provides a precise definition.
With no effort one realizes that the property is independent of the chosen reduction of singularities; indeed, in dimension two we have a well defined minimal reduction of singularities and any other one is obtained by additional blowing-ups from it. In the paper \cite{Cor} there is a proof of the existence of logarithmic models in dimension two for any generalized curve. Logarithmic models in dimension two have been particularly useful for describing the properties of the generic polar of a given foliation, because the main Newton Polygon parts coincide for the foliation and the logarithmic model, see \cite{Cor}. Let us also note that some results in dimension two may be stated in the dicritical case \cite{Can-Co}, anyway in this paper we consider always the non-dicritical situation. We have two possible ways for extending the concept of logarithmic models to higher dimension. The first one is to use reduction of singularities as in the two-dimensional case. Since we are considering generalized hypersurfaces, we know the existence of reduction of singularities for our foliations, more precisely, any reduction of singularities of the finite set of invariant hypersurfaces provides a reduction of singularities of the foliation \cite{Fer-M}. The second way is to perform two-dimensional tests. It is known that certain properties in algebraic geometry are tested by valuative criteria, for instance integral dependence, or properness. In the theory of codimension one foliations, there are remarkable properties detected by testing with a two-dimensional map. The existence of holomorphic first integral is one of them, as exhibited in the paper of Mattei-Moussu \cite{Mat-Mou}. The dicriticalness and the existence of hidden-saddle nodes are also properties of this kind: \begin{itemize} \item Dicriticalness: A codimension one foliation $\mathcal F$ on $({\mathbb C}^n,0)$ is {\em dicritical} if and only if there is a holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $\phi^*{\mathcal F}=(dx=0)$ and the image of $y=0$ is invariant for $\mathcal F$. \item Existence of hidden saddle nodes: A codimension one foliation $\mathcal F$ on $({\mathbb C}^n,0)$ has a {\em hidden saddle-node} if there is a holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $\phi^*{\mathcal F}$ is a saddle-node. \end{itemize} When the foliation $\mathcal F$ is non-dicritical and without hidden saddle-nodes, we say that $\mathcal F$ is a {\em generalized hypersurface}. In this context, we take a definition of logarithmic model as follows: \begin{quote} Let $\mathcal F$ be a generalized hypersurface and consider a logarithmic foliation $\mathcal L$, both on $({\mathbb C}^n,0)$. We say that $\mathcal L$ is a {\em logarithmic model for $\mathcal F$} if and only if $\phi^*{\mathcal L}$ is a logarithmic model for $\phi^*{\mathcal F}$, for any holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $\phi^*{\mathcal F}$ exists. \end{quote} In this paper we show that the two above ways are confluent. The uniqueness of logarithmic models is a consequence of the same result in dimension two. We show the existence of logarithmic models for generalized hypersurfaces by working throughout a particular reduction of singularities of the foliation $\mathcal F$. From the technical viewpoint, we develop the theory of logarithmic models in terms of $\mathbb C$-divisors. 
We introduce the concept of {\em divisorial model} and we state the existence in the main technical result Theorem \ref{teo:main}. There is a relationship between $\mathbb C$-divisors and logarithmic foliations, that provides the bridge between the divisorial models and the logarithmic models as explained in the last Section \ref{Logarithmic Models}. Consider a non-singular complex analytic space $M$. A {\em ${\mathbb C}$-divisor\/} $\mathcal D$ on $M$ is a formal finite sum $$ {\mathcal D}=\sum_{i=1}^s\lambda_i H_i, \quad 0\ne \lambda_i\in {\mathbb C}, $$ where the $H_i\subset M$ are hypersurfaces. The support of the divisor is the union of the hypersurfaces $H_i$. We can make the usual operations with $\mathbb C$-divisors, in particular the pull-back $\phi^*{\mathcal D}$ under a holomorphic map $\phi:N\rightarrow M$, when the image is not locally contained in the support of ${\mathcal D}$. Working locally, if we take a reduced equation $f_i=0$ of $H_i$, we can consider the closed logarithmic $1$-form $\eta$ given by $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}. $$ The logarithmic foliation induced by $\eta$ will be called {\em $\mathcal D$-logarithmic.} In dimension two, we give a proof of the existence of logarithmic model in terms of $\mathbb C$-divisors, that is divisorial models, in a more explicit way than in the paper \cite{Cor}. When the foliation $\mathcal F$ is desingularized, at a singular point we have exactly two invariant curves, $\Gamma_1$ and $\Gamma_2$, given by the equations $\Gamma_1=(x_1=0)$ and $\Gamma_2=(x_2=0)$. We know that the foliation is given by a differential $1$-form $\omega$ as $$ \omega=(\lambda_1+\cdots)\frac{dx_1}{x_1}+ (\lambda_2+\cdots)\frac{dx_2}{x_2}, $$ where $-\lambda_i/\lambda_j$ are the Camacho-Sad indices. We say that the ${\mathbb C}$-divisor $$ {\mathcal D}=\lambda_1\Gamma_1+\lambda_2\Gamma_2, $$ is a divisorial model for $\mathcal F$. Of course, the logarithmic foliation $\mathcal L$ defined by $$ \eta=\lambda_1dx_1/x_1+\lambda_2dx_2/x_2 $$ fulfils the definition of being a logarithmic model for $\mathcal F$. We pass to the general case in dimension two through the stability under blowing-ups. More precisely, we recover the general definition of Camacho-Sad indices for foliations in dimension two (see \cite{Bru} and \cite{LNet}) and we establish a similar one for $\mathbb C$-divisors. Both are compatible with the blowing-ups and in this way we obtain logarithmic models once we have proven the existence of divisorial models in Theorem \ref{th:existenciayunicidadendimensiondos}. The above arguments pass in higher dimension, hence we have a definition of divisorial model that automatically gives a logarithmic foliation that is a logarithmic model. In this way, the main difficulty in the paper is the proof of the existence of a divisorial model for any generalized hypersurface $\mathcal F$ on $({\mathbb C}^n,0)$. We state this result in Theorem \ref{teo:main} in Section \ref{Logarithmic Models For Generalized Hypersurfaces}. 
The first sections are devoted to presenting the theory of $\mathbb C$-divisors, the relationship between $\mathbb C$-divisors, closed logarithmic $1$-forms and logarithmic foliations, the dicriticalness condition for $\mathbb C$-divisors and foliations, the property of being a generalized hypersurface, the existence of logarithmic models in dimension two through the generalized Camacho-Sad indices, the reduction of singularities and the properties of generic equireduction and relative transversality that are useful in the final proofs. The proofs of the main results are given in Section \ref{Logarithmic Models For Generalized Hypersurfaces}. We first show the existence of a ${\mathbb C}$-divisor compatible with a given reduction of singularities in Theorem \ref{teo:pilogarithmicmodel} and finally we prove the main Theorem \ref{teo:main} on the existence of divisorial models. The whole paper has been developed having in mind that a logarithmic model is a foliation; anyway, from the technical viewpoint, we have stated and proved the results in terms of divisorial models. In the last Section \ref{Logarithmic Models}, we quickly summarize how the existence and uniqueness results on divisorial models are translated to logarithmic models. In this way, we obtain the proof of the main Theorem stated in this Introduction. \section{$\mathbb C$-Divisors} Let $M$ be a non-singular complex analytic variety. The space of {\em generalized divisors $\operatorname{Div}_{\mathbb C}(M)$}, also called {\em $\mathbb C$-divisors}, is defined to be the ${\mathbb C}$-vector space having as a basis the set of irreducible hypersurfaces of $M$. They have been introduced in \cite{Can-Co} for the purpose of describing logarithmic models of foliations in ambient dimension two. Thus, a {\em $\mathbb C$-divisor ${\mathcal D}$} in $M$ is a finite expression $$ {\mathcal D}=\sum_{H}\lambda_HH, $$ where $H$ runs over the irreducible hypersurfaces of $M$ and the coefficients $\lambda_H$ are complex numbers, such that only finitely many of them are nonzero. The {\em support $\operatorname{Supp}({\mathcal D})$ of $\mathcal D$} is the union of the $H$ such that $\lambda_H\ne0$. We say that two nonzero $\mathbb C$-divisors ${\mathcal D}_1, {\mathcal D}_2\in \operatorname{Div}_{\mathbb C}(M)$ with connected support are {\em projectively equivalent} if and only if there is a nonzero scalar $\lambda\in {\mathbb C}^*$ such that ${\mathcal D}_2=\lambda{\mathcal D}_1$. If the support is not connected, we say that they are projectively equivalent when the condition holds at each connected component of the support. Consider a function $f:M\rightarrow {\mathbb C}$ that is not constant on any connected component of $M$. As usual, we define the divisor $\operatorname{Div}(f)$ by $$ \operatorname{Div}(f)=\sum_H \mu_HH, $$ where $\mu_H\ne 0$ if and only if $H$ is an irreducible component of $f=0$ and $\mu_H$ is the multiplicity of a local reduced equation of $H$ as a factor of $f$. Let us consider a closed hypersurface $S$ of $M$, not necessarily irreducible. Let $S=\cup_{i=1}^s H_i$ be the decomposition of $S$ into a union of irreducible components. The divisor $\operatorname{Div}(S)$ is defined to be $$ \operatorname{Div}(S)=\sum_{i=1}^s H_i. $$ In particular, if ${\mathcal D}=\sum_H\lambda_H H$, we also have that ${\mathcal D}=\sum_H\lambda_H\operatorname{Div}(H)$.
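For instance (a simple ad hoc example, just to fix the notation), take $M={\mathbb C}^2$ and $f=x^2y$. Then $\operatorname{Div}(f)=2(x=0)+(y=0)$, while for the hypersurface $S=(f=0)=(xy=0)$ we have $\operatorname{Div}(S)=(x=0)+(y=0)$. Moreover, the $\mathbb C$-divisors $2(x=0)+(y=0)$ and $2\lambda(x=0)+\lambda(y=0)$ are projectively equivalent, for any $\lambda\in{\mathbb C}^*$.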
The identity ${\mathcal D}=\sum_H\lambda_H\operatorname{Div}(H)$ allows us to define the restriction of a $\mathbb C$-divisor to an open set $U\subset M$, by means of the formula $$ {\mathcal D}\vert_U=\sum_H\lambda_H\operatorname{Div}(H\cap U). $$ In this way, we can also interpret the germ ${\mathcal D}_p$ at a point $p\in M$ of a $\mathbb C$-divisor ${\mathcal D}$ in $M$ as being a ${\mathbb C}$-divisor on the germified space $(M,p)$. Of course, these are particular cases of the inverse image of a $\mathbb C$-divisor by a morphism to be introduced below. \begin{remark} Most of the complex analytic varieties in this paper are germs over compact sets. In this case, any hypersurface has only finitely many irreducible components. Anyway, we consider only hypersurfaces with this property, even if we are in an analytic variety that is not necessarily a germ over a compact set. The reader will easily see, at each statement, the scope of this implicit assumption. \end{remark} Consider a morphism $\phi: N\rightarrow M$ between connected non-singular complex analytic varieties and a hypersurface $S\subset M$. We say that $\phi$ is {\em $S$-transverse} if and only if the image of $\phi$ is not contained in $S$. In this case, $\phi$ is $H$-transverse for any irreducible component $H$ of $S$ and the inverse image is a hypersurface $\phi^{-1}(H)\subset N$. When $\phi:N\rightarrow M$ is $H$-transverse, we define the $\mathbb C$-divisor $\phi^*(1\cdot H)$ of $N$ by the following property: for any point $q\in N$ the divisor $\phi^*(1\cdot H)$ germified at $q$ is equal to $ \operatorname{Div} (f\circ \phi) $, where $f=0$ is a local reduced equation of $H$ at $\phi(q)$. Consider a $\mathbb C$-divisor ${\mathcal D}=\sum_H\lambda_HH$. We say that the morphism $\phi:N\rightarrow M$ is {\em ${\mathcal D}$-transverse} if it is $S$-transverse, where $S=\operatorname{Supp}(\mathcal{D})$. Otherwise, we say that $\phi$ is {\em $\mathcal D$-invariant}. When $\phi$ is ${\mathcal D}$-transverse, the {\em inverse image} is defined by $$ \phi^{*}{\mathcal D}=\sum_H\lambda_H\phi^{*}(1\cdot H). $$ We are particularly interested in the case of blowing-ups $\pi:M'\rightarrow M$ with irreducible non-singular center $Y\subset M$. The blowing-up $\pi$, being a surjective morphism, is ${\mathcal D}$-transverse for any $\mathbb C$-divisor ${\mathcal D}$. The inverse image $\pi^{*}{\mathcal D}$, or {\em transform of $\mathcal D$ by $\pi$}, is given by $$ \pi^{*}{\mathcal D}=\mu E+\sum_{H}\lambda_HH', \quad E=\pi^{-1}(Y), \quad \mu= \sum_{H}\nu_Y(H)\lambda_H, $$ where $H'$ are the strict transforms of the irreducible hypersurfaces $H\subset M$ and $\nu_Y(H)$ denotes the generic multiplicity of $H$ along $Y$. The blowing-up $\pi$ is said to be {\em $\mathcal D$-admissible} when $Y\subset \operatorname{Supp}({\mathcal D})$. This is equivalent to saying that $$ \sum_{H\subset \operatorname{Supp}({\mathcal D})}\nu_{Y}(H)\geq 1. $$ A ${\mathcal D}$-admissible blowing-up $\pi$ is called {\em ${\mathcal D}$-dicritical} when $ \sum_{H}\nu_Y(H)\lambda_H=0 $; that is, the exceptional divisor $E$ is not contained in the support of $\pi^*{\mathcal D}$. \begin{remark} Let us recall that for any germ of function $f\in {\mathcal O}_{{\mathbb C}^n,0}$, the generic multiplicity $\nu_Y(f)$ along $Y\subset ({\mathbb C}^n,0)$ is defined to be the minimum of the multiplicity of $f$ at the points of $Y$ near the origin (the multiplicity is an upper semi-continuous function). The generic multiplicity $\nu_Y(H)$ is the generic multiplicity along $Y$ of a reduced germ $f$, such that $H$ is given by $f=0$.
\end{remark} \begin{remark} \label{rk:transversemaps} Let us consider two morphisms $\phi_2:N_2\rightarrow N_1$ and $\phi_1:N_1\rightarrow M$ and a $\mathbb C$-divisor ${\mathcal D}$ on $M$. If $\phi_1\circ\phi_2$ is ${\mathcal D}$-transverse, then $\phi_1$ is ${\mathcal D}$-transverse, the morphism $\phi_2$ is $\phi_1^*{\mathcal D}$-transverse and $$ (\phi_1\circ\phi_2)^*{\mathcal D}=\phi_2^*(\phi_1^*{\mathcal D}). $$ The converse is not true. It is possible to have that $\phi_1$ is ${\mathcal D}$-transverse and $\phi_2$ is $\phi_1^*{\mathcal D}$-transverse, but $\phi_1\circ\phi_2$ is ${\mathcal D}$-invariant. The typical example of this situation is the inclusion $$ \pi^{-1}(Y)\stackrel{\phi_2}{\subset} M'\stackrel{\phi_1}{\rightarrow }M, $$ where $\phi_1$ is a $\mathcal D$-dicritical blowing-up with center $Y$. The $\mathbb C$-divisor $\phi_2^*(\phi_1^*{\mathcal D})$ cannot be obtained directly from $\pi^{-1}(Y)\rightarrow M$ (this phenomenon is an essential fact for the transcendence of leaves of singular foliations studied in \cite{Can-L-M}). \end{remark} If no confusion arises, we write $ \pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D}) $ to denote a ${\mathcal D}$-transverse holomorphic map $\pi:M'\rightarrow M$, where ${\mathcal D}'=\pi^*{\mathcal D}$. The rest of this section is devoted to characterizing the dicriticalness of a $\mathbb C$-divisor. We take the following definition, which is inspired by the corresponding one for foliations: \begin{definition} Consider a $\mathbb C$-divisor $\mathcal D$ on a non-singular complex analytic variety $M$. We say that $\mathcal D$ is {\em dicritical at a point $p\in M$} if and only if there is a $\mathcal D$-transverse holomorphic map $ \phi:({\mathbb C}^2,0)\rightarrow M $ such that $$ \phi(0)=p,\quad \phi(y=0)\subset \operatorname{Supp} ({\mathcal D}),\quad \phi^*{\mathcal D}=0. $$ We say that $\mathcal D$ is {\em dicritical} if there is a point $p\in M$ such that it is dicritical at $p$. Accordingly, we say that $\mathcal D$ is {\em non-dicritical} if and only if it is non-dicritical at each point $p\in M$ (in the case of germs $(M,K)$ we require the condition only at the points $p\in K$). \end{definition} \begin{proposition} \label{pro:divdicrunblowinup} Consider a $\mathbb C$-divisor ${\mathcal D}$ on $M=({\mathbb C}^n,0)$ and a non-dicritical admissible blowing-up $ \pi:((M',\pi^{-1}(0)),{\mathcal D}')\rightarrow (({\mathbb C}^n,0),{\mathcal D}). $ Then, the $\mathbb C$-divisor ${\mathcal D}$ is dicritical if and only if there is a point $p'\in\pi^{-1}(0)$ such that ${\mathcal D}'$ is dicritical at $p'$. \end{proposition} \begin{proof} Let us assume that ${\mathcal D}'$ is dicritical at a point $p'\in \pi^{-1}(0)$. Then, there is a ${\mathcal D}'$-transverse map $\phi': ({\mathbb C}^2,0)\rightarrow (M',p')$ such that $${\phi'}^*{\mathcal D'}=0,\quad \phi'(y=0)\subset \operatorname{Supp} ({\mathcal D}'). $$ Since $\pi$ is non-dicritical, we have that $ \operatorname{Supp} ({\mathcal D}')=\pi^{-1} (\operatorname{Supp} ({\mathcal D})) $. This implies that $\phi=\pi\circ\phi'$ is also a ${\mathcal D}$-transverse map and, moreover, we have $$ \phi(y=0)\subset \operatorname{Supp} ({\mathcal D}), \quad \phi^*{\mathcal D}={\phi'}^*{\mathcal D}'=0. $$ Hence, the $\mathbb C$-divisor ${\mathcal D}$ is dicritical. Conversely, let us assume that ${\mathcal D}$ is dicritical.
Consider a $\mathcal D$-transverse map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $\phi(y=0)\subset \operatorname{Supp}({\mathcal D})$ and $\phi^*{\mathcal D}=0$. In view of Proposition \ref{prop:appdos} in Appendix I, there is a morphism $$ \sigma: (N,\sigma^{-1}(0))\rightarrow ({\mathbb C}^2,0) $$ that is a composition of blowing-ups and a morphism $\psi: (N,\sigma^{-1}(0))\rightarrow (M',\pi^{-1}(0))$ such that $\pi\circ\psi=\phi\circ\sigma$. Note that $\phi\circ\sigma$ is $\mathcal D$-transverse, since $\phi$ is $\mathcal D$-transverse and $\sigma$ is a surjective map. By Remark \ref{rk:transversemaps}, we have that $\psi$ is $\pi^*{\mathcal D}$-transverse and $$ 0= \sigma^*(\phi^*{\mathcal D})=(\phi\circ\sigma)^*{\mathcal D}=(\pi\circ\psi)^*{\mathcal D}=\psi^*({\mathcal D}'). $$ Now, let $(\Gamma,q)\subset (N,q)$ be the strict transform of $y=0$ by $\sigma$. We have that $\pi(\psi(\Gamma))=\phi(y=0)\subset \operatorname{Supp}({\mathcal D})$. In other words $$ \psi(\Gamma)\subset \pi^{-1}(\operatorname{Supp}({\mathcal D}))=\operatorname{Supp}({\mathcal D}'). $$ Select local coordinates $x',y'$ at $q$ such that $\Gamma=(y'=0)$ and let $$ \phi':(N,q)=({\mathbb C}^2,0)\rightarrow (M',p),\quad p=\psi(q), $$ be the map between germs induced by $\psi$. Thanks to $\phi'$, we see that ${\mathcal D}'$ is dicritical at $p$. \end{proof} The following corollary is a direct consequence of Proposition \ref{pro:divdicrunblowinup}: \begin{corollary} \label{dicriticidadexplosionnodicritica} Consider a morphism $ \pi:(M', {\mathcal D}')\rightarrow (M,{\mathcal D}) $ that is the composition of a sequence of non-dicritical admissible blowing-ups. Then, the $\mathbb C$-divisor $\mathcal D$ is dicritical in $M$ if and only if ${\mathcal D}'$ is dicritical in $M'$. \end{corollary} Now, we characterize the dicriticalness in terms of admissible blowing-ups. We start with the normal crossings case. We say that a $\mathbb C$-divisor $\mathcal D$ on a complex analytic variety $M$ has a {\em non-negative resonance} at a point $p\in M\cap \operatorname{Supp}({\mathcal D})$ if the germ of the divisor is written $$ {\mathcal D}_p=\sum_{i=1}^s\lambda_i H_i,\quad \lambda_i\ne 0\mbox{ for } i=1,2,\ldots,s $$ and there is $\mathbf{m}=(m_1,m_2,\ldots,m_s)\in {\mathbb Z}_{\geq 0}^s$, with $\mathbf{m}\ne\mathbf{0}$, such that \begin{equation} \label{eq:resonance} \sum_{i=1}^sm_i\lambda_i=0,\quad \mathbf{m}=(m_1,m_2,\ldots,m_s)\in {\mathbb Z}^s_{\geq 0}\setminus\{\mathbf{0}\}. \end{equation} \begin{lemma} \label{lema:normalcrossings} Let $(M,K)$ be a non-singular complex analytic variety that is a germ over a compact subset $K\subset M$. Consider a $\mathbb C$-divisor $\mathcal D$ in $(M,K)$ whose support has normal crossings. Assume that there is a point $p\in K\cap\operatorname{Supp}(\mathcal D)$ in which $\mathcal D$ has a non-negative resonance. Then, there are morphisms $ \pi':(M',{\mathcal D}')\rightarrow (M,{\mathcal D})$ and $\pi'':(M'',{\mathcal D}'')\rightarrow (M',{\mathcal D}')$, such that $\pi'$ is the composition of a sequence of non-dicritical admissible blowing-ups and $\pi''$ is a dicritical admissible blowing-up. \end{lemma} \begin{proof} This result, in another context, is proven in \cite{FDu}. Let us give a quick idea of a proof. Choose local coordinates $(x_1,x_2,\ldots,x_n)$ at the point $p$ such that $$H_i=(x_i=0), \quad i=1,2,\ldots,s. $$ Up to a reordering, we assume that $\prod_{i=1}^tm_i \ne 0$ and $m_i=0$ for $t+1\leq i\leq s$.
We proceed by induction on the lexicographical invariant $(t,\delta)$, where $$ \delta=\min_{1\leq i<j\leq t}\{m_i+m_j\}. $$ Assume, up to a new reordering, that $\delta=m_1+m_2$ and $m_1\leq m_2$. Choose $Y=(x_1=x_2=0)$ as a center of blowing-up. The first chart of this blowing-up gives a morphism $$ \phi: ({\mathbb C}^n,0)\rightarrow ({\mathbb C}^n,0) $$ defined by the equations $x_1=x'_1, x_2=x'_1x'_2$ and $x_i=x'_i$ for $i=3,4,\ldots,n$. The transform $\phi^*{\mathcal D}$ is given at the origin of this chart by $$ \phi^*{\mathcal D}=(\lambda_1+\lambda_2)E+\sum_{i=2}^s\lambda_i H'_i, $$ where $E=(x'_1=0)$ and $H'_i=(x'_i=0)$ for $i=2,3,\ldots,s$. If $\lambda_1+\lambda_2=0$, we are done, since then we have an admissible dicritical blowing-up. Otherwise, we obtain a non-negative resonance $$ \mathbf{m}'=(m_1, m_2-m_1,m_3,\ldots,m_s) $$ with respect to the coefficients $\lambda_1+\lambda_2,\lambda_2,\lambda_3,\ldots,\lambda_s$ of $\phi^*{\mathcal D}$, and the invariant $(t',\delta')$ is strictly smaller than $(t,\delta)$. In spite of the local presentation, the above procedure is in fact a global one. This ends the proof. \end{proof} \begin{proposition} \label{pro:divdicritico} Let us consider a $\mathbb C$-divisor ${\mathcal D}$ on $M=({\mathbb C}^n,0)$. The $\mathbb C$-divisor $\mathcal D$ is dicritical if and only if there are morphisms $$ \pi':(M',{\mathcal D}')\rightarrow (M,{\mathcal D}),\quad \pi'':(M'',{\mathcal D}'')\rightarrow (M',{\mathcal D}'), $$ such that $\pi'$ is the composition of a sequence of non-dicritical admissible blowing-ups and $\pi''$ is a dicritical admissible blowing-up. \end{proposition} \begin{proof} Let us assume first the existence of $\pi',\pi''$ with the stated properties. Since $\pi'$ is a composition of non-dicritical admissible blowing-ups, we have that $$ E'\subset\operatorname{Supp}({\mathcal D}'), $$ where $E'$ is the exceptional divisor of $\pi'$. Let $Y\subset M'$ be the center of $\pi''$ and denote $\pi=\pi'\circ\pi''$. The exceptional divisor of $\pi$ is $E''={\tilde E}\cup D$, where $D={\pi''}^{-1}(Y)$ and $\tilde E$ is the strict transform of $E'$ by $\pi''$. Since $\pi''$ is dicritical, we have that $$ \tilde E\subset\operatorname{Supp}({\mathcal D}''),\quad D\not\subset\operatorname{Supp}({\mathcal D}''). $$ Take a point $p\in D\setminus\operatorname{Supp}({\mathcal D}'')$. Let us identify the germ $(M'',p)$ with $({\mathbb C}^n,0)$, with coordinates $x_1,x_2,\ldots,x_n$, where $D$ is locally given at $p$ by the equation $x_n=0$. Consider the morphism $$ \psi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)=(M'',p)\hookrightarrow M'' $$ defined by $x_1=x$, $x_n=y$ and $x_i=0$, for $2\leq i\leq n-1$. We have that $\psi(y=0)\subset D$ and $\operatorname{Im}(\psi)\not\subset D$. Since the support of ${\mathcal D}''$ is empty around $p$, we have that $\psi^*{\mathcal D}''=0$ and we also have $$ \operatorname{Im}(\psi)\not\subset D\cup\operatorname{Supp}({\mathcal D}'')=E''\cup \operatorname{Supp}({\mathcal D}''). $$ Noting that $\pi^{-1}(\operatorname{Supp}({\mathcal D}))=E''\cup \operatorname{Supp}({\mathcal D}'')$, we conclude that $\phi$ is ${\mathcal D}$-transverse, where $\phi=\pi\circ \psi$. Then, we have $$ \phi^*{\mathcal D}=\psi^*(\pi^*{\mathcal D})=\psi^*({\mathcal D''})=0,\quad \phi(y=0)\subset \pi'(Y)\subset\operatorname{Supp}({\mathcal D}). $$ This implies that $\mathcal D$ is dicritical. Assume now that ${\mathcal D}$ is dicritical. Let us perform a Hironaka reduction of singularities of the support of $\mathcal D$ by means of admissible blowing-ups (see \cite{Aro-H-V, Hir}).
We can assume that none of the blowing-ups in the reduction of singularities is dicritical, since then we are done. Hence, we have a morphism $$ \tilde\pi: ((\tilde M,\tilde\pi^{-1}(0)),\tilde{\mathcal D})\rightarrow (({\mathbb C}^n,0),{\mathcal D}) $$ that is a composition of non-dicritical admissible blowing-ups such that $\operatorname{Supp}({\tilde{\mathcal D}})$ has normal crossings. Now, in view of Lemma \ref{lema:normalcrossings}, it is enough to find a point $p$ in $\tilde\pi^{-1}(0)\cap \operatorname{Supp}(\tilde{\mathcal D})$ such that $\tilde{\mathcal D}$ has a non-negative resonance at $p$. By Proposition \ref{pro:divdicrunblowinup}, there is a point $p \in \tilde\pi^{-1}(0)$ such that $\tilde {\mathcal D}_{p}$ is dicritical. Then, there is a $\tilde{\mathcal D}$-transverse map $$ \tilde\phi: ({\mathbb C}^2,0)\rightarrow (\tilde M,p) $$ such that $\tilde\phi^*{\tilde{\mathcal D}}=0$ and $\tilde\phi(y=0)\subset \operatorname{Supp} (\tilde{\mathcal D})$. Let us identify $(\tilde M,p)$ with $({\mathbb C}^n,0)$ by means of a choice of local coordinates $x_1,x_2,\ldots,x_n$ at $p$ such that $$ \tilde{\mathcal D}_p=\sum_{i=1}^s\lambda_iH_i,\quad H_i=(x_i=0),\; i=1,2,\ldots,s. $$ Put $\tilde\phi_i=x_i\circ\tilde\phi$, for $i=1,2,\ldots, n$. We know that $\tilde\phi_i(0)=0$ for $i=1,2,\ldots,n$ and that $\tilde\phi_\ell\ne 0$, for $1\leq\ell\leq s$. Let $\Gamma\subset ({\mathbb C}^2,0)$ be an irreducible component of $\tilde\phi_1=0$, that is $\nu_\Gamma(\tilde\phi_1)\geq 1$. The coefficient of $\Gamma$ in $\tilde\phi^*{\tilde{\mathcal D}}=0$ is $$ \sum_{i=1}^s\lambda_i\nu_\Gamma(\tilde\phi_i)=0. $$ This is the desired non-negative resonance. \end{proof} \begin{remark} In other words, the $\mathbb C$-divisor $\mathcal D$ is dicritical if and only if there is a sequence of non-dicritical admissible blowing-ups that can be followed by a dicritical admissible blowing-up. \end{remark} The nonnegative resonances characterize dicriticalness in the case of normal crossings support, as we show in the following result: \begin{corollary} \label{cor:dicriticonormalcrossings} Consider a $\mathbb C$-divisor ${\mathcal D}=\sum_{i=1}^s\lambda_iH_i$ on $M=({\mathbb C}^n,0)$ whose support $S=\cup_{i=1}^sH_i$ has normal crossings. The following statements are equivalent: \begin{enumerate} \item The $\mathbb C$-divisor $\mathcal D$ is dicritical. \item There is a nonnegative resonance $\sum_{i=1}^sm_i\lambda_i=0$, with $m_i\geq 0$ not all zero integer numbers. \end{enumerate} \end{corollary} \begin{proof} See the second part of the proof of Proposition \ref{pro:divdicritico}. \end{proof} \section{Logarithmic Foliations and Dicriticalness} Let $\mathcal F$ be a codimension one singular holomorphic foliation on a non-singular complex analytic variety $M$. Given a point $p \in M$, we recall that the germ of $\mathcal F$ at $p$ is generated by an integrable meromorphic germ $\eta$ of differential $1$-form. Moreover two such differential $1$-forms $\eta$ and $\eta'$ generate the same germ of foliation if and only if $\eta'=\phi\eta$, where $\phi$ is the germ at $p$ of a meromorphic function. We recall from \cite{Sai} that a meromorphic germ of differential $1$-form $\eta$ at a point $p\in M$ is {\em logarithmic} when both $\eta$ and $d\eta$ have at most simple poles. 
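For a quick ad hoc illustration of this definition: on $({\mathbb C}^2,0)$, the $1$-form $\frac{dx}{x}+x\,\frac{dy}{y}$ is logarithmic, since it has simple poles along $xy=0$ and its differential $\frac{dx\wedge dy}{y}$ has a simple pole along $y=0$; note that it is not closed. By contrast, $dy/x^2$ is not logarithmic, because it has a double pole along $x=0$.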
The set $\operatorname{Pol}(\eta)$ of poles of a meromorphic differential $1$-form $\eta$ is the hypersurface $g=0$, where $g\eta$ is holomorphic and $g$ divides any other $g'$ such that $g'\eta$ is holomorphic; the poles are simple when we can take $g$ to be reduced. \begin{remark} \label{rk:logaritmicgenerator} Assume that $\mathcal F$ is a germ of foliation on $({\mathbb C}^n,0)$ locally generated by a germ of holomorphic integrable $1$-form $\omega$, without common factors in its coefficients. Let $f=0$ be a reduced equation for a (maybe non-irreducible) invariant hypersurface of $\mathcal F$; this means that $f$ divides $df\wedge\omega$. Then, the meromorphic $1$-form $\omega/f$ is logarithmic. Indeed, we have that $$ d(\omega/f)= (1/f)\left(-(df\wedge \omega)/f+d\omega\right) $$ and hence $f d(\omega/f)$ is holomorphic. \end{remark} The following result is well known: \begin{proposition} Let $\eta$ be the germ of a closed logarithmic $1$-form on $({\mathbb C}^n,0)$. There is a multivaluated function $F$ such that $\eta=dF/F$. More precisely, if we decompose the set of poles as a union $\operatorname{Pol}(\eta)=\cup_{i=1}^sH_i$ of irreducible hypersurfaces, there are $\lambda_i\ne 0$ and reduced local equations $f_i=0$ for each $H_i$ such that $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}. $$ Moreover, the coefficients $\lambda_i$ are unique. In the case that $\operatorname{Pol}(\eta)=\emptyset$ and hence $\eta$ is holomorphic, the statement must be interpreted by saying that there is a unit $U$ such that $\eta=dU/U$. \end{proposition} \begin{proof}(See \cite{Cer-M, Sai}) If $\eta$ is holomorphic, by Poincaré Lemma, there is a holomorphic function $G$ such that $\eta=dG$, taking $U=\exp(G)$, we have that $\eta=dU/U$. We know that the residue of $\eta$ along $H_i$ is non zero and constant (see (2.6) and the proof of Theorem (2.9) in \cite{Sai}), let us call $\lambda_i\in {\mathbb C}$ this residue. Taking local reduced equations $g_i=0$ of $H_i$ we have that $$ \alpha=\eta- \sum_{i=1}^s\lambda_i\frac{dg_i}{g_i} $$ is a closed logarithmic differential $1$-form without residues. Hence $(1/\lambda_1)\alpha$ is a closed holomorphic $1$-form and thus $(1/\lambda_1)\alpha=dU/U$, where $U$ is a unit. Put $f_1=Ug_1$ and $f_i=g_i$, for $i=2,3,\ldots,s$. We conclude that $\eta=\sum_{i=1}^s \lambda_idf_i/f_i$. \end{proof} Given a closed logarithmic differential $1$-form $\eta$ on $M$, we attach to it the $\mathbb C$-divisor $\operatorname{Div}{\eta}$ given by $$ \operatorname{Div}{\eta}=\sum_{H}\lambda_HH, $$ where $\lambda_H=\operatorname{Res}_H(\eta)$ is the residue of $\eta$ along $H$, that we know to be constant by \cite{Sai}. When ${\mathcal D}=\operatorname{Div}(\eta)$, we say that the closed $1$-form $\eta$ is {\em ${\mathcal D}$-logarithmic} . \begin{definition} A codimension one singular holomorphic foliation $\mathcal F$ on $M$ is {\em $\mathcal D$-logarithmic} when it is locally generated by a closed $\mathcal D$-logarithmic differential $1$-form. \end{definition} Let us note that $\operatorname{Div}(\mu\eta)=\mu\operatorname{Div}(\eta)$, when $\mu\in{\mathbb C}$. Thus, if ${\mathcal F}$ is ${\mathcal D}$-logarithmic and ${\mathcal D}'$ is a $\mathbb C$-divisor projectively equivalent to ${\mathcal D}$, then ${\mathcal F}$ is also ${\mathcal D}'$-logarithmic. \begin{remark} \label{rk:invarianciasoporte} If $\mathcal F$ is a $\mathcal D$-logarithmic foliation, the irreducible components of the support of $\mathcal D$ are invariant for $\mathcal F$. 
This may be verified locally, assuming that $\eta=\sum_{i=1}^s\lambda_i df_i/f_i$ generates $\mathcal F$. We have that $f_i$ does not divide the coefficients of the holomorphic $1$-form $\omega=f\eta$, where $f=\prod_{i=1}^sf_i$; indeed, $f_i$ does not divide $df_i$, since $f_i$ is reduced. In this situation, we have only to verify that $f_i$ divides $df_i\wedge \omega$, and this is immediate by writing $\omega=\sum_{j=1}^s\lambda_j(f/f_j)\,df_j$. \end{remark} \begin{remark} Consider the radial foliation ${\mathcal R}$ on $({\mathbb C}^2,0)$ defined by $\omega=0$ where $\omega=ydx-xdy$. Note that $\mathcal R$ is defined both by $\eta_0$ and $\eta_1$, where $$ \eta_0=\frac{dx}{x}-\frac{dy}{y},\quad \eta_1=\frac{d(x+y)}{x+y}- \frac{d(x-y)}{x-y}. $$ Put $H^0_1=(x=0)$, $H^0_2=(y=0)$, $H^1_1=(x+y=0)$ and $H^1_2=(x-y=0)$. The $\mathbb C$-divisors $$ \operatorname{Div}(\eta_0)= H^0_1-H^0_2,\quad \operatorname{Div}(\eta_1)=H^1_1-H^1_2 $$ are different and not proportional. Hence, a codimension one foliation can be logarithmic with respect to several non projectively equivalent $\mathbb C$-divisors. \end{remark} \subsection{Dicriticalness} The word dicritical comes from old works of Autonne, following Mattei \cite{Cer-M}. The general definition of dicritical foliation, suggested by D. Cerveau, may be found in \cite{Can-RV-S}: \begin{definition} \label{def:foldicr} Let $\mathcal F$ be a codimension one holomorphic foliation on a non-singular complex analytic variety $M$. We say that $\mathcal F$ is {\em dicritical at a point $p\in M$} if and only if there is a holomorphic map $$ \phi:({\mathbb C}^2,0)\rightarrow (M,p) $$ such that $\phi^*{\mathcal F}=(dx=0)$ and $\phi(y=0)$ is invariant for $\mathcal F$. We say that $\mathcal F$ is {\em dicritical} if there is a point $p$ such that it is dicritical at $p$. When $M$ is a germ $(M,K)$ over a compact set $K\subset M$, we require the condition only at the points of $K$. \end{definition} As in the case of $\mathbb C$-divisors, we adopt the notation $$ \pi:(M',{\mathcal F}')\rightarrow (M,{\mathcal F}) $$ to indicate a morphism $\pi:M'\rightarrow M$, a foliation $\mathcal F$ on $M$ and the transform ${\mathcal F}'=\pi^*{\mathcal F}$. When $\pi$ is a blowing-up with non-singular center $Y$, we say that $\pi$ is {\em $\mathcal F$-admissible} if the center $Y$ is invariant for $\mathcal F$; we say that $\pi$ is a {\em dicritical blowing-up} if the exceptional divisor $\pi^{-1}(Y)$ is not invariant for ${\mathcal F}'$, and it is {\em non-dicritical} when the exceptional divisor is invariant for ${\mathcal F}'$. \begin{proposition} \label{prop:dicriticidaddeunaexplosion} Let $\mathcal F$ be a codimension one singular foliation on $({\mathbb C}^n,0)$ and assume that $Y$ is a non-singular invariant subvariety of $({\mathbb C}^n,0)$. If the blowing-up $$ \pi:((M,\pi^{-1}(0)),{\mathcal F}')\rightarrow (({\mathbb C}^n,0),{\mathcal F}) $$ centered at $Y$ is a dicritical blowing-up, then $\mathcal F$ is a dicritical foliation. \end{proposition} \begin{proof} Choose a point $p\in E=\pi^{-1}(Y)$ and consider local coordinates $x_1,x_2,\ldots,x_n$ at $p$ such that $E=(x_1=0)$ and $x_1=x_2=\ldots=x_{n-1}=0$ is not invariant for ${\mathcal F}'$. This is possible, since not all the non-singular branches through $p$ contained in $E$ are invariant for ${\mathcal F}'$ (otherwise $E$ itself would be invariant).
Now, let $ \psi: ({\mathbb C}^2,0)\rightarrow (M,p)\hookrightarrow (M,\pi^{-1}(0)) $ be the map given by $$ x_1 \circ \psi=v,\; x_n \circ \psi=u,\; x_i \circ \psi=0,\, i=2,3,\ldots,n-1, $$ where $u,v$ are local coordinates in $({\mathbb C}^2,0)$. We know that $\Gamma=(v=0)$ is not invariant for $\psi^*{\mathcal F}'$. Let $\sigma: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0)$ be the composition of a sequence of local blowing-ups following the infinitely near points of $\Gamma$ such that the strict transform of $\Gamma$ is $y=0$ and $\sigma^*(\psi^*{\mathcal F}')$ is the foliation $dx=0$. This is possible, since we perform the reduction of singularities of both $\Gamma$ and $\psi^*{\mathcal F}'$. We end by considering $\phi=\pi \circ \psi\circ\sigma$, where $\phi^*({\mathcal F})=(dx=0)$ and $\phi(y=0)\subset Y$ is invariant. \end{proof} \begin{remark} When $M$ has dimension two, we have that $\mathcal F$ is dicritical at $p$ if and only if there are infinitely many germs of invariant branches of $\mathcal F$ at $p$ and this is also equivalent to saying that we can find a sequence of blowing-ups ending with a dicritical one. This property is the classical definition of dicritical foliation in dimension two. Nevertheless, the direct generalization to higher dimension is not evident, as Jouanolou's example \cite{Jou} shows: a germ of foliation $\mathcal F$ in $({\mathbb C}^3,0)$ with no invariant surface, but such that the blowing-up of the origin is dicritical. See \cite{Can-C, Can-C-D} for more details. \end{remark} \begin{proposition} \label{prop:estabilidddicriticidad} Let $\pi:(M',{\mathcal F}')\rightarrow(M,{\mathcal F})$ be an admissible non-dicritical blowing-up. Then ${\mathcal F}$ is a dicritical foliation if and only if ${\mathcal F}'$ is so. \end{proposition} \begin{proof} Assume that ${\mathcal F}'$ is dicritical. Take a holomorphic map $\phi':({\mathbb C}^2,0)\rightarrow M'$ such that ${\phi'}^*{\mathcal F}'=(dx=0)$ and ${\phi'(y=0)}\subset M'$ is invariant. Put $\phi=\pi \circ \phi'$. Since $\pi$ is non-dicritical, we have that $$ \phi^*{\mathcal F}={\phi'}^*(\pi^*{\mathcal F}) $$ (the non-dicriticalness of $\pi$ is necessary here, see Remark \ref{rk:pullbackdicriticao} below) and, moreover, $\phi(y=0)=\pi(\phi'(y=0))$ is invariant. Hence ${\mathcal F}$ is also a dicritical foliation. Conversely, let us assume that ${\mathcal F}$ is dicritical and take a holomorphic map $$ \phi:({\mathbb C}^2, 0)\rightarrow M $$ such that $\phi^*{\mathcal F}=(dx=0)$ and $\phi(y=0)$ is invariant for ${\mathcal F}$. In view of Proposition \ref{prop:appdos} in Appendix I, there is a commutative diagram of morphisms $$ \begin{array}{ccc} ({\mathbb C}^2,0)&\stackrel{\sigma}{\longleftarrow}&N\\ \phi\downarrow\;\; &&\;\;\downarrow\psi\\ M&\stackrel{\pi}{\longleftarrow}&M' \end{array}, $$ where $\sigma$ is the composition of a finite sequence of blowing-ups. Let $(\Gamma',p')$ be the strict transform of $(y=0)$ by $\sigma$. We know that there are local coordinates $u,v$ at $p'$ such that $\Gamma'=(v=0)$ and $\sigma^*{(dx=0)}=(du=0)$. Note that $$ (\phi\circ\sigma)^*{\mathcal F}=(\pi\circ\psi)^*{\mathcal F}. $$ Since $\pi$ is non-dicritical, we have that $(\pi\circ\psi)^*{\mathcal F}=\psi^*{\mathcal F}'$. Moreover, since $\sigma$ is a sequence of blowing-ups centered at points, we have that $$ (\phi\circ\sigma)^*{\mathcal F}=\sigma^*(\phi^*{\mathcal F})=(du=0). $$ Hence $\psi^*{\mathcal F}'=(du=0)$ and $\mathcal F'$ is a dicritical foliation.
\end{proof} \begin{remark} \label{rk:pullbackdicriticao} Let $\mathcal F$ be a codimension one singular foliation of $M$ and consider two morphisms $\phi:M'\rightarrow M$ and $\psi:M''\rightarrow M'$. The foliation $\phi^*{\mathcal F}$ is defined locally by the pullback $\phi^*\omega$ of a differential $1$-form $\omega$ defining $\mathcal F$. The pull-back foliation $\phi^*{\mathcal F}$ {\em exists}, or {\em is defined}, if and only if $\phi^*\omega\ne 0$, when $\omega$ is chosen to be without common factors in its coefficients. When $(\phi\circ\psi)^*{\mathcal F}$, $\phi^*{\mathcal F}$ and $\psi^*(\phi^*{\mathcal F})$ exist, we have that $$ (\phi\circ\psi)^*{\mathcal F}=\psi^*(\phi^*{\mathcal F}), $$ but it is possible for $\phi^*{\mathcal F}$ and $\psi^*(\phi^*{\mathcal F})$ to be well defined, whereas $(\phi\circ\psi)^*{\mathcal F}$ does not exist. An important case of this situation is the immersion $\psi: M''\rightarrow M'$ of the exceptional divisor of a dicritical blowing-up $\phi:M'\rightarrow M$, see \cite{Can-L-M}. Anyway, when $\phi$ is a non-dicritical blowing-up, and hence the exceptional divisor is invariant for $\phi^*{\mathcal F}$, we have that $\phi^*{\mathcal F}$ exists (this is always true because a blowing-up is an isomorphism in a dense open set) and moreover $\psi^*(\phi^*{\mathcal F})$ is defined if and only if $(\phi\circ\psi)^*{\mathcal F}$ is defined; hence we have the equality. On the other hand, when $\psi$ is a blowing-up or a sequence of blowing-ups, we also have that $\phi^*{\mathcal F}$ is defined if and only if $(\phi\circ\psi)^*{\mathcal F}$ is defined and, if this is the case, we also have that $ (\phi\circ\psi)^*{\mathcal F}=\psi^*(\phi^*{\mathcal F}) $. \end{remark} \subsection{Non-dicritical Logarithmic Foliations} In this Subsection we relate the non-dicriticalness of a ${\mathcal D}$-logarithmic foliation with the same property for the $\mathbb C$-divisor $\mathcal D$. \begin{lemma} \label{lema:unaexplosion} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation. Assume that $ \pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D}) $ is a non-dicritical ${\mathcal D}$-admissible blowing-up. Then $ \pi:(M',{\mathcal F}')\rightarrow (M,{\mathcal F}) $ is an admissible non-dicritical blowing-up and ${\mathcal F}'$ is ${\mathcal D}'$-logarithmic. \end{lemma} \begin{proof} Let $Y$ be the center of $\pi$. We know that $Y\subset\operatorname{Supp}({\mathcal D})$ and hence $Y$ is ${\mathcal F}$-invariant, since the support is $\mathcal F$-invariant, in view of Remark \ref{rk:invarianciasoporte}. Then $\pi$ is $\mathcal F$-admissible. Put ${\mathcal D}=\sum_{i=1}^s\lambda_i H_i$ and assume that $\mathcal F$ is generated by $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}, $$ where $f_i=0$ is a reduced local equation of $H_i$ for $i=1,2,\ldots, s$. Then ${\mathcal F}'$ is generated by $\pi^*\eta$, where $$ \pi^*\eta= \sum_{i=1}^s\lambda_i\frac{d(f_i\circ\pi)}{f_i\circ \pi}. $$ Moreover, we have that $$ {\mathcal D}'=\pi^{*}{\mathcal D}=\sum_{i=1}^s\lambda_i \pi^*(1\cdot H_i)=\mu E+\sum_{i=1}^s\lambda_i H'_i, $$ where $E=\pi^{-1}(Y)$, $\mu=\sum_{i=1}^s\lambda_i\nu_i$, with $\nu_i=\nu_Y(H_i)$ and $H'_i$ stands for the strict transform of $H_i$. By hypothesis, we have that $\mu\ne 0$. We can do the necessary verifications locally at the points in $E$. Take one such $q\in E$ and let $h=0$ be a local reduced equation of $E$ at $q$. We have that $$ f_i\circ \pi=h^{\nu_i}f'_i, $$ where $f'_i=0$ is a local reduced equation for the strict transform $H'_i$ of $H_i$ at $q$. 
Let us show that $\pi^*\eta$ can be written as \begin{equation} \label{eq:transformado} \pi^*\eta=\mu\frac{dh'}{h'}+\sum_{q\in H'_i}\lambda_{i}\frac{df'_i}{f'_i}, \end{equation} where $h'=0$ is a reduced local equation of $\pi^{-1}(Y)$ at $q$. Recalling that $\mu\ne 0$, we see that $\pi^{-1}(Y)$ is invariant for ${\mathcal F}'$ and hence $\pi:( M', {\mathcal F}')\rightarrow (M,{\mathcal F})$ is non-dicritical; moreover, Equation \ref{eq:transformado} also shows that ${\mathcal F}'$ is ${\mathcal D}'$-logarithmic. It remains to find $h'$ satisfying Equation \ref{eq:transformado}. Note that $f'_i$ is a unit if and only if $q\notin H'_i$. In this situation, there is a unit $U$ such that $$ \mu\frac{dU}{U}= \sum_{q\notin H'_i}\lambda_{i}\frac{df'_i}{f'_i}. $$ Now, it is enough to take $h'=Uh$. \end{proof} \begin{remark} It is possible to have a dicritical ${\mathcal D}$-admissible blowing-up $$ \pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D}) $$ and a ${\mathcal D}$-logarithmic foliation ${\mathcal F}$ such that $\pi$ induces a non-dicritical admissible blowing-up $ \pi:(M',{\mathcal F}')\rightarrow (M,{\mathcal F}) $. The following example may be found in \cite{Can-Co}: take the foliation $\mathcal F$ on $({\mathbb C}^2,0)$ given by $\eta=0$ where $$ \eta=\frac{d(y-x^2)}{y-x^2}-\frac{d(y+x^2)}{y+x^2}. $$ Then $\mathcal F$ is $\mathcal D$-logarithmic for ${\mathcal D}=(y-x^2=0)-(y+x^2=0)$. Note that $\mathcal F$ is also ${\mathcal D}_1$-logarithmic, where ${\mathcal D}_1=(y=0)-2(x=0)$. The first blowing-up $\pi$ is ${\mathcal D}$-dicritical, but the exceptional divisor is invariant for the transformed foliation. Anyway, we know that in ambient dimension two, this situation implies that ${\mathcal F}$ is actually a dicritical foliation, although the blowing-up $\pi$ could be non-dicritical. In general, it is an open question to know whether, given a logarithmic foliation $\mathcal F$, there is a $\mathbb C$-divisor that ``faithfully'' represents the dicriticalness of the foliation, for instance in terms of blowing-ups. In this paper, we concentrate on the non-dicritical case. \end{remark} \begin{proposition} \label{pro:hipersuperficiesinvariantes} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation, where $\mathcal D$ is non-dicritical, and take a point $p\in \operatorname{Supp}({\mathcal D})$. The only irreducible germs of hypersurface at $p$ invariant for $\mathcal F$ are the irreducible components of the germ at $p$ of the support of $\mathcal D$. \end{proposition} \begin{proof} (See also \cite{Cer-M}). We can assume that $M=({\mathbb C}^n,0)$, $0\ne{\mathcal D}=\sum_{i=1}^s\lambda_iH_i$ and $\mathcal F$ is generated by $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}, $$ where $f_i=0$ are reduced equations of $H_i$, for $i=1,2,\ldots,s$. By Remark \ref{rk:invarianciasoporte}, we already know that each $H_i$ is invariant for $\mathcal F$. Let us suppose now that $S$, given by $g=0$, is another germ of irreducible hypersurface invariant for $\mathcal F$. Up to performing a desingularization of the support of $\mathcal D$ and choosing a point where the strict transform of $S$ intersects the exceptional divisor, we restrict ourselves to the case when the $H_i$ are coordinate hyperplanes. Note that, since $\mathcal D$ is non-dicritical, none of these blowing-ups is dicritical (in view of Proposition \ref{pro:divdicritico}) and hence, at each step, the exceptional divisor is added to the support of the transform of $\mathcal D$. Then, we can assume that $$ \eta=\sum_{i=1}^s\lambda_i\frac{dx_i}{x_i}.
$$ The non-dicriticalness of ${\mathcal D}$ implies in this situation that $\sum_{i=1}^sm_i\lambda_i\ne 0$ for any $0\ne \mathbf{m}\in {\mathbb Z}_{\geq 0}^s$, in view of Lemma \ref{lema:normalcrossings}. By the curve selection lemma, there is a parameterized curve $$ \gamma:t\mapsto (\gamma_i(t))_{i=1}^n $$ contained in $S$ and not contained in the support $\prod_{i=1}^sx_i=0$; in particular, $\gamma_i(t)\ne 0$, for any $i=1,2,\ldots,s$. Let us write $$ \gamma_i(t)=\mu_{i,m_i}t^{m_i}+\mu_{i,m_i+1}t^{m_i+1}+\cdots,\quad \mu_{i,m_i}\ne 0, \quad i=1,2,\ldots,s. $$ Since $S$ is invariant, we have that $\gamma^*\eta=0$. Looking at the residue of $\gamma^*\eta$, we have that $$ \sum_{i=1}^sm_i\lambda_i=0. $$ This is not possible. \end{proof} \begin{corollary} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation, where $\mathcal D$ is non-dicritical. Then ${\mathcal F}$ is also non-dicritical. Moreover, if $\mathcal F$ is ${\mathcal D}'$-logarithmic, then ${\mathcal D}'$ and ${\mathcal D}$ are projectively equivalent $\mathbb C$-divisors. \end{corollary} \begin{proof} Let us desingularize $\operatorname{Supp}({\mathcal D})$ by means of non-dicritical admissible blowing-ups. Note that, by Lemma \ref{lema:unaexplosion}, the above blowing-ups are non-dicritical for $\mathcal F$. Invoking Proposition \ref{prop:estabilidddicriticidad}, we reduce the problem to the case when $\operatorname{Supp}({\mathcal D})$ has normal crossings. Now, if ${\mathcal F}$ were dicritical, we would get a nonnegative resonance in the coefficients of ${\mathcal D}$ and hence ${\mathcal D}$ would be dicritical, by Lemma \ref{lema:normalcrossings}. We can also do the above reduction in order to prove the second part of the statement. Thus, assume that $\operatorname{Supp}({\mathcal D})$ has normal crossings and that the foliation is ${\mathcal D}$-logarithmic and non-dicritical. Now, the support of ${\mathcal D}$ coincides with the union of the invariant hypersurfaces of ${\mathcal F}$ by Proposition \ref{pro:hipersuperficiesinvariantes}; moreover, the coefficients (up to projective equivalence) are given by the residues. Hence ${\mathcal D}'$ is determined, up to projective equivalence, by ${\mathcal D}$. \end{proof} \begin{remark} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation. We know that if $\mathcal D$ is non-dicritical, then the foliation $\mathcal F$ is also non-dicritical. The converse is a natural question that has a positive answer. That is, if $\mathcal F$ is non-dicritical, then $\mathcal D$ is also non-dicritical. This is a consequence of the theorem on existence and non-dicriticalness of the logarithmic models, which we prove in this paper. \end{remark} \section{Generalized Hypersurfaces and Logarithmic Forms} We recall here some facts useful for the sequel, concerning generalized hypersurfaces and the more general case of non-dicritical codimension one singular foliations. For more details on generalized hypersurfaces, the reader can look at \cite{Fer-M}. We end the section by associating logarithmic forms to generalized hypersurfaces, in a way that is stable under blowing-ups. We take the following definition: \begin{definition}[\cite{Can-RV-S}] Given a complex analytic variety $M$, a foliation $\mathcal F$ on $M$ and a point $p\in M$, we say that $\mathcal F$ is {\em complex hyperbolic} at $p$, or that $\mathcal F$ {\em has no hidden saddle-nodes at $p$}, if and only if there is no holomorphic map $\phi:({\mathbb C}^2,0)\rightarrow M$, with $\phi(0)=p$, such that $\phi^*{\mathcal F}$ is a saddle-node.
We say that $\mathcal F$ is a {\em generalized hypersurface at $p$} if, in addition, it is non-dicritical at $p$. We say that $\mathcal F$ is a generalized hypersurface at $M$ when the property holds at each point of $M$. \end{definition} The origin of the terminology {\em generalized curve} is in the paper \cite{Cam-LN-S}, where the authors made an extensive consideration of the condition of being complex hyperbolic, in the two dimensional case. In some cases, the above name is used also for the dicritical situation. We fix the word {\em generalized hypersurface} for denoting both properties: non-dicriticalness and no hidden-saddle-nodes. Of course, in the case of ambient dimension two, the expression {\em generalized curve} also means for us to be non-dicritical and without hidden saddle-nodes. \begin{remark} Some ``ramified'' saddle-nodes have the property of being generalized hypersurfaces. For instance, take the saddle-node given by the meromorphic $1$-form $$ \frac{du}{u}+ u\frac{dv}{v} $$ in dimension two. Let us consider the ramification $u=x^py^q$, $v=y$; we obtain by pull back a differential $1$-form $$ \left(p\frac{dx}{x}+q\frac{dy}{y}\right)+ x^py^q\frac{dy}{y} $$ that defines a generalized curve on $({\mathbb C}^2,0)$; it is an example of Martinet-Ramis resonant case \cite{Mar-R}. Note that it has no holomorphic first integral. The reader can see \cite{Can-C-D} for more details. \end{remark} One of the important features of generalized hypersurfaces is the following result: \begin{proposition}[See \cite{Can,Can-M-RV,Fer-M}] \label{prop:redsinghypgeneralizada} Let $\mathcal F$ be a generalized hypersurface on $({\mathbb C}^n,0)$. There are only finitely many irreducible invariant hypersurfaces of $\mathcal F$, its union $S$ is non empty and any reduction of singularities of $S$ provides a reduction of singularities of $\mathcal F$. In particular, the singular locus of $\mathcal F$ is contained in $S$. \end{proposition} Let us state some other useful results concerning generalized hypersurfaces: \begin{lemma} \label{lema:curvainvariante} Consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$ and take an invariant analytic branch $(\Gamma,0)\subset ({\mathbb C}^n,0)$ not contained in the singular locus of $\mathcal F$. There is a single irreducible hypersurface $H$ invariant for $\mathcal F$ such that $\Gamma\subset H$. \end{lemma} \begin{proof} This is true for any non-dicritical foliation that admits a reduction of singularities, in particular for generalized hypersurfaces, see \cite{Can}. \end{proof} \begin{proposition} \label{prop:pullbackgeneralizedcurve} Consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$ and let $S$ be the union of the invariant hypersurfaces of $\mathcal F$. For any $S$-transverse holomorphic map $ \phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0) $, the pull-back $\phi^*{\mathcal F}$ is a generalized curve. \end{proposition} \begin{proof} Let $\omega$ be a reduced holomorphic generator of $\mathcal F$. We have first to show that $\phi^*\omega\ne 0$. Assume that $\phi^*\omega=0$, since $\phi$ is $S$-transverse, there is an irreducible branch $(\Gamma,0)\subset ({\mathbb C}^2,0)$ such that $\phi(\Gamma)\not\subset S$. In this situation, the curve $\phi(\Gamma)$ is an invariant curve of $\mathcal F$ not contained in $S$; this contradicts Lemma \ref{lema:curvainvariante}. Then, we have that $\phi^*{\mathcal F}$ exists. 
More precisely, the invariant curves for $\phi^*{\mathcal F}$ are precisely the irreducible components of $\phi^{-1}(S)$. In particular, $\phi^*{\mathcal F}$ has only finitely many invariant branches and then it is non-dicritical. Finally, let us find a contradiction if $\phi^*{\mathcal F}$ is not complex hyperbolic. Take $$ \varphi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0) $$ such that $\varphi^*(\phi^*{\mathcal F})$ is a saddle-node. We have that $\operatorname{Im}(\varphi)\not\subset \phi^{-1}(S)$ and hence $\phi\circ\varphi$ is $S$-transverse. We conclude that $(\phi\circ\varphi)^*{\mathcal F}$ exists and $$ (\phi\circ\varphi)^*{\mathcal F}=\varphi^*(\phi^*{\mathcal F}) $$ is a saddle-node, a contradiction, since $\mathcal F$ is complex hyperbolic. \end{proof} Next, we show the stability of the property of being a generalized hypersurface under admissible blowing-ups. \begin{proposition} \label{prop:blowingupgeneralizedhip} Consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$ and let $Y$ be a non-singular subvariety of $({\mathbb C}^n,0)$ contained in the union $S$ of the invariant hypersurfaces of $\mathcal F$. Consider the admissible blowing-up $$ \pi:((M,\pi^{-1}(0)), {\mathcal F}')\rightarrow (({\mathbb C}^n,0),{\mathcal F}) $$ with center $Y$. Then $\pi$ is non-dicritical and ${\mathcal F}'$ is a generalized hypersurface. \end{proposition} \begin{proof} By Proposition \ref{prop:dicriticidaddeunaexplosion}, we see that the blowing-up $\pi$ is non-dicritical, since $\mathcal F$ is a non-dicritical foliation. Moreover, the transformed foliation ${\mathcal F}'$ is non-dicritical in view of Proposition \ref{prop:estabilidddicriticidad}. Let us show that ${\mathcal F}'$ is complex hyperbolic. Assume by contradiction that it is not and thus there is a point $p\in \pi^{-1}(0)$ and a morphism $$ \phi: ({\mathbb C}^2,0)\rightarrow (M,p) $$ such that $\phi^*{\mathcal F}'$ is a saddle-node. Since $\pi$ is non-dicritical, we have that the exceptional divisor $E=\pi^{-1}(Y)$ is invariant and hence the image of $\phi$ is not contained in $E$; this implies that $(\pi\circ\phi)^*{\mathcal F}$ exists and, in view of Remark \ref{rk:pullbackdicriticao}, we have $$ (\pi\circ\phi)^*{\mathcal F}=\phi^*(\pi^*{\mathcal F})=\phi^*{\mathcal F}'. $$ It is a saddle-node, a contradiction. \end{proof} \subsection{Transversality} We consider here the concepts of generic multiplicity, equimultiplicity and Mattei-Moussu transversality, which we need in the proof of the existence of logarithmic models for generalized hypersurfaces. Let $Y$ be a non-singular irreducible subvariety of $({\mathbb C}^n,0)$ and consider a holomorphic $1$-form $\omega$ on $({\mathbb C}^n,0)$. The {\em generic multiplicity $\nu_Y(\omega)$ of $\omega$ along $Y$} is the minimum of the generic multiplicity of the coefficients of $\omega$ along $Y$. When $\omega$ is a reduced (no common factors in the coefficients) generator of a codimension one singular foliation $\mathcal F$ on $({\mathbb C}^n,0)$, we say that $\nu_Y(\omega)$ is the generic multiplicity of $\mathcal F$ along $Y$ and we denote $\nu_Y({\mathcal F})=\nu_Y(\omega)$. Let $S$ be a hypersurface of $({\mathbb C}^n,0)$ with reduced equation $f=0$; recall that $\nu_Y(S)=\nu_Y(f)$. We say that $Y$ is {\em equimultiple at the origin} for $\omega$, $\mathcal F$ or $S$, if we respectively have that $$ \nu_Y(\omega)=\nu_0(\omega),\quad \nu_Y({\mathcal F})=\nu_0(\mathcal F),\quad \nu_Y(S)=\nu_0(S).
$$ By taking appropriate representatives of the germs, we know that the points of equimultiplicity define a dense open set in $Y$. \begin{remark} If $S$ is given by the reduced equation $f=0$, where $f$ is reduced, and we consider the foliation ${\mathcal F}=(df=0)$, we have $\nu_Y({\mathcal F})=\nu_Y(S)-1$. \end{remark} Let us recall that the {\em singular locus $\operatorname{Sing}({\mathcal F})$} of a foliation $\mathcal F$ coincides locally with the singular locus of a holomorphic generator of $\mathcal F$ without common factor in its coefficients. In particular, we have that $\operatorname{Sing}({\mathcal F})\subset M$ is an analytic subset of codimension at least two. Take a holomorphic germ of $1$-form $\omega$ on $({\mathbb C}^n,0)$ such that $\operatorname{codim}(\operatorname{Sing}(\omega))\geq 2$. Following \cite{Mat-Mou}, we say that a closed immersion $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ is a {\em Mattei-Moussu transversal} for $\omega$ when the following properties hold $$ \operatorname{Sing}(\phi^*\omega)=\phi^{-1}(\operatorname{Sing}(\mathcal F))\subset \{0\},\quad \nu_0(\phi^*\omega)=\nu_0(\omega). $$ If $\mathcal F$ is a codimension one singular foliation on $({\mathbb C}^n,0)$ we say that $\phi$ is a {\em Mattei-Moussu transversal} for $\mathcal F$ when it is a Mattei-Moussu transversal for a holomorphic generator of $\mathcal F$. Let $S$ be a hypersurface given by a reduced equation $f=0$; we say that $\phi$ is a {\em Mattei-Moussu transversal} for $S$ when it is a Mattei-Moussu transversal for the foliation $df=0$. In this paper, we consider the following version of the Transversality Theorem of Mattei-Moussu: \begin{theorem} Let $\mathcal F$ be a non-dicritical holomorphic foliation on $({\mathbb C}^n,0)$. There is a Zariski nonempty open set $W$ in the space of linear two-planes such that any closed immersion $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ with tangent plane in $W$ is a Mattei-Moussu transversal for $\mathcal F$. \end{theorem} \begin{proof} See \cite{Mat-Mou} and \cite{Can3,Can2, Can-M}. \end{proof} We have the following consequence: \begin{proposition} \label{prop:multiplicidadgenericagh} Let $\mathcal F$ be a generalized hypersurface of $({\mathbb C}^n,0)$ and denote by $S$ the union of the invariant hypersurfaces of $\mathcal F$. Consider a non-singular subvariety $(Y,0)$ of $({\mathbb C}^n,0)$ with $Y\subset S$. Then $\nu_Y({\mathcal F})=\nu_Y(S)-1$. \end{proposition} \begin{proof} We first reduce the problem to the case $Y=\{0\}$ as follows. Taking appropriate representatives of the germs, there is a dense open subset $U$ of $Y$ such that both $S$ and $\mathcal F$ are equimultiple along $Y$ at the points in $U$, that is, we have $$ \nu_p(S)=\nu_Y(S),\quad \nu_p({\mathcal F})=\nu_Y({\mathcal F}), $$ for any $p\in U$. Thus, working at a point of equimultiplicity, we can assume that $Y=\{0\}$. Now, we apply Mattei-Moussu Transversality Theorem to get a closed immersion $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $$ \nu_0(\phi^*{\mathcal F})=\nu_0({\mathcal F}); \quad \nu_0(\phi^{-1}(S))=\nu_0(S). $$ Since $\phi^*{\mathcal F}$ is a generalized curve (see Proposition \ref{prop:pullbackgeneralizedcurve}) we reduce the problem to the two dimensional case. In this case, the result is known from \cite{Cam-LN-S}. \end{proof} \subsection{Logarithmic Forms Fully Associated to Generalized Hypersurfaces} Let us consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$. 
We know that there exists at least a germ of invariant hypersurface and that there are finitely many of them $H_i$, $i=1,2,\ldots, s$. Let us select a germ of reduced function $$ f=f_1f_2\cdots f_s\in {\mathcal O}_{{\mathbb C}^n,0} $$ such that $f_i=0$ is a local equation for $H_i$, for $i=1,2,\ldots,s$. Thus $f=0$ gives a reduced equation of the union $S$ of the invariant hypersurfaces of $\mathcal F$. Take a local holomorphic generator $\omega$ of $\mathcal F$ without common factors in its coefficients. The meromorphic $1$-form $\eta=\omega/f$ also defines $\mathcal F$. In view of Remark \ref{rk:logaritmicgenerator}, we know that $\eta$ is a logarithmic differential $1$-form, although it is not necessarily closed. Such an integrable logarithmic $1$-form $\eta=\omega/f$ will be called {\em fully associated to $\mathcal F$}. \begin{remark} Assume that $\eta=\omega/f$ and $\eta'=\omega'/f'$ are two integrable logarithmic $1$-forms fully associated to $\mathcal F$. There are units $U,V\in {\mathcal O}_{{\mathbb C}^n,0}$, such that $\omega'=U\omega$ and $f'=Vf$ and hence there is a unit $W=U/V$ such that $\eta'=W\eta$. \end{remark} \begin{proposition} \label{prop:pullbackoflogaritmicformsfullyassociated} Consider an integrable logarithmic $1$-form $\eta$ fully associated to a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$. Take a non-singular irreducible subvariety $Y$ of $({\mathbb C}^n,0)$ invariant for $\mathcal F$ and let us perform the blowing-up centered at $Y$ $$ \pi:((M,\pi^{-1}(0)),{\mathcal F}')\rightarrow (({\mathbb C}^n,0),{\mathcal F}). $$ The pullback $\pi^*\eta$ is an integrable logarithmic $1$-form fully associated to ${\mathcal F}'$. \end{proposition} \begin{proof} (See \cite{Cer} for the two dimensional case). Let $\omega$ be a holomorphic generator of $\mathcal F$ without common factors in its coefficients, take a reduced equation $f=0$ of the union $S$ of the invariant hypersurfaces of $\mathcal F$ and put $\eta=\omega/f$. By Proposition \ref{prop:multiplicidadgenericagh}, we know that $$ \nu_Y(\omega)=\nu_Y(f)-1 . $$ Put $m=\nu_Y(\omega)$. Working locally at a point $p$ of the exceptional divisor $\pi^{-1}(Y)$, where $x'=0$ is a reduced equation of $\pi^{-1}(Y)$, we know that $f\circ\pi=x'^{m+1}f'$, where $f'=0$ is a reduced local equation of the strict transform of $S$. By Proposition \ref{prop:dicriticidaddeunaexplosion}, we know that $\pi$ is a non-dicritical blowing-up for $\mathcal F$. Hence $x'f'=0$ is a local reduced equation of the union of the invariant hypersurfaces of ${\mathcal F}'$ at the point $p$. On the other hand, ${\mathcal F}'$ is generated by $\pi^*\omega$ and $$ \pi^*\omega=x'^{m}\omega', $$ where $\omega'$ has no common factors in its coefficients; for this, we use the fact that $\pi$ is a non dicritical blowing-up and thus we can divide $\pi^*\omega$ exactly by ${x'}^m$. Hence, we have that $$ \pi^*\eta=\frac{\pi^*\omega}{f\circ\pi}=\omega'/(x'f') $$ is an integrable logarithmic $1$-form fully associated to $\pi^*{\mathcal F}$. \end{proof} \section{Divisorial Models in Dimension Two} Consider a foliation $\mathcal F$ on $({\mathbb C}^2,0)$. From the work of A. Seidenberg \cite{Sei}, we know that there is an essentially unique reduction of singularities of $\mathcal F$. When there are no saddle-nodes after reduction of singularities and all the irreducible components of the exceptional divisor are invariant ones, we say that $\mathcal F$ is a {\em generalized curve} \cite{Cam-LN-S}. 
For a generalized curve, the Camacho-Sad indices at the singular points, after reduction of singularities, are all nonzero and they determine locally the linear part of the holonomy. This motivates the quest for a foliation with linear holonomy, having the same reduction of singularities and whose linear part of the holonomy coincides with that of $\mathcal F$ after reduction of singularities. Such foliations are the logarithmic ones and hence we look for a ``logarithmic model'' of a given generalized curve. This problem has been solved in dimension two by N. Corral in \cite{Cor}. In this section we recover Corral's results in the language of $\mathbb C$-divisors and the indices with respect to singular invariant curves. \subsection{Indices for $\mathbb C$-divisors in dimension two} We develop here a notion of index for ${\mathbb C}$-divisors, directly inspired by the behavior of the Camacho-Sad index in the case of holomorphic foliations in dimension two. Let $M$ be a non-singular complex variety of dimension two. Take a $\mathbb C$-divisor \begin{equation} \label{eq:escrituraparaindice} {\mathcal D}=\mu \operatorname{Div}(T)+\sum_{i=2}^s\lambda_i\operatorname{Div}(H_i) \end{equation} on it, where $T\subset M$ and $H_i\subset M$ are curves in $M$, not necessarily irreducible, such that none of the irreducible components of $T$ is an irreducible component of an $H_i$, for $i=2,3,\ldots,s$. We assume that the support of $\mathcal D$ contains $T$, that is, $\mu\ne 0$. Let us take a point $p\in T$. We define the {\em Camacho-Sad index $I_p({\mathcal D},T)$ at $p$ of $\mathcal D$ with respect to $T$} by the expression \begin{equation} \label{eq:indice} I_p({\mathcal D},T)=-\frac{\sum_{i=2}^s\lambda_i (T,H_i)_p}{\mu}, \end{equation} where $(T,H_i)_p$ stands for the intersection multiplicity of $T$ and $H_i$ at $p$. Let us note that if ${\mathcal D}_p$ denotes the germ of $\mathcal D$ at $p$ we have that $$ I_p({\mathcal D}_p,T)=I_p({\mathcal D},T). $$ Let us remark that the germ of $T$ at $p$ may not be irreducible, even when we choose $T$ to be irreducible as a curve in $M$. \begin{remark} It is possible to extend the above definition in order to define the index of $\mathcal D$ with respect to any union of curves in the support passing through $p$, by using the formula $$ I_p({\mathcal D}, T_1\cup T_2)= I_p({\mathcal D}, T_1)+I_p({\mathcal D}, T_2)+2(T_1,T_2)_p. $$ In this way, we could recover the complete definition of the Camacho-Sad index with respect to a not necessarily irreducible invariant curve, see \cite{Bru}. In any case, we only need the definition for the case where the coefficients of all the irreducible components of $T$ in $\mathcal D$ are equal, as defined above. \end{remark} \begin{proposition} Let $\mathcal D$ be a $\mathbb C$-divisor on a non-singular two-dimensional complex analytic variety $M$, that we write as ${\mathcal D}=\mu \operatorname{Div}(T)+\sum_{i=2}^s\lambda_i\operatorname{Div}(H_i)$, where $\mu\ne 0$ and $T$ and $H_i$ have no common irreducible components, for any $i=2,3,\ldots,s$. Let $\pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D})$ be the blowing-up centered at a point $p\in T$. Denote by $T'$ the strict transform of $T$ by $\pi$ and by $E=\pi^{-1}(p)$ the exceptional divisor of $\pi$. The following equality holds: \begin{equation} \label{eq:indiceexplosion1} \sum_{p'\in T'\cap E}I_{p'}({\mathcal D}',T')=I_p({\mathcal D},T)-\nu_p(T)^2. \end{equation} Moreover, if $\pi$ is non-dicritical, we have $\sum_{p'\in E}I_{p'}({\mathcal D}',E)=-1$.
\end{proposition} \begin{proof} If $\alpha=\mu\nu_p(T)+\sum_{i=2}^s\lambda_i\nu_p(H_i)$, we have ${\mathcal D}'=\alpha E+\mu T'+\sum_{i=2}^s\lambda_iH'_i$, where we denote by $H'_i$ the strict transforms of the $H_i$ by $\pi$. Recall Noether's formulas: \begin{equation*} (H_i,T)_p=\sum_{p'\in E\cap T'}(H'_i,T')_{p'} +\nu_p(T)\nu_p(H_i);\quad \nu_p(T)=\sum_{p'\in E\cap T'}(E,T')_{p'}. \end{equation*} Let us show that $ \mu I_p({\mathcal D}, T)=\mu\nu_p(T)^2+\sum_{p'\in E}\mu I_{p'}({\mathcal D}',T') $, in order to verify the identity in Equation \ref{eq:indiceexplosion1}: \begin{eqnarray*} \mu I_p({\mathcal D},T)&=&-\sum_{i=2}^s\lambda_i(T,H_i)_p= -\sum_{i=2}^s\lambda_i\nu_p(T)\nu_p(H_i)-\sum_{p'\in E}\sum_{i=2}^s\lambda_i(T',H'_i)_{p'}= \\ &=&\mu\nu_p(T)^2-\nu_p(T)\alpha-\sum_{p'\in E}\sum_{i=2}^s\lambda_i(T',H'_i)_{p'}= \\ &=&\mu\nu_p(T)^2-\sum_{p'\in E}\left(\alpha(E,T')_{p'}+\sum_{i=2}^s\lambda_i(T',H'_i)_{p'}\right)=\\ &=&\mu\nu_p(T)^2+\sum_{p'\in E}\mu I_{p'}({\mathcal D}',T'). \end{eqnarray*} Assume now that $\pi$ is non-dicritical, hence $\alpha\ne 0$. We have $$ -\alpha=-\sum_{p'\in E}\left(\mu(E,T')_{p'}+\sum_{i=2}^s\lambda_i(E,H'_i)_{p'} \right)=\alpha\sum_{p'\in E}I_{p'}({\mathcal D}', E). $$ This ends the proof. \end{proof} \begin{corollary} If $T$ is non-singular at $p$, there is only one point $p'\in E\cap T'$ and we have that $ I_{p'}({\mathcal D}',T')=I_p({\mathcal D},T)-1 $. \end{corollary} \subsection{Camacho-Sad indices} Let us recall here the notion of generalized Camacho-Sad index introduced by A. Lins Neto in \cite{LNet}, in the spirit of the residue theory of Saito \cite{Sai}. A good presentation of these results may be found in Brunella \cite{Bru, Bru2} and \cite{Lem-S,Suw}. \begin{definition}[\cite{LNet}] Let $\mathcal F$ be a germ of foliation on $({\mathbb C}^2,0)$ generated by a holomorphic $1$-form $\omega$ without common factors in its coefficients. Consider an invariant branch $\Gamma$ of $\mathcal F$ given by an irreducible equation $f=0$. There is an expression $$ g\omega=hdf+f\alpha, $$ where $\alpha$ is a holomorphic $1$-form and $f$ does not divide $g$. The {\em Camacho-Sad index $\operatorname{CS}_0({\mathcal F},\Gamma)$ of $\mathcal F$ with respect to $\Gamma$} is defined by $$ \operatorname{CS}_0({\mathcal F},\Gamma)=\frac{-1}{2\pi i}\int_{\gamma(f)}\frac{\alpha}{h}, $$ where $\gamma(f)$ is the homological class of the image of the standard loop $z\mapsto \exp(2\pi i z)$, $0\leq z\leq 1$, under a Puiseux parametrization of $\Gamma$. \end{definition} \begin{remark} \label{rk:indicessingsimples} If the origin is a simple point that is not a saddle-node, we can take $\Gamma=(y=0)$ and $\omega=(\lambda+\cdots)ydx+(\mu+\cdots)xdy$, with $\lambda\mu \ne 0$. In this case we see that \begin{equation}\label{eq:indicereducido} \operatorname{CS}_0({\mathcal F},\Gamma)=-\lambda/\mu. \end{equation} \end{remark} We are mainly interested in the behavior of the above index under non-dicritical blowing-ups. Let us summarize those results in the following proposition: \begin{proposition} Let $\mathcal F$ be a germ of foliation on $({\mathbb C}^2,0)$ and let $$ \pi:( (M,E), {\mathcal F}')\rightarrow (({\mathbb C}^2,0),{\mathcal F}),\quad E=\pi^{-1}(0), $$ be the blowing-up of the origin of ${\mathbb C}^2$.
The following properties hold: \begin{enumerate} \item[a)] For any invariant branch $(\Gamma,0)$ we have that $$ \operatorname{CS}_{p'}({\mathcal F}',\Gamma')= \operatorname{CS}_{0}({\mathcal F},\Gamma)-\nu_0(\Gamma)^2, $$ where $p'$ is the only point in $E$ belonging to the strict transform $\Gamma'$ of $\Gamma$. \item[b)] If $\pi$ is non-dicritical, then $\sum_{q\in E}\operatorname{CS}_q({\mathcal F}',E)=-1$. \end{enumerate} \end{proposition} \begin{proof} See Brunella \cite{Bru,Bru2}. \end{proof} As a consequence of the above results we obtain the following proposition: \begin{proposition} Let $\mathcal L$ be a ${\mathcal D}$-logarithmic foliation on $({\mathbb C}^2,0)$, where $\mathcal D$ is a non-dicritical ${\mathbb C}$-divisor. Then $$ \operatorname{CS}_0({\mathcal L},\Gamma)=I_0({\mathcal D},\Gamma), $$ for any irreducible invariant branch $\Gamma$ of $\mathcal L$. \end{proposition} \begin{proof} The behavior of the two indices is the same under a sequence of blowing-ups that desingularizes $\mathcal L$ and $\Gamma$ along $\Gamma$. When we have a simple point, the indices coincide by Equation \ref{eq:indicereducido}. This equality descends through the sequence of blowing-ups and we are done. \end{proof} \subsection{Existence of Divisorial Models in Dimension Two} In this section we present the definitions and main properties of logarithmic models in dimension two, in terms of ${\mathbb C}$-divisors. The existence of logarithmic models for generalized curves in dimension two has been proved in \cite{Can-Co, Cor}, without an extensive use of $\mathbb C$-divisors. The particularization to ambient dimension two of the concept of generalized hypersurface is that of {\em generalized curve}. To avoid possible confusion with other uses of this terminology in the literature, we note that in this paper a generalized curve is given by the following definition: \begin{definition} \label{def:curvageneralizada} A foliation ${\mathcal F}$ on $({\mathbb C}^2,0)$ is a {\em generalized curve} if and only if it is non-dicritical and there are no saddle-nodes in a reduction of singularities of $\mathcal F$. \end{definition} \begin{remark} If there are no saddle-nodes in a reduction of singularities, we find no saddle-nodes after any finite sequence of blowing-ups. In particular, the definition is independent of the choice of a reduction of singularities (note that in dimension two we can speak of a {\em minimal reduction of singularities}). For more details, see \cite{Can-C-D}. \end{remark} \begin{definition} \label{def:modelodimensiondos} Consider a generalized curve ${\mathcal F}$ and let $\mathcal D$ be a ${\mathbb C}$-divisor on a two-dimensional non-singular complex analytic variety $M$. We say that $\mathcal D$ is a {\em divisorial model for ${\mathcal F}$ at a point $p$ in $M$} if the following conditions hold: \begin{enumerate} \item The support $\operatorname{Supp}({\mathcal D}_p)$ of the germ ${\mathcal D}_p$ of $\mathcal D$ at $p$ is the union of the germs at $p$ of the invariant branches of $\mathcal F$. \item The indices of ${\mathcal D}_p$ with respect to the irreducible branches of $\operatorname{Supp}({\mathcal D}_p)$ coincide with the Camacho-Sad indices of $\mathcal F$. \end{enumerate} We say that $\mathcal D$ is a {\em divisorial model for $\mathcal F$} if it fulfils the above conditions at every point $p\in M$. In the case of a germ $(M,K)$ we require the property at each point of the germification set $K$.
\end{definition} \begin{remark} Let us note that if $\mathcal D$ is a divisorial model for a generalized curve $\mathcal F$ on $(M,K)$ then the ``germification set'' $K$ necessarily satisfies $K\subset \operatorname{Supp}({\mathcal D})$. Indeed, if there is a point $p\in K\setminus \operatorname{Supp}({\mathcal D})$, we know that there is at least one invariant branch at $p$ (this is a general fact that does not require the generalized curve hypothesis, see \cite{Cam-S}, also \cite{Ort-R-V}) that obviously is not contained in the support of the divisor. \end{remark} \begin{example} \label{ex:integral primera} The first example is a foliation $\mathcal F$ of $({\mathbb C}^2,0)$ with a holomorphic first integral. That is, we take a germ of function $f=f_1^{r_1}f_2^{r_2}\cdots f_s^{r_s}$, where the $f_i$ are irreducible and pairwise distinct, and the foliation given by $df/f$. The divisorial model is $$ \mathcal D=r_1\operatorname{Div}(f_1)+r_2\operatorname{Div}(f_2)+\cdots+ r_s\operatorname{Div}(f_s). $$ The verification of this statement can be done, first in the normal crossings situation and then, in view of Corollary \ref{cor:modlogsucblowing}, by reduction of singularities. \end{example} Our objective in this subsection is to give a proof of the following result, in terms of $\mathbb C$-divisors: \begin{theorem} \label{th:existenciayunicidadendimensiondos} Given a generalized curve ${\mathcal F}$ on $({\mathbb C}^2,0)$, there is a divisorial model $\mathcal D$ for $\mathcal F$. Moreover, if $\tilde {\mathcal D}$ is another divisorial model for $\mathcal F$, then $\tilde {\mathcal D}$ is projectively equivalent to $\mathcal D$; conversely, any ${\mathbb C}$-divisor projectively equivalent to $\mathcal D$ is also a divisorial model for $\mathcal F$. \end{theorem} Let us work in matrix terms. First of all, we recall a basic fact of linear algebra: \begin{lemma} \label{lema:matrizsimetrica} Let $A=(\alpha_{ij})$ be an $s\times s$ symmetric matrix of rank $s-1$, having coefficients in a field $k$. Assume that there is a vector $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)\in k^s$ such that $\lambda A= 0$ and $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$. Consider the diagonal minors $$ \Delta_\ell=\det B_\ell; \quad B_\ell=(\alpha_{ij})_{i,j\in \{1,2,\ldots,s\}\setminus \{\ell\}}. $$ Then $\Delta_\ell\ne 0$, for all $\ell=1,2,\ldots,s$. \end{lemma} \begin{proof} Up to reordering, we may assume that $\ell=1$. Let $F_i$ be the rows and $C_i$ the columns of $A$. Since $\lambda A=0$ and $A$ is symmetric, we know that $$ F_1=(-1/\lambda_1)\sum_{i=2}^s\lambda_iF_i,\quad C_1=(-1/\lambda_1)\sum_{i=2}^s\lambda_iC_i. $$ Let $A'$ be the matrix obtained by replacing the first row of $A$ with $F_1$ plus the linear combination $(1/\lambda_1)\sum_{i=2}^s\lambda_iF_i$ and let $A''$ be obtained from $A'$ by replacing the first column $C'_1$ of $A'$ with $C'_1$ plus the linear combination $(1/\lambda_1)\sum_{i=2}^s\lambda_iC'_i$, where the $C'_i$ are the columns of $A'$. We have that $\operatorname{rank}(A'')=s-1$ and $$ A''= \left( \begin{array}{c|c} 0& 0\\ \hline 0&B_1 \end{array} \right). $$ We conclude that $\Delta_1\ne 0$. \end{proof} Denote by $H=\cup_{i=1}^sH_i$ the union of invariant branches of $\mathcal F$, where we fix an ordering $H_1,H_2,\ldots,H_s$. We define the $s\times s$ symmetric matrix $A_0({\mathcal F})=(\alpha_{ij})$ by $$ \alpha_{ij}= \left\{ \begin{array}{ccc} \operatorname{CS}_0({\mathcal F},H_i)&\text{ if }& i=j,\\ (H_i,H_j)_0 &\text{ if }& i\ne j. \end{array} \right.
$$ Let us denote $B_0({\mathcal F})=(\alpha_{ij})_{2\leq i,j\leq s}$, that is, we have \begin{equation} \label{eq:matrizacero} A_0({\mathcal F})= \left( \begin{array}{c|ccc} \operatorname{CS}_0({\mathcal F},H_1)&(H_1,H_2)_0&\cdots&(H_1,H_s)_0\\ \hline (H_2,H_1)_0&&&\\ \vdots&&B_0({\mathcal F})&\\ (H_s,H_1)_0&&& \end{array} \right). \end{equation} \begin{lemma} \label{rk:matriciallogaritmicmodel} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$ and consider a $\mathbb C$-divisor of the form ${\mathcal D}=\sum_{i=1}^s\lambda_i H_i$. The following statements are equivalent: \begin{enumerate} \item The divisor $\mathcal D$ is a divisorial model for $\mathcal F$. \item We have that $\lambda A_0({\mathcal F})=0$ and $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$. \end{enumerate} \end{lemma} \begin{proof} Assume that $\mathcal D$ is a divisorial model for $\mathcal F$. We have that $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$, since the support of $\mathcal D$ is the union $H=\cup_{i=1}^sH_i$ of the invariant curves of $\mathcal F$. Moreover, the indices of $\mathcal D$ coincide with the Camacho-Sad indices of $\mathcal F$. That is, for any $H_i$ we have that $\operatorname{CS}_0({\mathcal F},H_i)=I_0({\mathcal D},H_i)$; noting that $$ I_0({\mathcal D},H_i)=\frac{-\sum_{j\ne i}\lambda_j(H_i,H_j)_0}{\lambda_i}, $$ we conclude that $\lambda A_0({\mathcal F})=0$. Conversely, let us assume that $\lambda A_0({\mathcal F})=0$ and $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$. Then, the support of ${\mathcal D}$ is equal to $H$. Moreover, the fact that $\lambda A_0({\mathcal F})=0$ implies that $$ \operatorname{CS}_0({\mathcal F},H_i)=\frac{-\sum_{j\ne i}\lambda_j(H_i,H_j)_0}{\lambda_i}= I_0({\mathcal D},H_i),\quad i=1,2,\ldots,s, $$ and we are done. \end{proof} \begin{lemma} \label{lema:matrices} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$. Then, we have: \begin{enumerate} \item The rank $\operatorname{rk}(A_0({\mathcal F}))$ of $A_0({\mathcal F})$ is equal to $s-1$. \item The determinant $\det B_0({\mathcal F})$ of $B_0({\mathcal F})$ is nonzero. \item There is a vector $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)$ such that $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$ and $\lambda A_0({\mathcal F})=0$. \end{enumerate} \end{lemma} \begin{proof} We work by induction on the length of a reduction of singularities of $H$. If this length is zero, either $H$ is non-singular or $H=H_1\cup H_2$ is the union of two transverse non-singular branches. When $H$ is non-singular, we have that $\mathcal F$ is non-singular too, since it is a generalized curve; then we have $\operatorname{CS}_0({\mathcal F},H)=0$ and we are done. If $H=H_1\cup H_2$ is the union of two transverse non-singular branches, the origin is a simple point which is not a saddle-node. In this case we have $$ \operatorname{CS}_0({\mathcal F},H_1)\operatorname{CS}_0({\mathcal F},H_2)=1. $$ We are done since $$ A_0({\mathcal F})= \left( \begin{array}{cc} \operatorname{CS}_0({\mathcal F},H_1)&1\\ 1&1/\operatorname{CS}_0({\mathcal F},H_1) \end{array} \right). $$ In order to prove the induction step, let us perform the blowing-up centered at the origin $$ \pi:(M,E)\rightarrow ({\mathbb C}^2,0);\quad E=\pi^{-1}(0). $$ Denote by $p_1,p_2, \ldots,p_t$ the points of intersection between $E$ and the strict transform $H'$ of $H$. Up to a reordering in $H_2,H_3,\ldots,H_s$, we may assume that $p_j\in H'_i$ if and only if $i\in I_j$, where \begin{equation} \label{eq:indicesjota} I_j=\{n_{j-1}+1,n_{j-1}+2,\ldots,n_j\};\quad n_0=0, n_t=s.
\end{equation} Put $\nu_i=\nu_0(H_i)$, for $i=1,2,\ldots,s$. Note that $\nu_i=(E,H'_i)_{p_j}$ when $i\in I_j$. Denote by $\underline{\nu}$ the vector $\underline{\nu}=(\nu^{(1)},\nu^{(2)},\ldots,\nu^{(t)})$, where $$ \nu^{(j)}=(\nu_{n_{j-1}+1}, \nu_{n_{j-1}+2},\ldots,\nu_{n_{j}}),\quad j=1,2,\ldots,t. $$ The matrices $A_{p_j}({\mathcal F}')$ are given by $$ A_{p_j}({\mathcal F}')= \left( \begin{array}{c|c} \operatorname{CS}_{p_j}({\mathcal F}',E)&\nu^{(j)}\\ \hline (\nu^{(j)})^t&B_{p_j}({\mathcal F}') \end{array} \right), $$ where $B_{p_j}({\mathcal F}')$ is the matrix $$ \left( \begin{array}{cccc} \operatorname{CS}_{p_j}({\mathcal F},H'_{n_{j-1}+1})&(H'_{n_{j-1}+1},H'_{n_{j-1}+2})_{p_j}&\cdots&(H'_{n_{j-1}+1},H'_{n_{j}})_{p_j}\\ (H'_{n_{j-1}+2},H'_{n_{j-1}+1})_{p_j}&\operatorname{CS}_{p_j}({\mathcal F},H'_{n_{j-1}+2})&\cdots&(H'_{n_{j-1}+2},H'_{n_{j}})_{p_j}\\ \vdots&\vdots&&\vdots\\ (H'_{n_{j}},H'_{n_{j-1}+1})_{p_j}&(H'_{n_{j}},H'_{n_{j-1}+2})_{p_j}&\cdots&\operatorname{CS}_{p_j}({\mathcal F},H'_{n_{j}}) \end{array} \right). $$ We can apply the induction hypothesis at the points $p_j$. Thus, for any $j=1,2,\ldots,t$ we have: \begin{enumerate} \item $\det B_{p_j}({\mathcal F}')\ne 0$. \item There are vectors $ \lambda^{(j)}=(\lambda_{n_{j-1}+1}, \lambda_{n_{j-1}+2}, \ldots, \lambda_{n_j} ), $ with nonzero entries such that \begin{equation} \label{eq:lambdaapj} (1,\lambda^{(j)})A_{p_j}({\mathcal F}')=0. \end{equation} \end{enumerate} Now, let us define the matrix $A'$ by $$ A'= \left( \begin{array}{c|c|c|c|c} -1&\nu^{(1)}&\nu^{(2)}&\cdots&\nu^{(t)}\\ \hline ({\nu^{(1)}})^t&B_{p_1}({\mathcal F}')&0&\cdots&0\\ \hline ({\nu^{(2)}})^t&0&B_{p_2}({\mathcal F}')&\cdots&0\\ \hline \vdots&\vdots&\vdots&\cdots&\vdots\\ \hline ({\nu^{(t)}})^t&0&0&\cdots&B_{p_t}({\mathcal F}') \end{array} \right) = \left( \begin{array}{c|c} -1&\nu\\ \hline (\nu)^t& B' \end{array} \right). $$ Denote $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)= (\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(t)})$. Let us show that \begin{equation} \label{eq:lambdaaprima} (1,\lambda)A'=0. \end{equation} Recall that $(1,\lambda^{(j)})A_{p_j}({\mathcal F}')=0$. Looking at the first column of $(1,\lambda^{(j)})A_{p_j}({\mathcal F}')$, we have that $$ \operatorname{CS}_{p_j}({\mathcal F}',E)+\sum_{i=n_{j-1}+1}^{n_j}\lambda_i \nu_i=0, \quad j=1,2,\ldots,t. $$ Noting that $\sum_{j=1}^t\operatorname{CS}_{p_j}({\mathcal F}',E)=-1$, we conclude that \begin{equation} \label{eq:lambdamultiplicidadesmenosuno} -1+\sum_{i=1}^s\lambda_i \nu_i= \sum_{j=1}^t\left(\operatorname{CS}_{p_j}({\mathcal F}',E)+\sum_{i=n_{j-1}+1}^{n_j}\lambda_i \nu_i\right)=0. \end{equation} Hence, the first column of $(1,\lambda)A'$ is zero. The $(1+i)$-th column of $(1,\lambda)A'$, where $i=\ell+n_{j-1}\in I_j$ coincides with the $(1+\ell)$-th column of $ (1,\lambda^{(j)})A_{p_j}({\mathcal F}')$. This shows that $(1,\lambda)A'=0$. Note that $\det B'\ne 0$, since $\det B_{p_j}({\mathcal F}')\ne 0$ for any $j=1,2,\ldots,t$. On the other hand, $(1,\lambda)A'=0$ implies that \begin{equation} \label{eq:lambdabprima} \nu+\lambda B'=0. \end{equation} Moreover, recall the equalities \begin{eqnarray*} \operatorname{CS}_0({\mathcal F},H_i)&=& \operatorname{CS}_{p_j}({\mathcal F}',H'_i)+\nu_i^2,\quad i\in I_j\\ (H_i,H_\ell)_0&=& \left\{ \begin{array}{ccc} \nu_i\nu_\ell+(H'_i,H'_\ell)_{p_j}&\mbox{ if }& i,\ell\in I_j. \\ \nu_i\nu_\ell&\mbox{ if }& i\in I_j,\ell\notin I_j. \end{array} \right. 
\end{eqnarray*} Then, we have $$ A_0({\mathcal F})= B'+\operatorname{Diag}(\nu_1,\nu_2,\ldots,\nu_s)N, $$ where $N$ is the matrix that has all the rows equal to $\nu$. We conclude that $$\operatorname{rank} A_0({\mathcal F})\geq s-1,$$ since $\operatorname{rank} B'=s$ and the rows of $A_0({\mathcal F})$ are obtained from the ones of $B'$ by adding vectors that are proportional to the single vector $\nu$. Let us show that $\lambda A_0({\mathcal F})=0$, having in mind equations (\ref{eq:lambdabprima}) and (\ref{eq:lambdamultiplicidadesmenosuno}). We have \begin{eqnarray*} \lambda A_0({\mathcal F})&=&\lambda B'+\lambda\operatorname{Diag}(\nu_1,\nu_2,\ldots,\nu_s)N=\\ &=&-\nu+(\lambda_1\nu_1,\lambda_2\nu_2,\ldots,\lambda_s\nu_s)N =\\ &=& -\nu+(\sum_{i=1}^s\lambda_i\nu_i)\nu=(-1+\sum_{i=1}^s\lambda_i\nu_i)\nu=0. \end{eqnarray*} Since $\lambda\ne 0$ and $\operatorname{rank}(A_0({\mathcal F}))\geq s-1$, we conclude that $\operatorname{rank}(A_0({\mathcal F}))=s-1$; this proves property (1) of the statement. By construction, we have that $\lambda_i\ne 0$ for all $i=1,2,\ldots,s$; this proves property (3). Finally, property (2) follows from properties (1) and (3) in view of Lemma \ref{lema:matrizsimetrica}. \end{proof} \begin{remark} Note that the above proof implies that $\operatorname{CS}_0({\mathcal F},H)=0$ when there is only one invariant branch $H$, even if $H$ is singular. \end{remark} Let us end the proof of Theorem \ref{th:existenciayunicidadendimensiondos}. Take the matrix $A_0({\mathcal F})$ as in Equation \ref{eq:matrizacero}. We have a vector $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)$ with only nonzero entries such that $\lambda A_0({\mathcal F})=0$. Define the ${\mathbb C}$-divisor ${\mathcal D}$ as $$ {\mathcal D}= \lambda_1H_1+\lambda_2H_2+\cdots+\lambda_sH_s. $$ Applying Lemma \ref{rk:matriciallogaritmicmodel}, we see that $\mathcal D$ is a divisorial model for $\mathcal F$. If $\tilde {\mathcal D}=\sum_{i=1}^s\tilde\lambda_i H_i$ is projectively equivalent to ${\mathcal D}$, there is a constant $c\in {\mathbb C}^*$ such that $\tilde\lambda=c\lambda$. Hence we also have that $\tilde\lambda A_0({\mathcal F})=0$ and $\tilde{\mathcal D}$ is also a divisorial model for $\mathcal F$. Assume now that ${\mathcal D}'=\sum_{i=1}^s\lambda'_i H_i$ is another divisorial model for $\mathcal F$. By Lemma \ref{rk:matriciallogaritmicmodel}, we have that $\lambda' A_0({\mathcal F})=0$. Since $A_0({\mathcal F})$ has rank $s-1$, there is a constant $c\in {\mathbb C}^*$ such that $\lambda'=c\lambda$ and thus the ${\mathbb C}$-divisor ${\mathcal D}'$ is projectively equivalent to ${\mathcal D}$. This ends the proof of Theorem \ref{th:existenciayunicidadendimensiondos}. \subsection{Stability under Morphisms} In this subsection, we characterize the two-dimensional divisorial models in terms of blowing-ups and also in terms of transverse maps. These properties are the essential facts we need to extend the concept of divisorial model to higher dimension. \begin{proposition} \label{pro:modelostrasunaexplosion} Consider a generalized curve ${\mathcal F}$ on $({\mathbb C}^2,0)$ and a ${\mathbb C}$-divisor ${\mathcal D}$. Let $\pi:(M,\pi^{-1}(0))\rightarrow ({\mathbb C}^2,0)$ be the blowing-up of the origin and denote by ${\mathcal F}'$ the transform of $\mathcal F$ by $\pi$. The following statements are equivalent: \begin{enumerate} \item The $\mathbb C$-divisor $\mathcal D$ is a divisorial model for $\mathcal F$.
\item The transform $\pi^*{\mathcal D}$ of $\mathcal D$ is a divisorial model for ${\mathcal F}'$. \end{enumerate} \end{proposition} \begin{proof} Take notations as in the proof of Lemma \ref{lema:matrices} and put ${\mathcal D}=\sum_{i=1}^s\mu_i H_i$. Assume first that $\mathcal D$ is a divisorial model for $\mathcal F$. We have to prove that $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$ at any point $p\in \pi^{-1}(0)=E$. In view of the proof of Lemma \ref{lema:matrices}, we find vectors $(1,\lambda^{(j)})$ for any $j=1,2,\ldots,t$ such that $$ (1,\lambda^{(j)})A_{p_j}({\mathcal F}')=0, $$ with $\lambda^{(j)}=(\lambda_{n_{j-1}+1}, \lambda_{n_{j-1}+2},\ldots,\lambda_{n_{j}})$ and $\lambda_i\ne 0$ for $i=n_{j-1}+1, n_{j-1}+2,\ldots,n_{j}$. By Lemma~\ref{rk:matriciallogaritmicmodel}, this means that $$ {\mathcal D}^{(j)}=E+ \sum_{i=n_{j-1}+1}^{n_j}\lambda_i H'_i $$ is a divisorial model for ${\mathcal F}'$ at the point $p_j$. Let us take $$ {\mathcal D}'= E+ \sum_{i=1}^s\lambda_i H'_i. $$ We have that ${\mathcal D}'$ is a divisorial model for ${\mathcal F}'$ at each of the points $p_j$, since the germ of ${\mathcal D}'$ at $p_j$ is equal to ${\mathcal D}^{(j)}$. Moreover, the $\mathbb C$-divisor ${\mathcal D}'$ is also a divisorial model for ${\mathcal F}'$ at any point $p\in E\setminus\{p_1,p_2,\ldots,p_t\}$, since the germ of ${\mathcal D}'$ at such points $p$ is just the ${\mathbb C}$-divisor $1\cdot E$. On the other hand, by Equation \ref{eq:lambdamultiplicidadesmenosuno} we have \begin{equation*} \sum_{i=1}^s\lambda_i \nu_i=1. \end{equation*} This implies that ${\mathcal D}'=\pi^*{\mathcal D}_0$, where ${\mathcal D}_0=\sum_{i=1}^s\lambda_i H_i$. Since ${\mathcal D}_0$ is a divisorial model for ${\mathcal F}$ at the origin $0\in {\mathbb C}^2$, we have that ${\mathcal D}=c{\mathcal D}_0$ for a nonzero constant $c\in {\mathbb C}^*$. Hence $\pi^*{\mathcal D}=c{\mathcal D}'$ and it is a divisorial model for ${\mathcal F}'$. Conversely, write ${\mathcal D}=\sum_{i=1}^s\mu_i H_i$ and assume that $\pi^*{\mathcal D}$ is a divisorial model for ${\mathcal F}'$. The exceptional divisor $E$ is invariant for ${\mathcal F}'$ and thus $\sum_{i=1}^s\mu_i\nu_i\ne 0$. Up to change ${\mathcal D}$ by a proportional $\mathbb C$-divisor, we can assume that $\sum_{i=1}^s\mu_i\nu_i=1$. This implies that $\pi^*{\mathcal D}={\mathcal D}'$ since they are both divisorial models for ${\mathcal F}'$ with fixed coefficient equal to $1$ for the exceptional divisor $E$. This implies also that ${\mathcal D}={\mathcal D}_0=\sum_{i=1}^s\lambda_i H_i$. We are done. \end{proof} Next corollary is a direct consequence of the preceding proposition: \begin{corollary} \label{cor:modlogsucblowing} Consider a generalized curve ${\mathcal F}$ on $({\mathbb C}^2,0)$ and a ${\mathbb C}$-divisor ${\mathcal D}$. Let $\pi:(M,\pi^{-1}(0))\rightarrow ({\mathbb C}^2,0)$ be the composition of a finite sequence of blowing-ups. The following statements are equivalent: \begin{enumerate} \item $\mathcal D$ is a divisorial model for $\mathcal F$. \item $\pi^*{\mathcal D}$ is a divisorial model for the transform ${\mathcal F}'$ of $\mathcal F$ by $\pi$. \end{enumerate} \end{corollary} In particular, when $\pi$ is a reduction of singularities of ${\mathcal F}$, we have that ${\mathcal D}$ is a divisorial model for $\mathcal F$ if and only if $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$ . This is the point of view taken in \cite{Cor} in the construction of divisorial models in dimension two. 
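Before passing to general transverse morphisms, let us illustrate Corollary \ref{cor:modlogsucblowing} with an elementary computation; the particular linear foliation chosen below serves only as an illustration and is not used in the rest of the paper.
\begin{example}
Take $\lambda,\mu\in{\mathbb C}^*$ with $\lambda/\mu\notin{\mathbb Q}_{<0}$ and consider the foliation $\mathcal F$ on $({\mathbb C}^2,0)$ generated by $\omega=\lambda y\,dx+\mu x\,dy$, whose invariant branches are $H_1=(x=0)$ and $H_2=(y=0)$. The origin is a simple point which is not a saddle-node, so Remark \ref{rk:indicessingsimples} gives $\operatorname{CS}_0({\mathcal F},H_2)=-\lambda/\mu$ and, exchanging the roles of the variables, $\operatorname{CS}_0({\mathcal F},H_1)=-\mu/\lambda$; hence ${\mathcal D}=\lambda\operatorname{Div}(x)+\mu\operatorname{Div}(y)$ is a divisorial model for $\mathcal F$. Let $\pi$ be the blowing-up of the origin. In the chart $x=x'$, $y=x'y'$, where $E=(x'=0)$ and the strict transform of $H_2$ is $H'_2=(y'=0)$, we obtain
$$
\pi^*\omega=x'\big((\lambda+\mu)y'\,dx'+\mu x'\,dy'\big),\qquad \pi^*{\mathcal D}=(\lambda+\mu)\operatorname{Div}(x')+\mu\operatorname{Div}(y').
$$
At the corner point $p_2=E\cap H'_2$ we get $\operatorname{CS}_{p_2}({\mathcal F}',H'_2)=-(\lambda+\mu)/\mu$ and $\operatorname{CS}_{p_2}({\mathcal F}',E)=-\mu/(\lambda+\mu)$, which are exactly the indices $I_{p_2}(\pi^*{\mathcal D},H'_2)$ and $I_{p_2}(\pi^*{\mathcal D},E)$; the computation at the corner $E\cap H'_1$ is analogous. Thus $\pi^*{\mathcal D}$ is a divisorial model for ${\mathcal F}'$, in accordance with Proposition \ref{pro:modelostrasunaexplosion}.
\end{example}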
The property stated in the next Proposition \ref{prop:pullbacklogmod} is the starting point for defining divisorial models in higher ambient dimension. \begin{proposition} \label{prop:pullbacklogmod} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$ and consider a $\mathbb C$-divisor ${\mathcal D}$ on $({\mathbb C}^2,0)$. The following statements are equivalent: \begin{enumerate} \item The $\mathbb C$-divisor $\mathcal D$ is a divisorial model for $\mathcal F$. \item For any $\mathcal D$-transverse holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0)$, we have that $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$. \end{enumerate} \end{proposition} \begin{proof} We see that (2) implies (1) by choosing the identity morphism. Let us show now that (1) implies (2). We have to check that $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$. Note that the existence of the pull-back $\phi^*{\mathcal F}$ is guaranteed by Proposition \ref{prop:pullbackgeneralizedcurve} and we also know that $\phi^*{\mathcal F}$ is a generalized curve. Let $\pi:(M,\pi^{-1}(0))\rightarrow ({\mathbb C}^2,0)$ be a reduction of singularities of $\mathcal F$ by blowing-ups centered at points. In view of Proposition \ref{prop:appdos} in the Appendix I, there is a commutative diagram of morphisms $$ \begin{array}{ccc} ({\mathbb C}^2,0)&\stackrel{\sigma}{\longleftarrow}&(N,\sigma^{-1}(0))\\ \phi\downarrow\;\; &&\;\;\downarrow\psi\\ ({\mathbb C}^2,0)&\stackrel{\pi}{\longleftarrow}&(M,\pi^{-1}(0)) \end{array}, $$ where $\sigma$ is the composition of a finite sequence of blowing-ups. Let us recall that $S=\operatorname{Supp}({\mathcal D})$ is the union of the invariant curves of $\mathcal F$. Note that $\phi^*{\mathcal F}$ exists, since $\phi$ is $S$-transverse, and the pullback $\phi^*{\mathcal D}$ exists for the same reason. We have that $\phi\circ\sigma$ is $S$-transverse, since $\phi$ is $S$-transverse, by hypothesis, and $\sigma$ is a composition of blowing-ups. Hence $\pi\circ\psi$ is $S$-transverse, because we have $\pi\circ\psi=\phi\circ\sigma$. This implies that $\psi$ is $\pi^{-1}(S)$-transverse and then it is $\pi^*{\mathcal D}$-transverse, since $$ \operatorname{Supp}(\pi^*{\mathcal D})\subset \pi^{-1}(S). $$ In this situation, we have that \begin{eqnarray} \label{eq:transformados1} \sigma^*(\phi^*{\mathcal F})=(\phi\circ\sigma)^*{\mathcal F}&=& (\pi\circ\psi)^*{\mathcal F} = \psi^*(\pi^*{\mathcal F}), \\ \label{eq:transformados2} \sigma^*(\phi^*{\mathcal D})=(\phi\circ\sigma)^*{\mathcal D}&=& (\pi\circ\psi)^*{\mathcal D}= \psi^*(\pi^*{\mathcal D}). \end{eqnarray} We know that $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$ in view of Corollary \ref{cor:modlogsucblowing}. Also by Corollary \ref{cor:modlogsucblowing}, we have that $\phi^*{\mathcal D}$ is a divisorial model of $\phi^*{\mathcal F}$ if and only if $\sigma^*(\phi^*{\mathcal D})$ is a divisorial model of $\sigma^*(\phi^*{\mathcal F})$. In view of Equations \ref{eq:transformados1} and \ref{eq:transformados2}, it is enough to show that $\psi^*(\pi^*{\mathcal D})$ is a divisorial model for $\psi^*(\pi^*{\mathcal F})$. Recalling that $\pi^*{\mathcal F}$ is desingularized, that $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$ and that the desired verification may be done in a local way, we have reduced the problem to the case when $\mathcal F$ is desingularized.
More precisely: \begin{quote} In order to prove that (1) implies (2), it is enough to consider only the case when $\mathcal F$ is desingularized. \end{quote} Then, we assume that $\mathcal F$ is desingularized. If $\mathcal F$ is non-singular, we have, in appropriate local coordinates, that ${\mathcal F}=(dx=0)$ and (up to multiplication by a constant) ${\mathcal D}=\operatorname{Div}(x)=1\cdot (x=0)$. In this case $\phi^*({\mathcal D})=\operatorname{Div}(x\circ \phi)$ and $\phi^*{\mathcal F}=(d(x\circ\phi)=0)$, and we are done by Example \ref{ex:integral primera}. Assume that $\mathcal F$ has a simple singular point at the origin; then it is generated by a logarithmic $1$-form $$ \eta=(\lambda+f(x,y))\frac{dx}{x}+(\mu+g(x,y))\frac{dy}{y},\quad f(0,0)=g(0,0)=0, $$ where $(\lambda,\mu)$ is non-resonant, in the sense that $m\lambda+n\mu\ne 0$ for any pair of non-negative integers $m,n$ such that $m+n\geq 1$. Moreover, the divisor $\mathcal D$ is given by $${\mathcal D}=\lambda \operatorname{Div}(x)+\mu \operatorname{Div}(y). $$ Now, we apply Proposition \ref{prop:appuno} to desingularize the list of functions $(x\circ \phi, y\circ \phi)$, by means of a sequence of blowing-ups $\sigma':(N',{\sigma'}^{-1}(0))\rightarrow ({\mathbb C}^2,0)$. It is enough to verify that ${\sigma'}^*(\phi^*{\mathcal D})$ is a divisorial model for ${\sigma'}^*(\phi^*{\mathcal F})$ at the points $p\in {\sigma'}^{-1}(0)$. This reduces the problem to the case in which $\phi$ has the form $$ x\circ\phi=Uu^av^b,\quad y\circ\phi=Vu^cv^d, \quad U(0,0)\ne 0\ne V(0,0), $$ where $a+b\geq 1$ and $c+d\geq 1$ (note that none of these functions is identically zero, since $\phi$ is $S$-transverse). Put $(\lambda',\mu')=(\lambda a+\mu c, \lambda b+\mu d)$; we have \begin{eqnarray*} \phi^*\eta&=& \lambda'\frac{du}{u}+\mu'\frac{dv}{v}+\alpha, \quad \mbox{ $\alpha$ holomorphic}, \\ \phi^*{\mathcal D}&=& \lambda'\operatorname{Div}(u)+\mu'\operatorname{Div}(v). \end{eqnarray*} Now, we see that $\phi^*{\mathcal D}$ is a divisorial model of ${\phi^*{\mathcal F}}$. Note that either $\lambda'\ne 0$ or $\mu'\ne 0$, since $a+b+c+d\geq 2$ and there are no resonances between $\lambda,\mu$. If $\lambda'\ne 0=\mu'$, we have a non-singular foliation with $u=0$ as the only invariant curve and $\phi^*{\mathcal D}=\lambda'\operatorname{Div}(u)$, and we are done; the case $\mu'\ne 0=\lambda'$ is symmetric. If $\lambda'\ne 0\ne \mu'$ we have a simple singularity and $\phi^*{\mathcal D}=\lambda'\operatorname{Div}(u)+\mu'\operatorname{Div}(v)$ is a divisorial model, as we know by Remark \ref{rk:indicessingsimples}. \end{proof} \begin{corollary} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$ and consider a divisorial model ${\mathcal D}$ of ${\mathcal F}$. Then the $\mathbb C$-divisor $\mathcal D$ is non-dicritical. \end{corollary} \begin{proof} Assume that there is a $\mathcal D$-transverse map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0)$ such that $\phi^*{\mathcal D}=0$ and $\phi(y=0)\subset \operatorname{Supp}({\mathcal D})$. By Proposition \ref{prop:pullbacklogmod}, we know that $\phi^*{\mathcal D}$ is a divisorial model of $\phi^*{\mathcal F}$ at the origin. But this is not possible, since we know that $\phi^*{\mathcal F}$ exists and hence the divisorial model at the origin cannot be zero. \end{proof} \section{Reduction of Singularities of Foliated Spaces} Before considering reduction of singularities, let us make precise what we mean by a {\em desingularized foliated space} in the case of generalized hypersurfaces. This concept is developed for any foliated space in \cite{Can}.
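Roughly speaking, a foliated space of generalized hypersurface type will be desingularized when, locally at each point, the foliation is generated by a logarithmic $1$-form of the type $\sum_{i}(\lambda_i+a_i)\,dx_i/x_i$ with non-resonant residues $\lambda_i$ and with all the invariant hypersurfaces contained in the coordinate hyperplanes; the precise definitions are given below.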
\begin{definition} \label{def:foliatedspace} A {\em foliated space} is just a data $((M,K),E,{\mathcal F})$, where \begin{enumerate} \item The {\em ambient space} $(M,K)$ is a germ of non-singular complex analytic variety along a connected and compact analytic subset $K\subset M$. \item The {\em divisor} $E\subset M$ is a normal crossings divisor on $M$. More precisely, it is a germ along $E\cap K$. \item The {\em foliation} $\mathcal F$ is a germ of holomorphic foliation on $M$ along the germification set $K$. \end{enumerate} We say that a foliated space $((M,K),E,{\mathcal F})$ is of {\em generalized hypersurface type} if and only if $\mathcal F$ is a generalized hypersurface and all the irreducible components of $E$ are invariant for $\mathcal F$. \end{definition} We say that a foliated space $((M,K),E,{\mathcal F})$ is {\em desingularized} if it is {\em simple } at any point $p\in K$, in the sense that we detail in Subsection \ref{definicionsimplepoint}. The property of being a simple point is an open property and hence it is also satisfied in an open neighborhood of $K$. \subsection{Simple Points} \label{definicionsimplepoint} The definition of ``simple point'' in any dimension has been introduced in \cite{Can-C, Can}. Here we recall this concept particularized to the case of foliated spaces of generalized hypersurface type. Consider a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type. Let us define when a point $p\in K$ is a simple point for the foliated space. Denote by $\tau$ the {\em dimensional type} of ${\mathcal F}$ at $p$ (see \cite{Can,Can-C, Can-M-RV}). Roughly speaking, the dimensional type $\tau$ is the minimum number of local coordinates needed to describe ${\mathcal F}$ at $p$. Denote by $e$ the number of irreducible components of $E$ through $p$. The first request for $p$ to be simple is that $\tau-1\leq e \leq \tau$. In this way we have two categories of simple points: \begin{enumerate} \item[a)] {\em Simple corner points:} the simple points where $e=\tau$. \item[b)]{\em Simple trace points:} the simple points where $e=\tau-1$. \end{enumerate} Assume that $e=\tau$. Then, there are coordinates $(x_1,x_2,\ldots,x_n)$ at $p$ such that $E=(\prod_{i=1}^\tau x_i=0)$ and ${\mathcal F}$ is locally defined at $p$ by a meromorphic differential $1$-form $\eta$ written as \begin{equation} \label{simplecorners} \eta= \sum_{i=1}^\tau (\lambda_i+a_i(x_1,x_2,\ldots,x_\tau))\frac{dx_i}{x_i},\quad a_i\in {\mathcal O}_{M,p}, \end{equation} where $a_i(0)=0$ for $i=1,2,\ldots,\tau$. We say that $p$ is a {\em simple corner} if the following {\em non resonance property} holds: \begin{quote} \label{quote:resonance} ``For any $\mathbf{0}\ne \mathbf{m}=(m_i)_{i=1}^\tau\in {\mathbb Z}_{\geq 0}^\tau$, we have that $\sum_{i=1}^\tau{m_i}\lambda_i\neq 0$.'' \end{quote} Let us note that $\prod_{i=1}^\tau\lambda_i\ne0$. \begin{remark} It is known that the germs at $p$ of the irreducible components of $E$ are the only invariant germs of hypersurface for $\mathcal F$ at a simple corner $p$. One way of verifying this is as follows. First of all, we can assume that $\tau=n$, because of the ``cylindric shape'' of the foliation over its projection on the first $\tau$ coordinates. Assume now that there is another invariant hypersurface. Then we should have an invariant curve $t\mapsto \gamma(t)$ as follows: $$ \gamma(t)=(t^{m_1}U_1(t),t^{m_2}U_2(t),\ldots,t^{m_n}U_n(t)),\quad U_i(0)\ne 0, \; i=1,2,\ldots,n. $$ Let $\eta$ be as in Equation \ref{simplecorners}. 
The fact that $\gamma^*\eta=0$ implies that $\sum_{i=1}^n m_i\lambda_i=0$ and this contradicts the property of non-resonance. \end{remark} Assume now that $e=\tau-1$. The point $p$ is a {\em simple trace point} if and only if there is a invariant germ of non-singular hypersurface $H_p$ at $p$, not contained in $E$ and having normal crossings with $E$, in such a way that the germ of $\mathcal F$ at $p$ is a simple corner with respect to the normal crossings divisor $E\cup H_p$. \begin{remark} \label{rk:regular implicasimple} Given a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type, any point $p\in M\setminus\operatorname{Sing}({\mathcal F})$ is a simple point. In this case the dimensional type is $\tau=1$. If $e=0$, we have an ``improper'' trace point and the foliation is locally given by $dx=0$, where $x$ is a local coordinate, we write it as $dx/x=0$ and we see that $(M,\emptyset,\mathcal F)$ fulfils the definition of simple point for generalized hypersurfaces. If $e\geq 1$ we necessarily have that $e=1$, since all the components of $E$ are invariant and we have only one of them; we can choose an appropriate coordinate such that $x=0$ is the divisor $E$ and $\mathcal F$ is given by $dx/x=0$; hence it satisfies the definition of simple point. The above property is true in the general case when all the components of $E$ are invariant. In presence of dicritical components, we have to assure the normal crossings property between $\mathcal F$ and the divisor. See \cite{Can}. \end{remark} In our current case of generalized hypersurface type foliated spaces, simple points may be described by means of the {\em logarithmic order} as follows. Consider a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type. Take a point $p\in K$. There are local coordinates $(x_1,x_2,\ldots,x_n)$ such that $E=(\prod_{i=1}^ex_i=0)$ and ${\mathcal F}$ is generated locally at $p$ by an integrable meromorphic $1$-form $$ \eta=\sum_{i=1}^e a_i(x)\frac{dx_i}{x_i}+\sum_{i=e+1}^na_i(x)dx_i,\quad a_i\in {\mathcal O}_{M,p}, $$ where the coefficients $a_i$ do not have a common factor, for $i=1,2,\ldots,n$. The {\em logarithmic order $\operatorname{LogOrd}_p(\eta,E)$} of $\eta,E$ at the origin is defined by $$ \operatorname{LogOrd}_p(\eta,E)=\min\{\nu_0(a_i);\; i=1,2,\ldots,n\}. $$ We also put $\operatorname{LogOrd}_p({\mathcal F},E)=\operatorname{LogOrd}_p(\eta,E)$, when $\eta$ generates ${\mathcal F}$ as above. \begin{proposition} \label{prop:simplepointsandlogorder} Assume that $((M,K), E, {\mathcal F})$ is a foliated space of generalized hypersurface type. Take a point $p\in K$. The following statements are equivalent \begin{enumerate} \item The point $p$ is a simple point for $((M,K), E, {\mathcal F})$. \item $\operatorname{LogOrd}_p({\mathcal F},E)=0$. \end{enumerate} \end{proposition} \begin{proof} See also \cite{Can-M-RV, Moli}. We provide a direct proof in Appendix II. \end{proof} Thus, the locus of non simple points coincides with the {\em log-singular locus\/}: $$\operatorname{LogSing}({\mathcal F}, E)=\{p\in M;\quad \operatorname{LogOrd}_p({\mathcal F},E)\geq 1 \}.$$ \subsection{Reduction of Singularities} \label{reductionofsingularities} Let us recall now what we mean by a reduction of singularities of a foliated space of generalized hypersurface type. 
The existence of reduction of singularities for germs of codimension one holomorphic foliations is known from the paper of Seidenberg \cite{Sei} in ambient dimension two; when the ambient dimension is three, it has been proven in \cite{Can}. In general ambient dimensions it is still an open problem, but there is reduction of singularities for foliated spaces of generalized hypersurface type \cite{Fer-M}. Take a foliated space $((M,K), E,{\mathcal F})$ of generalized hypersurface type. A {\em reduction of singularities of $((M,K), E,{\mathcal F})$} is a transformation of foliated spaces \begin{equation} \label{eq:reducciondesingularidades} \pi: ((M',K'),E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F}) \end{equation} obtained by composition of a finite sequence of admissible blowing-ups of foliated spaces in such a way that $((M',K'),E',{\mathcal F}')$ is desingularized. A non-singular and connected closed analytic subset $(Y,Y\cap K)\subset (M,K)$ is an {\em admissible center} for $((M,K), E,{\mathcal F})$ when it is invariant for $\mathcal F$ and it has normal crossings with $E$. In this situation, we can perform the admissible blowing-up with center $Y$: $$ \pi_1:((M_1,K_1), E^1,{\mathcal F}_1)\rightarrow ((M,K), E,{\mathcal F}),\quad K_1=\pi_1^{-1}(K), $$ where ${\mathcal F}_1=\pi_1^*{\mathcal F}$ is the transform of ${\mathcal F}$ and $E^1=\pi_1^{-1}(E\cup Y)$. Such transformations may be composed. Then, a reduction of singularities $\pi$ as in Equation \ref{eq:reducciondesingularidades} is a finite composition $$ \pi=\pi_1\circ\pi_2\circ\cdots\circ\pi_s, $$ where each $\pi_i$ is an admissible blowing-up of foliated spaces, for $i=1,2,\ldots,s$. The number $s$ is called the {\em length} of $\pi$ and it will be important in order to perform inductive arguments. \begin{remark} We recall that a reduction of singularities of the (finite) union of invariant hypersurfaces induces a reduction of singularities of the foliated space, in the framework of generalized hypersurfaces, see \cite{Fer-M, Can-M-RV}. Hence, we can ensure the existence of a reduction of singularities for a given foliated space $((M,K), E,{\mathcal F})$ of generalized hypersurface type. \end{remark} \subsection{Notations on a Reduction of Singularities} \label{Notations on a Reduction of Singularities} Let us introduce some useful notations concerning a given reduction of singularities $\pi$ as in Equation \ref{eq:reducciondesingularidades}. The morphism $\pi$ is a finite composition $\pi=\pi_1\circ\pi_2\circ\cdots\circ\pi_s$, where $$ \pi_j:((M_j,K_{j}), E^j, {\mathcal F}_j)\rightarrow ((M_{j-1},K_{j-1}), E^{j-1},{\mathcal F}_{j-1}),\quad j=1,2,\ldots,s, $$ is the admissible blowing-up with center $Y_{j-1}\subset M_{j-1}$. The initial and final foliated spaces are given by \begin{eqnarray*} ((M_0,K_0),E^0,{\mathcal F}_0)&=&((M,K),E,{\mathcal F}),\\ ((M_s,K_s),E^s,{\mathcal F}_s)&=&((M',K'),E',{\mathcal F}'). \end{eqnarray*} The exceptional divisor of $\pi_j$ is $E^j_j=\pi_j^{-1}(Y_{j-1})$. Moreover, for any $j=1,2,\ldots, s$ we write the decomposition into irreducible components of $E^{j-1}$ and $E^j$ as $$ E^{j-1}=\cup_{i\in I_0\cup \{1,2,\ldots,j-1\}}E^{j-1}_i,\quad E^{j}=\cup_{i\in I_0\cup \{1,2,\ldots,j\}}E^{j}_i, $$ where $E^{j}_i$ is the strict transform of $E^{j-1}_i$, for $i\in I_0\cup \{1,2,\ldots,j-1\}$. If we denote $I=I_0\cup \{1,2,\ldots,s\}$ and $E'_i=E^s_i$ for $i\in I$, we have that $E'=\cup_{i\in I}E'_i$.
In the same way, we can express the decomposition of $E$ into irreducible components as $E=\cup_{i\in I_0}E_i$, where $E_i=E^0_i$, for $i\in I_0$. The inductive arguments on the length of $\pi$ are just based on the fact that after a first blowing-up, we have a reduction of singularities of smaller length. That is, when $s\geq 1$, we consider the decomposition $\pi=\pi_1\circ \sigma$, where $\sigma=\pi_2\circ\pi_3\circ\cdots\circ\pi_s$. Thus, we have \begin{equation} \begin{array}{ccc} \pi_1: ((M_1,K_1), E^1,{\mathcal F}_1)&\rightarrow& ((M,K), E,{\mathcal F}), \\ \sigma: ((M',K'), E',{\mathcal F}')&\rightarrow& ((M_1,K_1), E^1,{\mathcal F}_1). \end{array} \end{equation} Note that $\sigma$ is a reduction of singularities of length $s-1$. \begin{remark} For the sake of simplicity, we do not detail certain properties about the germs of spaces. We will just use expressions such as ``a point close enough to the germification set'' or ``by taking appropriate representatives''. In each case, we trust the reader to supply the exact meaning of these expressions. \end{remark} Take a point $p\in M$, close enough to the germification set $K$. Then $\pi$ induces a reduction of singularities over the ambient space $(M,p)$ that we denote \begin{equation} \label{eq:pisobrep} \pi_p: ((M',K'_p), E',{\mathcal F}')\rightarrow ((M,p),E,{\mathcal F}),\quad K'_p=\pi^{-1}(p). \end{equation} We can decompose it as $\pi_p=\pi_{1,p}\circ\sigma_p$, where \begin{equation} \label{eq.redsingsobrep} \begin{array}{cccc} \pi_{1,p}: ((M_1,K_{1,p}), E^1,{\mathcal F}_1)&\rightarrow& ((M,p), E,{\mathcal F}),& K_{1,p}=\pi_1^{-1}(p), \\ \sigma_p: ((M',K'_p), E',{\mathcal F}')&\rightarrow& ((M_1,K_{1,p}), E^1,{\mathcal F}_1).& \end{array} \end{equation} Let us next unify the notations for the components of the exceptional divisors and the irreducible invariant hypersurfaces, which are not necessarily contained in them. Denote by $S'\subset M'$ the union of invariant hypersurfaces of ${\mathcal F}'$ not contained in the divisor $E'$. We know that $S'$ is a disjoint union of non-singular hypersurfaces and $D'=E'\cup S'$ is also a normal crossings divisor on $M'$. Since the irreducible components of $E'$ are invariant, we have that $D'$ is the union of all invariant hypersurfaces of ${\mathcal F}'$. Let us write $$ S'=\cup_{b\in B}S'_b $$ for the decomposition into irreducible components of $S'$, where we choose the set of indices $B$ in such a way that $B\cap I=\emptyset$. Denote $D'_i=E'_i$ if $i\in I$ and $D'_b=S'_b$ if $b\in B$. We have that $$ D'=\cup_{j\in I\cup B}D'_j $$ is the decomposition into irreducible components of $D'$. Moreover, let us denote by $D_j=\pi(D'_j)$, for $j\in I_0\cup B$. Then $D=\cup_{j\in I_0\cup B}D_j\subset M$ is the union of the irreducible invariant hypersurfaces of $\mathcal F$. \subsection{Equidesingularization} We recall here the concept of {\em equireduction point} for a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type. This idea has already been useful in \cite{Can-M} and \cite{Can-M-RV}. \begin{remark} In our applications we will consider points that can be outside of the germification set, but close enough to it. So, if we say ``take a point $p\in M$'' we understand that it is ``close enough to $K$''. \end{remark} Let us take a point $p\in M$.
We say that $p$ is an {\em even point} for $((M,K),E,{\mathcal F})$ if either $p\not\in \operatorname{Sing}({\mathcal F})$ (see Remark \ref{rk:regular implicasimple}) or $p\in \operatorname{Sing}({\mathcal F})$ and the singular locus $\operatorname{Sing}({\mathcal F})$ satisfies the following properties, locally at $p$: \begin{enumerate} \item[a)] The singular locus $\operatorname{Sing}({\mathcal F})$ has codimension two in $M$, it is non-singular and it has normal crossings with $E$. \item[b)] The foliation ${\mathcal F}$ is equimultiple along $\operatorname{Sing}({\mathcal F})$. In particular, each irreducible component of $E$ through $p$ contains $\operatorname{Sing}({\mathcal F})$ and there are at most two of them. \end{enumerate} We say that $p$ is an {\em equireduction point}, or a point of {\em $2$-equireduction}, for the foliated space $(M,E,{\mathcal F})$ if it is an even point and this is stable under blowing-ups centered at the singular locus. More precisely, we say that an even point $p\in M$ is an {\em equireduction point} for $((M,K), E, {\mathcal F})$ if for any finite sequence of local blowing-ups over $p$ \begin{equation} \label{eq:sucesiondeequirreduccion} ((M,p),E,{\mathcal F})\stackrel{\sigma_1}{\leftarrow}((M_1,p_1),E^1,{\mathcal F}_1) \stackrel{\sigma_2}{\leftarrow}\cdots \stackrel{\sigma_m}{\leftarrow} ((M_m,p_m),E^m,{\mathcal F}_m) \end{equation} such that the center of $\sigma_i$ is $\operatorname{Sing}({\mathcal F}_{i-1})$, for $i=1,2,\ldots,m$, we have the following properties: \begin{enumerate} \item The point $p_m$ is an even point for $((M_m,p_m),E^m,{\mathcal F}_m)$. \item If $p_m\in \operatorname{Sing}({\mathcal F}_m)$, the induced morphism $ \operatorname{Sing}({\mathcal F}_{m})\rightarrow \operatorname{Sing}({\mathcal F}_{0}) $ is étale. \end{enumerate} Let us recall that a {\em local blowing-up} is the composition of a blowing-up $$\pi:(M',\pi^{-1}(p))\rightarrow (M,p)$$ with an immersion of germs $(M',p')\rightarrow (M',\pi^{-1}(p))$. Next two results may be obtained by a direct adaptation of the statements proved in \cite{Can-M} to the case of generalized hypersurfaces: \begin{proposition} \label{pro:codimensionnoequireduccion} Let $((M,K),E,{\mathcal F})$ be a foliated space of generalized hypersurface type. The set $\operatorname{Z}({\mathcal F}, E)$ of non-equireduction points is a closed analytic subset of $M$ of codimension at least three. \end{proposition} \begin{proposition} \label{pro:secciontransversalequirreduccion} Let $p\in M$ be a singular equireduction point for a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type. Any two dimensional section $$ (\Delta,p)\subset (M,p) $$ transverse to $\operatorname{Sing}({\mathcal F})$ is a Mattei-Moussu transversal for ${\mathcal F}$ and it induces a foliated space $((\Delta,p), E\cap \Delta,{\mathcal F}\vert_{{\Delta}})$ that is a generalized curve. \end{proposition} Consider a singular equireduction point $p\in M$ for the generalized hypersurface type foliated space $((M,K),E,{\mathcal F})$. Let us perform the blowing-up with center at the whole singular locus $(\operatorname{Sing}(\mathcal F), p)\subset (M,p)$: $$ \varsigma_1: ((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1)) \rightarrow ((M,p),E,{\mathcal F}). 
$$ There are only finitely many points $\{p_{j}\}_{j=1}^{n_1}$ over $p$ in the singular locus $\operatorname{Sing}({\mathcal F}_1)$ and the morphism of germs $$ (\operatorname{Sing}({\mathcal F}_1),p_{j}) \rightarrow (\operatorname{Sing}({\mathcal F}),p) $$ is étale for all $j=1,2,\ldots,n_1$. Then, we can blow-up $((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1))$ with center $\operatorname{Sing}({\mathcal F}_1)$ to obtain a morphism $$ \varsigma_2: ((M_2,\varsigma_2^{-1}(\varsigma_1^{-1}(p))),E^2,{\mathcal F}_2)) \rightarrow ((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1)). $$ Note that the center of $\varsigma_2$ has exactly $n_1$ connected components, each one passing through a point $p_{j}$, for $j=1,2,\ldots,n_1$. Locally at each point $p_{j}$ we have an induced blowing-up with center $(\operatorname{Sing}({\mathcal F}_1),p_{j})$: $$ \varsigma_{p_{j}}:((M_2,\varsigma_2^{-1}(p_{j})),E^2,{\mathcal F}_2)) \rightarrow ((M_1,p_{j}),E^1,{\mathcal F}_1)). $$ Continuing indefinitely in this way, we obtain the {\em equireduction sequence} \begin{equation} \label{eq:sucesiondeequirreduccion2} {\mathcal E}_{M,E,{\mathcal F}}^p: ((M,p),E,{\mathcal F})\stackrel{\varsigma_1}{\leftarrow}((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1) \stackrel{\varsigma_2}{\leftarrow} \cdots . \end{equation} The {\em infinitely near points of $p$} in $M_\ell$ are the points in $$ (\varsigma_1\circ\varsigma_2\circ \cdots\circ \varsigma_{\ell})^{-1}(p)\cap \operatorname{Sing}({\mathcal F}_\ell) $$ that we can write as $p_{j_1j_2\cdots j_\ell}$, with the ``dendritic'' property that $$ \varsigma_\ell(p_{j_1j_2\cdots j_{\ell-1} j_\ell})=p_{j_1j_2\cdots j_{\ell-1}}. $$ Let us detail some consequences of Proposition \ref{pro:secciontransversalequirreduccion} relative to plane sections at an equireduction point $p\in M$. Take a two dimensional section $(\Delta,p)\subset (M,p)$ transverse to $\operatorname{Sing}({\mathcal F})$. First of all, we consider the following remark: \begin{remark} \label{rk:sectiondimdos} In view of \cite{Mat}, we know that $p$ is a simple point for $((M,p),E,{\mathcal F})$ if and only if it is a simple point for the restriction $((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta}) $. \end{remark} If we consider a local holomorphic generator $\omega$ of ${\mathcal F}$ at $p$, without common factors in its coefficients, we know that $\eta=\omega\vert_{\Delta}$ is a local generator of ${\mathcal F}\vert_{\Delta}$ and moreover $$ \nu_{\Sigma}(\omega)=\nu_p(\omega)=\nu_p(\eta), \quad \Sigma=(\operatorname{Sing}({\mathcal F}), p). $$ This makes the blowing-ups $\varsigma_i$ in the equireduction sequence \ref{eq:sucesiondeequirreduccion2} to be ``compatible'' with the transversal section $\Delta$. 
More precisely, the equireduction sequence induces a sequence of blowing-ups $\bar\varsigma_i$ of the two-dimensional section $((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta})$ with center at the points $p_{j_1j_2\cdots j_\ell}$, in such a way that the following diagram is commutative: \begin{equation} \label{eq:equrreducciondomtwo} \begin{array}{ccccc} ((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta})& \stackrel{\bar\varsigma_1}{\longleftarrow}& ((\Delta_1,\varsigma_1^{-1}(p)),E^1\cap \Delta_1,{\mathcal F}_1\vert_{\Delta_1})& \stackrel{\bar\varsigma_2}{\longleftarrow}& \cdots \\ \downarrow&&\downarrow&& \\ ((M,p),E,{\mathcal F})&\stackrel{\varsigma_1}{\longleftarrow}& ((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1)& \stackrel{\varsigma_2}{\longleftarrow}&\cdots \end{array} \end{equation} Looking at the diagram in Equation \ref{eq:equrreducciondomtwo}, we know that the sequence of the $\bar\varsigma_i$ desingularizes the two dimensional foliated space $((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta})$, since we blow-up each time at the singular points; hence we apply the existence of reduction of singularities in dimension two, see \cite{Sei,Can-C-D}. As a consequence, at a finite step of the equireduction sequence given in Equation \ref{eq:sucesiondeequirreduccion2}, we reach a reduction of singularities of $((M,p), E,{\mathcal F})$. \subsection{Relative Equireduction Points and Relative Transversality} Let us introduce a version of the equireduction points relative to a fixed reduction of singularities \begin{equation*} \pi: ((M',K'),E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F}) \end{equation*} as in Equation \ref{eq:reducciondesingularidades}. We also introduce the locus of {\em $\pi$-good} points that will be essential in our proof of the existence of divisorial models. Let us take the notations introduced in Subsection \ref{Notations on a Reduction of Singularities}. We define the locus $Z_\pi({\mathcal F},E)$ of the {\em points that are not points of $\pi$-equireduction} as being $$ Z_\pi({\mathcal F},E)=Z({\mathcal F},E)\cup B_\pi({\mathcal F}, E), $$ with $B_\pi({\mathcal F}, E)=\cup_{j\in J_\pi}(\pi_1\circ\pi_2\circ\cdots\circ\pi_j)(Y_j) $, where $J_\pi$ is the set of indices $j$ in $\{0,1,\ldots,s-1\}$ such that the center $Y_j$ of $\pi_{j+1}$ has codimension at least three. The complement $M\setminus Z_\pi({\mathcal F},E)$ is the set of {\em $\pi$-equireduction points}. \begin{remark} \label{rk:codimpiequireduccion} We have that $Z_\pi({\mathcal F},E)$ is a closed analytic subset of $M$ of codimension at least three. \end{remark} If we take a point $p\in M\setminus Z_\pi({\mathcal F},E)$, the morphism $$ \pi_p:((M',\pi^{-1}(p)), E',{\mathcal F}')\rightarrow ((M,p), E,{\mathcal F}) $$ is a ``part of the equireduction sequence'' in Equation \ref{eq:sucesiondeequirreduccion2}, in the sense that there is a finite step in the equireduction sequence that can be obtained from $$ ((M',\pi^{-1}(p)), E',{\mathcal F}') $$ by repeatedly blowing-up the singular locus. Recall now that we have the decomposition $\pi=\pi_1\circ \sigma$ and that $\pi_1^{-1}(Y)=E^1_1$ is the exceptional divisor of the first blowing-up $\pi_1$, where $Y=Y_0$. Take a point $q\in E^1_1$. Let us put $p=\pi_1(q)$ and denote by $F_p=\pi_1^{-1}(p)$ the fiber of $p$ by $\pi_1$. Recall that $F_p$ is isomorphic to a complex projective space of dimension equal to $n-d-1$, where $n=\dim M$ and $d=\dim Y$. 
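(For instance, if $n=3$ and the center $Y$ is a curve, then $d=1$ and each fiber $F_p$ is a projective line ${\mathbb P}^1_{\mathbb C}$.)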
We say that $q$ is a point of {\em $\pi_1$-transversality} if $q\notin \operatorname{Sing}({\mathcal F}_1)$ or $q\in \operatorname{Sing}({\mathcal F}_1)$ and the germ $(\Sigma,q)$ of the singular locus $\operatorname{Sing}({\mathcal F}_1)$ at $q$ is non singular, it is contained in $(E^1_1,q)$, it has codimension two in $(M_1,q)$ and, moreover, we have the transversality property with respect to the fiber $F_p$ given by $T_q(F_p)\not\subset T_q\Sigma$, where $T_q$ stands for the tangent space. Let us denote by $T^1_\pi \subset E^1_1$ the locus of points that are not of $\pi_1$-transversality. Note that we have the closed analytic set $ Z_\sigma({\mathcal F}_1,E^1) $ defining the locus of points in $M_1$ that are not of $\sigma$-equireduction. The codimension of $Z_\sigma({\mathcal F}_1,E^1)$ in $M_1$ is at least three. We say that a point $p\in Y$ is a {\em $\pi$-bad point} if and only if $$ \dim \left( Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi \right) \cap F_p \geq n-d-2,\quad d=\dim Y. $$ Denote by $B_\pi\subset Y$ the set of $\pi$-bad points. \begin{lemma} The set $B_\pi$ is a closed analytic subset of $Y$ such that $B_\pi\ne Y$. \end{lemma} \begin{proof} We know that $\dim \left( Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi \right) \cap F_p$ is the maximum $$ \max\{ \dim (Z_\sigma({\mathcal F}_1,E^1) \cap F_p), \dim( T^1_\pi \cap F_p)\}. $$ Hence $B_\pi=B'\cup B''$ where \begin{eqnarray*} B' &=& \{p\in Y;\; \dim (Z_\sigma({\mathcal F}_1,E^1) \cap F_p) \geq n-d-2\}, \\ B''&=&\{p\in Y;\; \dim (T^1_\pi \cap F_p) \geq n-d-2 \}. \end{eqnarray*} Recall that the projection $E^1_1\rightarrow Y$ is a fibration with fiber ${\mathbb P}^{n-d-1}_{\mathbb C}$. Since the dimension of the fibers is upper semicontinuous, we see that both $B'$ and $B''$ are closed analytic subsets of $Y$. The codimension of $Z_\sigma({\mathcal F}_1,E^1)\cap E^1_1$ in $E^1_1$ is at least two, hence there is a closed subset $Z'\subset Y$, with $Z'\ne Y$ such that the codimension of $Z_\sigma({\mathcal F}_1,E^1)\cap F_p$ in $F_p$ is at least two, for any $p\in Y\setminus Z'$; in particular, we have that $B'\ne Y$. Let us show now that $B''\ne Y$. Decompose the analytic subset $\operatorname{Sing}({\mathcal F}_1)\cap E^1_1$ of $E^1_1$ as a union $$ \operatorname{Sing}({\mathcal F}_1)\cap E^1_1= \Sigma_1\cup \Sigma_2\cup\cdots\Sigma_t\cup R_1, $$ where the $\Sigma_i$ are the irreducible components of $\operatorname{Sing}({\mathcal F}_1)$ that have codimension two and that are contained in $E^1_1$, for $i=1,2,\ldots,t$. The closed analytic set $R_1\subset E^1_1$ is the union of the intersection with $E^1_1$ of the other irreducible components of $\operatorname{Sing}({\mathcal F}_1)$. Let us note that $R_1$ has codimension at least two in $E_1$ and that $R_1\subset T^1_\pi$. Moreover, we have that $$ T^1_\pi=R_1\cup\bigcup_{i=1}^t(\Sigma_i\cap T^1_\pi). $$ We have that $B''=B''_{1}\cup B''_{2}\cup\cdots\cup B''_{t}\cup B'''$, where \begin{eqnarray*} B''_{i}&=& \{p\in Y;\; \dim (\Sigma_i\cap T^1_\pi \cap F_p)\geq n-d-2 \}, \\ B'''&=& \{p\in Y;\; \dim (R_1\cap F_p) \geq n-d-2 \}. \end{eqnarray*} Since the codimension of $R_1$ in $E^1_1$ is at least two, we have that $B'''\ne Y$. Let us show that $B''_{i}\ne Y$, for $i=1,2,\ldots,t$. If $\dim \Sigma_i\cap T^1_\pi\leq n-3$, we are done. Thus, we assume that $\dim\Sigma_i\cap T^1_\pi= n-2$ and hence $\Sigma_i\subset T^1_\pi$. Take $U=\Sigma_i\setminus \Upsilon$, where $\Upsilon$ is the set of singularities of the closed analytic set $\operatorname{Sing}({\mathcal F}_1)$. 
The fact that $U\subset T^1_\pi$ means that for any point $q\in U$ we have that $T_q(F_{\pi_1(q)})\subset T_q\Sigma_i$. This property implies that $\Sigma_i$ is a union of fibers. Then, if $B''_i=Y$, we have that $\Sigma_i=E^1_1$, a contradiction, since $\Sigma_i$ is a hypersurface of $E^1_1$. We have a decomposition of $B_\pi$ into a finite union of strict closed analytic subsets of $Y$. Since $Y$ is irreducible, we conclude that $B_\pi\ne Y$. \end{proof} The next corollary is an important step in our proof of the existence of divisorial models in higher dimension: \begin{corollary} \label{cor: equirrducciongenerica} There is a strict closed analytic subset $Z\subset Y$, with $Z\ne Y$, such that any point $p\in Y\setminus Z$ satisfies the following properties: \begin{enumerate} \item The center $Y$ is equimultiple for $\mathcal F$ at the point $p$. \item There is a Mattei-Moussu transversal $(\Delta,p)$ of ${\mathcal F}$ at $p$ such that the strict transform $\Delta^1$ of $\Delta$ by $\pi_{1,p}$ intersects $\operatorname{Sing}({\mathcal F}_1)$ transversely and only in points of $\sigma$-equireduction. \end{enumerate} \end{corollary} \begin{proof} Let $C\subset Y$ be the set of points where $Y$ is not equimultiple for $\mathcal F$. We know that $C\ne Y$ is a strict closed analytic subset of $Y$. Now, take $Z=C\cup B_\pi$, which is also a strict closed analytic subset of $Y$. Let us show that $Z$ satisfies the statement. Take a point $p\in Y\setminus Z$. Since $p\notin C$, it is a point of equimultiplicity of $Y$ with respect to $\mathcal F$. Consider the fiber $F_p$ of $p$; we know that it is a projective space of dimension $n-d-1$. Since $p\notin B_\pi$, we have that $$ \dim \left( Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi \right) \cap F_p \leq n-d-3,\quad d=\dim Y. $$ This means that for a nonempty Zariski open set $U$ of the Grassmannian of lines $\ell$ in the projective space $F_p$, we have the property that $$ \ell\cap (Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi)=\emptyset. $$ We also know that for a nonempty Zariski open set $W$ of the Grassmannian of lines $\ell$ in the projective space $F_p$, we have the property that $\ell$ meets $\operatorname{Sing}({\mathcal F}_1)$ transversely. By the generic properties of Mattei-Moussu transversals, we can choose $(\Delta,p)$ for $\mathcal F$ such that the line $\ell=F_p\cap \Delta_1$ lies in $U\cap W$. The desired property follows from the definition of the sets $Z_\sigma({\mathcal F}_1,E^1)$ and $T^1_\pi$. \end{proof} \section{Divisorial Models For Generalized Hypersurfaces} \label{Logarithmic Models For Generalized Hypersurfaces} We introduce the definition of divisorial model as follows: \begin{definition} Consider a generalized hypersurface $\mathcal F$ on a non-singular complex analytic variety $M$ and a $\mathbb C$-divisor $\mathcal D$ on $M$. We say that $\mathcal D$ is a {\em divisorial model for ${\mathcal F}$ at a point $p$ in $M$} if the following conditions hold: \begin{enumerate} \item The support $\operatorname{Supp}({\mathcal D}_p)$ of the germ ${\mathcal D}_p$ of $\mathcal D$ at $p$ is the union of the germs at $p$ of invariant hypersurfaces of $\mathcal F$. \item For any ${\mathcal D}$-transverse map $\phi:({\mathbb C}^2,0)\rightarrow (M,p)$, the ${\mathbb C}$-divisor $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$. \end{enumerate} We say that $\mathcal D$ is a divisorial model for $\mathcal F$ if it fulfils the above conditions at every point $p$ in $M$. 
\end{definition} In view of Proposition \ref{prop:pullbacklogmod}, this definition extends the one for dimension two, stated in Definition \ref{def:modelodimensiondos}. This Section is devoted to giving a proof of the following results: \begin{theorem} \label{teo:main} Every generalized hypersurface on $({\mathbb C}^n,0)$ has a divisorial model. \end{theorem} \begin{proposition} \label{pro: uniquenessandnondicriticalness} Let $\mathcal D$ be a divisorial model for a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n,0)$. Then $\mathcal D$ is a non-dicritical $\mathbb C$-divisor. Moreover, any other $\mathbb C$-divisor is a divisorial model for $\mathcal F$ if and only if it is projectively equivalent to $\mathcal D$. \end{proposition} In order to build a divisorial model, we take a reduction of singularities $\pi$ of the generalized hypersurface $\mathcal F$. There is a natural projective class of $\mathbb C$-divisors associated to the desingularized foliated space obtained from $\pi$. The divisorial model will be a $\mathbb C$-divisor that is transformed into this class by the reduction of singularities. For technical convenience, we systematically consider foliated spaces including a highlighted normal crossings divisor, although the concept of divisorial model does not involve such a normal crossings divisor. Note that the uniqueness part in Proposition \ref{pro: uniquenessandnondicriticalness} comes from the corresponding fact in dimension two, just by taking a Mattei-Moussu transversal. \subsection{$\mathbb C$-Divisors Associated to Desingularized Foliated Spaces} \label{DivisorsAssociatedtoDesingularizedFoliateSpaces} Let us consider a foliated space $((M',K'), E', {\mathcal F}')$ of generalized hypersurface type, where $K'$ is a connected and compact analytic subset of $M'$. Let us take the following hypothesis: \begin{itemize} \item There is a logarithmic $1$-form $\eta'$ on $(M',K')$ fully associated to ${\mathcal F}'$. \item The foliated space $((M',K'),E',{\mathcal F}')$ is desingularized. \end{itemize} \begin{remark} \label{rk:existenciadeetaprima} Consider a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type, where $K$ is a connected and compact analytic subset of $M$. Assume that there is a logarithmic $1$-form $\eta$ on $(M,K)$ fully associated to ${\mathcal F}$ and consider a reduction of singularities $$ \pi:((M',K'), E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F}) $$ as in Equation \ref{eq:reducciondesingularidades}. Then $\eta'=\pi^*{\eta}$ is a logarithmic $1$-form on $(M',K')$ fully associated to ${\mathcal F}'$, in view of Proposition \ref{prop:pullbackoflogaritmicformsfullyassociated}. We note that the existence of such $\eta$, and hence $\eta'$, is assured when $K=\{0\}$ is a single point. \end{remark} In this subsection we build a $\mathbb C$-divisor ${\mathcal D}^{\eta'}$ on $(M',K')$, defining a divisorial model for ${\mathcal F}'$ on $(M',K')$, that is obtained from $\eta'$ in terms of residues. Taking notations compatible with Subsection \ref{Notations on a Reduction of Singularities}, we denote by $S'\subset M'$ the union of invariant hypersurfaces of ${\mathcal F}'$ not contained in the divisor $E'$. We know that $S'$ is a disjoint union of non singular hypersurfaces and $D'=E'\cup S'$ is also a normal crossings divisor on $M'$. Since the irreducible components of $E'$ are invariant, we have that $D'$ is the union of all invariant hypersurfaces of ${\mathcal F}'$. 
In accordance with Subsection \ref{Notations on a Reduction of Singularities}, we denote by $D'=\cup_{j\in I\cup B}D'_j$ the decomposition of $D'$ into irreducible components, where $E'=\cup_{i\in I}E'_i$ and $S'=\cup_{b\in B}S'_b$. Taking into account Saito's residue theory in \cite{Sai} and noting that $D'$ has the normal crossings property, the residue $\operatorname{Res}_p(\eta')$ of the germ of $\eta'$ at a point $p\in K'\subset M'$ is an element $$ \operatorname{Res}_p(\eta')\in \oplus_{j\in I_p\cup B_p}{\mathcal O}_{D'_j,p}, $$ where $I_p=\{i\in I; p\in E'_i\}$ and $B_p=\{b\in B; p\in S'_b\}$. Note that $B_p$ is empty (corner points) or it has exactly one element (trace points) in view of the description of simple points in Subsection \ref{definicionsimplepoint}. More generally, the residues induce global holomorphic functions \begin{equation} \label{eq:residuefunctions} f_j:D'_j\rightarrow {\mathbb C},\quad j\in I\cup B. \end{equation} Let us note that each $D'_j$ is a germ along $D'_j\cap K'$. Thus, the functions $f_j$ are constant along the connected components of the compact sets $D'_j\cap K'$. More precisely, following Saito's theory, we have local coordinates at $p$ that can be labelled as $(x_j; j\in I_p\cup B_p)\cup (y_s;s\in A)$ such that the germ $\eta'_p$ of $\eta'$ at $p$ can be written as \begin{equation*} \eta'_p=\sum_{j\in I_p\cup B_p}\tilde f_j\frac{dx_j}{x_j}+\alpha,\quad \tilde f_j\vert_{D'_j}=f_j, \end{equation*} where $\alpha$ is a germ of holomorphic $1$-form and moreover, the functions $\tilde f_j$ satisfy that $\partial \tilde f_j/\partial x_j=0$, that is, $\tilde f_j$ does not depend on the coordinate $x_j$. By evaluating $f_j$ at the point $p$, we get a local $\mathbb C$-divisor ${\mathcal D}_{\eta',p}$ defined by \begin{equation} \label{eq:descripciondedeetaprima} {\mathcal D}_{\eta',p}=\sum_{j\in I_p\cup B_p} f_j(p)\operatorname{Div}_p(D'_j). \end{equation} \begin{proposition} There is a $\mathbb C$-divisor ${\mathcal D}^{\eta'}$ on $(M',K')$ such that the germ ${\mathcal D}^{\eta'}_p$ of ${\mathcal D}^{\eta'}$ at any $p\in K'$ satisfies that ${\mathcal D}^{\eta'}_p={\mathcal D}_{\eta',p}$. \end{proposition} \begin{proof} The residual functions $f_j$ of Equation \ref{eq:residuefunctions} are constant along each connected component of $K'\cap D'_j$, for $j\in I\cup B$. We have only to remark that the compact set $K'\cap D'_j $ is connected for any $j\in I\cup B$; otherwise $D'_j$ would not be irreducible, since each connected component of $K'\cap D'_j$ determines a connected component of $D'_j$ as a germ of hypersurface. \end{proof} \begin{remark} The $\mathbb C$-divisor ${\mathcal D}^{\eta'}$ is a divisorial model for ${\mathcal F}'$. The proof of this statement is a consequence of the more general results in Subsection \ref{Existence of Logarithmic Models}, by considering a trivial reduction of singularities. \end{remark} \subsection{The $\mathbb C$-Divisor Induced by a Reduction of Singularities} \label{The Divisor Induced by a Reduction of Singularities} Let us consider a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type, where $K$ is a connected and compact analytic subset $K\subset M$. Assume that we have a logarithmic $1$-form $\eta$ on $(M,K)$ fully associated to $\mathcal F$. 
It is not evident how to define a ${\mathbb C}$-divisor associated to $\eta$ as in Subsection \ref{DivisorsAssociatedtoDesingularizedFoliateSpaces}, unless we are in the situation of normal crossings outside a subset of codimension $\geq 3$, described in Saito \cite{Sai} and also in \cite{Cer-LN}. In this subsection we do it, once we have fixed a reduction of singularities of $((M,K), E,{\mathcal F})$. More precisely, this subsection is devoted to the proof of the following result: \begin{theorem} \label{teo:pilogarithmicmodel} Consider a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type, where $K$ is a connected and compact analytic subset of $M$. Let $\eta$ be a logarithmic differential $1$-form on $(M,K)$ fully associated to ${\mathcal F}$. Given a reduction of singularities $$ \pi:((M',K'), E',{\mathcal F}')\rightarrow ((M,K), E,{\mathcal F}), $$ there is a unique $\mathbb C$-divisor ${\mathcal D}_\pi^{\eta}$ of $(M,K)$ such that $\pi^*({\mathcal D}_\pi^{\eta})={\mathcal D}^{\eta'}$, where $\eta'=\pi^*\eta$. Moreover ${\mathcal D}_\pi^\eta$ is non dicritical. \end{theorem} \begin{remark} We know that Theorem \ref{teo:pilogarithmicmodel} is true when $\dim M=2$. Let us see this. Assume that $\dim M=2$ and take a logarithmic differential $1$-form $\eta$ fully associated to $\mathcal F$. We have that $\eta'=\pi^*\eta$ is locally given at the singular points $p\in K'$ as $$ \eta'=(\lambda+a(x,y))\frac{dx}{x}+ (\mu+b(x,y))\frac{dy}{y}, $$ where $xy=0$ is a local equation of $S' \cup E'$ at $p$. The coefficient in ${\mathcal D}^{\eta'}$ of the irreducible component of $S'\cup E'$ of equation $x=0$ is equal to $\lambda$. This tells us that ${\mathcal D}^{\eta'}$ is a divisorial model for ${\mathcal F}'$. Consider now a divisorial model ${\mathcal D}$ for ${\mathcal F}$, it exists in view of Theorem \ref{th:existenciayunicidadendimensiondos}. By Corollary \ref{cor:modlogsucblowing}, the pullback $\pi^*{\mathcal D}$ is a divisorial model for ${\mathcal F}'$. Recall that two divisorial models are projectively equivalent, by Theorem \ref{th:existenciayunicidadendimensiondos}. Noting that the supports are connected (each component of the support intersects the connected subset $K'=\pi^{-1}(0)$), there is a non-zero scalar $\mu\in {\mathbb C}$ such that $ {\mathcal D}^{\eta'}=\mu \pi^*{\mathcal D}$. If we take ${\mathcal D}_\pi^{\eta}=\mu{\mathcal D}$, we obtain that $\pi^*({\mathcal D}_\pi^{\eta})={\mathcal D}^{\eta'}$. The non-dicriticalness of ${\mathcal D}_\pi^{\eta}$ is also a consequence of the fact that it is a divisorial model for $\mathcal F$, in view of the statement of Theorem \ref{th:existenciayunicidadendimensiondos}. \end{remark} From now on, let us take the general notations introduced in Subsection \ref{Notations on a Reduction of Singularities}. Let us recall that $I\setminus I_0=\{1,2,\ldots,s\}$, where $s$ is the length of $\pi$ as a composition of a sequence of blowing-ups. We assume that $s\geq 1$, otherwise we take ${\mathcal D}^{\eta}_\pi={\mathcal D}^{\eta}$ and we are done. Let us recall also the decomposition $\pi=\pi_1\circ\sigma$, where $\pi_1$ is the first blowing-up and $\sigma$ is a composition of $s-1$ blowing-ups. Let us show that ${\mathcal D}_\pi^{\eta}$ is necessarily unique. If we write ${\mathcal D}^{\eta'}=\sum_{j\in I\cup B}\lambda_j D'_j$, the condition $\pi^*({\mathcal D}_\pi^\eta)={\mathcal D}^{\eta'}$ implies that \begin{equation} \label{eq:depieta} {\mathcal D}_\pi^\eta=\sum_{j\in I_0\cup B}\lambda_j D_j. 
\end{equation} Then ${\mathcal D}_\pi^{\eta}$ is unique, if it exists. From now on, we fix ${\mathcal D}_\pi^\eta$ as being the one given by Equation \ref{eq:depieta}. Let us see that ${\mathcal D}_\pi^\eta$ is a non-dicritical $\mathbb C$-divisor, under the assumption that $\pi^*({\mathcal D}^\eta_\pi)={\mathcal D}^{\eta'}$. We know that ${\mathcal D}^{\eta'}$ is non-dicritical, by applying Corollary \ref{cor:dicriticonormalcrossings}. Moreover, for any $i\in \{1,2,\ldots,s\}$, the $i$-th blowing-up is non-dicritical, since $\lambda_i\ne 0$. Hence $\pi$ is a composition of admissible non-dicritical blowing-ups for ${\mathcal D}^\eta_\pi$. Since ${\mathcal D}^{\eta'}$ is non-dicritical, by Corollary \ref{dicriticidadexplosionnodicritica} we conclude that ${\mathcal D}^\eta_\pi$ is non-dicritical. Finally, we have to verify that $\pi^*({\mathcal D}_\pi^\eta)={\mathcal D}^{\eta'}$. We proceed by induction on the length $s$ of the reduction of singularities $\pi$, using the fact that Theorem \ref{teo:pilogarithmicmodel} is true when $n=2$. Let us put $\eta_1=\pi_1^*\eta$. We have that the logarithmic $1$-form $\eta_1$ is fully associated to ${\mathcal F}_1$ and $\eta'=\sigma^*\eta_1$. Our induction hypothesis implies that the statement of Theorem \ref{teo:pilogarithmicmodel} is true for the morphism $$ \sigma:((M',K'), E',{\mathcal F}')\rightarrow ((M_1,K_1), E^1,{\mathcal F}_1). $$ This means that $\sigma^*({\mathcal D}^{\eta_1}_\sigma)={\mathcal D}^{\eta'}$. Then, in order to show that $\pi^*({\mathcal D}^{\eta}_\pi)={\mathcal D}^{\eta'}$, we have only to verify that $\pi_1^*({\mathcal D}^{\eta}_\pi)={\mathcal D}^{\eta_1}_\sigma$. This is equivalent to saying that the following equality holds: \begin{equation} \label{eq:lambdauno} \lambda_1=\sum_{j\in J_Y}\lambda_j\nu_Y(D_j), \end{equation} where $J_Y=\{j\in I_0\cup B;\; Y\subset D_j\}$. \begin{remark} Take a point $p\in Y$, not necessarily in $Y\cap K$, but close enough to $K$. Assume that $p$ is a point of equimultiplicity for $\mathcal F$; then, for any $D_j$, with $j\in I_0\cup B$, we have that $\nu_p(D_j)=\nu_Y(D_j)$. In this case, we have the equality \begin{equation} \label{eq:indicesdey} J_Y=\{j\in I_0\cup B;\quad p\in D_j\}. \end{equation} \end{remark} Let us take a point $p\in Y\setminus Z$ and a Mattei-Moussu transversal $(\Delta,p)$ as stated in Corollary \ref{cor: equirrducciongenerica}. We recall that $\pi$ induces a morphism $\pi_p$ as in Equation \ref{eq:pisobrep} that splits as $\pi_p=\pi_{1,p}\circ\sigma_p$ as in Equation \ref{eq.redsingsobrep}. Moreover, the morphism $\pi$ induces a two-dimensional reduction of singularities of foliated spaces of generalized curve type that we denote as: \begin{equation} \label{eq:pibarrap} \bar\pi_p: ((\Delta',E'(p)), E'(p), {\mathcal F}'\vert_{\Delta'}) \rightarrow ((\Delta,p), \emptyset, {\mathcal F}\vert_\Delta),\quad E'(p)=\Delta'\cap\pi^{-1}(p), \end{equation} where $\Delta'$ is the strict transform of $\Delta$ by $\pi$. The reader may find similar situations in \cite{Can-M-RV}. In view of the equireduction properties, there is an increasing sequence of integers \begin{equation} \label{eq:listaele} \ell_1=1<\ell_2< \cdots<\ell_r\leq s,\quad T_Y=\{1,\ell_2,\ldots,\ell_r\}, \end{equation} depending only on $Y$, with the following property: for any $j=1,2,\ldots,s$, the inverse image $K'_p=\pi^{-1}(p)$ intersects $D'_j$ if and only if $j\in T_Y$. We have that $$ E'(p)=\cup_{t\in T_Y} (\Delta'\cap E'_t) $$ is the decomposition into irreducible components of $E'(p)=\bar\pi^{-1}_p(p)$. 
We can decompose $\bar\pi_p$ as $\bar\pi_p=\bar\pi_{1,p}\circ \bar\sigma_p$ where \begin{eqnarray*} \bar\pi_{1,p}:((\Delta_1,E^1(p)), E^1(p),{\mathcal F}_1\vert_{\Delta_1})&\rightarrow & ((\Delta,p), \emptyset,{\mathcal F}\vert_{\Delta}), \quad E^1(p)=\Delta_1\cap \pi_1^{-1}(p), \\ \bar\sigma_p: ((\Delta',E'(p)), E'(p),{\mathcal F}'\vert_{\Delta'})&\rightarrow&((\Delta_1,E^1(p)), E^1(p),{\mathcal F}_1\vert_{\Delta_1}), \end{eqnarray*} where $\Delta_1$ is the strict transform of $\Delta$ by $\pi_1$. Since $(\Delta,p)$ is a Mattei-Moussu transversal, we have that $ \bar\eta=\eta\vert_\Delta $ is a logarithmic differentiable $1$-form fully associated to ${\mathcal F}\vert_{\Delta}$. Moreover, we know that $\Delta_1$ is also a Mattei-Moussu transversal for ${\mathcal F}_1$ in all the points of $E^1(p)$, then $\bar\eta_1=\eta_1\vert_{\Delta_1}$ is also a logarithmic $1$-form fully associated to ${\mathcal F}_1\vert_{\Delta_1}$. By the same reason, we see that $\bar\eta'=\eta'\vert_{\Delta'}$ is a logarithmic $1$-form fully associated to ${\mathcal F}'\vert_{\Delta'}$. On the other hand, an elementary functoriality assures that \begin{equation} \bar\eta'=\bar\pi_p^*(\bar\eta)=\bar\sigma_p^*(\bar\eta_1), \quad \bar\eta_1=\bar\pi_{1,p}^*(\bar\eta). \end{equation} Note that $((M', K'_p ),E',{\mathcal F}')$ is desingularized. Then, we can follow the construction in Subsection \ref{DivisorsAssociatedtoDesingularizedFoliateSpaces} to obtain a $\mathbb C$-divisor $ {\mathcal D}^{\eta'}_p $, defined in $(M', K'_p )$ and associated to the logarithmic $1$-form $\eta'$. To be precise, the $\mathbb C$-divisor $ {\mathcal D}^{\eta'}_p $ is associated to the germ of $\eta'$ along $K'_p$; this germ may be considered for $p$ close enough to $K$, by taking appropriate representatives. We can write the $\mathbb C$-divisor ${\mathcal D}^{\eta'}_p$ on $(M', K'_p)$ and the ${\mathbb C}$-divisor ${\mathcal D}^{\eta}_{\pi_p}$ on $(M,p)$ as \begin{eqnarray} \label{eq:enlafibra} {\mathcal D}^{\eta'}_p&=&\sum_{j\in J_Y}\lambda_j(p)\operatorname{Div}_{K'_p}(D'_j) + \sum_{j\in T_Y}\lambda_j(p)\operatorname{Div}_{K'_p}(D'_j), \\ {\mathcal D}^{\eta}_{\pi_p}&=&\sum_{j\in J_Y}\lambda_j(p)\operatorname{Div}_{p}(D_j). \end{eqnarray} We denote by $\operatorname{Div}_{K'_p}(D'_j)$ the germ of $D'_j$ along $K'_p=\pi^{-1}(p)$. In the same way, we denote by $\operatorname{Div}_{p}(D_j)$ the germ of $D_j$ at the point $p$. Note that Equation \ref{eq:enlafibra} is written without null coefficients. \begin{remark} \label{rk:igualdaddecoefficientes} Recall that $K'_p=\pi^{-1}(p)$. If $p\in K$, we have that $K'_p\subset K'$ and the reader can verify that ${\mathcal D}^{\eta'}_p$ is just the germ of ${\mathcal D}^{\eta'}$ along $\pi^{-1}(p)$. In this case, we have that \begin{equation} \label{eq:igualdaddecoeficientes} \lambda_j(p)=\lambda_j, \mbox{ for any } j\in J_Y\cup T_Y. \end{equation} When $p$ is not in the germification set $K$, we describe further the relationship between ${\mathcal D}^{\eta'}_p$ and the germ of ${\mathcal D}^{\eta'}$ along $\pi^{-1}(p)$. \end{remark} The following Lemma \ref{lema:thconequrreduccion} is our first step in the proof of Theorem \ref{teo:pilogarithmicmodel}: \begin{lemma} \label{lema:thconequrreduccion} Under the induction hypothesis, we have that \begin{equation} \label{eq:teoenpuntosdeequirreduccion} \lambda_1(p)=\sum_{j\in J_Y}\lambda_j(p)\nu_Y(D_j) \end{equation} and thus $\pi_p^*({\mathcal D}^{\eta}_{\pi_p})={\mathcal D}^{\eta'}_{p}$. 
\end{lemma} \begin{proof} By the induction hypothesis, we have that $\sigma_p^*({\mathcal D}^{\eta_1}_{\sigma_p})= {\mathcal D}^{\eta'}_{p}$. Reasoning as before, we have that Equation \ref{eq:teoenpuntosdeequirreduccion} holds if and only if $\pi_p^*({\mathcal D}^{\eta}_{\pi_p})={\mathcal D}^{\eta'}_{p}$. We know that the $\mathbb C$-divisor ${\mathcal D}^{\bar\eta'}$ on $(\Delta', K'_p)$, the $\mathbb C$-divisor ${\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$ on $(\Delta_1, K_{1,p})$ and the $\mathbb C$-divisor ${\mathcal D}^{\bar\eta}_{\bar\pi_p}$ on $(\Delta, p)$ are given by $$ {\mathcal D}^{\bar\eta'}={\mathcal D}^{\eta'}_{p}\vert_{\Delta'}, \quad {\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}={\mathcal D}^{\eta_1}_{\sigma_p}\vert_{\Delta_1}, \quad {\mathcal D}^{\bar\eta}_{\bar\pi_p}= {\mathcal D}^{\eta}_{\pi_p}\vert_{\Delta} . $$ Recalling that Theorem \ref{teo:pilogarithmicmodel} is true for two-dimensional ambient spaces, we have $$ \bar\pi_p^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta'}, \quad \bar\sigma_p^*({\mathcal D}^{\bar\eta_1}_{\bar\sigma_p})={\mathcal D}^{\bar\eta'}, \quad \bar\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}. $$ The property $\bar\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$ gives us that Equation \ref{eq:teoenpuntosdeequirreduccion} holds, as follows. The coefficient $\mu(p)$ of $\operatorname{Div}(E^1_1\cap \Delta_1)$ in $\bar\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})$ is given by $$ \mu(p)= \sum_{j\in J_Y}\lambda_j(p)\nu_p(D_j\cap \Delta). $$ Recalling that $(\Delta,p)$ is a Mattei-Moussu transversal, we have that $$ \nu_p(D_j\cap \Delta)=\nu_p(D_j),\quad \mbox{ for any } j\in J_Y. $$ Since $p$ is a point of $Y$-equimultiplicity for the generalized hypersurface $\mathcal F$, we have that $\nu_p(D_j)=\nu_Y(D_j)$, for all $j\in J_Y$. We conclude that $$ \mu(p)= \sum_{j\in J_Y}\lambda_j(p)\nu_Y(D_j). $$ On the other hand, the coefficient of $\operatorname{Div}(E^1_1\cap \Delta_1)$ in ${\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$ is equal to $\lambda_1(p)$. Since $\bar\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$, we have that $\lambda_1(p)=\mu(p)$ and we are done. \end{proof} In view of Remark \ref{rk:igualdaddecoefficientes} and Equation \ref{eq:igualdaddecoeficientes}, if we can choose $p\in K$, the equality in Equation \ref{eq:lambdauno} holds and we are done. This is the situation when $Y=\{p\}$ and more generally when $Y\subset K$. We have to consider the case when we cannot choose $p\in K$. This means that $Y\cap K\subset Z$, where $Z\subset Y$ is the closed analytic subset presented in Corollary \ref{cor: equirrducciongenerica}. We end with the following lemma: \begin{lemma} \label{lema.csindices} For any $j\in J_Y$ and $p\in Y\setminus Z$, we have that $ \lambda_j/\lambda_1=\lambda_j(p)/\lambda_1(p)$. \end{lemma} \begin{proof} Consider the reduction of singularities $\bar\pi_p$ as in Equation \ref{eq:pibarrap}. For any index $\ell\in J_Y\cup T_Y$, let us denote $D'_\ell(p)=D'_\ell\cap \Delta'$. The exceptional divisor of $\bar\pi_p$ is $$ E'(p)=\bar\pi_p^{-1}(p)=\cup_{t\in T_Y}D'_t(p) $$ and $\bar\pi_p$ is a composition of $r$ blowing-ups, the morphisms corresponding to the indices $t\in T_Y$. 
By connectedness of the dual graph of $\bar\pi_p$, there is a finite sequence $(t_u)_{u=1}^{v+1}$ of elements of $T_Y\cup \{j\}$, with $t_1=1$, $t_{v+1}=j$ such that $$ D'_{t_u}(p)\cap D'_{t_{u+1}}(p)\ne \emptyset, \quad u=1,2,\ldots,v. $$ Take a point $q_{u}\in D'_{t_u}(p)\cap D'_{t_{u+1}}(p)$, for $u=1,2,\ldots,v$. By the local description of simple singularities in dimension two, we have that $$ \operatorname{CS}_{q_u}({\mathcal F}'\vert_{\Delta'}, D'_{t_u}(p)) =-\lambda_{t_{u+1}}(p)/\lambda_{t_{u}}(p) , $$ see Remark \ref{rk:indicessingsimples}. The Camacho-Sad index $\operatorname{CS}_{q_u}({\mathcal F}'\vert_{\Delta'}, D'_{t_u}(p))$ is the Camacho-Sad index of the transversal type of ${\mathcal F}'$ along $D'_{t_u}\cap D'_{t_{u+1}}$. It can be read locally at the points in $ (D'_{t_u}\cap D'_{t_{u+1}})\cap K' $, from the germ of the $1$-form $\eta'$ along $K'$. We deduce that $$ \operatorname{CS}_{q_u}({\mathcal F}'\vert_{\Delta'}, D'_{t_u}(p))=-\lambda_{t_{u+1}}/\lambda_{t_{u}}. $$ Hence $\lambda_{t_{u+1}}(p)/\lambda_{t_{u}}(p)=\lambda_{t_{u+1}}/\lambda_{t_{u}}$ for any $u=1,2,\ldots,v$. Making the product of these equalities, we conclude that $\lambda_{j}(p)/\lambda_{1}(p)=\lambda_j/\lambda_1$ and we are done. \end{proof} As a consequence of Lemmas \ref{lema:thconequrreduccion} and \ref{lema.csindices}, we obtain that Equation \ref{eq:lambdauno} holds. This ends the proof of Theorem \ref{teo:pilogarithmicmodel}. \subsection{Existence of Divisorial Models} \label{Existence of Logarithmic Models} We apply first Theorem \ref{teo:pilogarithmicmodel} to a reduction of singularities $$ \pi:((M',K'), E',{\mathcal F}')\rightarrow (({\mathbb C}^n,0), \emptyset,{\mathcal F}),\quad K'= \pi^{-1}(0), $$ of the foliated space $(({\mathbb C}^n,0), \emptyset,{\mathcal F})$. That is, we take an integrable logarithmic differential $1$-form $\eta$ fully associated to ${\mathcal F}$, we consider the $\mathbb C$-divisor ${\mathcal D}'={\mathcal D}^{\eta'}$ on $M'$, where $\eta'=\pi^*\eta$ and finally, we consider the ${\mathbb C}$-divisor ${\mathcal D}$ on $({\mathbb C}^n,0)$ defined by the property $\pi^*{\mathcal D}={\mathcal D}'$, whose existence has been shown in Theorem \ref{teo:pilogarithmicmodel}. We are going to verify that ${\mathcal D}$ is a divisorial model for $\mathcal F$. Note that the support of $\mathcal D$ is the union $D$ of the invariant hypersurfaces of $\mathcal F$. Consider a ${\mathcal D}$-transverse holomorphic map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$. By Proposition \ref{prop:appdos} in the Appendix I, there is a commutative diagram of holomorphic maps \begin{equation} \begin{array}{ccc} ({\mathbb C}^2,0)&\stackrel{\sigma}{\longleftarrow}& (N',\sigma^{-1}(0)) \\ \downarrow \phi& & \downarrow \psi \\ ({\mathbb C}^n,0)&\stackrel{\pi}{\leftarrow}& (M',\pi^{-1}(0)) \end{array} \end{equation} such that $\sigma$ is the composition of a finite sequence of blowing-ups. \begin{lemma} \label{lem:conmutatividad} In the above situation, we have that $\phi^*{\mathcal F}$ exists and $\sigma^*({\phi^*{\mathcal F}})=\psi^*{\mathcal F}'$. \end{lemma} \begin{proof} Let $\omega$ be an integrable holomorphic $1$-form defining $\mathcal F$, without common factors in its coefficients. Assume that $\phi^*\omega\ne 0$, then we are done. 
Indeed, in this case $\phi^*{\mathcal F}$ is defined by $\phi^*\omega$; since $\sigma$ is a sequence of blowing-ups, we have that $\sigma^*(\phi^*\omega)\ne 0$ and $\sigma^*(\phi^*{\mathcal F})$ is defined by the nonzero $1$-form $\sigma^*(\phi^*\omega)$. Noting that $$ \sigma^*(\phi^*\omega)=(\phi\circ\sigma)^*\omega=(\pi\circ\psi)^*\omega=\psi^*(\pi^*\omega), $$ we conclude that $\phi^*{\mathcal F}$ exists and $\sigma^*({\phi^*{\mathcal F}})=\psi^*{\mathcal F}'$. Let us show that $\phi^*\omega\ne 0$. Assume by contradiction that $\phi^*\omega= 0$. Take an analytic germ of curve $(\Gamma, 0)$ such that $\phi(\Gamma)\not\subset D$. The existence of such a $(\Gamma, 0)$ is guaranteed by the hypothesis that $\operatorname{Im}(\phi)\not\subset D$. The assumption that $\phi^*\omega=0$ implies that $(\phi(\Gamma),0)$ is an invariant germ of curve of $\mathcal F$. Taking a reduction of singularities of $D$, which induces a reduction of singularities of $\mathcal F$, we see that any germ of invariant curve must be contained in $D$. Contradiction. \end{proof} In view of Proposition \ref{pro:modelostrasunaexplosion}, in order to prove that $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$, it is enough to prove that $ \sigma^*(\phi^*{\mathcal D}) $ is a divisorial model for $\sigma^*(\phi^*{\mathcal F})$. By Lemma \ref{lem:conmutatividad} above, we have that $\sigma^*({\phi^*{\mathcal F}})=\psi^*{\mathcal F}'$. Moreover, we also have $$ \sigma^*(\phi^*{\mathcal D})=\psi^*{\mathcal D}'. $$ Thus, we have to verify that $\psi^*{\mathcal D}'$ is a divisorial model for $\psi^*{\mathcal F}'$. Let us work locally at a point $p\in\sigma^{-1}(0)$ and put $q=\psi(p)$. First of all, we take local coordinates $(x_1,x_2,\ldots,x_n)$ at $q$ such that the foliation ${\mathcal F}'$ is locally given at $q$ by an integrable meromorphic $1$-form $$ \eta'=\sum_{i=1}^\tau (\lambda_i+f_i(x_1,x_2,\ldots,x_\tau))\frac{dx_i}{x_i},\quad f_i(0,0,\ldots,0)=0, $$ where $\sum_{i=1}^\tau n_i\lambda_i\ne 0$ for any $\mathbf{n}\in {\mathbb Z}^\tau_{\geq 0}\setminus\{0\}$. Recall that then the total transform of $D$ is locally given at $q$ by the union of the hyperplanes $x_i=0$, with $i=1,2,\ldots,\tau$. Moreover, we know that the $\mathbb C$-divisor ${\mathcal D}'$ is locally given at $q$ by $$ {\mathcal D}'=\sum_{i=1}^\tau \lambda_i (x_i=0). $$ We have to show that $\psi^*{{\mathcal D}'}$ is a divisorial model for $\psi^*{{\mathcal F}'}$ at $p$. We apply now Proposition \ref{prop:appuno} in Appendix I as follows. Take the list of functions $$ {\mathcal L}'_p=\{\psi_{i,p}\}_{i=1}^\tau, $$ where $\psi_i=x_i\circ\psi$, for $i=1,2,\ldots,\tau$ and $\psi_{i,p}$ denotes the germ at $p$ of $\psi_i$. There is a composition of blowing-ups centered at points $$ \sigma': (N'',\sigma'^{-1}(p))\rightarrow (N',p) $$ in such a way that the transformed list ${\mathcal L}''=\{f_i\}_{i=1}^\tau$ is desingularized, where $f_i=\psi_{i,p}\circ \sigma'$, see Appendix I. In particular, for any point $p'\in \sigma'^{-1}(p)$, there are local coordinates $u,v$ such that for any $i\in\{1,2,\ldots,\tau\}$ with $f_{i,p'}\ne 0$, there is a unit $U_{i,p'}\in {\mathcal O}_{N'',p'}$ and $(a_i,b_i)\in {\mathbb Z}^2_{\geq 0}$ with $f_{i,p'}=U_{i,p'} u^{a_i}v^{b_i}$; note also that we have $a_i+b_i\geq 1$, since $\sigma'(p')=p$. 
Now, in order to prove that $\psi^*{{\mathcal D}'}$ is a divisorial model for $\psi^*{{\mathcal F}'}$ at $p$, it is enough to prove that ${\sigma'}^*(\psi^*{{\mathcal D}'})$ is a divisorial model for ${\sigma'}^*(\psi^*{{\mathcal F}'})$ at any point $p'$ of ${\sigma'}^{-1}(p)$. By the local expression of $\psi\circ\sigma'$ at $p'$ in appropriate local coordinates $u,v$, we conclude that ${\sigma'}^*\psi^*{{\mathcal F}'}$ is generated by $$ {\sigma'}^*\psi^*\eta'= \left(\sum_{i=1}^\tau a_i\lambda_i+g(u,v) \right) \frac{du}{u}+\left(\sum_{i=1}^\tau b_i\lambda_i+h(u,v) \right) \frac{dv}{v}+\alpha, $$ where $\alpha$ is a holomorphic $1$-form. Put $\mu_1=\sum_{i=1}^\tau a_i\lambda_i$ and $\mu_2=\sum_{i=1}^\tau b_i\lambda_i$. We know that not all the germs of functions $f_{i,p'}$ are identically zero, for $i=1,2,\ldots,\tau$; otherwise we would have ${\sigma'}^*\psi^* \eta'=0$, which is not possible since we know that $\psi^*\eta'\ne 0$. Then, some of the $a_i, b_i$ are nonzero and by the non-resonance properties either $\mu_1$ or $\mu_2$ is nonzero. Say that $\mu_1\ne 0$; since we are dealing with generalized hypersurfaces, there are no saddle-nodes, and then either $\mu_1\mu_2\ne 0$ or we have a non-singular foliation, in the sense that $\mu_2+h(u,v)$ is identically zero. Now, we have that $$ {\sigma'}^*\psi^*{\mathcal D}'=\mu_1(u=0)+\mu_2(v=0) $$ locally at $p'$. Hence ${\sigma'}^*\psi^*{\mathcal D}'$ is a divisorial model for ${\sigma'}^*(\psi^*{\mathcal F}')$ at $p'$. This ends the proof of Theorem \ref{teo:main}. \section{Logarithmic Models} \label{Logarithmic Models} Let $\mathcal F$ be a generalized hypersurface on $(M,K)$, where $M$ is a germ of non-singular complex analytic variety along a connected and compact analytic subset $K\subset M$. Let us assume that $\mathcal D$ is a divisorial model for ${\mathcal F}$. By definition, any $\mathcal D$-logarithmic foliation $\mathcal L$ on $(M,K)$ is a {\em logarithmic model} for $\mathcal F$. In the case of $K=\{0\}$ and hence $(M,K)=({\mathbb C}^n,0)$, the existence of logarithmic models is assured. Indeed, by Theorem \ref{teo:main} there is a divisorial model $\mathcal D$ for $\mathcal F$, which we can write as $$ {\mathcal D}=\sum_{i=1}^s\lambda_iS_i. $$ Choosing a reduced local equation $f_i=0$ for each $S_i$, the closed logarithmic $1$-form $\eta=\sum_{i=1}^s\lambda_i\,df_i/f_i$ generates a logarithmic model $\mathcal L$. This gives meaning to the main theorem stated in the Introduction. Let us state certain properties of logarithmic models that are directly deduced from the results presented in this work: \begin{enumerate} \item If $(M,K)=({\mathbb C}^2,0)$, a logarithmic foliation $\mathcal L$ is a logarithmic model for a generalized curve $\mathcal F$ if and only if ${\mathcal L}$ and $\mathcal F$ have the same Camacho-Sad indices with respect to the invariant branches. \item Assume that $\pi: ((M',K'),E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F})$ is the composition of a finite sequence of admissible blowing-ups, where $((M,K),E,{\mathcal F})$ is a foliated space of generalized hypersurface type. If $\mathcal L$ is a logarithmic model for $\mathcal F$, then $\pi^*{\mathcal L}$ is a logarithmic model for ${\mathcal F}'$. \item Let $\mathcal F$ be a generalized hypersurface on $({\mathbb C}^n,0)$ and denote by $S$ the union of its invariant hypersurfaces. Consider a logarithmic foliation $\mathcal L$ on $({\mathbb C}^n,0)$. The following statements are equivalent: \begin{enumerate} \item $\mathcal L$ is a logarithmic model for $\mathcal F$. 
\item For any $S$-transverse map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ we have that $\phi^*{\mathcal L}$ is a logarithmic model for $\phi^*{\mathcal F}$. \end{enumerate} \end{enumerate} A question for the future is to develop a similar theory concerning the dicritical case. In dimension two, some results are known \cite{Can-Co}. \section{Appendix I} We collect here the results in Proposition \ref{prop:appuno} and Proposition \ref{prop:appdos} concerning the reduction of singularities of lists of functions in dimension two and the lifting of morphisms by a sequence of blowing-ups. These results are well known. We just state the first one and we prove the second one as a consequence of it. Let us note that there are stronger results on monomialization of morphisms by D. Cutkosky \cite{Cut} and Akbulut and King (Chapter 7 of \cite{Akb-K}, according to \cite{Cut}); we do not need such strong versions in this work. Let $N$ be a two-dimensional complex analytic variety and consider a finite list ${\mathcal L}=\{f_i\}_{i=1}^t$, where $f_i:N\rightarrow {\mathbb C}$ is a holomorphic function for $i=1,2,\ldots,t$. Given a point $p\in N$, denote by ${\mathcal L}_p=\{f_{i,p}\}_{i=1}^t$ the list of germs at $p$ of the functions $f_i$. We say that ${\mathcal L}$ is {\em desingularized} at $p\in N$, or equivalently that {\em the list ${\mathcal L}_p$ is desingularized,\/} if and only if there are local coordinates $(u,v)$ at $p$ such that the following two properties hold: \begin{enumerate} \item For any $i\in\{1,2,\ldots,t\}$ with $f_{i,p}\ne 0$, there is a unit $U_{i,p}\in {\mathcal O}_{N,p}$ and $(a_i,b_i)\in {\mathbb Z}^2_{\geq 0}$ such that $f_{i,p}=U_{i,p} u^{a_i}v^{b_i}$. \item Given $i,j\in\{1,2,\ldots,t\}$, if $f_{i,p}$ does not divide $f_{j,p}$, then $f_{j,p}$ divides $f_{i,p}$. \end{enumerate} We say that {\em ${\mathcal L}$ is desingularized} when it is desingularized at any point $p\in N$. This is an open property, in the sense that the points where ${\mathcal L}$ is desingularized are the points of an open subset of $N$. Given a morphism $\sigma:N'\rightarrow N$, the transform $\sigma^*{\mathcal L}$ of the list $\mathcal L$ is the list in $N'$ defined by $$ \sigma^*{\mathcal L}=\{f_i\circ\sigma\}_{i=1}^t. $$ Next, we state a well-known result on desingularization of lists of functions: \begin{proposition} \label{prop:appuno} Let ${\mathcal L}=\{f_i\}_{i=1}^t$ be a list of functions on a non singular two-dimensional holomorphic variety $(N,C)$, which is a germ along a compact set $C\subset N$. There is a morphism $$ \sigma:(N',\sigma^{-1}(C))\rightarrow (N,C), $$ which is the composition of a finite sequence of blowing-ups centered at points, in such a way that the transformed list $\sigma^*{\mathcal L}=\{f_i\circ \sigma\}_{i=1}^t$ is desingularized. \end{proposition} \begin{proof} This result is an easy consequence of the classical results of desingularization for plane curves. The reader may look at \cite{Lip}. \end{proof} \begin{remark} The above Proposition \ref{prop:appuno} is true without restriction on the dimension of $N$ (with a similar definition of what a desingularized list is). This is a consequence of Hironaka's reduction of singularities in \cite{Hir}. \end{remark} \begin{proposition} \label{prop:appdos} Let $\phi:(N,C)\rightarrow (M,K)$ be a holomorphic map between connected germs of non-singular analytic varieties along compact sets, where $\dim N=2$. 
Consider a morphism $\pi:(M',\pi^{-1}(K))\rightarrow (M,K) $ that is the composition of a finite sequence of blowing-ups with non singular centers. Let us assume that the image of $\phi$ is not contained in the projection by $\pi$ of the centers of blowing-up. Then, there is a morphism $$ \sigma: (N',\sigma^{-1}(C))\rightarrow (N,C) $$ that is the composition of a finite sequence of blowing-ups centered at points, in such a way that there is a unique morphism $ \psi: (N',\sigma^{-1}(C))\rightarrow (M',\pi^{-1}(K)) $ such that $\phi\circ\sigma=\pi\circ\psi$. \end{proposition} \begin{proof} Let us show first that the result is true in the special case that $$ (N,C)=({\mathbb C}^2,0), \quad (M,K)=({\mathbb C}^n,0) $$ and $\pi$ is the single blowing-up with center $Y=(x_1=x_2=\cdots=x_t=0)$. Consider the list of functions ${\mathcal L}=\{\phi_i\}_{i=1}^t$, where $\phi_i=x_i\circ \phi$. Take a desingularization $$ \sigma: (N',\sigma^{-1}(0))\rightarrow ({\mathbb C}^2,0), $$ of $\mathcal L$ as stated in Proposition \ref{prop:appuno}, where ${\mathcal L}'=\{\varphi_i\}_{i=1}^t$ is the transformed list, recalling that $\varphi_i=\phi_i\circ\sigma$. Let us represent $\phi$ by a morphism $ \phi_U:U\rightarrow V $ where $U\subset{\mathbb C}^2$ is a connected open neighborhood of the origin $0\in {\mathbb C}^2$ and $V={\mathbb D}_\epsilon^n\subset {\mathbb C}^n$ is a poly-cylinder around the origin in such a way that the center of $\pi$ is represented by $$ Y=(x_1=x_2=\cdots=x_t=0)\subset V. $$ We also consider the morphism $\sigma_U:U'\rightarrow U$ obtained by the same blowing-ups indicated by $\sigma$ and we denote by $\pi_V:V'\rightarrow V$ the blowing-up with center $Y$. Let us put $\phi_{i,U}=x_i\circ\phi_U$ and $\varphi_{i,U'}= \phi_{i,U}\circ \sigma_U$. Since the property of being desingularized is open, by taking $U$ small enough, we can assume that the list ${\mathcal L}_{U'}'=\{\varphi_{i,U'}\}_{i=1}^t$ is desingularized at any point of $U'$. In this situation, let us show that there is a unique holomorphic map $$ \psi_{U'}:U'\rightarrow V' $$ such that $\phi_U\circ \sigma_U=\pi_V\circ\psi_{U'}$. More precisely, we are going to prove that given any nonempty open subset $W\subset U'$, there is a unique holomorphic map $$ \psi_{W}:W\rightarrow V' $$ such that $\phi_U\circ ({\sigma_U}\vert_W)=\pi_V\circ\psi_W$. Let us recall how is constructed the blowing-up $\pi_V$. Take the projective space ${\mathbb P}^{t-1}_{\mathbb C}$ and consider the closed subset $Z\subset {\mathbb P}^{t-1}_{\mathbb C}\times {\mathbb D}_\epsilon^t$ given by $$ Z=\{([a_1,a_2,\ldots,a_t], (b_1,b_2,\ldots, b_t)); \quad a_ib_j=a_jb_i, \; 1\leq i,j\leq t\}. $$ The blowing-up $\tilde\pi$ of the origin of ${\mathbb D}^t_{\epsilon}$ is the second projection $ \tilde\pi: Z\rightarrow {\mathbb D}^t_{\epsilon} $. The blowing-up of $V={\mathbb D}^t_{\epsilon}\times {\mathbb D}^{n-t}_{\epsilon}$ with center $Y$ is the product $$ \pi=\tilde\pi\times \operatorname{id}_{{\mathbb D}^{n-t}_{\epsilon}}: Z\times {\mathbb D}^{n-t}_{\epsilon}\rightarrow {\mathbb D}^{t}_{\epsilon}\times {\mathbb D}^{n-t}_{\epsilon}=V. $$ Now, let us show the existence and uniqueness of $\psi_{W}$. Let us consider the open subset $W_0\subset W$ defined by $ W_0=W\setminus \sigma_{U}^{-1}(\phi_U^{-1}(Y)) $. By hypothesis, we know that $W_0$ is a dense open subset of $W$. The uniqueness of $\psi_W$ is then implied by the uniqueness of $\psi_{W_0}$. 
Take $p\in W_0$ and consider the vectors \begin{eqnarray*} {\mathbf v}^t(p)&=&(\varphi_{1,U'}(p), \varphi_{2,U'}(p), \ldots, \varphi_{t,U'}(p))\\ {\mathbf v}^{n-t}(p)&=&(\varphi_{t+1,U'}(p), \varphi_{t+2,U'}(p), \ldots, \varphi_{n,U'}(p)). \end{eqnarray*} Since $p\in W_0$, we see that ${\mathbf v}^t(p)$ is not the zero vector and we necessarily have that \begin{equation} \label{eq:uniciddlevantamiento} \psi_W(p)=([{\mathbf v}^t(p)], {\mathbf v}^t(p), {\mathbf v}^{n-t}(p) ). \end{equation} This shows the uniqueness of $\psi_W$. Now take a point $p\in W$; we denote by $\varphi_{i,p}$ the germ of $\varphi_{i,U'}$ at $p$, even in the case when $p\notin \sigma^{-1}(0)$. Consider the set $$ I_p=\{i\in \{1,2,\ldots,t\}; \quad \varphi_{i,p} \mbox{ divides } \varphi_{j,p},\; \mbox{ for any } j\in \{1,2,\ldots,t\} \}. $$ Let us note that $I_p\ne \emptyset$. Indeed, saying that $I_p=\emptyset$ means that $\varphi_{i,p}=0$ for all $i=1,2,\ldots,t$; this implies that $\varphi_{i,U'}=0$ and thus $\phi_{i,U}=0$ for any $i=1,2,\ldots,t$; this contradicts the hypothesis that the image of $\phi$ is not contained in $Y$. Let us choose an index $i\in I_p$. Define the germs $$ \varphi_{ji,p}=\left\{ \begin{array}{ccc} \varphi_{j,p}/\varphi_{i,p}&\mbox{ if }& 1\leq j\leq t,\\ \varphi_{j,p}&\mbox{ if }& t+1\leq j\leq n.\\ \end{array} \right. $$ Note that $\varphi_{ii,p}=1$. We can define the germ $\psi_p$ of $\psi_W$ by $$ \psi_p= ([\varphi_{1i,p},\varphi_{2i,p},\ldots,\varphi_{ti,p}], \varphi_{1i,p},\varphi_{2i,p},\ldots,\varphi_{ti,p},\varphi_{t+1,i,p},\ldots, \varphi_{ni,p}). $$ The definition does not depend on the index $i\in I_p$ and the uniqueness is guaranteed since the restriction to $W_0$ is as indicated in Equation \ref{eq:uniciddlevantamiento}. Let us consider now the case where $\pi:(M',\pi^{-1}(K))\rightarrow (M,K) $ is a single blowing-up with center $(Y, Y\cap K)$ and $\phi: (N,C)\rightarrow (M,K)$ is as in the statement. Once this case is solved, we obtain directly the general case by induction on the number of blowing-ups in $\pi$. In view of the previous result, for any point $p\in N$ there is an open set $U_p\subset N$ with $p\in U_p$ and a finite sequence of blowing-ups over $p$ $$ \sigma_{U_p}:U'_p\rightarrow U_p $$ such that for any open subset $W'\subset U'_p$ there is a unique map $\psi_{W'}:W'\rightarrow M'$ such that $\phi\circ (\sigma_{U_p}\vert_{W'})=\pi\circ \psi_{W'}$. Note that $$ \psi_{W'}=\psi_{U'_p}\vert_{W'}, $$ for any open set $W'\subset U'_p$. By the compactness of $C\subset N$, we can cover $C$ by finitely many open subsets of the type $U_p$, with $p\in C$. That is, there are finitely many points $p_1,p_2,\ldots,p_r$ in $C$ such that $$ C\subset \cup_{i=1}^r U_i,\quad U_i=U_{p_i},\; i=1,2,\ldots,r. $$ Without loss of generality, we assume that $p_j\notin U_i$, if $j\ne i$. We can glue the morphisms $\sigma_{U_i}: U'_i\rightarrow U_i$ into a morphism $$ \sigma_U: U'\rightarrow U=\cup_{i=1}^rU_i, $$ in such a way that we identify $U'_i=\sigma_U^{-1}(U_i)$. Of course, the morphism $\sigma_U$ is the composition of a sequence of blowing-ups centered at points over $p_1,p_2,\ldots,p_r$. Note that $\sigma_U$ induces a morphism of germs $$ \sigma: (N',\sigma^{-1}(C))\rightarrow (N,C), $$ where $(N',\sigma^{-1}(C))$ is represented by $(U',\sigma_U^{-1}(C))$ and $(N,C)$ by $(U,C)$. On the other hand, by the uniqueness property, we have that $$ \psi_{U'_i}\vert_{U'_i\cap U'_j}=\psi_{U'_j}\vert_{U'_i\cap U'_j}. 
$$ Then, we can also glue the morphisms $\psi_{U'_i}$ to a morphism $\psi_{U'}:U'\rightarrow M'$ such that $$ \phi\circ \sigma_{U}=\pi\circ \psi_{U'}. $$ We have an induced morphism of germs $\psi:(N',\sigma^{-1}(C))\rightarrow (M',\pi^{-1}(K))$ with the property that $\pi\circ\psi=\phi\circ\sigma$. This ends the proof. \end{proof} \section{Appendix II} Here we provide a proof of Proposition \ref{prop:simplepointsandlogorder}. Recall that we have a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type and a point $p\in K$. We have to show that the following statements are equivalent \begin{enumerate} \item The point $p$ is a simple point for $((M,K), E, {\mathcal F})$. \item $\operatorname{LogOrd}_p({\mathcal F},E)=0$. \end{enumerate} We have that (1) implies (2) as a direct consequence of the definition of simple point in Subsection \ref{definicionsimplepoint}. Let us suppose that $\operatorname{LogOrd}_p({\mathcal F},E)=0$. We assume that the dimensional type $\tau$ is equal to $n$. The case when $\tau<n$ may be done in the same way. Moreover, since we are working locally at $p$, we identify $(M,p)=({\mathbb C}^n,0)$ and we work at the origin of ${\mathbb C}^n$. Choose local coordinates $x_1,x_2,\ldots,x_n$ such that $E=\left(\prod_{i=1}^ex_i=0\right)$. Let us see first that $n-1\leq e\leq n$. If this is not the case, we have $e\leq n-2$ and one of the following expressions holds for a local generator $\eta$ of $\mathcal F$ (up to a reordering and to multiply by a unit) \begin{enumerate} \item[a)]$\eta=dx_1/x_1+\sum_{i=2}^e a_idx_i/x_i+a_{e+1}dx_{e+1}+ a_{e+2}dx_{e+2}+\sum_{i=e+3}^n a_idx_i$. \item[b)] $\eta=\sum_{i=1}^e a_idx_i/x_i+dx_{e+1}+ a_{e+2}dx_{e+2}+\sum_{i=e+3}^n a_idx_i$. \end{enumerate} This situation does not hold, since we can find a non-singular vector field $\xi$ such that $\eta(\xi)=0$, hence $\xi$ trivializes the foliation and $\tau<n$. The vector field $\xi$ can be taken as follows $$ \xi=\left\{ \begin{array}{cc} a_{e+1}x_1\partial/\partial x_1-\partial /\partial x_{e+1},&\mbox{ in case a)}\\ a_{e+2}\partial/\partial x_{e+1}-\partial /\partial x_{e+2} ,&\mbox{ in case b)} \end{array} \right. $$ Thus, we conclude that $n-1\leq e\leq n$. Note that, even when $e=n-1$, the case a) above does not hold. Then, we have that $e=n$ or $e=n-1$ and one of the following situations holds: \begin{enumerate} \item[i)]$\eta=dx_1/x_1+\sum_{i=2}^n a_idx_i/x_i$. \item[ii)] $\eta=\sum_{i=1}^{n-1} a_idx_i/x_i+dx_{n}$, with $a_i(0)=0$, for $i=1,2,\ldots,n-1$. \end{enumerate} Assume first we are in the situation i) and put $\lambda_1=1$, $\lambda_i=a_i(0)$, for $i=2,3,\ldots,n$. We have to show that there is no resonance $\sum_{i=1}^{n}m_i\lambda_i=0$ with $\mathbf{m}\ne 0$, for $\mathbf{m}=(m_1,m_2,\ldots,m_n)$. Let us reason by contradiction, assuming that there is such a resonance. Note that there is at least one $m_i\ne 0$ with $2\leq i\leq n$. Up to a reordering, we assume that $m_2m_3\cdots m_\ell\ne0$ and $m_i=0$ for $\ell <i\leq n$. Consider the map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ given by $$ x_1=uv^{m_1}, x_2=v^{m_2},\ldots,x_\ell=v^{m_\ell}; \quad x_i=0,\mbox{ if }\ell< i\leq n. $$ Then we have $$ \phi^*\eta=\frac{du}{u}+b(u,v)\frac{dv}{v}, $$ where $b(u,v)=m_1\lambda_1+\sum_{i=2}^\ell m_i a_i(uv^{m_1},v^{m_2},\ldots,v^{m_\ell},0,\ldots,0)$. Note that $b(0,0)=0$, since $\sum_{i=1}^nm_i\lambda_i=0$. We have two possible situations: \begin{enumerate} \item The function $b(u,v)$ is not divisible by $v$. 
In this case, we have that $\phi^*\eta$ defines a saddle-node. This is not possible, since $\mathcal F$ is complex-hyperbolic. \item We have that $b(u,v)=vb'(u,v)$. Then $\phi^*{\mathcal F}$ is defined by the non-singular $1$-form $du+ub'(u,v)dv$. We know that there is a unit $U(u,v)$ such that $du+ub'(u,v)dv=d(uU(u,v))$ and we take new local coordinates $u^*=uU(u,v)$ and $v$. We have that $\phi^*{\mathcal F}=(du^*=0)$ and $\phi(v=0)$ is the origin. This implies that $\mathcal F$ is dicritical, but this is not possible since it is a generalized hypersurface. \end{enumerate} Assume now that we are in the situation ii). We are going to show the existence of a non-singular hypersurface $H$ having normal crossings with $E$ such that we get a simple corner for $\mathcal F$ with respect to $E\cup H$. Note that $H$ should have an equation $x_n=f(x_1,x_2,\ldots,x_{n-1})$. Once we find such an $H$, we are done, since we are in situation i) with respect to $E\cup H$. In particular, we are done when $x_n$ divides $a_i$ for $i=1,2,\dots,n-1$. Let us rename the variables as $$ \mathbf{y}=(y_1,y_2,\ldots,y_{n-1})=(x_1,x_2,\ldots,x_{n-1}),\quad z=x_n. $$ We end the proof by providing a coordinate change $z^*=z-f(\mathbf{y})$ with the property that $z^*=0$ is an invariant hypersurface of $\mathcal F$. The existence of such a coordinate change depends upon certain non-resonances in $\eta$, which are assured by the hypothesis that $\mathcal F$ is a generalized hypersurface. Let us make this precise. We write \begin{equation} \label{eq:omega} \eta=z(\omega_0+\tilde\omega)+\omega'+dz, \end{equation} where $\omega_0=\sum_{i=1}^{n-1}\mu_i{dy_i}/{y_i}$, $\tilde\omega=\sum_{i=1}^{n-1}\tilde a_i(\mathbf{y},z){dy_i}/{y_i}$ and $\omega'=\sum_{i=1}^{n-1} a'_i(\mathbf{y}){dy_i}/{y_i}$, with $\mu_i\in {\mathbb C}$, $\tilde a_i(\mathbf{0}, 0)=0$ and $a'_i(\mathbf{0})=0$, for $i=1,2,\ldots,n-1$. \begin{lemma} \label{lema:noresonanciatotal} In the above situation, for any $ \mathbf{m}=(m_1,m_2,\ldots,m_{n-1})\in {\mathbb Z}^{n-1}_{\geq 0}$, with $\mathbf{m}\not=\mathbf{0}$, we have that $\omega_0+\sum_{i=1}^{n-1}m_i{dy_i}/{y_i}\ne 0$. \end{lemma} \begin{proof} Let us reason by contradiction, assuming that there is $\mathbf{m}\in {\mathbb Z}^{n-1}_{\geq 0}$, with $\sum_{i=1}^{n-1}m_i=m>0$, such that $\omega_0=-\sum_{i=1}^{n-1}m_i{dy_i}/{y_i}$. Consider the map $$ \phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0) $$ given by $z=u$ and $y_i=v$, for $i=1,2,\ldots,n-1$. Then we have $$ \phi^*\eta= (-mu+ h(v)+ug(u,v))\frac{dv}{v}+du , $$ where $g(0,0)=0$ and $h(0)=0$. In particular, this singularity is a pre-simple singularity in dimension two (non-nilpotent linear part) that is not simple, since we have the resonance $1\cdot(-m)+m\cdot 1=0$. These singularities are either dicritical or they have a hidden saddle-node, see \cite{Can-C-D}. This is the desired contradiction. \end{proof} Let us perform the coordinate change $z\mapsto z^*$ as a Krull limit, in the following way. Assume that the order $\nu_0(\omega')$ of the coefficients of $\omega'$ is $\nu_0(\omega')=m>0$ and let us write $$ \omega'= \bar\omega+\omega'', $$ where $\nu_0(\omega'')>m$ and the coefficients of $\bar\omega$ are homogeneous of degree $m$. We are going to show that there is a homogeneous polynomial $p_m(\mathbf{y})$ of degree $m$ such that if $z^{(m)}=z-p_m(\mathbf{y})$, then $$ \eta=z^{(m)}\left\{\omega_0+\tilde\omega^{(m)}\right\}+{\omega'}^{(m)}+dz^{(m)}, $$ with the same structure as in Equation \ref{eq:omega} but with $\nu_0({\omega'}^{(m)})>m$. 
Now, we are done by taking the Krull limit of the $z^{(m)}$. Of course, we obtain a formal invariant hypersurface $z^*=0$, but we know that all the formal invariant hypersurfaces of generalized hypersurfaces are in fact convergent ones and thus we are done. Now, looking at the $\mathbf{y}$-homogeneous part of degree $m$ in the Frobenius integrability condition $\eta\wedge d\eta=0$, we have that $ d\bar\omega=\bar\omega\wedge\omega_0 $. Write $\eta$ in the coordinates $\mathbf{y},z^{(m)}$: $$ \eta=z^{(m)}(\omega_0+\tilde\omega)+(p_m\tilde\omega+\omega'')+dz^{(m)} + (p_m\omega_0+\bar\omega + dp_m). $$ If there is $p_m$ such that $p_m\omega_0+\bar\omega + dp_m=0$, then we are done. Let us write $$ \bar\omega=\sum_{\vert\mathbf{m}\vert=m}\mathbf{y}^\mathbf{m} \bar\omega_\mathbf{m}, \quad d\bar\omega=\sum_{\vert\mathbf{m}\vert=m}\mathbf{y}^\mathbf{m}\frac{d \mathbf{y}^\mathbf{m}}{\mathbf{y}^\mathbf{m}}\wedge\bar\omega_\mathbf{m}, $$ where the $1$-forms $\bar\omega_\mathbf{m}$ have constant coefficients. Moreover $$ \bar\omega\wedge\omega_0= \sum_{\vert\mathbf{m}\vert=m}\mathbf{y}^\mathbf{m} \bar\omega_\mathbf{m}\wedge\omega_0. $$ Since $ d\bar\omega=\bar\omega\wedge\omega_0 $, we conclude that $$ \left\{ ({d \mathbf{y}^\mathbf{m}}/{\mathbf{y}^\mathbf{m}})+\omega_0\right\}\wedge \bar\omega_\mathbf{m}=0, \quad \mbox{ for all } \mathbf{m}\in {\mathbb Z}^{n-1}_{\geq 0} , \mbox{ with } \vert{\mathbf{m}}\vert=m. $$ By Lemma \ref{lema:noresonanciatotal}, we know that $ 0\ne d \mathbf{y}^\mathbf{m}/\mathbf{y}^\mathbf{m}+\omega_0 $. Hence, there are constants $c_\mathbf{m}\in {\mathbb C}$ such that $$ \bar\omega_\mathbf{m}+c_\mathbf{m} \left\{ ({d \mathbf{y}^\mathbf{m}}/{\mathbf{y}^\mathbf{m}})+\omega_0\right\}=0, \quad \mbox{ for all } \mathbf{m}\in {\mathbb Z}^{n-1}_{\geq 0} , \mbox{ with } \vert{\mathbf{m}}\vert=m. $$ Taking $p_m=\sum_{\vert\mathbf{m}\vert=m}c_{\mathbf{m}}\mathbf{y}^{\mathbf{m}}$, we obtain that $p_m\omega_0+\bar\omega + dp_m=0$ and we are done.
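To make the previous construction more concrete, we include a brief illustration in the simplest case; the notation is the one introduced above and the computation is only meant as an illustration.
\begin{remark}
Assume that $n=2$, so that $\mathbf{y}=y$ is a single variable, and write
$$
\eta=z\big(\mu+\tilde a(y,z)\big)\frac{dy}{y}+a'(y)\frac{dy}{y}+dz,\quad \tilde a(0,0)=0,\quad a'(0)=0,
$$
so that $\omega_0=\mu\,dy/y$. In this case Lemma \ref{lema:noresonanciatotal} says that $\mu+m\ne 0$ for every integer $m>0$. If $a'(y)=by^m+\cdots$ with $b\ne 0$, then $\bar\omega=by^m\,dy/y$, we look for $p_m=cy^m$ and the equation $p_m\omega_0+\bar\omega+dp_m=0$ reads
$$
(c\mu+b+mc)\,y^m\,\frac{dy}{y}=0,
$$
hence it is solved by $c=-b/(\mu+m)$, which is well defined precisely because of the non-resonance. Iterating this step and taking the Krull limit, we obtain the formal, and hence convergent, invariant hypersurface $z^*=0$.
\end{remark}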
\section{Introduction} In recent years, computer vision has made tremendous progress across many complex recognition tasks, including image classification \cite{krizhevsky2012nips,zheng2017learning}, image captioning \cite{chen2017sca,johnson2016cvpr,xu2015show,you2016image}, image generation \cite{tang2014nips,zhang2018arxiv,zhao2018modular} and visual question answering (VQA) \cite{antol2015iccv,fukui2016multimodal,johnson2017cvpr,lu2016hierarchical,seo2017visual,tommasi2014bmvc,xu2016ask,yang2016stacked}. Arguably, much of this success can be attributed to the use of visual attention mechanisms which, similar to human perception, identify the important regions of an image. Attention mechanisms typically produce a spatial mask for the given image feature tensor. In an ideal scenario, the mask is expected to have higher activation values over the features corresponding to the regions of interest, and lower activation values everywhere else. For tasks that are multi-modal in nature, like VQA, a query (\eg, a question) can additionally be used as an input to generate the mask. In such cases, the attention activation is usually a function of the similarity between the corresponding encoding of the image region and the question in a pre-defined or learned embedding space. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{model} \vspace{-0.1in} \caption{{\bf AttentionRNN.} Illustration of the proposed structured attention network as a module for a downstream task.} \label{fig:model} \vspace{-0.1in} \end{figure} Existing visual attention mechanisms can be broadly characterized into two categories: {\em global} or {\em local}; see Figures~\ref{subfig1:att} and \ref{subfig2:att}, respectively, for an illustration. Global mechanisms predict all the attention variables jointly, typically based on a dense representation of the image feature map. Such mechanisms are prone to overfitting and are only computationally feasible for low-resolution image features. Therefore, typically, these are only applied at the last convolutional layer of a CNN \cite{lu2016hierarchical, zhu2016visual7w}. The local mechanisms generate attention values for each spatial attention variable independently based on the corresponding image region \cite{fukui2016multimodal,seo2017visual,seo2016progressive} (\ie, feature column) or with the help of local context \cite{seo2016progressive, woo2018cbam, zhao2018modular}. As such, local attention mechanisms can be applied at arbitrary resolution and can be used at various places within a CNN network (\eg, in \cite{seo2016progressive} the authors use them before each sub-sampling layer and in \cite{woo2018cbam} as part of each residual block). However, all the aforementioned models lack explicit structure in the generated attention masks. This is often exhibited by a lack of coherence or by sharp discontinuities in the generated attention activation values \cite{seo2016progressive}.
\begin{figure*}[t] \centering \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{globalatt} \caption{{\em Global} Attention} \label{subfig1:att} \end{subfigure} \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{localatt} \caption{{\em Local} Attention} \label{subfig2:att} \end{subfigure} \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{structuredatt} \caption{{\em Structured} Attention} \label{subfig3:att} \end{subfigure} \vspace{-0.15in} \caption{{\bf Different types of attention mechanisms.} Compared are (a) {\em global} and (b) {\em local} attention mechanisms explored in prior works, and (c) the proposed {\em structured} AttentionRNN architecture.} \label{fig:att} \vspace{-0.2in} \end{figure*} Consider a VQA model attending to regions required to answer the question, ``Do the two spheres next to each other have the same color?''. Intuitively, attention mechanisms should focus on the two spheres in question. Furthermore, the attention region corresponding to one sphere should inform the estimate of the attention region for the other, both in terms of shape and size. However, most traditional attention mechanisms have no ability to encode such dependencies. Recent modularized architectures \cite{andreas2016cvpr,hu2018eccv} are able to address some of these issues with attentive {\em reasoning}, but they are relevant only for a narrow class of VQA problems. Such models are inapplicable to scenarios involving self-attention \cite{woo2018cbam} or generative architectures, where granular shape-coherent attention masks are typically needed \cite{zhao2018modular}. In this paper, we argue that these challenges can be addressed by {\em structured} spatial attention. Such a class of attention models can potentially encode arbitrary constraints between attention variables, be it top-down structured knowledge or local/global consistency and dependencies. To enforce this structure, we propose a novel attention mechanism which we refer to as AttentionRNN (see Figure \ref{subfig3:att} for an illustration). We draw inspiration from the Diagonal BiLSTM architecture proposed in \cite{van2016pixel}. As such, AttentionRNN generates a spatial attention mask by traversing the image diagonally, starting from a corner at the top and going to the opposite corner at the bottom. When predicting the attention value for a particular image feature location, structure is enforced by taking into account: (i) the local image context around the corresponding image feature location and, more importantly, (ii) information about previously generated attention values. One of the key benefits of our model is that it can be used agnostically in any existing feed-forward neural network at one or multiple convolutional feature levels (see Figure \ref{fig:model}). To support this claim, we evaluate our method on different tasks and with different backbone architectures. For VQA, we consider the Progressive Attention Network (PAN) \cite{seo2016progressive} and Multimodal Compact Bilinear Pooling (MCB) \cite{fukui2016multimodal}. For image generation, we consider the Modular Generative Adversarial Networks (MGAN) \cite{zhao2018modular}. For image categorization, we consider the Convolutional Block Attention Module (CBAM) \cite{woo2018cbam}. When we replace the existing attention mechanisms in these models with our proposed AttentionRNN, we observe higher overall performance along with better spatial attention masks.
\vspace{0.02in} \noindent {\bf Contributions:} Our contributions can be summarized as follows: (1) We propose a novel spatial attention mechanism which explicitly encodes structure over the spatial attention variables by sequentially predicting values. As a consequence, each attention value in a spatial mask depends not only on local image or contextual information, but also on previously predicted attention values. (2) We illustrate that this general attention mechanism can work with any existing model that relies on, or can benefit from, spatial attention; showing its effectiveness on a variety of different tasks and datasets. (3) Through experimental evaluation, we observe improved performance and better attention masks on VQA, image generation and image categorization tasks. \section{Related Work} \subsection{Visual Attention} Visual attention mechanisms have been widely adopted in the computer vision community owing to their ability to focus on important regions in an image. Even though there is a large variety of methods that deploy visual attention, they can be categorized based on key properties of the underlying attention mechanisms. For ease of understanding, we segregate related research using these properties. \vspace{0.02in} \noindent \textbf{Placement of attention in a network}. Visual attention mechanisms are typically applied on features extracted by a convolutional neural network (CNN). Visual attention can either be applied: (1) at the end of a CNN network, or (2) iteratively at different layers within a CNN network. Applying visual attention at the end of a CNN network is the most straightforward way of incorporating visual attention in deep models. This has led to an improvement in model performance across a variety of computer vision tasks, including image captioning \cite{chen2017sca, xu2015show, you2016image}, image recognition \cite{zheng2017learning}, VQA \cite{lu2016hierarchical, xu2016ask, yang2016stacked, zhu2016visual7w}, and visual dialog \cite{seo2017visual}. On the other hand, there have been several approaches that iteratively apply visual attention, operating over multiple CNN feature layers \cite{jaderberg2015spatial, seo2016progressive, woo2018cbam}. Seo \etal \cite{seo2016progressive} progressively apply attention after each pooling layer of a CNN network to accurately attend over target objects of various scales and shape. Woo \etal \cite{woo2018cbam} use a similar approach, but instead apply two different types of attention - one that attends over feature channels and the other that attends over the spatial domain. \vspace{0.02in} \noindent \textbf{Context used to compute attention}. Attention mechanisms differ on how much information they use to compute the attention mask. They can be \emph{global}, that is use all the available image context to jointly predict the values in an attention mask \cite{lu2016hierarchical, xu2015show}. As an example, \cite{lu2016hierarchical} propose an attention mechanism for VQA where the attention mask is computed by projecting the image features into some latent space and then computing its similarity with the question. Attention mechanisms can also be \emph{local}, where-in attention for each variable is generated independently or using a corresponding local image region \cite{fukui2016multimodal,seo2017visual,seo2016progressive, woo2018cbam, zhao2018modular}. 
For example, \cite{seo2016progressive, woo2018cbam, zhao2018modular} use a $k\times k$ convolutional kernel to compute a particular attention value, allowing them to capture local information around the corresponding location. \vspace{0.02in} \noindent None of the aforementioned works enforce structure over the generated attention masks. Structure over the pixel values of an image, however, has been exploited in many autoregressive models trained to generate images. The next section briefly describes the relevant work in this area. \subsection{Autoregressive Models for Image Generation} Generative image modelling is a key problem in computer vision. In recent years, there has been significant work in this area \cite{goodfellow2014generative, kingma2014auto, rezende2014stochastic, salimans2017pixelcnn++, van2016pixel, zhang2017stackgan, zhao2018modular}. Although most work uses stochastic latent variable models like VAEs \cite{kingma2014auto, rezende2014stochastic} or GANs \cite{goodfellow2014generative, zhang2017stackgan, zhao2018modular}, autoregressive models \cite{salimans2017pixelcnn++, van2016conditional, van2016pixel} provide a more tractable approach to model the joint distribution over the pixels. These models leverage the inherent structure over the image, which enables them to express the joint distribution as a product of conditional distributions, where the value of the next pixel is dependent on all the previously generated pixels. Van den Oord \etal \cite{van2016pixel} propose a PixelRNN network that uses LSTMs \cite{hochreiter1997long} to model this sequential dependency between the pixels. They also introduce a variant, called PixelCNN, that uses CNNs instead of LSTMs to allow for faster computations. They later extend PixelCNN to allow the model to be conditioned on some query \cite{van2016conditional}. Finally, \cite{salimans2017pixelcnn++} propose further simplifications to the PixelCNN architecture to improve performance. Our work draws inspiration from the PixelRNN architecture proposed in \cite{van2016pixel}. We extend PixelRNN to model structural dependencies within attention masks. \section{Approach} Given an input image feature $\mathbf{X} \in \mathbb{R}^{h \times m \times n}$, our goal is to predict a spatial attention mask $\mathbf{A} \in \mathbb{R}^{m\times n}$, where $h$ represents the number of channels, and $m$ and $n$ are the number of rows and columns of $\mathbf{X}$ respectively. Let $\mathbf{X} = \{\mathbf{x}_{1,1}, \dots, \mathbf{x}_{m,n}\}$, where $\mathbf{x}_{i,j} \in \mathbb{R}^h$ is a column feature corresponding to the spatial location $(i,j)$. Similarly, let $\mathbf{A} = \{a_{1,1}, \dots, a_{m,n}\}$, where $a_{i,j} \in \mathbb{R}$ is the attention value corresponding to $\mathbf{x}_{i,j}$. Formally, we want to model the conditional distribution $p(\mathbf{A}~|~\mathbf{X})$. In certain problems, we may also want to condition on other auxiliary information in addition to $\mathbf{X}$, \eg in VQA on a question. While in this paper we formulate and model attention probabilistically, most traditional attention models directly predict the attention values, which can be regarded as a point estimate (or expected value) of $\mathbf{A}$ under our formulation. Global attention mechanisms \cite{lu2016hierarchical, zhu2016visual7w} predict $\mathbf{A}$ directly from $\mathbf{X}$ using a fully connected layer. Although this makes no assumptions on the factorization of $p(\mathbf{A}~|~\mathbf{X})$, it becomes intractable as the size of $\mathbf{X}$ increases.
This is mainly due to the large number of parameters in the fully connected layer. Local attention mechanisms \cite{seo2017visual, seo2016progressive, woo2018cbam, zhao2018modular}, on the other hand, make strong independence assumptions on the interactions between the attention variables $a_{i,j}$. In particular, they assume each attention variable $a_{i,j}$ to be independent of other variables given some local spatial context $\delta(\mathbf{x}_{i, j})$. More formally, for local attention mechanisms, \begin{align} \begin{split} p\left(\mathbf{A}~|~\mathbf{X}\right) &\approx \prod_{i=1, j=1}^{i=m, j=n} p\left(a_{i,j}~|~ \delta(\mathbf{x}_{i,j})\right) \end{split} \end{align} Even though such a factorization improves tractability, the strong independence assumption often leads to attention masks that lack coherence and contain sharp discontinuities. Contrary to local attention mechanisms, our proposed \emph{AttentionRNN} tries to capture some of the structural dependencies between the attention variables $a_{i,j}$. We assume \begin{align} \label{eq:chainrule} p(\mathbf{A}~|~\mathbf{X}) &= \prod_{i=1, j=1}^{i=m, j=n} p\left(a_{i,j} ~|~ \mathbf{a}_{<i,j},~ \mathbf{X}\right)\\ \label{eq:arrnprob} &\approx \prod_{i=1, j=1}^{i=m, j=n} p\left(a_{i,j} ~|~ \mathbf{a}_{<i,j},~ \delta(\mathbf{x}_{i,j})\right) \end{align} where $\mathbf{a}_{<i,j} = \{a_{1,1}, \dots, a_{i-1,j}\}$ (the blue and green regions in Figure \ref{fig:skewing}). That is, each attention variable $a_{i,j}$ is now dependent on: (i) the local spatial context $\delta(\mathbf{x}_{i,j})$, and (ii) all the previous attention variables $\mathbf{a}_{<i,j}$. Note that Equation \ref{eq:chainrule} is just a direct application of the chain rule. Similar to local attention mechanisms, and to reduce the computational overhead, we assume that a local spatial context $\delta(\mathbf{x}_{i,j})$ is a sufficient proxy for the image features $\mathbf{X}$ when computing $a_{i,j}$. Equation \ref{eq:arrnprob} describes the final factorization we assume. One of the key challenges in estimating $\mathbf{A}$ based on Equation \ref{eq:arrnprob} is to efficiently compute the term $\mathbf{a}_{<i,j}$. A straightforward solution is to use a recurrent neural network (\eg an LSTM) to summarize the previously predicted attention values $\mathbf{a}_{<i,j}$ into its hidden state. This is a common approach employed in many sequence prediction methods \cite{bahdanau2014neural, shankar2018posterior, venugopalan2015sequence}. However, while sequences have a well-defined ordering, image features can be traversed in multiple ways due to their spatial nature. Naively parsing the image along its rows using an LSTM, though it provides an estimate for $\mathbf{a}_{<i,j}$, fails to correctly encode the necessary information required to predict $a_{i,j}$. As an example, the LSTM will tend to forget information from the neighbouring variable $a_{i-1,j}$ as it was processed $n$ time steps ago. To alleviate this issue, \emph{AttentionRNN} instead parses the image in a diagonal fashion, starting from a corner at the top and going to the opposite corner at the bottom. It builds upon the Diagonal BiLSTM layer proposed by \cite{van2016pixel} to efficiently perform this traversal. The next section describes our proposed \emph{AttentionRNN} mechanism in detail. \subsection{AttentionRNN}\label{sec:attention} Our proposed structured attention mechanism builds upon the Diagonal BiLSTM layer proposed by \cite{van2016pixel}.
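To make the dependency structure in Equation \ref{eq:arrnprob} concrete before introducing the full mechanism, the sketch below predicts attention values one location at a time along anti-diagonals, each conditioned on a $\delta$-context and on the already-predicted top and left neighbours. It is only an illustration of the factorization: the placeholder \texttt{step} function and the explicit double loop stand in for the learned, batched Diagonal BiLSTM computation described next.
\begin{verbatim}
# Illustration of the factorization p(a_ij | a_<ij, delta(x_ij)): attention
# values are predicted sequentially along anti-diagonals, each conditioned on
# a local context and on previously predicted values (here only the top and
# left neighbours). `step` is a hand-crafted placeholder, not a learned network.
import numpy as np

def toy_structured_attention(X, step, delta=1):
    """X: (h, m, n) feature map; step: callable(context, prev) -> scalar."""
    h, m, n = X.shape
    A = np.zeros((m, n))
    Xp = np.pad(X, ((0, 0), (delta, delta), (delta, delta)))  # zero-pad borders
    for d in range(m + n - 1):                      # anti-diagonal order
        for i in range(max(0, d - n + 1), min(m, d + 1)):
            j = d - i
            ctx = Xp[:, i:i + 2 * delta + 1, j:j + 2 * delta + 1]
            prev = np.array([A[i - 1, j] if i > 0 else 0.0,
                             A[i, j - 1] if j > 0 else 0.0])
            A[i, j] = step(ctx, prev)               # depends on context and a_<i,j
    return A

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5, 7))
step = lambda ctx, prev: 1.0 / (1.0 + np.exp(-(ctx.mean() + 0.5 * prev.mean())))
print(toy_structured_attention(X, step).shape)      # (5, 7)
\end{verbatim}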
We employ two LSTMs, one going from the top-left to bottom-right corner ($\mathcal{L}^{l}$) and the other from the top-right to the bottom-left corner ($\mathcal{L}^r$). As mentioned in Equation \ref{eq:arrnprob}, for each $a_{i,j}$, our objective is to estimate $p\left(a_{i,j} ~|~ \mathbf{a}_{<i,j}, \delta(\mathbf{x}_{i,j})\right)$. We assume that this can be approximated via a combination of two distributions. \begin{align} \label{eq:decompose} \begin{split} p\left(a_{i,j}|\mathbf{a}_{<i,j}\right) &= \Gamma\left<p\left(a_{i,j}| \mathbf{a}_{<i,<j}\right),~p\left(a_{i,j}|\mathbf{a}_{<i,>j}\right) \right> \end{split} \end{align} where $\mathbf{a}_{<i,<j}$ is the set of attention variables to the top and left (blue region in Figure \ref{fig:skewing}) of $a_{i,j}$, $\mathbf{a}_{<i,>j}$ is the set of attention variables to the top and right of $a_{i,j}$ (green region in Figure \ref{fig:skewing}), and $\Gamma$ is some combination function. For brevity, we omit explicitly writing $\delta(\mathbf{x}_{i,j})$. Equation \ref{eq:decompose} is further simplified by assuming that all distributions are Gaussian. \begin{align} \label{eq:gaussian} \begin{split} p\left(a_{i,j}|\mathbf{a}_{<i,<j}\right) &\approx \mathcal{N}\left(\mu_{i,j}^l, {\sigma_{i,j}^l} \right)\\ p\left(a_{i,j}|\mathbf{a}_{<i,>j}\right) &\approx \mathcal{N}\left(\mu_{i,j}^r, {\sigma_{i,j}^r} \right)\\ p\left(a_{i,j}|\mathbf{a}_{<i,j}\right) &\approx \mathcal{N}\left(\mu_{i,j}, {\sigma_{i,j}} \right) \end{split} \end{align} where, \begin{align} \label{eq:gaussparams} \begin{split} (\mu_{i,j}^l, \sigma_{i,j}^l)& = f_l\left(\mathbf{a}_{<i,<j}\right);~~ (\mu_{i,j}^r, \sigma_{i,j}^r) = f_r\left(\mathbf{a}_{<i,>j}\right)\\ &(\mu_{i,j}, \sigma_{i,j}) = \Gamma\left(\mu_{i,j}^l, \sigma_{i,j}^l, \mu_{i,j}^r, \sigma_{i,j}^r \right) \end{split} \end{align} $f_l$ and $f_r$ are fully connected layers. Our choice for the combination function $\Gamma$ is explained in Section \ref{sec:combination}. For each $a_{i,j}$, $\mathcal{L}^l$ is trained to estimate $(\mu_{i,j}^l, \sigma_{i,j}^l)$, and $\mathcal{L}^r$ is trained to estimate $(\mu_{i,j}^r, \sigma_{i,j}^r)$. We now explain the computation for $\mathcal{L}^l$. $\mathcal{L}^r$ is analogous and has the same formulation. $\mathcal{L}^l$ needs to correctly approximate $\mathbf{a}_{<i,<j}$ in order to obtain a good estimate of $(\mu_{i,j}^l, \sigma_{i,j}^l)$. As we are parsing the image diagonally, from Figure \ref{fig:skewing} it can be seen that the following recursive relation holds, \begin{align} \label{eq:recursion} \mathbf{a}_{<i,<j} = f(\mathbf{a}_{<i-1,<j}~~,~~ \mathbf{a}_{<i,<j-1}) \end{align} That is, for each location $(i,j)$, $\mathcal{L}^l$ only needs to consider two attention variables- one above and the other to the left; \cite{van2016pixel} show that this is sufficient for it to be able to obtain information from all the previous attention variables. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{skew} \caption{{\bf Skewing operation.} This makes it easier to compute convolutions along the diagonal. The arrows indicate dependencies between attention values. To obtain the image on the right, each row of the left image is offset by one position with respect to its previous row.} \label{fig:skewing} \vspace{-0.2in} \end{figure} To make computations along the diagonal easier, similar to \cite{van2016pixel}, we first skew $\mathbf{X}$ into a new image feature $\mathbf{\widehat{X}}$. Figure \ref{fig:skewing} illustrates the skewing procedure. 
Each row of $\mathbf{X}$ is offset by one position with respect to the previous row. $\mathbf{\widehat{X}}$ is now an image feature of size $h\times m \times (2n - 1)$. Traversing $\mathbf{X}$ in a diagonal fashion from top left to bottom right is now equivalent to traversing $\mathbf{\widehat{X}}$ along its columns from left to right. As spatial locations $(i-1,j)$ and $(i,j-1)$ in $\mathbf{X}$ are now in the same column in $\widehat{\mathbf{X}}$, we can implement the recursion described in Equation \ref{eq:recursion} efficiently by performing computations on an entire column of $\widehat{\mathbf{X}}$ at once. Let $\mathbf{\widehat{X}}_j$ denote the $j^{th}$ column of $\mathbf{\widehat{X}}$. Also, let $\mathbf{\widehat{h}}_{j-1}^l$ and $\mathbf{\widehat{c}}_{j-1}^l$ respectively denote the hidden and memory states of $\mathcal{L}^l$ before processing $\mathbf{\widehat{X}}_j$. Both $\mathbf{\widehat{h}}_{j-1}^l$ and $\mathbf{\widehat{c}}_{j-1}^l$ are tensors of size $t\times m$, where $t$ is the number of latent features. The new hidden and memory states are computed as follows. \begin{align} \label{eq:gates} \begin{split} [\mathbf{o}_j, \mathbf{f}_j, \mathbf{i}_j, \mathbf{g}_j] &= \sigma\left(\mathbf{K}^{h} \circledast \mathbf{\widehat{h}}_{j-1}^l + \mathbf{K}^{x} \circledast \mathbf{\widehat{X}}_{j}^c\right)\\ \mathbf{\widehat{c}}_{j}^l &= \mathbf{f}_j \odot \mathbf{\widehat{c}}_{j-1}^l + \mathbf{i}_j \odot \mathbf{g}_j\\ \mathbf{\widehat{h}}_{j}^l &= \mathbf{o}_j \odot \text{tanh}(\mathbf{\widehat{c}}_{j}^l) \end{split} \end{align} Here $\circledast$ represents the convolution operation and $\odot$ represents element-wise multiplication. $\mathbf{K}^h$ is a $2 \times 1$ convolution kernel which effectively implements the recursive relation described in Equation \ref{eq:recursion}, and $\mathbf{K}^x$ is a $1 \times 1$ convolution kernel. Both $\mathbf{K}^h$ and $\mathbf{K}^x$ produce a tensor of size $4t \times m$. $\mathbf{\widehat{X}}_{j}^c$ is the $j^{th}$ column of the skewed local context $\mathbf{\widehat{X}}^c$, which is obtained as follows. \begin{align} \label{eq:localcontext} \begin{split} \mathbf{\widehat{X}}^c = \text{skew}\left(\mathbf{K}^c \circledast \mathbf{X}\right) \end{split} \end{align} where $\mathbf{K}^c$ is a convolutional kernel that captures a $\delta$-size context. For tasks that are multi-modal in nature, a query $\mathbf{Q}$ can additionally be used to condition the generation of $a_{i,j}$. This allows the model to generate different attention masks for the same image features depending on $\mathbf{Q}$. For example, in tasks like VQA, the relevant regions of an image will depend on the question asked. The nature of $\mathbf{Q}$ will also dictate the encoding procedure. As an example, if $\mathbf{Q}$ is a natural language question, it can be encoded using an LSTM layer. $\mathbf{Q}$ can be easily incorporated into \emph{AttentionRNN} by concatenating it with $\mathbf{\widehat{X}}^c$ before passing it to Equation \ref{eq:gates}. Let $\mathbf{\widehat{h}}^l = \{\mathbf{\widehat{h}}_{1}^l,\dots, \mathbf{\widehat{h}}_{2n-1}^l\}$ be the set of all hidden states obtained from $\mathcal{L}^l$, and $\mathbf{h}^l$ be the set obtained by applying the reverse skewing operation on $\mathbf{\widehat{h}}^l$. For each $a_{i,j}$, $\mathbf{a}_{<i,<j}$ is then simply the $(i,j)$ spatial element of $\mathbf{h}^l$. $\mathbf{a}_{<i,>j}$ can be obtained by repeating the aforementioned process for $\mathcal{L}^r$, which traverses $\mathbf{X}$ from top-right to bottom-left.
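The skewing step above can be made concrete with the following small sketch; it assumes zero-filling for the empty positions and mirrors the description above rather than the batched implementation used in practice.
\begin{verbatim}
# Skew / reverse-skew of an (h, m, n) feature map: row i is shifted right by
# i positions, giving shape (h, m, 2n-1). After skewing, locations (i-1, j)
# and (i, j-1) of X lie in the same column, which is what allows the 2x1
# convolution K^h over the previous column to realise the recursion.
import numpy as np

def skew(X):
    h, m, n = X.shape
    Xs = np.zeros((h, m, 2 * n - 1), dtype=X.dtype)
    for i in range(m):
        Xs[:, i, i:i + n] = X[:, i, :]
    return Xs

def unskew(Xs):
    h, m, w = Xs.shape
    n = (w + 1) // 2
    return np.stack([Xs[:, i, i:i + n] for i in range(m)], axis=1)

X = np.arange(2 * 3 * 4).reshape(2, 3, 4)
assert np.array_equal(unskew(skew(X)), X)   # skew is exactly invertible
\end{verbatim}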
Note that running $\mathcal{L}^r$ from top-right to bottom-left is equivalent to running it from top-left to bottom-right after mirroring $\mathbf{X}$ along the column dimension, and then mirroring the output hidden states $\mathbf{h}^r$ again. Similar to \cite{van2016pixel}, $\mathbf{h}^r$ is shifted down by one row to prevent $\mathbf{a}_{<i,>j}$ from incorporating future attention values. Once $\mathbf{a}_{<i,<j}$ and $\mathbf{a}_{<i,>j}$ are computed (as discussed above), we can obtain the Gaussian distribution for the attention variable $\mathcal{N}\left(\mu_{i,j}, \sigma_{i,j} \right)$ by following Equation \ref{eq:gaussparams}. The attention $a_{i,j}$ could then be obtained by either sampling a value from $\mathcal{N}\left(\mu_{i,j}, \sigma_{i,j} \right)$ or simply by taking the expectation and setting $a_{i,j} = \mu_{i,j}$. For most problems, as we will see in the experiments section, taking the expectation is the most efficient and effective choice. However, sampling may be useful in cases where attention is inherently multi-modal. Focusing on different modes using coherent masks might be more beneficial in such situations. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{upscale} \caption{{\bf Block AttentionRNN} for $\gamma=2$. The input is first down-sized using a $\gamma \times \gamma$ convolutional kernel. Attention is computed on this smaller map.} \label{fig:upscale} \vspace{-0.2in} \end{figure} \subsection{Combination Function}\label{sec:combination} The choice of the combination function $\Gamma$ implicitly imposes some constraints on the interaction between the distributions $\mathcal{N}\left(\mu_{i,j}^l, \sigma_{i,j}^l\right)$ and $\mathcal{N}\left(\mu_{i,j}^r, \sigma_{i,j}^r\right)$. For example, an assumption of independence would dictate a simple product for $\Gamma$, with the resulting operations to produce $(\mu_{i,j}, \sigma_{i,j})$ expressible in closed form. However, it is clear that independence is unlikely to hold due to image correlations. To allow for a more flexible interaction between the variables and the combination function, we instead use a fully connected layer to learn the appropriate $\Gamma$ for a particular task. \begin{align} \label{eq:combination} \begin{split} (\mu_{i,j}, \sigma_{i,j}) = f_{comb}\left(\mu_{i,j}^l, \sigma_{i,j}^l, \mu_{i,j}^r, \sigma_{i,j}^r\right) \end{split} \end{align} \subsection{Block AttentionRNN}\label{sec:upscale} Due to the poor performance of LSTMs over large sequences, the AttentionRNN layer does not scale well to large image feature maps. We introduce a simple modification to the method described in Section \ref{sec:attention} to alleviate this problem, which we refer to as Block AttentionRNN (BRNN). BRNN reduces the size of the input feature map $\mathbf{X}$ before computing the attention mask. This is done by splitting $\mathbf{X}$ into smaller blocks, each of size $\gamma \times \gamma$. This is equivalent to down-sampling the original image $\mathbf{X}$ to $\mathbf{X}^{ds}$ as follows. \begin{align} \mathbf{X}^{ds} = \mathbf{K}^{ds} \circledast \mathbf{X} \end{align} where $\mathbf{K}^{ds}$ is a convolution kernel of size $\gamma \times \gamma$ applied with stride $\gamma$. In essence, each value in $\mathbf{X}^{ds}$ now corresponds to a $\gamma \times \gamma$ region in $\mathbf{X}$. Instead of predicting a different attention probability for each individual spatial location $(i,j)$ in $\mathbf{X}$, BRNN predicts a single probability for each $\gamma \times \gamma$ region.
This is done by first computing the attention mask $\mathbf{A}^{ds}$ for the down-sampled image $\mathbf{X}^{ds}$ using AttentionRNN (Section \ref{sec:attention}); $\mathbf{A}^{ds}$ is then scaled up using a transposed convolutional layer to obtain the attention mask $\mathbf{A}$ for the original image feature $\mathbf{X}$. Figure \ref{fig:upscale} illustrates the BRNN procedure. BRNN essentially computes a coarse attention mask for $\mathbf{X}$. Intuitively, this coarse attention can be used in the first few layers of a deep CNN network to identify the key region blocks in the image. The later layers can use this coarse information to generate a more granular attention mask. \section{Experiments}\label{experiments} To show the efficacy and generality of our approach, we conduct experiments over four different tasks: visual attribute prediction, image classification, visual question answering (VQA) and image generation. We highlight that our goal is not necessarily to obtain the absolute highest raw performance (although we do in many of our experiments), but to show improvements from integrating AttentionRNN into existing state-of-the-art models across a variety of tasks and architectures. Due to space limitations, all model architectures and additional visualizations are described in the supplementary material. \begin{figure}[t] \centering \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=0.9\linewidth]{mref} \caption{MREF} \label{subfig1:synthetic} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=0.9\linewidth]{mdist} \caption{MDIST} \label{subfig2:synthetic} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=0.9\linewidth]{mbg} \caption{MBG} \label{subfig3:synthetic} \end{subfigure} \vspace{-0.1in} \caption{{\bf Synthetic Dataset Samples.} Example images taken from the three synthetic datasets proposed in \cite{seo2017visual}.} \label{fig:synthetic} \vspace{-0.1in} \end{figure} \subsection{Visual Attribute Prediction}\label{subsec:vap} \noindent \textbf{Datasets.} We experiment on the synthetic MREF, MDIST and MBG datasets proposed in \cite{seo2016progressive}. Figure \ref{fig:synthetic} shows example images from the datasets. The images in the datasets are created from MNIST \cite{lecun1998gradient} by sampling five to nine distinct digits with different colors (green, yellow, white, red, or blue) and varying scales (between 0.5 and 3.0). The datasets have images of size $100 \times 100$ and only differ in how the background is generated. MREF has a black background, MDIST has a black background with some Gaussian noise, and MBG has real images sampled from the SUN Database \cite{xiao2016sun} as background. The training, validation and test sets contain 30,000, 10,000 and 10,000 images respectively. \begin{table}[t] \centering \begin{tabular}{l|ccc|c} \thickhline Attention & MREF & MDIST & MBG & \begin{tabular}[c]{@{}c@{}}Rel.
\\ Runtime\end{tabular} \\ \hline SAN \cite{xu2015show} & 83.42 & 80.06 & 58.07 & 1x \\ $\lnot \text{CTX}$ \cite{seo2016progressive} & 95.69 & 89.92 & 69.33 & 1.08x \\ $\text{CTX}$ \cite{seo2016progressive} & 98.00 & 95.37 & 79.00 & 1.10x \\\hdashline $\text{ARNN}_{ind}^{\sim}$ & 98.72 & 96.70 & 83.68 & \multirow{4}{*}{4.73x} \\ $\text{ARNN}_{ind}$ & 98.58 & 96.29 & 84.27 & \\ $\text{ARNN}^{\sim}$ & 98.65 & 96.82 & 83.74 & \\ $\text{ARNN}$ & \textbf{98.93} & \textbf{96.91} & \textbf{85.84} & \\ \thickhline \end{tabular} \caption {{\bf Color prediction accuracy.} Results are in \% on MREF, MDIST and MBG datasets. Our AttentionRNN-based model, CNN+ARNN, outperforms all the baselines.} \label{tab:vapresultspred} \vspace{-0.2in} \end{table} \vspace{0.02in} \noindent \textbf{Experimental Setup.} The performance of AttentionRNN (ARNN) is compared against two {\em local} attention mechanisms proposed in \cite{seo2016progressive}, which are referred as $\lnot {\text{CTX}}$ and CTX. ARNN assumes $a_{i,j}=\mu_{i,j}, \delta=3$, where $\mu_{i,j}$ is defined in Equation \ref{eq:combination}. To compute the attention for a particular spatial location $(i,j)$, CTX uses a $\delta = 3$ local context around $(i,j)$, whereas $\lnot {\text{CTX}}$ only uses the information from location $(i,j)$. We additionally define three variants of ARNN: i) $\text{ARNN}^\sim$ where each $a_{i,j}$ is sampled from $\mathcal{N}\left(\mu_{i,j}, \sigma_{i,j} \right)$, ii) $\text{ARNN}_{ind}$ where the combination function $\Gamma$ assumes the input distributions are independent, and iii) $\text{ARNN}_{ind}^{\sim}$ where $\Gamma$ assumes independence and $a_{i,j}$ is sampled. The soft attention mechanism (SAN) proposed by \cite{xu2015show} is used as an additional baseline. The same base CNN architecture is used for all the attention mechanisms for fair comparison. The CNN is composed of four stacks of $3 \times 3$ convolutions with 32 channels followed by $2 \times 2$ max pooling layer. SAN computes attention only on the output of the last convolution layer, while $\lnot {\text{CTX}}$, CTX and all variants of ARNN are applied after each pooling layer. Given an image, the models are trained to predict the color of the number specified by a query. Chance performance is 20\%. \begin{figure*}[t] \vspace{-0.1in} \centering \begin{subfigure}{0.5\textwidth} \refstepcounter{subfigure}\label{subfig1:viz} \centering \includegraphics[width=0.9\linewidth]{vis_vap_1} \end{subfigure}\begin{subfigure}{0.5\textwidth} \refstepcounter{subfigure}\label{subfig2:viz} \centering \includegraphics[width=0.9\linewidth]{vis_inv_vap} \end{subfigure} \vspace{-0.1in} \caption{{\bf Qualitative analysis of the attention masks.} (a) Layer-wise attended feature maps sampled from $\text{ARNN}^{\sim}$. The samples span all the modes in the image. (b) Layer-wise attended feature maps generated by different mechanisms visualized on images from $\text{MBG}^{inv}$ dataset. Additional visualizations are shown in the supplementary material.} \label{fig:qualitative} \vspace{-0.2in} \end{figure*} \vspace{0.02in} \noindent \textbf{Results.} Table \ref{tab:vapresultspred} shows the color prediction accuracy of various models on MREF, MDIST and MBG datasets. It can be seen that ARNN and all its variants clearly outperform the other baseline methods. The difference in performance is amplified for the more noisy MBG dataset, where ARNN is 6.8\% better than the closest baseline. 
$\text{ARNN}_{ind}$ performs poorly compared to $\text{ARNN}$, which furthers the reasoning of using a neural network to model $\Gamma$ instead of assuming independence. Similar to \cite{seo2016progressive}, we also evaluate the models on their sensitivity to the size of the target. The test set is divided into five uniform scale intervals for which model accuracy is computed. Table \ref{tab:vapresultsscale} shows the results on the MBG dataset. ARNN is robust to scale variations and performs consistently well on small and large targets. We also test the correctness of the mask generated using the metric proposed by \cite{liu2017attention}, which computes the percentage of attention values in the region of interest. For models that apply attention after each pooling layer, the masks from different layers are combined by upsampling and taking a product over corresponding pixel values. The results are shown for the MBG dataset in Table \ref{tab:vapresultsscale}. ARNN is able to more accurately attend to the correct regions, which is evident from the high correctness score. \begin{table}[t] \centering \setlength\tabcolsep{3.0pt} \begin{tabular}{@{}l|c|ccccc@{}} \thickhline \multirow{2}{*}{Attention} & \multirow{2}{*}{Corr.} & \multicolumn{5}{c}{Scale} \\ & & 0.5-1.0 & 1.0-1.5 & 1.5-2.0 & 2.0-2.5 & 2.5-3.0 \\ \hline SAN \cite{xu2015show} & 0.15 & 53.05 & 74.85 & 72.18 & 59.52 & 54.91 \\ $\lnot\text{CTX}$ \cite{seo2016progressive} & 0.28 & 68.20 & 76.37 & 73.30 & 61.60 & 57.28 \\ CTX \cite{seo2016progressive} & 0.31 & 77.39 & 87.13 & 84.96 & 75.59 & 63.72 \\\hdashline $\text{ARNN}_{ind}^{\sim}$ & 0.36 & 82.23 & 89.41 & 86.46 & 84.52 & 81.35 \\ $\text{ARNN}_{ind}$ & 0.34 & 82.89 & 89.47 & 88.34 & 84.22 & 80.00 \\ $\text{ARNN}^{\sim}$ & 0.39 & 82.23 & 89.41 & 86.46 & 84.52 & 81.35 \\ $\text{ARNN}$ & \textbf{0.42} & \textbf{84.45} & \textbf{91.40} & \textbf{86.84} & \textbf{88.39} & \textbf{82.37} \\ \thickhline \end{tabular} \caption {{\bf Mask Correctness and Scale experiment on MBG.} The ``Corr.'' column lists the mask correctness metric proposed by \cite{liu2017attention}. The ``Scale'' column shows the color prediction accuracy in \% for different scales.} \label{tab:vapresultsscale} \vspace{-0.3in} \end{table} From Tables \ref{tab:vapresultspred} and \ref{tab:vapresultsscale}, it can be seen that $\text{ARNN}^{\sim}$ provides no significant advantage over its deterministic counterpart. This can be attributed to the datasets encouraging point estimates, as each input query can only have one correct answer. As a consequence, for each $a_{i,j}$, $\sigma_{i,j}$ was observed to underestimate variance. However, in situations where an input query can have multiple correct answers, $\text{ARNN}^{\sim}$ can be used to generate diverse attention masks. To corroborate this claim, we test the pre-trained $\text{ARNN}^{\sim}$ on images that are similar to the MBG dataset but have the same digit in multiple colors. Figure \ref{subfig1:viz} shows the individual layer attended feature maps for three different samples from $\text{ARNN}^{\sim}$ for a fixed image and query. For the query ``9'', $\text{ARNN}^{\sim}$ is able to identify the three modes. Note that since $\sigma_{i,j}$'s were underestimated due to aforementioned reasons, they were scaled up before generating the samples. Despite being underestimated $\sigma_{i,j}$'s still capture crucial information. 
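For completeness, the sampling procedure referred to above can be sketched as follows; the scale factor \texttt{s} used to compensate for the underestimated $\sigma_{i,j}$ is an illustrative knob, not a parameter of the trained model.
\begin{verbatim}
# Drawing diverse attention masks from the per-location Gaussians predicted
# by ARNN~ (sampling), versus the deterministic variant (expectation).
# `s` rescales the underestimated standard deviations before sampling.
import numpy as np

def attention_from_gaussians(mu, sigma, s=1.0, sample=False, rng=None):
    """mu, sigma: (m, n) arrays of per-location Gaussian parameters."""
    if not sample:
        return mu                                    # a_ij = mu_ij
    rng = np.random.default_rng() if rng is None else rng
    return mu + s * sigma * rng.standard_normal(mu.shape)

mu, sigma = np.full((4, 4), 0.5), np.full((4, 4), 0.05)
masks = [attention_from_gaussians(mu, sigma, s=3.0, sample=True)
         for _ in range(3)]                          # three diverse masks
\end{verbatim}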
\vspace{0.02in} \noindent \textbf{Inverse Attribute Prediction.} Figure \ref{subfig1:viz} leads to an interesting observation regarding the nature of the task. Even though $\text{ARNN}^{\sim}$ is able to identify the correct number, it only needs to focus on a tiny part of the target region to be able to accurately classify the color. To further demonstrate ARNN's ability to model longer dependencies, we test the performance of ARNN, CTX and $\lnot\text{CTX}$ on the $\text{MBG}^{inv}$ dataset, which defines the inverse attribute prediction problem: given a color, identify the number corresponding to that color. The base CNN architecture is identical to the one used in the previous experiment. ARNN, CTX and $\lnot\text{CTX}$ achieve an accuracy of 72.77\%, 66.37\% and 40.15\% and a correctness score \cite{liu2017attention} of 0.39, 0.24 and 0.20 respectively. Figure \ref{subfig2:viz} shows layer-wise attended feature maps for the three models. ARNN is able to capture the entire number structure, whereas the other two methods only focus on a part of the target region. Even though CTX uses some local context to compute the attention masks, it fails to identify the complete structure for the number ``0''. A plausible reason for this is that a $3 \times 3$ local context is too small to capture the entire target region. As a consequence, the attention mask is computed in patches. CTX maintains no information about the previously computed attention values, and therefore is unable to assign correlated attention scores for all the different target region patches. ARNN, on the other hand, captures constraints between attention variables, making it much more effective in this situation. \begin{table}[t] \setlength\tabcolsep{2.5pt} \centering \begin{tabular}{@{}c|c|ccccc@{}} \thickhline & & \multicolumn{5}{c}{Scale} \\ & Total & 0.5-1.0 & 1.0-1.5 & 1.5-2.0 & 2.0-2.5 & 2.5-3.0 \\ \hline NONE & 91.43 & 85.63 & 92.57 & 94.96 & 94.77 & 93.59 \\ ARNN & 91.09 & 84.89 & 92.25 & 94.24 & 94.70 & \textbf{94.52} \\ $\text{BRNN}^{\gamma=3}$ & 91.67 & 85.97 & 93.46 & 94.81 & 94.35 & 93.68 \\ $\text{BRNN}^{\gamma=2}$ & \textbf{92.68} & \textbf{88.10} & \textbf{94.23} & \textbf{95.32} & \textbf{94.80} & 94.01 \\ \thickhline \end{tabular} \caption{{\bf Block AttentionRNN.} Ablation results on the $\text{MBG}^\text{b}$ dataset. AttentionRNN (ARNN) and Block AttentionRNN (BRNN) with block sizes of 2 and 3 are compared.} \label{tab:ablation} \vspace{-0.2in} \end{table} \vspace{0.02in} \noindent \textbf{Scalability of ARNN.} The results shown in Table \ref{tab:vapresultspred} correspond to models trained on $100\times100$ input images, where the first attention layer is applied on an image feature of size $50\times50$. To analyze the performance of ARNN on comparatively larger image features, we create a new dataset of $224\times224$ images which we refer to as $\text{MBG}^\text{b}$. The data generation process for $\text{MBG}^\text{b}$ is identical to that of MBG. We perform an ablation study analyzing the effect of using Block AttentionRNN (BRNN) (Section \ref{sec:upscale}) instead of ARNN on larger image features. For the base architecture, the ARNN model from the previous experiment is augmented with an additional stack of convolutional and max pooling layers. The detailed architecture is described in the supplementary material. Table \ref{tab:ablation} shows the color prediction accuracy on different scale intervals for the $\text{MBG}^\text{b}$ dataset.
As the first attention layer is now applied on a feature map of size $112\times 112$, ARNN performs worse than the case when no attention (NONE) is applied due to the poor tractability of LSTMs over large sequences. BRNN$^{\gamma=2}$, on the other hand, is able to perform better as it reduces the image feature size before applying attention. However, there is a considerable difference in the performance of BRNN when $\gamma=2$ and $\gamma=3$. When $\gamma=3$, BRNN applies a $3 \times 3$ convolution with stride $3$. This aggressive size reduction causes loss of information. \subsection{Image Classification} \vspace{-0.05in} \noindent \textbf{Dataset.} We use the CIFAR-100 dataset \cite{krizhevsky2009learning} to verify the performance of AttentionRNN on the task of image classification. The dataset consists of 60,000 $32\times32$ images from 100 classes. The training/test sets contain 50,000/10,000 images. \vspace{0.02in} \noindent \textbf{Experimental Setup.} We augment the convolution block attention module (CBAM) proposed by \cite{woo2018cbam} with ARNN. For a given feature map, CBAM computes two different types of attention: 1) channel attention that exploits the inter-channel dependencies in a feature map, and 2) spatial attention that uses local context to identify relationships in the spatial domain. We replace \emph{only} the spatial attention in CBAM with ARNN. This modified module is referred to as CBAM+ARNN. ResNet18 \cite{he2016deep} is used as the base model for our experiments. ResNet18+CBAM is the model obtained by using CBAM in the ResNet18 model, as described in \cite{woo2018cbam}. ResNet18+CBAM+ARNN is defined analogously. We use a local context of $3 \times 3$ to compute the spatial attention for both CBAM and CBAM+ARNN. \vspace{0.02in} \noindent \textbf{Results.} Top-1 and top-5 errors are used to evaluate the performance of the models. The results are summarized in Table \ref{tab:cbam}. CBAM+ARNN provides an improvement of 0.89\% on top-1 error over the closest baseline. Note that this gain, though it seems marginal, is larger than what CBAM obtains over ResNet18 with no attention (0.49\% on top-1 error). \begin{table}[t] \setlength\tabcolsep{1.5pt} \centering \begin{tabular}{@{}l|cc|c@{}} \thickhline & \begin{tabular}[c]{@{}c@{}}Top-1 \\ Error (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Top-5 \\ Error (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Rel.\\ Runtime\end{tabular} \\ \hline ResNet18 \cite{he2016deep} & 25.56 & 6.87 & 1x \\ ResNet18 + CBAM \cite{woo2018cbam} & 25.07 & 6.57 & 1.43x \\ ResNet18 + CBAM + ARNN & \textbf{24.18} & \textbf{6.42} & 4.81x \\ \thickhline \end{tabular} \caption{\textbf{Performance on Image Classification.} The Top-1 and Top-5 error \% are shown for all the models.
The ARNN-based model outperforms all other baselines.} \label{tab:cbam} \vspace{-0.1in} \end{table} \begin{table}[t] \centering \setlength\tabcolsep{2.5pt} \begin{tabular}{l|cccc|c} \thickhline & Yes/No & Number & Other & Total & \begin{tabular}[c]{@{}c@{}}Rel.\\ Runtime\end{tabular} \\ \hline MCB \cite{fukui2016multimodal} & 76.06 & 35.32 & 43.87 & 54.84 & 1x \\ MCB+ATT \cite{fukui2016multimodal} & 76.12 & 35.84 & 47.84 & 56.89 & 1.66x \\ MCB+ARNN & \textbf{77.13} & \textbf{36.75} & \textbf{48.23} & \textbf{57.58} & 2.46x \\ \thickhline \end{tabular} \caption{\textbf{Performance on VQA.} In \% accuracy.} \label{tab:vqa} \vspace{-0.2in} \end{table} \subsection{Visual Question Answering} \noindent \textbf{Dataset.} We evaluate the performance of ARNN on the task of VQA \cite{antol2015iccv}. The experiments are done on the VQA 2.0 dataset \cite{goyal2017making}, which contains images from MSCOCO \cite{lin2014microsoft} and corresponding questions. As the test set is not publicly available, we evaluate performance on the validation set. \vspace{0.02in} \noindent \textbf{Experimental Setup.} We augment the Multimodal Compact Bilinear Pooling (MCB) architecture proposed by \cite{fukui2016multimodal} with ARNN. This is referred to as MCB+ARNN. Note that even though MCB does not give state-of-the-art performance on this task, it is a competitive baseline that allows for easy ARNN integration. MCB+ATT is a variant of MCB that uses a local attention mechanism with $\delta=1$ from \cite{fukui2016multimodal}. For a fair comparison, MCB+ARNN also uses a $\delta=1$ context. \vspace{0.02in} \noindent \textbf{Results.} The models are evaluated using the accuracy measure defined in \cite{antol2015iccv}. The results are summarized in Table \ref{tab:vqa}. MCB+ARNN achieves a 0.69\% improvement over the closest baseline. We believe this marginal improvement is because all the models, for each spatial location $(i,j)$, use no context from neighbouring locations (as $\delta=1$). \subsection{Image Generation} \noindent \textbf{Dataset.} We analyze the effect of using ARNN on the task of image generation. Experiments are performed on the CelebA dataset \cite{liu2015faceattributes}, which contains 202,599 face images of celebrities, with 40 binary attributes. The data pre-processing is identical to \cite{zhao2018modular}. The models are evaluated on three attributes: hair color = {\emph{\{black, blond, brown\}}}, gender = {\emph{\{male, female\}}}, and smile = {\emph{\{smile, nosmile\}}}. \vspace{0.02in} \noindent \textbf{Experimental Setup.} We compare ARNN to a local attention mechanism used in the ModularGAN (MGAN) framework \cite{zhao2018modular}. MGAN uses a $3 \times 3$ local context to obtain attention values. We define MGAN+ARNN as the network obtained by replacing the local attention with ARNN. The models are trained to transform an image given an attribute. \vspace{0.02in} \noindent \textbf{Results.} To evaluate the performance of the models, similar to \cite{zhao2018modular}, we train a ResNet18 \cite{he2016deep} model that classifies the hair color, facial expression and gender on the CelebA dataset. The trained classifier achieves an accuracy of 93.9\%, 99.0\% and 93.7\% on hair color, gender and smile respectively. For each transformation, we pass the generated images through this classifier and compute the classification error (shown in Table \ref{tab:modulargan}). MGAN+ARNN outperforms the baseline on all categories except \emph{hair color}.
To analyze this further, we look at the attention masks generated for the \emph{hair color} transformation by both models. As shown in Figure \ref{fig:gan}, we observe that the attention masks generated by MGAN lack coherence over the target region due to discontinuities. MGAN+ARNN, though it has a slightly higher classification error, generates uniform activation values over the target region by encoding structural dependencies. \begin{figure}[t] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{gan1_1} \end{subfigure} \caption{{\bf Qualitative Results on ModularGAN.} Attention masks generated by the original ModularGAN \cite{zhao2018modular} and ModularGAN augmented with ARNN are shown. Notice that the hair mask is more uniform for MGAN+ARNN as it is able to encode structural dependencies in the attention mask. Additional results are shown in the supplementary material.} \label{fig:gan} \vspace{-0.1in} \end{figure} \begin{table}[t] \vspace{-0.1in} \centering \setlength\tabcolsep{6.5pt} \begin{tabular}{l|ccc|c} \thickhline \multicolumn{1}{c|}{} & Hair & Gender & Smile & \begin{tabular}[c]{@{}c@{}}Rel.\\ Runtime\end{tabular} \\ \hline MGAN \cite{zhao2018modular} & \textbf{2.5} & 3.2 & 12.6 & 1x \\ MGAN+ARNN & 3.0 & \textbf{1.4} & \textbf{11.4} & 1.96x \\ \thickhline \end{tabular} \caption{\textbf{Performance on Image Generation.} ResNet18 \cite{he2016deep} Classification Errors (\%) for each attribute transformation. ARNN achieves better performance on two tasks.} \label{tab:modulargan} \vspace{-0.2in} \end{table} \section{Conclusion} \vspace{-0.05in} In this paper, we developed a novel {\em structured} spatial attention mechanism which is end-to-end trainable and can be integrated with any feed-forward convolutional neural network. The proposed AttentionRNN layer explicitly enforces structure over the spatial attention variables by sequentially predicting attention values in the spatial mask. Experiments show consistent quantitative and qualitative improvements on a large variety of recognition tasks, datasets and backbone architectures. \raggedbottom \onecolumn { \vspace*{1.0cm} \centering \Large\bfseries Supplementary Material\\ \vspace*{2.0cm} } \setcounter{section}{0} Section 1 explains the architectures for the models used in the experiments (Section 4 in the main paper). Section 2 provides additional visualizations for the task of Visual Attribute Prediction (Section 4.1 in the main paper) and Image Generation (Section 4.4 in the main paper). These further show the effectiveness of our proposed structured attention mechanism. \section{Model Architectures} \subsection{Visual Attribute Prediction} Please refer to Section 4.1 of the main paper for the task definition. Similar to \cite{seo2016progressive}, the base CNN architecture is composed of four stacks of $3 \times 3$ convolutions with 32 channels followed by a $2 \times 2$ max pooling layer. SAN computes attention only on the output of the last convolution layer, while $\lnot {\text{CTX}}$, CTX and all variants of ARNN are applied after each pooling layer. Table \ref{tab:modelarc} illustrates the model architectures for each network. \{$\lnot\text{CTX}$, CTX, ARNN\}$_{sigmoid}$ refers to applying a sigmoid non-linearity to the generated attention mask before applying it to the image features. Similarly, \{$\lnot\text{CTX}$, CTX, ARNN\}$_{softmax}$ refers to applying a softmax non-linearity to the generated attention mask.
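One plausible reading of this convention is sketched below: the sigmoid-activated mask rescales intermediate features element-wise, while the softmax-activated mask at the last layer produces a spatially weighted pooling. The exact integration follows \cite{seo2016progressive} and may differ in detail.
\begin{verbatim}
# Applying an attention map (m, n) to features (c, m, n): intermediate layers
# use a sigmoid mask for element-wise rescaling, the last layer a softmax mask
# for weighted spatial pooling. Illustration of the convention only.
import numpy as np

def sigmoid_attend(feat, mask_logits):
    att = 1.0 / (1.0 + np.exp(-mask_logits))        # values in (0, 1)
    return feat * att[None, :, :]                   # (c, m, n)

def softmax_attend(feat, mask_logits):
    w = np.exp(mask_logits - mask_logits.max())
    w /= w.sum()                                    # spatial weights sum to 1
    return (feat * w[None, :, :]).sum(axis=(1, 2))  # (c,) attended descriptor

feat, logits = np.random.rand(32, 6, 6), np.random.randn(6, 6)
print(sigmoid_attend(feat, logits).shape, softmax_attend(feat, logits).shape)
\end{verbatim}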
We use the same hyper-parameters and training procedure for all models, which is identical to \cite{seo2016progressive}. For the scalability experiment described in Section 4.1, we add an additional stack consisting of a $3 \times 3$ convolution layer followed by a $2 \times 2$ max pooling layer to the ARNN architecture described in Table \ref{tab:modelarc}. This is used as the base architecture. Table \ref{tab:ablationsupp} illustrates the differences between the models used to obtain the results mentioned in Table 3 of the main paper. \begin{table}[h] \vspace{-0.04in} \centering \setlength{\tabcolsep}{2.0em} \def\arraystretch{1.4} \begin{tabular}{|c|c|c|c|} \hline \textbf{SAN} & \textbf{$\lnot \text{CTX}$} & \textbf{CTX} & \textbf{ARNN} \\ \hline \multicolumn{4}{|c|}{conv1 (3x3@32)} \\ \hline \multicolumn{4}{|c|}{pool1 (2x2)} \\ \hline $\downarrow$ & \cellcolor{gray!20}$\lnot \text{CTX}_{sigmoid}$ & \cellcolor{gray!20}CTX$_{sigmoid}$ & \cellcolor{gray!20}ARNN$_{sigmoid}$ \\ \hline \multicolumn{4}{|c|}{conv2 (3x3@32)} \\ \hline \multicolumn{4}{|c|}{pool2 (2x2)} \\ \hline $\downarrow$ & \cellcolor{gray!20}$\lnot \text{CTX}_{sigmoid}$ & \cellcolor{gray!20}CTX$_{sigmoid}$ & \cellcolor{gray!20}ARNN$_{sigmoid}$ \\ \hline \multicolumn{4}{|c|}{conv3 (3x3@32)} \\ \hline \multicolumn{4}{|c|}{pool3 (2x2)} \\ \hline $\downarrow$ & \cellcolor{gray!20}$\lnot \text{CTX}_{sigmoid}$ & \cellcolor{gray!20}CTX$_{sigmoid}$ & \cellcolor{gray!20}ARNN$_{sigmoid}$ \\ \hline \multicolumn{4}{|c|}{conv4 (3x3@32)} \\ \hline \multicolumn{4}{|c|}{pool4 (2x2)} \\ \hline \cellcolor{gray!20}SAN & \cellcolor{gray!20}$\lnot \text{CTX}_{softmax}$ & \cellcolor{gray!20}CTX$_{softmax}$ & \cellcolor{gray!20}ARNN$_{softmax}$ \\ \hline \end{tabular} \vspace{1em} \vspace{-0.2in} \caption{Architectures for the models used in Section 4.1 of the main paper. $\downarrow$ implies that the previous and the next layer are directly connected. The input is passed to the top-most layer. The computation proceeds from top to bottom.} \label{tab:modelarc} \end{table} \begin{table}[h] \centering \setlength{\tabcolsep}{2.0em} \def\arraystretch{1.4} \begin{tabular}{|c|c|c|} \hline \textbf{NONE} & \textbf{ARNN} & \textbf{BRNN} \\ \hline \multicolumn{3}{|c|}{conv1 (3x3@32)} \\ \hline \multicolumn{3}{|c|}{pool1 (2x2)} \\ \hline $\downarrow$ & \cellcolor{gray!20}$\text{ARNN}_{sigmoid}$ & \cellcolor{gray!20}$\text{BRNN}^{\delta}_{sigmoid}$ \\ \hline \multicolumn{3}{|c|}{\textbf{ARNN} (described in Table \ref{tab:modelarc})} \\ \hline \end{tabular} \vspace{1em} \caption{Model architectures for the scalability study described in Section 4.1 of the main paper. $\downarrow$ implies that the previous and the next layer are directly connected. \textbf{ARNN} is defined in Table \ref{tab:modelarc}.} \label{tab:ablationsupp} \end{table} \subsection{Image Classification} Please refer to Section 4.2 of the main paper for the task definition. We augment the convolution block attention module (CBAM) proposed by \cite{woo2018cbam} with ARNN. For a given feature map, CBAM computes two different types of attention: 1) channel attention that exploits the inter-channel dependencies in a feature map, and 2) spatial attention that uses local context to identify relationships in the spatial domain. Figure \ref{subfig1:cbam} shows the CBAM module integrated with a ResNet \cite{he2016deep} block. We replace only the \emph{spatial attention} in CBAM with ARNN. This modified module is referred to as CBAM+ARNN. Figure \ref{subfig2:cbam} better illustrates this modification.
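The substitution can be summarized with the sketch below, where \texttt{channel\_attention} is a simplified stand-in for CBAM's channel module and \texttt{attention\_rnn} is assumed to be any callable returning an $m \times n$ mask; the snippet only shows where ARNN plugs in, not either implementation.
\begin{verbatim}
# Where ARNN plugs into a CBAM-style block: the channel-attention branch is
# kept, the spatial-attention branch is replaced by a structured mask from an
# `attention_rnn` callable. Both modules here are simplified stand-ins.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # Global-average-pooled channel descriptor squashed to (0, 1); CBAM's
    # shared MLP over average and max pooling is omitted for brevity.
    return sigmoid(feat.mean(axis=(1, 2)))                   # (c,)

def cbam_arnn_block(feat, attention_rnn):
    """feat: (c, m, n); attention_rnn: callable (c, m, n) -> (m, n)."""
    feat = feat * channel_attention(feat)[:, None, None]     # channel refinement
    return feat * sigmoid(attention_rnn(feat))[None, :, :]   # spatial refinement

feat = np.random.rand(64, 8, 8)
toy_arnn = lambda f: f.mean(axis=0)                          # placeholder mask generator
print(cbam_arnn_block(feat, toy_arnn).shape)                 # (64, 8, 8)
\end{verbatim}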
Both CBAM and CBAM+ARNN use a local context of $3 \times 3$ to compute attention. We use the same hyper-parameters and training procedure for both CBAM and CBAM+ARNN, which is identical to \cite{woo2018cbam}. \begin{figure}[H] \centering \begin{subfigure}{0.85\textwidth} \centering \includegraphics[width=\linewidth]{cbam1} \caption{CBAM module} \label{subfig1:cbam} \end{subfigure} \begin{subfigure}{0.85\textwidth} \centering \includegraphics[width=\linewidth]{cbam+arnn} \caption{CBAM+ARNN module} \label{subfig2:cbam} \end{subfigure} \vspace{-0.15in} \caption{{\bf Difference between CBAM and CBAM+ARNN.} (a) The CBAM \cite{woo2018cbam} module integrated with a ResNet \cite{he2016deep} block. (b) CBAM+ARNN replaces the spatial attention in CBAM with ARNN. It is applied similarly to (a) after each ResNet \cite{he2016deep} block. Refer to Section 4.2 of the main paper for more details.} \label{fig:cbam} \end{figure} \subsection{Visual Question Answering} Please refer to Section 4.3 of the main paper for the task definition. We use the Multimodal Compact Bilinear Pooling with Attention (MCB+ATT) architecture proposed by \cite{fukui2016multimodal} as a baseline for our experiment. To compute attention, MCB+ATT uses two $1 \times 1$ convolutions over the features obtained after using the compact bilinear pooling operation. Figure \ref{subfig1:mcb} illustrates the architecture for MCB+ATT. We replace this attention with ARNN to obtain MCB+ARNN. MCB+ARNN also uses a $1 \times 1$ local context to compute attention. Figure \ref{subfig2:mcb} better illustrates this modification. We use the same hyper-parameters and training procedure for MCB, MCB+ATT and MCB+ARNN, which is identical to \cite{fukui2016multimodal}. \begin{figure}[H] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{mcb} \caption{MCB+ATT} \label{subfig1:mcb} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{mcb+arnn} \caption{MCB+ARNN} \label{subfig2:mcb} \end{subfigure} \caption{{\bf Difference between MCB+ATT and MCB+ARNN.} (a) The MCB+ATT model architecture proposed by \cite{fukui2016multimodal}. It uses a $1\times 1$ context to compute attention over the image features. (b) MCB+ARNN replaces the attention mechanism in MCB+ATT with ARNN. It is applied in the same location as (a) with a $1 \times 1$ context. Refer to Section 4.3 of the main paper for more details.} \label{fig:mcb} \end{figure} \subsection{Image Generation} Please refer to Section 4.4 of the main paper for the task definitions. We compare ARNN to a local attention mechanism used in the ModularGAN (MGAN) framework \cite{zhao2018modular}. MGAN consists of three modules: 1) an encoder module that encodes an input image into an intermediate feature representation, 2) a generator module that generates an image given an intermediate feature representation as input, and 3) a transformer module that transforms a given intermediate representation to a new intermediate representation according to some input condition. The transformer module uses a $3 \times 3$ local context to compute attention over the feature representations. Figure \ref{subfig1:mgan} illustrates the transformer module proposed by \cite{zhao2018modular}. We define MGAN+ARNN as the network obtained by replacing this local attention mechanism in the transformer module with ARNN. Note that the generator and encoder modules are unchanged. MGAN+ARNN also uses a $3 \times 3$ local context to compute attention.
Figure \ref{subfig2:mgan} better illustrates this modification to the transformer module. We use the same hyper-parameters and training procedure for both MGAN and MGAN+ARNN, which are identical to those of \cite{zhao2018modular}.
\begin{figure}[H] \centering \begin{subfigure}{0.85\textwidth} \centering \includegraphics[width=\linewidth]{mgan} \caption{Transformer module for MGAN} \label{subfig1:mgan} \end{subfigure} \begin{subfigure}{0.85\textwidth} \centering \includegraphics[width=\linewidth]{mgan+arnn} \caption{Transformer module for MGAN+ARNN} \label{subfig2:mgan} \end{subfigure} \caption{{\bf Difference between MGAN and MGAN+ARNN.} (a) The transformer module for the ModularGAN (MGAN) architecture proposed by \cite{zhao2018modular}. It uses a $3\times 3$ local context to compute attention over the intermediate features. (b) MGAN+ARNN replaces the attention mechanism in MGAN with ARNN. It is applied in the same location as (a) with a $3 \times 3$ local context. Note that the generator and encoder modules in MGAN and MGAN+ARNN are identical. Refer to Section 4.4 of the main paper for more details.} \label{fig:mgan} \end{figure}
\newpage \section{Additional Visualizations} \subsection{Visual Attribute Prediction} Please refer to Section 4.1 of the main paper for the task definition. Figures \ref{fig:sample0}--\ref{fig:sample2} show the layer-wise attended feature maps for three different samples drawn from $\text{ARNN}^{\sim}$ for a fixed image and query. It can be seen that $\text{ARNN}^{\sim}$ is able to identify the different modes in each of the images.
\begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{sample_7} \includegraphics[width=0.7\linewidth]{sample_8} \caption{{\bf Qualitative Analysis of Attention Masks sampled from $\text{ARNN}^{\sim}$.} Layer-wise attended feature maps sampled from $\text{ARNN}^{\sim}$ for a fixed image and query. The masks are able to span the different modes in the image. For detailed explanation see Section 4.1 of the main paper.} \label{fig:sample0} \end{figure}
\begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{sample_1} \includegraphics[width=0.7\linewidth]{sample_2} \includegraphics[width=0.7\linewidth]{sample_3} \caption{{\bf Qualitative Analysis of Attention Masks sampled from $\text{ARNN}^{\sim}$.} Layer-wise attended feature maps sampled from $\text{ARNN}^{\sim}$ for a fixed image and query. The masks are able to span the different modes in the image. For detailed explanation see Section 4.1 of the main paper.} \label{fig:sample1} \end{figure}
\begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{sample_4} \includegraphics[width=0.7\linewidth]{sample_5} \includegraphics[width=0.7\linewidth]{sample_6} \caption{{\bf Qualitative Analysis of Attention Masks sampled from $\text{ARNN}^{\sim}$.} Layer-wise attended feature maps sampled from $\text{ARNN}^{\sim}$ for a fixed image and query. The masks are able to span the different modes in the image. For detailed explanation see Section 4.1 of the main paper.} \label{fig:sample2} \end{figure}
\subsection{Inverse Attribute Prediction} Please refer to Section 4.1 of the main paper for the task definition. Figures \ref{fig:vap0}--\ref{fig:vap2} show the layer-wise attended feature maps comparing the different attention mechanisms on the $\text{MBG}^{inv}$ dataset.
It can be seen that ARNN captures the entire number structure, whereas the other two methods only focus on a part of the target region or on some background region with the same color as the number, leading to incorrect predictions.
\begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{vap_1} \includegraphics[width=0.7\linewidth]{vap_6} \caption{{\bf Qualitative Analysis of Attention Masks on $\text{MBG}^{inv}$.} Layer-wise attended feature maps generated by different mechanisms visualized on images from the $\text{MBG}^{inv}$ dataset. ARNN is able to capture the entire number structure, whereas the other two methods only focus on a part of the target region or on some background region with the same color as the target number. For detailed explanation see Section 4.1 of the main paper.} \label{fig:vap0} \end{figure}
\begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{vis_9} \includegraphics[width=0.7\linewidth]{vap_2} \includegraphics[width=0.7\linewidth]{vap_7} \caption{{\bf Qualitative Analysis of Attention Masks on $\text{MBG}^{inv}$.} Layer-wise attended feature maps generated by different mechanisms visualized on images from the $\text{MBG}^{inv}$ dataset. ARNN is able to capture the entire number structure, whereas the other two methods only focus on a part of the target region or on some background region with the same color as the target number. For detailed explanation see Section 4.1 of the main paper.} \label{fig:vap1} \end{figure}
\begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{vap_5} \includegraphics[width=0.7\linewidth]{vap_3} \includegraphics[width=0.7\linewidth]{vap_10} \caption{{\bf Qualitative Analysis of Attention Masks on $\text{MBG}^{inv}$.} Layer-wise attended feature maps generated by different mechanisms visualized on images from the $\text{MBG}^{inv}$ dataset. ARNN is able to capture the entire number structure, whereas the other two methods only focus on a part of the target region or on some background region with the same color as the target number. For detailed explanation see Section 4.1 of the main paper.} \label{fig:vap2} \end{figure}
\subsection{Image Generation} Please refer to Section 4.4 of the main paper for the task definition. Figures \ref{fig:gan1} and \ref{fig:gan2} show the attention masks generated by MGAN and MGAN+ARNN for the task of \emph{hair color} transformation. MGAN+ARNN encodes structural dependencies in the attention values, which is evident from the more uniform and continuous attention masks. MGAN, on the other hand, has sharp discontinuities which, in some cases, lead to less accurate hair color transformations.
\begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{gan1_2} \includegraphics[width=0.8\linewidth]{gan1_8} \includegraphics[width=0.8\linewidth]{gan1_3} \includegraphics[width=0.8\linewidth]{gan1_4} \caption{{\bf Qualitative Results for Image Generation.} Attention masks generated by MGAN and MGAN+ARNN are shown. Notice that the hair mask is more uniform for MGAN+ARNN as it is able to encode structural dependencies in the attention mask. For detailed explanation see Section 4.4 of the main paper.} \label{fig:gan1} \end{figure}
\begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{gan1_9} \includegraphics[width=0.8\linewidth]{gan1_5} \includegraphics[width=0.8\linewidth]{gan1_6} \includegraphics[width=0.8\linewidth]{gan1_7} \caption{{\bf Qualitative Results for Image Generation.} Attention masks generated by MGAN and MGAN+ARNN are shown.
Notice that the hair mask is more uniform for MGAN+ARNN as it is able to encode structural dependencies in the attention mask. For detailed explanation see Section 4.4 of the main paper.} \label{fig:gan2} \end{figure} \twocolumn {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec:intro} \ \ It has long been suggested that antikaon ($K^-$) condensation should be realized in high-density hadronic matter\cite{kn86,mtt93,t95,lbm95,fmmt96,l96,pbpelk97}.\footnote{We consider antikaon ($K^-$) condensation, while we conventionally call it ``kaon condensation''.} It is characterized as a macroscopic appearance of strangeness in a strongly interacting kaon-baryon system, where chiral symmetry and its spontaneous breaking play a key role. If kaon condensation exists in neutron stars, it softens the hadronic equation of state (EOS), influencing the bulk structure of neutron stars such as the mass-radius relations\cite{fmmt96,l96,pbpelk97,tpl94}. Effects of the phase-equilibrium condition associated with the first-order phase transition on the inner structure of neutron stars have also been elucidated\cite{g01,hps93,cgs00,nr01,vyt03,mtv05}. With regard to the dynamical evolution of newly-born neutron stars, delayed collapse of protoneutron stars accompanying a phase transition to the kaon-condensed phase has been discussed\cite{bb94,bst95,p00,ty99}. The existence of kaon condensation is important for thermal evolution of neutron stars since the neutrino emission processes are largely enhanced in the presence of kaon condensates\cite{bk88,t88,pb90,fmtt94}. In the kaon-condensed phase in neutron stars, the net (negative) strangeness becomes abundant as a consequence of chemical equilibrium for weak interaction processes, $n\rightleftharpoons p K^-$, $e^-\rightleftharpoons K^-\nu_e$. At threshold, the onset condition for kaon condensation has been given by\footnote{Throughout this paper, the units $\hbar$ = $c$ = 1 are used.} \begin{equation} \omega(\rho_{\rm B})=\mu\ , \label{eq:onset} \end{equation} where $\omega(\rho_{\rm B})$ is the lowest $K^-$ energy obtained at the baryon number density $\rho_{\rm B}$ from the zero point of the $K^-$ inverse propagator, $D_K^{-1}(\omega; \rho_{\rm B})=0$, and $\mu$ is the charge chemical potential which is equal to both the antikaon chemical potential $\mu_K$ and the electron chemical potential $\mu_e$ under the chemical equilibrium condition for the weak processes\cite{mt92,bkrt92}. This onset condition (\ref{eq:onset}) is based on the assumption of {\it continuous phase transition}: Kaon condensation sets in with zero amplitude at a critical density, above which kaon condensates develop smoothly with increase in baryon number density $\rho_{\rm B}$. It has been shown that the onset condition (\ref{eq:onset}) holds true even if hyperon ($Y$) particle-nucleon ($N$) hole excitation through the $p$-wave kaon-baryon interaction is taken into account\cite{m93,kvk95} in ordinary neutron-star matter, where only nucleons and leptons are in $\beta$ equilibrium. Concerning another hadronic phase including strangeness, hyperons ($\Lambda$, $\Sigma^-$, $\Xi^-$, $\cdots$) as well as nucleons and leptons have been expected to be mixed in the ground state of neutron-star matter \cite{g85,ekp95,sm96,phz99,h00,s00,h98,bbs98,v00,bg97,y02,t04}. We call the hyperon-mixed neutron-star matter {\it hyperonic matter} throughout this paper.
With regard to coexistence or competition of kaon condensation with hyperons in neutron stars, it has been pointed out that the onset density of the $s$-wave kaon condensation subject to the condition (\ref{eq:onset}) in hyperonic matter is shifted to a higher density\cite{pbpelk97,ekp95,sm96}: The electron carrying the negative charge is replaced by the negatively charged hyperon, so that the charge chemical potential $\mu$ (=$\mu_e=(3\pi^2\rho_e)^{1/3}$) is diminished as compared with the case of neutron-star matter without hyperons. As a result, the lowest $K^-$ energy $\omega(\rho_{\rm B})$ meets the charge chemical potential $\mu$ at a higher density. Subsequently, several works on the onset density and the EOS for the kaon-condensed phase in hyperonic matter have been elaborated with the relativistic mean-field theory\cite{pbg00,bb01} and the quark-meson coupling models\cite{mpp05,hrh06}. Recently, the in-medium kaon dynamics and mechanisms of kaon condensation stemming from the $s$- and $p$-wave kaon-baryon interactions in hyperonic matter have been investigated\cite{m02,kv03}. It is emphasized here that most of the results on the onset mechanisms of kaon condensation in {\it hyperonic matter} rely on the same assumption as in the case of the usual neutron-star matter where hyperons are not mixed, i.e., the assumption of the {\it continuous phase transition} with the help of Eq.~(\ref{eq:onset}). In this paper, we reexamine the onset condition of kaon condensation realized from hyperonic matter. We consider the $s$-wave kaon condensation and incorporate the kaon-baryon interaction within the effective chiral Lagrangian. The nonrelativistic effective baryon-baryon interaction is taken into account, and the parameters are determined so as to reproduce the nuclear saturation properties and baryon potential depths deduced from the recent hypernuclear experiments\cite{g04}. We demonstrate that the assumption of a continuous phase transition cannot always be applied to the case where the {\it negatively charged hyperons} ($\Sigma^-$) are already present in the ground state of hyperonic matter, as a result of competition between the negatively charged kaons and hyperons. It will be shown that, at baryon densities slightly below that at which Eq.~(\ref{eq:onset}) is satisfied, there already exists another energy solution, for which kaons are condensed without mixing of the $\Sigma^-$ hyperons, in addition to the usual energy solution corresponding to the noncondensed state with the $\Sigma^-$ mixing. In particular, in the case of the stronger kaon-baryon attractive interaction, there is a discontinuous transition between these two states in a small density interval. Thus, from a theoretical viewpoint, one ought to be careful about the previous results concerning coexistence and/or competition of kaon condensates and $\Sigma^-$ hyperons, although quantitative effects resulting from the discontinuous phase transition are small.\footnote{Our discussion is concentrated on obtaining the energy solutions in the presence of hyperons under the constraints relevant to neutron stars. We do not discuss here the prescription of the Gibbs condition for the phase equilibrium associated with a first-order phase transition\cite{g01,hps93,cgs00,nr01,vyt03,mtv05}. This issue will be reported elsewhere\cite{m06}.
A first-order phase transition to the $K^-$-condensed phase has also been discussed in another context in Refs.~\cite{kvk95,kv03}.} The interplay between $K^-$ condensates and $\Sigma^-$ hyperons can also be seen in the EOS and in characteristic features of the fully-developed kaon-condensed phase, such as the density dependence of the particle fractions. In the case of the stronger kaon-baryon attractive interaction, we will see that a local energy minimum with respect to the baryon number density (a density isomer state) appears as a consequence of the considerable softening of the EOS due to both kaon condensation and hyperon-mixing, together with the recovery of the stiffness of the EOS at very high densities due to the increase in the repulsive interaction between baryons. The paper is organized as follows. In Sec.~\ref{sec:form}, the formulation to obtain the effective energy density of the kaon-condensed phase in hyperonic matter is presented. In Sec.~\ref{sec:fraction}, numerical results for the composition of the noncondensed phase of hyperonic matter are given for the subsequent discussion. Section~\ref{sec:validity} is devoted to a discussion of the validity of the continuous phase transition. The results for the EOS of the kaon-condensed phase are given in Sec.~\ref{sec:eos}. In Sec.~\ref{sec:summary}, a summary and concluding remarks are given. In the Appendix, we remark that the two sets of parameters used in this paper for the baryon-baryon interaction models give different behaviors for the onset of $\Lambda$ and $\Sigma^-$ hyperons in ordinary neutron-star matter. \section{Formulation} \label{sec:form} \subsection{Outline of the kaon-condensed matter} \label{subsec:outline} \ \ In order to simplify and clarify the discussion of the interrelations between kaon condensation and hyperons, we consider the $s$-wave kaon condensation by setting the kaon momentum $|{\bf k}|$=0, and we take into account only the proton ($p$), $\Lambda$, neutron ($n$), and $\Sigma^-$ of the octet baryons, together with the ultrarelativistic electrons, for kaon-condensed hyperonic matter in neutron stars. Within chiral symmetry, the classical kaon field as an order parameter of the $s$-wave kaon condensation is chosen to be of a spatially uniform type: \begin{equation} \langle K^-\rangle=\frac{f}{\sqrt{2}}\theta e^{-i\mu_K t} \ , \label{eq:kaon-field} \end{equation} where $\theta$ is the chiral angle, the amplitude of condensation, and $f$($\sim f_\pi$=93 MeV) is the meson decay constant. We impose the charge neutrality condition and baryon number conservation, and construct the effective Hamiltonian density by introducing the charge chemical potential $\mu$ and the baryon number chemical potential $\nu$, respectively, as the Lagrange multipliers corresponding to these two conditions. The resulting effective energy density is then written in the form \begin{equation} {\cal E}_{\rm eff}={\cal E}+\mu (\rho_p-\rho_{\Sigma^-}-\rho_{K^-}-\rho_e)+\nu(\rho_p+ \rho_\Lambda+\rho_n+\rho_{\Sigma^-}) \ , \label{eq:eff} \end{equation} where ${\cal E}$ is the total energy density of the kaon-condensed phase, and $\rho_i$ ($i$= $p$, $\Lambda$, $n$, $\Sigma^-$, $K^-$, $e^-$) are the number densities of the particles $i$. It is to be noted that the number density of the kaon condensates $\rho_{K^-}$ consists of the free kaon part and the kaon-baryon interaction part of the vector type. [See Eq.~(\ref{eq:rhok}).]
From the extremum conditions for ${\cal E}_{\rm eff}$ with respect to variation of $\rho_i$, one obtains the following relations, \begin{subequations}\label{eq:chemeq} \begin{eqnarray} \mu_K&=&\mu_e=\mu_n-\mu_p=\mu_{\Sigma^-}-\mu_n =\mu \ ,\label{eq:chemeq1} \\ \mu_\Lambda&=&\mu_n=-\nu \ , \label{eq:chemeq2} \end{eqnarray} \end{subequations} where $\mu_i$ ($i$= $p$, $\Lambda$, $n$, $\Sigma^-$, $K^-$, $e^-$) are the chemical potentials, which are given by $\mu_i=\partial{\cal E}/\partial\rho_i$. Equations~(\ref{eq:chemeq1}) and (\ref{eq:chemeq2}) imply that the system is in chemical equilibrium for the weak interaction processes, $n\rightleftharpoons pK^-$, $n\rightleftharpoons pe^-(\bar\nu_e)$, $ne^-\rightleftharpoons \Sigma^-(\nu_e)$, and $n\rightleftharpoons \Lambda(\nu_e\bar\nu_e)$. \subsection{Kaon-baryon interaction} \label{subsec:kbint} \ \ Our treatment of the kaon-baryon interaction is based on chiral symmetry, and we start with the effective chiral SU(3)$_L \times$ SU(3)$_R$ Lagrangian\cite{kn86}.\footnote{Except for setting $|{\bf k}|$=0, the basic formulation presented here is the same as that in Ref.~\cite{m02}, where both $s$-wave and $p$-wave kaon-baryon interactions are incorporated.} Then the relevant Lagrangian density, leading to the total energy density ${\cal E}$, consists of the following parts: \begin{eqnarray} {\cal L}&=&\frac{1}{4}f^2 \ {\rm Tr} \partial^\mu\Sigma^\dagger\partial_\mu\Sigma +\frac{1}{2}f^2\Lambda_{\chi{\rm SB}}({\rm Tr}M(\Sigma-1)+{\rm h.c.}) \cr &+&{\rm Tr}\overline{\Psi}(i{\not\partial}-M_{\rm B})\Psi+{\rm Tr}\overline{\Psi}i\gamma^\mu\lbrack V_\mu, \Psi\rbrack \cr &+&a_1{\rm Tr}\overline{\Psi}(\xi M^\dagger\xi+{\rm h.c.})\Psi + a_2{\rm Tr}\overline{\Psi}\Psi(\xi M^\dagger\xi+{\rm h.c.})+ a_3({\rm Tr}M\Sigma +{\rm h.c.}){\rm Tr}\overline{\Psi}\Psi \ , \label{eq:lag} \end{eqnarray} where the first and second terms on the r.~h.~s. of Eq.~(\ref{eq:lag}) are the kinetic and mass terms of mesons, respectively. $\Sigma$ is the nonlinear meson field defined by $\Sigma\equiv e^{2i\Pi/f}$, where $\displaystyle\Pi\equiv\sum_{a=1\sim 8}\pi_aT_a$ with $\pi_a$ being the octet meson fields and $T_a$ being the SU(3) generators. Since only charged kaon condensation is considered, $\Pi$ is simply given as \begin{eqnarray} \Pi=\frac{1}{\sqrt{2}}\left( \begin{array}{ccc} 0 & 0 & K^+ \\ 0 & 0 & 0 \\ K^- & 0 & 0 \\ \end{array}\right) \ . \label{eq:meson} \end{eqnarray} In the second term of Eq.~(\ref{eq:lag}), $\Lambda_{\chi{\rm SB}}$ is the chiral symmetry breaking scale, $\sim$ 1 GeV, and $M$ is the mass matrix, given by $M\equiv {\rm diag}(m_u, m_d, m_s)$ with the quark masses $m_i$. The third term in Eq.~(\ref{eq:lag}) denotes the free baryon part, where $\Psi$ is the octet baryon field including only the $p$, $\Lambda$, $n$, and $\Sigma^-$, and $M_{\rm B}$ is the baryon mass generated as a consequence of spontaneous chiral symmetry breaking. The fourth term in Eq.~(\ref{eq:lag}) gives the $s$-wave kaon-baryon interaction of the vector type corresponding to the Tomozawa-Weinberg term, with $V_\mu$ being the mesonic vector current defined by $V_\mu\equiv 1/2(\xi^\dagger\partial_\mu\xi+\xi\partial_\mu\xi^\dagger)$ with $\xi\equiv \Sigma^{1/2}$.
The last three terms in Eq.~(\ref{eq:lag}) give the $s$-wave meson-baryon interaction of the scalar type, which explicitly breaks chiral symmetry.\footnote{The same types of the scalar and vector interactions are derived in the quark-meson coupling model\cite{tstw98}.} The quark masses $m_i$ are chosen to be $m_u$ = 6 MeV, $m_d$ = 12 MeV, and $m_s$ = 240 MeV. Together with these values, the parameters $a_1$ and $a_2$ are fixed to be $a_1$ = $-$0.28, $a_2$ = 0.56 so as to reproduce the empirical octet baryon mass splittings\cite{kn86}. The parameter $a_3$ is related to the kaon-nucleon ($KN$) sigma terms simulating the $s$-wave $KN$ attraction of the scalar type through the expressions, $\Sigma_{Kp}=-(a_1+a_2+2a_3)(m_u+m_s)$, $\Sigma_{Kn}=-(a_2+2a_3)(m_u+m_s)$, evaluated at the on-shell Cheng-Dashen point for the effective chiral Lagrangian (\ref{eq:lag}). Recent lattice calculations suggest the value of the $KN$ sigma term $\Sigma_{KN}$=(300$-$400) MeV\cite{dll96}. We take the value of $a_3=-0.9$, leading to $\Sigma_{Kn}$=305 MeV, as a standard value. For comparison, we also take another value $a_3=-0.7$, which leads to $\Sigma_{Kn}$=207 MeV. The $K^-$ optical potential in symmetric nuclear matter, $V_{\rm opt}(\rho_{\rm B})$, is estimated as a scale of the $K^-$-nucleon attractive interaction. It is defined by \begin{equation} V_{\rm opt}(\rho_{\rm B})=\Pi_{K^-}(\omega(\rho_{\rm B}),\rho_{\rm B}) /2 \omega(\rho_{\rm B}) \ , \label{eq:vopt} \end{equation} where $\Pi_{K^-}\left(\omega(\rho_{\rm B}),\rho_{\rm B}\right)$ is the $K^-$ self-energy at given $\rho_B$ with $\rho_p=\rho_n=\rho_{\rm B}/2$. For $a_3=-0.9$ ($a_3=-0.7$), $V_{\rm opt}(\rho_0)$ is estimated to be $-$ 115 MeV ($-$ 95 MeV) at the nuclear saturation density $\rho_0$ (=0.16 fm$^{-3}$). In order to be consistent with the on-shell $s$-wave $K$ ($\bar K$)-$N$ scattering lengths, we have to take into account the range terms proportional to $\omega^2$ coming from the higher-order terms in chiral expansion and a pole contribution from the $\Lambda$(1405)\cite{lbm95,fmmt96}. Nevertheless, these contributions to the energy density become negligible in high-density matter. Therefore, we omit these correction terms throughout this paper and consider the simplified expression for the energy density of the kaon-condensed phase. \subsection{Effective energy density} \label{subsec:energy} \ \ The total effective energy density ${\cal E}_{\rm eff}$ is separated into baryon, meson, and lepton parts as ${\cal E}_{\rm eff}={\cal E}_{\rm eff}^{\rm B}+{\cal E}_{\rm eff}^{\rm M}+{\cal E}_{\rm eff}^{\rm e}$. The kaon-baryon interaction is incorporated in the baryon part ${\cal E}_{\rm eff}^{\rm B}$, and the meson part ${\cal E}_{\rm eff}^{\rm M}$ consists of the free classical kaons only. The baryon part ${\cal E}_{\rm eff}^{\rm B}$ and the meson part ${\cal E}_{\rm eff}^{\rm M}$ are derived from the effective chiral Lagrangian (\ref{eq:lag}). After the nonrelativistic reduction for the baryon part of the effective Hamiltonian by way of the Foldy-Wouthuysen-Tani transformation and with the mean-field approximation, one obtains \begin{equation} {\cal E}_{\rm eff}^{\rm B}=\sum_{i=p,\Lambda,n,\Sigma^-} \sum_{\stackrel{|{\bf p}| \leq |{\bf p}_F(i)|}{s=\pm1/2}}E_{{\rm eff},s}^{(i)} ({\bf p}) \ , \label{eq:be} \end{equation} where ${\bf p}_F(i)$ are the Fermi momenta, and the subscript `$s$' stands for the spin states for the baryon. 
The effective single-particle energies $E_{{\rm eff},s}^{(i)} ({\bf p})$ for the baryons $i$ are represented by \begin{subequations}\label{eq:spe} \begin{eqnarray} E_{{\rm eff},s}^{(p)} ({\bf p})&=& {\bf p}^2/2M_N -(\mu+\Sigma_{Kp})(1-\cos\theta)+\mu+\nu \ , \label{eq:spep} \\ E_{{\rm eff},s}^{(\Lambda)} ({\bf p})&=& {\bf p}^2/2M_N -\Sigma_{K\Lambda}(1-\cos\theta) +\delta M_{\Lambda N}+\nu \ , \label{eq:spel} \\ E_{{\rm eff},s}^{(n)} ({\bf p})&=&{\bf p}^2/2M_N-\Big(\frac{1}{2}\mu+\Sigma_{Kn}\Big)(1-\cos\theta)+\nu \ , \label{eq:spen} \\ E_{{\rm eff},s}^{(\Sigma^-)} ({\bf p})&=& {\bf p}^2/2M_N-\Big(-\frac{1}{2}\mu+\Sigma_{K\Sigma^-}\Big)(1-\cos\theta) +\delta M_{\Sigma^- N}-\mu+\nu \ , \label{eq:spes} \end{eqnarray} \end{subequations} where $M_N$ is the nucleon mass, $\delta M_{\Lambda N}$ (= 176 MeV) is the $\Lambda$-$N$ mass difference and $\delta M_{\Sigma^- N}$ (= 258 MeV) the $\Sigma^-$-$N$ mass difference. The ``kaon-hyperon sigma terms'' are defined by $\displaystyle\Sigma_{K\Lambda}\equiv -\left(\frac{5}{6}a_1+\frac{5}{6}a_2+2a_3\right)(m_u+m_s)$ and $\displaystyle \Sigma_{K\Sigma^-}\equiv -(a_2+2a_3)(m_u+m_s)$ (=$\Sigma_{Kn}$). It is to be noted that each term in Eqs.~(\ref{eq:spe}) contains both the kaon-baryon attraction of the scalar type simulated by the ``sigma term'' and the kaon-baryon interaction of the vector type proportional to $\mu$ the coefficient of which is given by the V-spin charge of each baryon. The meson contribution to the effective energy density, ${\cal E}_{\rm eff}^{\rm M}$, is given by the substitution of the classical kaon field (\ref{eq:kaon-field}) into the meson part of the effective Hamiltonian : \begin{equation} {\cal E}_{\rm eff}^{\rm M}=-\frac{1}{2}f^2\mu^2\sin^2\theta+f^2m_K^2(1-\cos\theta) \ , \label{eq:me} \end{equation} where $m_K\equiv [\Lambda_{\chi {\rm SB}}(m_u+m_s)]^{1/2}$, which is identified with the free kaon mass, and is replaced by the experimental value, 493.7 MeV. The lepton contribution to the effective energy density is given as \begin{equation} {\cal E}_{\rm eff}^{\rm e} =\frac{\mu^4}{4\pi^2}-\mu\frac{\mu^3}{3\pi^2} =-\frac{\mu^4}{12\pi^2} \label{eq:ee} \end{equation} with $\rho_e=\mu^3/(3\pi^2)$ for the ultrarelativistic electrons. \subsection{Baryon potentials} \label{subsec:pot} \ \ We introduce a potential energy density ${\cal E}_{\rm pot}$ as a local effective baryon-baryon interaction, which is assumed to be given by functions of the number densities of the relevant baryons\cite{bg97}. In order to take into account the baryon potential effects on both the whole energy of the system and the baryon single-particle energies consistently, we take the following prescription: The baryon potential $V_i$ ($i=p, \Lambda$, $n$, $\Sigma^-$) is defined as \begin{equation} V_i=\partial{\cal E}_{\rm pot}/\partial\rho_i \label{eq:vi} \end{equation} with $\rho_i$ being the number density of baryon $i$, and it is added to each effective single particle energy, $E_{{\rm eff},s}^{(i)}({\bf p})\rightarrow {E'}_{{\rm eff},s}^{(i)}({\bf p})=E_{{\rm eff},s}^{(i)}({\bf p})+V_i$. The potential energy density ${\cal E}_{\rm pot}$ is added to the total effective energy density ${\cal E}^{\rm eff}$, and the term $\displaystyle\sum_{i=p, \Lambda, n,\Sigma^-}\rho_i V_i$ is subtracted to avoid the double counting of the baryon interaction energies in the sum over the effective single particle energies ${E'}_{{\rm eff},s}^{(i)}({\bf p})$. 
Accordingly, the baryon part of the effective energy density is modified as \begin{eqnarray} {\cal E}_{\rm eff}'^{\rm B}&=&\sum_{i=p, \Lambda, n, \Sigma^-} \sum_{\stackrel{|{\bf p}| \leq |{\bf p}_F(i)|}{s=\pm1/2}}{E'}_{{\rm eff},s}^{(i)} ({\bf p})+{\cal E}_{\rm pot}-\sum_{i=p, \Lambda, n, \Sigma^-} \rho_i V_i \cr &=&\frac{3}{5}\frac{(3\pi^2)^{2/3}}{2M_N} (\rho_p^{5/3}+\rho_\Lambda^{5/3}+\rho_n^{5/3}+\rho_{\Sigma^-}^{5/3})+(\rho_\Lambda\delta M_{\Lambda p}+\rho_{\Sigma^-}\delta M_{\Sigma^- n}) + {\cal E}_{\rm pot}\cr &-&\Bigg\lbrace\rho_p(\mu+\Sigma_{Kp})+\rho_\Lambda\Sigma_{K\Lambda}+\rho_n\Big(\frac{1}{2}\mu+\Sigma_{Kn}\Big) +\rho_{\Sigma^-}\Big(-\frac{1}{2}\mu+\Sigma_{K\Sigma^-}\Big)\Bigg\rbrace(1-\cos\theta) \cr &+& \mu(\rho_p-\rho_{\Sigma^-})+\nu\rho_{\rm B} \ . \label{eq:eb} \end{eqnarray} The total effective energy density ${\cal E}'_{\rm eff}$ is obtained as the sum of the baryon, meson, and lepton parts whose explicit forms are given by Eqs.~(\ref{eq:eb}), (\ref{eq:me}), and (\ref{eq:ee}), respectively. For later convenience, we also show the total energy density ${\cal E}'$ including the potential contribution for baryons: \begin{eqnarray} {\cal E}'&=&\frac{3}{5}\frac{(3\pi^2)^{2/3}}{2M_N}\Big(\rho_p^{5/3}+\rho_\Lambda^{5/3}+\rho_n^{5/3}+\rho_{\Sigma^-}^{5/3}\Big) \cr &+&(\rho_\Lambda\delta M_{\Lambda p}+\rho_{\Sigma^-}\delta M_{\Sigma^- n}) +{\cal E}_{\rm pot} \cr &-&\left(\rho_p \Sigma_{Kp}+\rho_\Lambda \Sigma_{K\Lambda}+\rho_n \Sigma_{Kn}+\rho_{\Sigma^-} \Sigma_{K\Sigma^-}\right)(1-\cos\theta) \cr &+&\frac{1}{2}f^2\mu^2\sin^2\theta+f^2m_K^2(1-\cos\theta) +\mu^4/(4\pi^2) \ , \label{eq:te2} \end{eqnarray} where the first term on the right hand side denotes the baryon kinetic energy, the second term comes from the mass difference between the hyperons and nucleons, the third term the baryon potential energy, the fourth term the $s$-wave kaon-baryon scalar interaction brought about by the kaon-baryon sigma terms, the fifth and sixth terms the free parts of the condensed kaon energy (kinetic energy and free mass), and the last term stands for the lepton kinetic energy. 
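As a quick numerical orientation for the scalar-interaction scale entering Eq.~(\ref{eq:te2}) (a consistency check rather than a new result), inserting the quark masses and couplings $a_i$ quoted in Sec.~\ref{subsec:kbint}, and taking $\Lambda_{\chi{\rm SB}}\simeq 1$ GeV, gives \begin{eqnarray} \Sigma_{Kn}&=&-(a_2+2a_3)(m_u+m_s) =\left\{ \begin{array}{ll} -(0.56-1.8)\times 246~{\rm MeV}\simeq 305~{\rm MeV} & (a_3=-0.9) \ , \\ -(0.56-1.4)\times 246~{\rm MeV}\simeq 207~{\rm MeV} & (a_3=-0.7) \ , \end{array} \right. \nonumber \end{eqnarray} which reproduces the two values of $\Sigma_{Kn}$ adopted in Sec.~\ref{subsec:kbint}, while $m_K=[\Lambda_{\chi{\rm SB}}(m_u+m_s)]^{1/2}\simeq 496$ MeV, close to the experimental value 493.7 MeV by which it is replaced above.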
For the hyperonic matter composed of $p$, $\Lambda$, $n$, and $\Sigma^-$, the potential energy density ${\cal E}_{\rm pot}$ is given by \begin{eqnarray} {\cal E}_{\rm pot}&=&\frac{1}{2}\Big\lbrack a_{\rm NN}(\rho_{\rm p}+\rho_{\rm n})^2+b_{\rm NN}(\rho_{\rm p}-\rho_{\rm n})^2 +c_{\rm NN}(\rho_{\rm p}+\rho_{\rm n})^{\delta+1} \Big\rbrack\cr &+& a_{\rm \Lambda N}(\rho_{\rm p}+\rho_{\rm n}){\rho_\Lambda}+c_{\rm \Lambda N}\Bigg\lbrack\frac{(\rho_{\rm p} +\rho_{\rm n})^{\gamma+1}}{\rho_{\rm p}+\rho_{\rm n}+{\rho_\Lambda}}{\rho_\Lambda} +\frac{{\rho_\Lambda}^{\gamma+1}}{\rho_{\rm p} +\rho_{\rm n}+{\rho_\Lambda}}(\rho_{\rm p} +\rho_{\rm n})\Bigg\rbrack + \frac{1}{2}( a_{YY}{\rho_\Lambda}^2 +c_{\rm YY}{\rho_\Lambda}^{\gamma+1}) \cr &+&a_{\rm \Sigma N}(\rho_{\rm p}+\rho_{\rm n}){\rho_{\Sigma^-}}+b_{\rm \Sigma N}(\rho_{\rm n}-\rho_{\rm p}){\rho_{\Sigma^-}} + c_{\rm \Sigma N}\Bigg\lbrack\frac{(\rho_{\rm p} +\rho_{\rm n})^{\gamma+1}}{\rho_{\rm p}+\rho_{\rm n}+{\rho_{\Sigma^-}}}{\rho_{\Sigma^-}} +\frac{{\rho_{\Sigma^-}}^{\gamma+1}}{\rho_{\rm p} +\rho_{\rm n}+\rho_{\Sigma^-}}(\rho_{\rm p} +\rho_{\rm n})\Bigg\rbrack \cr &+&a_{\rm YY}{\rho_{\Sigma^-}}{\rho_\Lambda}+c_{\rm YY}\Bigg\lbrack \frac{{\rho_{\Sigma^-}}^{\gamma+1}}{{\rho_{\Sigma^-}}+{\rho_\Lambda}}{\rho_\Lambda}+\frac{{\rho_\Lambda}^{\gamma+1}}{{\rho_{\Sigma^-}}+{\rho_\Lambda}}{\rho_{\Sigma^-}}\Bigg\rbrack +\frac{1}{2}\Big\lbrack (a_{\rm YY} +b_{\Sigma\Sigma}){\rho_{\Sigma^-}}^2 +c_{\rm YY}{\rho_{\Sigma^-}}^{\gamma+1}\Big\rbrack \ . \label{eq:epot} \end{eqnarray} The parameters in the potential energy density (\ref{eq:epot}) are determined as follows: (i) The parameters $a_{NN}$ and $c_{NN}$ in the $NN$ part are fixed so as to reproduce the standard nuclear saturation density $\rho_0$=0.16 fm$^{-3}$ and the binding energy $-$16 MeV in symmetric nuclear matter. With the parameters $a_{NN}$, $c_{NN}$, and $\delta$, the incompressibility $K$ in symmetric nuclear matter is obtained. The parameter $b_{NN}$ for the isospin-dependent term in the $NN$ part is chosen to reproduce the empirical value of the symmetry energy $\sim$ 30 MeV at $\rho_B=\rho_0$. (ii) For the $YN$ parts, $a_{\Lambda N}$ and $c_{\Lambda N}$ are basically taken to be the same as those in Ref.~\cite{bg97}, where the single $\Lambda$ orbitals in ordinary hypernuclei are reasonably fitted. The depth of the $\Lambda$ potential in nuclear matter is then given as $V_{\Lambda}(\rho_p=\rho_n=\rho_0/2)=a_{\Lambda N}\rho_0+c_{\Lambda N}\rho_0^\gamma$=$-$27 MeV\cite{mdg88}. The depth of the $\Sigma^-$ potential $V_{\Sigma^-}$ in nuclear matter is taken to be repulsive, following recent theoretical calculations\cite{kf00,fk01} and the phenomenological analyses on the ($K^-$, $\pi^\pm $) reactions at BNL\cite{b99,d99}, ($\pi^-$, $K^+$) reactions at KEK\cite{n02,dr04,hh05}, and the $\Sigma^-$ atom data\cite{mfgj95}: $V_{\Sigma^-}(\rho_p=\rho_n=\rho_0/2)=a_{\Sigma N}\rho_0+c_{\Sigma N}\rho_0^\gamma$=23.5 MeV and $b_{\Sigma N}\rho_0$=40.2 MeV. This choice of the parameters corresponds to the values in Ref.~\cite{d99} based on the Nijmegen model F. (iii) Since the experimental information on the $YY$ interactions is not enough, we take the same parameters for the $YY$ part as those in Ref.~\cite{bg97}. Taking into account the conditions (i) $\sim$ (iii), we adopt the following two parameter sets throughout this paper: (A) $\delta$=$\gamma$=5/3. In this case, one obtains $K$=306 MeV, which is larger than the standard empirical value 210$\pm$30 MeV\cite{b80}. (B) $\delta$=4/3 and $\gamma$=2.0. 
From the choice $\delta$=4/3, one obtains $K$=236 MeV which lies within the empirical value. The choice $\gamma$=2.0 leads to the stiffer EOS for hyperonic matter at high densities compared with the case (A). Numerical values of the parameter sets (A) and (B) are listed in Table~\ref{tab:para}. Here we abbreviate the EOS for hyperonic matter with the use of (A) and (B) as H-EOS (A) and H-EOS (B), respectively. \begin{table}[h] \caption{Parameters in the potential energy density. ($^{\rm a}$MeV$\cdot$fm$^3$, \ $^{\rm b}$MeV$\cdot$fm$^{3\gamma}$, \ $^{\rm c}$MeV$\cdot$fm$^{3\delta}$)} \label{tab:para} \begin{center} \begin{tabular}{c|cr|cr|cr} \hline H-EOS & parameter & & parameter & & parameter & \\ \hline & $\gamma$ & 5/3 & ${a_{\Lambda N}}^{\rm a}$ & $-$387.0 & ${a_{YY}}^{\rm a}$ & $-$552.6 \\ & $\delta$ & 5/3 & ${c_{\Lambda N}}^{\rm b}$ & 738.8 & ${c_{YY}}^{\rm b}$ & 1055.4 \\ (A) & ${a_{NN}}^{\rm a}$ & $-$914.2 & ${a_{\Sigma N}}^{\rm a}$ & $-$70.9 & ${b_{\Sigma\Sigma}}^{\rm a}$ & 428.4 \\ & ${b_{NN}}^{\rm a}$ & 212.8 & ${b_{\Sigma N}}^{\rm a}$ & 251.3 & & \\ & ${c_{NN}}^{\rm c}$ & 1486.4 & ${c_{\Sigma N}}^{\rm b}$ & 738.8 & & \\\hline & $\gamma$ & 2 & ${a_{\Lambda N}}^{\rm a}$ & $-$342.8 & ${a_{YY}}^{\rm a}$ & $-$486.2 \\ & $\delta$ & 4/3 & ${c_{\Lambda N}}^{\rm b}$ & 1087.5 & ${c_{YY}}^{\rm b}$ & 1553.6 \\ (B) & ${a_{NN}}^{\rm a}$ & $-$1352.3 & ${a_{\Sigma N}}^{\rm a}$ & $-$27.1 & ${b_{\Sigma\Sigma}}^{\rm a}$ & 428.4 \\ & ${b_{NN}}^{\rm a}$ & 212.8 & ${b_{\Sigma N}}^{\rm a}$ & 251.3 & & \\ & ${c_{NN}}^{\rm c}$ & 1613.9 & ${c_{\Sigma N}}^{\rm b}$ & 1087.5 & & \\\hline \end{tabular} \end{center} \end{table} \subsection{Physical constraints} \label{subsec:conditions} \ \ The energy density and physical quantities in the ground state are obtained variationally by the extremization of the total effective energy density ${\cal E}_{\rm eff}'$ with respect to $\theta$, $\mu$, and each number density of the baryon $i$ at a given density $\rho_{\rm B}$. From $\partial{\cal E}_{\rm eff}'/\partial\theta=0$, one obtains the classical field equation for $\theta$, \begin{equation} \sin\theta\Bigg\lbrack \mu^2\cos\theta-m_K^2+\frac{\mu}{f^2}\Big(\rho_p+\frac{1}{2}\rho_n-\frac{1}{2}\rho_{\Sigma^-}\Big)+\frac{1}{f^2}\sum_{i=p, \Lambda, n, \Sigma^-}\rho_i\Sigma_{Ki}\Bigg\rbrack=0 \ . \label{eq:theta} \end{equation} From $\partial{\cal E}_{\rm eff}'/\partial\mu=0$, one obtains the charge neutrality condition, \begin{equation} \rho_p-\rho_{\Sigma^-}-\rho_{K^-}-\rho_e=0 \ , \label{eq:charge} \end{equation} where the number density of the kaon condensates $\rho_{K^-}$ is given as \begin{equation} \rho_{K^-} = \mu f^2\sin^2\theta+\left(\rho_p+\frac{1}{2}\rho_n-\frac{1}{2}\rho_{\Sigma^-}\right)(1-\cos\theta) \ . \label{eq:rhok} \end{equation} From $\partial{\cal E}_{\rm eff}'/\partial\nu=\rho_{\rm B}$, one obtains the baryon number conservation, \begin{equation} \sum_{i=p, \Lambda, n, \Sigma^-}\rho_i=\rho_{\rm B} \ . 
\end{equation} The chemical equilibrium conditions for the weak interaction processes (\ref{eq:chemeq}), $n\rightleftharpoons p e^-(\bar\nu_e$), $n\rightleftharpoons \Lambda(\nu_e\bar\nu_e)$, $ne^-\rightleftharpoons \Sigma^-(\nu_e)$, are rewritten as: \begin{subequations}\label{eq:wequil} \begin{eqnarray} \mu_n&=&\mu_p+\mu \ , \label{eq:wequil1} \\ \mu_\Lambda&=&\mu_n \ , \label{eq:wequil2} \\ \mu_{\Sigma^-}&=&\mu_n+\mu \ , \label{eq:wequil3} \end{eqnarray} \end{subequations} where the chemical potentials for the baryons are given by $\mu_i=\partial{\cal E}'/\partial\rho_i$ with the help of Eqs.~(\ref{eq:te2}), (\ref{eq:theta}) and (\ref{eq:rhok}) : \begin{subequations}\label{eq:mu} \begin{eqnarray} \mu_n& = &\frac{(3\pi^2\rho_n)^{2/3}}{2M_N} -\Big(\frac{1}{2}\mu+\Sigma_{Kn}\Big)(1-\cos\theta)+V_n \ , \label{eq:mun} \\ \mu_p& = & \frac{(3\pi^2\rho_p)^{2/3}}{2M_N} -(\mu+\Sigma_{Kp})(1-\cos\theta)+V_p \ , \label{eq:mup} \\ \mu_\Lambda& = & \frac{(3\pi^2\rho_\Lambda)^{2/3}}{2M_N} -\Sigma_{K\Lambda}(1-\cos\theta) +\delta M_{\Lambda N}+V_\Lambda \ , \label{eq:mul} \\ \mu_{\Sigma^-}& = & \frac{(3\pi^2\rho_{\Sigma^-})^{2/3}}{2M_N} -\Big(-\frac{1}{2}\mu+\Sigma_{K\Sigma^-}\Big)(1-\cos\theta) +\delta M_{\Sigma^- N}+V_{\Sigma^-} \ . \label{eq:mus} \end{eqnarray} \end{subequations} \section{Composition of matter in the noncondensed phase} \label{sec:fraction} \ \ The critical density satisfying Eq.~(\ref{eq:onset}) depends sensitively on the density dependence of $\mu$, which is also affected by the matter composition through the relation $\mu=\mu_e=(3\pi^2 \rho_e)^{1/3}$. \begin{figure}[!] \noindent\begin{minipage}[t]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig1.eps} \caption{Particle fractions $\rho_i/\rho_{\rm B}$ in the noncondensed hyperonic matter as functions of baryon number density $\rho_{\rm B}$. The H-EOS (A) is used. } \label{fig:frac-ha} \end{center} \end{minipage}~ \begin{minipage}[t]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig2.eps} \caption{The same as in Fig.~1, but for H-EOS (B). } \label{fig:frac-hb} \end{center} \end{minipage} \end{figure} Thereby, before going into detail on the onset density of kaon condensation, we address behaviors of particle fractions in the noncondensed hyperonic matter. In Figs.~\ref{fig:frac-ha} and \ref{fig:frac-hb}, particle fractions $\rho_i/\rho_{\rm B}$ ($i=p, \Lambda, n, \Sigma^- $, $e^-$) in the noncondensed hyperonic matter are shown as functions of baryon number density $\rho_{\rm B}$ for H-EOS (A) and (B), respectively. In both figures, the dashed lines stand for the ratio of the total negative strangeness number density $\rho_{\rm strange}$(=$\rho_\Lambda+\rho_{\Sigma^-}$) to the baryon number density $\rho_{\rm B}$. In the case of H-EOS (A), the $\Lambda$ hyperon starts to be mixed at $\rho_{\rm B}$ = $\rho_{\rm B}^c(\Lambda)$ = 0.340 fm$^{-3}$ (= 2.13 $\rho_0$) and the $\Sigma^-$ hyperon does at a higher density, $\rho_{\rm B}$= $\rho_{\rm B}^c(\Sigma^-)$ = 0.525 fm$^{-3}$ (= 3.28 $\rho_0$) (Fig.~\ref{fig:frac-ha}). In the case of H-EOS (B) , both hyperons start to be mixed at higher densities than the case of H-EOS (A), i.e., at $\rho_{\rm B}$ = $\rho_{\rm B}^c(\Lambda)$ $\sim$ 0.44 fm$^{-3}$ (= 2.69 $\rho_0$) for the $\Lambda$ and $\rho_{\rm B}$ = $\rho_{\rm B}^c(\Sigma^-)$ $\sim$ 0.59 fm$^{-3}$ (= 3.69 $\rho_0$) for the $\Sigma^-$, respectively (Fig.~\ref{fig:frac-hb}). 
In fact, the condition for the $\Lambda$-mixing in the ($n$, $p$, $e^-$) matter, for instance, is written by the use of Eqs.~(\ref{eq:mun}) and (\ref{eq:mul}) as \begin{equation} \delta M_{\Lambda N}+V_\Lambda \leq \frac{(3\pi^2\rho_n)^{2/3}}{2M_N}+V_n \ , \label{eq:lambda} \end{equation} where $V_\Lambda=a_{\Lambda N}\rho_{\rm B}+c_{\Lambda N}\rho_{\rm B}^\gamma$ and $\displaystyle V_n=a_{NN}\rho_{\rm B}-b_{NN}(\rho_p-\rho_n)+\frac{1}{2}c_{NN}(\delta+1)\rho_{\rm B}^\delta$. In the case of H-EOS (B), the index $\delta$ (=4/3) is smaller than that for H-EOS (A) (=5/3), which makes the repulsive interaction of $V_n$ smaller than that for H-EOS (A). Furthermore, the index $\gamma$ (=2) is larger than that for H-EOS (A) (=5/3), which makes the repulsive interaction of $V_\Lambda$ larger than that for H-EOS (A). Both effects push up the threshold density for the condition (\ref{eq:lambda}) as compared with the case of H-EOS (A). In general, the smaller value of the index $\delta$ simulating the higher-order terms of the repulsive nucleon-nucleon interactions gives smaller potential energy contributions for the nucleons. The larger value of the index $\gamma$ simulating the higher-order repulsive terms of the hyperon-nucleon and hyperon-hyperon interactions gives larger potential energy contributions for the hyperons. As a result, the beta equilibrium conditions for the hyperons, $n\rightleftharpoons \Lambda \ (\nu_e\bar\nu_e)$, $n e^-\rightleftharpoons \Sigma^- \ (\nu_e)$, are satisfied at higher densities for H-EOS (B) than in the case of H-EOS (A). It should be noted that, in the case of H-EOS (B), the $\Lambda$ and $\Sigma^-$ start to appear in the ground state of neutron-star matter such that the mixing ratios, $\rho_\Lambda/\rho_{\rm B}$, $\rho_{\Sigma^-}/\rho_{\rm B}$, increase discontinuously from zero to finite nonzero values above certain densities. This is a different behavior from the usual one, where the hyperon-mixing ratios increase continuously from zero as density increases, as in the case of H-EOS (A). (See the Appendix.) One can see common behavior with regard to the density dependence of each particle fraction for H-EOS (A) and H-EOS (B): As the hyperons dominate the matter composition with increase in $\rho_{\rm B}$, the electron fraction decreases. In particular, the negative charge of the electron is taken over by that of the $\Sigma^-$ hyperon, so that the electron fraction decreases rapidly, while the $\Sigma^-$ fraction increases as the density increases. The proton fraction increases so as to compensate for the negative charge of the $\Sigma^-$. At high densities, the $\Sigma^-$ and proton fractions amount to (20$-$30) \%, the $\Lambda$ fraction to (30$-$40) \%, and the fraction of total negative strangeness to (50$-$60) \%. As a result of the increase in the fractions of the proton, $\Lambda$, and $\Sigma^-$, the neutron fraction decreases rapidly with increase in $\rho_{\rm B}$. \section{Validity of continuous phase transition} \label{sec:validity} \ \ In ordinary neutron-star matter without hyperon-mixing, the onset density for kaon condensation is given by the condition, $\omega=\mu$ [Eq.~(\ref{eq:onset})].
Here the lowest energy $\omega$ for $K^-$ is obtained from the zero point of the inverse propagator for $K^-$, $D_K^{-1}(\omega; \rho_{\rm B})$, which can be read off from the expansion of the total effective energy density ${\cal E}_{\rm eff}'$ with respect to the chiral angle $\theta$ around $\theta=0$: \begin{equation} {\cal E}_{\rm eff}'(\theta)={\cal E}_{\rm eff}'(0)-\frac{f^2}{2}D_K^{-1}(\mu; \rho_{\rm B})\theta^2+O(\theta^4) \ . \label{eq:dkinv} \end{equation} This onset condition, $D_K^{-1}(\mu; \rho_{\rm B})=0$, is equivalent to the nontrivial classical kaon-field equation (\ref{eq:theta}) with $\theta$ = 0, and is based on the assumption of the continuous phase transition: The chiral angle $\theta$, for instance, increases continuously from zero as $\rho_{\rm B}$ increases. In this section, we consider the validity of the assumption of the continuous phase transition to $K^-$ condensation in hyperonic matter. Numerical results are presented by the use of H-EOS (A) and H-EOS (B) for the noncondensed hyperonic matter EOS in Secs.~4.1 and 4.2, respectively. \subsection{Case of H-EOS (A) } \label{subsec:a} \ \ In Fig.~\ref{fig:w-a}, we show the lowest energies of the $K^-$ as functions of baryon number density $\rho_{\rm B}$ for $\Sigma_{Kn}$ = 305 MeV (bold solid line) and $\Sigma_{Kn}$ = 207 MeV (thin solid line) in the case of H-EOS (A). The dependence of the charge chemical potential $\mu$ (=$\mu_K=\mu_e$) on $\rho_{\rm B}$ is shown by the dotted line. The density at which the lowest $K^-$ energy $\omega$ crosses $\mu$ is denoted as $\rho_{\rm B}^{c(2)}(K^-)$. The charge chemical potential $\mu$ decreases with increase in density after the appearance of the negatively charged hyperon $\Sigma^-$, as seen in Fig.~\ref{fig:w-a}, so that the onset condition Eq.~(\ref{eq:onset}) is satisfied at a higher density than in the case of neutron-star matter without mixing of hyperons. From Fig.~\ref{fig:w-a}, one reads $\rho_{\rm B}^{c(2)}(K^-)$=0.6433~fm$^{-3}$ (=4.02$\rho_0$) for $\Sigma_{Kn}$=305 MeV and $\rho_{\rm B}^{c(2)}(K^-)$=0.9254~fm$^{-3}$ (=5.78$\rho_0$) for $\Sigma_{Kn}$=207 MeV. \begin{figure}[!] \begin{center} \includegraphics[height=.3\textheight]{fig3.eps} \caption{The lowest energies of $K^-$ as functions of baryon number density $\rho_{\rm B}$ for $\Sigma_{Kn}$ = 305 MeV (bold solid line) and $\Sigma_{Kn}$ = 207 MeV (thin solid line) in the case of H-EOS (A). } \label{fig:w-a} \end{center} \end{figure} Now we examine whether the state at $\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^-)$ is the true ground state or not, by considering the dependence of the total energy of the system at $\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^-)$ on the chiral angle $\theta$ and $\Sigma^-$-mixing ratio $\rho_{\Sigma^-}/\rho_B$. In Fig.~\ref{fig:contour-a}, the contour plots of the total energy per baryon ${\cal E}'/\rho_{\rm B}$ in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane at $\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^-)$ are depicted for $\Sigma_{Kn}$=305 MeV [Fig.~\ref{fig:contour-a}(a)] and $\Sigma_{Kn}$=207 MeV [Fig.~\ref{fig:contour-a}(b)] in the case of H-EOS (A). Note that ${\cal E}'/\rho_{\rm B}$ has been maximized with respect to $\mu$ and minimized with respect to the other remaining parameters $\rho_\Lambda/\rho_{\rm B}$ and $\rho_p/\rho_{\rm B}$. The energy interval between the contours is taken to be 0.2 MeV for $\Sigma_{Kn}$=305 MeV and 0.5 MeV for $\Sigma_{Kn}$=207 MeV.
For $\Sigma_{Kn}$=305 MeV [Fig.~\ref{fig:contour-a}(a)], one obtains a state satisfying the condition $\omega$=$\mu$ at a point, ($\theta$, $\rho_{\Sigma^-}/\rho_B$)=(0, 0.117), where ${\cal E}'/\rho_{\rm B}$=117.32 MeV (denoted as P). However, this point is not a minimum, but a saddle point in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane. A true minimum state exists at a different point, ($\theta$, $\rho_{\Sigma^-}/\rho_B$)=(0.70 rad, 0) in the plane (denoted as Q). This state Q stands for the fully-developed $K^-$-condensed state with no $\Sigma^-$-mixing. \begin{figure}[!] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.30\textheight]{fig4a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.30\textheight]{fig4b.eps} \end{center} \end{minipage} \caption{(a) Contour plot of the total energy per baryon ${\cal E}'/\rho_{\rm B}$ in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane at $\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^-)$ for $\Sigma_{Kn}$=305 MeV in the case of H-EOS (A). The energy interval is taken to be 0.2 MeV. (b) The same as in (a), but for $\Sigma_{Kn}$=207 MeV. The energy interval is taken to be 0.5 MeV. See the text for details.} \label{fig:contour-a} \end{figure} \begin{figure}[!] \begin{center} \includegraphics[height=.3\textheight]{fig5.eps} \end{center} \caption{The contributions from each term on the r.h.s. of Eq.~(12) to the total energy per baryon ${\cal E}'/\rho_{\rm B}$ as functions of the $\Sigma^-$-mixing ratio $\rho_{\Sigma^-}/\rho_{\rm B}$ at $\rho_{\rm B}$=$\rho_{\rm B}^{c(2)}(K^-)$ (=0.6433 fm$^{-3}$) for $\Sigma_{Kn}$=305 MeV (the solid lines). For comparison, those for the noncondensed state ($\theta$=0) are shown by the dashed lines. The H-EOS (A) is used for the hyperonic matter EOS. See the text for details.} \label{fig:contrib-1} \end{figure} In Fig.~\ref{fig:contrib-1}, the total energy per baryon ${\cal E}'/\rho_{\rm B}$ and the contributions to ${\cal E}'/\rho_{\rm B}$ from each term on the r.h.s. of Eq.~(\ref{eq:te2}) are shown as functions of the $\Sigma^-$-mixing ratio $\rho_{\Sigma^-}/\rho_{\rm B}$ at $\rho_{\rm B}$=$\rho_{\rm B}^{c(2)}(K^-)$ (= 0.6433 fm$^{-3}$) for $\Sigma_{Kn}$=305 MeV by the solid lines. At a given $\rho_{\Sigma^-}/\rho_{\rm B}$, the total energy per baryon ${\cal E}'/\rho_{\rm B}$ is minimized with respect to $\theta$, $\rho_p$, $\rho_\Lambda$, and maximized with respect to $\mu$. For comparison, those for the noncondensed state ($\theta$=0) are shown by the dashed lines. The state P satisfying the condition $\omega=\mu$ corresponds to the point at $\rho_{\Sigma^-}/\rho_{\rm B}$=0.117 on the bold solid line. The state Q denoting the absolute energy minimum with $\theta$=0.70 rad corresponds to the point at $\rho_{\Sigma^-}/\rho_{\rm B}$=0 on the bold solid line. From comparison of each energy contribution at the states P and Q, one can see that the hyperon ($Y$)-nucleon ($N$) mass difference mainly pushes up the total energy as the $\Sigma^-$-mixing ratio increases. The mixing of the $\Sigma^-$ hyperon slightly reduces the total kinetic energy of baryons by lowering the Fermi momentum of each baryon, while slightly enlarging the baryon potential energy contribution. These two effects compensate each other. The lepton energy contribution is little changed by the $\Sigma^-$-mixing.
Note that the sum of the kaon-baryon scalar interaction energy and free kaon energy consisting of the kaon free mass and kinetic energy is positive, and that it decreases as the $\Sigma^-$-mixing ratio increases. However, the decrease in the sum of the kaon-baryon scalar interaction energy and free kaon energy cannot compensate for the energy excess from the $Y$-$N$ mass difference as the $\Sigma^-$-mixing increases. It is to be noted that, for any value of the $\Sigma^-$-mixing ratio, all the energy contributions except for the sum of the kaon-baryon scalar interaction energy and free kaon energy have lower energy in the kaon-condensed state (solid lines) than in the noncondensed state (dashed lines). The more detailed numerical analysis shows the following behavior for the energy minima in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane in the vicinity of $\rho_{\rm B}^{c(2)}(K^-)$ : At a certain density below $\rho_{\rm B}^{c(2)}(K^-)$, a local minimum state Q' corresponding to the $K^-$-condensed state without the $\Sigma^-$-mixing appears in addition to the absolute minimum state P' corresponding to the noncondensed state with the $\Sigma^-$-mixing. [We denote this density as $\rho_{\rm B}^\ast (K^-; {\rm no}\ \Sigma^-)$.] As the density increases, the state Q' shifts to have a lower energy, and at a density, denoted as $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$, the energy values of the two minima P' and Q' get equal. Above $\rho_{\rm B}=\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$, the state Q' becomes the absolute minimum, having a lower energy than that of the state P'. In Table~\ref{tab:onset}, we show the typical densities, $\rho_{\rm B}^\ast (K^-; {\rm no}\ \Sigma^-)$, $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$ as well as $\rho_{\rm B}^{c(2)}(K^-)$. In the case of H-EOS (A) and $\Sigma_{Kn}$ = 305 MeV, there is a {\it discontinuous} transition from the noncondensed state of hyperonic matter with the $\Sigma^-$-mixing (the state P') to the $K^-$-condensed state without the $\Sigma^-$-mixing (the state Q') above $\rho_{\rm B}=\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$. This transition density $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$ is slightly lower than $\rho_{\rm B}^{c(2)}(K^-)$. \begin{table}[h] \caption{The typical densities associated with the appearance of $K^-$ condensates and $\Sigma^-$ hyperons. They are calculated with the EOS models for hyperonic matter, H-EOS (A) and H-EOS (B). The values in the parentheses for $\rho_{\rm B}^{c(2)}(K^-)$ mean that they don't correspond to the true energy minimum but the local minimum or the saddle point in the ($\theta$, $\rho_{\Sigma^-}/\rho_{\rm B}$) plane. See the text for details. } \label{tab:onset} \begin{center} \begin{tabular}{c|c||c|c|c||c|c} \hline H-EOS & $\Sigma_{Kn}$ & $\rho_{\rm B}^\ast(K^-; {\rm no }\ \Sigma^-$) & $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$ & $\rho_{\rm B}^{c(2)}(K^-)$ & $\rho_{\rm B}^\ast(K^-; \Sigma^-)$ & $\rho_{\rm B}^{c(1)}(K^-; \Sigma^-)$ \\ & (MeV) & (fm$^{-3}$) & (fm$^{-3}$) & (fm$^{-3}$) & (fm$^{-3}$) & (fm$^{-3}$) \\\hline (A) & 305 & 0.5782 & 0.6135 & (0.6433) & 1.011 & 1.039 \\ & 207 & 0.8280 & $-$ & 0.9254 & $-$ & $-$ \\\hline (B) & 305 & $-$ & $-$ & 0.5504 & 1.006 & 1.069 \\ & 207 & 0.7084 & 0.9086 & (0.9189) & 0.9189 & 1.170 \\\hline \end{tabular} \end{center} \end{table} Next we proceed to the case of H-EOS (A) and $\Sigma_{Kn}$=207 MeV. As seen from Fig.~\ref{fig:contour-a}(b), the state P is an absolute minimum. 
Therefore, the assumption of the continuous transition is kept valid, and the onset density for kaon condensation is given by $ \rho_{\rm B}^{c(2)}(K^-)$. For this weaker kaon-baryon scalar attraction case, the critical density $\rho_{\rm B}^{c(2)}(K^-)$ is far beyond the onset density of $\Sigma^-$, $\rho_{\rm B}^c(\Sigma^-)$ (=0.52 fm$^{-3}$), so that the concentration of the $\Sigma^-$ hyperon in matter is not affected much by the appearance of kaon condensates. However, it should be noted that there still exists a kaon-condensed local minimum Q ($\theta$, $\rho_{\Sigma^-}/\rho_B$)=(1.0 rad, 0) without the $\Sigma^-$-mixing. Indeed, the local minimum (the state Q') exists from a fairly lower density $\rho_{\rm B}^\ast(K^-; {\rm no} \ \Sigma^-)$ (=0.8280 fm$^{-3}$) than $\rho_{\rm B}^{c(2)}(K^-)$ (=0.9254 fm$^{-3}$). [See Table~\ref{tab:onset}.] \subsection{Case of H-EOS (B) } \label{subsec:b} \ \ In the case of H-EOS (B) for the hyperonic matter EOS, both the $\Lambda$ and $\Sigma^-$ start to be mixed at higher densities as compared with the case of H-EOS (A) [Sec.~\ref{sec:fraction}]. In Fig.~\ref{fig:w-b}, we show the lowest energies of the $K^-$ as functions of baryon number density $\rho_{\rm B}$ for $\Sigma_{Kn}$ = 305 MeV (bold solid line) and $\Sigma_{Kn}$ = 207 MeV (thin solid line). \begin{figure}[h] \begin{center} \includegraphics[height=.3\textheight]{fig6.eps} \caption{The lowest energies of $K^-$ as functions of baryon number density $\rho_{\rm B}$ for $\Sigma_{Kn}$ = 305 MeV (bold solid line) and $\Sigma_{Kn}$ = 207 MeV (thin solid line) in the case of H-EOS (B). } \label{fig:w-b} \end{center} \end{figure} In Fig.~\ref{fig:contour-b}, the contour plots of the total energy per baryon ${\cal E}'/\rho_{\rm B}$ in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane at $\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^-)$ are depicted for $\Sigma_{Kn}$=305 MeV [Fig.~\ref{fig:contour-b}(a)] and $\Sigma_{Kn}$=207 MeV [Fig.~\ref{fig:contour-b}(b)] in the case of H-EOS (B). \begin{figure}[!] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig7a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig7b.eps} \end{center} \end{minipage} \caption{(a) Contour plot of the total energy per baryon ${\cal E}'/\rho_{\rm B}$ in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane at $\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^-)$ for $\Sigma_{Kn}$=305 MeV in the case of H-EOS (B). The energy interval is taken to be 0.2 MeV. (b) The same as in (a), but for $\Sigma_{Kn}$=207 MeV. The energy interval is taken to be 0.5 MeV. See the text for details. } \label{fig:contour-b} \end{figure} From Fig.~\ref{fig:w-b}, the critical density $\rho_{\rm B}^{c(2)}(K^-)$ is read as $\rho_{\rm B}^{c(2)}(K^-)$=0.5504~fm$^{-3}$ (=3.44$\rho_0$) for $\Sigma_{Kn}$=305 MeV and $\rho_{\rm B}^{c(2)}(K^-)$=0.9189~fm$^{-3}$ (=5.74$\rho_0$) for $\Sigma_{Kn}$=207 MeV. For $\Sigma_{Kn}$=305 MeV, the condition $\omega=\mu$ [Eq.~(\ref{eq:onset}) ] is satisfied before mixing of the $\Sigma^-$ starts, i.e., $\rho_{\rm B}^{c(2)}(K^-) < \rho_{\rm B}^{c}(\Sigma^-)$. As seen from Fig.~\ref{fig:contour-b}(a), the corresponding state P is the absolute minimum for the total energy per baryon ${\cal E}'/\rho_{\rm B}$ in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane at $\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^-)$, and there is no local minimum with kaon condensates without the $\Sigma^-$-mixing. 
Therefore, the assumption of the continuous phase transition does not lose its validity when the $\Sigma^-$ hyperons are not mixed in the ground state. On the other hand, for $\Sigma_{Kn}$=207 MeV, the state P satisfying the condition $\omega=\mu$ is obtained as a local minimum at a point ($\theta$, $\rho_{\Sigma^-}/\rho_{\rm B}$)=(0, 0.200) in the presence of the $\Sigma^-$, and there is an absolute minimum with kaon condensates without the $\Sigma^-$-mixing (the state Q in Fig.~\ref{fig:contour-b}(b)) at a point ($\theta$, $\rho_{\Sigma^-}/\rho_{\rm B}$)=(1.02 rad, 0). As compared with the case of H-EOS (A), the critical density $\rho_{\rm B}^{c(2)}(K^-)$ is not so far from the onset density of the $\Sigma^-$, $\rho_{\rm B}^c(\Sigma^-)$. [ $\rho_{\rm B}^{c(2)}(K^-)-\rho_{\rm B}^c(\Sigma^-)$ = 2.1 $\rho_0$ for H-EOS (B), while $\rho_{\rm B}^{c(2)}(K^-)-\rho_{\rm B}^c(\Sigma^-)$ = 2.5 $\rho_0$ for H-EOS (A). ] As a result, competition between the $\Sigma^-$ and $K^-$ condensates is more remarkable in the case of H-EOS (B) and $\Sigma_{Kn}$ = 207 MeV than in the case of H-EOS (A) and $\Sigma_{Kn}$ = 207 MeV, making the state Q energetically more favorable than the state P. From Table~\ref{tab:onset}, one can see the common behavior as the case of H-EOS (A) and $\Sigma_{Kn}$ = 305 MeV concerning the appearance of $K^-$ condensates and $\Sigma^-$ hyperons : There is a {\it discontinuous} transition from the noncondensed state of hyperonic matter with the $\Sigma^-$-mixing to the $K^-$-condensed state without the $\Sigma^-$-mixing above $\rho_{\rm B}=\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$, and this transition density $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$ is slightly lower than $\rho_{\rm B}^{c(2)}(K^-)$. \section{Equation of State} \label{sec:eos} \subsection{Two energy minima with and without the $\Sigma^-$-mixing for the $K^-$-condensed phase} \label{subsubsec:two-minima} \ \ Here we discuss the EOS of the $K^-$-condensed phase in hyperonic matter. The total energies per baryon in the $K^-$-condensed phase, ${\cal E}'/\rho_{\rm B}$, as functions of the baryon number density $\rho_{\rm B}$ are shown in Fig.~\ref{fig:eos}. Fig.~\ref{fig:eos} (a) is for H-EOS (A), and (b) is for H-EOS (B). The bold (thin) lines are for $\Sigma_{Kn}$ = 305 MeV ($\Sigma_{Kn}$ = 207 MeV). The solid lines stand for the total energies per baryon for the $K^-$-condensed state with the $\Sigma^-$-mixing, while the dashed lines for the $K^-$-condensed state without the $\Sigma^-$-mixing. For comparison, the energy per baryon for the noncondensed hyperonic matter is shown by the dotted line. \begin{figure}[h] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig8a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig8b.eps} \end{center} \end{minipage} \caption{(a) The total energies per baryon in the $K^-$-condensed phase, ${\cal E}'/\rho_{\rm B}$, as functions of the baryon number density $\rho_{\rm B}$ for H-EOS (A). The bold (thin) lines are for $\Sigma_{Kn}$ = 305 MeV ($\Sigma_{Kn}$ = 207 MeV). The solid lines stand for the total energies per baryon for the $K^-$-condensed state with the $\Sigma^-$-mixing, while the dashed lines for the $K^-$- condensed state without the $\Sigma^-$-mixing. For comparison, the energy per baryon for the noncondensed hyperonic matter [H-EOS (A)] is shown by the dotted line. 
(b) The same as in (a), but for H-EOS (B).} \label{fig:eos} \end{figure} In each case of the model EOS for hyperonic matter and the $Kn$ sigma term $\Sigma_{Kn}$, there are two solutions of the kaon-condensed phase corresponding to two minima in the ($\theta$, $\rho_{\Sigma^-}/\rho_{\rm B}$) plane at some density intervals: One is the $K^-$-condensed state without the $\Sigma^-$-mixing (dashed lines) called the state Q' in Sec.~\ref{sec:validity}, and the other is with the $\Sigma^-$-mixing (solid lines), which we call the state R'. The density at which the state R' appears as a local minimum is denoted as $\rho_{\rm B}^\ast(K^-; \Sigma^-)$. For example, we show the contour plots of the total energy per baryon ${\cal E}'/\rho_{\rm B}$ in the ($\theta$, $\rho_{\Sigma^-}/\rho_{\rm B}$) plane at $\rho_{\rm B}=\rho_{\rm B}^\ast(K^-; \Sigma^-)$ for $\Sigma_{Kn}$=305 MeV in the case of H-EOS (A) in Fig.~\ref{fig:contour2}(a) and H-EOS (B) in Fig.~\ref{fig:contour2}(b). \begin{figure}[!] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig9a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig9b.eps} \end{center} \end{minipage} \caption{(a) Contour plot of the total energy per baryon ${\cal E}'/\rho_{\rm B}$ in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane at $\rho_{\rm B}=\rho_{\rm B}^\ast(K^-; \Sigma^-)$ for $\Sigma_{Kn}$=305 MeV in the case of H-EOS (A). The energy interval is taken to be 0.05 MeV. (b) The same as in (a), but in the case of H-EOS (B). The energy interval is taken to be 0.2 MeV. See the text for details.} \label{fig:contour2} \end{figure} In Fig.~\ref{fig:cep}, we also show the dependence of the baryon potentials $V_{\Sigma^-}$, $V_n$, the neutron chemical potential $\mu_n$, and the difference of the $\Sigma^-$ and charge chemical potentials, $\mu_{\Sigma^-}-\mu$, on the $\Sigma^-$-mixing ratio $\rho_{\Sigma^-}/\rho_{\rm B}$ in the kaon-condensed phase at $\rho_{\rm B}=\rho_{\rm B}^\ast(K^-; \Sigma^-)$. Fig.~\ref{fig:cep}(a) is for $\Sigma_{Kn}$=305 MeV with H-EOS (A) and Fig.~\ref{fig:cep}(b) is for $\Sigma_{Kn}$=305 MeV with H-EOS (B). \begin{figure}[!] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig10a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig10b.eps} \end{center} \end{minipage} \caption{(a) Dependence of $V_{\Sigma^-}$, $V_n$, and $\mu_{\Sigma^-}-\mu$, $\mu_n$ on the $\Sigma^-$-mixing ratio $\rho_{\Sigma^-}/\rho_{\rm B}$ in the kaon-condensed phase at $\rho_{\rm B}=\rho_{\rm B}^\ast(K^-; \Sigma^-)$ for $\Sigma_{Kn}$=305 MeV with H-EOS (A). (b) The same as in (a), but in the case of H-EOS (B). } \label{fig:cep} \end{figure} One can see that the chemical potential difference $\mu_{\Sigma^-}-\mu$ has a minimum at a finite value of $\rho_{\Sigma^-}/\rho_{\rm B}$ (= 0.12 $-$ 0.14) and that the neutron chemical potential $\mu_n$ decreases monotonically with increase in the $\Sigma^-$-mixing ratio within the range $ \rho_{\Sigma^-}/\rho_{\rm B}$ = 0.0 $-$ 0.3. As a result, the chemical equilibrium condition, $\mu_n=\mu_{\Sigma^-}-\mu$, for the weak process $ne^-\rightleftharpoons \Sigma^-(\nu_e)$ is met at a finite $\Sigma^-$-mixing ratio (= 0.07 $-$ 0.1), which corresponds to the appearance of the state R' in Fig.~\ref{fig:contour2}. 
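Locating this crossing numerically is straightforward; purely as an illustration (the profiles below are schematic stand-ins that only mimic the qualitative shapes described above, not the calculated curves plotted in Fig.~\ref{fig:cep}), the equilibrium mixing ratio can be obtained by bracketing the root of $\mu_n-(\mu_{\Sigma^-}-\mu)$ in $x=\rho_{\Sigma^-}/\rho_{\rm B}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Schematic stand-ins (in MeV): mu_n decreases monotonically with the
# Sigma^- fraction x, while mu_Sigma - mu has a shallow minimum near x ~ 0.13.
# These toy profiles are NOT the calculated ones shown in the figure.
mu_n      = lambda x: 394.0 - 150.0 * x
mu_sig_mu = lambda x: 380.0 + 600.0 * (x - 0.13)**2

# chemical equilibrium for n + e^- <-> Sigma^- (+ nu_e):  mu_n = mu_Sigma - mu
f = lambda x: mu_n(x) - mu_sig_mu(x)
x_eq = brentq(f, 0.0, 0.3)
print(f"equilibrium Sigma^- mixing ratio: {x_eq:.3f}")
\end{verbatim}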
The dependence of the chemical potentials $\mu_{\Sigma^-}-\mu$ and $\mu_n$ on the $\Sigma^-$-mixing ratio is caused by that of the baryon potentials $V_{\Sigma^-}$ and $V_n$, respectively, as seen from Fig.~\ref{fig:cep}. As the density increases, the difference of the energies between the state R' and the absolute minimum state Q' gets smaller, and at a certain density denoted as $\rho_{\rm B}^{c(1)}(K^-; \Sigma^-)$, the energies of the states Q' and R' become equal. Above the density $\rho_{\rm B}^{c(1)}(K^-; \Sigma^-)$, the state R' becomes the absolute energy minimum. In Table~\ref{tab:onset}, we show the numerical values of $\rho_{\rm B}^\ast(K^-; \Sigma^-)$ and $\rho_{\rm B}^{c(1)}(K^-; \Sigma^-)$ for each case of $\Sigma_{Kn}$ and the hyperonic matter EOS. At a given density, the ground state is determined by the lowest energy state in Fig.~\ref{fig:eos}. For all the cases, the state R' becomes the ground state at higher densities, i.e., the $\Sigma^-$ is mixed in the fully-developed kaon-condensed phase. Except for the case of $\Sigma_{Kn}$=207 MeV with H-EOS (A), the transition from the Q' state (the dashed lines) to the R' state (the solid lines) is discontinuous. Even when the state R' is the ground state, the local minimum (the state Q') persists over a wide range of the baryon number density, in particular, in the case of H-EOS (B) [see the dashed lines in Fig.~\ref{fig:eos} (b)]. In Fig.~\ref{fig:pres}, we show the pressure in the $K^-$-condensed phase obtained by $P\equiv \rho_{\rm B}^2\partial({\cal E}'/\rho_{\rm B})/\partial \rho_{\rm B}$ (=$-{\cal E}_{\rm eff}'$), as functions of the energy density, $\epsilon\equiv{\cal E}'+M_N\rho_{\rm B}$, in MeV$\cdot$fm$^{-3}$. Fig.~\ref{fig:pres} (a) is for H-EOS (A), and (b) is for H-EOS (B). The bold (thin) lines are for $\Sigma_{Kn}$ = 305 MeV ($\Sigma_{Kn}$ = 207 MeV). The solid lines stand for the pressure for the $K^-$-condensed state with the $\Sigma^-$-mixing, while the dashed lines are for the $K^-$-condensed state without the $\Sigma^-$-mixing. For comparison, the pressure for the noncondensed hyperonic matter is shown by the dotted line. Except for the case of $\Sigma_{Kn}$ = 207 MeV with H-EOS (A), there appears a gap in the pressure at the transition density $\rho_{\rm B}^{c(1)}(K^-; \Sigma^-)$ as a result of the discontinuous transition. It should be noted that the sound speed, $(\partial P/\partial\epsilon)^{1/2}$, exceeds the speed of light $c$ above a certain energy density, which is indicated by an arrow on each corresponding pressure curve. In such a high-density region, a relativistically covariant formulation is necessary for a quantitative discussion of the EOS. \begin{figure}[h] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig11a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig11b.eps} \end{center} \end{minipage} \caption{(a) Pressure (MeV$\cdot$fm$^{-3}$) in the $K^-$-condensed phase, $P\equiv \rho_{\rm B}^2\partial({\cal E}'/\rho_{\rm B})/\partial \rho_{\rm B}$ (=$-{\cal E}_{\rm eff}'$), as functions of the energy density $\epsilon$ (MeV$\cdot$fm$^{-3}$) for H-EOS (A). The notations of the lines are the same as those in Fig.~\ref{fig:eos}. (b) The same as in (a), but for H-EOS (B).
} \label{fig:pres} \end{figure} \subsection{Density isomer state in the case of the stronger $s$-wave kaon-baryon scalar attraction} \label{subsec:dis} \ \ In the kaon-condensed phase realized in hyperonic matter, the EOS becomes considerably soft. In particular, in the case of $\Sigma_{Kn}$ = 305 MeV for both H-EOS (A) and (B), there appears a local energy minimum (which we call the density isomer state) at a certain density $\rho_{\rm B, min}$, and the pressure becomes negative over some density interval below $\rho_{\rm B, min}$, as seen in Figs.~\ref{fig:eos} and \ref{fig:pres} (bold lines). For H-EOS (A) [H-EOS (B)], one reads $\rho_{\rm B, min}$ = 1.22 fm$^{-3}$ (0.92 fm$^{-3}$), and the minimum energy per baryon at $\rho_{\rm B, min}$ is 76.9 MeV (106.1 MeV), which is smaller than the $\Lambda$-$N$ mass difference, $\delta M_{\Lambda N}$=176 MeV. Thus the density isomer state is stable against the strong decay processes. In order to clarify the mechanisms for the significant softening of the EOS leading to the appearance of the local energy minimum and for the subsequent recovery of the stiffness of the EOS in the higher-density region, we show the energy contributions to the total energy per baryon by the solid lines in the case of $\Sigma_{Kn}$ = 305 MeV for H-EOS (A) and H-EOS (B) in Figs.~\ref{fig:e-contrib}~(a) and (b), respectively. For comparison, those for the kaon-condensed phase realized in ordinary neutron-star matter, obtained by setting $\rho_\Lambda$=$\rho_{\Sigma^-}$=0, are shown by the dashed lines. The density region where the total energy per baryon decreases with density (i.e., the negative pressure region) is bounded by the vertical dotted lines in Figs.~\ref{fig:e-contrib} (a) and (b). \begin{figure}[h] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig12a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig12b.eps} \end{center} \end{minipage} \caption{(a) The contributions to the total energy per baryon for the kaon-condensed phase in hyperonic matter as functions of the baryon number density $\rho_{\rm B}$ for H-EOS (A) and $\Sigma_{Kn}$ = 305 MeV (solid lines). For comparison, those for the kaon-condensed phase realized in ordinary neutron-star matter, obtained by setting $\rho_\Lambda$=$\rho_{\Sigma^-}$=0, are shown by the dashed lines. (b) The same as in (a), but for H-EOS (B). See the text for details.} \label{fig:e-contrib} \end{figure} The dependence of the total energy on the baryon number density is mainly determined by the two contributions: (I) the contribution from the classical kaons as the sum of the $s$-wave scalar kaon-baryon interaction and the free parts of the condensed kaon energy [the fourth, fifth and sixth terms in Eq.~(\ref{eq:te2})] and (II) the baryon potential energy ${\cal E}_{\rm pot}/\rho_{\rm B}$ [the third term in Eq.~(\ref{eq:te2})]. The contribution (I) decreases with increase in density, while the contribution (II) increases as density increases. As one can see from a comparison of the solid lines and the dashed lines in Fig.~\ref{fig:e-contrib}, the attractive effect from the contribution (I) is enhanced by the mixing of hyperons as compared with the case without hyperons, lowering the total energy at a given density.
In addition, the repulsive effect from the contribution (II) is much weakened due to mixing of hyperons at a given density, since the repulsive interaction between nucleons is avoided by lowering the relative nucleon density through mixing of hyperons.\footnote{This suppression mechanism of the repulsive interaction between nucleons is essentially the same as that for softening of the EOS in the noncondensed hyperonic matter as pointed out in Refs.~\cite{y02,t04}.} As a result, the total energy is much reduced, leading to significant softening of the EOS for the kaon-condensed phase in hyperonic matter. For the density region bounded by the dotted lines in Figs.~\ref{fig:e-contrib} (a) and (b), the increase of the absolute value of the kaon-baryon attractive interaction with density is more pronounced than the increase of the potential energy with density, so that the total energy per baryon decreases with density in this density region, and there is an energy minimum at $\rho_{\rm B, min}$. At higher densities above $\rho_{\rm B, min}$, the decrease of the contribution (I) becomes slightly more moderate, while the repulsive interaction between baryons becomes stronger, so that the increase of the contribution (II) with density gets more marked. As a result, the total energy per baryon increases rapidly with density, and the EOS recovers its stiffness at high densities. Thus the stiffness of the EOS depends on the quantitative behavior of the repulsive interaction between baryons at high densities, which is uncertain and model-dependent. In our framework, the H-EOS (B) brings about a stiffer EOS at high densities than the H-EOS (A), since the many-body hyperon-nucleon and hyperon-hyperon repulsive interaction terms, which control the stiffness of the EOS at high densities, contribute more significantly for H-EOS (B) [the index $\gamma$=2.0] than for H-EOS (A) [$\gamma$=5/3]. It has been suggested in Ref.~\cite{m05} that a density isomer state with kaon condensates in hyperonic matter implies the existence of self-bound objects, which can be bound essentially without gravitation on any scale from an atomic nucleus to a neutron star, just like a strangelet and a strange star\cite{b71,ck79,w84,fj84} or other exotic matter\cite{lw74,h75,m76,lnt90,shsg02}. The density isomer state is located at a local energy minimum with respect to the baryon number density as a metastable state, but it decays only through multiple weak processes, so that it is regarded as substantially stable. Implications of such self-bound objects with kaon condensates for astrophysical phenomena and nuclear experiments will be discussed in detail in a subsequent paper, where both $s$-wave and $p$-wave kaon-baryon interactions are taken into account\cite{m06}. \subsection{Composition of matter in the kaon-condensed phase} \label{subsec:composition-k} \ \ The characteristic features of the kaon-condensed phase in hyperonic matter can be surveyed from the density dependence of the composition of matter. In Figs.~\ref{fig:fraction-a} and \ref{fig:fraction-b}, the particle fractions $\rho_i/\rho_{\rm B}$ ($i$=$p$, $\Lambda$, $n$, $\Sigma^-$, $K^-$, $e^-$) are shown as functions of the baryon number density $\rho_{\rm B}$. Figures~\ref{fig:fraction-a}~(a) and (b) are for H-EOS (A) with $\Sigma_{Kn}$ = 305 MeV and $\Sigma_{Kn}$ = 207 MeV, respectively, and Figs.~\ref{fig:fraction-b}~(a) and (b) are for H-EOS (B) with $\Sigma_{Kn}$ = 305 MeV and $\Sigma_{Kn}$ = 207 MeV, respectively.
The long dashed lines stand for the ratio of the total negative strangeness number density $\rho_{\rm strange}$ to the baryon number density $\rho_{\rm B}$ with \begin{equation} \rho_{\rm strange}=\rho_{K^-}+\rho_\Lambda+\rho_{\Sigma^-} \ . \label{eq:rho-strange} \end{equation} The contribution from the classical kaon part, $\rho_{K^-}/\rho_{\rm B}$, is shown by the short dashed line in each figure. \begin{figure}[!] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig13a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig13b.eps} \end{center} \end{minipage} \caption{(a) Particle fractions $\rho_i/\rho_{\rm B}$ in the $K^-$-condensed phase as functions of the baryon number density $\rho_{\rm B}$ for H-EOS (A) and $\Sigma_{Kn}$=305 MeV. (b) The same as in (a), but for H-EOS (A) and $\Sigma_{Kn}$=207 MeV. } \label{fig:fraction-a} \end{figure} \begin{figure}[!] \noindent\begin{minipage}[l]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig14a.eps} \end{center} \end{minipage}~ \begin{minipage}[r]{0.50\textwidth} \begin{center} \includegraphics[height=.3\textheight]{fig14b.eps} \end{center} \end{minipage} \caption{(a) Particle fractions $\rho_i/\rho_{\rm B}$ in the $K^-$-condensed phase as functions of the baryon number density $\rho_{\rm B}$ for H-EOS (B) and $\Sigma_{Kn}$=305 MeV. (b) The same as in (a), but for H-EOS (B) and $\Sigma_{Kn}$=207 MeV.} \label{fig:fraction-b} \end{figure} For each curve in Figs.~\ref{fig:fraction-a} and \ref{fig:fraction-b}, only the quantity corresponding to the lowest energy minimum state in the ($\theta$, $\rho_{\Sigma^-}/\rho_{\rm B}$) plane is shown as a function of density, so that there are gaps in the quantities at the transition densities $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$ and $\rho_{\rm B}^{c(1)}(K^-;\ \Sigma^-)$. One can see competitive effects between kaon condensates and $\Sigma^-$ hyperons: As the former develops around the density $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$, the latter is suppressed, while as the latter develops in the kaon-condensed phase around the density $\rho_{\rm B}^{c(1)}(K^-;\ \Sigma^-)$, the former is suppressed. Appearance of both kaon condensates and the $\Sigma^-$ leads to considerable suppression of the electron fraction, since the negative charge of the electron is replaced by that of kaon condensates and $\Sigma^-$ hyperons. Accordingly, the charge chemical potential $\mu$ decreases with increase in the baryon number density through the relation $\rho_e=\mu^3/(3\pi^2)$. It becomes even negative above the density $\rho_{\rm B}$ = 0.77 fm$^{-3}$ for $\Sigma_{Kn}$ = 305 MeV and $\rho_{\rm B}$ = 1.06 fm$^{-3}$ for $\Sigma_{Kn}$ = 207 MeV in both the cases of H-EOS (A) and H-EOS (B). For $\mu < 0$, the positrons ($e^+$) appear in place of the electrons. At high densities, the protons, $\Lambda$, and $\Sigma^-$ hyperons are equally populated in the kaon-condensed phase, i. e., $\rho_p/\rho_{\rm B}$, $\rho_\Lambda/\rho_{\rm B}$, $\rho_{\Sigma^-}/\rho_{\rm B}$ = 30$-$40 $\%$, whereas the neutrons almost disappear. The total negative strangeness ratio, $\rho_{\rm strange}/\rho_{\rm B}$, gets larger with increase in density: It reaches almost unity for $\Sigma_{Kn}$ = 305 MeV and 0.8$-$0.9 for $\Sigma_{Kn}$ = 207 MeV at high densities for both H-EOS (A) and H-EOS (B). 
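As a small numerical aside (not part of the original analysis), the two bookkeeping relations used here are easy to evaluate explicitly: the electron density follows from the charge chemical potential via $\rho_e=\mu^3/(3\pi^2)$ once $\hbar c=197.327$ MeV$\cdot$fm is restored, and the strangeness ratio follows from Eq.~(\ref{eq:rho-strange}); the particle fractions in the snippet are rough illustrative values of the kind quoted above, not results of the present calculation.
\begin{verbatim}
import numpy as np

hbar_c = 197.327  # MeV fm

def electron_density(mu_MeV):
    # rho_e = mu^3 / (3 pi^2) in natural units; dividing by (hbar c)^3 gives fm^-3
    return mu_MeV**3 / (3.0 * np.pi**2 * hbar_c**3)

print(electron_density(100.0))   # ~ 4e-3 fm^-3 for mu ~ 100 MeV

# strangeness ratio (rho_K + rho_Lambda + rho_Sigma) / rho_B for rough,
# purely illustrative high-density fractions of the kind quoted in the text
frac_K, frac_Lambda, frac_Sigma = 0.35, 0.35, 0.30
print(frac_K + frac_Lambda + frac_Sigma)   # close to unity
\end{verbatim}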
Such a high strangeness fraction implies a close connection between the kaon-condensed phase in hyperonic matter and strange matter where $u$, $d$ and $s$ quarks are almost equally populated in quark matter. \section{Summary and Concluding Remarks} \label{sec:summary} \ \ We have studied the $s$-wave kaon condensation realized in hyperonic matter based on chiral symmetry for the kaon-baryon interactions and taking into account the parameterized effective interactions between baryons. We have concentrated on interrelations between kaon condensates and negatively charged hyperons ($\Sigma^-$) and reexamined the validity of the assumption of the continuous phase transition from the noncondensed hyperonic matter to the $K^-$-condensed phase. We have also discussed the EOS and the characteristic features of the system for the fully developed kaon-condensed phase. The validity of the continuous phase transition for kaon condensation in hyperonic matter is summarized as follows : In cases where the condition $\omega$ = $\mu$ [Eq.~(\ref{eq:onset}) ] is satisfied at $\rho_{\rm B}$=$\rho_{\rm B}^{c(2)}(K^-)$ in the presence of the $\Sigma^-$ hyperons, there exist, in general, two energy minima in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane at some density intervals near the density $\rho_{\rm B}^{c(2)}(K^-)$. One is the noncondensed state with the $\Sigma^-$-mixing (P'), and the other is the $K^-$-condensed state without the $\Sigma^-$-mixing (Q'). If the density $\rho_{\rm B}^{c(2)}(K^-)$ is located near the onset density of the $\Sigma^-$, $\rho_{\rm B}^c(\Sigma^-)$, the state P' is a local minimum or a saddle point, and the state Q' is the absolute minimum at $\rho_{\rm B}$ = $\rho_{\rm B}^{c(2)}(K^-)$. In this case, the assumption of the continuous phase transition is not valid : Below $\rho_{\rm B}^{c(2)}(K^-)$, there exists a typical density $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$ at which the energies of the two minima become equal. Above the density $\rho_{\rm B}^{c(1)}(K^-; {\rm no} \ \Sigma^-)$, there is a discontinuous transition from the state P' to the state Q'. On the other hand, if the density $\rho_{\rm B}^{c(2)}(K^-)$ is located high enough from the density $\rho_{\rm B}^c(\Sigma^-)$, the state P' is always an absolute minimum. In this case, the assumption of the continuous phase transition holds true, and the onset density of kaon condensation is given by $\rho_{\rm B}^{c(2)}(K^-)$, above which kaon condensates develop continuously with increase in the baryon number density. In cases where the condition $\omega$ = $\mu$ is satisfied in the absence of the $\Sigma^-$ hyperon, there exists a unique minimum of the noncondensed state (P') at a point (0,0) in the ($\theta$, $\rho_{\Sigma^-}/\rho_B$) plane, and the assumption of the continuous phase transition is kept valid. The onset density is given by $\rho_{\rm B}^{c(2)}(K^-)$. The above consequences on the validity of the continuous phase transition are expected to be general and should also be applied to cases where the other negatively charged hyperons such as the cascade $\Xi^-$ are present in the noncondensed ground state and where both the $s$-wave and $p$-wave kaon-baryon interactions are taken into account\cite{m06}. In the fully developed phase with kaon condensates, there exist two energy minima with and without the $\Sigma^-$-mixing in the ($\theta$, $\rho_{\Sigma^-}/\rho_{\rm B}$) plane at some density intervals. 
At higher densities, the ground state changes discontinuously from the kaon-condensed state without the $\Sigma^-$-mixing to that with the $\Sigma^-$-mixing, except for the case of H-EOS (A) with the weaker $s$-wave kaon-baryon attractive interaction. The EOS of the kaon-condensed phase becomes considerably soft, since both the kaon-baryon attractions and mixing of hyperons work to lower the energy of the system. At higher densities, the stiffness of the EOS is recovered due to the increase in the repulsive interaction between baryons. As a result, in the case of the stronger $s$-wave kaon-baryon attractive interaction ($\Sigma_{Kn}$=305 MeV), there appears a local energy minimum as a density isomer state, which suggests the existence of self-bound objects with kaon condensates on any scale from an atomic nucleus to a neutron star. Recently, deeply bound kaonic nuclear states have been proposed theoretically, and much discussion has been devoted to their experimental detection\cite{ay02,d04,yda04,k99,i01,ki03,s04,a05,mfg05,y05,ot06}. In particular, the double and/or multiple kaon clusters advocated in the recent experimental proposal by way of invariant mass spectroscopy\cite{yda04} may have a close connection with our results on kaon-condensed self-bound objects. These experimental searches for deeply bound kaonic nuclear states may provide us with important information on the existence of kaon condensation in high-density matter. In this paper, both kinematics and interactions associated with baryons are treated nonrelativistically. For a more quantitative consideration, one needs a relativistic framework. Specifically, the $s$-wave kaon-baryon scalar attraction, which is proportional to the scalar densities for baryons in the relativistic framework, is suppressed at high densities due to saturation of the scalar densities\cite{fmmt96}. This effect is expected to make the EOS stiffer at high densities. The kaon-condensed phase is important for understanding the high-density QCD phase diagram from a hadronic picture, which we have adopted over the relevant range of baryon densities. At high densities, however, quark degrees of freedom may appear explicitly. It has been shown in this paper that the kaon-condensed phase in hyperonic matter leads to a large (negative) strangeness fraction, $\rho_{\rm strange}/\rho_{\rm B}\sim 1$. This result suggests that the kaon-condensed phase in the hadronic picture may be considered as a pathway to strange quark matter. In a quark picture, a variety of deconfined quark phases including color superconductivity have been elaborated\cite{nhhk04}. In particular, kaonic modes may be condensed in the color-flavor locked phase\cite{bs02,kr02,b05,f05}. It is interesting to clarify the relationship between kaon condensation in the hadronic phase and that in the quark phase, as well as a possible transition between the two phases. \section*{Acknowledgments} \ \ The author is grateful to T.~Tatsumi, T.~Takatsuka, T.~Kunihiro, and M.~Sugawara for valuable discussions. He also thanks the Yukawa Institute for Theoretical Physics at Kyoto University, where this work was completed during the YKIS 2006 on ``New Frontiers on QCD''. This work is supported in part by the Grant-in-Aid for Scientific Research Fund (C) of the Ministry of Education, Science, Sports, and Culture (No. 18540288), and by the funds provided by Chiba Institute of Technology.
\section{Introduction} In recent decades, the theory of optimal transport has made impressive inroads into other disciplines of mathematics, probably most notably with the Lott--Sturm--Villani theory \cite{LV09,Stu06a,Stu06b} of synthetic Ricci curvature bounds for metric measure spaces. More recently, optimal transport techniques have also been used to extend this theory to cover discrete \cite{CHLZ12,EM12,Maa11,Mie11} and noncommutative geometries \cite{CM14,CM17a,MM17}. The starting point of our investigation consists of the results from \cite{CM14,CM17a} and their partial generalizations to the infinite-dimensional case in \cite{Hor18,Wir18}. For a symmetric quantum Markov semigroup $(P_t)$ the authors construct a noncommutative version of the $2$-Wasserstein metric, which allows one to obtain a quantum analog of the characterization \cite{JKO98,Ott01} of the heat flow as the $2$-Wasserstein gradient flow of the entropy. On this basis, the geodesic semi-convexity of the entropy in noncommutative $2$-Wasserstein space can be understood as a lower Ricci curvature bound for the quantum Markov semigroup, and it can be used to obtain a series of prominent functional inequalities such as a Talagrand inequality, a modified logarithmic Sobolev inequality and the Poincaré inequality \cite{BGJ20,CM17a,JLL20,RD19}. One of the major challenges in the development of this program so far has been to verify semi-convexity in concrete examples, and only a few noncommutative examples have been known to date, with even fewer infinite-dimensional ones. To prove geodesic semi-convexity, the gradient estimate \begin{equation}\label{eq:GE} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2,\tag{GE} \end{equation} or, equivalently, its integrated form, has proven central. They can be seen as noncommutative analogs of the Bakry--Émery gradient estimate and the $\Gamma_2$ criterion. Indeed, if the underlying quantum Markov semigroup is the heat semigroup on a complete Riemannian manifold, (\ref{eq:GE}) reduces to the classical Bakry--Émery estimate \begin{equation*} \Gamma(P_t f)\leq e^{-2Kt}P_t \Gamma(f). \end{equation*} As is often the case in noncommutative geometry, tensorization of inequalities is more difficult than in the commutative case. It is not known whether the gradient estimate in the form (\ref{eq:GE}) has good tensorization properties. For this reason we introduce $(\mathrm{CGE})$, a complete version of (\ref{eq:GE}), and establish some of its stability properties. Using these in combination with a variant of the intertwining technique from \cite{CM17a} and a fine analysis of specific generators of Lindblad type, we are able to establish this tensor-stable gradient estimate $(\mathrm{CGE})$ for a number of examples for which geodesic convexity was not known before. Let us briefly outline the content of the individual sections of this article. In Section \ref{sec:basics} we recall some basics of quantum Markov semigroups, the construction of the noncommutative transport distance $\mathcal{W}$ and the connection between the gradient estimate (\ref{eq:GE}) and the geodesic semi-convexity of the entropy. In Section \ref{sec:intertwining} we extend the intertwining technique from \cite{CM17a,CM20} to the infinite-dimensional setting. Working with arbitrary operator means, our result covers not only the gradient estimate implying semi-convexity of the entropy in noncommutative $2$-Wasserstein space, but also the noncommutative Bakry--Émery estimate studied in \cite{JZ15a}.
As examples we show that the Ornstein--Uhlenbeck semigroup on the mixed $q$-Gaussian algebras satisfies ($\mathrm{CGE}$) with constant $K=1$, the heat semigroup on quantum tori satisfies ($\mathrm{CGE}$) with constant $K=0$, and that a class of quantum Markov semigroups on discrete group von Neumann algebras and quantum groups $O_N^+,S_N^+$ satisfy ($\mathrm{CGE}$) with constant $K=0$. Moreover, this intertwining result is also central for the stability properties studied in the next section. In Section \ref{sec:stability} we show that the complete gradient estimate is stable under tensor products and free products of quantum Markov semigroups. Besides the applications investigated later in the article, these results also open the door for applications of group transference to get complete gradient estimates for Lindblad generators on matrix algebras. In Section \ref{sec:com_proj} we prove the complete gradient estimate ($\mathrm{CGE}$) with constant $K=1$ for quantum Markov semigroups whose generators are of the form \begin{equation*} \mathscr{L} x=\sum_j p_j x+xp_j -2p_j x p_j, \end{equation*} where the operators $p_j$ are commuting projections. In a number of cases, this result is better than the ones we could obtain by intertwining and yields the optimal constant in the gradient estimate. As examples we show that this result applies to the quantum Markov semigroups associated with the word length function on finite cyclic groups and the non-normalized Hamming length function on symmetric groups. Using ultraproducts and the stability under free products, we finally extend this result to Poisson-type quantum Markov semigroups on group von Neumann algebras of groups $\mathbb{Z}^{\ast k}\ast \mathbb{Z}_2^{\ast l}$ with $k,l\geq 0$. In particular, this implies the complete modified logarithmic Sobolev inequality with optimal constant for these groups. \subsection*{Note added.} When preparing this preprint for submission, we were made aware that several of the examples have been obtained independently by Brannan, Gao and Junge (see \cite{BGJ20,BGJ20b}). \subsection*{Acknowledgments} H.Z. is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement No. 754411. M.W. was supported by the Austrian Science Fund (FWF) through grant number F65. Both authors would like to thank Jan Maas for fruitful discussions and helpful comments. \section{\texorpdfstring{The noncommutative transport metric $\mathcal{W}$ and geodesic convexity of the entropy}{The noncommutative transport metric W and geodesic convexity of the entropy}}\label{sec:basics} In this section we briefly recall the definition and basic properties of the noncommutative transport distance $\mathcal{W}$ associated with a tracially symmetric quantum Markov semigroup. For a more detailed description we refer readers to \cite{Wir18}. Let $\mathcal{M}$ be a separable von Neumann algebra equipped with a normal faithful tracial state $\tau\colon \mathcal{M}\to \mathbb{C}$. Denote by $\mathcal{M}_+$ the positive cone of $\mathcal{M}$. Given $1\le p<\infty$, we define $$\Vert x\Vert_p=[\tau(|x|^p)]^{\frac{1}{p}},~~x\in\mathcal{M},$$ where $|x|=(x^*x)^{\frac{1}{2}}$ is the modulus of $x$. One can show that $\|\cdot \|_p$ is a norm on $\mathcal{M}$. The completion of $( \mathcal{M}, \|\cdot \|_p )$ is denoted by $L^p (\mathcal{M}, \tau)$, or simply $L^p (\mathcal{M})$. As usual, we put $L^\infty(\mathcal{M})=\mathcal{M}$ with the operator norm. 
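As a quick finite-dimensional illustration (ours, not part of the construction), the following Python sketch evaluates $\|x\|_p=[\tau(|x|^p)]^{1/p}$ on the matrix algebra $M_n(\mathbb{C})$ equipped with the normalized trace $\tau=\frac{1}{n}\mathrm{Tr}$; the helper name \texttt{lp\_norm} is ours.
\begin{verbatim}
import numpy as np

def lp_norm(x, p):
    # ||x||_p = [tau(|x|^p)]^(1/p) on M_n with the normalized trace tau = Tr/n;
    # the eigenvalues of |x| = (x^* x)^(1/2) are the singular values of x
    s = np.linalg.svd(x, compute_uv=False)
    return np.mean(s**p)**(1.0 / p)

x = np.array([[1.0, 2.0], [0.0, -1.0]])
print(lp_norm(x, 1.0), lp_norm(x, 2.0))
# consistency check: ||x||_2^2 = tau(x^* x)
print(np.isclose(lp_norm(x, 2.0)**2, np.trace(x.conj().T @ x).real / x.shape[0]))
\end{verbatim}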
In this article, we are only interested in $p=1$ and $p=2$. In particular, $L^2(\mathcal{M})$ is a Hilbert space with the inner product $$\langle x,y\rangle_2=\tau(x^*y).$$ A family $(P_t)_{t\geq 0}$ of bounded linear operators on $\mathcal{M}$ is called a \emph{quantum Markov semigroup (QMS)} if \begin{itemize} \item $P_t$ is a normal unital completely positive map for every $t\geq 0$, \item $P_s P_t=P_{s+t}$ for all $s,t\geq 0$, \item $P_t x\to x$ in the weak$^\ast$ topology as $t\searrow 0$ for every $x\in \mathcal{M}$. \end{itemize} A QMS $(P_t)$ is called \emph{$\tau$-symmetric} if \begin{equation*} \tau((P_t x)y)=\tau(x P_t y) \end{equation*} for all $x,y\in \mathcal{M}$ and $t\geq 0$. The generator of $(P_t)$ is the operator $\mathscr{L}$ given by \begin{align*} D(\mathscr{L})&=\left\{x\in \mathcal{M}\mid \lim_{t\searrow 0}\frac1 t (x-P_t x)\text{ exists in the weak$^\ast$ topology}\right\},\\ \mathscr{L}(x)&=\lim_{t\to 0}\frac1 t (x-P_t x),~~x\in D(\mathscr{L}). \end{align*} Here and in what follows, $D(T)$ always means the domain of $T$. For all $p\in [1,\infty]$, the $\tau$-symmetric QMS $(P_t)$ extends to a strongly continuous contraction semigroup $(P_t^{(p)})$ on $L^p(\mathcal{M},\tau)$ with generator $\mathscr{L}_p$. Let $\cC=D(\mathscr{L}_2^{1/2})\cap \mathcal{M}$, which is a $\sigma$-weakly dense $\ast$-subalgebra of $\mathcal{M}$ \cite[Proposition 3.4]{DL92}. According to \cite[Section 8]{CS03}, there exists an (essentially unique) quintuple $(\mathcal{H}, L, R, \mathcal{J}, \partial)$ consisting of a Hilbert space $\mathcal{H}$, commuting non-degenerate $^\ast$-homomorphisms $L\colon \cC\to B(\mathcal{H})$, $R\colon \cC^\circ\to B(\mathcal{H})$, an anti-unitary involution $\mathcal{J}\colon \mathcal{H}\to \mathcal{H}$ and a closed operator $\partial\colon D(\mathscr{L}_2^{1/2})\to \mathcal{H}$ such that \begin{itemize} \item $\{R(x)\partial y\mid x,y\in \cC\}$ is dense in $\mathcal{H}$, \item $\partial(xy)=L(x)\partial y+R(y)\partial x$ for $x,y\in \cC$, \item $\mathcal{J}(L(x)R(y)\partial(z))=L(y^\ast)R(x^\ast)\partial(z^\ast)$ for $x,y,z\in \cC$, \item $\mathscr{L}_2=\partial^\dagger \partial$, \end{itemize} where $\cC^\circ$ is the opposite algebra of $\cC$. We will write $a\xi$ and $\xi b$ for $L(a)\xi$ and $R(b)\xi$, respectively. For $x,y\in D(\mathscr{L}_2^{1/2})$, the \emph{carré du champ} is defined as \begin{equation*} \Gamma(x,y)\colon \cC\to\mathbb{C},\,\Gamma(x,y)(z)=\langle \partial x,(\partial y)z\rangle_\mathcal{H}. \end{equation*} We write $\Gamma(x)$ to denote $\Gamma(x,x)$. A $\tau$-symmetric QMS is called \emph{$\Gamma$-regular} (see \cite{JLL20}) if the representations $L$ and $R$ are normal. Under this assumption, $\mathcal{H}$ is a \emph{correspondence} from $\mathcal{M}$ to itself in the sense of Connes \cite[Appendix B of Chapter 5]{Con94} (sometimes also called \emph{$\mathcal{M}$-bimodule} or \emph{Hilbert bimodule}). By \cite[Theorem 2.4]{Wir18}, $(P_t)$ is a $\Gamma$-regular semigroup if and only if $\Gamma(x,y)$ extends to a normal linear functional on $\mathcal{M}$ for all $x,y\in D(\mathscr{L}^{1/2}_2)$. By a slight abuse of notation, we write $\Gamma(x,y)$ for the unique element $h\in L^1(\mathcal{M},\tau)$ such that \begin{align*} \tau(z h)=\Gamma(x,y)(z) \end{align*} for all $z\in \cC$. 
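For readers who prefer a concrete model, the following sketch (again ours) realizes these notions on $\mathcal{M}=M_2(\mathbb{C})$ with $\tau=\frac{1}{2}\mathrm{Tr}$: it implements the semigroup $P_t=e^{-t}I+(1-e^{-t})E$ generated by $\mathscr{L}=I-E$, where $E$ is the conditional expectation onto the diagonal subalgebra (a special case of Example~\ref{ex:cond_exp} below), and checks unitality, the semigroup law and $\tau$-symmetry numerically.
\begin{verbatim}
import numpy as np

tau = lambda x: np.trace(x) / 2          # normalized trace on M_2
E   = lambda x: np.diag(np.diag(x))      # conditional expectation onto the diagonal

def P(t, x):
    # P_t = e^{-t} I + (1 - e^{-t}) E, the semigroup generated by  L = I - E
    return np.exp(-t) * x + (1.0 - np.exp(-t)) * E(x)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
s, t = 0.3, 0.7

print(np.allclose(P(t, np.eye(2)), np.eye(2)))         # unitality: P_t(1) = 1
print(np.allclose(P(s, P(t, x)), P(s + t, x)))         # semigroup law
print(np.isclose(tau(P(t, x) @ y), tau(x @ P(t, y))))  # tau-symmetry
\end{verbatim}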
If $(P_t)$ is $\Gamma$-regular, then we can extend $L$ to a map on the operators affiliated with $\mathcal{M}$ by defining \begin{align*} L(x)=L(u)\int_{[0,\infty)}\lambda\,d(L\circ e)(\lambda), \end{align*} for any operator $x$ affiliated with $\mathcal{M}$, where $u$ is the partial isometry in the polar decomposition of $x$ and $e$ is the spectral measure of $\abs{x}$. The same goes for $R$. Let $\Lambda$ be an \emph{operator mean} in the sense of Kubo and Ando \cite{KA80}, that is, $\Lambda$ is a map from $B(\mathcal{H})_+\times B(\mathcal{H})_+$ to $B(\mathcal{H})_+$ satisfying \begin{enumerate}[(a)] \item if $A\leq C$ and $B\leq D$, then $\Lambda(A,B)\leq\Lambda(C,D)$, \item the \emph{transformer inequality}: $C \Lambda(A,B)C\leq \Lambda(C A C,C B C)$ for all $A,B,C\in B(\mathcal{H})_+$, \item if $A_n\searrow A$, $B_n\searrow B$, then $\Lambda(A_n,B_n)\searrow \Lambda(A,B)$, \item $\Lambda(\mathrm{id}_{\mathcal{H}},\mathrm{id}_{\mathcal{H}})=\mathrm{id}_{\mathcal{H}}$. \end{enumerate} Here and in what follows, by $A_n\searrow A$ we mean $A_1\ge A_2\ge \cdots$ and $A_n$ converges strongly to $A$. From (b), any operator mean $\Lambda$ is \emph{positively homogeneous}: $$\Lambda(\lambda A,\lambda B)=\lambda\Lambda (A,B),~~\lambda >0,A,B\in B(\mathcal{H})_+.$$ An operator mean $\Lambda$ is \emph{symmetric} if $\Lambda(A,B)=\Lambda(B,A)$ for all $A,B\in B(\mathcal{H})_+$. For a positive self-adjoint operator $\rho$ affiliated with $\mathcal{M}$, we define \begin{equation*} \hat \rho=\Lambda(L(\rho), R(\rho)). \end{equation*} Of particular interest for us are the cases when $\Lambda$ is the \emph{logarithmic mean} \begin{equation*} \Lambda_{\text{log}}(L(\rho), R(\rho))=\int_{0}^{1}L(\rho)^s R(\rho)^{1-s}\,ds, \end{equation*} or the \emph{arithmetic mean} \begin{equation*} \Lambda_{\text{ari}}(L(\rho), R(\rho))=\frac{L(\rho)+R(\rho)}{2}. \end{equation*} We write $\norm{\cdot}_\rho^2$ for the quadratic form associated with $\hat \rho$, that is, \begin{align*} \norm{\xi}_\rho^2=\begin{cases}\norm{\hat \rho^{1/2}\xi}_\mathcal{H}^2&\text{if }\xi\in D(\hat \rho^{1/2}),\\ \infty&\text{otherwise}.\end{cases} \end{align*} Given an operator mean $\Lambda$, consider the set \begin{align*} \mathcal{A}_\Lambda=\{a\in D(\mathscr{L}_2^{1/2})\cap\mathcal{M}\mid \exists C>0\,\forall \rho\in L^1_+(\mathcal{M},\tau)\colon \norm{\partial a}_\rho^2\leq C\norm{\rho}_1\}, \end{align*} equipped with the seminorm \begin{align*} \norm{a}_\Lambda^2=\sup_{0\neq \rho\in L^1_+(\mathcal{M},\tau)}\frac{\norm{\partial a}_\rho^2}{\norm{\rho}_1}. \end{align*} If $\Lambda$ is the arithmetic mean $\Lambda_{\text{ari}}$, then this set coincides with \begin{align*} \mathcal{A}_\Gamma=\{x\in D(\mathscr{L}_2^{1/2})\cap\mathcal{M}\mid \Gamma(x),\Gamma(x^\ast)\in \mathcal{M}\}. \end{align*} In fact, when $\Lambda=\Lambda_{\text{ari}}$, one has $\norm{\partial a}_{\rho}^2=\frac{1}{2}\tau\left((\Gamma(a)+\Gamma(a^*))\rho\right)$. If the operator mean $\Lambda$ is symmetric, then it is dominated by the arithmetic mean and therefore $\mathcal{A}_\Gamma\subset \mathcal{A}_\Lambda$ \cite[Theorem 4.5]{KA80},\cite[Lemma 3.24]{Wir18}. The following definition states that this inclusion is dense in a suitable sense. \begin{definition}\label{def:regular_mean} The operator mean $\Lambda$ is a \emph{regular mean} for $(P_t)$ if for every $x\in \mathcal{A}_\Lambda$ there exists a sequence $(x_n)$ in $\mathcal{A}_\Gamma$ that is bounded in $\mathcal{A}_\Lambda$ and converges to $x$ $\sigma$-weakly.
\end{definition} Of course the arithmetic mean is always regular. In general it seems not easy to check this definition directly, but we will discuss a sufficient condition below. Given an operator mean $\Lambda$, let $\mathcal{H}_\rho$ be the Hilbert space obtained from $\partial(\mathcal{A}_\Lambda)$ after separation and completion with respect to $\langle\cdot,\cdot\rangle_\rho$ defined by \begin{equation*} \langle\xi,\eta\rangle_\rho=\langle\hat\rho^{1/2}\xi,\hat{\rho}^{1/2}\eta\rangle_{\mathcal{H}}. \end{equation*} If $\Lambda$ is regular, then $\partial(\mathcal{A}_\Gamma)$ is dense in $\mathcal{H}_\rho$. Let $\cD(\mathcal{M},\tau)$ be the set of all \emph{density operators}, that is, \begin{equation*} \cD(\mathcal{M},\tau)=\{\rho\in L^1_+(\mathcal{M},\tau)\mid \tau(\rho)=1\}. \end{equation*} \begin{definition}\label{def:admissible_curve} Fix an operator mean $\Lambda$. A curve $(\rho_t)_{t\in [0,1]}\subset \cD(\mathcal{M},\tau)$ is \emph{admissible} if \begin{itemize} \item the map $t\mapsto \tau(a\rho_t)$ is measurable for all $a\in \mathcal{A}_\Gamma$, \item there exists a curve $(\xi_t)_{t\in [0,1]}$ such that $\xi_t\in \mathcal{H}_{\rho_t}$ for all $t\in [0,1]$, the map $t\mapsto \langle \partial a,\xi_t\rangle_{\rho_t}$ is measurable for all $a\in \mathcal{A}_\Gamma$ and for every $a\in \mathcal{A}_\Gamma$ one has \begin{equation}\label{eq:CE} \frac{d}{dt}\tau(a\rho_t)=\langle \xi_t,\partial a\rangle_{\rho_t} \end{equation} for a.e. $t\in [0,1]$. \end{itemize} \end{definition} For an admissible curve $(\rho_t)$, the vector field $(\xi_t)$ is uniquely determined up to equality a.e. and will be denoted by $(D\rho_t)$. If $\Lambda$ is a regular mean, the set $\mathcal{A}_\Gamma$ can be replaced by $\mathcal{A}_\Lambda$ everywhere in Definition \ref{def:admissible_curve}. \begin{remark} The equation (\ref{eq:CE}) is a weak formulation of \begin{equation*} \dot\rho_t=\partial^\dagger (\hat\rho_t \xi_t), \end{equation*} which can be understood as noncommutative version of the continuity equation. Indeed, if $(P_t)$ is the heat semigroup on a compact Riemannian manifold, it reduces to the classical continuity equation $\dot\rho_t+\operatorname{div}(\rho_t \xi_t)=0$. \end{remark} \begin{definition} The noncommutative transport distance $\mathcal{W}$ on $\cD(\mathcal{M},\tau)$ is defined as \begin{equation*} \mathcal{W}(\bar\rho_0,\bar\rho_1)=\inf_{(\rho_t)}\int_0^1 \norm{D\rho_t}_{\rho_t}\,dt, \end{equation*} where the infimum is taken over all admissible curves $(\rho_t)$ connecting $\bar \rho_0$ and $\bar \rho_1$. \end{definition} \begin{definition}\label{defn:CGE} Let $K\in \mathbb{R}$. A $\Gamma$-regular QMS $(P_t)$ is said to satisfy the gradient estimate $\mathrm{GE}(K,\infty)$ if \begin{equation*} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2 \end{equation*} for $t\geq 0$, $a\in D(\mathscr{L}_2^{1/2})$ and $\rho\in \cD(\mathcal{M},\tau)$. It satisfies $\mathrm{CGE}(K,\infty)$ if $(P_t\otimes \mathrm{id}_{\mathcal{N}})$ satisfies $\mathrm{GE}(K,\infty)$ for any finite von Neumann algebra $\mathcal{N}$. \end{definition} Note that the gradient estimate $\mathrm{GE}(K,\infty)$ depends implicitly on the chosen operator mean $\Lambda$. As observed in \cite[Proposition 6.12]{Wir18}, if $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$ for the arithmetic mean $\Lambda_{\text{ari}}$ and for $\Lambda$, then $\Lambda$ is regular for $(P_t)$. 
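To make the weighted norm $\norm{\partial a}_\rho$ entering $\mathrm{GE}(K,\infty)$ concrete, the following finite-dimensional sketch (ours) computes the action of $\hat\rho=\Lambda_{\log}(L(\rho),R(\rho))$ on matrices: in an eigenbasis of $\rho$ it acts entrywise by the scalar logarithmic mean of pairs of eigenvalues, which we compare against a direct quadrature of $\int_0^1\rho^s\,\xi\,\rho^{1-s}\,ds$.
\begin{verbatim}
import numpy as np

def log_mean(a, b):
    # scalar logarithmic mean; equals a in the degenerate case a = b
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def rho_hat_log(rho, xi):
    # action of Lambda_log(L(rho), R(rho)) on xi: in an eigenbasis of rho it
    # multiplies the (i,j) entry by the logarithmic mean of the eigenvalues
    w, v = np.linalg.eigh(rho)
    weights = np.array([[log_mean(a, b) for b in w] for a in w])
    return v @ (weights * (v.conj().T @ xi @ v)) @ v.conj().T

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3))
rho = a @ a.T + 0.1 * np.eye(3)     # positive definite weight
xi = rng.standard_normal((3, 3))

# compare with a direct quadrature of  int_0^1 rho^s xi rho^(1-s) ds
w, v = np.linalg.eigh(rho)
rho_pow = lambda s: v @ np.diag(w**s) @ v.T
s_grid = np.linspace(0.0, 1.0, 2001)
samples = np.array([rho_pow(s) @ xi @ rho_pow(1.0 - s) for s in s_grid])
h = s_grid[1] - s_grid[0]
integral = h * (0.5 * (samples[0] + samples[-1]) + samples[1:-1].sum(axis=0))

print(np.max(np.abs(rho_hat_log(rho, xi) - integral)))   # small quadrature error
\end{verbatim}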
If $\Lambda$ is the right trivial mean, i.e., $\Lambda(L(\rho),R(\rho))=R(\rho)$, then $\mathrm{GE}(K,\infty)$ reduces to the Bakry--Émery criterion \begin{equation*} \Gamma(P_t a)\leq e^{-2Kt}P_t\Gamma(a), \end{equation*} which was considered in \cite{JZ15a}. \begin{remark} Recently, Li, Junge and LaRacuente \cite{JLL20} introduced a closely related notion of lower Ricci curvature bound for quantum Markov semigroups, the \emph{geometric Ricci curvature condition} (see also \cite[Definition 3.22]{BGJ20}). Like $\mathrm{CGE}$, this condition is tensor stable, and it implies $\mathrm{CGE}$ for arbitrary operator means \cite[Theorem 3.6]{JLL20} (the result is only formulated for the logarithmic mean, but the proof only uses the transformer inequality for operator means). In the opposite direction, the picture is less clear. For $\mathrm{GE}$, a direct computation on the two-point graph shows that the optimal constant depends on the mean in general. It seems reasonable to expect the same behavior for $\mathrm{CGE}$, which would imply that the optimal constant in $\mathrm{CGE}$ for a specific mean is in general bigger than the optimal constant in the geometric Ricci curvature condition. \end{remark} This gradient estimate is closely related to convexity properties of the logarithmic entropy \begin{equation*} \mathrm{Ent}\colon \cD(\mathcal{M},\tau)\to [0,\infty],\,\mathrm{Ent}(\rho)=\tau(\rho\log \rho). \end{equation*} As usual let $D(\mathrm{Ent})=\{\rho\in \cD(\mathcal{M},\tau)\mid \mathrm{Ent}(\rho)<\infty\}$. \begin{theorem}[{\cite[Theorem 7.12]{Wir18}}]\label{thm:geodesic_convex} Assume that $(P_t)$ is a $\Gamma$-regular QMS. Suppose that $\Lambda=\Lambda_{\log}$ is the logarithmic mean and is regular for $(P_t)$. If $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$, then \begin{enumerate}[(a)] \item for every $\rho\in D(\mathrm{Ent})$ the curve $(P_t \rho)$ satisfies the \emph{evolution variational inequality} $(\mathrm{EVI}_K)$ \begin{equation*} \frac{d}{dt}\frac 1 2\mathcal{W}^2(P_t \rho,\sigma)+\frac K 2 \mathcal{W}^2(P_t \rho,\sigma)+\mathrm{Ent}(\rho)\leq \mathrm{Ent}(\sigma) \end{equation*} for a.e. $t\geq 0$ and $\sigma \in \cD(\mathcal{M},\tau)$ with $\mathcal{W}(\rho,\sigma)<\infty$, \item any $\rho_0,\rho_1\in D(\mathrm{Ent})$ with $\mathcal{W}(\rho_0,\rho_1)<\infty$ are connected by a $\mathcal{W}$-geodesic and $\mathrm{Ent}$ is $K$-convex along any constant speed $\mathcal{W}$-geodesic $(\rho_t)$, that is, $\frac{d^2}{dt^2}\mathrm{Ent}(\rho_t)\geq K$ in the sense of distributions. \end{enumerate} \end{theorem} This gradient flow characterization implies a number of functional inequalities for the QMS, see e.g. \cite[Section 8]{CM17a}, \cite[Section 7]{Wir18}, \cite[Section 11]{CM20}. Here we will focus on the modified logarithmic Sobolev inequality and its complete version (see \cite[Definition 2.8]{GJL18}, \cite[Definition 2.12]{JLL20} for the latter). For $\rho,\sigma\in \cD(\mathcal{M},\tau)$ the \emph{relative entropy} of $\rho$ with respect to $\sigma$ is defined as \begin{equation*} \mathrm{Ent}(\rho\Vert \sigma)=\begin{cases}\tau(\rho\log \rho)-\tau(\rho\log\sigma)&\text{if }\operatorname{supp} \rho\subset \operatorname{supp} \sigma,\\ \infty&\text{otherwise}.\end{cases} \end{equation*} If $\mathcal{N}\subset \mathcal{M}$ is a von Neumann subalgebra with $E\colon \mathcal{M}\to \mathcal{N}$ being the conditional expectation, then we define \begin{align*} \mathrm{Ent}_\mathcal{N}(\rho)=\mathrm{Ent}(\rho\lVert E(\rho)). 
\end{align*} Recall that a \emph{conditional expectation} $E\colon\mathcal{M}\to\mathcal{N}$ is a normal contractive positive projection from $\mathcal{M}$ onto $\mathcal{N}$ which preserves the trace and satisfies \begin{equation*} E(axb)=aE(x)b,~~a,b\in\mathcal{N}, x\in\mathcal{M}. \end{equation*} For $x\in D(\mathscr{L}_2^{1/2})\cap \mathcal{M}_+$ the \emph{Fisher information} is defined as \begin{equation*} \mathcal{I}(x)=\lim_{\epsilon\searrow 0}\langle \mathscr{L}_2^{1/2} x,\mathscr{L}_2^{1/2}\log(x+\epsilon)\rangle_2\in [0,\infty]. \end{equation*} This definition can be extended to $x\in L^1_+(\mathcal{M},\tau)$ by setting \begin{equation*} \mathcal{I}(x)=\begin{cases}\lim_{n\to\infty}\mathcal{I}(x\wedge n)&\text{if }x\wedge n\in D(\mathscr{L}_2^{1/2})\cap \mathcal{M}\text{ for all }n\in\mathbb{N},\\ \infty&\text{otherwise}.\end{cases} \end{equation*} Recall that the fixed-point algebra of $(P_t)$ is \begin{equation*} \mathcal{M}^{\mathrm{fix}}=\{x\in\mathcal{M}:P_t(x)=x\text{ for all }t\geq 0\}. \end{equation*} It is a von Neumann subalgebra of $\mathcal{M}$ \cite[Proposition 3.5]{DL92}. \begin{definition} Let $(P_t)$ be a $\Gamma$-regular QMS with the fixed-point subalgebra $\mathcal{M}^{\mathrm{fix}}$. For $\lambda>0$, we say that $(P_t)$ satisfies the modified logarithmic Sobolev inequality with constant $\lambda$ $(\mathrm{MLSI}(\lambda))$, if \begin{equation*} \lambda \mathrm{Ent}_{\mathcal{M}^{\mathrm{fix}}}(\rho)\leq \mathcal{I}(\rho) \end{equation*} for $\rho\in\cD(\mathcal{M},\tau)\cap D(\mathscr{L}_2^{1/2})\cap \mathcal{M}$. We say that $(P_t)$ satisfies the complete modified logarithmic Sobolev inequality with constant $\lambda$ $(\mathrm{CLSI}(\lambda))$ if $(P_t\otimes \mathrm{id}_\mathcal{N})$ satisfies the modified logarithmic Sobolev inequality with constant $\lambda$ for any finite von Neumann algebra $\mathcal{N}$. \end{definition} For ergodic QMS satisfying $\mathrm{GE}(K,\infty)$, the inequality $\mathrm{MLSI}(2K)$ is essentially contained in the proof of \cite[Proposition 7.9]{Wir18}. Since $(P_t\otimes \mathrm{id}_\mathcal{N})$ is not ergodic (unless $\mathcal{N}=\mathbb{C}$), this result cannot imply the complete modified logarithmic Sobolev inequality. However, the modified logarithmic Sobolev inequality for non-ergodic QMS can also still be derived from the gradient flow characterization, as we will see next. \begin{corollary}\label{cor:MLSI} Assume that $(P_t)$ is a $\Gamma$-regular QMS. Suppose that $\Lambda=\Lambda_{\log}$ is the logarithmic mean and is regular for $(P_t)$. If $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$, then it satisfies \begin{equation*} \mathcal{I}(P_t \rho)\leq e^{-2Kt}\mathcal{I}(\rho) \end{equation*} for $\rho \in D(\mathscr{L}_2^{1/2})\cap\mathcal{M}_+$ and $t\geq 0$. Moreover, if $K>0$, then $(P_t)$ satisfies $\mathrm{MLSI}(2K)$. The same is true for the complete gradient estimate and the complete modified logarithmic Sobolev inequality. \end{corollary} \begin{proof} Let $\rho\in\cD(\mathcal{M},\tau)\cap D(\mathscr{L}_2^{1/2})\cap\mathcal{M}$ and $\rho_t=P_t\rho$. Since $(\rho_t)$ is an $\mathrm{EVI}_K$ gradient flow curve of $\mathrm{Ent}$ by Theorem \ref{thm:geodesic_convex} and $\frac{d}{dt}\mathrm{Ent}(\rho_t)=-\mathcal{I}(\rho_t)$, it follows from \cite[Theorem 3.5]{MS20} that \begin{equation*} \mathcal{I}(P_t \rho)\leq e^{-2Kt}\mathcal{I}(\rho) \end{equation*} for $t\geq 0$ (using the continuity of both sides in $t$). 
If $K>0$, then $\mathrm{MLSI}(2K)$ follows from a standard argument; see for example \cite[Lemma 2.15]{JLL20}. The implication for the complete versions is clear. \end{proof} \begin{remark} The inequality $\mathcal{I}(P_t \rho)\leq e^{-2Kt}\mathcal{I}(\rho)$ is called $K$-Fisher monotonicity in \cite{BGJ20} and plays a central role there in obtaining complete logarithmic Sobolev inequalities. \end{remark} \section{Gradient estimates through intertwining}\label{sec:intertwining} Following the ideas from \cite{CM17a,CM20}, we will show in this section how one can obtain gradient estimates for quantum Markov semigroups through intertwining. As examples we discuss the Ornstein--Uhlenbeck semigroup on the mixed $q$-Gaussian algebras, the heat semigroup on quantum tori, and a family of quantum Markov semigroups on discrete group von Neumann algebras and the quantum groups $O_N^+$ and $S_N^+$. Throughout this section we assume that $\mathcal{M}$ is a separable von Neumann algebra with normal faithful tracial state $\tau$ and $(P_t)$ is a $\Gamma$-regular QMS. We fix the corresponding first order differential calculus $(\mathcal{H}, L, R, \mathcal{J}, \partial)$. We do not make any assumptions on $\Lambda$ beyond being an operator mean. In particular, all results from this section apply to the logarithmic mean -- thus yielding geodesic convexity by Theorem \ref{thm:geodesic_convex} --- as well as the right-trivial mean -- thus giving Bakry--Émery estimates. \begin{theorem}\label{thm:intertwining} Let $K\in\mathbb{R}$. If there exists a family $(\vec P_t)$ of bounded linear operators on $\mathcal{H}$ such that \begin{enumerate}[(i)] \item $\partial P_t =\vec P_t \partial$ for $t\geq 0$, \item $\vec P_t^\dagger L(\rho) \vec P_t\leq e^{-2Kt}L(P_t \rho)$ for $\rho\in\mathcal{M}_+$, $t\geq 0$, \item $\vec P_t^\dagger R(\rho) \vec P_t\leq e^{-2Kt}R(P_t \rho)$ for $\rho\in\mathcal{M}_+$, $t\geq 0$, \end{enumerate} then $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$. \end{theorem} \begin{proof} Let $\rho\in \mathcal{M}_+$ and $a\in D(\partial)$. Since $\Lambda$ is an operator mean, we have \cite[Theorem 3.5]{KA80} \begin{equation*} \vec P_t^\dagger \Lambda(L(\rho),R(\rho))\vec P_t\leq \Lambda(\vec P_t^\dagger L(\rho)\vec P_t,\vec P_t^\dagger R(\rho) \vec P_t). \end{equation*} Thus \begin{equation*} \langle \hat \rho \partial P_t a,\partial P_t a\rangle_\mathcal{H}=\langle \vec P_t^\dagger \hat \rho \vec P_t \partial a,\partial a\rangle_\mathcal{H}\leq \langle \Lambda(\vec P_t^\dagger L(\rho)\vec P_t,\vec P_t^\dagger R(\rho)\vec P_t)\partial a,\partial a\rangle_\mathcal{H}. \end{equation*} As $\Lambda$ is monotone in both arguments and positively homogeneous, conditions (ii) and (iii) imply \begin{equation*} \langle \Lambda(\vec P_t^\dagger L(\rho)\vec P_t,\vec P_t^\dagger R(\rho)\vec P_t)\partial a,\partial a\rangle_\mathcal{H}\leq e^{-2Kt}\langle \Lambda(L(P_t \rho),R(P_t \rho))\partial a,\partial a\rangle_\mathcal{H}. \end{equation*} All combined this yields \begin{equation*} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2.\qedhere \end{equation*} \end{proof} \begin{remark}\label{rmk:diff_calc_ind} The proof shows that assumptions (i)--(iii) still imply \begin{equation*} \norm{\partial P_t a}_{\rho}^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2 \end{equation*} if the differential calculus is not the one associated with $(P_t)$. We will use this observation in the proofs of Theorem \ref{thm:tensor_product} and Theorem \ref{thm:com_proj}. 
\end{remark} \begin{remark} A similar technique to obtain geodesic convexity of the entropy has been employed in \cite{CM17a,CM20}. Our proof using the transformer inequality for operator means is in some sense dual to the monotonicity argument used there (see \cite{Pet96}). Apart from working in the infinite-dimensional setting, let us point out two main differences to the results from these two articles: In contrast to \cite{CM17a}, we do not assume that $\vec P_t$ is a direct sum of copies of $P_t$ (in fact, we do not even assume that $\mathcal{H}$ is a direct sum of copies of the trivial bimodule). This enhanced flexibility can lead to better bounds even for finite-dimensional examples (see Example \ref{ex:cond_exp}). In contrast to \cite{CM20}, our conditions (ii) and (iii) are more restrictive, but they are also linear in $\rho$, which makes them potentially more feasible to check in concrete examples. \end{remark} \begin{remark} We do not assume that the operators $\vec P_t$ form a semigroup or that they are completely positive (if $\mathcal{H}$ is realized as a subspace of $L^2(\mathcal{N})$ for some von Neumann algebra $\mathcal{N}$). However, this is the case for most of the concrete examples where we can prove (i)--(iii). \end{remark} \begin{remark} In particular, the conclusion of the previous theorem holds for all symmetric operator means, and in view of the discussions after Definition \ref{defn:CGE}, it implies that any symmetric operator mean is regular for $(P_t)$. \end{remark} Under a slightly stronger assumption, conditions (ii) and (iii) can be rewritten in a way that resembles the classical Bakry--Émery criterion. For that purpose define \begin{equation*} \vec \Gamma\left(\sum_{k=1}^n (\partial x_k)y_k\right)=\sum_{k,l=1}^n y_k^\ast \Gamma(x_k,x_l)y_l. \end{equation*} In particular, $\vec \Gamma(\partial x)=\Gamma(x)$. Since $(P_t)$ is $\Gamma$-regular, $\vec \Gamma$ extends to a continuous quadratic map from $\mathcal{H}$ to $L^1(\mathcal{M},\tau)$ that is uniquely determined by the property $\tau(x\vec \Gamma(\xi))=\langle\xi,\xi x\rangle_\mathcal{H}$ for all $x\in \mathcal{M}$ and $\xi\in \mathcal{H}$ (see \cite[Section 2]{Wir18}). \begin{lemma} If $(\vec P_t)$ is a family of bounded linear operators on $\mathcal{H}$ that commute with $\mathcal{J}$, then conditions (ii) and (iii) from Theorem \ref{thm:intertwining} are equivalent. Moreover, they are equivalent to \begin{equation}\label{eq:vec_Bakry_Emery} \vec \Gamma(\vec P_t \xi)\leq e^{-2Kt}P_t \vec\Gamma(\xi) \end{equation} for $\xi\in \mathcal{H}$, $t\geq 0$. \end{lemma} \begin{proof} To see the equivalence of (ii) and (iii), it suffices to notice that $\mathcal{J}$ is a bijection and $\mathcal{J} L(\rho)\mathcal{J}=R(\rho)$ for $\rho\in \mathcal{M}_+$. The equivalence of (iii) and (\ref{eq:vec_Bakry_Emery}) follows from the identities: for all $\rho\in \mathcal{M}_+$: \begin{align*} \langle \vec P_t \xi,R(\rho)\vec P_t \xi\rangle_\mathcal{H}&=\tau(\rho \vec \Gamma(\vec P_t \xi)),\\ \langle \xi,R(P_t \rho)\xi\rangle_\mathcal{H}&=\tau(P_t \rho\vec \Gamma(\xi))=\tau(\rho P_t \vec \Gamma(\xi)).\qedhere \end{align*} \end{proof} As indicated before, our theorem recovers the intertwining result in \cite{CM17a} (in the tracially symmetric case): \begin{corollary}\label{cor:intertwining_dir_sum} Assume that $\mathcal{H}\cong\bigoplus_j L^2(\mathcal{M},\tau)$, $L$ and $R$ act componentwise as left and right multiplication and $\mathcal{J}$ acts componentwise as the usual involution. 
If $\partial_j P_t=e^{-Kt}P_t \partial_j$, then $(P_t)$ satisfies $\mathrm{CGE}(K,\infty)$. \end{corollary} \begin{proof} Let $\vec P_t=e^{-Kt}\bigoplus_j P_t$. Condition (i) from Theorem \ref{thm:intertwining} is satisfied by assumption. Since $\vec P_t$ commutes with $\mathcal{J}$, conditions (ii) and (iii) are equivalent. Condition (iii) follows directly from the Kadison--Schwarz inequality: \begin{align*} \langle \vec P_t\xi,R(\rho)\vec P_t\xi\rangle_\mathcal{H}&=\sum_{j}e^{-2Kt}\tau((P_t \xi_j)^\ast (P_t \xi_j)\rho)\\ &\leq e^{-2Kt}\sum_j \tau(\xi_j^\ast \xi_j P_t\rho)\\ &=e^{-2Kt}\langle \xi,R(P_t \rho)\xi\rangle_\mathcal{H}. \end{align*} This settles $\mathrm{GE}(K,\infty)$. Applying the same argument to $(P_t\otimes \mathrm{id}_\mathcal{N})$ then yields the complete gradient estimate. \end{proof} \begin{example}[Conditional expectations]\label{ex:cond_exp} Let $E\colon \mathcal{M}\to \mathcal{N}$ be the conditional expectation onto a von Neumann subalgebra $\mathcal{N}$ and let $(P_t)$ be the QMS with generator $\mathscr{L}=I-E$, where $I=\mathrm{id}_{\mathcal{M}}$ is the identity operator on $\mathcal{M}$. Then $(P_t)$ satisfies $\mathrm{CGE}(1/2,\infty)$: A direct computation shows that $P_t =e^{-t}I+(1-e^{-t})E$. Let $\vec P_t=e^{-t}\mathrm{id}_{\mathcal{H}}$. Since $\mathscr{L} E=0$, we have $\partial E=0$ and therefore $\partial P_t =e^{-t}\partial=\vec{P}_t\partial$, which settles condition (i) from Theorem \ref{thm:intertwining}. Conditions (ii) and (iii) with $K=1/2$ follow immediately from $P_t \rho\geq e^{-t}\rho$ for $\rho\in \mathcal{M}_+$. So $(P_t)$ satisfies $\mathrm{CGE}(1/2,\infty)$. This result has been independently obtained in \cite[Theorem 4.16]{BGJ20}. In contrast, if for example $p$ is a projection and $E(x)=pxp+(1-p)x(1-p)$, then $\mathscr{L}$ has the Lindblad form $\mathscr{L} x=[p,[p,x]]$. Clearly, $[p,\cdot]$ commutes with $\mathscr{L}$, so that the intertwining criterion from \cite{CM17a} only implies $\mathrm{CGE}(0,\infty)$. In fact, in this case we may obtain a better result; see Theorem \ref{thm:com_proj}. \end{example} \begin{example}[Mixed $q$-Gaussian algebras] Let us recall the mixed $q$-Gaussian algebras. Our references are \cite{BS91,BS94,BKS97,LP99}. Let $H$ be a real Hilbert space with orthonormal basis $(e_j)_{j\in J}$. For $k\ge 1$, denote by $S_k$ the set of permutations of $\{1,2,\dots,k\}$. For $k\ge 2$ and $1\le j\le k-1$, denote by $\sigma_{j}$ the adjacent transposition between $j$ and $j+1$. For any $\sigma\in S_k$, $I(\sigma)$ is the number of inversions of the permutations $\sigma$: $$I(\sigma)=\sharp\{(i,j):1\le i<j\le k,~\sigma(i)>\sigma(j)\}.$$ For $k\ge 1$, a \emph{$k$-atom} on $H$ is an element of the form $f_1\otimes \cdots \otimes f_k$ with each $f_j\in H$. A \emph{$k$-basis atom} is an element of the form $e_{j_1}\otimes \cdots\otimes e_{j_k}$. Clearly all the $k$-basis atoms form a basis of $H^{\otimes k}$. For any $k$-basis atom $u=e_{j_1}\otimes \cdots\otimes e_{j_k}$, we use the notation that $\sigma(u)=e_{j_{\sigma(1)}}\otimes \cdots\otimes e_{j_{\sigma(k)}}$. Let $Q=(q_{ij})_{i,j\in J}\in\mathbb{R}^{J\times J}$ be such that $q_{ij}=q_{ji}$ for all $i,j\in J$ and $\sup_{i,j\in J}|q_{ij}|\le1$. For convenience, in the following we actually assume that $\sup_{i,j\in J}|q_{ij}|<1$. This is to simplify the definition of Fock space; our main results still apply to the general $\sup_{i,j\in J}|q_{ij}|\le1$ case. Put $P^{(0)}=\mathrm{id}_{H}$. 
For any $k\ge 1$, denote by $P^{(k)}$ the linear operator on $H^{\otimes k}$ such that \begin{equation*} P^{(k)}(u)=\sum_{\sigma\in S_k}a(Q,\sigma,u)\sigma^{-1}(u), \end{equation*} where $u=e_{j_1}\otimes \cdots \otimes e_{j_k}$ is any $k$-basis atom and \begin{equation*} a(Q,\sigma,u) =\begin{cases} 1&\text{if }\sigma=\mathrm{id},\\ q_{j_{m_l}j_{m_l+1}}\prod_{i=0}^{l-1}q_{j_{\varphi_{i}(m_{l-i})}j_{\varphi_{i}(m_{l-i}+1)}}&\text{if }\sigma=\sigma_{m_1}\cdots\sigma_{m_l}, \end{cases} \end{equation*} with $\varphi_i=\sigma_{m_{l-i+1}}\cdots\sigma_{m_l}$. Notice that if $\sigma=\sigma_{m_1}\cdots\sigma_{m_l}$, the coefficient $a(Q,\sigma,u)$ is well-defined, though such a representation of $\sigma$ is not unique. When all the entries of $Q$ are the same, that is, $q_{ij} \equiv q$, the operator $P^{(k)}$ reduces to \begin{equation*} P^{(k)}(u)=\sum_{\sigma\in S_k} q^{I(\sigma)}\sigma(u). \end{equation*} Under the condition that $\sup_{i,j\in J}|q_{ij}|<1$, the operator $P^{(k)}$ is strictly positive \cite[Theorem 2.3]{BS94}. Let $\mathcal{F}_{Q}^{\text{finite}}$ be the subspace of finite sums of the spaces $H^{\otimes k},k\ge 0$, where $H^{\otimes 0}=\mathbb{R}\Omega$ and $\Omega$ is the vacuum vector. Then $\mathcal{F}_{Q}^{\text{finite}}$ is a dense subset of $\oplus_{k\ge 0}H^{\otimes k}$, and we define an inner product $\langle\cdot,\cdot \rangle_{Q}$ on $\mathcal{F}_{Q}^{\text{finite}}$ as: \begin{equation*} \langle \xi,\eta\rangle_{Q}=\delta_{kl}\langle \xi,P^{(l)}\eta\rangle_0,\text{ for }\xi\in H^{\otimes k},\eta\in H^{\otimes l},\text{ and }k,l\ge0, \end{equation*} where $\langle \cdot,\cdot\rangle_0$ is the usual inner product. The Fock space $\mathcal{F}_{Q}(H)$ is the completion of $\mathcal{F}_{Q}^{\text{finite}}$ with respect to the inner product $\langle\cdot,\cdot \rangle_{Q}$. When $q_{ij}\equiv q$, the Fock space $\mathcal{F}_{Q}(H)$ is also denoted by $\mathcal{F}_{q}(H)$ for short. Notice that if we only have $\sup_{i,j\in J}|q_{ij}|\le 1$, then each $P^{(k)}$ is only positive. One should quotient $\mathcal{F}_{Q}^{\text{finite}}$ by the kernel of $\langle\cdot,\cdot \rangle_{Q}$ before taking the completion. The definition of Fock space here is actually the same as the one in \cite{BS94} associated to the Yang--Baxter operator $$T:H\otimes H\to H\otimes H,~~e_i\otimes e_j\mapsto q_{ji}e_j\otimes e_{i}.$$ See \cite[Part I]{LP99} for a detailed proof of this when $\dim H<\infty$. Now we recall the mixed $q$-Gaussian algebra $\Gamma_{Q}(H)$. For any $i\in J$, the \emph{left creation operator} $l_i$ is defined by \begin{equation*} l_i(\xi)=e_i\otimes \xi,~~\xi\in\mathcal{F}_{Q}(H). \end{equation*} Its adjoint with respect to $\langle \cdot,\cdot\rangle_{Q}$, the \emph{left annihilation operator} $l_i^*$, is given by \begin{equation*} l_i^*(\Omega)=0, \end{equation*} \begin{align*} l^*_i(e_{j_1}\otimes \cdots \otimes e_{j_k})=\sum_{m=1}^{k}&\big(\delta_{i j_m}q_{j_{m}j_{m-1}}q_{j_{m}j_{m-2}}\cdots q_{j_{m}j_{1}}\\ &\quad e_{j_1}\otimes \cdots \otimes e_{j_{m-1}}\otimes e_{j_{m+1}}\otimes \cdots \otimes e_{j_k}\big). \end{align*} The left annihilation operators and left creation operators satisfy the deformed commutation relations on $\mathcal{F}_{Q}(H)$: \begin{equation*} l_i^* l_j-q_{ij}l_j l_i^*=\delta_{ij}\mathrm{id},~~i,j\in J. \end{equation*} The mixed $q$-Gaussian algebra $\Gamma_{Q}(H)$ is defined as the von Neumann subalgebra of $B(\mathcal{F}_{Q}(H))$ generated by self-adjoint operators $s_i=l_i+l_i^*,i\in J$. 
It is equipped with a normal faithful tracial state $\tau_Q$ given by \begin{equation*} \tau_Q(x)=\langle x \Omega,\Omega\rangle_Q. \end{equation*} The map $\phi_{H}\colon\Gamma_{Q}(H)\to\mathcal{F}_{Q}(H),x\mapsto x(\Omega)$, extends to a unitary, which we still denote by $\phi_H$, from $L^2(\Gamma_{Q}(H),\tau_Q)$ to $\mathcal{F}_{Q}(H)$. Note that $\phi_H(s_i)=e_i$. Let $T\colon H\to H$ be a contraction. Then it induces a contraction $\mathcal{F}_Q(T)$ on $\mathcal{F}_Q(H)$ such that \cite[Lemma 1.1]{LP99} $$\mathcal{F}_Q(T)\Omega=\Omega,$$ $$\mathcal{F}_Q(T)(f_1\otimes \cdots \otimes f_k)=T(f_1)\otimes \cdots \otimes T(f_k),$$ for any $k$-atom $f_1\otimes \cdots \otimes f_k$ and any $k\ge 1$. Moreover, there exists a unique unital and completely positive map $\Gamma_Q(T)$ on $\Gamma_Q(H)$ such that \cite[Lemma 3.1]{LP99} $$\Gamma_Q(T)=\phi_{H}^{-1}\mathcal{F}_{Q}(T)\phi_{H}.$$ Remark that $\Gamma_Q$ is a functor, that is, $\Gamma_Q(ST)=\Gamma_Q(S)\Gamma_Q(T)$ for two contractions $S,T$ on $H$. If $q_{ij}\equiv q\in[-1,1]$, then we write the functor $\Gamma_Q$ as $\Gamma_q$ for short. It interpolates between the bosonic and the fermionic functors by taking $q=+1$ and $q=-1$ respectively. When $q=0$, it becomes the free functor by Voiculescu \cite{Voi85}. For more examples, see \cite[Introduction]{LP99}. In particular, $T_t=T_t^Q=\mathcal{F}_Q(e^{-t}\mathrm{id}_{H})$ is a semigroup of contractions on $\mathcal{F}_Q(H)$. The mixed $q$-Ornstein--Uhlenbeck semigroup is defined as $P_t=P_t^Q=\Gamma_Q(e^{-t}\mathrm{id}_H), t\ge 0$. It extends to a semigroup of contractions on $L^2(\Gamma_Q(H),\tau_Q)$ and is $\tau_Q$-symmetric. Note that the generator of $P_t$ is $L=\phi^{-1}_{H}N\phi_{H}$, where $N\colon\mathcal{F}_{Q}^{\text{finite}}(H)\to \mathcal{F}_{Q}^{\text{finite}}(H)$, is the number operator defined as $k\mathrm{id}$ on its eigenspace $H^{\otimes k},k\ge 0$. Put \begin{equation*} Q'=Q\otimes \begin{pmatrix} 1&1\\ 1&1 \end{pmatrix}, \end{equation*} and $$e=\begin{pmatrix} 1\\ 0 \end{pmatrix},~~ f=\begin{pmatrix} 0\\ 1 \end{pmatrix}. $$ Then $H'\colon =H\oplus H$ can be identified with $H\otimes \mathbb{R}^2$, as a direct sum of $H\otimes \mathbb{R}e$ and $H\otimes \mathbb{R}f$. The number operator $N$ admits the following form \cite[Lemma 1.2]{LP99}: $N=\nabla^\dagger\nabla$, where $\nabla\colon\mathcal{F}_{Q}^{\text{finite}}(H)\to \mathcal{F}_{Q'}^{\text{finite}}(H')$ is the \emph{gradient operator} such that $\nabla(\Omega)=0$, and \begin{equation*} \nabla(u)=\sum_{i=1}^{k}u\otimes v_i, \end{equation*} for $k\ge 1$, $u$ being any $k$-atom on $H$ and $v_i=e\otimes \cdots\otimes f\otimes \cdots \otimes e\in(\mathbb{R}^2)^{\otimes k}$, $f$ occurring in the $i$-th factor. Remark that similar to the second quantization of any contraction $T:H\to H$, the natural embedding $\iota_H:H\to H',x\mapsto x\otimes e$ also induces a unique map $h_H\colon\Gamma_{Q}(H)\to \Gamma_{Q'}(H')$ such that \cite[Lemma 3.1]{LP99} \begin{equation}\label{eq:h_H} h_H=\Gamma_Q(\iota_H)=\phi_{H'}^{-1}\mathcal{F}_Q(\iota_H)\phi_H, \end{equation} where $\mathcal{F}_Q(\iota_H)$ is defined as $\iota_H\otimes \cdots \otimes \iota_H$ on $H^{\otimes k}$, $k\ge 0$. Set $\partial\colon=\phi_{H'}^{-1}\nabla \phi_{H}$. Then the generator $L$ of $P_t$ takes the form $L=\partial^\dagger\partial$ and $\partial$ is a derivation \cite[Proposition 3.2]{LP99}: \begin{equation*} \partial(xy)=\partial(x)h_{H}(y)+h_{H}(x)\partial(y), \end{equation*} for all $x,y\in \phi_H^{-1}(\mathcal{F}_{Q}^{\text{finite}}(H))$. 
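The objects just introduced are easy to probe numerically on a truncated Fock space. The following sketch is an illustration of ours and not part of the formal development; the dimension, the matrix $Q$ and all variable names are chosen only for demonstration. It checks the deformed commutation relation $l_i^*l_j-q_{ij}l_jl_i^*=\delta_{ij}\mathrm{id}$ on the vacuum and one-particle vectors (where the truncation at two particles is exact) and the strict positivity of $P^{(2)}$ for $\sup_{i,j}|q_{ij}|<1$.
\begin{verbatim}
import itertools
import numpy as np

# Illustrative check of the deformed relations on a truncated Fock space.
# dim H = 2; Q is symmetric with sup |q_ij| < 1 (all choices are ours).
dim = 2
Q = np.array([[0.3, -0.5],
              [-0.5, 0.7]])

# Basis of the truncation R*Omega + H + H (x) H.
basis = [("vac",)] + [(i,) for i in range(dim)] \
        + list(itertools.product(range(dim), repeat=2))
idx = {v: n for n, v in enumerate(basis)}
N = len(basis)

def creation(i):
    # l_i: prepend e_i (two-particle vectors leave the truncation; set to 0).
    L = np.zeros((N, N))
    L[idx[(i,)], idx[("vac",)]] = 1.0
    for j in range(dim):
        L[idx[(i, j)], idx[(j,)]] = 1.0
    return L

def annihilation(i):
    # l_i^* from the explicit formula, restricted to at most two particles:
    # l_i^*(Omega) = 0, l_i^*(e_j) = delta_ij Omega,
    # l_i^*(e_j (x) e_k) = delta_ij e_k + delta_ik q_{kj} e_j.
    A = np.zeros((N, N))
    A[idx[("vac",)], idx[(i,)]] = 1.0
    for j, k in itertools.product(range(dim), repeat=2):
        if i == j:
            A[idx[(k,)], idx[(j, k)]] += 1.0
        if i == k:
            A[idx[(j,)], idx[(j, k)]] += Q[k, j]
    return A

# l_i^* l_j - q_ij l_j l_i^* = delta_ij id, tested on the vacuum and
# one-particle vectors (there the two-particle truncation is exact).
low = [idx[("vac",)]] + [idx[(i,)] for i in range(dim)]
for i, j in itertools.product(range(dim), repeat=2):
    R = annihilation(i) @ creation(j) - Q[i, j] * creation(j) @ annihilation(i)
    target = (1.0 if i == j else 0.0) * np.eye(N)
    assert np.allclose(R[:, low], target[:, low])

# P^(2)(e_i (x) e_j) = e_i (x) e_j + q_ij e_j (x) e_i is strictly positive.
P2 = np.eye(dim * dim)
for i, j in itertools.product(range(dim), repeat=2):
    P2[j * dim + i, i * dim + j] += Q[i, j]
assert np.linalg.eigvalsh(P2).min() > 0
print("deformed commutation relation and positivity of P^(2): OK")
\end{verbatim}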
\smallskip Now we prove that $P_t=e^{-tL}$ on $\Gamma_{Q}(H)$ satisfies $\mathrm{CGE}(1,\infty)$. For this let us first take a look of the semigroup $T_t=e^{-tN}$ on $\mathcal{F}_{Q}(H)$. By definition, it equals $e^{-kt}\mathrm{id}$ on its eigenspace $H^{\otimes k}$. For each $t\ge 0$, consider the map $$\vec{T}_t=e^{-t}\mathcal{F}_{Q'}(S_t)\colon\mathcal{F}_{Q'}(H')\to \mathcal{F}_{Q'}(H'),$$ where $S_t$ is a contraction on $H'$ given by $$S_t(x\otimes e)=e^{-t}x\otimes e,~~S_t(x\otimes f)=x\otimes f,~~x\in H.$$ Then by definition, we have the intertwining condition \begin{equation}\label{eq:intertwining for Fock space} \nabla T_t=\vec{T}_t \nabla. \end{equation} In fact, it is obvious when acting on $\mathbb{R}\Omega$. If $u$ is a $k$-atom on $H$, $k\ge1$, then \begin{equation*} \nabla T_t(u)=e^{-kt}\nabla (u)=e^{-kt}\sum_{i=1}^{k}u\otimes v_i, \end{equation*} and \begin{equation*} \vec{T}_t \nabla (u) =\sum_{i=1}^{k}\vec{T}_t (u\otimes v_i) =e^{-t}\sum_{i=1}^{k}\mathcal{F}_{Q'}(S_t)(u\otimes v_i) =e^{-kt}\sum_{i=1}^{k}u\otimes v_i. \end{equation*} Remark that if one chooses $\vec{T}_t=\mathcal{F}_{Q'}(e^{-t}\mathrm{id}_{H'})$, then we can only obtain $\mathrm{CGE}(0,\infty)$. Put $\vec{P}_t=\phi_{H'}^{-1}\vec{T}_t\phi_{H'}$. Then $\vec{P}_t$ is $\tau_{Q'}$-symmetric. Note that $P_t=\phi_H^{-1}T_t\phi_H$, thus by \eqref{eq:intertwining for Fock space} we have the intertwining condition \begin{equation*} \partial P_t =\phi_{H'}^{-1}\nabla T_t\phi_{H} =\phi_{H'}^{-1}\vec{T}_t\nabla\phi_{H} =\vec{P}_t \partial,~t\ge 0. \end{equation*} Note that $S_t\circ \iota_H=e^{-t}\iota_H\circ\mathrm{id}_H,t\ge 0$. This, together with the definitions of $h_H$ \eqref{eq:h_H} and $\vec{P}_t$, yields \begin{align} \begin{split}\label{eq:intertwining h_H P_t} \vec{P}_t h_{H} &=e^{-t}\phi_{H'}^{-1}\mathcal{F}_{Q'}(S_t)\mathcal{F}_Q(\iota_H)\phi_H\\ &=e^{-t}\phi_{H'}^{-1}\mathcal{F}_Q(\iota_H)\mathcal{F}_Q(e^{-t}\mathrm{id}_H)\phi_H\\ &=e^{-t}h_{H} P_t. \end{split} \end{align} By Theorem \ref{thm:intertwining}, to show that $P_t$ satisfies $\mathrm{GE}(1,\infty)$, it remains to check (ii) and (iii) with $\vec{P}_t$ as above and the left and right action of $\Gamma_Q(H)$ on $\Gamma_{Q'}(H')$ being $$L(\rho)a= h_{H}(\rho)a,~~ R(\rho)a=a h_{H}(\rho).$$ To prove (ii) we need to show that for any $\rho\in \Gamma_Q(H)_+$ and $a\in\Gamma_{Q'}(H')$: \begin{equation*} \langle\vec{P}_t (a),L(\rho)\vec{P}_t (a)\rangle_2 \le e^{-2t} \langle a,L(P_t(\rho))(a)\rangle_2, \end{equation*} where the inner product is induced by $\tau_{Q'}$. To see this, note that $\vec{P}_t$ is completely positive and $\vec{P}_t(1)=e^{-t}1$ \cite[Lemma 3.1]{LP99}. By the Kadison--Schwarz inequality and \eqref{eq:intertwining h_H P_t}, we have \begin{align*} \langle\vec{P}_t (a),L(\rho)\vec{P}_t (a)\rangle_2 &=\tau_{Q'}\left(\vec{P}_t (a)\vec{P}_t (a)^*h_{H}(\rho)\right)\\ &\le e^{-t}\tau_{Q'}\left(\vec{P}_t (aa^*)h_{H}(\rho)\right)\\ &=e^{-t}\tau_{Q'}\left(aa^*\vec{P}_t h_{H}(\rho)\right)\\ &=e^{-2t}\tau_{Q'}\left(aa^* h_{H} P_t(\rho)\right)\\ &=e^{-2t} \langle a,L(P_t(\rho))(a)\rangle_2, \end{align*} which finishes the proof of (ii). The proof of (iii) is similar. So $P_t$ satisfies $\mathrm{GE}(1,\infty)$. Applying the same argument to $P_t\otimes \mathrm{id}_{\mathcal{N}}$, we obtain $\mathrm{CGE}(1,\infty)$. 
\end{example} \begin{remark} As mentioned in \cite[Section 4.4]{JLL20}, the previous example can also be deduced from the complete gradient estimate for the classical Ornstein--Uhlenbeck semigroup using the ultraproduct methods from \cite{JZ15b}. However, in contrast to this approach we do not need to use the Ricci curvature bound for the classical Ornstein--Uhlenbeck semigroup, but get it as a special case (with minor modifications accounting for $\abs{q}=1$ in this case). \end{remark} \begin{example}[Quantum Tori] For $\theta\in [0,1)$ let $A_\theta$ be the universal $C^\ast$-algebra generated by unitaries $u=u_{\theta},v=v_{\theta}$ subject to the relation $vu=e^{2\pi i\theta}uv$. Let $\tau=\tau_{\theta}$ be the unique faithful tracial state on $A_\theta$ given by $\tau(u^m v^n)=\delta_{m,0}\delta_{n,0}$. The semigroup $(P_t)=(P^\theta_t)$ given by $P_t(u^m v^n)=e^{-t(m^2+n^2)}u^m v^n$ extends to a $\tau$-symmetric QMS on $L^\infty(A_\theta,\tau)$, which satisfies $\mathrm{CGE}(0,\infty)$. Here $L^\infty(A_\theta,\tau)$ denotes the strong closure of $A_\theta$ in the GNS representation associated with $\tau$. In fact, according to \cite[Section 10.6]{CS03}, $\mathcal{H}=L^2(A_{\theta},\tau)\oplus L^2(A_{\theta},\tau)$ and $\partial(u^m v^n)=(\partial_1(u^m v^n),\partial_2(u^m v^n))=i(mu^m v^n,n u^m v^n)$. Clearly, $\partial_j$ commutes with $P_t$ for $j=1,2$, so that $\mathrm{CGE}(0,\infty)$ follows from Corollary \ref{cor:intertwining_dir_sum}. In the commutative case $\theta=0$, $A_{\theta}=C(\mathbb{T}^2)$ is the C*-algebra of all continuous functions on flat $2$ torus $\mathbb{T}^2$ and the semigroup $(P_t)$ is the heat semigroup generated by the Laplace--Beltrami operator on the flat $2$-torus, which has vanishing Ricci curvature. Thus the constant $0$ in the gradient estimate is optimal. In fact, for any $\theta,\theta'\in [0,1)$, the semigroup $P_t^\theta$ on $L^\infty(A_{\theta},\tau_\theta)$ satisfies $\mathrm{CGE}(K,\infty)$ if and only if the semigroup $(P_t^{\theta'})$ on $L^\infty(A_{\theta'},\tau_{\theta'})$ satisfies $\mathrm{CGE}(K,\infty)$. Thus the gradient estimate $\mathrm{CGE}(0,\infty)$ is optimal for any $\theta\in[0,1)$. To see this, note first that by standard approximation arguments it suffices to show $\mathrm{GE}(K,\infty)$ for $\rho\in (A_{\theta})_+$ and $a\in D(\mathscr{L}_2^{1/2})\cap A_\theta$. By universal property of $A_{\theta+\theta'}$, there exists a $^\ast$-homomorphism $\pi\colon A_{\theta+\theta'}\to A_{\theta}\otimes A_{\theta'}$ such that \begin{equation*} \pi(u_{\theta+\theta'})=u_{\theta}\otimes u_{\theta'},~~\pi(v_{\theta+\theta'})=v_{\theta}\otimes v_{\theta'}. \end{equation*} Clearly $\pi$ is trace preserving and satisfies \begin{equation*} (P^{\theta}_t\otimes \mathrm{id}_{A_{\theta'}})\circ\pi=\pi\circ P_t^{\theta+\theta'}. \end{equation*} So if $P^{\theta}_t$ satisfies $\mathrm{CGE}(K,\infty)$, then so does $P^{\theta+\theta'}_t$. Since $\theta$ and $\theta'$ are arbitrary, we finish the proof of the assertion. This idea of transference was used in \cite{Ric16} to give a simple proof that the completely bounded Fourier multipliers on noncommutative $L_p$-spaces associated with quantum tori $A_{\theta}$ do not depend on the parameter $\theta$. The transference technique has been used in \cite{GJL18,JLL20} to study complete logarithmic Sobolev inequality. The same conclusion goes for $d$-dimensional quantum torus $A_{\theta}$ with $\theta$ being a $d$-by-$d$ real skew-symmetric matrix. 
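For rational angles the defining relation of $A_\theta$ admits the familiar finite-dimensional realization by clock and shift matrices. The following sketch is ours and purely illustrative (the size $n=5$ and all variable names are assumptions); it is used only to visualize the relation and the trace, not to define $A_\theta$. It verifies $vu=e^{2\pi i\theta}uv$ and the vanishing of the normalized trace on the nontrivial monomials $u^mv^k$.
\begin{verbatim}
import numpy as np

# Clock-and-shift model of the rational angle theta = 1/n (n = 5 is our choice).
n = 5
theta = 1.0 / n
omega = np.exp(2j * np.pi * theta)

U = np.diag(omega ** np.arange(n))        # "clock": U e_j = omega^j e_j
V = np.roll(np.eye(n), -1, axis=0)        # "shift": V e_j = e_{j-1 mod n}

# Defining relation of the quantum torus: v u = e^{2 pi i theta} u v.
assert np.allclose(V @ U, omega * U @ V)

# The normalized trace of u^m v^k vanishes unless m = k = 0, matching
# tau(u^m v^k) = delta_{m,0} delta_{k,0}.
for m in range(n):
    for k in range(n):
        t = np.trace(np.linalg.matrix_power(U, m)
                     @ np.linalg.matrix_power(V, k)) / n
        assert np.isclose(t, 1.0 if m == k == 0 else 0.0)

print("clock/shift realization of the relation vu = e^{2 pi i/5} uv: OK")
\end{verbatim}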
\end{example} \begin{example}[Quantum groups]\label{ex:quantum groups} A \emph{compact quantum group} is a pair $\mathbb{G}=(A,\Delta)$ consisting of a unital C*-algebra $A$ and a unital $^\ast$-homomorphism $\Delta\colon A\to A\otimes A$ such that \begin{enumerate} \item $(\Delta\otimes\mathrm{id}_A)\Delta=(\mathrm{id}_A\otimes\Delta)\Delta$; \item $\{\Delta(a)(1\otimes b):a,b\in A\}$ and $\{\Delta(a)(b\otimes1):a,b\in A\}$ are linearly dense in $A\otimes A$. \end{enumerate} Here $A\otimes A$ is the minimal C*-algebra tensor product. The homomorphism $\Delta$ is called the \emph{comultiplication} on $A$. We denote $A=C(\mathbb{G})$. Any compact quantum group $\mathbb{G}=(A,\Delta)$ admits a unique \textit{Haar state}, i.e.\ a state $h$ on $A$ such that \begin{equation*} (h\otimes\mathrm{id}_A)\Delta(a)=h(a)1=(\mathrm{id}_A\otimes h)\Delta(a),~~a\in A. \end{equation*} Consider an element $u\in A\otimes B(H)$ with $\dim H=n$. By identifying $A\otimes B(H)$ with $M_n(A)$ we can write $u=[u_{ij}]_{i,j=1}^{n}$, where $u_{ij}\in A$. The matrix $u$ is called an \textit{n-dimensional representation} of $\mathbb{G}$ if we have \[ \Delta(u_{ij})=\sum_{k=1}^{n}u_{ik}\otimes u_{kj},~~i,j=1,\dots,n. \] A representation $u$ is called \textit{unitary} if $u$ is unitary as an element in $M_n(A)$, and \textit{irreducible} if the only matrices $T\in M_n(\mathbb{C})$ such that $uT=Tu$ are multiples of the identity matrix. Two representations $u,v\in M_n(A)$ are said to be \textit{equivalent} if there exists an invertible matrix $T\in M_n(\mathbb{C})$ such that $Tu=vT$. Denote by $\mathrm{Irr}(\mathbb{G})$ the set of equivalence classes of irreducible unitary representations of $\mathbb{G}$. For each $\alpha\in\mathrm{Irr}(\mathbb{G})$, denote by $u^\alpha\in A\otimes B(H_\alpha)$ a representative of the class $\alpha$, where $H_\alpha$ is the finite dimensional Hilbert space on which $u^\alpha$ acts. In the sequel we write $n_\alpha=\dim H_\alpha$. Denote $\mathrm{Pol}(\mathbb{G})=\text{span} \left\{u^\alpha_{ij}:1\leq i,j\leq n_\alpha,\alpha\in\mathrm{Irr}(\mathbb{G})\right\}$. This is a dense subalgebra of $A$. On $\mathrm{Pol}(\mathbb{G})$ the Haar state $h$ is faithful. It is well-known that $(\mathrm{Pol}(\mathbb{G}),\Delta)$ is equipped with a Hopf*-algebra structure, that is, there exist a linear antihomomorphism $S$ on $\mathrm{Pol}(\mathbb{G})$, called the \textit{antipode}, and a unital $^\ast$-homomorphism $\epsilon\colon\mathrm{Pol}(\mathbb{G})\to\mathbb{C}$, called the \textit{counit}, such that \begin{equation*} (\epsilon\otimes\mathrm{id}_{\mathrm{Pol}(\mathbb{G})})\Delta(a)=a=(\mathrm{id}_{\mathrm{Pol}(\mathbb{G})}\otimes\epsilon)\Delta(a),~~a\in\mathrm{Pol}(\mathbb{G}), \end{equation*} and \begin{equation*} m(S\otimes\mathrm{id}_{\mathrm{Pol}(\mathbb{G})})\Delta(a)=\epsilon(a)1=m(\mathrm{id}_{\mathrm{Pol}(\mathbb{G})}\otimes S)\Delta(a),~~a\in\mathrm{Pol}(\mathbb{G}). \end{equation*} Here $m$ denotes the multiplication map $m\colon\mathrm{Pol}(\mathbb{G})\otimes_{\text{alg}}\mathrm{Pol}(\mathbb{G})\to\mathrm{Pol}(\mathbb{G}),~~a\otimes b\mapsto ab$. Indeed, the antipode and the counit are uniquely determined by \begin{equation*} S(u^\alpha_{ij})=(u^{\alpha}_{ji})^*,~~1\leq i,j\leq n_\alpha,~~\alpha\in\mathrm{Irr}(\mathbb{G}), \end{equation*} \begin{equation*} \epsilon(u^\alpha_{ij})=\delta_{ij},~~1\leq i,j\leq n_\alpha,~~\alpha\in\mathrm{Irr}(\mathbb{G}). 
\end{equation*} \smallskip Since the Haar state $h$ is faithful on $\mathrm{Pol}(\mathbb{G})$, one may consider the corresponding GNS construction $(\pi_h,H_h,\xi_h)$ such that $h(x)=\langle \xi_h,\pi_h(x)\xi_h \rangle_{H_h}$ for all $x\in \mathrm{Pol}(\mathbb{G})$. The \emph{reduced $C^\ast$-algebra} $C_{r}(\mathbb{G})$ is the norm completion of $\pi_h(\mathrm{Pol}(\mathbb{G}))$ in $B(H_{h})$. Then the restriction of comultiplication $\Delta$ to $\mathrm{Pol}(\mathbb{G})$, extends to a unital $^\ast$-homomorphism on $C_r(\mathbb{G})$, which we still denote by $\Delta$. The pair $(C_r(\mathbb{G}),\Delta)$ forms a compact quantum group, and in the following we always consider this reduced version (instead of the \emph{universal} one, since the Haar state $h$ is always faithful on $C_{r}(\mathbb{G})$). Denote by $L^\infty(\mathbb{G})=C_r(\mathbb{G})''$ the von Neumann subalgebra of $B(H_h)$ generated by $C_r(\mathbb{G})$, and we can define the noncommutative $L^p$-spaces associated with $(L^\infty(\mathbb{G}),h)$. In particular, we identify $L^2(\mathbb{G})$ with $H_h$. We refer to \cite{MV98} and \cite{Wor98} for more details about compact quantum groups. \smallskip A compact quantum group $\mathbb{G}$ is of \emph{Kac type} if the Haar state is tracial. In the following $\mathbb{G}$ is always a compact quantum group of Kac type, which is the case for later examples $O_N^+$ and $S_N^+$. Given a L\'evy process $(j_t)_{t\ge 0}$ \cite[Definition 2.4]{CFK14} on $\mathrm{Pol}(\mathbb{G})$ one can associate it to a semigroup $P_t=(\mathrm{id}\otimes \phi_t)\Delta$ on $C_r(\mathbb{G})$, where $\phi_t$ is the marginal distribution of $j_t$. This $(P_t)$ is a strongly continuous semigroup of unital completely positive maps on $C_r(\mathbb{G})$ that are symmetric with respect to the Haar state $h$ \cite[Theorem 3.2]{CFK14}. Then $(P_t)$ extends to a $h$-symmetric QMS on $L^\infty(\mathbb{G})$. The corresponding first-order differential calculus can be described in terms of a \emph{Schürmann triple} $((H,\pi),\eta,\varphi)$ \cite[Propositions 8.1, 8.2]{CFK14}. The tangent bimodule $\mathcal{H}$ is then a submodule of $L^2(\mathbb{G})\otimes H$ with the left and right action given by $L=(\lambda_L\otimes \pi)\Delta$ and $R=\lambda_R\otimes \mathrm{id}_H$, respectively. Here $\lambda_L$ and $\lambda_R$ are the left and right action of $L^\infty(\mathbb{G})$ on $L^2(\mathbb{G})$: $$\lambda_L(a)(b\xi_h)=ab\xi_h,~~\lambda_R(a)(b\xi_h)=ba\xi_h.$$ The derivation \cite[Proposition 8.1]{CFK14} is given on $\mathrm{Pol}(\mathbb{G})$ by $\partial=(\iota_h\otimes\eta)\Delta$, where $\iota_h\colon L^\infty(\mathbb{G})\to L^2(\mathbb{G})$ is the natural embedding: $$\iota_h(a)=a\xi_h.$$ Note that the QMS $(P_t)$ is always \emph{right translation invariant}: $(\mathrm{id}\otimes P_t)\Delta=\Delta P_t$ for all $t\ge0$. In fact, any right translation invariant QMS must arise in this way \cite[Theorem 3.4]{CFK14}. Here we are interested in semigroups $(P_t)$ that are not only right translation invariant but also \emph{left translation invariant}, or \emph{translation bi-invariant}: for all $t\ge0$ \begin{equation}\label{eq:bi-invariance} (P_t\otimes \mathrm{id})\Delta=\Delta P_t=(\mathrm{id}\otimes P_t)\Delta. \end{equation} In this case, let $\vec P_t=P_t\otimes \mathrm{id}_H$, and we have \begin{equation*} \vec P_t \partial=(P_t\otimes \mathrm{id}_H)(\iota_h\otimes \eta)\Delta=(\iota_h\otimes \eta)(P_t\otimes \mathrm{id}_A) \Delta=(\iota_h\otimes\eta)\Delta P_t=\partial P_t. 
\end{equation*} It is not hard to check that $\vec P_t$ is $\mathcal{J}$-real. We will show that it also satisfies the condition (iii) from Theorem \ref{thm:intertwining} for $K=0$. For $\xi_1,\dots,\xi_n\in H$ and $x_1,\dots,x_n\in A$ we have \begin{align*} &\quad\left\langle (P_t\otimes \mathrm{id}_H)\sum_k x_k\otimes \xi_k,R(\rho)(P_t\otimes \mathrm{id})\sum_k x_k\otimes \xi_k\right\rangle\\ &=\sum_{k,l}\langle \xi_k,\xi_l\rangle h((P_t x_k)^\ast (P_t x_l)\rho), \end{align*} and \begin{equation*} \left\langle \sum_k x_k\otimes \xi_k,R(P_t \rho)\sum_k x_k\otimes \xi_k\right\rangle =\sum_{k,l}\langle \xi_k,\xi_l\rangle h(x_k^\ast x_l P_t\rho). \end{equation*} Clearly, the matrix $[\langle \xi_k,\xi_l\rangle]_{k,l}$ is positive semi-definite. By Kadison--Schwarz inequality, \begin{equation*} [(P_t x_k)^\ast (P_t x_l)]_{k,l}\leq [P_t (x_k^\ast x_l)]_{k,l}. \end{equation*} Thus also $[h((P_t x_k)^\ast (P_t x_l)\rho)]_{k,l}\leq [h(x_k^\ast x_l P_t \rho)]_{k,l}$. Since the Hadamard product of positive semi-definite matrices is positive semi-definite, it follows that \begin{equation*} [\langle \xi_k,\xi_l\rangle h((P_t x_k)^\ast(P_t x_l)\rho)]_{k,l}\leq [\langle \xi_k,\xi_l\rangle h(x_k^\ast x_l P_t\rho)]_{k,l}. \end{equation*} Hence \begin{equation*} \sum_{k,l}\langle \xi_k,\xi_l\rangle h((P_t x_k)^\ast (P_t x_l)\rho) \le\sum_{k,l}\langle \xi_k,\xi_l\rangle h(x_k^\ast x_l P_t\rho), \end{equation*} and we get the desired result. Thus $(P_t)$ satisfies $\mathrm{GE}(0,\infty)$. Applying the same argument to $(P_t\otimes \mathrm{id}_{\mathcal{N}})$, we get $\mathrm{CGE}(0,\infty)$. \smallskip If each $\phi_t$ is \emph{central}: \begin{equation}\label{eq:central} (\phi_t\otimes \mathrm{id})\Delta=(\mathrm{id}\otimes \phi_t)\Delta. \end{equation} then the QMS $P_t=(\mathrm{id}\otimes \phi_t)\Delta$ is translation-bi-invariant. Recall that the convolution of two functionals $\psi_1,\psi_2$ on $C(\mathbb{G})$ (or $C_r(\mathbb{G})$, $\mathrm{Pol}(\mathbb{G})$) is defined as $\psi_1\star \psi_2=(\psi_1\otimes \psi_2)\Delta$. The \emph{convolution semigroup of states} $\phi_t=\epsilon+\sum_{n\ge 1}\frac{t^{ n}}{n!}\psi^{\star n}$ is generated by $\psi$, called the \emph{generating functional}, where $\psi$ is hermitian, conditionally positive and vanishes on $1$ (see \cite[Section 2.5]{CFK14} for details). Then once the generating functional $\psi$ is central, the QMS $P_t=(\mathrm{id}\otimes \phi_t)\Delta=e^{tT_\psi}$ is translation-bi-invariant, and thus satisfies $\mathrm{CGE}(0,\infty)$, where $T_{\psi}=(\mathrm{id}\otimes \psi)\Delta$. For the geometric Ricci curvature condition this result was independently proven in \cite[Lemma 4.6]{BGJ20}. \end{example} In the next few examples we collect some specific instances of QMS on quantum groups which are translation-bi-invariant. Firstly we give some commutative examples. \begin{example}[Compact Lie groups] For any compact group $G$, $(C(G),\Delta)$ forms a compact quantum group, where $C(G)$ is the C*-algebra of all continuous functions on $G$ and the comultiplication $\Delta\colon C(G)\to C(G)\otimes C(G)\cong C(G\times G)$ is given by $\Delta f(s,t)=f(st)$. The Haar state $h$ is nothing but $\int \cdot\, d\mu$, with $\mu$ being the Haar (probability) measure. Consider the QMS $(P_t)$ on $C(G)$: $P_t(f)(s)=\int_{G}f(r)K_t(r,s)d\mu(r)$. 
Then $(P_t)$ is translation bi-invariant if and only if the kernel $K_t$ is bi-invariant under $G$: $K_t(gr,gs)=K_t(r,s)=K_t(rg,sg)$ for all $g,r,s\in G$, or equivalently, $(P_t)$ is a convolution semigroup with the kernel $\tilde{K}_t(s)=K_t(e,s)$ being conjugate-invariant: $\tilde{K}_t(s)=\tilde{K}_t(gsg^{-1})$ for all $g,s\in G$. Let $G$ be a compact Lie group with a bi-invariant Riemannian metric $g$. If $(P_t)$ is the heat semigroup generated by the Laplace--Beltrami operator, then a direct computation shows that the bi-invariance of the metric implies the translation-bi-invariance of $(P_t)$. Thus we recover the well-known fact from Riemannian geometry that the Ricci curvature of a compact Lie group with bi-invariant metric is always nonnegative (see e.g. \cite[Section 7]{Mil76}). \end{example} Secondly, we give co-commutative examples. By saying co-commutative we mean $\Delta=\Pi\circ\Delta$, where $\Pi$ is the tensor flip, i.e., $\Pi(a\otimes b)=b\otimes a$. \begin{example}[Group von Neumann algebras]\label{ex:group_alg} Let $G$ be a countable discrete group with unit $e$, $C_r^\ast(G)$ the reduced $C^\ast$-algebra generated by the left regular representation $\lambda$ of $G$ on $\ell^2(G)$ and $L(G)$ the group von Neumann algebra $L(G)=C_r^\ast(G)^{\prime\prime}\subset B(\ell^2(G))$. Then $\mathbb{G}=(C_r^\ast(G),\Delta)$ is a quantum group with comultiplication given by $\Delta(\lambda_g)=\lambda_g\otimes \lambda_g$. The Haar state on $\mathbb{G}$ is given by $\tau(x)=\langle x\delta_e,\delta_e\rangle$, which is tracial and faithful. Here and in what follows, $\delta_g$ always denotes the function on $G$ that takes value 1 at $g$ and vanishes elsewhere. A function $\psi\colon G\to [0,\infty)$ is a \emph{conditionally negative definite} (cnd) length function if $\psi(e)=0$, $\psi(g^{-1})=\psi(g)$ and \begin{equation*} \sum_{g,h\in G}\overline{f(g)}f(h)\psi(g^{-1}h)\leq 0 \end{equation*} for every $f\colon G\to \mathbb{C}$ with finite support such that $\sum_{g\in G} f(g)=0$. By Schoenberg's Theorem (see for example \cite[Theorem D.11]{BO08}), to every cnd function one can associate a $\tau$-symmetric QMS on $L(G)$ given by \begin{equation*} P_t \lambda_g=e^{-t\psi(g)}\lambda_g. \end{equation*} It is easy to check that $(P_t)$ satisfies the translation-bi-invariant condition (\ref{eq:bi-invariance}). Thus it satisfies $\mathrm{CGE}(0,\infty)$. \end{example} Now we give some genuine quantum group examples. \begin{example}[Free orthogonal quantum group $O^+_N$ \cite{Wan95}]\label{ex:free orthogonal quantum group} Let $N\ge2$. The free orthogonal quantum group $O^+_N$ consists of a pair $(C_u(O^+_N),\Delta)$, where $C_u(O^+_N)$ is the universal C*-algebra generated by $N^2$ self-adjoint operators $u_{ij},1\le i,j\le N$, such that $U=[u_{ij}]_{1\le i,j\le N}\in M_N(\mathbb{C})\otimes C_u(O^+_N)$ is unitary, that is, \begin{equation*} \sum_{k=1}^{N}u_{ik}u_{jk}=\delta_{ij}=\sum_{k=1}^{N}u_{ki}u_{kj},~~1\le i,j\le N, \end{equation*} and the comultiplication $\Delta$ is given by \begin{equation*} \Delta(u_{ij})=\sum_{k=1}^{N}u_{ik}\otimes u_{kj},~~1\le i,j\le N. \end{equation*} The equivalence classes of irreducible unitary representations of $O_N^+$ can be indexed by $\mathbb{N}$, with $u^{(0)}=1$ the trivial representation and $u^{(1)}=U$ the fundamental representation. 
By \cite[Corollary 10.3]{CFK14}, the central generating functionals $\psi$ on $\mathrm{Pol}(O^+_N)$ are given by \begin{equation*} \psi(u^{(s)}_{ij})=\frac{\delta_{ij}}{U_s(N)}\left(-bU_s'(N)+\int_{-N}^{N}\frac{U_s(x)-U_s(N)}{N-x}\nu(dx)\right), \end{equation*} for $s\in \mathrm{Irr}(O_N^+)=\mathbb{N},1\le i,j\le n_s$, where $U_s$ denotes the $s$-th Chebyshev polynomial of the second kind, $b\ge 0$, and $\nu$ is a finite measure on $[-N,N]$ with $\nu(\{N\})=0$. Then given $(b,\nu)$, the central functional $\psi$ defined as above induces a QMS $P_t^\psi=e^{t T_\psi}$ satisfying \eqref{eq:bi-invariance}, where $T_\psi=(\mathrm{id}\otimes \psi)\Delta$. Hence it satisfies $\mathrm{CGE}(0,\infty)$. \end{example} \begin{example}[Free permutation quantum group $S^+_N$ \cite{Wan98}]\label{ex:free permutation quantum group} Let $N\ge2$. The free permutation quantum group $S^+_N$ consists of a pair $(C_u(S^+_N),\Delta)$, where $C_u(S^+_N)$ is the universal C*-algebra generated by $N^2$ self-adjoint operators $p_{ij},1\le i,j\le N$, such that \begin{equation*} p_{ij}^2=p_{ij}=p_{ij}^*,~~\sum_{k=1}^{N}p_{ik}=1=\sum_{k=1}^{N}p_{kj},~~1\le i,j\le N, \end{equation*} and the comultiplication $\Delta$ is given by \begin{equation*} \Delta(p_{ij})=\sum_{k=1}^{N}p_{ik}\otimes p_{kj},~~1\le i,j\le N. \end{equation*} The equivalence classes of irreducible unitary representations of $S_N^+$ can be indexed by $\mathbb{N}$. By \cite[Theorem 10.10]{FKS16}, the central generating functionals $\psi$ on $\mathrm{Pol}(S^+_N)$ are given by \begin{equation*} \psi(u^{(s)}_{ij})=\frac{\delta_{ij}}{U_{2s}(\sqrt{N})}\left(-b\frac{U_{2s}'(\sqrt{N})}{2\sqrt{N}}+\int_{0}^{N}\frac{U_{2s}(\sqrt{x})-U_{2s}(\sqrt{N})}{N-x}\nu(dx)\right), \end{equation*} for $s\in \mathrm{Irr}(S_N^+)=\mathbb{N},1\le i,j\le n_s$, where $U_s$ denotes the $s$-th Chebyshev polynomial of the second kind, $b> 0$, and $\nu$ is a finite measure on $[0,N]$. Similarly, given $(b,\nu)$, the central functional $\psi$ defined as above induces a QMS $P_t^\psi=e^{t T_\psi}$ satisfying \eqref{eq:bi-invariance}, where $T_\psi=(\mathrm{id}\otimes \psi)\Delta$. Hence it satisfies $\mathrm{CGE}(0,\infty)$. \end{example} \begin{remark} Although many interesting functional inequalities like the Poincaré inequality and the modified logarithmic Sobolev inequality only follow directly from $\mathrm{GE}(K,\infty)$ for $K>0$, the gradient estimate with constant $K\leq 0$ can still be helpful in conjunction with additional assumptions to prove such functional inequalities (see \cite{DR20,BGJ20}). \end{remark} \section{Stability under tensor products and free products}\label{sec:stability} In this section we prove that the complete gradient estimate $\mathrm{CGE}(K,\infty)$ is stable under taking tensor products and free products of quantum Markov semigroups. We refer to \cite{VDN92} and \cite{BD01} for more information on free products of von Neumann algebras and to \cite{Boc91} for free products of completely positive maps. \begin{theorem}\label{thm:tensor_product} Let $(\mathcal{M}_j,\tau_j)$, $j\in \{1,\dots,n\}$, be tracial von Neumann algebras and $(P_t^j)$ a $\tau_j$-symmetric $\Gamma$-regular QMS on $\mathcal{M}_j$. If for every $j\in\{1,\dots,n\}$ the QMS $(P_t^j)$ satisfies $\mathrm{CGE}(K,\infty)$, then $\bigotimes_j P_t^j$ satisfies $\mathrm{CGE}(K,\infty)$. 
\end{theorem} \begin{proof} Let $\mathcal{H}_j$ and $\partial_j$ denote the tangent bimodule and derivation for $(P_t^j)$ and let \begin{align*} \bar \mathcal{H}_j&=\bigotimes_{k=1}^{j-1}L^2(\mathcal{M}_k,\tau_k)\otimes \mathcal{H}_j \otimes \bigotimes_{k=j+1}^n L^2(\mathcal{M}_k,\tau_k),\\ \bar\partial_j&=\bigotimes_{k=1}^{j-1}\mathrm{id}_{\mathcal{M}_k}\otimes \partial_j \otimes \bigotimes_{k=j+1}^n \mathrm{id}_{\mathcal{M}_k}. \end{align*} The tangent bimodule $\mathcal{H}$ for $P_t=\bigotimes_j P_t^j$ is a submodule of $\bigoplus_j \bar \mathcal{H}_j$ with the natural left and right action and derivation $\partial=(\bar\partial_1,\dots,\bar\partial_n)$. For $j\in\{1,\dots,n\}$, put \begin{equation*} \tilde P_t^j=\bigotimes_{k=1}^{j-1}P_t^k\otimes \mathrm{id}_{\mathcal{M}_j}\otimes\bigotimes_{k=j+1}^n P_t^k \end{equation*} and \begin{equation*} \bar P_t^j=\bigotimes_{k=1}^{j-1}\mathrm{id}_{\mathcal{M}_k} \otimes P_t^j\otimes\bigotimes_{k=j+1}^n \mathrm{id}_{\mathcal{M}_k} \end{equation*} on $\bigotimes_k \mathcal{M}_k$, so that $P_t=\bar P_t^j \tilde P_t^j=\tilde P_t^j \bar P_t^j$. Then \begin{align*} \norm{\partial P_t a}_\rho^2&=\sum_{j=1}^n \norm{\bar \partial_j P_t a}_\rho^2\\ &=\sum_{j=1}^n \norm{\bar \partial_j \bar P_t^j \tilde P_t^j a}_\rho^2\\ &\leq \sum_{j=1}^n e^{-2K t}\norm{\bar \partial_j \tilde P_t^j a}_{\bar P_t^j\rho}^2 \end{align*} by $\mathrm{CGE}(K,\infty)$ for $(P_t^j)$. Let \begin{equation*} Q_t^j=\bigotimes_{k=1}^{j-1}P_t^k \otimes \mathrm{id}_{\mathcal{H}_j}\otimes \bigotimes_{k=j+1}^n P_t^k \end{equation*} on $\bar \mathcal{H}_j$. Then $\bar \partial_j \tilde P_t^j =Q_t^j \bar\partial_j$, and conditions (ii), (iii) in Theorem \ref{thm:intertwining} follow from the Kadison--Schwarz inequality (compare with Example \ref{ex:quantum groups}). Taking into account Remark \ref{rmk:diff_calc_ind}, we get \begin{align*} \norm{\bar\partial_j \tilde P_t^j a}_\rho^2\leq \norm{\bar\partial_j a}_{\tilde P_t^j \rho}^2. \end{align*} Together with the previous estimate, we obtain \begin{equation*} \norm{\partial P_t a}_\rho^2 \leq \sum_{j=1}^n e^{-2K t}\norm{\bar \partial_j \tilde P_t^j a}_{\bar P_t^j\rho}^2 \leq\sum_{j=1}^n e^{-2Kt}\norm{\bar\partial_j a}_{P_t \rho}^2 = e^{-2Kt}\norm{\partial a}_{P_t \rho}^2. \end{equation*} So $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$. The same argument can be applied to $(P_t\otimes \mathrm{id}_\mathcal{N})$, so that we obtain $\mathrm{CGE}(K,\infty)$. \end{proof} \begin{theorem}\label{thm:free_product} For $j\in\{1,\dots,n\}$ let $(\mathcal{M}_j,\tau_j)$ be a tracial von Neumann algebra and $(P_t^j)$ a tracially symmetric $\Gamma$-regular QMS on $\mathcal{M}_j$. If for every $j\in\{1,\dots,n\}$ the QMS $(P_t^j)$ satisfies $\mathrm{CGE}(K,\infty)$, then $\ast_j P_t^j$ satisfies $\mathrm{CGE}(K,\infty)$. \end{theorem} \begin{proof} Let $\mathcal{M}=\ast_j \mathcal{M}_j$, $\tau=\ast_j \tau_j$ and $P_t=\ast_j P_t^j$. Recall that $L^2(\mathcal{M},\tau)$ is canonically identified with \begin{align*} \ast_j L^2(\mathcal{M}_j,\tau_j)=\mathbb{C} 1\oplus \bigoplus_{n\geq 1}\bigoplus_{j_1\neq\dots\neq j_n}\bigotimes_{l=1}^n L^2_0(\mathcal{M}_{j_l},\tau_{j_l}), \end{align*} where $L^2_0$ denotes the orthogonal complement of $\mathbb{C} 1$ in $L^2$. 
Then $\mathcal{H}$ can be identified with a submodule of \begin{equation*} \bigoplus_{n\geq 1}\bigoplus_{j_1\neq\dots\neq j_n}\bigoplus_{k=1}^n\left(\bigotimes_{l=1}^{k-1} L^2(\mathcal{M}_{j_l},\tau_{j_l})\otimes \mathcal{H}_{j_k}\otimes \bigotimes_{l=k+1}^n L^2(\mathcal{M}_{j_l},\tau_{j_l})\right) \end{equation*} with the natural left and right action on each direct summand; the derivation $\partial$ acts as $0$ on $\mathbb{C} 1$ and as \begin{align*} \partial(a_1\otimes\dots\otimes a_n)=(\partial_{j_1}(a_1)\otimes a_2\otimes\dots \otimes a_n,\dots,a_1\otimes a_2\otimes \dots\otimes \partial_{j_n}(a_n)) \end{align*} on the direct summand $\bigotimes_{l=1}^n L^2_0(\mathcal{M}_{j_l},\tau_{j_l})$. Since $\partial$ and $(P_t)$ restrict nicely to the direct summands of $L^2(\mathcal{M},\tau)$, the rest of the proof is similar to the one of Theorem \ref{thm:tensor_product}. \end{proof} \begin{remark} The same argument applies to free products with amalgamation if the common subalgebra over which one amalgamates is contained in the fixed-point algebra of $(P_t^j)$ for all $j\in\{1,\dots,n\}$ (compare with the results from \cite[Section 6.2]{JZ15a} for the $\Gamma_2$ condition). \end{remark} \section{Quantum Markov semigroups generated by commuting projections}\label{sec:com_proj} In this section we move beyond applications of the intertwining result Theorem \ref{thm:intertwining} and obtain complete gradient estimates for quantum Markov semigroups whose generators take special Lindblad forms. \begin{theorem}\label{thm:com_proj} Let $p_1,\dots,p_n\in \mathcal{M}$ be commuting projections. The QMS $(P_t)$ generated by \begin{align*} \mathscr{L}\colon \mathcal{M}\to\mathcal{M},\,\mathscr{L} x=\sum_{j=1}^n p_j x+x p_j-2p_j x p_j \end{align*} is $\Gamma$-regular and satisfies $\mathrm{CGE}(1,\infty)$. \end{theorem} \begin{proof} For $1\leq j\leq n$ consider the operator $\mathscr{L}_j\colon\mathcal{M}\to \mathcal{M}$ defined by \begin{equation*} \mathscr{L}_j x=p_j x+xp_j-2p_j xp_j=x-p_j xp_j-(1-p_j)x(1-p_j). \end{equation*} In particular, $\mathscr{L}_j$ is of the form $\mathscr{L}_j=I-\Phi_j$ with $I=\mathrm{id}_{\mathcal{M}}$ and the conditional expectation $\Phi_j(x)=p_j x p_j+(1-p_j)x(1-p_j)$. Thus the QMS $(P_t^j)$ generated by $\mathscr{L}_j$ is given by \begin{align*} P_t^j x=x+(e^{-t}-1)\mathscr{L}_j x=e^{-t}x+(1-e^{-t})\Phi_j(x). \end{align*} A first-order differential calculus for $(P_t)$ is given by $\mathcal{H}=\bigoplus_{j=1}^n L^2(\mathcal{M},\tau)$ as bimodules, $L=(L_j)_j,R=(R_j)_j$ with $L_j$ and $R_j$ being the usual left and right multiplications of $\mathcal{M}$ on $L^2(\mathcal{M},\tau)$ respectively, and $\partial=(\partial_j)$, where $\partial_j x=[p_j,x]$. Thus $(P_t)$ is $\Gamma$-regular. Moreover, $\partial_j P_t^j x=e^{-t}\partial_j x$ and consequently \begin{equation}\label{eq:partial_P_t} \norm{\partial_j P_t^j x}_\rho^2=e^{-2t}\norm{\partial_j x}_\rho^2. \end{equation} On the other hand, by the concavity of operator means \cite[Theorem 3.5]{KA80} we have \begin{equation}\label{eq:conc_decom_QMS} \widehat{P_t^j \rho}\geq e^{-t}\hat \rho+(1-e^{-t})\widehat{ \Phi_j(\rho)}. 
\end{equation} Since \begin{align*} &\quad\;\mathscr{L}_j ((\partial_j x)^\ast (\partial_j x))\\ &=p_j x^\ast x p_j+p_j x^\ast p_j x-p_j x^\ast p_j x-p_j x^\ast p_j x p_j\\ &\quad+p_j x^\ast x p_j +x^\ast p_j x p_j-p_j x^\ast p_j x p_j-x^\ast p_j x p_j\\ &\quad -2 p_j x^\ast x p_j-2 p_j x^\ast p_j x p_j+2p_j x^\ast p_j x p_j+2 p_j x^\ast p_j x p_j\\ &=0, \end{align*} we have $$\Phi_j((\partial_j x)^\ast (\partial_j x)) =(I-\mathscr{L}_j)\left((\partial_j x)^\ast (\partial_j x)\right) =(\partial_j x)^\ast (\partial_j x).$$ Recall that $L_j$ and $R_j$ are respectively the usual left and right multiplications of $\mathcal{M}$ on $L^2(\mathcal{M},\tau)$ and denote by $E_j$ the projection onto $\overline{\operatorname{ran} \partial_j}$ in $L^2(\mathcal{M},\tau)$. It follows that \begin{align*} \langle R_j(\Phi_j(\rho))(\partial_j x),\partial_j x\rangle_2 &=\tau(\Phi_j(\rho)(\partial_j x)^\ast (\partial_j x))\\ &=\tau(\rho \Phi_j((\partial_j x)^\ast (\partial_j x)))\\ &=\tau(\rho (\partial_j x)^\ast (\partial_j x))\\ &=\langle R_j(\rho)\partial_j x,\partial_j x\rangle_2. \end{align*} Hence $E_j R_j(\Phi_j(\rho)) E_j=E_j R_j(\rho)E_j$. The analogous identity for the left multiplication follows similarly. Note that both the left and right multiplication by $\Phi_j(x)=p_j x p_j+(1-p_j)x(1-p_j)$ leave $\overline{\operatorname{ran} \partial_j}$ invariant. In fact, for any $x,y\in \mathcal{M}$ one has \begin{align*} \Phi_j(x)\partial_j(y) &=p_j(p_jx p_j y)- (p_jx p_j y )p_j\\ &\quad\;+p_j((1-p_j)x(1-p_j)y)-((1-p_j)x(1-p_j)y)p_j\\ &=\partial_j(p_j x p_j y)+\partial_j ((1-p_j)x(1-p_j)y), \end{align*} and a similar equation holds for the right multiplication. Therefore we have \begin{align*} E_j L_j(\Phi_j(\rho))E_j&\le L_j(\Phi_j(\rho)),\\ E_j R_j(\Phi_j(\rho))E_j&\le R_j(\Phi_j(\rho)). \end{align*} This, together with the conditions (a) and (b) in the definition of operator means, implies \begin{align*} E_j\widehat{\Phi_j(\rho)} E_j &\ge E_j\Lambda(E_j L_j(\Phi_j(\rho))E_j,E_j R_j(\Phi_j(\rho)) E_j)E_j\\ &=E_j\Lambda(E_j L_j(\rho)E_j,E_j R_j(\rho)E_j)E_j\\ &\geq E_j \hat \rho E_j. \end{align*} In other words, \begin{equation*} \langle \widehat{\Phi_j(\rho)} \partial_j x,\partial_j x\rangle_2\geq \langle \hat \rho \partial_j x,\partial_j x\rangle_2. \end{equation*} Together with (\ref{eq:conc_decom_QMS}) we conclude \begin{equation*} \norm{\partial_j x}_{P_t^j \rho}^2\geq e^{-t}\norm{\partial_j x}_\rho^2+(1-e^{-t})\norm{\partial_j x}_\rho^2=\norm{\partial_j x}_\rho^2. \end{equation*} In view of \eqref{eq:partial_P_t}, we have proved \begin{equation}\label{eq:estimate_P_t^j} \norm{\partial_j P_t^j x}_\rho^2\le e^{-2t}\norm{\partial_j x}_{P_t^j \rho}^2. \end{equation} Now let us come back to our original semigroup $(P_t)$. Let \begin{equation*} Q_t^j=\prod_{k\neq j} P_t^k. \end{equation*} Since the $p_j$'s commute, so do the generators $\mathscr{L}_j$'s and the semigroups $P_t^j$'s. This means that the order in the definition of $Q_t^j$ does not matter and $P_t =P_t^j Q_t^j$ for all $j\in\{1,\dots,n\}$. From the intertwining technique and Remark \ref{rmk:diff_calc_ind} we deduce \begin{equation*} \norm{\partial_j Q_t^j x}_\rho^2\leq \norm{\partial_j x}_{Q_t^j \rho}^2. \end{equation*} Combined with the estimate \eqref{eq:estimate_P_t^j} for $(P_t^j)$, we obtain \begin{equation*} \norm{\partial P_t x}_\rho^2=\sum_{j=1}^n \norm{\partial_j P_t^j Q_t^j x}_\rho^2\leq e^{-2t}\sum_{j=1}^n \norm{\partial_j Q_t^j x}_{P_t ^j \rho}^2\leq e^{-2t} \norm{\partial x}_{P_t \rho}^2. 
\end{equation*} So $(P_t)$ satisfies $\mathrm{GE}(1,\infty)$. To prove $\mathrm{CGE}(1,\infty)$, it suffices to note that the generator of $(P_t\otimes \mathrm{id}_\mathcal{N})$ is given by \begin{equation*} (\mathscr{L}\otimes \mathrm{id}_\mathcal{N})x=\sum_{j=1}^n (p_j\otimes 1) x+x(p_j\otimes 1)-2(p_j\otimes 1) x(p_j\otimes 1) \end{equation*} and the elements $(p_j\otimes 1)$ are again commuting projections. \end{proof} \begin{remark} Since $\mathscr{L}_j^2=\mathscr{L}_j$, the spectrum of $\mathscr{L}_j$ is contained in $\{0,1\}$ with equality unless $\mathscr{L}_j=0$. Thus the gradient estimate for the individual semigroups $(P_t^j)$ is optimal (unless $v_j=0$). It should also be noted that it is better than the gradient estimate one would get from Example \ref{ex:cond_exp}. \end{remark} \begin{remark} Inspection of the proof shows that the same result holds if the generator of $(P_t)$ is of the form $\mathscr{L}=\frac 1 2\sum_{j=1}^n (x-u_j xu_j)$ with commuting self-adjoint unitaries $u_j$. \end{remark} \begin{example} Let $X=\{0,1\}^n$ and $\epsilon_j\colon X\to X$ the map that swaps the $j$-th coordinate and leaves the other coordinates fixed. Let $v_j=\sum_x \ket{\epsilon_j(x)}\bra{x}\in B(\ell^2(X))$. By the previous remark, the QMS on $B(\ell^2(X))$ with generator \begin{align*} \mathscr{L} \colon B(\ell^2(X))\to B(\ell^2(X)),\,\mathscr{L} A=\frac 1 2\sum_{j=1}^n (A-v_j A v_j) \end{align*} satisfies $\mathrm{CGE}(1,\infty)$. The restriction of this semigroup to the diagonal algebra is (up to rescaling of the time parameter, depending on the normalization) the Markov semigroup associated with the simple random walk on the discrete hypercube (see \cite[Example 5.7]{EM12}). \end{example} To apply the theorem above to group von Neumann algebras, we will use the following Lindblad form for QMS generated by cnd length functions. Recall that for a countable discrete group $G$, a $1$-cocycle is a triple $(H,\pi,b)$, where $H$ is a real Hilbert space, $\pi\colon G\to O(H)$ is an orthogonal representation, and $b\colon G\to H$ satisfies the cocycle law: $b(gh)=b(g)+\pi(g)b(h),g,h\in G.$ To any cnd function $\psi$ on a countable discrete group $G$, one can associate with a $1$-cocycle $(H,\pi,b)$ such that $\psi(gh^{-1})=\|b(g)-b(h)\|^2,g,h\in G$. See \cite[Appendix D]{BO08} for more information. \begin{lemma} Let $G$ be a countable discrete group and $\psi\colon G\to [0,\infty)$ a cnd length function. Then $\mathscr{L}\colon\lambda_g\mapsto \psi(g)\lambda_g$ generates a QMS on the group von Neumann algebra of $G$. Assume that the associated $1$-cocycle $b\colon G\to H$ takes values in a finite-dimensional real Hilbert space $H$ with an orthonormal basis $(e_1,\dots,e_n)$. Then the generator $\mathscr{L}$ is of the form \begin{align*} \mathscr{L} x=\sum_{j=1}^n v_j^2 x+x v_j^2 -2v_j x v_j, \end{align*} where $v_j$ is a linear operator on $\ell^2(G)$ given by $v_j \delta_g=\langle b(g),e_j\rangle \delta_g$. \end{lemma} \begin{proof} By definition we have \begin{align*} v_j^2 \lambda_g(\delta_h)&=v_j^2 (\delta_{gh})=\langle b(gh),e_j\rangle v_j(\delta_{gh})=\langle b(gh),e_j\rangle^2\delta_{gh},\\ \lambda_g v_j^2(\delta_h)&=\langle b(h),e_j\rangle\lambda_g v_j(\delta_h)=\langle b(h),e_j\rangle^2\lambda_g (\delta_h)=\langle b(h),e_j\rangle^2\delta_{gh},\\ v_j\lambda_g v_j(\delta_h)&=\langle b(h),e_j\rangle v_j\lambda_g(\delta_h)=\langle b(h),e_j\rangle v_j(\delta_{gh})=\langle b(h),e_j\rangle \langle b(gh),e_j\rangle\delta_{gh}. 
\end{align*} Thus \begin{equation*} \begin{split} &\sum_{j}\left(v_j^2\lambda_g+\lambda_g v_j^2-2v_j\lambda_g v_j\right)(\delta_h)\\ =&\sum_{j}\left(\langle b(gh),e_j\rangle^2+\langle b(h),e_j\rangle^2-2\langle b(h),e_j\rangle \langle b(gh),e_j\rangle\right)\delta_{gh}\\ =&\sum_{j}\langle b(gh)-b(h),e_j\rangle^2 \delta_{gh}\\ =&\|b(gh)-b(h)\|^2\delta_{gh}. \end{split} \end{equation*} This is nothing but $\mathscr{L}(\lambda_g)(\delta_h)=\psi(g)\lambda_g(\delta_h)=\psi(g)\delta_{gh}$. \end{proof} \begin{remark}\label{rmk:extension_group_vna} The elements $v_j$ are not contained in the group von Neumann algebra $L(G)$ so that Theorem \ref{thm:com_proj} is not directly applicable (even if the $v_j$ are projections). However, if $G$ is finite, then the operator \begin{equation*} \mathscr{L}\colon B(\ell^2(G))\to B(\ell^2(G)),\,\mathscr{L} x=\sum_{j=1}^n v_j^2 x+x v_j^2 -2v_j x v_j, \end{equation*} generates a tracially symmetric QMS on $B(\ell^2(G))$ and we can apply Theorem \ref{thm:com_proj} to that semigroup instead. It is an interesting open question how to treat infinite groups for which the generator has such a Lindblad form. \end{remark} \begin{example} The cyclic group $\mathbb{Z}_n=\{0,1,\dots,n-1\}$; see \cite[Example 5.9]{JZ15a}: Its group (von Neumann) algebra is spanned by $\lambda_k,0\le k\le n-1$. One can embed $\mathbb{Z}_n$ to $\mathbb{Z}_{2n}$, so let us assume that $n$ is even. The word length of $k\in\mathbb{Z}_n$ is given by $\psi(k)=\min\{k,n-k\}$. Define $b\colon\mathbb{Z}_n\to \mathbb{R}^{\frac{n}{2}}$ via\begin{equation*} b(k)=\begin{cases} 0,&k=0,\\ \sum_{j=1}^{k}e_j,&1\le k\le \frac{n}{2},\\ \sum_{j=k-\frac{n}{2}+1}^{\frac{n}{2}}e_j,&\frac{n}{2}+1\le k\le n-1, \end{cases} \end{equation*} where $(e_j)_{1\le j\le \frac{n}{2}}$ is an orthonormal basis of $\mathbb{R}^{\frac{n}{2}}$. Then the linear operators $v_j\colon\ell^2(\mathbb{Z}_n)\to \ell^2(\mathbb{Z}_n)$ given by \begin{equation*} v_j(\delta_k)=\langle b(k),e_j\rangle \delta_k,~~1\le j\le \frac{n}{2} \end{equation*} are commuting projections. Thus the QMS associated with $\psi(g)=\norm{b(g)}^2$ satisfies $\mathrm{CGE}(1,\infty)$. \end{example} \begin{example}\label{ex:symmetric_group} The symmetric group $S_n$: Let $\psi$ be the length function induced by the (non-normalized) Hamming metric, that is, $\psi(\sigma)=\#\{j : \sigma(j)\neq j\}$. Let $A_\sigma\in M_n(\mathbb{R})$ be the permutation matrix associated with $\sigma$, i.e., $A_\sigma \delta_j =\delta_{\sigma(j)}$. Then the associated $1$-cocycle is given by $H=L^2(M_n(\mathbb{R}),\frac 1 2 \mathrm{tr})$, $b(\sigma)=A_\sigma-1$, $\pi(\sigma)=A_\sigma$. The matrices $E_{jk}=\sqrt{2}\ket{j}\bra{k}$ for $j\neq k$ and $E_{jj}=-\sqrt{2}\ket{j}\bra{j}$ form an orthonormal basis of $H$. Define $v_{jk}\in B(\ell^2(S_n))$ by $v_{jk}\delta_\sigma=\sqrt{2}\langle b(\sigma),E_{jk}\rangle \delta_{\sigma}$. Then $v_{jk}$ is a projection. Moreover, \begin{align*} \mathscr{L} x=\frac 1 2\sum_{j,k}v_{jk}^2x+x v_{jk}^2-2v_{jk}x v_{jk}. \end{align*} Thus the associated QMS satisfies $\mathrm{CGE}(1/2,\infty)$. \end{example} To extend the last example to the infinite symmetric group $S_\infty$, we need the following approximation result. \begin{lemma} Let $(\mathcal{M}_n)$ be an ascending sequence of von Neumann subalgebras such that $\bigcup_n \mathcal{M}_n$ is $\sigma$-weakly dense in $\mathcal{M}$. Further let $(P_t)$ be a $\Gamma$-regular QMS on $\mathcal{M}$ and assume that $P_t(\mathcal{M}_n)\subset \mathcal{M}_n$. 
Let $(P_t^n)$ denote the restriction of $(P_t)$ to $\mathcal{M}_n$. If $(P_t^n)$ satisfies $\mathrm{GE}(K,\infty)$ for all $n\in\mathbb{N}$, then $(P_t)$ also satisfies $\mathrm{GE}(K,\infty)$. The same is true for $\mathrm{CGE}$. \end{lemma} \begin{proof} It is not hard to see that $\bigcup_n \mathcal{M}_n$ is dense in $L^2(\mathcal{M},\tau)$. Since $P_t (\mathcal{M}_n)\subset \mathcal{M}_n$ and $P_t$ maps into the domain of its $L^2$ generator $\mathscr{L}_2$, the space $V=D(\mathscr{L}_2^{1/2})\cap \left(\bigcup_n \mathcal{M}_n\right)$ is also dense in $L^2(\mathcal{M},\tau)$ and invariant under $(P_t)$. Using a standard result in semigroup theory, this implies that $V$ is a form core for $\mathscr{L}$. Thus it suffices to prove \begin{equation*} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2 \end{equation*} for $a\in V$ and $\rho\in \mathcal{M}_+$. Moreover, by Kaplansky's density theorem and the strong continuity of functional calculus, checking it for $\rho \in (\bigcup_n \mathcal{M}_n)_+$ is enough. But for $a\in D(\mathscr{L}^{1/2}_2)\cap \mathcal{M}_n$ and $\rho\in(\mathcal{M}_n)_+$, this is simply the gradient estimate for $(P_t^n)$, which holds by assumption. The argument for $\mathrm{CGE}$ is similar. \end{proof} \begin{corollary} If $G$ is the ascending union of subgroups $G_n$ and $\psi$ is a cnd length function on $G$ such that for every $n$ the QMS associated with $\psi|_{G_n}$ satisfies $\mathrm{GE}(K,\infty)$, then the QMS associated with $\psi$ satisfies $\mathrm{GE}(K,\infty)$. The same is true for $\mathrm{CGE}$. \end{corollary} \begin{example}[Infinite symmetric group] Let $S_\infty$ be the group of permutations of $\mathbb N$ that keep all but finitely many elements fixed. The QMS associated with length function induced by the non-normalized Hamming metric on $S_\infty$ satisfies $\mathrm{CGE}(\frac 1 2,\infty)$. \end{example} Recall that for a countable discrete group $G$, a \emph{F\o lner sequence} is a sequence $\{F_n\}_{n\ge 1}$ of nonempty finite subsets of $G$ such that $$\lim_{n\to\infty}\frac{|gF_n\Delta F_n|}{|F_n|}=0,$$ for every $g\in G$, where $gF=\{gh:h\in F\}$ and $A\Delta B=[A\setminus (A\cap B)]\cup [B\setminus (A\cap B)]$. The group $G$ is called \emph{amenable} if it admits a F\o lner sequence. We refer to \cite[Chapter 2.6]{BO08} for more equivalent definitions and basic properties of amenable groups. \begin{proposition} Let $G$ be an amenable group, $\psi\colon G\to[0,\infty)$ a cnd function with associated $1$-cocycle $(H,\pi,b)$. If there exists an orthonormal basis $(e_j)_{j\in J}$ of $H$ such that $\langle b(g),e_j\rangle\in \{0,1\}$ for all $g\in G$, $j\in J$, then the QMS $(P_t)$ associated with $\psi$ satisfies $\mathrm{CGE}(1,\infty)$. \end{proposition} \begin{proof} To ease notation, we will only deal with $\mathrm{GE}(1,\infty)$. The proof of complete gradient estimate is similar, embedding $L(G)\otimes \mathcal{N}$ into a suitable ultraproduct. Let $(F_n)$ be a F\o lner sequence for $G$ and $\omega\in \beta\mathbb{N}\setminus\mathbb{N}$. Endow $B(\ell^2(F_n))$ with the normalized trace $\tau_n$ and let $p_n$ denote the projection from $\ell^2(G)$ onto $\ell^2(F_n)$. Then we have a trace-preserving embedding \begin{equation*} L(G)\to \prod_\omega B(\ell^2(F_n)),\,x\mapsto (p_n x p_n)^\bullet. \end{equation*} For each $j$, let $v_j$ be the linear operator on $\ell^2(G)$ given by $v_j (\delta_g) =\langle b(g),e_j\rangle \delta_g$, and denote its restriction to $\ell^2(F_n)$ by the same symbol. 
Note that for every fixed $n\in\mathbb{N}$, there are only finitely many indices $j\in J$ such that $v_j$ is non-zero on $\ell^2(F_n)$. Let \begin{equation*} \mathcal{H}_n=\bigoplus_{j\in J}L^2(B(\ell^2(F_n)),\tau_n) \end{equation*} and \begin{equation*} \partial_n\colon B(\ell^2(F_n))\to \mathcal{H}_n,\,a\mapsto ([v_j,a])_j. \end{equation*} For $x=\sum_g x_g\lambda_g$ with $\sum_g \psi(g)\abs{x_g}^2<\infty$, we have \begin{align*} v_j p_n x p_n (\delta_g) &=1_{F_n}(g)v_j p_n x (\delta_g)\\ &=\sum_{h\in G}1_{F_n}(g)x_h v_j p_n (\delta_{hg})\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h v_j (\delta_{hg})\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h \langle b(hg),e_j\rangle \delta_{hg}, \end{align*} and \begin{align*} p_n x p_n v_j (\delta_g) &=\langle b(g),e_j\rangle p_n x p_n(\delta_g)\\ &= 1_{F_n}(g) \langle b(g),e_j\rangle p_n x(\delta_{g})\\ &=\sum_{h\in G}1_{F_n}(g)x_h \langle b(g),e_j\rangle p_n (\delta_{hg})\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h \langle b(g),e_j\rangle \delta_{hg}. \end{align*} Hence \begin{align*} [v_j,p_n x p_n](\delta_g)&=(v_j p_n x p_n-p_n x p_n v_j) (\delta_g)\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h \langle b(hg)- b(g),e_j\rangle \delta_{hg}, \end{align*} and we get \begin{align*} &\quad\,\norm{\partial_n (p_n x p_n)}_{\mathcal{H}_n}^2\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{j\in J}\langle [v_j,p_n x p_n]\delta_g,[v_j,p_n x p_n]\delta_g\rangle\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{j\in J}\sum_{h,h'\in G}\bigg(\overline{x_h} x_{h'}\overline{\langle b(hg)-b(g),e_j\rangle} \langle b(h'g)-b(g),e_j\rangle\\ &\qquad\qquad1_{F_n}(hg)1_{F_n}(h'g)\langle \delta_{h g},\delta_{h'g}\rangle\bigg)\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{j\in J}\sum_{h\in G}|x_h|^2\langle b(hg)-b(g),e_j\rangle|^2 1_{F_n}(hg)\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{h\in G}|x_h|^2\norm{b(hg)-b(g)}^2 1_{F_n}(hg)\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{h\in G}\psi(h)|x_h|^2 1_{F_n}(hg)\\ &=\sum_{h\in G}\psi(h)\abs{x_h}^2 \frac {\abs{h^{-1}F_n\cap F_n}}{\abs{F_n}}, \end{align*} which converges to $$\sum_{h\in G}\psi(h)\abs{x_h}^2=\norm{\partial x}^2_{\mathcal{H}}$$ as $n\to\omega$ by the F\o lner property of $(F_n)$ after an application of the dominated convergence theorem. Thus the tangent bimodule $\mathcal{H}$ for $(P_t)$ can be viewed as a submodule of $\prod_\omega \mathcal{H}_n$ with the obvious left and right action and $\partial=(\partial_n)^\bullet$. Let $(P_t^n)$ be the QMS on $B(\ell^2(F_n))$ generated by $\partial_n^\dagger\partial_n$. Since $\langle b(\cdot),e_j\rangle$ takes values in $\{0,1\}$, the operators $v_j$'s are projections. Clearly all the $v_j$'s commute. Hence by Theorem \ref{thm:com_proj} and Remark \ref{rmk:extension_group_vna}, $(P_t^n)$ satisfies $\mathrm{GE}(1,\infty)$. From the ultraproduct structure of $\mathcal{H}$ and $\partial$ we deduce \begin{align*} \norm{\partial P_t(x_n)^\bullet}_{(\rho_n)^\bullet}^2&=\lim_{n\to\omega}\norm{\partial_n P_t^n x_n}_{\rho_n}^2\\ &\leq \lim_{n\to\omega} e^{-2t}\norm{\partial_n x_n}_{P_t^n \rho_n}^2\\ &=e^{-2t}\norm{\partial (x_n)^\bullet}_{P_t (\rho_n)^\bullet}^2 \end{align*} for $(x_n)^\bullet\in L(G)$ and $(\rho_n)^\bullet\in L(G)_+$. In other words, $(P_t)$ satisfies $\mathrm{GE}(1,\infty)$. \end{proof} \begin{remark} The group von Neumann algebra embeds into an ultraproduct of matrix algebras if and only if the underlying group is hyperlinear, so it might be possible to extend the previous proposition beyond amenable groups. 
\end{remark} \begin{example}[Amenable groups acting on trees] Let $\mathcal{T}$ be a tree (viewed as an unoriented graph) and $G$ an amenable subgroup of $\mathrm{Aut}(\mathcal{T})$. For fixed $x_0\in \mathcal{T}$ define the length function $\psi$ on $G$ by $\psi(g)=d(x_0,gx_0)$, where $d$ is the combinatorial graph distance. As in the case of free groups, one sees that $\psi$ is conditionally negative definite and the associated $1$-cocycle can be described as follows (see \cite[Example C.2.2]{BHV08}): Let $E=\{(x,y)\mid x\sim y\}$ be the set of oriented edges of $\mathcal{T}$, and for $e=(x,y)\in E$ write $\bar e=(y,x)$. Let $H=\{\xi\in \ell^2(E)\mid \xi(\bar e)=-\xi(e)\}$ with inner product \begin{equation*} \langle \xi,\eta\rangle=\frac 1 2\sum_{e\in E}\xi(e)\eta(e). \end{equation*} The action of $G$ on $H$ is given by $\pi(g)\xi(x,y)=\xi(gx,gy)$, and the $1$-cocycle $b$ is given by \begin{equation*} b(g)(e)=\begin{cases}1&\text{if }e\text{ lies on }[x_0,gx_0],\\-1&\text{if }\bar e\text{ lies on }[x_0,gx_0],\\0&\text{otherwise},\end{cases} \end{equation*} where $[x_0,gx_0]$ denotes the unique geodesic joining $x_0$ and $gx_0$. Put $F=\{(x,y)\in E\mid d(x_0,x)<d(x_0,y)\}$. Then $(1_e-1_{\bar e})_{e\in F}$ is an orthonormal basis of $H$ and $\langle b(g),1_e-1_{\bar e}\rangle\in \{0,1\}$ for all $g\in G$ and $e\in F$. Thus the QMS associated with $\psi$ satisfies $\mathrm{CGE}(1,\infty)$. For example this is the case for $G=\mathbb{Z}$ with $\psi(k)=\abs{k}$. Here the tree is the Cayley graph of $\mathbb{Z}$ and the action is given by the left-regular representation. This QMS on $L(\mathbb{Z})$ corresponds, under the Fourier transform, to the Poisson semigroup on $L^\infty(S^1)$. \end{example} More generally, the Cayley graph of a group is a tree if and only if it is of the form $\mathbb{Z}^{\ast k}\ast \mathbb{Z}_2^{\ast l}$ for $k,l\geq 0$. This group is not amenable unless $k+l\leq 1$, but the free product structure allows us to obtain the same bound. \begin{theorem} If $G$ is a group whose Cayley graph is a tree and the cnd function $\psi$ is given by $\psi(g)=d(g,e)$, where $d$ is the combinatorial metric on the Cayley graph, then the QMS associated with $\psi$ satisfies $\mathrm{CGE}(1,\infty)$ and $\mathrm{CLSI}(2)$ and the constants in both inequalities are optimal. \end{theorem} \begin{proof} As previously mentioned, $G$ is of the form $\mathbb{Z}^{\ast k}\ast \mathbb{Z}_2^{\ast l}$ with $k,l\geq 0$. It is not hard to see that the QMS associated with $\psi$ decomposes as the free product of the QMSs associated with the word length functions on the factors. Thus it satisfies $\mathrm{CGE}(1,\infty)$ by Theorem \ref{thm:free_product} and $\mathrm{CLSI}(2)$ by Corollary \ref{cor:MLSI}. Since both the gradient estimate and the modified logarithmic Sobolev inequality imply that the generator has a spectral gap of $1$, the constants are optimal. \end{proof} \begin{example} If $G$ is a free group and $\psi$ the word length function, then the associated QMS satisfies $\mathrm{CGE}(1,\infty)$ and $\mathrm{CLSI}(2)$. Note that the usual logarithmic Sobolev inequality, which is equivalent to optimal hypercontractivity, is still open. Some partial results have been obtained in \cite{JPPR15,RX16}. Our optimal modified LSI supports the validity of optimal LSI from another perspective. \end{example}
\section{Introduction and Definitions} \begin{theorem}[Waring's Problem/Hilbert-Waring Theorem] For every integer $k \geq 2$ there exists a positive integer $g(k)$ such that every positive integer is the sum of at most $g(k)$ $k$-th powers of integers. \end{theorem} The idea behind Waring's Problem -- examining sums of powers -- can be easily extended to any ring. (For example, number fields \cite{siegel} and polynomial rings over finite fields \cite{newman}.) For an excellent and thorough exposition of the research on Waring's Problem and its generalizations, see Vaughan and Wooley \cite{wooley}. We will specifically look at sums of cubes in quaternion rings, extending the previous work on sums of squares begun in Cooke, Hamblen, and Whitfield \cite{chw}. \begin{definition} Let $LQ_{a,b}$ denote the quaternion ring \[ \{\alpha_0 + \alpha_1 {\bf i} + \alpha_2 {\bf j} + \alpha_3 {\bf k} \mid \alpha_n,a,b \in {\mathbb Z}, {\bf i}^2 = -a, {\bf j}^2 = -b, {\bf i}{\bf j}=-{\bf j}{\bf i}={\bf k}\}.\] Let $LQ_{a,b}^n$ denote the additive group generated by all $n$th powers in $LQ_{a,b}$. \end{definition} Note here that ${\bf k}^2 = -ab$, and that if $a = b = 1$, we have the {\em Lipschitz quaternions}. We then have the following analogue of Waring's Problem. \begin{conjecture} For every integer $k \geq 2$ and all positive integers $a,b$ there exists a positive integer $g_{a,b}(k)$ such that every element of $LQ_{a,b}^k$ can be written as the sum of at most $g_{a,b}(k)$ $k$-th powers of elements of $LQ_{a,b}$. \end{conjecture} In contrast with the case when $k=2$, it is much harder to determine when an element of a ring can be represented as a sum of a small number of cubes. For example, it was only recently determined \cite{booker} that 33 is the sum of 3 integer cubes. Our goal in this paper, therefore, is to determine global upper and lower bounds for $g_{a,b}(3)$, the number of cubes necessary to represent all elements of $LQ_{a,b}^3$. We have the following main result. \begin{theorem} \label{mainthm} Let $a,b$ be positive integers. Then \begin{itemize} \item if $3 \nmid a$ or $3 \nmid b$, then $3 \leq g_{a,b}(3) \leq 6$, and \item if $3 \mid a$ and $3 \mid b$, then $4 \leq g_{a,b}(3) \leq 5$. \end{itemize} \end{theorem} The upper bounds of Theorem \ref{mainthm} are given in Section 2, following an algorithmic approach based on cubic algebraic identities. The lower bounds are given in Section 3. It seems quite possible that the lower bounds in Theorem \ref{mainthm} are the actual values for $g_{a,b}(3)$. A number of individual quaternions were tested in SAGE, and all were found to be expressible as a sum of the minimum number of cubes. Additionally, the identities of Equations (\ref{cube1}) and (\ref{cube2}), while very useful for our upper bound proof, are by no means optimal. A search for similar identities involving quaternions was unsuccessful, due to the complications introduced by non-commutativity. Lastly, it should be noted that Propositions \ref{n3abup} and \ref{3abup} were both initially proven by checking individual residue classes in SAGE. While we were able to cover all possible cases, more theoretical versions of the proofs are provided here. \section{$LQ_{a,b}^3$ and Upper Bounds} Recall that $LQ_{a,b}^3$ is the additive subgroup generated by all cubes in $LQ_{a,b}$. Our first goal is to determine the shape of elements in $LQ_{a,b}^3$; we therefore first give the general forms of cubes in $LQ_{a,b}$.
If $\alpha = \alpha_0 + \alpha_1{\bf i}+\alpha_2{\bf j}+\alpha_3{\bf k}$, we have \begin{align} \alpha^3 & = \alpha_0^3 - 3a\alpha_0\alpha_1^2 - 3b\alpha_0\alpha_2^2 - 3ab\alpha_0\alpha_3^2 \label{cubeeq} \\ & \quad + (3\alpha_0^2\alpha_1 - a\alpha_1^3 - b\alpha_1\alpha_2^2 - ab\alpha_1\alpha_3^2) {\bf i} \notag \\ & \quad + (3\alpha_0^2\alpha_2 - a\alpha_1^2\alpha_2 - b\alpha_2^3 - ab\alpha_2\alpha_3^2) {\bf j} \notag \\ & \quad + (3\alpha_0^2\alpha_3 - a\alpha_1^2\alpha_3 - b\alpha_2^2\alpha_3 - ab\alpha_3^3) {\bf k} \notag \end{align} We can simplify this equation by noting common factors in each of the coefficients on the right side of Equation (\ref{cubeeq}). For $\alpha = \alpha_0 + \alpha_1{\bf i} + \alpha_2{\bf j} + \alpha_3{\bf k}$, let \begin{equation} \label{Pal} P_{\alpha} = a\alpha_1^2 + b\alpha_2^2 + ab\alpha_3^2. \end{equation} We then have \begin{equation} \label{cubeeq2} \alpha^3 = (\alpha_0^2 - 3 P_{\alpha}) \alpha_0 + (3\alpha_0^2 - P_{\alpha} ) \left(\alpha_1 {\bf i} + \alpha_2 {\bf j} + \alpha_3 {\bf k} \right) \end{equation} Additionally, we will make frequent use of the following two identities: \begin{align} 6z &=(z+1)^3+(z-1)^3+(-z)^3+(-z)^3 \label{cube1}\\ 6z+3&=(-z-5)^3+(z+1)^3+(-2z-6)^3+(2z+7)^3 \label{cube2} \end{align} These two identities, and the proofs below, are inspired by Cohn's results \cite{cohn1, cohn2} on sums of cubes in quadratic fields: $g_{{\mathbb Z}[i]}(3) = 4$ and $g_{{\mathbb Z}[\sqrt{d}]}(3) \leq 5$. We start by treating the case when $3 \nmid a$ or $3 \nmid b$. \begin{prop} \label{n3ab3} If $3 \nmid a$ or $3 \nmid b$, then $LQ_{a,b}^3 = LQ_{a,b}$. \end{prop} Note that in the Lipschitz quaternions ($a=b=1$), this follows from Theorem 1.1 of \cite{pollack}. \begin{prop} \label{n3abup} If $3 \nmid a$ or $3 \nmid b$, then every element of $LQ_{a,b}^3$ can be written as the sum of at most 6 cubes of elements in $LQ_{a,b}$. \end{prop} We will prove that every element of $LQ_{a,b}$ can be written as the sum of at most 6 cubes, which yields both propositions. \begin{proof} First, note that by Equations (\ref{cube1}) and (\ref{cube2}), we immediately have that every element in $LQ_{a,b}$ that is a multiple of 6, or 3 more than a multiple of 6, can be written as the sum of 4 cubes. It then suffices to restrict our attention to the resulting residue classes, and we need only consider the residue of $a,b$ mod 6. We will break the problem into two cases, and in each case will need two supporting Lemmas. Our two cases are as follows: \begin{itemize} \item \underline{Case 1:} Suppose $3 \nmid ab$, and at least one of $a$ or $b$ is congruent to $2 \bmod 3$, and \item \underline{Case 2:} All other cases: either $a \equiv b \equiv 1 \bmod 3$, or exactly one of $a$ and $b$ is divisible by 3. \end{itemize} For the following Lemmas, we let $\text{Re}(x)$ be the real part of $x$ and $\text{Im}(x)$ be the imaginary or pure part of $x$. That is, if $x= x_0 + x_1 {\bf i} + x_2 {\bf j} + x_3 {\bf k}$, then $\text{Re}(x) = x_0$ and $\text{Im}(x) = x_1 {\bf i} + x_2 {\bf j} + x_3 {\bf k}$. Additionally, we write $\text{Im}(x) \equiv \text{Im}(y) \bmod 6$ if 6 divides each of the coefficients of $\text{Im}(x-y)$. Lastly, for $n \in {\mathbb Z}$, we write $\overline{n}$ for the least non-negative residue of $n \bmod 6$; that is, $\overline{n} \equiv n \bmod 6$ and $\overline{n} \in \{0,1, \dots, 5\}$.
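As a quick computational sanity check of the displayed coefficients (purely illustrative, and not part of any proof), the following Python snippet hard-codes the multiplication rule of $LQ_{a,b}$ from the relations ${\bf i}^2=-a$, ${\bf j}^2=-b$, ${\bf i}{\bf j}=-{\bf j}{\bf i}={\bf k}$, and verifies Equation (\ref{cubeeq2}) together with the identities (\ref{cube1}) and (\ref{cube2}) on randomly chosen quaternions. Note that both identities are polynomial identities in a single element $z$ with integer coefficients, so they hold verbatim in the non-commutative ring.
\begin{verbatim}
import random

def mult(x, y, a, b):
    # product in LQ_{a,b}: i^2 = -a, j^2 = -b, ij = -ji = k, k^2 = -ab
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return (x0*y0 - a*x1*y1 - b*x2*y2 - a*b*x3*y3,
            x0*y1 + x1*y0 + b*(x2*y3 - x3*y2),
            x0*y2 + x2*y0 + a*(x3*y1 - x1*y3),
            x0*y3 + x3*y0 + x1*y2 - x2*y1)

def cube(x, a, b):
    return mult(x, mult(x, x, a, b), a, b)

def add(*qs):
    return tuple(sum(c) for c in zip(*qs))

def scal(c, x):              # integer multiple c*x
    return tuple(c*t for t in x)

def shift(x, n):             # x + n for an integer n
    return (x[0] + n,) + tuple(x[1:])

random.seed(0)
for _ in range(1000):
    a, b = random.randint(1, 9), random.randint(1, 9)
    al = tuple(random.randint(-9, 9) for _ in range(4))
    a0 = al[0]
    P = a*al[1]**2 + b*al[2]**2 + a*b*al[3]**2
    # Equation (cubeeq2)
    assert cube(al, a, b) == ((a0**2 - 3*P)*a0,) + tuple((3*a0**2 - P)*t for t in al[1:])
    # Identities (cube1) and (cube2), with z an arbitrary quaternion
    z = tuple(random.randint(-9, 9) for _ in range(4))
    assert scal(6, z) == add(cube(shift(z, 1), a, b), cube(shift(z, -1), a, b),
                             cube(scal(-1, z), a, b), cube(scal(-1, z), a, b))
    assert shift(scal(6, z), 3) == add(cube(shift(scal(-1, z), -5), a, b),
                                       cube(shift(z, 1), a, b),
                                       cube(shift(scal(-2, z), -6), a, b),
                                       cube(shift(scal(2, z), 7), a, b))
\end{verbatim}
Such random testing is of course no substitute for the arguments below; it merely guards against transcription errors in the displayed formulas.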
\begin{lemma}\label{cubeset1} Suppose we are in Case 1: $3 \nmid ab$, and at least one of $a$ or $b$ is congruent to $2 \bmod 3$, and let \[S = \{ \alpha \in LQ_{a,b} \mid 2 \nmid \alpha_0 \text{ and } 3 \nmid \alpha_1\alpha_2\alpha_3\}.\] Then, for all $\alpha \in S$, there exists $x \in LQ_{a,b}$ such that $\text{Re}(x^3) \equiv \text{Re}(\alpha) \bmod 3$ and $\text{Im}(x^3) \equiv \text{Im}(\alpha) \bmod 6$. \end{lemma} Note that as an immediate corollary of Lemma \ref{cubeset1} and Equations (\ref{cube1}) and (\ref{cube2}), every element of $S$ can be written as the sum of at most 5 cubes. \begin{proof} Take $\alpha = \alpha_0 + \alpha_1 {\bf i} + \alpha_2 {\bf j} + \alpha_3 {\bf k} \in S$. Then let $x = x_0 + x_1 {\bf i} + x_2 {\bf j} + x_3 {\bf k}$, where $x_{\ell} = \overline{\alpha_{\ell}}$ for $\ell \in \{1,2,3\}$ and $x_0 = \overline{\alpha_0} - 3\delta_{\alpha}$, where \[\delta_{\alpha} = \begin{cases} 1, & \text{if } P_{\alpha} \text{ is odd;} \\ 0, & \text{otherwise.} \end{cases} \] By Equation (\ref{cubeeq2}), it suffices to show that $x_0^3 - 3x_0P_x \equiv \alpha_0 \bmod 3$, and $x_{\ell}(3x_0^2 - P_x) \equiv \alpha_{\ell} \bmod 6$ for $\ell \in \{1,2,3\}$. We then have \begin{equation} \label{realcube} x_0^3 - 3x_0P_x = (\overline{\alpha_0} - 3\delta_{\alpha})^3 - 3(\overline{\alpha_0} - 3\delta_{\alpha})P_x \equiv \alpha_0^3 \equiv \alpha_0 \bmod 3, \end{equation} so $\text{Re}(x^3) \equiv \text{Re}(\alpha) \bmod 3$. Next, note that since $\alpha \in S$, we have $\alpha_1^2 \equiv \alpha_2^2 \equiv \alpha_3^2 \equiv 1 \bmod 3$, so \begin{align*} P_{\alpha} & \equiv a\cdot 1 + b \cdot 1 + ab \cdot 1 \bmod 3 \\ & \equiv (a + 1)(b+1) - 1\bmod 3 \end{align*} Since at least one of $a$ or $b$ is congruent to $2 \bmod 3$, we must have that $P_{\alpha} \equiv 2 \bmod 3$. Therefore if $\delta_{\alpha} = 1$, then $P_{\alpha} \equiv 5 \bmod 6$, and if $\delta_{\alpha} = 0$, then $P_{\alpha} \equiv 2 \bmod 6$; in either case, $3\delta_{\alpha} - P_{\alpha} \equiv -2 \bmod 6$. Then note that since $P_x \equiv P_{\alpha} \bmod 6$ (since by definition $\text{Im}(x) \equiv \text{Im}(\alpha) \bmod 6$) and $\alpha_0$ is odd, we have \begin{align*} 3x_0^2 - P_x = 3(\overline{\alpha_0} - 3\delta_{\alpha})^2 - P_x & \equiv 3\alpha_0^2 + 3\delta_\alpha - P_{\alpha} \bmod 6\\ & \equiv 3 - 2 = 1\bmod 6 \end{align*} Therefore $x_{\ell}(3x_0^2 - P_x) \equiv \alpha_{\ell} \bmod 6$ for $\ell \in \{1,2,3\}$, so $\text{Im}(x^3) \equiv \text{Im}(\alpha) \bmod 6$, which completes the proof. \end{proof} \begin{lemma}\label{sumset1} Suppose $3 \nmid ab$, and at least one of $a$ or $b$ is congruent to $2 \bmod 3$, and let $S$ be defined as in Lemma \ref{cubeset1}. Then, for all $\alpha \in LQ_{a,b}$, there exist $\alpha',\alpha'' \in S$ such that $\text{Re}(\alpha' + \alpha'') \equiv \text{Re}(\alpha) \bmod 3$ and $\text{Im}(\alpha' + \alpha'') \equiv \text{Im}(\alpha) \bmod 6$. \end{lemma} \begin{proof} Notice that elements of $S$ can have real coefficient equivalent to 1, 3, or $5 \bmod 6$, and can have imaginary coefficients equivalent to 1, 2, 4, or $5 \bmod 6$. The first conclusion then follows since the real coefficients cover all residue classes mod 3, and the second follows from the fact that in ${\mathbb Z}_6$, $\{1,2,4,5\} + \{1,2,4,5\} = {\mathbb Z}_6$.
\end{proof} As a consequence of Lemmas \ref{cubeset1} and \ref{sumset1}, for all $\alpha \in LQ_{a,b}$, there exist $x_1, x_2 \in LQ_{a,b}$ such that $\alpha - x_1^3 - x_2^3$ is either a multiple of 6, or 3 more than a multiple of 6; Equations (\ref{cube1}) and (\ref{cube2}) then imply that under the hypotheses of Case 1, every element of $LQ_{a,b}$ can be written as the sum of at most 6 cubes. We have therefore proven Propositions \ref{n3ab3} and \ref{n3abup} in the case when $3 \nmid ab$, and at least one of $a$ or $b$ is congruent to $2 \bmod 3$. We then move to Case 2, where we suppose that we are in one of the following cases: \begin{itemize} \item \underline{Case 2a}: $a \equiv b \equiv 1 \bmod 3$. \item \underline{Case 2b}: Exactly one of $a$ and $b$ is divisible by 3, and the other is $2 \bmod 3$. Without loss of generality, in this case we assume $a \equiv 2 \bmod 3$ and $b \equiv 0 \bmod 3$. \item \underline{Case 2c}: Exactly one of $a$ and $b$ is divisible by 3, and the other is $1 \bmod 3$. Without loss of generality, in this case we assume $a \equiv 1 \bmod 3$ and $b \equiv 0 \bmod 3$. \end{itemize} \begin{lemma}\label{cubeset2} Given $a$ and $b$ satisfying one of the cases above, let \[T_2 = \{ \alpha \in LQ_{a,b} \mid 2 \nmid \alpha_0 \text{ and } 3 \nmid \alpha_1\alpha_3 \text{ and } 3 \mid \alpha_2\},\] \[T_3 = \{ \alpha \in LQ_{a,b} \mid 2 \nmid \alpha_0 \text{ and } 3 \nmid \alpha_1\alpha_2 \text{ and } 3 \mid \alpha_3\},\] and $T = T_2 \cup T_3$. Then, for all $\alpha \in T$, there exists $x \in LQ_{a,b}$ such that $\text{Re}(x^3) \equiv \text{Re}(\alpha) \bmod 3$ and $\text{Im}(x^3) \equiv \text{Im}(\alpha) \bmod 6$. \end{lemma} \begin{proof} The proofs in each subcase are very similar to that of Lemma \ref{cubeset1}; we will only highlight where the definitions and calculations differ. Take $\alpha = \alpha_0 + \alpha_1 {\bf i} + \alpha_2 {\bf j} + \alpha_3 {\bf k} \in T$, let $x_0 = \overline{\alpha_0} - 3\delta_{\alpha}$ as defined in Lemma \ref{cubeset1}, and let \[x_{\ell} = \begin{cases} \overline{\alpha_{\ell}}, & \text{in Cases 2a and 2b; }\\ 6 - \overline{\alpha_{\ell}} ,& \text{in Case 2c}. \end{cases} \] Immediately by Equation (\ref{realcube}) in Lemma \ref{cubeset1}, we have that $\text{Re}(x^3) \equiv \text{Re}(\alpha) = \alpha_0 \bmod 3$. Then, for $\alpha \in T_2$, we have $\alpha_1^2 \equiv \alpha_3^2 \equiv 1 \bmod 3$ and $\alpha_2^2 \equiv 0 \bmod 3$, so from Equation (\ref{Pal}): \[ P_{\alpha} \equiv \begin{cases} 2 \quad \equiv 1\cdot 1 + 1 \cdot 0 + 1 \cdot 1\bmod 3, & \text{in Case 2a;} \\ 2 \quad \equiv 2\cdot 1 + 0 \cdot 0 + 0 \cdot 1 \bmod 3, & \text{in Case 2b;} \\ 1 \quad \equiv 1\cdot 1 + 0 \cdot 0 + 0 \cdot 1 \bmod 3, & \text{in Case 2c}. \end{cases} \] Note that in all of these Cases, $b \equiv ab \bmod 3$, so for $\alpha \in T_3$, the values of $P_{\alpha}$ mod 3 are the same as for $\alpha \in T_2$. Therefore, in Cases 2a and 2b, if $\delta_{\alpha} = 1$, then $P_{\alpha} \equiv 5 \bmod 6$, and if $\delta_{\alpha} = 0$, then $P_{\alpha} \equiv 2 \bmod 6$; either way, $3\delta_{\alpha} - P_{\alpha} \equiv -2 \bmod 6$. Since $P_x \equiv P_{\alpha} \bmod 6$ and $\alpha_0$ is odd, we have \begin{align*} 3x_0^2 - P_x = 3(\overline{\alpha_0} - 3\delta_{\alpha})^2 - P_x & \equiv 3\alpha_0^2 + 3\delta_\alpha - P_{\alpha} \bmod 6\\ & \equiv 3 - 2 = 1\bmod 6 \end{align*} Therefore $\text{Im}(x^3) \equiv \text{Im}(\alpha) \bmod 6$, which completes the proof for Cases 2a and 2b.
In Case 2c, if $\delta_{\alpha} = 1$, then $P_{\alpha} \equiv 1 \bmod 6$, and if $\delta_{\alpha} = 0$, then $P_{\alpha} \equiv 4 \bmod 6$; either way, $3\delta_{\alpha} - P_{\alpha} \equiv 2 \bmod 6$. The same calculation as above then yields \[ 3x_0^2 - P_x \equiv 3 + 2 \equiv -1\bmod 6\] But, as we have defined $x_{\ell} = 6 - \overline{\alpha_{\ell}}$ in this case, we have \[x_{\ell} (3x_0^2 - P_x) \equiv (6 - \overline{\alpha_{\ell}})(-1) \equiv \alpha_{\ell} \bmod 6\] for $\ell \in \{1, 2, 3\}$, which implies $\text{Im}(x^3) \equiv \text{Im}(\alpha) \bmod 6$, completing the proof for Case 2c. \end{proof} \begin{lemma}\label{sumset2} Given $a$ and $b$ satisfying Case 2, let $T$ be defined as in Lemma \ref{cubeset2}. Then, for all $\alpha \in LQ_{a,b}$, there exist $\alpha',\alpha'' \in T$ such that $\text{Re}(\alpha' + \alpha'') \equiv \text{Re}(\alpha) \bmod 3$ and $\text{Im}(\alpha' + \alpha'') \equiv \text{Im}(\alpha) \bmod 6$. \end{lemma} \begin{proof} In light of Lemma \ref{sumset1}, if $3 \mid \alpha_2$ or $3 \mid \alpha_3$, we can choose $\alpha'$ and $\alpha''$ both to be in $T_2$ or $T_3$, respectively. If $3 \nmid \alpha_2\alpha_3$, then there exist $\alpha' \in T_2$ and $\alpha'' \in T_3$ satisfying the conclusions. \end{proof} This completes the proofs of Propositions \ref{n3ab3} and \ref{n3abup}: as in Case 1, Lemmas \ref{cubeset2} and \ref{sumset2} imply that in Case 2, every element of $LQ_{a,b}$ can be written as the sum of at most 6 cubes. \end{proof} If $3 \mid a$ and $3 \mid b$, there is slightly more work to do, as not all elements of the ring can be written as the sum of cubes. \begin{prop} If $3 \mid a$ and $3 \mid b$, then \[LQ_{a,b}^3=\{ \alpha_0 + 3\alpha_1 {\bf i} + 3\alpha_2 {\bf j} + 3\alpha_3 {\bf k} \mid {\bf i}^2 = -a, {\bf j}^2 = -b, {\bf i}{\bf j} = -{\bf j} {\bf i} = {\bf k}, \alpha_n \in {\mathbb Z} \}.\] \end{prop} \begin{proof} Note that if $3 \mid a$ and $3 \mid b$, then for all $\alpha \in LQ_{a,b}$, we have $3 \mid P_{\alpha}$ from Equation (\ref{Pal}). Then by Equation (\ref{cubeeq2}), we have that the imaginary coefficients (the coefficients of ${\bf i}, {\bf j}, {\bf k}$) are each divisible by 3, showing that the form above is necessary for all elements of $LQ_{a,b}^3$. The sufficiency of the above form is then the result of the proof of Proposition \ref{3abup}, which shows that every element of this form can be written as the sum of at most 5 cubes. \end{proof} \begin{prop} \label{3abup} If $3 \mid a$ and $3 \mid b$, then every element of $LQ_{a,b}^3$ can be written as the sum of at most 5 cubes of elements in $LQ_{a,b}$. \end{prop} \begin{proof} In light of Equations (\ref{cube1}) and (\ref{cube2}), it suffices to show that for all elements $\alpha \in LQ_{a,b}^3$, there exists $x \in LQ_{a,b}$ such that $\text{Re}(x^3) \equiv \text{Re}(\alpha) \bmod 3$ and $\text{Im}(x^3) \equiv \text{Im}(\alpha) \bmod 6$. Take $\alpha = \alpha_0 + \alpha_1 {\bf i} + \alpha_2 {\bf j} + \alpha_3 {\bf k} \in LQ_{a,b}^3$. Then let $x_{\ell} = \overline{\alpha_{\ell}}$ for $\ell \in \{1,2,3\}$ and $x_0 = \overline{\alpha_0} - 3\delta_{\alpha}$, where \[\delta_{\alpha} = \begin{cases} 1, & \text{if } P_{\alpha} \equiv \alpha_0 \bmod 2; \\ 0, & \text{otherwise.} \end{cases} \] We immediately get $\text{Re}(x^3) \equiv \text{Re}(\alpha) \bmod 3$ by the calculations in Lemma \ref{cubeset1}. For $\alpha \in LQ_{a,b}^3$, since $3 \mid a$ and $3 \mid b$, we have $P_{\alpha} \equiv 0 \bmod 3$.
Therefore if $\delta_{\alpha} = 1$, then $\alpha_0$ is odd and $P_{\alpha} \equiv 3 \bmod 6$, or $\alpha_0$ is even and $P_{\alpha} \equiv 0 \bmod 6$. If $\delta_{\alpha} = 0$, then $\alpha_0$ is odd and $P_{\alpha} \equiv 0 \bmod 6$, or $\alpha_0$ is even and $P_{\alpha} \equiv 3 \bmod 6$. Specifically, an {\em odd} number of $\alpha_0$, $\delta_{\alpha}$, and $P_{\alpha}$ will be odd. We then have \begin{align*} 3x_0^2 - P_x = 3(\overline{\alpha_0} - 3\delta_{\alpha})^2 - P_x & \equiv 3\alpha_0^2 + 3\delta_\alpha - P_{\alpha} \bmod 6 \\ & \equiv 3 \bmod 6 \end{align*} Then, since $\alpha \in LQ_{a,b}^3$, $\alpha_{\ell}$ is a multiple of 3 for $\ell \in \{1,2,3\}$, so $3\alpha_{\ell} \equiv \alpha_{\ell} \bmod 6$. But these are now exactly the mod 6 imaginary coefficients of $x^3$. Therefore $\text{Im}(x^3) \equiv \text{Im}(\alpha) \bmod 6$, which completes the proof. \end{proof} \section{Lower Bounds} We now prove the lower bounds of Theorem \ref{mainthm} via example. \begin{prop} \label{n3lb} If $3 \nmid a$ or $3 \nmid b$, then $3+3{\bf i}$ cannot be written as the sum of 2 cubes in $LQ_{a,b}$. \end{prop} \begin{proof} Suppose $x, y \in LQ_{a,b}$ are such that \begin{equation} 3+ 3{\bf i} = x^3 + y^3, \label{not3eq} \end{equation} and write $x = x_0 + x_1{\bf i} + x_2{\bf j} + x_3{\bf k}$, $y = y_0 + y_1{\bf i} + y_2{\bf j} + y_3{\bf k}$ with $x_n, y_n \in {\mathbb Z}$. We then have the following four equations from the coefficients of Equation (\ref{not3eq}): \begin{align} x_0^3-3x_0P_x+y_0^3-3y_0P_y &=3 &(\text{real coefficient}) \label{re1} \\ 3x_0^2x_1-x_1P_x+3y_0^2y_1-y_1P_y &=3 & ({\bf i} \text{ coefficient}) \label{ico1}\\ 3x_0^2x_2-x_2P_x +3y_0^2y_2-y_2P_y &=0 & ({\bf j} \text{ coefficient}) \label{jco1}\\ 3x_0^2x_3-x_3P_x+3y_0^2y_3-y_3P_y &=0 & ({\bf k} \text{ coefficient}) \label{kco1} \end{align} From Equation (\ref{re1}), we get $x_0^3 + y_0^3 \equiv 0 \bmod 3$; as the only cubes mod 9 are 0, 1, and 8, we immediately get $x_0^3 + y_0^3 \equiv 0 \bmod 9$. Since $x_0^3 \equiv x_0 \bmod 3$, we also get \begin{equation} \label{xminy} x_0 + y_0 \equiv 0 \bmod 3. \end{equation} We can then examine Equation (\ref{re1}) mod 9 and simplify, using Equation (\ref{xminy}): \begin{align} x_0^3-3x_0P_x+y_0^3-3y_0P_y & \equiv 3 \bmod 9 \notag\\ -3x_0P_x-3y_0P_y & \equiv 3 \bmod 9 \notag\\ -x_0P_x-y_0P_y & \equiv 1 \bmod 3 \notag\\ y_0P_x-y_0P_y & \equiv 1 \bmod 3 \notag\\ y_0(P_x-P_y) & \equiv 1 \bmod 3 \label{n31} \end{align} First assume (without loss of generality) that $P_x \equiv 0 \bmod 3$. Then $P_y \not\equiv 0 \bmod 3$, and Equations (\ref{ico1}), (\ref{jco1}), (\ref{kco1}) become \begin{align*} -y_1P_y & \equiv 0 \bmod 3 \\ -y_2P_y & \equiv 0 \bmod 3 \\ -y_3P_y & \equiv 0 \bmod 3 \end{align*} Therefore $y_1 \equiv y_2 \equiv y_3 \equiv 0 \bmod 3$, which implies that $P_y \equiv 0 \bmod 3$, a contradiction. Therefore $P_x, P_y \not\equiv 0 \bmod 3$. We additionally have from Equation (\ref{n31}) that $P_x \not\equiv P_y \bmod 3$, so we may assume without loss of generality that $P_x \equiv 1 \bmod 3$ and $P_y \equiv 2 \bmod 3$. From Equations (\ref{ico1}), (\ref{jco1}), and (\ref{kco1}) we have $x_n \equiv y_n \bmod 3$ for $n \in \{1,2,3\}$, which implies $x_n^2 \equiv y_n^2 \bmod 3$. We then have \begin{align*} 1 \equiv P_y - P_x & \equiv (ay_1^2 + by_2^2 + aby_3^2) - (ax_1^2 + bx_2^2 + abx_3^2) \bmod 3\\ & \equiv a(y_1^2 - x_1^2) + b(y_2^2 - x_2^2) + ab(y_3^2 - x_3^2) \bmod 3 \\ & \equiv 0 \bmod 3 \end{align*} We therefore have a contradiction in this case as well, which completes the proof.
\end{proof} \begin{prop} \label{3lb} If $3 \mid a$ and $3 \mid b$, then $4$ cannot be written as the sum of 3 cubes in $LQ_{a,b}$. \end{prop} \begin{proof} Suppose $x, y, z \in LQ_{a,b}$ are such that $4= x^3 + y^3 + z^3$. Examining the real coefficients of this equation, we get the following (similar to Equation (\ref{re1})): \begin{equation} \label{re2} x_0^3-3x_0P_x+y_0^3-3y_0P_y + z_0^3-3z_0P_z = 4 \end{equation} Note that since $3\mid a$ and $3\mid b$, we have $P_x \equiv P_y \equiv P_z \equiv 0 \bmod 3$; therefore Equation (\ref{re2}) becomes \[ x_0^3+y_0^3 + z_0^3 \equiv 4 \bmod 9, \] which has no integer solutions, as the only cubes mod 9 are 0, 1, and 8. \end{proof} Propositions \ref{n3abup}, \ref{3abup}, \ref{n3lb}, and \ref{3lb} then complete the proof of Theorem \ref{mainthm}. \section{Acknowledgments} The authors would like to thank the McDaniel College Student-Faculty Summer Research Fund and Research and Creativity Fund for supporting their research. \bibliographystyle{plainnat}
\title{Ancillary Service to the Grid\\ Using Intelligent Deferrable Loads } \author{Sean Meyn, Prabir Barooah, Ana Bu\v{s}i\'{c}, Yue Chen, and Jordan Ehre \thanks{This research is supported by the NSF grant CPS-0931416, the Department of Energy Awards DE-OE0000097 \&\ DE-SC0003879, and the French National Research Agency grant ANR-12-MONU-0019. We acknowledge the help of Mark Rosenberg who offered many suggestions to improve the manuscript, and caught several typos in earlier drafts.} \thanks{S.M., Y.C.\ and J.E.\ are with the Dept.\ ECE and P.B. is with the Dept.\ MAE at the University of Florida, Gainesville. A.B.\ is with INRIA and the Computer Science Dept. of \'Ecole Normale Sup\'erieure, Paris, France. } } \begin{document} \maketitle \thispagestyle{empty} \begin{abstract} Renewable energy sources such as wind and solar power have a high degree of unpredictability and time-variation, which makes balancing demand and supply challenging. One possible way to address this challenge is to harness the inherent flexibility in demand of many types of loads. Introduced in this paper is a technique for decentralized control for automated demand response that can be used by grid operators as ancillary service for maintaining demand-supply balance. A Markovian Decision Process (MDP) model is introduced for an individual load. A randomized control architecture is proposed, motivated by the need for decentralized decision making, and the need to avoid synchronization that can lead to large and detrimental spikes in demand. An aggregate model for a large number of loads is then developed by examining the mean field limit. A key innovation is an LTI-system approximation of the aggregate nonlinear model, with a scalar signal as the input and a measure of the aggregate demand as the output. This makes the approximation particularly convenient for control design at the grid level.
The second half of the paper contains a detailed application of these results to a network of residential pools. Simulations are provided to illustrate the accuracy of the approximations and effectiveness of the proposed control approach. \end{abstract} \clearpage \section{Introduction} \label{s:intro} Renewable energy penetration is rising rapidly throughout the world, and bringing with it high volatility in energy supply. Resources are needed to compensate for these large fluctuations in power. The federal energy regulatory commission (FERC) in conjunction with generation and utility companies are struggling to find resources, and finding ways to properly compensate for ancillary services that are badly needed by each \textit{balancing authority} (BA) in the U.S.. FERC orders 755 and 745 are examples of their attempts to provide incentives. \notes{the utilities are often victims in all of this -- I hope BA is o.k. -spm\\ Expand on FERC. Also see refs from Toulouse (in commented text) } This paper concerns decentralized control of a large number of electric loads in a power grid. A particular load has a service it is intended to provide -- clean dishes, hot water, or a clean pool. It is assumed that each load has some flexibility in energy consumption. This flexibility is harnessed to provide ancillary services to the power grid to help maintain stability, and to help offset any volatility in the grid because of line or generation outage, or because of the volatile nature of renewable energy. This is commonly called ``demand response'', but the meaning is slightly different here: The tuning of energy consumption is automated, and we assume that the consumers do not suffer any degradation in the service offered by the loads. We argue that most of the load in the U.S. is highly flexible, and this flexibility can be harnessed to provide ancillary service without central control, and without significant impact on the needs of consumers or industry. A defining characteristic of ancillary service is that on average it is a \emph{zero-energy} service, so that the desired power consumption level to be tracked is zero on average. This makes use of deferrable loads particularly attractive as sources of ancillary service. Many utilities already employ demand response programs that use deferrable loads to reduce peak demand and manage emergency situations. Florida Power and Light (FPL), for example, has 780,000 customers enrolled in their \textit{OnCall Savings Program} in which residential air conditioners, water heaters, and pool pumps systems are automatically controlled when needed \cite{FPLsaving}. Today, FPL uses this service only 3--4 times per year \cite{FPLsaving}. While a valuable service to the grid, there is tremendous additional potential from these sources that today is virtually untapped. Nearly all of America's ISOs/RTOs also allow for demand side resources to participate in their regulation and spinning reserve markets, but as of the summer of 2013, only PJM allows aggregation (with approval) \cite{maccapcalkil12}. Growth of these resources in these wholesale markets has helped lower costs per megawatt-hour from 2009 to 2011 \cite{maccapcalkil12}. Still, markets for regulation and spinning reserves from traditional generation sources continue to grow because of increasing dependency on renewable generation. \Fig{fig:BPA} shows the regulation signal for a typical week within the Bonneville Power Authority (BPA)~\cite{BPA}. 
Its role is analogous to the control signal in the feedback loop in a flight control system. Just like in an aviation control system, the variability seen in this figure is in part a consequence of variability of wind generation in this region. \begin{figure}[h] \vspace{-.15cm} \Ebox{.75}{BPAregulationAndLowPassWeb.pdf} \vspace{-.25cm} \caption{\textit{BPA Balancing Reserves Deployed} --- Ancillary service needs at the BPA during one week in 2013. The maximum is approximately one-tenth of maximum load in this region. } \label{fig:BPA} \vspace{-08pt} \end{figure} We propose to break up a regulation signal into frequency bands for the purposes of ancillary services provisioning by various resources. In prior work it is shown how heating and ventilation systems in commercial buildings can provide service in the high frequency band, corresponding to periods ranging from 3 minutes to one hour \cite{haokowbarmey13,haobarmidmey12,linbarmey13}. At the lowest frequencies, an important resource will be flexible manufacturing. An example is Alcoa, that today provides 70MW of service to MISO by providing control over their aluminum smelting operation in Indiana. Alcoa's service is provided continuously, and provides significant revenue to Alcoa and even greater benefits to the region managed by MISO. The technical content of the paper starts with a control architecture designed to address privacy concerns and communication constraints. It is assumed that an individual load can view a regulation signal, much as we can view BPA's regulation signal online today. To provide ancillary service in a specified frequency band, we argue that it is essential to introduce randomization at each load. Among many benefits, randomization avoids synchronization, much like randomized congestion avoidance protocols in communication networks. First deployed nearly fifty years ago, ALOHA may be the first distributed communication protocol based on randomization. \textit{Random Early Detection} for congestion control was introduced in the highly influential paper \cite{flojac93}. The historical discussion in this paper points to significant research on randomized algorithms beginning in the early 1970s, following wide deployment of ALOHA. Randomized protocols are now standard practice in communication networks \cite{sri04a}. It is likely that randomized algorithms will become a standard feature of the power grid of the future. To formulate a randomized control strategy, a Markovian Decision Process (MDP) model is proposed for an individual load. An aggregate model for a large number of loads is then obtained as a mean field limit. A particular formulation of Todorov~\cite{tod07} is adopted because we can obtain an explicit solution, and because of available tools for analysis borrowed from the theory of large deviations. In particular, a key innovation in the present paper is an LTI--system approximation of the aggregate nonlinear model, which is possible through application of results from \cite{konmey05a}. The scalar input in this linear model is a parameter that appears in the MDP cost function. The LTI approximation is convenient for control design at the grid level: the input becomes the control signal that the BA will broadcast to all the loads, which adjusts a parameter in the randomized policy for the optimal MDP solution at each load. 
In the second half of this paper we apply these general results to show how pool pumps can be harnessed to obtain ancillary service in a medium frequency band, corresponding to the dashed line in \Fig{fig:BPA}. This is the same BPA regulation signal, passed through a low pass filter. A pool pump is the heart of a pool's filtration system: It runs each day for a period of time ranging from 4 to 24 hours, and consumes over 1~kW of power when in operation \cite{PPDR08}. The ability to control just half of Florida's pool pumps amounts to over 500~MW of power! Much of the control infrastructure is already in place~\cite{hallo06}. Still, constraints and costs must be satisfied. These include run-times per day and per week, the cost of startup and shut down, as well as the total energy consumption. Moreover, there are privacy concerns and related communication constraints. Consequently, control algorithms must be distributed so that most of the required intelligence resides at individual pool pumps. In this paper we focus on constraints related to run-times per day, which are critical for keeping the water in the pool clean. Privacy and communication constraints will be addressed through the distributed control architecture. A number of recent works have explored the potential for flexible loads for providing ancillary service. These include commercial building thermostatic loads to provide ancillary service in the time-scale of a few minutes (see \cite{matkoccal13} and refs.\ therein), electric vehicle charging~\cite{macalhis10,tombou10,coupertemdeb12,matkoccal13} that can provide ancillary service in the time scale of a few hours, and our own recent work on harnessing ancillary service from commercial building HVAC~\cite{haokowbarmey13,haobarmidmey12,linbarmey13}. Mean-field games have been employed for analysis of aggregate loads in several recent papers \cite{macalhis10,coupertemdeb12}. See \cite{huacaimal07,borsun12,gasgauleb12} for more on general theory of mean-field techniques. The work of~\cite{matkoccal13} is most closely related to the present paper, in that the authors also consider an aggregate model for a large collection of loads. The natural state space model is bilinear, and converted to a linear model through division of the state. The control architecture consists of a centralized control signal computation based on state feedback, and the resulting input is broadcast to the devices. In this paper, intelligence is concentrated at the individual load: An MDP control solution is obtained at each load, but the aggregate behavior is well approximated by a \textit{single-input single-output, linear time-invariant} (SISO-LTI) system. Hence the control problem for the balancing authority can be addressed using classical control design methods. State estimation is not required --- the information required at the BA is an estimate of the proportion of loads that are operating. In the numerical example considered in this paper, the linear system is minimum-phase and stable, which is very helpful for control design. The remainder of the paper is organized as follows. The control solution for a single pool is described in \Section{s:ppcontrol}, along with approximations of the optimal control solution based on general theory presented in the Appendix. The control of the aggregate collection of pools is considered in \Section{s:mfg}. Conclusions and directions of future research are contained in \Section{s:conclude}.
\section{Optimal control for a load and for the grid} \label{s:ppcontrol} \begin{figure}[h] \vspace{-.25cm} \Ebox{.7}{ControlArchitectureTAC.pdf} \vspace{-.25cm} \caption{The control architecture: command $\bfmath{\zeta}$ is computed at a BA, and transmitted to each pool pump. The control decision at a load is binary (turn on/off), and is based only on its own state and the signal $\bfmath{\zeta}$.} \label{fig:arch} \vspace{-08pt} \end{figure} \subsection{Control architecture overview} We begin with a description of the control and information architecture that is the subject of this paper. The components of the architecture are illustrated in \Fig{fig:arch}: \begin{romannum} \item There are $N$ homogeneous loads that receive a common scalar command signal from the balancing authority, or BA, denoted $\bfmath{\zeta}=\{\zeta_t\}$ in the figure. Randomization at each load is desirable to avoid synchronization of loads, and also to facilitate analysis of the aggregate system. It is assumed that each load evolves as a controlled Markov chain: The transition probability for each load is determined by its own state, and the BA signal $\bfmath{\zeta}$. The common dynamics are defined by a controlled transition matrix $\{P_\zeta : \zeta\in\field{R}\}$. For the $i$th load, there is a state process $\bfmath{X}^i$ whose transition probability at time $t$ is given by, \begin{equation} {\sf P}\{X^i_{t+1} = x^+ \mid X^i_t = x^- ,\, \zeta_t=\zeta\} = P_{\zeta}(x^-,x^+) \label{e:Pzeta} \end{equation} where $x^-$ and $x^+$ are possible state-values. The details of the model are described in \Section{s:loadmodel}. \item The BA has measurements of the other two scalar signals shown in the figure: The normalized aggregate power consumption $\bfmath{y}$ and desired deviation in power consumption $\bfmath{r}$. When $\zeta_t=0$ for all $t$, then the aggregate power consumption takes the value $\bfmath{y}^0$. The goal of the BA is a tracking problem: Achieve $y_t\approx y^0 +r_t$ for all $t$. This can be addressed using classical control techniques if the dynamics from $\bfmath{\zeta}$ to $\widetilde{\bfmath{y}}= \bfmath{y}-\bfmath{y}^0$ can be approximated by an LTI system. \end{romannum} The main contributions of this paper are based on the construction of the controlled transition matrix for an individual load, taking into account potentially conflicting goals: The BA desires overall dynamics from $\bfmath{\zeta}$ to $\bfmath{y}$ that facilitate tracking the reference signal $\bfmath{r}$. Each load requires good quality of service. In the case of a pool, the water must be kept clean, and the electricity bill must remain constant over each month. An approach of Todorov \cite{tod07} is adopted to construct the family of transition matrices $\{P_\zeta : \zeta\in\field{R}\}$. They are smooth in the parameter $\zeta$, and a first-order Taylor series approximation gives, for any pair of states $(x^-,x^+) $ \begin{equation} P_\zeta(x^-,x^+) = \exp(\zeta \Gamma(x^-,x^+) +O(\zeta^2)) P_0(x^-,x^+) \label{e:expclE} \end{equation} where $P_0$ denotes the dynamics of a load when $\bfmath{\zeta}\equiv 0$, and $\Gamma$ is a matrix. Based on \eqref{e:clE}, we have \[ \Gamma(x^-,x^+) = \tilutil(x^-) +H (x^+) -H (x^-) \] where the function $H$ is a solution to \textit{Poisson's equation}; a linear equation for the nominal model. This structure leads to the LTI approximation of the input-output dynamics from $\bfmath{\zeta}$ to $\widetilde{\bfmath{y}}$ that is presented in \Proposition{t:linear}. 
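Before turning to the construction of $P_\zeta$, the following sketch summarizes the information flow of \Fig{fig:arch}: each load updates its state using the common transition matrix $P_{\zeta_t}$ as in \eqref{e:Pzeta}, and the BA measures only the fraction of loads that are operating. The sketch is purely illustrative: the family \texttt{P\_family} stands in for the controlled transition matrices constructed below, and the PI gains are arbitrary placeholders for the compensator $G_c$, not values used in the paper.
\begin{verbatim}
import numpy as np

def step_loads(X, P, rng):
    """One synchronous update: each load draws its next state from P(x, .)."""
    d = P.shape[0]
    return np.array([rng.choice(d, p=P[x]) for x in X])

def closed_loop(P_family, U, r, N=10000, kp=1.0, ki=0.1, seed=0):
    """Track y^0 + r_t by broadcasting a PI-adjusted zeta_t to all loads.
    P_family(zeta): d x d row-stochastic matrix; U[x] = 1 if a load is on in
    state x.  The gains kp, ki are illustrative placeholders for G_c."""
    rng = np.random.default_rng(seed)
    P0 = P_family(0.0)
    d = P0.shape[0]
    # nominal aggregate power y^0 = pi_0(U), with pi_0 invariant for P_0
    w, V = np.linalg.eig(P0.T)
    pi0 = np.real(V[:, np.argmax(np.real(w))])
    pi0 /= pi0.sum()
    y0 = float(pi0 @ U)
    X = rng.integers(0, d, size=N)          # initial load states
    zeta, err_int, history = 0.0, 0.0, []
    for t in range(len(r)):
        X = step_loads(X, P_family(zeta), rng)
        y = float(U[X].mean())              # fraction of loads that are on
        err = (y0 + r[t]) - y               # tracking error seen by the BA
        err_int += err
        zeta = kp * err + ki * err_int      # PI feedback; broadcast to loads
        history.append(y)
    return np.array(history)
\end{verbatim}
In the pool-pump application considered later, one can take $P_\zeta={\check{P}}_\zeta$ from Proposition~\ref{lem:Pcheck-infinite}, so that the only quantity communicated to the loads is the scalar $\zeta_t$.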
\Section{s:approx} also contains second-order approximations of $P_\zeta$. In \Section{s:mfg} these general techniques are applied to a collection of residential pools. In this example it is found that the LTI model is minimum phase, and that a simple PI controller can be effectively used for the control transfer function $G_c$ shown in \Fig{fig:arch}. \subsection{Load model and design} \label{s:loadmodel} In this section we present a procedure to construct the controlled transition matrix appearing in \eqref{e:Pzeta}. The controlled Markov chain evolves on a finite state space, denoted ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX} = \{x^1,\dots,x^d\}$. The construction is based on an optimal control problem for an individual load, taking into account the needs of the load and the grid. It is assumed that a transition matrix $P_0$ is given that models ``control free'' behavior of the Markov chain, and a utility function $\util\colon{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}\to \field{R}$ is used to model the needs of the grid. The optimal control problem will balance average utility and the cost of deviation. Since we focus on a single load, in this subsection the index $i$ in \eqref{e:Pzeta} is dropped, and we denote by $\bfmath{X}=(X_0,X_1,\dots)$ the stochastic process evolving on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ that models this load. In the second half of the paper we will focus on a particular example in which each load is a residential pool pump. The true nominal behavior would be deterministic -- most consumers set the pump to run a fixed number of hours each day. However, the randomized policy is based on a stochastic model for nominal behavior, so we introduce some randomness to define the nominal transition matrix $P_0$. The state space is taken to be the finite set, \begin{equation} {\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}=\{ (m,i) : m\in \{ \oplus,\ominus\} ,\ i\in \{1,\dots,T\} \} \label{e:poolstate} \end{equation} If $X_t = (\ominus,i)$, this indicates that the pool-pump was turned off and has remained off for $i$ time units, and $X_t = (\oplus,i)$ represents the alternative that the pool-pump has been operating continuously for exactly $i$ time units. A state-transition diagram is shown in \Fig{fig:pppDynamics}. The values of $P_0(x,y)$ will be chosen to be nearly $0$ or $1$ for most $x,y\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$. \begin{figure}[h] \Ebox{.55}{pppDynamicsTAC.pdf} \vspace{-.25cm} \caption{State transition diagram for the pool-pump model. } \label{fig:pppDynamics} \vspace{-08pt} \end{figure} The utility function $\util$ on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ is chosen as the indicator function that the pool pump is operating: \[ \util(x) = \sum_i \field{I}\{ x = (\oplus, i) \} \] Whether this actually represents any utility to the grid operator depends on the state of the grid. This will be clarified after we define the optimization problem and its solution. 
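To make the pool-pump example concrete, here is one possible instantiation of the state space \eqref{e:poolstate}, a nominal transition matrix $P_0$, and the utility $\util$. The numerical values (the horizon $T$, the small switching probability, and the rule that a pump switches for certain after $T$ time units in one mode) are illustrative assumptions only; the text requires merely that $P_0(x,y)$ be nearly $0$ or $1$ for most pairs of states, consistent with \Fig{fig:pppDynamics}.
\begin{verbatim}
import numpy as np

T = 12            # illustrative number of time units a pump stays on or off
eps = 0.05        # small switching probability ("nearly 0 or 1" entries)

# State space: x = (m, i), m = '+' (on) or '-' (off), i = 1, ..., T
states = [('-', i) for i in range(1, T + 1)] + [('+', i) for i in range(1, T + 1)]
idx = {x: n for n, x in enumerate(states)}
d = len(states)

P0 = np.zeros((d, d))
for (m, i) in states:
    other = '+' if m == '-' else '-'
    if i < T:
        P0[idx[(m, i)], idx[(m, i + 1)]] = 1 - eps   # keep the current mode
        P0[idx[(m, i)], idx[(other, 1)]] = eps       # occasionally switch mode
    else:
        P0[idx[(m, i)], idx[(other, 1)]] = 1.0       # switch after T units
assert np.allclose(P0.sum(axis=1), 1.0)

# Utility: 1 if the pump is operating (a state of the form (+, i)), else 0
U = np.array([1.0 if m == '+' else 0.0 for (m, i) in states])
\end{verbatim}
Combining such a pair $(P_0,\util)$ with the construction in Proposition~\ref{lem:Pcheck-infinite} below yields one concrete controlled family $\{{\check{P}}_\zeta\}$ of the kind used in \Section{s:mfg}.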
\medbreak \textit{We now return to the general model.} Consider first a finite-time-horizon optimization problem: For a given terminal time $T$, let $p_0$ denote the probability measure on strings of length $T$: \[ p_0(x_1,\dots, x_{T}) = \prod_{i=0}^{T-1} P_0(x_i,x_{i+1}),\qquad x\in {\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}^{T} \] where $x_0\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ is assumed to be given. A fixed scalar $\zeta\in\field{R}$ is interpreted as a \textit{weighting parameter} in the following definition of \textit{total welfare}. For any probability measure $p$, this is defined as the weighted difference, \[ \welf_T(p) = \zeta {\sf E}_p\Bigl[\sum_{t=1}^{T} \util(X_t) \Bigr] -D(p\| p_0) \, \] where the expectation is with respect to $p$, and $D$ denotes relative entropy. Let $p^{T*}$ denote the probability measure that maximizes this expression. \begin{proposition} \label{t:twisted} The probability measure $p^{T*}$ is the \textit{twisted distribution}, \begin{equation} p^{T*} (x_1,\dots, x_{T}) = \exp\Bigl(\zeta \sum_{t=1}^{T} \util(x_t) - \Lambda_T(\zeta)\Bigr) p_0 (x_1,\dots, x_{T}) \label{e:pstar} \end{equation} where \begin{equation} \Lambda_T(\zeta) = \log\Bigl\{ {\sf E}\Bigl[ \exp\Bigl(\zeta \sum_{t=1}^{T} \util(X_t) \Bigr) \Bigr] \Bigr\}\,, \label{e:LMGF} \end{equation} and the expectation is with respect to $p_0$. Moreover, $\welf_T(p^{T*})=\Lambda_T(\zeta) $ is the optimal value. \qed \end{proposition} \IEEEproof Optimality of $p^{T*}$ follows from convex duality between the log moment generating function and relative entropy -- \cite[Proposition II.1]{huaunnmeyveesur11} and \cite[Lemma~2.39]{demzei98a}. The formula \eqref{e:LMGF} follows from the fact that $p^{T*}$ sums to unity, so that $\Lambda_T(\zeta)$ can be interpreted as a normalizing constant. The identity $\welf_T(p^{T*}) = \Lambda_T(\zeta)$ follows from the definitions of $\welf_T$ and $p^{T*}$. \qed The probability measure $p^{T*}$ defines a Markov chain on the time interval $\{0,1, \dots, T\}$, but it is not necessarily time-homogeneous. In the infinite horizon case, we would like to find a distribution $p^*$ on infinite sequences that attains the optimal average welfare, \notes{SM to PB: you added the homogeneous statement before the equation, but it already appeared after.} \begin{equation} \eta^*_\zeta=\lim_{T\to\infty} \frac{1}{T} \welf_T(p^{T*}) = \lim_{T\to\infty} \frac{1}{T} \log\Bigl\{ {\sf E}\Bigl[ \exp\Bigl(\zeta \sum_{t=1}^{T} \util(X_t) \Bigr) \Bigr] \Bigr\} \label{e:etastar} \end{equation} A solution to the infinite horizon problem is given by a time-homogenous Markov chain whose transition matrix ${\check{P}}_\zeta$ is easy to compute, based on the solution of an eigenvector problem; these results are summarized in the proposition that follows. The proof of \Proposition{lem:Pcheck-infinite} is given in the Appendix. 
\begin{proposition} \label{lem:Pcheck-infinite} If $P_0$ is irreducible, an optimizing $p^*$ that achieves \eqref{e:etastar} is defined by a time-homogeneous Markov chain whose transition probability is given by \begin{equation} {\check{P}}_\zeta(x,y) = \frac{1}{\lambda } \frac{1}{ v(x)} {\widehat P}_\zeta(x,y) v(y)\,,\qquad x,y\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX} , \label{e:cPool} \end{equation} where ${\widehat P}_\zeta$ is the scaled transition matrix, \begin{equation} {\widehat P}_\zeta(x,y) = \exp(\zeta \util(x)) P_0(x,y) \,,\qquad x,y\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}, \label{e:scaledTM} \end{equation} and $(\lambda,v)$ is the eigen-pair for the eigenvector problem \begin{equation} {\widehat P}_\zeta v = \lambda v \label{e:TodPF} \end{equation} such that $\lambda=\lambda_\zeta>0$ is the unique maximal eigenvalue for ${\widehat P}_\zeta$, $v=v_\zeta$ is unique up to constant multiples, and $v(x)>0$ for each $x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$. In addition, on writing $\Lambda=\log(\lambda)$ and $h=\log v$, the following bounds hold for each $T$, \begin{equation} \begin{aligned} 0\le \welf_T(p^{T*}) - \welf_T(\check p^{T}) & \le 2 \| h \|_{\text{\small sp}} \\ |T\Lambda - \welf_T(p^{T*}) |& \le \| h \|_{\text{\small sp}} \\ \end{aligned} \label{e:cpOpt} \end{equation} where the span norm is defined by $ \| h \|_{\text{\small sp}} = \max h - \min h$. Consequently, the Markov model achieves the optimal average welfare \eqref{e:etastar} with $\eta^*_\zeta=\Lambda$. \qed \end{proposition} The eigenvector problem \eqref{e:TodPF} appears in multiplicative ergodic theory \cite{konmey05a}, and also in Todorov's analysis \cite{tod07}. It is shown in \cite{tod07} that the \textit{relative value function} appearing in the average cost optimality equations is the logarithm of the eigenvector: \begin{equation} h^*(x) = \log(v(x)),\qquad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}. \label{e:relative} \end{equation} See also the derivation in \cite{meybarbusehr13} for a variant of this model. Second order Taylor series approximations for $v$ and $\eta^*$ near $\zeta= 0$ can be found by borrowing tools from large-deviations theory. Some of these approximation results are new, and are collected together in the next subsection and in the Appendix. \subsection{Approximations} \label{s:approx} Approximations will be needed for analysis when we extend the model to allow $\zeta$ to change with time. A solution to the eigenvector problem \eqref{e:TodPF} can be represented through a regenerative formula. Let ${\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ be some fixed state that is reachable from each initial condition of the chain, under the transition law $P_0$. That is, the chain is assumed to be ${\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}$-irreducible \cite{MT}. Since the state space is finite and $P_0$ is irreducible, there is a unique invariant probability measure $\pi_0$ for $P_0$. The first return time is denoted, \[ \tau = \min\{ t\ge 1 : X_t = {\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}\}\,. \] Recall that the infinite horizon optimal welfare is given by $\eta^*_\zeta=\log(\lambda)$. 
From the theory of positive matrices \cite{sen81,num84,konmey05a}, it follows that it is the unique solution to, \begin{equation} 1= {\sf E}_{\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}} \Bigl[\exp \Bigl(\sum_0^{\tau-1} [ \zeta \util(X_t) - \eta^*_\zeta ] \Bigr)\Bigr] \label{e:PFeta} \end{equation} where the subscript indicates that the initial condition is $X(0)={\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}$. Moreover, for each $x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$, the value of $v(x)$ is obtained as the corresponding expectation, with initial condition $X(0)=x$: \begin{equation} v(x)= {\sf E}_x\Bigl[\exp \Bigl(\sum_0^{\tau -1} [ \zeta \util(X_t) - \eta^*_\zeta ] \Bigr) \Bigr] \label{e:PFv} \end{equation} These expectations are each with respect to the nominal transition law $P_0$. A Taylor-series approximation of $\eta^*_\zeta$ is based on two parameters, defined with respect to the nominal model $P_0$ with invariant probability measure $\pi_0$. The first-order coefficient is the steady-state mean of $\util$, \begin{equation} \eta_0=\sum_x \pi_0(x)\util(x) \label{e:eta0} \end{equation} The second-order coefficient is based on the \textit{asymptotic variance} of $\util$ for the nominal model (the variance appearing in the Central Limit Theorem (CLT) for the nominal model). For this finite state space model this has two similar representations, \begin{equation} \begin{aligned} \kappa^2 &=\lim_{T\to\infty}\frac{1}{T} {\sf E}\Bigl[ \Bigl( \sum_0^{T -1} \tilutil(X_t) \Bigr)^2\Bigr] \\ &= \pi_0({\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}) {\sf E}_{\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}\Bigl[ \Bigl( \sum_0^{\tau -1} \tilutil(X_t) \Bigr)^2\Bigr] \end{aligned} \label{e:varsigma} \end{equation} where $\tilutil = \util-\eta_0$. See \cite[Theorem~17.0.1]{MT} for the CLT, and eqn.~(17.13) of \cite{MT} for the second representation above. Similarly, the following functions of $x$ are used to define a second order Taylor series approximation for $ h^*_\zeta$. The first-order term is the solution to \textit{Poisson's equation} for $P_0$, \begin{equation} H (x) = {\sf E}_x\Bigl[\sum_0^{\tau -1} \tilutil(X_t) \Bigr] \label{e:fish0} \end{equation} The asymptotic variance can be expressed in terms of Poisson's equation \cite{MT,CTCN}: \[ \kappa^2 =\sum_x \pi_0(x) \bigl(2\tilutil(x)H (x) -\tilutil(x)^2 \bigr) \] The second-order term in an approximation of $v$ is another variance, \begin{equation} {\cal S}(x)= {\sf E}_x\Bigl[ \Bigl(\sum_0^{\tau -1} \tilutil(X_t) \Bigr)^2 \Bigr] - \bigl(H (x)\bigr)^2 \,, \quad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}. \label{e:SOvH} \end{equation} \begin{proposition} \label{t:etaSecondOrder} The following hold for the finite state space model in which $P_0$ is irreducible: \begin{romannum} \item The optimal average welfare $\eta^*_\zeta$ is convex as a function of $ \zeta$, and admits the Taylor series expansion, \begin{equation} \eta^*_\zeta = \eta_0 \zeta + \half \kappa^2 \zeta^2 + O( \zeta^3) \label{e:SOeta} \end{equation} \item The mean of $\util$ under the invariant probability measure $\check{\pi}_\zeta$ for ${\check{P}}_\zeta$ is given by, \begin{equation} \sum_x \check{\pi}_\zeta(x)\util(x) = \frac{d}{d\zeta} \eta^*_\zeta \label{e:ppoffProb} \end{equation} This admits the first-order Taylor series approximation \begin{equation} \frac{d}{d\zeta} \eta^*_\zeta = \eta_0 + \kappa^2 \zeta + O( \zeta^2) \label{e:ppoffProbApprox} \end{equation} \item The relative value function \eqref{e:relative} admits the second-order Taylor series approximation, \begin{equation} h^*_\zeta(x) = \zeta H (x) + \half \zeta^2 {\cal S}(x) + O( \zeta^3) \label{e:happrox} \end{equation} \end{romannum} \qed \end{proposition} \IEEEproof Equations \eqref{e:SOeta}--\eqref{e:ppoffProbApprox} follow from the fact that $\eta^*_\zeta = \log(\lambda)$ can be expressed as a cumulative log-moment generating function \cite[Prop. 4.9]{konmey05a}. Convexity follows from the fact that $\eta^*_\zeta$ is the maximum of linear functions of $ \zeta$ (following the linear-program formulation of the ACOE \cite{bor02a}). The approximation \eqref{e:happrox} follows from the representation \eqref{e:PFv} for $v$, and the definition $h^*_\zeta = \log(v)$ (see \eqref{e:relative}). \qed The representations in this subsection are useful for analysis, but not for computation. Methods to compute $H $ and ${\cal S}$ are contained in the Appendix. \subsection{Aggregate load model} Consider $N$ loads operating independently under the randomized policy described in the previous subsection. The state of the $i$th load is denoted $ X^i_t $. For large $N$ we have from the Law of Large Numbers, \begin{equation} \begin{aligned} \frac{1}{N}\sum_{i=1}^N \util(X^i_t) & \approx {\sf E} [\util (X_t ) ] \end{aligned} \label{e:AvgCostPool} \end{equation} The expectation on the right is with respect to the optimal transition law ${\check{P}}_\zeta$, where $ \zeta$ is the parameter used in \eqref{e:etastar}. We pose the following centralized control problem: How to choose the variable $ \zeta$ to regulate average utility \textit{in real time}, based on measurements of the average utility, and also a regulation signal denoted $\bfmath{r}$. Let $y_t$ be the fraction of loads that are on at time $t$: \begin{equation} y_t =\frac{1}{N}\sum_{i=1}^N \util(X^i_t), \label{e:y} \end{equation} which is assumed to be observed by the BA. To address the control problem faced by the BA it is necessary to relax the assumption that the parameter $\zeta$ is fixed. We let $\bfmath{\zeta}=\{\zeta_0,\zeta_1,\dots\}$ denote a sequence of scalars, which is regarded as an input signal for this control problem. An aggregate model is obtained in two steps. 
In step 1 the existence of a mean-field limit is assumed: Let $N\to\infty$ to obtain the generalization of \eqref{e:AvgCostPool}, \begin{equation} \lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^N \field{I}\{ X^i_t =x \} = \mu_t(x)\,, \quad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}. \label{e:mfgPool} \end{equation} For a given initial distribution $\mu_0$ on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$, the distribution $\mu_t$ is defined by $\mu_t(x_t) = {}$ \begin{equation} \sum_{x_i\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}} \mu_0(x_0) {\check{P}}_{\zeta_0}(x_0,x_1) {\check{P}}_{\zeta_1}(x_1,x_2) \cdots {\check{P}}_{\zeta_{t-1}}(x_{t-1},x_t) \label{e:muMF} \end{equation} where $x_t$ is an arbitrary state in ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$, and the sum is over all intermediate states. We view $\{\mu_t\}$ as a state process that is under our control through $\bfmath{\zeta}$. Justification for the mean-field limit is contained in \Theorem{t:MFL}. Step 2 is based on the Taylor series approximations surveyed in the previous subsection to approximate this nonlinear system by a linear state space model with $d$-dimensional state $\bfmath{\Phi}$ and output $\bfmath{\gamma}$. It is defined so that for any time $t$, and any $i$, \[ \begin{aligned} \mu_t(x^i)&=\pi_0(x^i)+\Phi_t(i) + o(\bfmath{\zeta}) \\ \gamma_t &= \tilde{y}_t + o(\bfmath{\zeta}) \end{aligned} \] where $ \tilde{y}_t=y_t-y^0$, with $y^0 = \sum_x \pi_0(x) \util(x)$, and where $o(\bfmath{\zeta})$ is in fact $O(\zeta_0^2+\cdots + \zeta_t^2)$. \begin{proposition} \label{t:linear} Consider the nonlinear state space model whose state evolution is $\mu_{t+1} = \mu_t {\check{P}}_{\zeta_t}$, and output is $y_t=\sum_x \mu_t(x)\util(x)$. Its unique equilibrium with $\bfmath{\zeta}\equiv 0$ is $\mu_t\equiv \pi_0$ and $y_t\equiv y^0\mathbin{:=} \sum_x \pi_0(x)\util(x)$. Its linearization around this equilibrium is given by, \begin{equation} \begin{aligned} \Phi_{t+1} &= A \Phi_t + B \zeta_t \\ \gamma_t &= C \Phi_t \end{aligned} \label{e:LSSmfg} \end{equation} where $A=P^{\hbox{\it\tiny T}}_0$, $C$ is a row vector of dimension $d=|{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}|$ with $C_i= \util(x^i) $ for each $i$, and $B$ is a $d$-dimensional column vector with entries $B_j = \sum_x\pi_0(x) {\cal E}(x,x^j) $, where \begin{equation} {\cal E}(x^i,x^j) = \Bigl[ \tilutil(x^i)+ H (x^j) -H (x^i) \Bigr]P_0(x^i,x^j) \label{e:clE} \end{equation} for each $x^i,x^j\in {\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$. The initial condition is $\Phi_0(i)=\mu_0(x^i)-\pi_0(x^i)$, $1\le i\le d$. The matrix ${\cal E}$ is equal to the derivative, \[ {\cal E}=\frac{d}{d\zeta} P_\zeta \Big|_{\zeta=0} \] Consequently, the formula \eqref{e:clE} implies the approximation \eqref{e:expclE}. \qed \end{proposition} \IEEEproof The formulae for $A$ and $C$ follow from the fact that the system is linear in the state. 
We have, from \eqref{e:cPool}, \[ {\check{P}}_\zeta(x^i,x^j) = e^{ \zeta\util(x^i) - \eta^*_\zeta - h^*_\zeta(x^i)} P_0(x^i,x^j) e^{h^*_\zeta(x^j)} \] Based on the first order approximation of $h^*_\zeta$ in \Proposition{t:etaSecondOrder} we obtain, \[ {\check{P}}_\zeta(x^i,x^j) \approx e^{\zeta[-H (x^i)+\tilutil(x^i) ]} P_0(x^i,x^j) e^{\zeta H (x^j)} \] where $H $ is a solution to Poisson's equation (with forcing function $\util$) for the nominal model (see \eqref{e:fish0}). Using a first order Taylor series for the exponential then gives, \[ \begin{aligned} {\check{P}}_\zeta(x^i,x^j) &\approx [1-\zeta(H (x^i)-\tilutil(x^i) )] P_0(x^i,x^j)[1+ \zeta H (x^j)] \\ &\approx P_0(x^i,x^j) + \zeta {\cal E}(x^i,x^j) \end{aligned} \] If $\mu \approx \pi_0$ and $ \zeta$ is small, then we can approximate, \[ \mu {\check{P}}_\zeta \approx \mu P_0 + \zeta B^{\hbox{\it\tiny T}} \,, \] where $B$ is the column vector with entries $B_j = \sum_x\pi_0(x) {\cal E}(x,x^j) $. \qed Next we justify the mean-field model \eqref{e:mfgPool}. For the purpose of analysis we lift the state space from the $d$-element set ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX} = \{x^1,\cdots,x^d\}$, to the $d$-dimensional simplex $\textsf{S}$. For the $i^{th}$ load at time $t$, the element $\pi_t^i \in \textsf{S}$ is the degenerate distribution whose mass is concentrated at $x$ if $X^i_t= x$. The average over $N$, denoted $\mu_t^N\in \textsf{S}$, is the empirical distribution, \[ \mu_t^N( x) =\frac{1}{N}\sum_{i=1}^N \pi_t^i(x) =\frac{1}{N}\sum_{i=1}^N \field{I}\{X^i_t =x \} \, , \quad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}, \] In the proof of convergence it is assumed that $\bfmath{\zeta}^N$ is obtained using state feedback of the form, \[ \zeta_t^N = \phi_t(\mu_0^N,\dots,\mu_t^N) \] where $\phi_t\colon\textsf{S}^{t+1}\to\field{R}$ is continuous for each $t$, and does not depend upon $N$. The following result establishes convergence. \begin{theorem} \label{t:MFL} Suppose $\mu_0^N \rightarrow \mu_0$ as $ N\rightarrow \infty$, and that the state transition matrix $P_\zeta$ is continuous as a function of $\zeta$. Then for each $t$, \begin{equation} \lim_{N\to\infty} \mu_t^N= \mu_t, \qquad \text{\it with probability one, } \label{e:MFL} \end{equation} where the right hand side denotes the probability measure \eqref{e:muMF}, in which \[ \zeta_t = \phi_t(\mu_0,\dots,\mu_t),\qquad t\ge 0. \] \qed \end{theorem} The proof of this result is given at the end of this subsection, and is largely based on a version of the Law of Large Numbers. Let $\{M_{N,k},1\le k \le N\}$ denote a martingale array: This means that ${\sf E}[M_{N,k}|M_{N,1},\cdots M_{N,j}]=M_{N,j}$ for each $N$ and $1 \le j < k \le N$. When $k=N$, we denote $M_N = M_{N,N}$. \begin{proposition} \label{t:MA-LLN} Suppose that $M_{N,k}$ is a martingale array with bounded increments: For some $c_m<\infty$, \[ | M_{N,k+1}-M_{N,k} | \le c_m\qquad \text{for all $k$ and $N$} \] Then the Law of Large Numbers holds: \[ \lim_{N\to\infty} \frac{M_N}{N} = 0, \qquad \text{\it with probability one. } \] \qed \end{proposition} \IEEEproof The Hoeffding-Azuma inequality \cite{mcd98a} gives the following bound: \[ {\sf P}\{ N^{-1} |M_N |\ge t\} \le 2 \exp(- [N t]^2/[2 N c_m^2] ) \] The right hand is summable, so the result follows from the Borel-Cantelli Lemma. 
\qed \Proposition{t:MA-LLN} is applied to show that the sequence of empirical distributions $\{\mu_t^N\}$ can be approximated by the mean-field model perturbed by a disturbance that vanishes as $N\to\infty$: \begin{lemma} \label{t:W-is-MA} The empirical distributions $\{\mu_t^N: t\ge 0\}$ obey the recursion \begin{equation} \mu_{t+1}^N = \mu_t^N P_{\zeta_t^N}+W_{t+1}^N, \label{e:empir_dist} \end{equation} in which $W_{t+1}^N=\frac{1}{N}\sum_{i=1}^N \Delta_{t+1}^i$ for a family of vector random variables $\{ \Delta_{t+1}^i\}$. On denoting $M_{N,k}=\sum_{i=1}^k \Delta_t^i$ we have, \begin{romannum} \item $\{M_{N,k}:1\le k\le N\}$ is a martingale array. \item There exists $c_m$ such that $\| M_{N,k}-M_{N,k-1}\| \le c_m$ for all $N$ and all $k$ such that $1< k\leq N$. \end{romannum} \end{lemma} \textit{Proof of \Lemma{t:W-is-MA}}: To establish \eqref{e:empir_dist} we first establish a similar expression for $\{\pi_t^i \}$. For each $i$, the sequence of degenerate distributions $\{\pi_t^i \}$ evolves according to a random linear system, \begin{equation} \pi_{t+1}^i=\pi_t^iG_{t+1}^i \label{e:piG} \end{equation} in which $\pi_t^i$ is interpreted as a $d$-dimensional row vector, and $G_t^i$ is a $d\times d$ matrix with entries $0$ or $1$ only, and $\sum_l G_t^i(x^j,x^l)=1$ for all $j$. It is conditionally independent of $\{\pi_0^i,\cdots,\pi_t^i\}$, given $\zeta^N_t$, with \begin{equation} \label{e:EG=P} {\sf E}[G_{t+1}^i|\pi_0^i, \cdots, \pi_t^i, \zeta^N_t]=P_{\zeta^N_t}. \end{equation} Dependency of $\pi_t^i$, $G_t^i$ on $N$ is suppressed, but we must distinguish $\zeta^N_t$ from its limit $\zeta_t$. The random linear system \eqref{e:piG} can thus be described as a linear system driven by ``white noise'': \begin{equation} \pi_{t+1}^i=\pi_t^iP_{\zeta^N_t}+\Delta_{t+1}^i \label{e:Sys_delta} \end{equation} where $\{\Delta_{t+1}^i=\pi_t^i(G_{t+1}^i -P_{\zeta^N_t}): t\geq1 \}$, which establishes \eqref{e:empir_dist}. The following representation will clarify the remaining analysis: \begin{equation} \text{ $G_t^i={\cal G}(\zeta^N_{t-1},\xi_t^i)$, where $\{ \xi_t^i: t\geq 1, \ i\ge 1\}$ are i.i.d.} \label{e:A1} \end{equation} For $1\le i< N$ and fixed $t$, we define two $\sigma$-algebras: \[ \begin{aligned} {\cal F}_i &= \sigma \{\Delta _t^k, k\le i \} \\ {\cal H}_i&=\sigma\{\pi_{t-1}^{k+1}, \zeta^N_{t-1}, \Delta _t^k, k\le i \} \end{aligned} \] Under \eqref{e:A1} we have the extension of \eqref{e:EG=P}, that ${\sf E}[G_t^{i+1}\mid {\cal H}_i]=P_{\zeta^N_{t-1}}$. Moreover, by construction the random variable $\pi_{t-1}^{i+1}$ is ${\cal H}_i$-measurable. Therefore, \[ {\sf E}[\Delta_t^{i+1}\mid {\cal H}_i]={\sf E}[\pi_{t-1}^{i+1}(G_t^{i+1}-P_{\zeta^N_{t-1}})\mid {\cal H}_i]=0 \] The smoothing property of the conditional expectation, and the construction $ {\cal F}_i \subset {\cal H}_i$, then gives (i), \[ {\sf E}[\Delta_t^{i+1}\mid {\cal F}_i]={\sf E}[{\sf E}[\Delta_t^{i+1}\mid {\cal H}_i]\mid {\cal F}_i]=0 \] From the definition of $\Delta_t^i$ below equation \eqref{e:Sys_delta}, it follows that $\{ \|\Delta_t^i\| \}$ admits a uniform bound. Consequently, $\| M_{N,k}-M_{N,k-1}\|=\| \Delta_t^k\|$ is bounded, which is (ii). \qed \head{Proof of \Theorem{t:MFL}} Denote, for $T\ge 0$, the deviation $ \tilde{\mu}_T^N = \mu_T^N -\mu_T$. We prove by induction on $T$ that $\tilde{\mu}_T^N \to 0$ as $N\to\infty$. This holds by assumption when $T=0$. Suppose now that \eqref{e:MFL} holds for $t\le T$. 
By continuity of $\phi_t$, it follows that $\zeta_t^N\to \zeta_t$ as $N\to\infty$. We also have by the definitions, \[ \tilde{\mu}_{T+1}^N = \tilde{\mu}_T^NP_{\zeta_T} + \mu_T^N(P_{\zeta_T^N}-P_{\zeta_T}) + W_{T+1}^N \] \Lemma{t:W-is-MA} and \Proposition{t:MA-LLN} imply that $W_{T+1}^N \to0$ as $N\to\infty$. Continuity of $P_\zeta$ then implies that \[ \lim_{N\to\infty} \tilde{\mu}_{T+1}^N = 0 \] \qed \section{Controlling a large number of pools} \label{s:mfg} For the remainder of the paper we apply the results of the previous section to the control of a large population of residential pools. The nominal transition matrix $P_0$ is defined by the probabilities of turning the pump on or off, as illustrated in the state transition diagram \Fig{fig:pppDynamics}. In many of the numerical results described below a symmetric model was chosen for $P_0$, in which $p_i^\oplus=p_i^\ominus$, where $p_i^\oplus \mathbin{:=} {\sf P}\{\text{pump switches on} \,|\, \text{it has been off $i$ hours} \}$. Similarly, $p_i^\ominus \mathbin{:=} {\sf P}\{\text{pump switches off} \,|\, \text{it has been on $i$ hours} \}$. The utility function $\util$ on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ is chosen as the indicator function that the pool pump is operating: \begin{equation} \util(x) = \sum_i \field{I}\{ x = (\oplus, i) \} \label{e:kappaOff} \end{equation} The parameter $ \zeta$ in \eqref{e:etastar} can be positive or negative; if $ \zeta>0$ this control formulation is designed to provide incentive to turn pumps on. It remains to give numerical values for $p_i^\oplus$ and $p_i^\ominus$, $1\le i\le T$. In the symmetric model, the specification of these probabilities is performed as follows. Fix $\gamma>1$ and define, \[ \varrho_s(x) = \begin{cases} 2^{\gamma-1} x^\gamma & 0\le x\le 1/2 \\ 1- 2^{\gamma-1}(1- x)^\gamma & 1/2\le x\le 1 \end{cases} \] If over a 24 hour day we choose a sampling time $T=30$ minutes, then in the symmetric model we take, \begin{equation} p_i^\oplus = p_i^\ominus =\varrho_s(i/48)\,,\qquad 1\le i\le 48. \label{e:pvarrho} \end{equation} \Fig{fig:plus} shows a plot of the resulting probability $p_i^\oplus$ vs.\ $i$ with $\gamma = 6$. To go beyond the symmetric model, introduce a parameter $\alpha $ intended to represent the fraction of the day that the pool is operating. We modify $ \varrho_s$ as follows, \[ \varrho_s^+(x) = \varrho_s(x^{\delta_+}),\qquad \varrho_s^-(x) =\varrho_s(x^{\delta_-}), \] where $\delta_+$ is chosen so that $0.5^{\delta_+}=1-\alpha$, or $\delta_+ =- \log_2(1-\alpha)$, and similarly $\delta_- =- \log_2(\alpha)$. For the same sampling parameters as in the previous example, we then take, \begin{equation} p_i^\oplus =\varrho_s^+(i/48),\quad p_i^\ominus =\varrho_s^-(i/48)\,,\qquad 1\le i\le 48. \label{e:pdefn} \end{equation} As $\gamma\to\infty$, the functions in \eqref{e:pdefn} will converge to step functions corresponding to a deterministic cleaning period of $\alpha \times 24$ hours. We find numerically that the average cleaning period is somewhat smaller when $\alpha <\half$ and $\gamma<\infty$. 
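The construction above, together with the eigenvector problem \eqref{e:TodPF}, can be carried out numerically in a few lines. The sketch below is illustrative only: the function names are ours, the Perron eigenpair is obtained from a dense eigen-decomposition (adequate for the modest state-space sizes considered here), and the arrays \texttt{P0} and \texttt{U} are assumed to be supplied, e.g.\ as in the sketch in \Section{s:loadmodel}.
\begin{verbatim}
import numpy as np

def varrho_s(x, gamma=6.0):
    """Symmetric switching law (e:pvarrho): smooth ramp from 0 to 1 on [0,1]."""
    x = np.asarray(x, dtype=float)
    lo = 2.0 ** (gamma - 1) * x ** gamma
    hi = 1.0 - 2.0 ** (gamma - 1) * (1.0 - x) ** gamma
    return np.where(x <= 0.5, lo, hi)

def switching_probs(T=48, gamma=6.0, alpha=0.5):
    """p_on, p_off following (e:pvarrho)/(e:pdefn); alpha = target on-fraction."""
    i = np.arange(1, T + 1) / T
    d_plus, d_minus = -np.log2(1.0 - alpha), -np.log2(alpha)
    p_on = varrho_s(i ** d_plus, gamma)
    p_off = varrho_s(i ** d_minus, gamma)
    return p_on, p_off

def tilted_kernel(P0, U, zeta):
    """Optimal kernel of Proposition (lem:Pcheck-infinite) via the Perron pair."""
    hatP = np.exp(zeta * U)[:, None] * P0              # (e:scaledTM)
    lam_all, V = np.linalg.eig(hatP)
    k = np.argmax(lam_all.real)                        # Perron root of the
    lam, v = lam_all[k].real, np.abs(V[:, k].real)     # irreducible matrix
    checkP = (hatP * v[None, :]) / (lam * v[:, None])  # (e:cPool)
    eta_star = np.log(lam)                             # optimal average welfare
    return checkP, eta_star, v
\end{verbatim}
For $\alpha=1/2$ the function \texttt{switching\_probs} reduces to the symmetric specification \eqref{e:pvarrho}.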
\begin{figure}[h] \Ebox{.5}{pplusTAC.pdf} \vspace{-.05cm} \caption{Control free behavior of a pool used for numerical studies.} \label{fig:plus} \vspace{-08pt} \end{figure} \subsection{Approximations} The steady-state probability that a pool-pump is in operation is given by \[ \check P\{\text{pool-pump is on}\}= \sum_x \check{\pi}_\zeta(x)\util(x) \] A linear approximation is obtained in \Proposition{t:etaSecondOrder}~(ii): \begin{equation} \check P\{\text{pool-pump is on}\} = \eta_0 + \kappa^2 \zeta + O( \zeta^2) \label{e:ppoffProbApp} \end{equation} A comparison of the true probability and its affine approximation is shown in \Fig{fig:pponprob} for the symmetric model, in which $\eta_0=1/2$. The approximation is very tight for $|\zeta|\le 3$. For larger values of $ \zeta$ the true steady-state probability saturates (approximately $0.9$ as $ \zeta\to +\infty$). \begin{figure}[h] \Ebox{.5}{pponprobTAC.pdf} \vspace{-.2cm} \caption{Approximation of the steady-state probability that a pool-pump is operating under ${\check{P}}$.} \label{fig:pponprob} \end{figure} For fixed $ \zeta$, the controlled model ${\check{P}}$ has the same form as $P_0$, with transformed probability vectors $\check p^\oplus$ and $\check p^\ominus$. \Fig{fig:checkpplus} contains plots of the transformed vector $\check p^\oplus$ for values $ \zeta=0, \pm 2, \pm 4$. The plots of $\check p^\ominus$ are obtained through symmetry. \begin{figure}[h] \Ebox{.65}{checkpplusTAC.pdf} \vspace{-.2cm} \caption{Transformed probability vector $\check p^\oplus$ under ${\check{P}}$.} \vspace{-.2cm} \label{fig:checkpplus} \end{figure} The approximation of the average welfare established in \Proposition{t:etaSecondOrder} is, \begin{equation} \eta^*_\zeta = \eta_0 \zeta+\half \kappa^2 \zeta^2 +O( \zeta^3) \label{e:SOeta_pool} \end{equation} Shown in \Fig{fig:QuadApprox} is a comparison of $\eta_\zeta^*$ with linear and quadratic approximations based on \eqref{e:SOeta_pool}. \begin{figure}[h] \Ebox{.55}{QuadApproxTAC.pdf} \vspace{-.25cm} \caption{The optimal average welfare $\eta_\zeta^*$ and its quadratic approximation.} \label{fig:QuadApprox} \end{figure} The plots in \Fig{fig:QuadUapproxExp} compare the eigenvector $v=e^{h^*_\zeta}$ with the exponential of the quadratic approximation \eqref{e:happrox} given in \Proposition{t:etaSecondOrder}~(iii). The computations of $H$ and ${\cal S}$ were based on the alternate expressions for these functions described in \Proposition{t:PoissonDerivatives}. They are normalized so that the common maxima are equal to unity. The approximation is nearly perfect over the range $\zeta\in [-4,4]$. \begin{figure}[h] \Ebox{.75}{vposnegTAC.pdf} \caption{Eigenvectors $v_\zeta=e^{h^*_\zeta}$, and their quadratic approximations $\exp( \zeta H (x) + \half \zeta^2 {\cal S}(x)) $. } \label{fig:QuadUapproxExp} \vspace{-13pt} \end{figure} \subsection{Aggregate load model for pool population} Here we examine the linear model \eqref{e:LSSmfg} that will be used by the BA for control synthesis. We begin with an equilibrium analysis in which $\bfmath{\zeta}$ is held constant: Suppose that $\bfmath{\zeta}$ does not vary with time, $\zeta_t= \zeta^*$ for all $t$, and consider the steady-state behavior of the mean-field model. 
We denote $y_\infty = \lim_{t\to\infty} y_t$, which is the steady-state probability that a pool is on, for the model with transition law ${\check{P}}_{ \zeta^*}$. This can be approximated using \Proposition{t:etaSecondOrder}: \[ y_\infty = {\sf P}\{ \text{Pump is operating} \} \approx \eta_0 + \kappa^2 \zeta^* \] From the viewpoint of the BA, there is a value $G^*$ of desired consumption by all the pools. If $\GNoplus>0$ denotes the consumption of one pool pump in operation, and if there are $N$ pools in total, then the desired steady-state probability is $y_\infty = G^*/(N \GNoplus )$. This translates to a corresponding value of $ \zeta^*$, \begin{equation} \zeta^* \approx \frac{1}{\kappa^2} \Bigl[ \frac{1}{\GNoplus} \frac{G^*}{N} - \eta_0 \Bigr] = \frac{1}{\kappa^2} \frac{1}{\GNoplus} \frac{\widetilde G}{N} \label{e:zGapprox} \end{equation} where $\widetilde G=G^*-G_0$, with $G_0 = \GNoplus N \eta_0 $, the control-free value obtained with $ \zeta^*=0$. \begin{figure} \Ebox{1}{FRandPZ_TAC.pdf} \vspace{-.05cm} \caption{Frequency response and pole-zero plot for the linearized model $C[Iz-A]^{-1}B$} \vspace{-.25cm} \label{fig:fr} \vspace{-08pt} \end{figure} Consider now the case in which $\bfmath{\zeta}$ is a function of time. \Fig{fig:fr} shows the Bode plot and pole-zero plot for the linear model \eqref{e:LSSmfg}. The transfer function from $\bfmath{\zeta}$ to $\bfmath{\gamma}$ is BIBO stable and minimum phase. \subsection{Super-sampling} \label{sec:sim} Recall the control architecture described at the start of \Section{s:mfg}. At any given time, the desired power consumption/curtailment is determined by the BA based on its knowledge of dispatchable and uncontrollable generation, as well as prediction of load. This is passed through a band-pass filter and scaled appropriately based on the proportion of ancillary service provided by the pools, and the average power consumption of pool pumps. The resulting reference signal is denoted $\bfmath{r}$. We introduce here a refinement of the randomized control scheme to account for delay in the system: Suppose each pool checks the regulation signal once per hour. If a percentage of pools turn off in response to the signal, then the power consumption in the grid will drop nearly instantaneously; nevertheless, the control system model will have a one hour delay, which is unacceptable. To obtain a more responsive system we employ ``super-sampling'' at the grid level, which is obtained as follows: We maintain the assumption that each pool checks the regulation signal at intervals of length $T$. However, the pools have no common clock. It is convenient to model super-sampling via binning of time, so that we retain a discrete time model. Let $m>1$ denote a ``super-sampling'' parameter. At the grid level the system is in discrete time, with sampling interval $T/m$. For example, if $T=30$ minutes, then $m=6$ corresponds to a five minute sampling interval. A pool is in class $i$ if the reference signal is checked at times $nT + (i-1)T/m$, with $n\ge 0$, $1\le i\le m$. 
Letting $y_t^i$ denote the fraction of pools in the $i$th class that are operating, the total that are operating at time $t$ is the sum, \[ y_t =\sum_{i=1}^{m} y_t^i \] Let $H_0$ denote the discrete time transfer function using $m=1$, which is simply the transfer function for the linear state space model \eqref{e:LSSmfg}. For general $m$, the transfer function from $\zeta$ to $y$ is \begin{equation} H(z^{-1}) = z^m H_0(z^{-m}) L(z) \label{e:supSampleHvt} \end{equation} where $L$ is the low pass filter, \[ L(z)= \frac{1}{m} \sum_{i=1}^{m} z^{-i} = \frac{1}{m} z^{-1} \frac{1-z^{-m}}{1-z^{-1}} \] The term ``$1/m$'' appears because the pools in each bin contribute this fraction of total ancillary service. In the second representation there is a pole-zero cancellation at $z=1$. The filter $L(z)$ has $m-1$ zeros on the unit circle: All of the solutions to $z^{m}=1$, except for the solution $z=1$. Using super-sampling we have achieved our goal of reducing delay: In real time, the delay in this model is $T/m$ rather than $T$. \subsection{Simulation results} The numerical results described here are based on a stochastic simulation of one million pools ($N=10^6$), using Matlab. This is roughly the number of residential pools in Florida or in California. For the purposes of translation to megawatts, it is assumed that each pool in operation consumes $\GNoplus =1$~kW. Power consumption at time $t$ is assumed to be equal to $N \GNoplus y_t$ (in kW). The super-sampling approach was used in all of these experiments, with the following values of $T$ and $m$ fixed throughout: Each pool checks the regulation signal every $T=30$ minutes. The super-sampling parameter is $m=12$, corresponding to $150$~second sampling intervals at the grid level. The reference signal was chosen to be the BPA regulation signal passed through a low pass filter, shown in \Fig{fig:BPA}. It was found that one million pools could provide far more regulation than the $\pm$~200~MW required at BPA during this week. More experiments were conducted in which the signal was scaled to investigate the limits of regulation from a population of one million pools. We summarize results obtained from two sets of experiments conducted in two scenarios. In the first, the symmetric nominal model was used, with the switching probabilities given in \eqref{e:pvarrho}. The second scenario was based on a shorter cleaning schedule of 8 hours per day, using the switching probabilities defined in \eqref{e:pdefn} with $\alpha=1/3$; the value $\gamma = 6$ was used in both scenarios. The function $ p^\oplus $ using $\alpha=1/3$ is shown in \Fig{fig:plus8}. \begin{figure}[h] \Ebox{.5}{pplus-8hrsTAC.pdf} \vspace{-.05cm} \caption{Nominal model with 8 hour cleaning schedule.} \label{fig:plus8} \vspace{-08pt} \end{figure} In both scenarios, the linearization \eqref{e:LSSmfg} is minimum phase: All zeros of $H_0(z) = C(Iz-A)^{-1}B$ lie strictly within the unit disk in the complex plane. With the introduction of super-sampling, the resulting transfer function \eqref{e:supSampleHvt} also has zeros on the unit circle. In these experiments it was assumed that the BA had perfect measurements of the total power consumption of the population of pools. PI control was used to obtain the signal $\bfmath{\zeta}$: A proportional gain of $20$ and an integral gain of $4$ worked well in all cases. 
That is, the command $\bfmath{\zeta}$ was taken to be \[ \zeta_t = 20 e_t + 4 e^I_t,\qquad \text{with} \ \ e_t = r_t-y_t\ \ \text{and} \ \ e^I_t = \sum_{k=0}^t e_k \] This is of the form $ \zeta_t = \phi_t(\mu_0,\dots,\mu_t)$, $ t\ge 0$, that is required in \Theorem{t:MFL}. A minimal numerical sketch of this closed-loop setup is given below. \begin{figure}[h] \Ebox{1}{YueSimScale.pdf} \vspace{-.25cm} \caption{Closed loop simulation in two scenarios, using two different reference signals. } \vspace{-.05cm} \label{f:YueSimScale} \end{figure} The average proportion of time that a pool is on will be approximately $1/2$ in Scenario 1, and $1/3$ in Scenario 2. Consequently, the class of regulation signals that can be tracked is not symmetric in Scenario 2: The population of pools has more potential for increasing rather than decreasing power consumption. To attempt to quantify this effect, define \textit{potential capacity} as the upper and lower limits of power deviation, subject to the constraint that tracking performance does not degrade, denoted $\{+\text{Demand}, -\text{Supply}\}$. Through simulations it was found that the potential capacity in Scenario~$1$ is $\{+500~\text{MW}, -500~\text{MW}\}$, and $\{+695~\text{MW}, -305~\text{MW}\}$ in Scenario~$2$. Results from four experiments are shown in \Fig{f:YueSimScale}. Subplots (a) and (b) show tracking results using the low-pass filtered signal shown in \Fig{fig:BPA}, and the second row shows tracking performance when the signal magnitude is increased and shifted to match its potential capacity. The tracking performance is remarkable in all cases. In particular, it is surprising that a $\pm400$~MW signal can be tracked, given that the average power consumption of the pools is $500$~MW in Scenario~1. \begin{figure}[h] \Ebox{1}{YueSimScaleWindup.pdf} \vspace{-.25cm} \caption{The impact of exceeding capacity} \vspace{-.05cm} \label{f:YueSimScaleWindup} \end{figure} Subplots (a) and (b) in \Fig{f:YueSimScaleWindup} show what happens when the reference signal exceeds capacity. Two sources of error are evident in these plots. First, the power deviation saturates when all of the $10^6$ pools are turned off, or all are turned on. Second, large tracking errors are observed immediately after saturation. This is a consequence of memory in the PI controller -- what is known as \textit{integrator windup}. To solve this problem, the BA should truncate the regulation signal so that it does not exceed the values $\{+\text{Demand}, -\text{Supply}\}$. Subplots (c) and (d) in \Fig{f:YueSimScaleWindup} use the same regulation signal used in (a), (b), but truncated to meet these capacity constraints. Once again, the tracking is nearly perfect. \paragraph*{Individual risk} These simulation experiments have focused on the service to the grid, and the accuracy of the mean-field model approximations. The fidelity of approximation is remarkable. The next question to ask is: what happens to an individual pool? Because of constraints on the regulation signal, it is found in simulations that the average cleaning time for each pool owner is close to the target values (either 12 or 8 hours per day in the two scenarios treated here). This is to be expected by the Law of Large Numbers. The Central Limit Theorem can be appealed to if we wish to understand the impact of this control architecture on an individual pool. In simulations we find that the empirical distribution of hours cleaned over a four day period appears to be roughly Gaussian, and hence there is a portion of pools that are under-cleaned, and another portion that receive too many hours of cleaning. 
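To make the feedback loop concrete, the following sketch combines the mean-field recursion $\mu_{t+1}=\mu_t{\check{P}}_{\zeta_t}$ with the PI law above. It is a simplified illustration rather than a reproduction of the experiments: the $N$-pool stochastic simulation and super-sampling are omitted, \texttt{tilted\_kernel} refers to the earlier sketch, and the clipping of $\zeta_t$ and the nominal initial condition are our own simplifications.
\begin{verbatim}
import numpy as np

def stationary(P):
    """Invariant probability vector of a stochastic matrix P."""
    w, V = np.linalg.eig(P.T)
    v = np.abs(V[:, np.argmin(np.abs(w - 1.0))].real)
    return v / v.sum()

def closed_loop(P0, U, r, kP=20.0, kI=4.0, zeta_max=4.0):
    """Deterministic mean-field loop mu_{t+1} = mu_t checkP_{zeta_t} with the
    PI law zeta_t = kP*e_t + kI*sum_k e_k, e_t = r_t - y_t.  The clip on zeta
    and the start from the nominal steady state are simplifications."""
    mu = stationary(P0)            # start the population at nominal steady state
    eI, y_hist = 0.0, []
    for rt in r:                   # r: reference for the fraction of pools on
        y = float(mu @ U)          # mean-field output y_t
        e = rt - y
        eI += e
        zeta = float(np.clip(kP * e + kI * eI, -zeta_max, zeta_max))
        checkP, _, _ = tilted_kernel(P0, U, zeta)   # from the earlier sketch
        mu = mu @ checkP
        y_hist.append(y)
    return np.array(y_hist)
\end{verbatim}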
Risk to individual consumers can be reduced or eliminated by using an additional layer of control at the loads. If over a period of several days the system detects over- or under-cleaning, then the control system will ignore the signal sent by the BA. The aggregate impact of this modification represents a small amount of un-modeled dynamics. In preliminary experiments we have seen virtually no impact on tracking; only a small reduction in capacity. Analysis of individual risk is a topic of ongoing research. \section{Conclusions} \label{s:conclude} The simplicity of the MDP solution, and the remarkable accuracy of the LTI approximation for the mean-field model, make this approach appealing for this and many related applications. There are several issues that have not been addressed here: \begin{romannum} \item We do not fully understand the potential cost to consumers in terms of energy, or risk in terms of rare events in which the pool is under- or over-cleaned. It is likely that hard constraints on performance can be put in place without impacting the analysis. \item Does the grid operator need to know the real-time power consumption of the population of pools? Probably not. The BA is interested in regulating frequency, and this may be the only measurement needed for harnessing ancillary service from these loads. The grid frequency passed through a band-pass filter could serve as a surrogate for the measurement $y_t$ assumed in this paper. It may be valuable to have \textit{two} measurements at each load: The BA command, and local frequency measurements. \item How can we engage consumers? The formulation of contracts with customers requires a better understanding of the value of ancillary service, as well as consumer preferences. \end{romannum} \bibliographystyle{IEEEtran} 
{\mbox{$\scriptstyle\cal U$}}% {\mbox{$\scriptscriptstyle\cal U$}}} \def\tilutil{\mathchoice{\mbox{\small$\cal \widetilde U$}}% {\mbox{\small$\cal\widetilde U$}}% {\mbox{$\scriptstyle\cal \widetilde U$}}% {\mbox{$\scriptscriptstyle\cal \tilde U$}}} \def\Ebox#1#2{% \begin{center} \includegraphics[width= #1\hsize]{#2} \end{center}} \def{\cal S}{{\cal S}} \defH{H} \def\kappa^2{\kappa^2} \def\check p{\check p} \def\check \mu{\check \mu} \def{\cal V}{{\cal V}} \def{\cal H}{{\cal H}} \def{\cal V}{{\cal V}} \def\GNoplus{g_{\text{\scriptsize p}}} \def\Fig#1{Fig.~\ref{#1}} \defP^{\text{\tiny tot}}{P^{\text{\tiny tot}}} \def\field{I}{\field{I}} \def\field{R}_+{\field{R}_+} \def\field{R}{\field{R}} \title{Ancillary Service to the Grid\\ Using Intelligent Deferrable Loads } \author{Sean Meyn, Prabir Barooah, Ana Bu\v{s}i\'{c}, Yue Chen, and Jordan Ehre \thanks{This research is supported by the NSF grant CPS-0931416, the Department of Energy Awards DE-OE0000097 \&\ DE-SC0003879, and the French National Research Agency grant ANR-12-MONU-0019. We acknowledge the help of Mark Rosenberg who offered many suggestions to improve the manuscript, and caught several typos in earlier drafts. \thanks{S.M., Y.C.\ and J.E.\ are with the Dept.\ ECE and P.B. is with the Dept.\ MAE at the University of Florida, Gainesville. A.B.\ is with INRIA and the Computer Science Dept. of \'Ecole Normale Sup\'erieure, Paris, France. } } \begin{document} \maketitle \thispagestyle{empty} \begin{abstract} Renewable energy sources such as wind and solar power have a high degree of unpredictability and time-variation, which makes balancing demand and supply challenging. One possible way to address this challenge is to harness the inherent flexibility in demand of many types of loads. Introduced in this paper is a technique for decentralized control for automated demand response that can be used by grid operators as ancillary service for maintaining demand-supply balance. A Markovian Decision Process (MDP) model is introduced for an individual load. A randomized control architecture is proposed, motivated by the need for decentralized decision making, and the need to avoid synchronization that can lead to large and detrimental spikes in demand. An aggregate model for a large number of loads is then developed by examining the mean field limit. A key innovation is an LTI-system approximation of the aggregate nonlinear model, with a scalar signal as the input and a measure of the aggregate demand as the output. This makes the approximation particularly convenient for control design at the grid level. The second half of the paper contains a detailed application of these results to a network of residential pools. Simulations are provided to illustrate the accuracy of the approximations and effectiveness of the proposed control approach. \end{abstract} \clearpage \section{Introduction} \label{s:intro} Renewable energy penetration is rising rapidly throughout the world, and bringing with it high volatility in energy supply. Resources are needed to compensate for these large fluctuations in power. The federal energy regulatory commission (FERC) in conjunction with generation and utility companies are struggling to find resources, and finding ways to properly compensate for ancillary services that are badly needed by each \textit{balancing authority} (BA) in the U.S.. FERC orders 755 and 745 are examples of their attempts to provide incentives. \notes{the utilities are often victims in all of this -- I hope BA is o.k. -spm\\ Expand on FERC. 
Also see refs from Toulouse (in commented text) } This paper concerns decentralized control of a large number of electric loads in a power grid. A particular load has a service it is intended to provide -- clean dishes, hot water, or a clean pool. It is assumed that each load has some flexibility in energy consumption. This flexibility is harnessed to provide ancillary services to the power grid to help maintain stability, and to help offset any volatility in the grid because of line or generation outage, or because of the volatile nature of renewable energy. This is commonly called ``demand response'', but the meaning is slightly different here: The tuning of energy consumption is automated, and we assume that the consumers do not suffer any degradation in the service offered by the loads. We argue that most of the load in the U.S. is highly flexible, and this flexibility can be harnessed to provide ancillary service without central control, and without significant impact on the needs of consumers or industry. A defining characteristic of ancillary service is that on average it is a \emph{zero-energy} service, so that the desired power consumption level to be tracked is zero on average. This makes use of deferrable loads particularly attractive as sources of ancillary service. Many utilities already employ demand response programs that use deferrable loads to reduce peak demand and manage emergency situations. Florida Power and Light (FPL), for example, has 780,000 customers enrolled in their \textit{OnCall Savings Program} in which residential air conditioners, water heaters, and pool pumps systems are automatically controlled when needed \cite{FPLsaving}. Today, FPL uses this service only 3--4 times per year \cite{FPLsaving}. While a valuable service to the grid, there is tremendous additional potential from these sources that today is virtually untapped. Nearly all of America's ISOs/RTOs also allow for demand side resources to participate in their regulation and spinning reserve markets, but as of the summer of 2013, only PJM allows aggregation (with approval) \cite{maccapcalkil12}. Growth of these resources in these wholesale markets has helped lower costs per megawatt-hour from 2009 to 2011 \cite{maccapcalkil12}. Still, markets for regulation and spinning reserves from traditional generation sources continue to grow because of increasing dependency on renewable generation. \Fig{fig:BPA} shows the regulation signal for a typical week within the Bonneville Power Authority (BPA)~\cite{BPA}. Its role is analogous to the control signal in the feedback loop in a flight control system. Just like in an aviation control system, the variability seen in this figure is in part a consequence of variability of wind generation in this region. \begin{figure}[h] \vspace{-.15cm} \Ebox{.75}{BPAregulationAndLowPassWeb.pdf} \vspace{-.25cm} \caption{\textit{BPA Balancing Reserves Deployed} --- Ancillary service needs at the BPA during one week in 2013. The maximum is approximately one-tenth of maximum load in this region. } \label{fig:BPA} \vspace{-08pt} \end{figure} We propose to break up a regulation signal into frequency bands for the purposes of ancillary services provisioning by various resources. In prior work it is shown how heating and ventilation systems in commercial buildings can provide service in the high frequency band, corresponding to periods ranging from 3 minutes to one hour \cite{haokowbarmey13,haobarmidmey12,linbarmey13}. 
At the lowest frequencies, an important resource will be flexible manufacturing. An example is Alcoa, which today provides 70~MW of service to MISO by providing control over their aluminum smelting operation in Indiana. Alcoa's service is provided continuously, and provides significant revenue to Alcoa and even greater benefits to the region managed by MISO. The technical content of the paper starts with a control architecture designed to address privacy concerns and communication constraints. It is assumed that an individual load can view a regulation signal, much as we can view BPA's regulation signal online today. To provide ancillary service in a specified frequency band, we argue that it is essential to introduce randomization at each load. Among many benefits, randomization avoids synchronization, much like randomized congestion avoidance protocols in communication networks. First deployed nearly fifty years ago, ALOHA may be the first distributed communication protocol based on randomization. \textit{Random Early Detection} for congestion control was introduced in the highly influential paper \cite{flojac93}. The historical discussion in this paper points to significant research on randomized algorithms beginning in the early 1970s, following wide deployment of ALOHA. Randomized protocols are now standard practice in communication networks \cite{sri04a}. It is likely that randomized algorithms will become a standard feature of the power grid of the future. To formulate a randomized control strategy, a Markovian Decision Process (MDP) model is proposed for an individual load. An aggregate model for a large number of loads is then obtained as a mean field limit. A particular formulation of Todorov~\cite{tod07} is adopted because we can obtain an explicit solution, and because of available tools for analysis borrowed from the theory of large deviations. In particular, a key innovation in the present paper is an LTI-system approximation of the aggregate nonlinear model, which is possible through application of results from \cite{konmey05a}. The scalar input in this linear model is a parameter that appears in the MDP cost function. The LTI approximation is convenient for control design at the grid level: the input becomes the control signal that the BA will broadcast to all the loads, which adjusts a parameter in the randomized policy for the optimal MDP solution at each load. In the second half of this paper we apply these general results to show how pool pumps can be harnessed to obtain ancillary service in a medium frequency band, corresponding to the dashed line in \Fig{fig:BPA}. This is the same BPA regulation signal, passed through a low pass filter. A pool pump is the heart of a pool's filtration system: It runs each day for a period of time ranging from 4 to 24 hours, and consumes over 1~KW of power when in operation \cite{PPDR08}. The ability to control just half of Florida's pool pumps amounts to over 500~MW of power! Much of the control infrastructure is already in place~\cite{hallo06}. Still, constraints and costs must be satisfied. These include run-times per day and per week, the cost of startup and shutdown, as well as the total energy consumption. Moreover, there are privacy concerns and related communication constraints. Consequently, control algorithms must be distributed so that most of the required intelligence resides at individual pool pumps.
In this paper we focus on constraints related to run-times per day, which is critical for keeping the water in the pool clean. Privacy and communication constraints will be addressed through the distributed control architecture. A number of recent works have explored the potential for flexible loads for providing ancillary service. These include commercial building thermostatic loads to provide ancillary service in the time-scale of a few minutes (see \cite{matkoccal13} and refs.\ therein), electric vehicle charging~\cite{macalhis10,tombou10,coupertemdeb12,matkoccal13} that can provide ancillary service in the time scale of a few hours, and our own recent work on harnessing ancillary service from commercial building HVAC~\cite{haokowbarmey13,haobarmidmey12,linbarmey13}. Mean-field games have been employed for analysis of aggregate loads in several recent papers \cite{macalhis10,coupertemdeb12}. See \cite{huacaimal07,borsun12,gasgauleb12} for more on the general theory of mean-field techniques. The work of~\cite{matkoccal13} is most closely related to the present paper, in that the authors also consider an aggregate model for a large collection of loads. The natural state space model is bilinear, and is converted to a linear model through division of the state. The control architecture consists of a centralized control signal computation based on state feedback, and the resulting input is broadcast to the devices. In this paper, intelligence is concentrated at the individual load: An MDP control solution is obtained at each load, but the aggregate behavior is well approximated by a \textit{single-input single-output, linear time-invariant} (SISO-LTI) system. Hence the control problem for the balancing authority can be addressed using classical control design methods. State estimation is not required -- the information required at the BA is an estimate of the proportion of loads that are operating. In the numerical example considered in this paper, the linear system is minimum-phase and stable, which is very helpful for control design. The remainder of the paper is organized as follows. The control solution for a single pool is described in \Section{s:ppcontrol}, along with approximations of the optimal control solution based on general theory presented in the Appendix. The control of the aggregate collection of pools is considered in \Section{s:mfg}. Conclusions and directions of future research are contained in \Section{s:conclude}. \section{Optimal control for a load and for the grid} \label{s:ppcontrol} \begin{figure}[h] \vspace{-.25cm} \Ebox{.7}{ControlArchitectureTAC.pdf} \vspace{-.25cm} \caption{The control architecture: command $\bfmath{\zeta}$ is computed at a BA, and transmitted to each pool pump. The control decision at a load is binary (turn on/off), and is based only on its own state and the signal $\bfmath{\zeta}$.} \label{fig:arch} \vspace{-08pt} \end{figure} \subsection{Control architecture overview} We begin with a description of the control and information architecture that is the subject of this paper. The components of the architecture are illustrated in \Fig{fig:arch}: \begin{romannum} \item There are $N$ homogeneous loads that receive a common scalar command signal from the balancing authority, or BA, denoted $\bfmath{\zeta}=\{\zeta_t\}$ in the figure. Randomization at each load is desirable to avoid synchronization of loads, and also to facilitate analysis of the aggregate system.
It is assumed that each load evolves as a controlled Markov chain: The transition probability for each load is determined by its own state, and the BA signal $\bfmath{\zeta}$. The common dynamics are defined by a controlled transition matrix $\{P_\zeta : \zeta\in\field{R}\}$. For the $i$th load, there is a state process $\bfmath{X}^i$ whose transition probability at time $t$ is given by, \begin{equation} {\sf P}\{X^i_{t+1} = x^+ \mid X^i_t = x^- ,\, \zeta_t=\zeta\} = P_{\zeta}(x^-,x^+) \label{e:Pzeta} \end{equation} where $x^-$ and $x^+$ are possible state-values. The details of the model are described in \Section{s:loadmodel}. \item The BA has measurements of the other two scalar signals shown in the figure: The normalized aggregate power consumption $\bfmath{y}$ and desired deviation in power consumption $\bfmath{r}$. When $\zeta_t=0$ for all $t$, then the aggregate power consumption takes the value $\bfmath{y}^0$. The goal of the BA is a tracking problem: Achieve $y_t\approx y^0 +r_t$ for all $t$. This can be addressed using classical control techniques if the dynamics from $\bfmath{\zeta}$ to $\widetilde{\bfmath{y}}= \bfmath{y}-\bfmath{y}^0$ can be approximated by an LTI system. \end{romannum} The main contributions of this paper are based on the construction of the controlled transition matrix for an individual load, taking into account potentially conflicting goals: The BA desires overall dynamics from $\bfmath{\zeta}$ to $\bfmath{y}$ that facilitate tracking the reference signal $\bfmath{r}$. Each load requires good quality of service. In the case of a pool, the water must be kept clean, and the electricity bill must remain constant over each month. An approach of Todorov \cite{tod07} is adopted to construct the family of transition matrices $\{P_\zeta : \zeta\in\field{R}\}$. They are smooth in the parameter $\zeta$, and a first-order Taylor series approximation gives, for any pair of states $(x^-,x^+) $ \begin{equation} P_\zeta(x^-,x^+) = \exp(\zeta \Gamma(x^-,x^+) +O(\zeta^2)) P_0(x^-,x^+) \label{e:expclE} \end{equation} where $P_0$ denotes the dynamics of a load when $\bfmath{\zeta}\equiv 0$, and $\Gamma$ is a matrix. Based on \eqref{e:clE}, we have \[ \Gamma(x^-,x^+) = \tilutil(x^-) +H (x^+) -H (x^-) \] where the function $H$ is a solution to \textit{Poisson's equation}; a linear equation for the nominal model. This structure leads to the LTI approximation of the input-output dynamics from $\bfmath{\zeta}$ to $\widetilde{\bfmath{y}}$ that is presented in \Proposition{t:linear}. \Section{s:approx} also contains second-order approximations of $P_\zeta$. In \Section{s:mfg} these general techniques are applied to a collection of residential pools. In this example it is found that the LTI model is minimum phase, and that a simple PI controller can be effectively used for the control transfer function $G_c$ shown in \Fig{fig:arch}. \subsection{Load model and design} \label{s:loadmodel} In this section we present a procedure to construct the controlled transition matrix appearing in \eqref{e:Pzeta}. The controlled Markov chain evolves on a finite state space, denoted ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX} = \{x^1,\dots,x^d\}$. The construction is based on an optimal control problem for an individual load, taking into account the needs of the load and the grid. 
It is assumed that a transition matrix $P_0$ is given that models ``control free'' behavior of the Markov chain, and a utility function $\util\colon{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}\to \field{R}$ is used to model the needs of the grid. The optimal control problem will balance average utility and the cost of deviation. Since we focus on a single load, in this subsection the index $i$ in \eqref{e:Pzeta} is dropped, and we denote by $\bfmath{X}=(X_0,X_1,\dots)$ the stochastic process evolving on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ that models this load. In the second half of the paper we will focus on a particular example in which each load is a residential pool pump. The true nominal behavior would be deterministic -- most consumers set the pump to run a fixed number of hours each day. However, the randomized policy is based on a stochastic model for nominal behavior, so we introduce some randomness to define the nominal transition matrix $P_0$. The state space is taken to be the finite set, \begin{equation} {\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}=\{ (m,i) : m\in \{ \oplus,\ominus\} ,\ i\in \{1,\dots,T\} \} \label{e:poolstate} \end{equation} If $X_t = (\ominus,i)$, this indicates that the pool-pump was turned off and has remained off for $i$ time units, and $X_t = (\oplus,i)$ represents the alternative that the pool-pump has been operating continuously for exactly $i$ time units. A state-transition diagram is shown in \Fig{fig:pppDynamics}. The values of $P_0(x,y)$ will be chosen to be nearly $0$ or $1$ for most $x,y\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$. \begin{figure}[h] \Ebox{.55}{pppDynamicsTAC.pdf} \vspace{-.25cm} \caption{State transition diagram for the pool-pump model. } \label{fig:pppDynamics} \vspace{-08pt} \end{figure} The utility function $\util$ on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ is chosen as the indicator function that the pool pump is operating: \[ \util(x) = \sum_i \field{I}\{ x = (\oplus, i) \} \] Whether this actually represents any utility to the grid operator depends on the state of the grid. This will be clarified after we define the optimization problem and its solution. \medbreak \textit{We now return to the general model.} Consider first a finite-time-horizon optimization problem: For a given terminal time $T$, let $p_0$ denote the probability measure on strings of length $T$: \[ p_0(x_1,\dots, x_{T}) = \prod_{i=0}^{T-1} P_0(x_i,x_{i+1}),\qquad x\in {\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}^{T} \] where $x_0\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ is assumed to be given. A fixed scalar $\zeta\in\field{R}$ is interpreted as a \textit{weighting parameter} in the following definition of \textit{total welfare}. For any probability measure $p$, this is defined as the weighted difference, \[ \welf_T(p) = \zeta {\sf E}_p\Bigl[\sum_{t=1}^{T} \util(X_t) \Bigr] -D(p\| p_0) \, \] where the expectation is with respect to $p$, and $D$ denotes relative entropy. Let $p^{T*}$ denote the probability measure that maximizes this expression. 
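The proposition that follows identifies $p^{T*}$ in closed form, and shows that the optimal welfare coincides with the log moment generating function $\Lambda_T(\zeta)$ defined there. As an aside, $\Lambda_T(\zeta)$ is easy to evaluate numerically by a matrix recursion. The sketch below is written in Python (the simulations reported later in the paper were performed in Matlab); the chain, the utility vector, and the helper name are arbitrary choices made only for illustration, and this is not part of the construction used in the paper.
\begin{verbatim}
import numpy as np

def log_mgf(P0, U, zeta, T, x0):
    # Lambda_T(zeta) = log E_{x0}[ exp( zeta * sum_{t=1}^{T} U(X_t) ) ],
    # computed by iterating w <- P0 @ (diag(exp(zeta*U)) @ w) with w = 1
    # initially; after T iterations, w[x0] equals the expectation above.
    U = np.asarray(U, dtype=float)
    w = np.ones(len(U))
    for _ in range(T):
        w = P0 @ (np.exp(zeta * U) * w)
    return np.log(w[x0])

# illustration with an arbitrary two-state chain
P0 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
U = np.array([0.0, 1.0])
print(log_mgf(P0, U, zeta=0.5, T=200, x0=0) / 200)
\end{verbatim}
For large $T$, the ratio $\Lambda_T(\zeta)/T$ approaches the infinite-horizon optimal average welfare $\eta^*_\zeta$ introduced in \eqref{e:etastar} below.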
\begin{proposition} \label{t:twisted} The probability measure $p^{T*}$ is the \textit{twisted distribution}, \begin{equation} p^{T*} (x_1,\dots, x_{T}) = \exp\Bigl(\zeta \sum_{t=1}^{T} \util(x_t) - \Lambda_T(\zeta)\Bigr) p_0 (x_1,\dots, x_{T}) \label{e:pstar} \end{equation} where \begin{equation} \Lambda_T(\zeta) = \log\Bigl\{ {\sf E}\Bigl[ \exp\Bigl(\zeta \sum_{t=1}^{T} \util(X_t) \Bigr) \Bigr] \Bigr\}\,, \label{e:LMGF} \end{equation} and the expectation is with respect to $p_0$. Moreover, $\welf_T(p^{T*})=\Lambda_T(\zeta) $ is the optimal value. \qed \end{proposition} \IEEEproof Optimality of $p^{T*}$ follows from convex duality between the log moment generating function and relative entropy -- \cite[Proposition II.1]{huaunnmeyveesur11} and \cite[Lemma~2.39]{demzei98a}. The formula \eqref{e:LMGF} follows from the fact that $p^{T*}$ sums to unity, so that $\Lambda_T(\zeta)$ can be interpreted as a normalizing constant. The identity $\welf_T(p^{T*}) = \Lambda_T(\zeta)$ follows from the definitions of $\welf_T$ and $p^{T*}$. \qed The probability measure $p^{T*}$ defines a Markov chain on the time interval $\{0,1, \dots, T\}$, but it is not necessarily time-homogeneous. In the infinite horizon case, we would like to find a distribution $p^*$ on infinite sequences that attains the optimal average welfare, \notes{SM to PB: you added the homogeneous statement before the equation, but it already appeared after.} \begin{equation} \eta^*_\zeta=\lim_{T\to\infty} \frac{1}{T} \welf_T(p^{T*}) = \lim_{T\to\infty} \frac{1}{T} \log\Bigl\{ {\sf E}\Bigl[ \exp\Bigl(\zeta \sum_{t=1}^{T} \util(X_t) \Bigr) \Bigr] \Bigr\} \label{e:etastar} \end{equation} A solution to the infinite horizon problem is given by a time-homogenous Markov chain whose transition matrix ${\check{P}}_\zeta$ is easy to compute, based on the solution of an eigenvector problem; these results are summarized in the proposition that follows. The proof of \Proposition{lem:Pcheck-infinite} is given in the Appendix. \begin{proposition} \label{lem:Pcheck-infinite} If $P_0$ is irreducible, an optimizing $p^*$ that achieves \eqref{e:etastar} is defined by a time-homogeneous Markov chain whose transition probability is given by \begin{equation} {\check{P}}_\zeta(x,y) = \frac{1}{\lambda } \frac{1}{ v(x)} {\widehat P}_\zeta(x,y) v(y)\,,\qquad x,y\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX} , \label{e:cPool} \end{equation} where ${\widehat P}_\zeta$ is the scaled transition matrix, \begin{equation} {\widehat P}_\zeta(x,y) = \exp(\zeta \util(x)) P_0(x,y) \,,\qquad x,y\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}, \label{e:scaledTM} \end{equation} and $\lambda,v$ is that eigen-pair corresponding to the eigenvector problem \begin{equation} {\widehat P}_\zeta v = \lambda v \label{e:TodPF} \end{equation} such that $\lambda=\lambda_\zeta>0$ is the unique maximal eigenvalue for ${\widehat P}_\zeta$, $v=v_\zeta$ is unique up to constant multiples, and $v(x)>0$ for each $x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$. In addition, the following bounds hold for each $T$, \begin{equation} \begin{aligned} 0\le \welf_T(p^{T*}) - \welf_T(\check p^{T}) & \le 2 \| h \|_{\text{\small sp}} \\ |T\Lambda - \welf_T(p^{T*}) |& \le \| h \|_{\text{\small sp}} \\ \end{aligned} \label{e:cpOpt} \end{equation} where $h=\log v$, and the span norm is defined by $ \| h \|_{\text{\small sp}} = \max h - \min h$. 
Consequently, the Markov model achieves the optimal average welfare \eqref{e:etastar} with $\eta^*_\zeta=\Lambda$. \qed \end{proposition} \notes{ $\Lambda=\log(\lambda)$PB: is this equality a definition or a result? SM: result - clearer now?} The eigenvector problem~\eqref{e:TodPF} appears in multiplicative ergodic theory \cite{konmey05a}, and also in Todorov's analysis \cite{tod07}. It is shown in \cite{tod07} that the \textit{relative value function} appearing in the average cost optimality equations is the logarithm of the eigenvector: \begin{equation} h^*(x) = \log(v(x)),\qquad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}. \label{e:relative} \end{equation} See also the derivation in \cite{meybarbusehr13} for a variant of this model. Second order Taylor series approximations for $v$ and $\eta^*$ near $\zeta\approx 0$ can be found by borrowing tools from large-deviations theory. Some of these approximation results are new, and are collected together in the next section and in the Appendix. \subsection{Approximations} \label{s:approx} Approximations will be needed for analysis when we extend the model to allow $\zeta$ to change with time. A solution to the eigenvector problem \eqref{e:TodPF} can be represented through a regenerative formula. Let ${\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ be some fixed state that is reachable from each initial condition of the chain, under the transition law $P_0$. That is, the chain is assumed to be ${\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}$-irreducible \cite{MT}. Since the state space is assumed to be finite, it follows that there is a unique invariant probability measure $\pi_0$ for $P_0$. The first return time is denoted, \[ \tau = \min\{ t\ge 1 : X_t = {\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}\}\,. \] Recall that the infinite horizon optimal welfare is given by $\eta^*_\zeta=\log(\lambda)$. From the theory of positive matrices \cite{sen81,num84,konmey05a}, it follows that it is the unique solution to, \begin{equation} 1= {\sf E}_{\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}} \Bigl[\exp \Bigl(\sum_0^{\tau-1} [ \zeta \util(X_t) - \eta^*_\zeta ] \Bigr)\Bigr] \label{e:PFeta} \end{equation} where the subscript indicates that the initial condition is $X(0)={\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}$. Moreover, for each $x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$, the value of $v(x)$ is obtained as the expected sum, with initial condition $X(0)=x$: \begin{equation} v(x)= {\sf E}_x\Bigl[\exp \Bigl(\sum_0^{\tau -1} [ \zeta \util(X_t) - \eta^*_\zeta ] \Bigr) \Bigr] \label{e:PFv} \end{equation} These expectations are each with respect to the nominal transition law $P_0$. A Taylor-series approximation of $\eta^*_\zeta$ is based on two parameters, defined with respect to the nominal model $P_0$ with invariant probability measure $\pi_0$. The first-order coefficient is the steady-state mean of $\util$, \begin{equation} \eta_0=\sum_x \pi_0(x)\util(x) \label{e:eta0} \end{equation} The second-order coefficient is based on the \textit{asymptotic variance} of $\util$ for the nominal model (the variance appearing in the Central Limit Theorem (CLT) for the nominal model).
For this finite state space model this has two similar representations, \begin{equation} \begin{aligned} \kappa^2 &=\lim_{T\to\infty}\frac{1}{T} {\sf E}\Bigl[ \Bigl( \sum_0^{T -1} \tilutil(X_t) \Bigr)^2\Bigr] \\ &= \pi_0({\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}) {\sf E}_{\mathchoice{\bfatom}{\bfatom}{\alpha}{\alpha}}\Bigl[ \Bigl( \sum_0^{\tau -1} \tilutil(X_t) \Bigr)^2\Bigr] \end{aligned} \label{e:varsigma} \end{equation} where $\tilutil = \util-\eta_0$. See \cite[Theorem~17.0.1]{MT} for the CLT, and eqn.~(17.13) of \cite{MT} for the second representation above. Similarly, the following functions of $x$ are used to define a second order Taylor series approximation for $ h^*_\zeta$. The first-order term is the solution to \textit{Poisson's equation} for $P_0$, \begin{equation} H (x) = {\sf E}_x\Bigl[\sum_0^{\tau -1} \tilutil(X_t) \Bigr] \label{e:fish0} \end{equation} The asymptotic variance can be expressed in terms of Poisson's equation \cite{MT,CTCN}: \[ \kappa^2 =\sum_x \pi_0(x) \bigl(2\tilutil(x)H (x) -\tilutil(x)^2 \bigr) \] The second-order term in an approximation of $v$ is another variance, \begin{equation} {\cal S}(x)= {\sf E}_x\Bigl[ \Bigl(\sum_0^{\tau -1} \tilutil(X_t) \Bigr)^2 \Bigr] - \bigl(H (x)\bigr)^2 \,, \quad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}. \label{e:SOvH} \end{equation} \begin{proposition} \label{t:etaSecondOrder} The following hold for the finite state space model in which $P_0$ is irreducible: \begin{romannum} \item The optimal average welfare $\eta^*_\zeta$ is convex as a function of $ \zeta$, and admits the Taylor series expansion, \begin{equation} \eta^*_\zeta = \eta_0 \zeta + \half \kappa^2 \zeta^2 + O( \zeta^3) \label{e:SOeta} \end{equation} \item The mean of $\util$ under the invariant probability measure $\check{\pi}_\zeta$ for ${\check{P}}_\zeta$ is given by, \begin{equation} \sum_x \check{\pi}_\zeta(x)\util(x) = \frac{d}{d\zeta} \eta^*_\zeta \label{e:ppoffProb} \end{equation} This admits the first-order Taylor series approximation \begin{equation} \frac{d}{d\zeta} \eta^*_\zeta = \eta_0 + \kappa^2 \zeta + O( \zeta^2) \label{e:ppoffProbApprox} \end{equation} \item The relative value function \eqref{e:relative} admits the second-order Taylor series approximation, \begin{equation} h^*_\zeta(x) = \zeta H (x) + \half \zeta^2 {\cal S}(x) + O( \zeta^3) \label{e:happrox} \end{equation} \end{romannum} \qed \end{proposition} \IEEEproof Equations \eqref{e:SOeta}--\eqref{e:ppoffProbApprox} follow from the fact that $\eta^*_\zeta = \log(\lambda)$ can be expressed as a cumulative log-moment generating function \cite[Prop. 4.9]{konmey05a}. Convexity follows from the fact that $\eta^*_\zeta$ is the maximum of linear functions of $ \zeta$ (following the linear-program formulation of the ACOE \cite{bor02a}). \notes{The following also might be obtained from extending \cite[Prop. 4.9]{konmey05a}. I don't have a reference, but it is obvious from the representation of $v$} The approximation \eqref{e:happrox} follows from the representation \eqref{e:PFv} for $v$, and the definition $h^*_\zeta = \log(v)$ (see \eqref{e:relative}). \qed The representations in this subsection are useful for analysis, but not for computation. Methods to compute $H $ and ${\cal S}$ are contained in the Appendix. \subsection{Aggregate load model} Consider $N$ loads operating independently under the randomized policy described in the previous section. The state of the $i$th load is denoted $ X^i_t $.
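To make this setup concrete, the following sketch (again Python, while the paper's own simulations are in Matlab) assembles a pool-pump chain of the form shown in \Fig{fig:pppDynamics} with placeholder switching probabilities, builds ${\check{P}}_\zeta$ from the eigenvector problem of \Proposition{lem:Pcheck-infinite}, and steps $N$ independent loads forward. The helper names, the saturation of the sojourn index at its largest value, the uniform initial condition, and the placeholder switching probabilities are assumptions made only for this illustration; the probabilities actually used in the paper are specified in \Section{s:mfg}.
\begin{verbatim}
import numpy as np

def pool_nominal(p_on, p_off):
    # Nominal chain on X = {(m,i)}: indices 0..T-1 are the "on" states
    # (+,1..T), indices T..2T-1 are the "off" states (-,1..T).  The
    # saturation of the sojourn index at T is an assumption of this sketch.
    T = len(p_on)
    P0 = np.zeros((2 * T, 2 * T))
    for i in range(T):
        nxt = min(i + 1, T - 1)
        P0[i, T] = p_off[i]                  # on -> switch off, state (-,1)
        P0[i, nxt] = 1.0 - p_off[i]          # on -> remain on
        P0[T + i, 0] = p_on[i]               # off -> switch on, state (+,1)
        P0[T + i, T + nxt] = 1.0 - p_on[i]   # off -> remain off
    U = np.concatenate([np.ones(T), np.zeros(T)])   # indicator: pump is on
    return P0, U

def twisted_kernel(P0, U, zeta):
    # check-P_zeta built from hat-P_zeta(x,y) = exp(zeta*U(x)) P0(x,y)
    # and its Perron eigen-pair (lambda, v).
    P_hat = np.exp(zeta * U)[:, None] * P0
    w, V = np.linalg.eig(P_hat)
    k = np.argmax(w.real)                     # Perron root is real and maximal
    lam, v = w[k].real, np.abs(V[:, k].real)  # Perron vector, taken positive
    return P_hat * v[None, :] / (lam * v[:, None]), lam, v

def simulate_loads(P, U, N, steps, rng):
    # N independent loads, each stepping with the transition matrix P.
    states = rng.integers(0, len(U), size=N)  # arbitrary initial condition
    y = np.empty(steps)
    for t in range(steps):
        y[t] = U[states].mean()
        cum = np.cumsum(P[states], axis=1)
        cum[:, -1] = 1.0                      # guard against round-off
        states = (rng.random((N, 1)) < cum).argmax(axis=1)
    return y

rng = np.random.default_rng(0)
p = np.linspace(0.02, 0.98, 48)               # placeholder switching probabilities
P0, U = pool_nominal(p, p)
Pz, lam, v = twisted_kernel(P0, U, zeta=1.0)
y = simulate_loads(Pz, U, N=10_000, steps=300, rng=rng)
# stationary "on" fraction predicted by the invariant measure of check-P_zeta
w2, V2 = np.linalg.eig(Pz.T)
pi = np.abs(V2[:, np.argmax(w2.real)].real)
pi /= pi.sum()
print(y[-50:].mean(), pi @ U)
\end{verbatim}
With $\zeta=0$ the empirical ``on'' fraction settles near $\eta_0$ of \eqref{e:eta0}; for $\zeta\neq 0$ it settles near $\sum_x \check{\pi}_\zeta(x)\util(x)$, consistent with \eqref{e:ppoffProb}.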
For large $N$ we have from the Law of Large Numbers, \begin{equation} \begin{aligned} \frac{1}{N}\sum_{i=1}^N \util(X^i_t) & \approx {\sf E} [\util (X_t ) ] \end{aligned} \label{e:AvgCostPool} \end{equation} The expectation and probability on the right are with respect to the optimal transition law ${\check{P}}_\zeta$, where $ \zeta$ is the parameter used in \eqref{e:etastar}. We pose the following centralized control problem: How to choose the variable $ \zeta$ to regulate average utility \textit{in real time}, based on measurements of the average utility, and also a regulation signal denoted $\bfmath{r}$. Let $y_t$ be the fraction of loads that are on at time $t$: \begin{equation} y_t =\frac{1}{N}\sum_{i=1}^N \util(X^i_t), \label{e:y} \end{equation} which is assumed to be observed by the BA. To address the control problem faced by the BA it is necessary to relax the assumption that this parameter is fixed. We let $\bfmath{\zeta}=\{\zeta_0,\zeta_1,\dots\}$ denote a sequence of scalars, which is regarded as an input signal for the control problem faced by the BA. An aggregate model is obtained in two steps. In step 1 the existence of a mean-field limit is assumed: Let $N\to\infty$ to obtain the generalization of \eqref{e:AvgCostPool}, \begin{equation} \lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^N \field{I}\{ X^i_t =x \} = \mu_t(x)\,, \quad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}. \label{e:mfgPool} \end{equation} For a given initial distribution $\mu_0$ on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$, the distribution $\mu_t$ is defined by $\mu_t(x_t) = {}$ \begin{equation} \sum_{x_i\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}} \mu_0(x_0) {\check{P}}_{\zeta_0}(x_0,x_1) {\check{P}}_{\zeta_1}(x_1,x_2) \cdots {\check{P}}_{\zeta_{t-1}}(x_{t-1},x_t) \label{e:muMF} \end{equation} where $x_t$ is an arbitrary state in ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$, and the sum is over all intermediate states. We view $\{\mu_t\}$ as a state process that is under our control through $\bfmath{\zeta}$. Justification for the mean-field limit is contained in \Theorem{t:MFL}. Step 2 is based on the Taylor series approximations surveyed in the previous section to approximate this nonlinear system by a linear state space model with $d$-dimensional state $\bfmath{\Phi}$ and output $\bfmath{\gamma}$. It is defined so that for any time $t$, and any $i$, \[ \begin{aligned} \mu_t(x^i)&=\pi_0(x^i)+\Phi_t(i) + o(\bfmath{\zeta}) \\ \gamma_t &= \tilde{y}_t + o(\bfmath{\zeta}) \end{aligned} \] where $ \tilde{y}_t=y_t-y^0$, with $y^0 = \sum_x \pi_0(x) \util(x)$, and where $o(\bfmath{\zeta})$ is in fact $O(\zeta_0^2+\cdots + \zeta_t^2)$. \notes{PB: why is the IC part of the linearization description? SM: ok now? Remember, some readers might be outside of control area. } \begin{proposition} \label{t:linear} Consider the nonlinear state space model whose state evolution is $\mu_{t+1} = \mu_t {\check{P}}_{\zeta_t}$, and output is $y_t=\sum_x \mu_t(x)\util(x)$. Its unique equilibrium with $\bfmath{\zeta}\equiv 0$ is $\mu_t\equiv \pi_0$ and $y_t\equiv y^0\mathbin{:=} \sum_x \pi_0(x)\util(x)$. 
Its linearization around this equilibrium is given by, \begin{equation} \begin{aligned} \Phi_{t+1} &= A \Phi_t + B \zeta_t \\ \gamma_t &= C \Phi_t \end{aligned} \label{e:LSSmfg} \end{equation} where $A=P^{\hbox{\it\tiny T}}_0$, $C$ is a row vector of dimension $d=|{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}|$ with $C_i= \util(x^i) $ for each $i$, and $B$ is a $d$-dimensional column vector with entries $B_j = \sum_x\pi_0(x) {\cal E}(x,x^j) $, where \begin{equation} {\cal E}(x^i,x^j) = \Bigl[ \tilutil(x^i)+ H (x^j) -H (x^i) \Bigr]P_0(x^i,x^j) \label{e:clE} \end{equation} for each $x^i,x^j\in {\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$. The initial condition is $\Phi_0(i)=\mu_0(x^i)-\pi_0(x^i)$, $1\le i\le d$. The matrix ${\cal E}$ is equal to the derivative, \[ {\cal E}=\frac{d}{d\zeta} P_\zeta \Big|_{\zeta=0} \] Consequently, the formula \eqref{e:clE} implies the approximation \eqref{e:expclE}. \qed \end{proposition} \IEEEproof The formulae for $A$ and $C$ follow from the fact that the system is linear in the state. We have, from \eqref{e:cPool}, \[ {\check{P}}_\zeta(x^i,x^j) = e^{ \zeta\util(x^i) - \eta^*_\zeta - h^*_\zeta(x^i)} P_0(x^i,x^j) e^{h^*_\zeta(x^j)} \] Based on the first order approximation of $h^*_\zeta$ in \Proposition{t:etaSecondOrder} we obtain, \[ {\check{P}}_\zeta(x^i,x^j) \approx e^{\zeta[-H (x^i)+\tilutil(x^i) ]} P_0(x^i,x^j) e^{\zeta H (x^j)} \] where $H $ is a solution to Poisson's equation (with forcing function $\util$) for the nominal model (see \eqref{e:fish0}). Using a first order Taylor series for the exponential then gives, \[ \begin{aligned} {\check{P}}_\zeta(x^i,x^j) &\approx [1-\zeta(H (x^i)-\tilutil(x^i) )] P_0(x^i,x^j)[1+ \zeta H (x^j)] \\ &\approx P_0(x^i,x^j) + \zeta {\cal E}(x^i,x^j) \end{aligned} \] If $\mu \approx \pi_0$ and $ \zeta$ is small, then we can approximate, \[ \mu {\check{P}}_\zeta \approx \mu P_0 + \zeta B^{\hbox{\it\tiny T}} \,, \] where $B$ is the column vector with entries $B_j = \sum_x\pi_0(x) {\cal E}(x,x^j) $. \qed Next we justify the mean-field model \eqref{e:mfgPool}. For the purpose of analysis we lift the state space from the $d$-element set ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX} = \{x^1,\cdots,x^d\}$, to the $d$-dimensional simplex $\textsf{S}$. For the $i^{th}$ load at time $t$, the element $\pi_t^i \in \textsf{S}$ is the degenerate distribution whose mass is concentrated at $x$ if $X^i_t= x$. The average over $N$, denoted $\mu_t^N\in \textsf{S}$, is the empirical distribution, \[ \mu_t^N( x) =\frac{1}{N}\sum_{i=1}^N \pi_t^i(x) =\frac{1}{N}\sum_{i=1}^N \field{I}\{X^i_t =x \} \, , \quad x\in{\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}, \] In the proof of convergence it is assumed that $\bfmath{\zeta}^N$ is obtained using state feedback of the form, \[ \zeta_t^N = \phi_t(\mu_0^N,\dots,\mu_t^N) \] where $\phi_t\colon\textsf{S}^{t+1}\to\field{R}$ is continuous for each $t$, and does not depend upon $N$. The following result establishes convergence. \begin{theorem} \label{t:MFL} Suppose $\mu_0^N \rightarrow \mu_0$ as $ N\rightarrow \infty$, and that the state transition matrix $P_\zeta$ is continuous as a function of $\zeta$. Then for each $t$, \begin{equation} \lim_{N\to\infty} \mu_t^N= \mu_t, \qquad \text{\it with probability one, } \label{e:MFL} \end{equation} where the right hand side denotes the probability measure \eqref{e:muMF}, in which \[ \zeta_t = \phi_t(\mu_0,\dots,\mu_t),\qquad t\ge 0. 
\] \qed \end{theorem} The proof of this result is given at the end of this subsection, and is largely based on a version of the Law of Large Numbers. Let $\{M_{N,k},1\le k \le N\}$ denote a martingale array: This means that ${\sf E}[M_{N,k}|M_{N,1},\cdots M_{N,j}]=M_{N,j}$ for each $N$ and $1 \le j < k \le N$. When $k=N$, we denote $M_N = M_{N,N}$. \begin{proposition} \label{t:MA-LLN} Suppose that $M_{N,k}$ is a martingale array with bounded increments: For some $c_m<\infty$, \[ | M_{N,k+1}-M_{N,k} | \le c_m\qquad \text{for all $k$ and $N$} \] Then the Law of Large Numbers holds: \[ \lim_{N\to\infty} \frac{M_N}{N} = 0, \qquad \text{\it with probability one. } \] \qed \end{proposition} \IEEEproof The Hoeffding-Azuma inequality \cite{mcd98a} gives the following bound: \[ {\sf P}\{ N^{-1} |M_N |\ge t\} \le 2 \exp(- [N t]^2/[2 N c_m^2] ) \] The right-hand side is summable, so the result follows from the Borel-Cantelli Lemma. \notes{Isn't this an amazing bound!?} \qed \Proposition{t:MA-LLN} is applied to show that the sequence of empirical distributions $\mu_t^N$ can be approximated by the mean-field model perturbed by a disturbance that vanishes as $N\to\infty$: \begin{lemma} \label{t:W-is-MA} The empirical distributions $\{\mu_t^N: t\ge 0\}$ obey the recursion \begin{equation} \mu_{t+1}^N = \mu_t^N P_{\zeta_t^N}+W_{t+1}^N, \label{e:empir_dist} \end{equation} in which $W_{t+1}^N=\frac{1}{N}\sum_{i=1}^N \Delta_{t+1}^i$ for a family of vector random variables $\{ \Delta_{t+1}^i\}$. On denoting $M_{N,k}=\sum_{i=1}^k \Delta_t^i$ we have, \begin{romannum} \item $\{M_{N,k}:1\le k\le N\}$ is a martingale array. \item There exists $c_m$ such that $\| M_{N,k}-M_{N,k-1}\| \le c_m$ for all $N$ and all $k$ such that $1< k\leq N$. \end{romannum} \end{lemma} \textit{Proof of \Lemma{t:W-is-MA}}: To establish \eqref{e:empir_dist} we first establish a similar expression for $\{\pi_t^i \}$. For each $i$, the sequence of degenerate distributions $\{\pi_t^i \}$ evolves according to a random linear system, \begin{equation} \pi_{t+1}^i=\pi_t^iG_{t+1}^i \label{e:piG} \end{equation} in which $\pi_t^i$ is interpreted as a $d$-dimensional row vector, and $G_t^i$ is a $d\times d$ matrix with entries $0$ or $1$ only, and $\sum_l G_t^i(x^j,x^l)=1$ for all $j$. It is conditionally independent of $\{\pi_0^i,\cdots,\pi_t^i\}$, given $\zeta^N_t$, with \begin{equation} \label{e:EG=P} {\sf E}[G_{t+1}^i|\pi_0^i, \cdots, \pi_t^i, \zeta^N_t]=P_{\zeta^N_t}. \end{equation} Dependency of $\pi_t^i$, $G_t^i$ on $N$ is suppressed, but we must distinguish $\zeta^N_t$ from its limit $\zeta_t$. The random linear system \eqref{e:piG} can thus be described as a linear system driven by ``white noise'': \begin{equation} \pi_{t+1}^i=\pi_t^iP_{\zeta^N_t}+\Delta_{t+1}^i \label{e:Sys_delta} \end{equation} where $\{\Delta_{t+1}^i=\pi_t^i(G_{t+1}^i -P_{\zeta^N_t}): t\geq1 \}$, which establishes \eqref{e:empir_dist}. The following representation will clarify the remaining analysis: \notes{SM: I need to make its use more apparent} \begin{equation} \text{ $G_t^i={\cal G}(\zeta^N_{t-1},\xi_t^i)$, where $\{ \xi_t^i: t\geq 1, \ i\ge 1\}$ are i.i.d.} \label{e:A1} \end{equation} For $1\le i< N$ and fixed $t$, we define two $\sigma$-algebras: \[ \begin{aligned} {\cal F}_i &= \sigma \{\Delta _t^k, k\le i \} \\ {\cal H}_i&=\sigma\{\pi_{t-1}^{k+1}, \zeta^N_{t-1}, \Delta _t^k, k\le i \} \end{aligned} \] Under \eqref{e:A1} we have the extension of $\eqref{e:EG=P}$, that ${\sf E}[G_t^{i+1}\mid {\cal H}_i]=P_{\zeta^N_{t-1}}$.
Moreover, by construction the random variable $\pi_{t-1}^{i+1}$ is ${\cal H}_i$-measurable. Therefore, \[ {\sf E}[\Delta_t^{i+1}\mid {\cal H}_i]={\sf E}[\pi_{t-1}^{i+1}(G_t^{i+1}-P_{\zeta^N_{t-1}})\mid {\cal H}_i]=0 \] The smoothing property of the conditional expectation, and the construction $ {\cal F}_i \subset {\cal H}_i$, then gives (i), \[ {\sf E}[\Delta_t^{i+1}\mid {\cal F}_i]={\sf E}[{\sf E}[\Delta_t^{i+1}\mid {\cal H}_i]\mid {\cal F}_i]=0 \] From the definition of $\Delta_t^i$ below equation \eqref{e:Sys_delta}, it follows that $\{ \|\Delta_t^i\| \}$ admits a uniform bound. Consequently, $\| M_{N,k}-M_{N,k-1}\|=\| \Delta_t^k\|$ is bounded, which is (ii). \qed \head{Proof of \Theorem{t:MFL}} Denote, for $T\ge 0$, the deviation $ \tilde{\mu}_T^N = \mu_T^N -\mu_T$. We prove by induction on $T$ that $\tilde{\mu}_T^N \to 0$ as $N\to\infty$. This holds by assumption when $T=0$. Suppose now that \eqref{e:MFL} holds for $t\le T$. By continuity of $\phi_t$, it follows that $\zeta_t^N\to \zeta_t$ as $N\to\infty$. We also have by the definitions, \[ \tilde{\mu}_{T+1}^N = \tilde{\mu}_T^NP_{\zeta_T} + \mu_T^N(P_{\zeta_T^N}-P_{\zeta_T}) + W_{T+1}^N \] \Lemma{t:W-is-MA} and \Proposition{t:MA-LLN} imply that $W_{T+1}^N \to0$ as $N\to\infty$. Continuity of $P_\zeta$ then implies that \[ \lim_{N\to\infty} \tilde{\mu}_{T+1}^N = 0 \] \qed \section{Controlling a large number of pools} \label{s:mfg} For the remainder of the paper we apply the results of the previous section to the control of a large population of residential pools. The nominal transition matrix $P_0$ is defined by the probabilities of turning the pump on or off, as illustrated in the state transition diagram \Fig{fig:pppDynamics}. In many of the numerical results described below a symmetric model was chosen for $P_0$ in which $p_i^\oplus=p_i^\ominus$, where $p_i^\oplus \mathbin{:=} {\sf P}\{\text{pump switches on} \,|\, \text{it has been off $i$ hours} \}$. Similarly, $p_i^\ominus \mathbin{:=} {\sf P}\{\text{pump switches off} \,|\, \text{it has been on $i$ hours} \}$. The utility function $\util$ on ${\mathchoice{\hbox{\sf X}}\sfX{\hbox{\scriptsize\sf X}}\smallsfX}$ is chosen as the indicator function that the pool pump is operating: \begin{equation} \util(x) = \sum_i \field{I}\{ x = (\oplus, i) \} \label{e:kappaOff} \end{equation} The parameter $ \zeta$ in \eqref{e:etastar} can be positive or negative; if $ \zeta>0$ this control formulation is designed to provide incentive to turn pumps on. It remains to give numerical values for $p_i^\oplus$ and $p_i^\ominus$, $1\le i\le T$. In the symmetric model, the specification of these probabilities is performed as follows. Fix $\gamma>1$ and define, \[ \varrho_s(x) = \begin{cases} 2^{\gamma-1} x^\gamma & 0\le x\le 1/2 \\ 1- 2^{\gamma-1}(1- x)^\gamma & 1/2\le x\le 1 \end{cases} \] If over a 24 hour day we choose a sampling time $T=30$ minutes, then in the symmetric model we take, \begin{equation} p_i^\oplus = p_i^\ominus =\varrho_s(i/48)\,,\qquad 1\le i\le 48. \label{e:pvarrho} \end{equation} \Fig{fig:plus} shows a plot of the resulting probability $p_i^\oplus$ vs.\ $i$ with $\gamma = 6$. To go beyond the symmetric model, introduce a parameter $\alpha $ intended to represent the fraction of the day that the pool is operating.
We modify $ \varrho_s$ as follows, \[ \varrho_s^+(x) = \varrho_s(x^{\delta_+}),\qquad \varrho_s^-(x) =\varrho_s(x^{\delta_-}), \] where $\delta_+$ is chosen so that $(1-\alpha)^{\delta_+}=0.5$, or $\delta_+ = -1/\log_2(1-\alpha)$, \spm{Please check my work} and similarly $\delta_- = -1/\log_2(\alpha)$. For the same sampling parameters as in the previous example, we then take, \begin{equation} p_i^\oplus =\varrho_s^+(i/48),\quad p_i^\ominus =\varrho_s^-(i/48)\,,\qquad 1\le i\le 48. \label{e:pdefn} \end{equation} As $\gamma\to\infty$, the functions in \eqref{e:pdefn} will converge to step functions corresponding to a deterministic cleaning period of $\alpha \times 24$ hours. We find numerically that the average cleaning period is somewhat smaller when $\alpha <\half$ and $\gamma<\infty$. \begin{figure}[h] \Ebox{.5}{pplusTAC.pdf} \vspace{-.05cm} \caption{Control free behavior of a pool used for numerical studies.} \label{fig:plus} \vspace{-08pt} \end{figure} \subsection{Approximations} The steady-state probability that a pool-pump is in operation is given by \[ \check P\{\text{pool-pump is on}\}= \sum_x \check{\pi}_\zeta(x)\util(x) \] A linear approximation is obtained in \Proposition{t:etaSecondOrder}~(ii): \begin{equation} \check P\{\text{pool-pump is on}\} = \eta_0 + \kappa^2 \zeta + O( \zeta^2) \label{e:ppoffProbApp} \end{equation} A comparison of the true probability and its affine approximation is shown in \Fig{fig:pponprob} for the symmetric model, in which $\eta_0=1/2$. The approximation is very tight for $|\zeta|\le 3$. For larger values of $ \zeta$ the true steady-state probability saturates (approximately $0.9$ as $ \zeta\to +\infty$). \notes{Yue, are all the numerics obtained with $\gamma=6$? \\ Note that it is impossible to approach the values $\check P\{\text{pool-pump is on}\} =1$ or $0$ with this model. \\ conclusion: stick to $|z|\le 3$} \begin{figure}[h] \Ebox{.5}{pponprobTAC.pdf} \vspace{-.2cm} \caption{Approximation of the steady-state probability that a pool-pump is operating under ${\check{P}}$.} \label{fig:pponprob} \end{figure} For fixed $ \zeta$, the controlled model ${\check{P}}$ has the same form as $P_0$, with transformed probability vectors $\check p^\oplus$ and $\check p^\ominus$. \Fig{fig:checkpplus} contains plots of the transformed vector $\check p^\oplus$ for values $ \zeta=0, \pm 2, \pm 4$. The plots of $\check p^\ominus$ are obtained through symmetry. \begin{figure}[h] \Ebox{.65}{checkpplusTAC.pdf} \vspace{-.2cm} \caption{Transformed probability vector $\check p^\oplus$ under ${\check{P}}$.} \vspace{-.2cm} \label{fig:checkpplus} \end{figure} The approximation of the average welfare established in \Proposition{t:etaSecondOrder} is, \begin{equation} \eta^*_\zeta = \eta_0 \zeta+\half \kappa^2 \zeta^2 +O( \zeta^3) \label{e:SOeta_pool} \end{equation} Shown in \Fig{fig:QuadApprox} is a comparison of $\eta_\zeta^*$ with linear and quadratic approximations based on \eqref{e:SOeta_pool}. \begin{figure}[h] \Ebox{.55}{QuadApproxTAC.pdf} \vspace{-.25cm} \caption{The optimal average welfare $\eta_\zeta^*$ and its quadratic approximation.} \label{fig:QuadApprox} \end{figure} The plots in \Fig{fig:QuadUapproxExp} compare the eigenvector $v=e^{h^*_\zeta}$ with the exponential of the quadratic approximation \eqref{e:happrox} given in \Proposition{t:etaSecondOrder}~(iii). The computations of $H$ and ${\cal S}$ were based on the alternative expressions for these functions that are described in \Proposition{t:PoissonDerivatives}.
They are normalized so that the common maxima are equal to unity. The approximation is nearly perfect for the range of $\zeta\in [-4,4]$. \begin{figure}[h] \Ebox{.75}{vposnegTAC.pdf} \caption{Eigenvectors $v_\zeta=e^{h^*_\zeta}$, and their quadratic approximations $\exp( \zeta H (x) + \half \zeta^2 {\cal S}(x)) $. } \label{fig:QuadUapproxExp} \vspace{-13pt} \end{figure} \subsection{Aggregate load model for pool population} Here we examine the linear model \eqref{e:LSSmfg} that will be used by the BA for control synthesis. We begin with an equilibrium analysis in which $\bfmath{\zeta}$ is held constant: Suppose that $\bfmath{\zeta}$ does not vary with time, $\zeta_t= \zeta^*$ for all $t$, and consider the steady-state behavior of the mean-field model. We denote $y_\infty = \lim_{t\to\infty} y_t$, which is the steady-state probability that a pool is on, for the model with transition law ${\check{P}}_{ \zeta^*}$. This can be approximated using \Proposition{t:etaSecondOrder}: \[ y_\infty = {\sf P}\{ \text{Pump is operating} \} \approx \eta_0 + \kappa^2 \zeta^* \] From the viewpoint of the BA, there is a value $G^*$ of desired consumption by all the pools. If $\GNoplus>0$ denotes the consumption of one pool pump in operation, and if there are $N$ pools in total, then the desired steady-state probability is $y_\infty = G^*/(N \GNoplus )$. This translates to a corresponding value of $ \zeta^*$, \begin{equation} \zeta^* \approx \frac{1}{\kappa^2} \Bigl[ \frac{1}{\GNoplus} \frac{G^*}{N} - \eta_0 \Bigr] = \frac{1}{\kappa^2} \frac{1}{\GNoplus} \frac{\widetilde G}{N} \label{e:zGapprox} \end{equation} where $\widetilde G=G^*-G_0$, with $G_0 = \GNoplus N \eta_0 $, the control-free value obtained with $ \zeta^*=0$. \begin{figure} \Ebox{1}{FRandPZ_TAC.pdf} \vspace{-.05cm} \caption{Frequency response and pole-zero plot for the linearized model $C[Iz-A]^{-1}B$.} \vspace{-.25cm} \label{fig:fr} \vspace{-08pt} \end{figure} Consider now the case in which $\bfmath{\zeta}$ is a function of time. \Fig{fig:fr} shows the Bode plot and pole-zero plot for the linear model \eqref{e:LSSmfg}. The transfer function from $\bfmath{\zeta}$ to $\bfmath{\gamma}$ is BIBO stable and minimum phase. \notes{I removed the pole-zero cancellations: The pole-zero cancellations imply that the model cannot be both controllable and observable. The controllability matrix has rank $23$ and the observability matrix rank $15$. This is a 50-state model, so the model is neither controllable nor observable. |} \notes{See commented text for \it Frequency Sweep (Swept Sine)} \subsection{Super-sampling} \label{sec:sim} Recall the control architecture described at the start of \Section{s:ppcontrol}. At any given time, the desired power consumption/curtailment is determined by the BA based on its knowledge of dispatchable and uncontrollable generation as well as prediction of load. This is passed through a band-pass filter and scaled appropriately based on the proportion of ancillary service provided by the pools, and the average power consumption of pool pumps. The resulting reference signal is denoted $\bfmath{r}$. We introduce here a refinement of the randomized control scheme to account for delay in the system: Even if sampling takes place each hour, if a percentage of pools turn off in response to a regulation signal, then the power consumption in the grid will drop nearly instantaneously. Nevertheless, the control system model will have a one hour delay, which is unacceptable.
To obtain a more responsive system we employ ``super-sampling'' at the grid level, which is obtained as follows: We maintain the assumption that each pool checks the regulation signal at intervals of length $T$. However, the pools have no common clock. It is convenient to model super-sampling via binning of time, so that we retain a discrete time model. Let $m>1$ denote a ``super-sampling'' parameter. At the grid-level the system is in discrete time, with sampling interval $T/m$. For example, if $T=30$ minutes, then $m=6$ corresponds to a five minute sampling interval. A pool is in class $i$ if the reference signal is checked at times $nT + (i-1)T/m$, with $n\ge 0$, $1\le i\le m$. Letting $y_t^i$ denote the fraction of pools in the $i$th class that are operating, the total that are operating at time $t$ is the sum, \[ y_t =\sum_{i=1}^{m} y_t^i \] Let $H_0$ denote the discrete time transfer function using $m=1$, which is simply the transfer function for the linear state space model \eqref{e:LSSmfg}. For general $m$, the transfer function from $\zeta$ to $y$ is \spm{This needs to be checked - we need a clear analysis in the paper. } \begin{equation} H(z^{-1}) = z^mH_0(z^{-m}) L(z) \label{e:supSampleHvt} \end{equation} where $L$ is the low pass filter, \[ L(z)= \frac{1}{m} \sum_{i=1}^{m} z^{-i} = \frac{1}{m} z^{-1} \frac{1-z^{-m}}{1-z^{-1}} \] The term ``$1/m$'' appears because the pools in each bin contribute this fraction of total ancillary service. In the second representation there is a pole-zero cancellation at $z=1$. The filter $L(z)$ has $m-1$ zeros on the unit circle: All of the solutions to $z^{m}=1$, except for the solution $z=1$. Using super-sampling we have achieved our goal of reducing delay: In real time, the delay in this model is $T/m$ rather than $T$. \subsection{Simulation results} The numerical results described here are based on a stochastic simulation of one million pools ($N=10^6$), using Matlab. This large number of pools is consistent with Florida or California. For the purposes of translation to megawatts, it is assumed that each pool in operation consumes $\GNoplus =1$~KW. Power consumption at time $t$ is assumed to be equal to $N \GNoplus y_t$ (in KW). \notes{no!! or $10^3 y_t $ (in MW).} The super-sampling approach was used in all of these experiments, with the following values of $T$ and $m$ fixed throughout: Each pool checks the regulation signal every $T=30$ minutes. The super-sampling parameter is $m=12$, corresponding to $150$~second sampling intervals at the grid level. The reference signal was chosen to be the BPA regulation signal passed through a low pass filter, shown in \Fig{fig:BPA}. It was found that one million pools could provide far more regulation than the $\pm$~200~MW required at BPA during this week. More experiments were conducted in which the signal was scaled to investigate the limits of regulation from a population of one million pools. We summarize results obtained from two sets of experiments conducted in two scenarios. In the first, the symmetric model based on the nominal model was used, with switching probability \eqref{e:pvarrho}. The second scenario was based on a shorter cleaning schedule of 8 hours per day, using the switching probabilities defined in \eqref{e:pdefn} with $\alpha=1/3$; the value $\gamma = 6$ was used in both scenarios. The function $ p^\oplus $ using $\alpha=1/3$ is shown in \Fig{fig:plus8}.
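The two switching-probability families are straightforward to reproduce. A minimal sketch follows (Python); the symmetric model \eqref{e:pvarrho} is recovered at $\alpha=1/2$, while $\alpha=1/3$, $\gamma=6$ corresponds to the 8-hour nominal schedule of Scenario~2 (cf.\ \Fig{fig:plus8}). The function names and argument defaults are ad hoc choices for this illustration only; the resulting vectors can serve as the switching probabilities in the sketch of the nominal chain given earlier.
\begin{verbatim}
import numpy as np

def rho_s(x, gamma):
    # rho_s(x) = 2^(gamma-1) x^gamma          on [0, 1/2]
    #          = 1 - 2^(gamma-1) (1-x)^gamma  on [1/2, 1]
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.5,
                    2.0 ** (gamma - 1) * x ** gamma,
                    1.0 - 2.0 ** (gamma - 1) * (1.0 - x) ** gamma)

def switching_probabilities(alpha=0.5, gamma=6, n_bins=48):
    # p_on[i-1]  = rho_s((i/n_bins)^{delta_+}),  delta_+ = -1/log2(1-alpha)
    # p_off[i-1] = rho_s((i/n_bins)^{delta_-}),  delta_- = -1/log2(alpha)
    i = np.arange(1, n_bins + 1) / n_bins
    d_plus = -1.0 / np.log2(1.0 - alpha)
    d_minus = -1.0 / np.log2(alpha)
    return rho_s(i ** d_plus, gamma), rho_s(i ** d_minus, gamma)

p_on, p_off = switching_probabilities(alpha=1/3, gamma=6)   # Scenario 2
\end{verbatim}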
\begin{figure}[h] \Ebox{.5}{pplus-8hrsTAC.pdf} \vspace{-.05cm} \caption{Nominal model with 8-hour cleaning schedule.} \label{fig:plus8} \vspace{-08pt} \end{figure} In both scenarios, the linearization \eqref{e:LSSmfg} is minimum phase: All zeros of $H_0(z) = C(Iz-A)^{-1}B$ lie strictly within the unit disk in the complex plane. With the introduction of super-sampling, the resulting transfer function \eqref{e:supSampleHvt} also has zeros on the unit circle. In these experiments it was assumed that the BA had perfect measurements of the total power consumption of the population of pools. PI control was used to obtain the signal $\bfmath{\zeta}$: A proportional gain of $20$ and an integral gain of $4$ worked well in all cases. That is, the command $\bfmath{\zeta}$ was taken to be \[ \zeta_t = 20 e_t + 4 e^I_t,\qquad \text{with} \ \ e_t = r_t-y_t\ \ \text{\it and} \ \ e^I_t = \sum_{k=0}^t e_k \] This is of the form $ \zeta_t = \phi_t(\mu_0,\dots,\mu_t)$, $ t\ge 0$, that is required in \Theorem{t:MFL}. \begin{figure}[h] \Ebox{1}{YueSimScale.pdf} \vspace{-.25cm} \caption{Closed loop simulation in two scenarios, using two different reference signals. } \vspace{-.05cm} \label{f:YueSimScale} \end{figure} The average proportion of time that a pool is on will be approximately $1/2$ in Scenario 1, and $1/3$ in Scenario 2. Consequently, the class of regulation signals that can be tracked is not symmetric in Scenario 2: The population of pools has more potential for increasing rather than decreasing power consumption. To attempt to quantify this effect, define \textit{potential capacity} as the upper and lower limits of power deviation, subject to the constraint that tracking performance does not degrade, denoted $\{+\text{Demand}, -\text{Supply}\}$. Through simulations it was found that the potential capacity in Scenario~$1$ is $\{+500\,\text{MW}, -500\,\text{MW}\}$, and $\{+695\,\text{MW}, -305\,\text{MW}\}$ in Scenario~$2$. Results from four experiments are shown in \Fig{f:YueSimScale}. Subplots (a) and (b) show tracking results using the low-pass filtered signal shown in \Fig{fig:BPA}, and the second row shows tracking performance when the signal magnitude is increased and shifted to match its potential capacity. The tracking performance is remarkable in all cases. In particular, it is surprising that a $\pm400$~MW signal can be tracked, given that the average power consumption of the pools is $500$~MW in Scenario~1. \begin{figure}[h] \Ebox{1}{YueSimScaleWindup.pdf} \vspace{-.25cm} \caption{The impact of exceeding capacity.} \vspace{-.05cm} \label{f:YueSimScaleWindup} \end{figure} Subplots (a) and (b) in \Fig{f:YueSimScaleWindup} show what happens when the reference signal exceeds capacity. Two sources of error are evident in these plots. First, the power deviation saturates when all of the $10^6$ pools are turned off, or all are turned on. Second, large tracking errors are observed immediately after saturation. This is a consequence of memory in the PI controller -- what is known as \textit{integrator windup}. To solve this problem, the BA should truncate the regulation signal so that it does not exceed the values $\{+\text{Demand}, -\text{Supply}\}$. Subplots (c) and (d) in \Fig{f:YueSimScaleWindup} use the same regulation signal used in (a), (b), but truncated to meet these capacity constraints. Once again, the tracking is nearly perfect. \paragraph*{Individual risk} These simulation experiments have focused on the service to the grid, and the accuracy of the mean-field model approximations.
The fidelity of approximation is remarkable. The next question to ask is, what happens to an individual pool? Because of constraints on the regulation signal, it is found in simulations that the average cleaning time for each pool owner is close to the target values (either 12 or 8 hours per day in the two scenarios treated here). This is to be expected by the Law of Large Numbers. The Central Limit Theorem can be appealed to if we wish to understand the impact of this control architecture on an individual pool. In simulations we find that the empirical distribution of hours cleaned over a four-day period appears to be roughly Gaussian, and hence some pools are under-cleaned, while others receive too many hours of cleaning. Risk to individual consumers can be reduced or eliminated by using an additional layer of control at the loads. If over a period of several days the system detects over- or under-cleaning, then the control system will ignore the signal sent by the BA. The aggregate impact of this modification represents a small amount of un-modeled dynamics. In preliminary experiments we have seen virtually no impact on tracking; only a small reduction in capacity. Analysis of individual risk is a topic of ongoing research. \section{Conclusions} \label{s:conclude} The simplicity of the MDP solution and the remarkable accuracy of the LTI approximation for the mean-field model make this approach appealing for this and many related applications. There are several issues that have not been addressed here: \begin{romannum} \item We do not fully understand the potential cost to consumers in terms of energy, or risk in terms of rare events in which the pool is under- or over-cleaned. It is likely that hard constraints on performance can be put in place without impacting the analysis. \item Does the grid operator need to know the real-time power consumption of the population of pools? Probably not. The BA is interested in regulating frequency, and this may be the only measurement needed for harnessing ancillary service from these loads. The grid frequency passed through a band-pass filter could serve as a surrogate for the measurement $y_t$ assumed in this paper. It may be valuable to have \textit{two} measurements at each load: The BA command, and local frequency measurements. \item How can we engage consumers? The formulation of contracts with customers requires a better understanding of the value of ancillary service, as well as consumer preferences. \end{romannum} \bibliographystyle{IEEEtran}
1,116,691,497,449
arxiv
\section{Introduction} \label{sec:intro} The $^{13}$C ($\alpha$,n) $^{16}$O\ reaction plays an important role in nuclear physics and astrophysics. Many conventional nuclear physics experiments suffer from background which is produced by the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction in carbon buildup on the target although $^{13}$C\ has only a small natural abundance of about 1\%. In addition, the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction may be relevant as radiogenic neutron background in underground laboratories (e.g., \cite{Coo18,Wes17,Mei09,Hea89,Fei68}). Here typical primary energies $E_\alpha$ vary between about 5 and 9 MeV for the uranium and thorium decay chains. As the ($\alpha$,n)\ cross section decreases strongly towards low energies, the relevant thick-target yield is essentially defined by the ($\alpha$,n)\ cross section close and slightly below the primary $E_\alpha$, i.e.\ between about 5 and 8 MeV. (All energies are given as laboratory energies $E_{\alpha,{\rm{lab}}}$ or $E_{n,{\rm{lab}}}$ throughout this paper; exceptions are explicitly stated.) Unfortunately, this energy range above 5 MeV is not well-studied in literature. Much work has been done to measure the $^{13}$C ($\alpha$,n) $^{16}$O\ cross section at very low energies. This energy range is important to determine the stellar $^{13}$C ($\alpha$,n) $^{16}$O\ reaction rate which defines the strength of the main neutron source for the astrophysical $s$-process . The various experimental data sets in the low MeV region \cite{Heil08,Har05,Bru93,Dro93,Bair73,Dav68,Sek67} agree reasonably well, as e.g.\ discussed in the NACRE compilations \cite{NACRE1,NACRE2} and in a recent review \cite{Cri18}. The experimental data by Harissopulos {\it et al.}\ \cite{Har05} (hereafter: Har05) extend the low MeV region up to about 8 MeV and are thus the only experimental basis for the determination of radiogenic neutron yields from the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction. However, these Har05 data have been questioned severely in a recent Comment by Peters \cite{Pet17}. There it is stated that ``the actual cross section above 5 MeV could be almost 50\% lower than reported by Harissopulos {\it et al.}'', and it is pointed out that there is a problem with the neutron detection efficiency in the Har05 data. It is the aim of the present study to further investigate the Har05 data above 5 MeV and to provide a reliable correction to these experimental data. \section{Re-Analysis of the Har05 data} \label{sec:re} The Har05 experiment used a $4 \pi$ thermal $^3$He neutron detector, embedded in a cylindric polyethylene moderator. The determination of the neutron efficiency $\eta$ for such a detector is a complicated problem because $\eta$ depends on the neutron energy. However, this information is lost because of the thermalization of the neutrons in the moderator. It is worth noting that similar problems with the neutron efficiency have been identified in a series of ($\gamma$,n)\ experiments, performed at Livermore and Saclay; a correction to these ($\gamma$,n)\ data was recently provided (e.g., \cite{Var17}). The present study follows the idea of \cite{Var17} to provide improved data from a combination of experimental and theoretical information. In Har05, the neutron efficiency $\eta$ was determined as a function of the neutron energy $E_n$ (in MeV) in their Eq.~(1) from 2 to 9 MeV. It is stated that $\eta$ varies between 31\% at $E_\alpha = 0.8$ MeV and 16\% at $E_\alpha = 8.0$ MeV. 
As pointed out by Peters \cite{Pet17}, the low efficiency $\eta$ at $E_\alpha = 8$ MeV indicates that Har05 assumed that the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction is governed by the ($\alpha$,n$_0$)\ channel, leading to relatively high neutron energies. However, slightly above $E_\alpha \approx 5$ MeV the ($\alpha$,n$_1$) , ($\alpha$,n$_2$) , ($\alpha$,n$_3$) , and ($\alpha$,n$_4$)\ channels open, and depending on the branching, the average neutron energy $E_n$ is significantly lower and the effective neutron detection efficiency $\eta_{\rm{eff}}$ is significantly higher than assumed in Har05. Thus, instead of using the efficiency $\eta_0$ for the ($\alpha$,n$_0$)\ channel, an effective efficiency \begin{equation} \eta_{\rm{eff}} = \sum_{j=0}^{4} \, b_j(E_\alpha) \, \eta_j(E_{n,j}) \label{eq:eta_eff} \end{equation} has to be used where the $b_j$ are the neutron branchings of the ($\alpha$,n$_j$)\ channel at a given $E_\alpha$, and the $\eta_j$ are the energy-dependent detection efficiencies for neutrons from the ($\alpha$,n$_j$)\ channel. For the energy range under study in Har05, the sum in Eq.~(\ref{eq:eta_eff}) runs over the $^{16}$O $0^+$ ground state ($j=0$) and the excited states at $E_x = 6049$ keV ($0^+$), 6130 keV ($3^-$), 6917 keV ($2^+$), and 7117 keV ($1^-$). Finally, this leads to a correction factor $f_{\rm{corr}}$ for the Har05 cross section data: \begin{equation} f_{\rm{corr}} = \frac{\eta_0}{\eta_{\rm{eff}}} \label{eq:f_corr} \end{equation} Obviously, the correction factor is $f_{\rm{corr}} = 1.0$ for energies below 5 MeV, and thus the agreement of the Har05 data with other literature data at low energies is not affected by the present correction. For a vanishing ($\alpha$,n$_0$)\ contribution (and thus low neutron energies around $E_\alpha \approx 8$ MeV) the correction factor will approach its lower limit $f_{\rm{corr}} \approx 0.5$ which results from the given efficiency limits of 31\% at low and 16\% at high neutron energies in Eq.~(1) of Har05. The present study uses the TALYS code \cite{TALYS,TALYS2} to calculate the branching ratios $b_j$ of the ($\alpha$,n$_j$)\ channels. Of course, such a statistical model approach can only be valid on average, and individual resonances in the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction may show a completely different decay branching. But it has been shown recently that a careful selection of TALYS parameters allows one to reproduce ($\alpha$,n)\ cross sections for intermediate \cite{Mohr15,Tal18} and even light nuclei \cite{Mohr17}, at least at energies $E_\alpha$ above a few MeV. The calculated branching ratios $b_j$ as a function of energy $E_\alpha$ are shown in Fig.~\ref{fig:branch}. The correction factor $f_{\rm{corr}}$ is then calculated from Eqs.~(\ref{eq:eta_eff}) and (\ref{eq:f_corr}) using the energy-dependent efficiencies $\eta_j$ from Eq.~(1) of Har05 and the neutron energies $E_{n,j}$ of the ($\alpha$,n$_j$)\ channels from reaction kinematics. $f_{\rm{corr}}$ is also shown in Fig.~\ref{fig:branch}. All numbers (Har05 cross sections, calculated branching ratios $b_j$, efficiencies $\eta_0$ and $\eta_{\rm{eff}}$, correction factor $f_{\rm{corr}}$, and the corrected cross sections) are provided as Supplemental Material to this study \cite{Suppl}.
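To make the correction procedure explicit, the following short Python sketch evaluates Eqs.~(\ref{eq:eta_eff}) and (\ref{eq:f_corr}) for a single energy $E_\alpha$. It is an illustration only: the branching ratios and per-channel efficiencies in the example are placeholders, whereas in the actual analysis the $b_j$ are taken from TALYS and the $\eta_j$ from Eq.~(1) of Har05 evaluated at the kinematic neutron energies $E_{n,j}$.
\begin{verbatim}
# Sketch of the effective efficiency and correction factor;
# the numbers below are illustrative placeholders only.

def effective_efficiency(b, eta):
    """eta_eff = sum_j b_j(E_alpha) * eta_j(E_{n,j})."""
    return sum(b_j * eta_j for b_j, eta_j in zip(b, eta))

def correction_factor(b, eta):
    """f_corr = eta_0 / eta_eff, with eta_0 = eta[0]."""
    return eta[0] / effective_efficiency(b, eta)

# Example: branchings b_0..b_4 and per-channel efficiencies eta_0..eta_4
b   = [0.20, 0.06, 0.46, 0.19, 0.09]   # assumed branching ratios (sum to 1)
eta = [0.18, 0.30, 0.30, 0.32, 0.32]   # assumed efficiencies
print(correction_factor(b, eta))       # < 1 when excited states contribute
\end{verbatim}
In the limit of a vanishing ($\alpha$,n$_0$)\ branching, this factor approaches the lower limit of about 0.5 quoted above.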
\begin{figure}[htb] \includegraphics[bbllx=35,bblly=20,bburx=470,bbury=440,width=0.95\columnwidth,clip=]{branch.eps} \caption{ \label{fig:branch} (Color online) Branching ratios $b_j$ for the ($\alpha$,n$_j$)\ channels of the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction as a function of energy $E_\alpha$ (lower part a) and resulting correction factor $f_{\rm{corr}}$ from Eq.~(\ref{eq:f_corr}) for the cross sections of Har05 (upper part b). } \end{figure} The original cross sections of Har05 are shown in Fig.~\ref{fig:sigma} as dots; the larger diamonds show the corrected data using $f_{\rm{corr}}$ from Eq.~(\ref{eq:f_corr}) and Fig.~\ref{fig:branch}. Further details of Fig.~\ref{fig:sigma} are discussed in the following Sect.~\ref{sec:disc}. \begin{figure}[htb] \includegraphics[bbllx=30,bblly=10,bburx=515,bbury=485,width=0.95\columnwidth,clip=]{sigma.eps} \caption{ \label{fig:sigma} (Color online) Cross section of the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction. The corrected data (blue diamonds) are significantly lower than the original Har05 data (lightblue dots) for energies above the opening of the ($\alpha$,n$_1$)\ channel at $E_\alpha \approx 5$ MeV. The new estimated ($\alpha$,n$_0$)\ cross sections (red triangles) are close to the results which are obtained from the reverse $^{16}$O (n,$\alpha_0$) $^{13}$C\ reaction (orange stars and green squares). Further discussion see text. } \end{figure} \section{Discussion} \label{sec:disc} Up to now, a statistical model calculation (using TALYS) was applied to correct the experimental data of Har05. Fortunately, there are two ways to verify the calculations and the applied correction factor $f_{\rm{corr}}$. The first check uses the recently measured branching ratios $b_j$ by Febbraro {\it et al.}\ \cite{Feb15}. Here a deuterated scintillator was used for neutron spectroscopy, and it was possible to unfold the light response of the scintillator to derive the neutron energies in the $^{13}$C ($\alpha$,n) $^{16}$O\ reaction at $E_\alpha = 7.5$ MeV (see Fig.~8 of \cite{Feb15}). It is found that the ($\alpha$,n$_2$)\ channel dominates which populates the $3^-$ state in $^{16}$O\ at 6130 keV. The ($\alpha$,n$_0$)\ ground state and ($\alpha$,n$_3$)\ $2^+$ (6917 keV) contributions are about a factor of four smaller. Although no absolute efficiency calibration was applied in \cite{Feb15}, the TALYS calculation nicely reproduces the trend with a dominating ($\alpha$,n$_2$)\ channel (46\%), weaker ($\alpha$,n$_0$)\ (20\%) and ($\alpha$,n$_3$)\ (19\%) channels, and minor contributions from the ($\alpha$,n$_1$)\ (6\%) and ($\alpha$,n$_4$)\ (9\%) channels at $E_\alpha = 7.5$ MeV. The measured branching ratios $b_j$ of \cite{Feb15} clearly exclude the assumption in Har05 that the ($\alpha$,n$_0$)\ channel is dominating, and it results that the neutron energies are much lower than assumed in Har05. Consequently, the correction factor $f_{\rm{corr}}$ in Eq.~(\ref{eq:f_corr}) and Fig.~\ref{fig:branch} is confirmed. A second test can be made using experimental data from the reverse $^{16}$O (n,$\alpha$) $^{13}$C\ reaction. The $^{13}$C ($\alpha$,n$_0$) $^{16}$O $_{\rm{g.s.}}$ cross section is directly related to the $^{16}$O (n,$\alpha_0$) $^{13}$C $_{\rm{g.s.}}$ cross section by the reciprocity theorem. The relevant energy range is covered by the (n,$\alpha_0$)\ data by Khryachkov {\it et al.}\ \cite{Khr12} and Giorginis {\it et al.}\ \cite{Gio07} (as provided by EXFOR \cite{EXFOR}, including a correction to the presented data in Fig.~8 of \cite{Gio07}). 
After conversion from (n,$\alpha_0$)\ cross sections to ($\alpha$,n$_0$)\ cross sections, the data of \cite{Khr12} and \cite{Gio07} are also included in Fig.~\ref{fig:sigma}. As expected, at low energies below the opening of the ($\alpha$,n$_1$)\ channel, the converted (n,$\alpha_0$)\ data agree well with the ($\alpha$,n$_0$)\ data of Har05. However, at higher energies the converted (n,$\alpha_0$)\ data are significantly lower than the Har05 data, reaching a discrepancy of up to about one order of magnitude at energies around $7-8$ MeV. This finding again invalidates the approach by Har05 that the ($\alpha$,n$_0$)\ channel is dominating. An attempt is made to estimate the ($\alpha$,n$_0$)\ cross section from the corrected Har05 data and the calculated ground state branching $b_0$ (red triangles in Fig.~\ref{fig:sigma}). These estimated ($\alpha$,n$_0$)\ data are close to the converted (n,$\alpha_0$)\ data of \cite{Gio07} (green squares). At energies above 6 MeV, the estimated data are still slightly higher than the converted (n,$\alpha_0$)\ data; this can be interpreted as evidence that most of the resonances in the ($\alpha$,n)\ data at higher energies preferentially decay to excited states in $^{16}$O , but not to the $^{16}$O\ ground state. Both of the above verification methods confirm that the TALYS calculation of the ground state branching is realistic, with a trend that the real ground state branching may be even lower than the calculated $30\% - 15\%$ above 6.5 MeV. Thus, it becomes obvious that a correction to the Har05 data has to be applied where a ground state branching $b_0 = 1.0$ was assumed. A correction factor $f_{\rm{corr}} \approx 0.65 - 0.55$ is determined above $E_\alpha \approx 6.5$ MeV, with a lower limit of about 0.5 (for a vanishing ground state branching $b_0$ and thus low neutron energies $E_n$). This leads to an uncertainty of the correction factor $f_{\rm{corr}}$ of the order of $10\% - 20\%$. This result is almost independent of details of the $b_j$ ($j \ne 0$) branching ratios towards excited states in $^{16}$O\ because only the ($\alpha$,n$_0$)\ channel leads to neutrons with relatively high energies. The uncertainty of $\eta_{\rm{eff}}$ and $f_{\rm{corr}}$ may be somewhat larger close above the respective ($\alpha$,n$_j$)\ thresholds where the neutron emission in the laboratory is kinematically focused to forward directions. The uncertainty of the correction factor $f_{\rm{corr}}$ is explained in more detail for the energies of 6 MeV and 7.5 MeV, i.e.\ relatively close above the threshold of the ($\alpha$,n$_1$)\ channel and at the energy of the new experimental data of \cite{Feb15}. At 6 MeV, the calculated branching ratios are $b_0 = 0.48$, $b_1 = 0.09$, and $b_2 = 0.43$, leading to an effective efficiency $\eta_{\rm{eff}} = 29.4$\% instead of $\eta_0 = 20.4$\%. The uncertainty of the calculated ground state branching $b_0$ is conservatively assumed to be a factor of two. This leads to an upper limit of $b_0 \approx 1$ and to a lower limit $b_0 = 0.24$. Obviously, for the upper limit of $b_0$ I find $\eta_{\rm{eff}} = 21.0\% \approx \eta_0$. The lower limit of $b_0$ results in an increased $\eta_{\rm{eff}} = 33.6$\%. Consequently, $f_{\rm{corr}} = 0.695^{+0.278}_{-0.087}$. At 7.5 MeV, the corresponding numbers are $b_0 = 0.20$, $b_1 = 0.06$, $b_2 = 0.46$, $b_3 = 0.19$, and $b_4 = 0.09$, leading to $\eta_{\rm{eff}} = 31.4$\% instead of $\eta_0 = 18.2$\%.
The upper and lower limits of $b_0$ (again assuming a factor of two uncertainty for $b_0$) result in a range of $\eta_{\rm{eff}}$ between 28.4\% and 33.0\% and $f_{\rm{corr}} = 0.580^{+0.062}_{-0.027}$. Summarizing, even the assumed significant uncertainty of a factor of 2 for the ground state branching $b_0$ translates to a typical uncertainty of the correction factor $f_{\rm{corr}}$ of the order of $10-20$\%. Note that this result is almost independent of the detailed branching towards the 4 excited states because the excitation energies are within about 1 MeV, and thus the neutron energies are low and very similar for all branchings $b_1$, $b_2$, $b_3$, and $b_4$. Of course, these uncertainties should be considered average uncertainties, i.e.\ uncertainties of the average cross sections over a significant energy interval. Individual resonances (as visible in Fig.~\ref{fig:sigma}) may show a completely different branching than calculated by TALYS. In the extreme case of a resonance with a full ground state branching $b_0 = 1.0$, the correction factor remains unity ($f_{\rm{corr}} = 1.0$) within the energy interval of this resonance. Thus it is not meaningful to provide uncertainties for each data point of the corrected Har05 data. Instead, an overall uncertainty of about 15\% is recommended for yield calculations which average over a sufficiently wide energy interval of at least a few hundred keV. In principle, the experimental approach of Har05 can also be used to provide at least a rough estimate of the neutron energy via the so-called ``ring ratio'': the ratio of the neutron yields in the outer and inner ring of the Har05 neutron detector depends on the neutron energy. Unfortunately, the experimental setup of Har05 used only one ADC for the sum signal of all neutron detectors, and thus no ring ratio can be provided from the Har05 experiment \cite{Har18}. It is also interesting to see that in general the statistical model calculation provides a reasonable agreement (on average) with the experimental $^{13}$C ($\alpha$,n) $^{16}$O\ data (see Fig.~\ref{fig:sigma}). However, the calculation clearly overestimates the experimental data around $E_\alpha \approx 3.5 - 5$ MeV. This energy interval shows a relatively small number of resonances, compared to lower and higher energies. It is not surprising that the agreement between the statistical model calculation and the experimental data becomes better in regions with a higher number of resonances, but even at the highest energies under study between 6 and 8 MeV the calculation is slightly higher than the average of the experimental data. The overestimation of the experimental cross sections by the statistical model does not affect the correction factor $f_{\rm{corr}}$ which depends only on the calculated branching ratios $b_j$. Interestingly, a similar overestimation for the TALYS calculation is also found for new preliminary data of the $^{13}$N($\alpha$,$p$) $^{16}$O\ mirror reaction \cite{Tal18b}. Finally, a brief comparison to R-matrix fits from the literature \cite{Heil08,Kun14} is provided. The fit by Heil {\it et al.}\ \cite{Heil08} did not include the Har05 data, but was constrained by (n,$\alpha$)\ data up to neutron energies of 8.5 MeV. Above about $E_\alpha = 5$ MeV, the fit of the ($\alpha$,n)\ data in Fig.~17 of \cite{Heil08} is lower than the Har05 data, whereas the fit agrees with the Har05 data at lower energies. This result is consistent with the findings of the present study.
The later study by Kunieda {\it et al.}\ \cite{Kun14} uses the Har05 data for fitting. But unfortunately this study focuses on the low-energy region with $E_\alpha < 4.6$ MeV, and no conclusion can be drawn from \cite{Kun14} for the energy range under study in this work. \section{Conclusions} \label{sec:conc} The $^{13}$C ($\alpha$,n) $^{16}$O\ data of Harissopulos {\it et al.}\ \cite{Har05} cover a wide energy range from about 0.8 MeV to 8 MeV. At low energies below the opening of the ($\alpha$,n$_1$)\ channel at about 5 MeV, these data agree well with various literature data. The cross sections between 5 MeV and 8 MeV are important for the estimate of radiogenic neutron background in low-background environments like underground laboratories. In this energy range experimental data are rare, and the experimental data by Harissopulos {\it et al.}\ have been questioned in a Comment by Peters \cite{Pet17}. Following the criticism by Peters, the present study provides a correction to the experimental data which is based on an improved determination of the neutron detection efficiency $\eta_{\rm{eff}}$. Whereas the original study of Harissopulos {\it et al.}\ assumed a dominating ($\alpha$,n$_0$)\ ground state contribution (with resulting high neutron energies and low detection efficiency), the present work finds a dominating ($\alpha$,n$_2$)\ channel, populating the $3^-$ state in $^{16}$O\ (with resulting lower neutron energies and higher detection efficiency). The derived correction factor $f_{\rm{corr}}$ decreases from unity at the opening of the ($\alpha$,n$_1$)\ channel at $E_\alpha \approx 5$ MeV down to about 0.55 at $E_\alpha \approx 8$ MeV. The applied method and the resulting $f_{\rm{corr}}$ are validated by further studies which are based on recent neutron spectroscopy data \cite{Feb15} and on data from the reverse $^{16}$O (n,$\alpha$) $^{13}$C\ reaction \cite{Khr12,Gio07}. The corrected $^{13}$C ($\alpha$,n) $^{16}$O\ cross sections are reliable with uncertainties of about 15\%. A further reduction of uncertainties requires new experiments which should use improved neutron detectors, either with spectroscopic properties \cite{Feb15} or with an almost flat detection efficiency (as e.g.\ suggested in \cite{Uts17}). \acknowledgments I thank R. Talwar and K.\ E.\ Rehm for motivating this study, and S.\ Harissopulos and H.-W.\ Becker for encouraging discussions. This work was supported by NKFIH (K108459 and K120666).
1,116,691,497,450
arxiv
\chapter{Speculative Analysis to Predict Impact of Architectural Decay on the Implementation of Software Systems} \label{sec:prediction} \section{Foundation} \label{sec:emp3_found} This chapter discusses the dissertation's third large study, which aims to solve a prediction problem by using supervised machine learning (ML) techniques. In supervised learning, ML systems learn how to combine input to produce useful predictions on data that have not been seen before. In particular, we would like to predict the impact of architectural decay on the implementation of software systems. In this prediction problem, the input features are architectural smells of a system and the output features, i.e., labels, are the issue- and change-proneness of that system. This section will describe two steps to pre-process the raw data before it is used in supervised ML algorithms. The two pre-processing steps are (1) labeling the implementation's properties and (2) balancing the datasets. \subsection{Labeling Data} \label{sec:labeling} Labeling data is a crucial step to ensure the success of building prediction models. In our prediction problem, the raw information of a system's implementation includes the numbers of issues and changes of each source file. Nonetheless, these numbers cannot be used directly as prediction labels because, intuitively, it would be impossible to build a model which can accurately predict a precise number of issues or changes. Instead, those numbers have to be converted to nominal labels which represent the levels of issue- and change-proneness. The way to convert a set of numeric values to nominal labels depends on the distribution of the numeric values. In our problem, the numbers of issues and changes of a system follow a heavy-tailed distribution \cite{heavy-tailed}. These distributions are often segmented into ranges. Most simply and commonly, a distribution can be divided into two segments, which are the head and the tail. A more sophisticated approach is to divide the distribution into three parts, which are the head, the body, and the tail. This study uses the three-segment division, which represents three levels of proneness: low, medium, and high. This approach is chosen because of its potential to give developers a better estimation of architectural decay's impact. To segment a dataset, this study uses the Pareto principle \cite{pareto}, a popular segmentation method for heavy-tailed distributions. This principle is also widely used in software engineering, particularly value-based software engineering \cite{boehm2006value}. To obtain three segments, the Pareto principle is applied twice, as suggested in the literature \cite{arthur2001six}. To collect the data regarding architectural decay, this study uses the approach already applied in the second study of this dissertation on architectural decay. More specifically, for each version of a subject system, we first collect the list of issues which affect that version. Similar to the second study, only ``resolved'' and ``fixed'' issues are taken into account. Then the files that were changed in order to fix the issues are collected.
For each file, we gather its associated architectural smells (determined by the triad relationships of issues, files, and smells in Figure \ref{fig:issue_smell_mapping}), the number of issues whose fixing commits changed the file, and the total number of changes. After the raw data is collected, it has to be labeled using the Pareto technique mentioned above before being fed to supervised ML algorithms. Specifically, to determine the level of issue-proneness of a source file in a software version, first, the number of issues related to that source file is collected. This is considered one data point. We collect data points for all files in all versions of a software system, and then sort the dataset by the numbers of issues, from low to high. Then, the first 80\% of data points are marked with ``low'' labels. Finally, the next 16\% (80\% of 20\%) and 4\% (20\% of 20\%) of data points are marked with ``medium'' and ``high'' labels, respectively. Similarly, to determine the change-proneness of a source file in a software version, we count the number of commits related to that file and repeat the labeling process as above. Table \ref{tab:data_point_sample} shows a few data samples in our datasets after labeling. All the eleven architectural smells in the second empirical study of this dissertation are used as the input features of this third study. The examples of input features in Table \ref{tab:data_point_sample} are CO (Concern Overload), SPF (Scattered Parasitic Functionality), LO (Link Overload), and DC (Dependency Cycle) smells. The output features, i.e., labels, are the levels of issue- and change-proneness. The two leftmost columns show the versions and the filenames of data points. The next eleven columns are binary features which indicate whether or not the files have a specific smell. ``1'' means the file has the smell, and ``0'' means it does not. The two rightmost columns indicate the issue- and change-proneness of the files. For example, in version 0.20.0 of Hadoop, DFSClient.java has three smells: Scattered Parasitic Functionality, Link Overload, and Dependency Cycle. The file's issue-proneness is high, and its change-proneness is low. \begin{table*}[th] \centering \caption{Data Samples of Hadoop} \label{tab:data_point_sample} \begin{tabular}{|l|l|l|l|l|l|l|p{1.5cm}|p{1.5cm}|} \hline Version & Filename & CO & SPF & LO & DC & ... & Issue-proneness & Change-proneness \\ \hline 0.20.0 & hadoop/dfs/DFSClient.java & 0 & 1 & 1 & 1 & ... & high & low \\ \hline 0.20.0 & hadoop/mapred/JobTracker.java & 1 & 0 & 1 & 0 & ... & medium & medium \\ \hline 0.20.0 & hadoop/tools/Logalyzer.java & 0 & 0 & 0 & 0 & ... & low & low \\ \hline ... & ... & ... & ... & ... & ... & ... & ... & ... \\ \hline \end{tabular} \end{table*} \subsection{Balancing Data} Because of the data's distribution and the labeling approach, the datasets in this third study are mostly unbalanced. Recall from Section \ref{sec:labeling} that the ``low'':``medium'':``high'' ratio of our datasets is 80:16:4 (i.e., 20:4:1). If a dataset with that ratio is used to train a prediction model, the model will tend to predict ``low'' every time, with an 80\% chance of being correct. For this reason, it is very important to balance these datasets to ensure that weighted metrics are not biased by less (or more) frequent labels. This dissertation uses SMOTE \cite{chawla2002smote} to balance the datasets. Details of SMOTE have been described in Section \ref{sec:ml_found}.
For our specific problem, SMOTE is used to oversample ``medium'' by a factor of 5 and ``high'' by a factor of 20. After oversampling, the dataset will be balanced and its ``low'':``medium'':``high'' ratio will be 1:1:1. \section{Research Question and Hypotheses} The two empirical studies of software architectural changes and decay in this dissertation (Section \ref{sec:empirical_change} and Section \ref{sec:empirical_decay}) have answered some key research questions in software architecture community and confirmed the visible impact of architectural decay on software systems' implementation. The decay's impact reveals itself in the form of correlations with systems' issue-proneness and change-proneness. This is the cornerstone for exploring further research questions. Specifically, that finding has provided the intuition for the third study in this dissertation, which attempts to create an architectural-based approach to predicting the decay's impact on a system's implementation. The following hypothesis about the approach has been developed. \textbf{Hypothesis}: {It is possible to construct accurate models to predict the impact of architectural smells on systems' implementation. } To prove this hypothesis, this study particularly focuses on the predictability of the issue-proneness and change-proneness of a system based on its architectural smells. This decision is based on the result of this dissertation's second empirical study, which showed the significant correlations between architectural smells and those two properties. The two following research questions have been defined accordingly. \textit{\textbf{RQ1.}} To what extent can the architectural smells detected in a system help to predict the issue-proneness and change-proneness of that system at a given point in time? The training data used to build the prediction models of a system is collected from different versions of that system during its lifetime. Therefore, if these models can yield high accuracy in predicting issue- and change-proneness, this indicates that architectural smells have consistent impacts on those two properties throughout the system's lifecycle. This is very important because it ensures that the impact of architectural smells is not related to other factors, such as the system's size, which may change during the system's evolution. In addition, a highly accurate prediction model will be useful for developers to foresee the future issue-proneness and change-proneness of newly smell-affected parts of the system. The prediction models will also be useful for the system's maintainers in order to decide when and where to refactor the system. For example, the maintainers can use the models to estimate the reduction of the issue- and change-proneness if they remove some smell instances from the system's implementation. \textit{\textbf{RQ2.}} To what extent do unrelated software systems tend to share properties with respect to issue- and change-proneness? This research question aims to determine whether architectural smells have similar impacts on the implementations of different systems. Specifically, we would like to see if the issue- and change-proneness of a system can be accurately predicted by a ``general-purpose'' model trained by the datasets of other software systems. 
If this hypothesis is confirmed, architectural smell-based models can be reused by developers to predict the issue- and change-proneness of new software systems in the early stages of their development, before sufficiently large numbers of system versions become available. Moreover, we would like to see how different combinations of systems affect the accuracy of the general-purpose models. To answer these two research questions, we need to build different prediction models based on the data regarding architectural smells and then determine how accurate these models are. First, we collect the data regarding architectural decay of the subject systems as well as their issue- and change-proneness. Then we apply different machine learning techniques and use accuracy metrics to evaluate their effectiveness in building prediction models in the context of this study. The models' accuracy will be measured under different architectural views to see how different architectural recovery techniques affect the prediction models. Similar to the first two empirical studies in this dissertation, this study uses three architectural recovery methods: ACDC \cite{tzerpos2000}, ARC \cite{garcia2011enhancing}, and PKG \cite{leempirical}. To evaluate the accuracy of prediction models, we use three widely accepted metrics: precision, recall, and f-score \cite{powers2011evaluation}. Precision is the fraction of correctly predicted labels over all predicted labels. Recall is the fraction of correctly predicted labels over all actual labels. Finally, f-score is the harmonic mean of precision and recall, representing a test's overall accuracy. We use two different approaches to obtain evaluation results. The first approach uses two independent datasets: a training set and a test set. The second approach uses only one dataset with a cross-validation setup. For the second approach, this study uses 10-fold-cross-validation, where the dataset is randomly divided into ten equal-sized subsets. Then we sequentially select one subset and test it against the prediction model built from the other nine subsets. The final result is the mean of ten tests' results. To facilitate the whole process, this study uses ARCADE (Section \ref{sec:arcade})---our framework to study software architecture---to collect raw data, and WEKA \cite{weka}---a well-known ML framework---to pre-process data, build prediction models, and evaluate the models' accuracy. \section{Subject Systems} In order to answer the above research questions, we use the data collected from the subject systems in the second empirical study (Section \ref{sec:empirical_decay}) of this dissertation. Table \ref{tab:subject_systems} shows the list of seven subject systems. One system, Continuum, was excluded from this study because its small number of samples is not appropriate for building prediction models. The data that this study uses include architectural smells detected in recovered architectures, implementation issues collected from the Jira issue repository \cite{jiraclient}, and code commits extracted from GitHub \cite{github}. \begin{comment} \bgroup \def1.25{1.25} \begin{table}[bt \centering \caption{Subject systems used in building prediction models} \scriptsize \begin{tabular}{|l|p{3.5cm}|p{2.25cm}|p{2.25cm}|p{2.25cm}|} \toprule \textbf{System} & \textbf{Domain} & {\textbf{\# Versions}} & \textbf{\# Issues} & \textbf{Avg. LOC} \\ \midrule Camel & Integration F-work & 78 & 9665 & 1.13M \\ CXF & Service F-work & 120 & 6371 & 915K \\ Hadoop & Data Proc.
F-work & 63 & 9381 & 1.96M \\ Nutch & Web Crawler & 21 & 1928 & 118K \\ OpenJPA & Java Persist. & 20 & 1937 & 511K \\ Struts2 & Web App F-work & 36 & 4207 & 379K \\ Wicket & Web App F-work & 72 & 6098 & 332K \\ \bottomrule \end{tabular}% \vspace{1.5mm} \label{tab:subject_systems_predict}% \end{table}% \egroup \end{comment} \section{Results} For each research question, the method employed in validating it and the associated findings will be discussed in the following sections. \subsection{RQ1. To what extent can the architectural smells detected in a system help to predict the issue-proneness and change-proneness of that system at a given point in time?} \begin{table*}[bth] \centering \caption{Predicting Issue-proneness using Decision Table} \label{RQ1.issues_proneness} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{System} & \multicolumn{3}{c|}{ACDC} & \multicolumn{3}{c|}{ARC} & \multicolumn{3}{c|}{PKG} \\ \cline{2-10} & Precision & Recall & F-score & Precision & Recall & F-score & Precision & Recall & F-score \\ \hline Camel & 69.9\% & 68.4\% & 68.9\% & 70.8\% & 67.0\% & 67.5\% & 68.2\% & 62.8\% & 63.4\% \\ \hline CXF & 78.0\% & 76.7\% & 77.1\% & 68.9\% & 68.3\% & 68.3\% & 64.7\% & 63.8\% & 63.8\% \\ \hline Hadoop & 81.2\% & 80.1\% & 80.3\% & 76.6\% & 76.6\% & 75.4\% & 72.8\% & 73.4\% & 72.1\% \\ \hline Nutch & 80.8\% & 71.6\% & 70.9\% & 82.5\% & 82.7\% & 82.3\% & 68.3\% & 52.1\% & 43.7\% \\ \hline OpenJPA & 71.4\% & 68.3\% & 70.8\% & 74.5\% & 73.2\% & 72.6\% & 69.2\% & 67.9\% & 67.3\% \\ \hline Struts2 & 89.2\% & 89.0\% & 89.0\% & 95.0\% & 94.8\% & 94.8\% & 79.1\% & 78.3\% & 78.4\% \\ \hline Wicket & 69.2\% & 70.1\% & 68.8\% & 76.7\% & 77.1\% & 76.5\% & 63.7\% & 65.4\% & 59.6\% \\ \hline Average & 77.1\% & 74.9\% & 75.1\% & 77.9\% & 77.1\% & 76.8\% & 69.4\% & 66.2\% & 64.0\% \\ \hline \end{tabular} \end{table*} In our prediction problem, all the features are binary (recall Table \ref{tab:data_point_sample}). A feature indicates whether or not a file has an architectural smell. For this reason, decision-based techniques are likely to yield a better result than others. Our observation of the evaluation metrics yielded by different classification techniques --- such as decision table \cite{kohavi1995power}, decision tree \cite{quinlan2014c4}, logistic regression \cite{le1992ridge}, naive bayes \cite{john1995estimating} --- also confirms this intuition. This section will only discuss the obtained results of the decision table based models, which generally has the best accuracy among the aforementioned prediction techniques. Table \ref{RQ1.issues_proneness} shows the precisions, recalls and f-scores of the decision table models for predicting the issue-proneness of 7 subject systems. Those values are computed using the 10-fold-cross-validation setup \cite{kohavi1995study}. The left column shows the systems' name and the last row shows the average values across all the systems. For each system, we built three different prediction models based on three sets of architectural smells, which were detected in three architectural views: ACDC, ARC, and PKG. In total, 21 prediction models based on the decision table technique were created and evaluated. In general, the prediction models of PKG yield the lowest accuracy. The models of ACDC and ARC exhibit performance that is up to 15\% better than PKG. Across all the systems, the prediction models of ACDC achieve the precision of at least 69.2\%, the recall of at least 68.4\%, and f-score of at least 68.8\%. 
The corresponding values of the prediction models of ARC are 68.9\%, 67.0\%, and 67.5\%. Notably, the prediction model of Struts2 under the ARC view achieves $\sim$95.0\% in all three metrics. As a result, ARC has the highest average accuracy, which is 77.9\% in precision, 77.1\% in recall, and 76.8\% in f-score. The corresponding values in ACDC are slightly lower, 77.1\% in precision, 74.9\% in recall, and 75.1\% in f-score. These results confirm that architectural smell-based models can accurately predict the issue-proneness of a system. In other words, architectural smells have a consistent impact on a system's implementation with respect to issue-proneness over the system's lifetime. This finding will provide software maintainers with a powerful indicator of an implementation's health. It urges the maintainers to pay more attention to architectural smells that exist in their systems. The maintainers can use the architectural smell-based prediction models to foresee future problems as well as to devise refactoring plans. The relatively poor performance of PKG in answering RQ1 is in line with one finding that emerged from our first empirical study on architectural changes (recall Chapter \ref{sec:empirical_change}). That finding showed that PKG is not as useful to software architects as ACDC and ARC for understanding the actual underlying architectural changes. In this study, PKG again shows that it is not as useful as ACDC and ARC in correctly capturing the impact of the underlying architectural smells. This can be explained by the fact that PKG is a simple architecture recovery technique which only depends on the package structure of subject systems. On the other hand, ACDC and ARC perform clustering based on sophisticated algorithms that take into account the systems' dependencies and concerns. Recall the categorization of architectural smells in Chapter \ref{sec:smells_detection}: two categories out of the four are dependency-based and concern-based smells. This observation suggests future work to examine whether a smell should be detected under a specific architectural view in order to be captured more accurately.
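To make the experimental setup concrete, the following minimal Python sketch mirrors the RQ1 evaluation procedure: a classifier is trained on the binary smell features and assessed with 10-fold cross-validation using weighted precision, recall, and f-score. This is not the WEKA pipeline used in this study; a scikit-learn decision tree merely stands in for the decision table learner, and the file name and column names are hypothetical.
\begin{verbatim}
# Minimal stand-in for the RQ1 evaluation (assumed file/column names).
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_recall_fscore_support

data = pd.read_csv("hadoop_acdc_smells.csv")   # one row per (version, file)
X = data[["CO", "SPF", "LO", "DC"]]            # binary smell indicators (subset)
y = data["issue_proneness"]                    # labels: low / medium / high

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
model = DecisionTreeClassifier(random_state=0) # stand-in for a decision table
pred = cross_val_predict(model, X, y, cv=cv)

precision, recall, fscore, _ = precision_recall_fscore_support(
    y, pred, average="weighted")
print(f"precision={precision:.3f} recall={recall:.3f} f-score={fscore:.3f}")
\end{verbatim}
An analogous setup, with the training and test data drawn from different systems rather than cross-validated within one system, corresponds to the RQ2 experiments discussed below.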
\begin{table*}[t] \centering \caption{Predicting Change-proneness using Decision Table} \label{RQ1.change_proneness} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{System} & \multicolumn{3}{c|}{ACDC} & \multicolumn{3}{c|}{ARC} & \multicolumn{3}{c|}{PKG} \\ \cline{2-10} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & \multicolumn{1}{l|}{F-score} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & \multicolumn{1}{l|}{F-score} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & \multicolumn{1}{l|}{F-score} \\ \hline Camel & 69.9\% & 63.4\% & 63.3\% & 68.0\% & 67.1\% & 63.2\% & 60.3\% & 61.0\% & 61.9\% \\ \hline CXF & 73.7\% & 70.8\% & 78.3\% & 69.7\% & 63.4\% & 64.1\% & 60.8\% & 63.4\% & 62.8\% \\ \hline Hadoop & 78.1\% & 73.2\% & 75.6\% & 74.9\% & 74.8\% & 73.7\% & 67.4\% & 70.0\% & 68.2\% \\ \hline Nutch & 73.1\% & 66.8\% & 67.9\% & 76.3\% & 78.0\% & 82.0\% & 62.2\% & 46.1\% & 44.7\% \\ \hline OpenJPA & 78.3\% & 77.7\% & 73.0\% & 74.3\% & 70.0\% & 66.5\% & 68.2\% & 62.1\% & 64.4\% \\ \hline Struts2 & 89.3\% & 85.8\% & 83.1\% & 87.8\% & 96.7\% & 96.0\% & 71.2\% & 73.7\% & 80.2\% \\ \hline Wicket & 66.6\% & 65.3\% & 65.9\% & 72.1\% & 71.8\% & 73.3\% & 62.7\% & 59.0\% & 56.4\% \\ \hline Average & 75.4\% & 72.7\% & 73.3\% & 74.7\% & 74.5\% & 74.1\% & 64.7\% & 62.2\% & 62.7\% \\ \hline \end{tabular} \end{table*} Like issue-proneness, we used the same approach to evaluate the accuracy of 21 architectural smell based models for predicting change-proneness. Table \ref{RQ1.change_proneness} shows the accuracy of models built using the decision table technique. The models of PKG again have the lowest accuracy. In some systems like CXF, Nutch, and Struts2, the evaluation values in PKG are 10-20\% lower than the corresponding values in the other two views. This is similar to the result of issue-proneness. In the ACDC view, the average precision is 75.4\%, the average recall is 72.7\%, and the average f-score is 73.3\%. The corresponing numbers in the ARC views are 74.7\%, 74.5\% and 74.1\%. These results again are promising and they prove that architectural smells based models can accurately predict the change-proneness of a system. In summary, we can use the historical data of a system regarding its architectural smells, issues, and changes to develop the models which can predict issue- and change-proneness of that system with high accuracy. This result indicates that architectural smells have a consistent impact on systems' implementations. Our architecture-based prediction approach is useful for maintainers to foresee likely future problems in newly smell-impacted parts of the system. The approach could also help in creating maintenance plans in order to effectively reduce the system's issue- and change-proneness. Lastly, ACDC and ARC outperform PKG in predicting both issue- and change-proneness. This again emphasizes the importance of architectural recovery techniques to help developers understand the underlying architecture precisely. \subsection{RQ2. To what extent do unrelated software systems tend to share properties with respect to issue- and change-proneness?} The results of the RQ1-related experiments show that architectural smells have consistent impacts on the issue- and change-proneness of a software system during its lifetime. In that sense, the RQ2 can be considered as an extension of the RQ1, in which we would like to see if architectural smells have consistent impacts across unrelated software systems. 
Specifically, we would like to see if the issue- and change-proneness of a system can be accurately predicted by models trained on data from other, unrelated software systems. To answer this research question, instead of using 10-fold-cross-validation, we sequentially select a system from the seven subject systems as the test system and use its dataset as the test set. The training set is created by combining the datasets of the other systems. To obtain comprehensive results, we conducted different experiments by combining six, five, and four systems (excluding the test system). If we select the other six systems, there is only one combination. However, if we select five out of the six systems, there are six different combinations. We tried all those six combinations and computed the means of precisions and recalls. Similarly, for combining four out of the six systems, we tried all fifteen possible combinations and computed the mean and standard deviation of precisions and recalls. Note that the datasets of different subject systems have different sizes; hence, we have to resample those datasets to the same size before combining them. Tables \ref{tab:rq2_all_data}, \ref{tab:rq2_all_data_arc}, and \ref{tab:rq2_all_data_pkg} summarize the results of all RQ2-related experiments with regard to predicting issue-proneness under ACDC, ARC, and PKG, respectively. The left sides of these tables show the list of systems. The precisions and recalls in 5 different cases are presented: 10-fold-cross-validation (``10-fold'' column) on the test set, models trained by 7 datasets including the test set (``All 7'' column), models trained by 6 other systems' datasets (``6 Others'' column), models trained by 5 other systems' datasets (``5 Others'' column), and models trained by 4 other systems' datasets (``4 Others'' column). Note that the ``5 Others'' and ``4 Others'' columns contain the mean values of the evaluation metrics of the models which are trained on all possible combinations of systems. In summary, besides the 21 issue-proneness models from the RQ1 experiments, we built and evaluated 483 more issue-proneness models to answer RQ2. We found consistent trends in all three architectural views. First, a prediction model built by combining datasets of different software systems, even if the test system is included, has lower precision and recall than the model built for that specific test system. In all three Tables \ref{tab:rq2_all_data}, \ref{tab:rq2_all_data_arc}, and \ref{tab:rq2_all_data_pkg}, the ``10-fold'' columns have the highest precision and recall values. This result can be explained by the intuition that adding more systems' datasets can create a more general-purpose model, but it also adds noise and reduces the model's ability to predict the properties of a specific system. For this reason, if the dataset of a system is available, its prediction models should be trained only on its own dataset. Second, the prediction models trained by the datasets of six systems---excluding the test system---yield relatively high precisions and recalls. They are close to their corresponding values in ``All 7'' columns, and lower than corresponding values in the ``10-fold'' columns by $\sim$10\% or less.
Notably, in some cases, the results in ``5 Others'' are equal or slightly higher the corresponding values in ``6 Others''. Especially in the ARC view, precision values in "5 Others" columns are slightly higher than the ones in "6 Others" for all the systems, and recall values in "5 Others" columns are slight higher than the ones in "6 Others" for five of seven systems. This observation shows that using a very large number of systems to train the model might slightly reduce the prediction models' accuracy because the models become too generic \begin{table}[tb] \centering \caption{Predicting issue-proneness using the datasets of other subject systems - ACDC view} \label{tab:rq2_all_data} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{System} & \multicolumn{5}{c|}{Precision} & \multicolumn{5}{c|}{Recall} \\ \cline{2-11} & 10-fold & All 7 & 6 Others & 5 Others & 4 Others & 10-fold & All 7 & 6 Others & 5 Others & 4 Others \\ \hline Camel & 69.9\% & 63.7\% & 63.5\% & 61.7\% & 43.9\% & 63.4\% & 56.6\% & 56.5\% & 56.4\% & 44.2\% \\ \hline CXF & 78.0\% & 77.1\% & 75.1\% & 74.7\% & 50.0\% & 76.7\% & 73.2\% & 71.7\% & 71.3\% & 43.3\% \\ \hline Hadoop & 81.2\% & 74.8\% & 69.8\% & 70.0\% & 41.3\% & 80.4\% & 73.5\% & 69.9\% & 70.0\% & 42.1\% \\ \hline Nutch & 84.1\% & 74.9\% & 74.9\% & 74.9\% & 41.6\% & 77.3\% & 68.8\% & 68.8\% & 68.8\% & 40.4\% \\ \hline OpenJPA & 71.4\% & 66.7\% & 65.1\% & 65.3\% & 49.9\% & 68.3\% & 63.9\% & 62.0\% & 62.3\% & 49.9\% \\ \hline Struts2 & 89.2\% & 85.3\% & 83.1\% & 82.6\% & 48.8\% & 89.0\% & 85.4\% & 83.2\% & 82.7\% & 43.1\% \\ \hline Wicket & 69.2\% & 63.4\% & 63.4\% & 60.1\% & 48.2\% & 70.1\% & 64.6\% & 64.6\% & 61.5\% & 52.9\% \\ \hline \end{tabular} \end{table} \begin{table}[b] \centering \caption{Predicting issue-proneness using the datasets of other subject systems - ARC view} \label{tab:rq2_all_data_arc} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{System} & \multicolumn{5}{c|}{Precision} & \multicolumn{5}{c|}{Recall} \\ \cline{2-11} & 10-fold & All 7 & 6 Others & 5 Others & 4 Others & 10-fold & All 7 & 6 Others & 5 Others & 4 Others \\ \hline Camel & 70.8\% & 68.1\% & 63.6\% & 63.7\% & 38.1\% & 67.0\% & 61.2\% & 59.9\% & 59.6\% & 37.1\% \\ \hline Cxf & 68.9\% & 62.1\% & 58.0\% & 58.2\% & 45.4\% & 68.3\% & 59.5\% & 55.3\% & 55.0\% & 43.7\% \\ \hline Hadoop & 72.8\% & 75.3\% & 73.7\% & 74.1\% & 38.2\% & 73.4\% & 74.9\% & 70.1\% & 70.4\% & 38.5\% \\ \hline Nutch & 82.5\% & 86.9\% & 81.8\% & 82.3\% & 31.3\% & 82.7\% & 84.1\% & 78.1\% & 79.2\% & 38.2\% \\ \hline OpenJPA & 74.5\% & 71.2\% & 69.0\% & 69.9\% & 32.7\% & 73.2\% & 69.7\% & 67.1\% & 67.7\% & 38.9\% \\ \hline Struts2 & 95.0\% & 92.9\% & 91.8\% & 92.1\% & 34.8\% & 94.8\% & 92.5\% & 91.5\% & 91.9\% & 38.8\% \\ \hline Wicket & 76.7\% & 65.6\% & 64.7\% & 64.9\% & 34.4\% & 77.1\% & 65.9\% & 65.2\% & 65.3\% & 39.0\% \\ \hline \end{tabular} \end{table} \begin{table}[tb] \centering \caption{Predicting issue-proneness using the datasets of other subject systems - PKG view} \label{tab:rq2_all_data_pkg} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{System} & \multicolumn{5}{c|}{Precision} & \multicolumn{5}{c|}{Recall} \\ \cline{2-11} & 10-fold & All 7 & 6 Others & 5 Others & 4 Others & 10-fold & All 7 & 6 Others & 5 Others & 4 Others \\ \hline Camel & 68.2\% & 66.0\% & 64.8\% & 64.7\% & 52.0\% & 62.8\% & 59.6\% & 59.2\% & 59.0\% & 45.2\% \\ \hline Cxf & 64.7\% & 53.6\% & 52.3\% & 52.6\% & 45.9\% & 63.8\% & 50.7\% & 46.6\% & 47.6\% & 43.7\% \\ \hline Hadoop & 76.6\% & 
71.6\% & 69.9\% & 70.1\% & 38.6\% & 76.6\% & 69.6\% & 68.1\% & 68.3\% & 39.4\% \\ \hline Nutch & 68.3\% & 64.6\% & 63.1\% & 63.6\% & 56.5\% & 52.1\% & 48.1\% & 45.2\% & 46.1\% & 41.6\% \\ \hline OpenJPA & 69.2\% & 66.7\% & 65.6\% & 65.7\% & 40.5\% & 67.9\% & 66.1\% & 64.9\% & 65.0\% & 42.4\% \\ \hline Struts2 & 79.1\% & 77.7\% & 74.5\% & 74.9\% & 54.3\% & 78.3\% & 77.5\% & 73.9\% & 74.4\% & 53.7\% \\ \hline Wicket & 63.7\% & 63.7\% & 63.2\% & 62.5\% & 52.2\% & 65.4\% & 65.4\% & 65.2\% & 64.2\% & 52.6\% \\ \hline \end{tabular} \end{table} The third observation is that the precisions and recalls of the prediction models are reduced significantly when the number of training systems is reduced to four. In most of the cases of all three architectural views, the precisions and recalls in ``4 Others'' columns are lower than corresponding values in ``5 Others'' columns by $\sim$15\% or more. This suggests that combining four systems is not enough to build a generic prediction model, if that model is intended to be used on unrelated systems. Tables \ref{tab:5_systems} and \ref{tab:4_systems} show further details of our experiments with the combinations of five and four systems, respectively, under the ACDC view. As mentioned at the beginning of this section, for each test system, there are six possible combinations of five other systems' datasets, and fifteen possible combinations of four other systems' datasets. Tables \ref{tab:5_systems} and \ref{tab:4_systems} show the means and the standard deviations of both the precisions and recalls for each test system. The tables show that the combinations of five systems not only have high means of precisions and recalls but also low standard deviations of both precisions and recalls. The standard deviations of precisions and recalls are less than 5\% when we combine five systems. In contrast, the combinations of four systems have low means and high standard deviations of those two metrics. The standard deviations of precisions and recalls can go up to 10\% in this case. A low standard deviation indicates that the data values tend to be close to the mean, while a high standard deviation indicates that the data values are spread out over a wider range around the mean. These results indicate that the accuracy of prediction models starts converging as we increase the number of systems in the training sets. The more systems we include, the lower the variation is, until the accuracy reaches its maximum value. Based on our experiments, combining five systems generally yields the best results. \begin{table}[tb] \centering \caption{Predicting issue-proneness using combinations of 5 other systems} \label{tab:5_systems} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{System} & \multicolumn{2}{c|}{Precision} & \multicolumn{2}{c|}{Recall} \\ \cline{2-5} & Mean & Std. Dev. & Mean & Std. Dev.
\\ \hline Camel & 61.7\% & 5.0\% & 66.4\% & 1.0\% \\ \hline CXF & 74.7\% & 2.3\% & 71.3\% & 1.7\% \\ \hline Hadoop & 70.0\% & 0.4\% & 70.0\% & 0.3\% \\ \hline Nutch & 74.9\% & 0.0\% & 68.8\% & 0.0\% \\ \hline OpenJPA & 65.3\% & 0.4\% & 62.3\% & 0.5\% \\ \hline Struts2 & 82.6\% & 1.5\% & 82.7\% & 1.5\% \\ \hline Wicket & 60.1\% & 4.0\% & 61.5\% & 3.5\% \\ \hline \end{tabular} \end{table} \begin{table}[tb] \centering \caption{Predicting issue-proneness using combinations of 4 other systems} \label{tab:4_systems} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{System} & \multicolumn{2}{c|}{Precision} & \multicolumn{2}{c|}{Recall} \\ \cline{2-5} & Mean & Std. Dev. & Mean & Std. Dev. \\ \hline Camel & 43.9\% & 8.8\% & 44.2\% & 3.8\% \\ \hline CXF & 50.0\% & 8.5\% & 43.3\% & 3.8\% \\ \hline Hadoop & 41.3\% & 8.1\% & 42.1\% & 9.3\% \\ \hline Nutch & 41.6\% & 7.1\% & 40.4\% & 5.9\% \\ \hline OpenJPA & 49.9\% & 4.9\% & 49.9\% & 2.7\% \\ \hline Struts2 & 48.8\% & 10.0\% & 43.1\% & 9.3\% \\ \hline Wicket & 48.2\% & 7.5\% & 52.9\% & 3.6\% \\ \hline \end{tabular} \end{table} We also found the similar trends in the experiments which attempt to predict the change-proneness of a system using unrelated systems' datasets. In summary, the results of the RQ2-related experiments confirm that software systems tend to share properties with respect to issue- and change-proneness. The accuracy of generic models is less than the one of specific models, however, the gap is just about 10\% or less. This allows developers to use generic models to predict the issue- and change-proneness of new software systems in the early stages of their development, before sufficiently large numbers of system versions become available. Furthermore, our empirical study's results suggest that using at least five systems can create a reliable generic model which can predict the issue- and change-proneness of new software systems with high accuracy. \subsection{Formalization of Architectural Concepts} \label{subsec:smells_concepts} \begin{figure}[t] \centering \includegraphics[scale=0.45]{graphics/simple_arch.pdf} \vspace{-6mm} \caption{A notional software system's architecture. } \label{fig:simple_arch} \end{figure} Figure \ref{fig:simple_arch} shows a notional software architecture $A$ that comprises two components, $C_1$ and $C_2$. Each component contains multiple implementation-level entities. Between entities, links are presented by solid arrows and couplings by dashed lines. In the view of typical recovery methods \cite{garcia2013obtaining}, we represent the structure of a system's architecture as a graph \textit{A} whose vertices represent the system's components \textit{C} and topology represents the connections embodied in the set of links \textit{L} and the set of couplings \textit{Cp} between these components: $A = (\mathit{C}, \mathit{L}, \mathit{Cp})$ Since our study involves a concern-based recovery technique (ARC), we will also formalize this concept. For our purposes, the architecture of a software system can be associated with a nonempty set of topics \textit{T}, which captures functionalities of the system and its components. We define a topic as a probability distribution \textit{Pd} over the system's nonempty set of keywords \textit{W}, whose elements are used to ``describe'' that system (e.g., via comments in source code). By examining the words that have the highest probabilities in a topic, the meaning of that topic may be discerned. 
In this way, a topic can serve as a representation of a \emph{concern} addressed by a system. The set of topics \textit{T} is then a representation of the system's concerns. \vspace{-2mm} \begin{noindent} \begin{center} \begin{tabular}{l} $W=\{w_i \mid i \in \mathbb{N}\}$ ~ $T = \{z_i \mid i \in \mathbb{N} \}$ ~ $z = Pd(W)$\\ \end{tabular} \end{center} \end{noindent} \vspace{-2mm} A component is a tuple comprising the component's internal entities \textit{E} and the probability distribution \textit{$\theta$} over the system's topics \textit{T}. Entities are implementation elements used to build a system. An entity $e$ contains its interface $\mathit{I}$, a set of links $\mathit{L_e}$, and a set of couplings $\mathit{Cp_e}$ to other entities. In OO systems, entities are classes and interfaces are public methods. \vspace{-1mm} \begin{center} \begin{tabular}{ll} $\mathit{C} = \{c_i \mid i \in \mathbb{N}\}$ ~~ $c = (E, \theta_c)$ ~~ \\ $\mathit{E} = \{e_i \mid i \in \mathbb{N}\}$ ~~ $e_i = (I_i, L_{e_i}, Cp_{e_i})$ ~~ $\mathit{I} = \{ie_i \mid i \in \mathbb{N} \}$\\ \end{tabular} \end{center} \vspace{-1mm} Both a link $l$ and a coupling $cp$ consist of a source $src$ and a destination $dst$, which are the components' internal entities involved in an interconnection. Links are \textit{unidirectional}, while couplings are \textit{bidirectional}. The union of the links of all entities is the set of links \textit{L} of the graph $A$. The union of the couplings of all entities is the set of couplings \textit{Cp} of the graph $A$. \begin{noindent} \begin{center} \begin{tabular}{ll} ~~$L = \cup_{i=1}^{n}L_{e_i}$~~~~~~$\mathit{L_e} = \{l_j \mid j \in \mathbb{N}_0\}$ ~~~~~~~$l = (src, dst)$ ~~ \\ ~~$Cp = \cup_{i=1}^{n}Cp_{e_i}$~~$\mathit{Cp_e} = \{cp_j \mid j \in \mathbb{N}_0\}$ ~~ $cp = (src, dst)$ ~~ \\ \end{tabular} \end{center} \end{noindent} \vspace{-1mm} \subsection{Smell Formalization and Detection Algorithms} \label{subsec:formal_arch_smells} One critical issue in smell detection is setting thresholds, i.e., defining the criteria that serve as indicators of smells. We set thresholds by using Interquartile analysis \cite{tukey1977exploratory}, which is a widely used, efficient technique \cite{Lanza:2005:OMP:1076853} for detecting outliers (i.e., smells in our study) in a population without requiring it to have a normal probability distribution. In the Interquartile method, the lower quartile ($q_1$) is the $25^{th}$ percentile, and the upper quartile ($q_3$) is the $75^{th}$ percentile of the data. The interquartile range ($iqr$) is defined as the interval between $q_1$ and $q_3$. $q_1-(1.5*iqr)$ and $q_3+(1.5*iqr)$ are defined as the ``inner fences'' that mark off the ``reasonable'' values from the outliers \cite{Fenton:1998:SMR:580949}. In this section, we will use a shorthand function \textbf{\textit{getHighThreshold()}}, which accepts a list of values and returns the high value of the ``inner fences''.
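As a minimal illustration (not the actual implementation used in this work), the \textbf{\textit{getHighThreshold()}} helper can be sketched in Python as follows; the link counts in the example are hypothetical.
\begin{verbatim}
import numpy as np

def get_high_threshold(values):
    """Upper 'inner fence' of the interquartile analysis: q3 + 1.5 * iqr.
    Values above this threshold are treated as outliers (smell candidates)."""
    q1, q3 = np.percentile(values, [25, 75])
    return q3 + 1.5 * (q3 - q1)

# Hypothetical per-component link counts; only the last component exceeds
# the fence and would be flagged by a detectLO-style check.
link_counts = [3, 4, 4, 5, 6, 6, 7, 8, 9, 42]
print(get_high_threshold(link_counts))
\end{verbatim}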
\begin{algorithm}[!t] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectCO}\xspace} \label{alg:detectbco} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $T$: a set of system concerns} \KwOut{$\mathit{COsmells}: $ a set of Component Concern Overload\xspace instances } $\mathit{COsmells} \leftarrow \emptyset$\; $\mathit{componentConcernCounts} \leftarrow$ initialize all component concern counts to 0\; \For{$c \in C$}{ \label{alg:detectbco:count_topics:start} $T_c \leftarrow \mathit{getConcernsOfComponent(c)}$\; $\mathit{th_{z_c}} \leftarrow \mathit{getHighThreshold(P(T_c))}$\; \label{alg:detectbco:compute_threshold} \For{$z \in T_c$}{ \If{$P(z \mid c) > \mathit{th_{z_c}}$} { $\mathit{componentConcernCounts}[c] \leftarrow \mathit{componentConcernCounts}[c] + 1$\; \label{alg:detectbco:count_topics:end} } } } $\mathit{th_{co}} \leftarrow \mathit{getHighThreshold(componentConcernCounts)}$\; \label{alg:detectbco:th_t} \For{$c \in C$}{ \label{alg:detectbco:detect:start} \If{$\mathit{componentConcernCounts[c]} > \mathit{th_{co}}$} { $\mathit{COsmells} \leftarrow \mathit{COsmells} \cup \{c\}$\;\label{alg:detectbco:detect:end} } } \end{algorithm} \subsubsection{\textit{\underline{Concern Overload (CO)}}} indicates that a component implements an excessive number of concerns \cite{dijkstra1982role}. CO may increase the size of a component, hurting its maintainability. Formally, a component $c$ suffers from this smell iff ~~~~~~~~$ |\; \{z_j \mid (j \in \mathbb{N}) \wedge (P(z_j \mid c) > th_{z_c}) \} \;| > th_{co}$ \noindent where threshold $0 \leq th_{z_c} \leq 1$ indicates the probability above which a topic is considered significantly represented in component $c$, and threshold $th_{co} \in \mathbb{N}$ is the maximum acceptable number of concerns per component. Algorithm \ref{alg:detectbco}, \textbf{detectCO}\xspace, determines which components in the system have CO. \textbf{detectCO}\xspace begins by creating a map, $componentConcernCounts$, where keys are components and values are the numbers of relevant concerns in each component (Lines \ref{alg:detectbco:count_topics:start}-\ref{alg:detectbco:count_topics:end}). While creating the map, threshold $\mathit{th_{z_c}}$ is dynamically computed (Line \ref{alg:detectbco:compute_threshold}) and used to determine the prevalent concerns for each component. \textbf{detectCO}\xspace uses that map to compute threshold $th_{co}$ (Line \ref{alg:detectbco:th_t}), which is then used to determine which components have the CO smell (Lines \ref{alg:detectbco:detect:start}-\ref{alg:detectbco:detect:end}). \subsubsection{\textit{\underline{Dependency Cycle (DC)}}} indicates a set of components whose links form a circular chain, causing changes to one component to possibly affect all other components in the cycle. Such high coupling between components violates the principle of modularity. Formally, this smell occurs in a set of three or more components $\{c_1, \dots, c_k\}$ ($k \geq 3$) iff ~~~~~~$\forall x \,(1 \leq x \leq k) ~ \exists\, l \in L \mid \\ ~~~~~~~~~~~~~((x < k) \implies (l.src \in c_x.\mathit{E} \wedge l.dst \in c_{x+1}.\mathit{E})) \wedge \\ ~~~~~~~~~~~~~((x = k) \implies (l.src \in c_x.\mathit{E} \wedge l.dst \in c_1.\mathit{E}))$ We detect DC smells by identifying \emph{strongly connected subgraphs} in a system's architectural graph $G = (C,L)$. A strongly connected subgraph is one where each vertex is reachable from every other vertex.
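As an illustration, DC instances can be obtained with any off-the-shelf strongly-connected-components routine. The following Python sketch uses networkx purely as a stand-in for our own implementation; the input format (component-level links lifted from entity-level links) is assumed for illustration.

\begin{verbatim}
import networkx as nx

def detect_dc(components, links):
    """Dependency Cycle smells: sets of three or more components
    whose links form a circular chain."""
    g = nx.DiGraph()
    g.add_nodes_from(components)
    # links: iterable of (source_component, destination_component)
    # pairs, one pair per entity-level link between two components.
    g.add_edges_from(links)
    return [scc for scc in nx.strongly_connected_components(g)
            if len(scc) >= 3]
\end{verbatim}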
Well-known algorithms for detecting strongly connected subgraphs \cite{dijkstra1976discipline} can thus be used directly to identify DC smells. \subsubsection{\textit{\underline{Link Overload (LO)}}} is a dependency-based smell that occurs when a component has interfaces involved in an excessive number of links (e.g., call dependencies), affecting the system's separation of concerns and isolation of changes. \begin{comment} \begin{table}[h] \centering \caption{Link Overload\xspace Variants} \scriptsize \begin{tabular}{ccc} \toprule Link Type for Variant &$\mathit{direction(l)}$\\ \toprule Incoming links& $l = (ie_2,ie_1)$ \\ \midrule Outgoing links& $l = (ie_1,ie_2)$\\ \midrule Both incoming and outgoing links& $l = (ie_1,ie_2) \vee l = (ie_2,ie_1)$\\ \bottomrule \end{tabular} \label{tab:lo_variation} \end{table} \end{comment} Formally, a component $c$ suffers from both incoming and outgoing link overload iff ~~$|\, \{l \in L \mid l.src \in c.\mathit{E} \} \,| + |\, \{l \in L \mid l.dst \in c.\mathit{E} \} \,| > th_{\mathit{lo}}$ \noindent where $th_{\mathit{lo}}$ is a threshold indicating the maximum number of links that is considered reasonable for a component. \begin{algorithm}[t] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectLO}\xspace} \label{alg:detectlo} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $L$: links between components} \KwOut{$\mathit{LOsmells}: $ a set of Link Overload\xspace instances } $\mathit{LOsmells} \leftarrow \emptyset$\; $\mathit{numLinks} \leftarrow$ initialize map as empty\; $\mathit{directionality} \leftarrow \{``in",``out",``both"\}$\; \For{$c \in C$}{ \label{alg:detectlo:numlinks:start} \For{$d \in directionality$} { $numLinks[(c,d)] \leftarrow numLinks[(c,d)] + \mathit{getNumLinks(c,d,L)}$\;\label{alg:detectlo:numlinks:end} } } \For{$d \in directionality$} { \label{alg:detectbco:th_lo:start} $\mathit{th_{lo}[d]} \leftarrow \mathit{getHighThreshold(numLinks,d)}$\; \label{alg:detectbco:th_lo:end} } \For{$c \in C$}{ \label{alg:detectlo:detect:start} \For{$d \in directionality$} { \If{$\mathit{getNumLinks(c,d,L)} > \mathit{th_{lo}[d]}$} { $\mathit{LOsmells} \leftarrow \mathit{LOsmells} \cup \{(c,d)\}$\;\label{alg:detectlo:detect:end} } } } \end{algorithm} Algorithm \ref{alg:detectlo}, \textbf{detectLO}\xspace, extracts the LO variants for a set of components $C$ by examining their links $L$. The algorithm first determines the number of incoming and outgoing links per component (Lines \ref{alg:detectlo:numlinks:start}-\ref{alg:detectlo:numlinks:end}). \textbf{detectLO}\xspace then sets the threshold $th_{\mathit{lo}}$ for each variant of LO (Lines \ref{alg:detectbco:th_lo:start}-\ref{alg:detectbco:th_lo:end}). Finally, \textbf{detectLO}\xspace identifies each component and the directionality that indicates the variant of LO from which the component suffers (Lines \ref{alg:detectlo:detect:start}-\ref{alg:detectlo:detect:end}). \subsubsection{\textit{\underline{Unused Interface (UI)}}} is an interface of a system entity that is linked to no other entities. Including entities in a system without any associated use cases violates the principle of incremental development \cite{Fowler:1997:UDA:270005}. Having such an unused entity adds unnecessary complexity to the component and the software system, which, in turn, hinders maintenance.
Formally, a component $c \in C$ contains a UI smell in entity $e \in c.\mathit{E}$ iff ~~~~~~~~~~~~~$ (|e.I| \neq 0) \wedge (\not\exists l \in e.L ~|~ l.dst = e)$ Algorithm \ref{alg:detectui_ub}, \textbf{detectUI}\xspace, detects this smell. \textbf{detectUI}\xspace uses the set of links $L$ to determine whether an interface has been used. The algorithm checks every entity in each component (Lines \ref{alg:detectui_ub:numlinks:start}-\ref{alg:detectui_ub:numlinks:end}), and if an entity has a public interface but no links, the entity and its parent component are added to the UI instances list. \begin{algorithm}[h] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectUI}\xspace} \label{alg:detectui_ub} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $L$: links between components} \KwOut{$\mathit{UIsmells}: $ a set of Unused Interface instances } $\mathit{UIsmells} \leftarrow \emptyset$\; \For{$c \in C$}{ \label{alg:detectui_ub:numlinks:start} \For{$e \in c.E$} { \If{$\mathit{getNumInterfaces(e.I)} > 0 \land getNumLinks(e.L)= 0$} { $\mathit{UIsmells} \leftarrow \mathit{UIsmells} \cup \{(c,e)\}$\; } \label{alg:detectui_ub:numlinks:end} } } \end{algorithm} \subsubsection{\textit{\underline{Sloppy Delegation (SD)}}} occurs when a component delegates to other components functionality it could have performed internally. An example of SD is a component that manages all aspects of an aircraft's current velocity, fuel level, and altitude, but passes that data to an entity in another component that solely calculates that aircraft's burn rate. Such inappropriate separation of concerns complicates the system's data- and control-flow, which, in turn, impacts system maintenance. Formally, SD occurs between components $c_1, c_2 \in C$ iff ~~$\exists l \in L ~|~ (l.src = e_1 \in c_1.E) \wedge (l.dst = e_2 \in c_2.E) \,\wedge \\ ~~~~~~~~~~(outLink(e_2) = 0) \wedge (inLink(e_2) < th_{sd}) \wedge (c_1 \not\equiv c_2)$ \noindent where \textit{outLink(e)} and \textit{inLink(e)} return the numbers of links from and to entity \textit{e}, respectively. Threshold $th_{sd}$ ensures that entity $e_2$ is not a library-type entity. In the strictest setting, $th_{sd} = 2$, meaning that $e_2$'s functionality is used only by $e_1$. Algorithm \ref{alg:detectsd}, \textbf{detectSD}\xspace, requires a threshold $th_{sd}$, which defines the minimum number of in-links to consider a delegation appropriate. The algorithm checks every link in each entity (Lines \ref{alg:detectsd:numlinks:start}-\ref{alg:detectsd:numlinks:end}), and if a link has a \textit{dst} entity that satisfies the checking condition of SD, then the \textit{dst} entity and its parent component are added to the list of SD instances (Line \ref{alg:detectsd:addsd}).
\begin{algorithm}[!h] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectSD}\xspace} \label{alg:detectsd} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $L$: links between components,\\ ~~~~~~~~$\mathit{th_{sd}}$: threshold for delegation relevance} \KwOut{$\mathit{smells}: $ a set of Sloppy Delegation instances } $\mathit{smells} \leftarrow \emptyset$\; \For{$c_1 \in C$}{ \label{alg:detectsd:numlinks:start} \For{$e_1 \in c_1.E$} { \For{$l \in e_1.L$}{ \If{$l.src=e_1$}{\label{alg:detectsd:numlinks:end} $e_2 \leftarrow l.dst$\; $c_2 \leftarrow getParent(e_2)$\; \If{$(e_1 \not\equiv e_2) \land (getOutLink(e_2)=0) \land (getInLink(e_2) <th_{sd})$}{ $\mathit{smells} \leftarrow \mathit{smells} \cup \{((c_1,e_1),(c_2,e_2))\}$\; } \label{alg:detectsd:addsd} } } } } \end{algorithm} \subsubsection{\textit{\underline{Co-change Coupling (CC)}}} is a logical coupling that occurs when changes to an entity of a given component also require changes to an entity in another component. Formally, a component $c \in C$ has a CC iff ~~~~~~~~~~~~~~~~~$\sum_{e_i \in c.E} |e_i.\mathit{Cp}| > th_{cc}$ \noindent where $th_{cc}$ specifies a threshold for an excessively high number of logical couplings. Algorithm \ref{alg:detectdf_cc}, \textbf{detectCC}\xspace, shows how to detect this type of smell. The algorithm first creates a map between components and their total numbers of co-changes (Lines \ref{alg:detectdf_cc:numCouplings:start}-\ref{alg:detectdf_cc:numCouplings:end}). The $getNumCp$ function returns the number of co-changes of an entity. \textbf{detectCC}\xspace uses this map to compute a threshold, $th_{cc}$, which is the high inner-fence value (Line \ref{alg:detectdf_cc:computecc}). Finally, the algorithm visits each component again, checks it against the threshold, and adds detected smell instances to the smell list (Lines \ref{alg:detectdf_cc:detect:start}-\ref{alg:detectdf_cc:detect:end}). \begin{algorithm}[h] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectCC}\xspace} \label{alg:detectdf_cc} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $Cp$: couplings between components} \KwOut{$\mathit{CCsmells}: $ a set of Co-change Coupling instances } $\mathit{CCsmells} \leftarrow \emptyset$\; $\mathit{numCp} \leftarrow$ initialize map as empty\; \For{$c \in C$}{ \label{alg:detectdf_cc:numCouplings:start} \For{$e \in c.E$} { $numCp[c] \leftarrow numCp[c] + \mathit{getNumCp(e.Cp)}$\; } \label{alg:detectdf_cc:numCouplings:end} } $\mathit{th_{cc}} \leftarrow \mathit{getHighThreshold(numCp.values)}$\label{alg:detectdf_cc:computecc}\; \For{$c \in C$}{ \label{alg:detectdf_cc:detect:start} \If{$\mathit{numCp[c]} > \mathit{th_{cc}}$} { $\mathit{CCsmells} \leftarrow \mathit{CCsmells} \cup \{c\}$\; } \label{alg:detectdf_cc:detect:end} } \end{algorithm} \section{Conclusion} \label{sec:conclusion} This paper's contributions are twofold. First, we have developed an approach that can identify parts of a software system that are likely targets of future maintenance activities based on architectural characteristics as well as the change- and issue-proneness of different architectural elements. Second, we have conducted an empirical study that highlights the impact of architectural decay on ten well-known open-source systems.
We leverage the identified correlations between symptoms of architectural decay and reported implementation issues to develop an architecture-based approach that accurately predicts a system's issue- and change-proneness. Our approach has been validated on ten existing systems, considering 11 different types of smells under three different architectural views. This is the first study of its kind and, as such, its results can be treated as a foundation on which subsequent work should build. At the same time, the study has resulted in several important findings regarding the predictive power of architecture-based models. Our study confirmed that architectural smells consistently impact a system's implementation during the system's lifecycle. In other words, the impact does not change significantly with other factors such as system size. This means that the detected architectural smells can help to accurately predict the issue-proneness and change-proneness of a system at any relevant point in time. In turn, such architecture-based prediction can serve as a useful tool for maintainers to recognize future problems associated with newly smell-impacted parts of the system and to plan their activities. As a perhaps more unexpected result, we have shown that unrelated software systems tend to share properties with respect to issue- and change-proneness. This allows developers to use general-purpose models created with the available data from a set of existing systems to predict the properties of systems for which such information is missing. Unsurprisingly, the accuracy of such general-purpose models is lower than that of system-specific models, but not prohibitively so. Our results suggest that it is possible to develop such models with sufficient accuracy to use them as a basis for actionable advice. \looseness-1 It is important to keep in mind that this was an initial attempt at constructing general-purpose prediction models. Our models were trained using all architectural smells and software systems without particular prior planning. Our future work will investigate how to select an appropriate set of systems to improve the accuracy of these models. We will also explore whether further accuracy improvements can be achieved by restricting the types of architectural smells on which the models are trained. \section{Discussion} \looseness-1 Overall, the predictive models we developed provide developers with another tool to check and maintain their software system's health and track technical debt. A straightforward way to identify ``unhealthy'' parts of a system is to look for \emph{long-lived smelly files}, i.e., files that have been involved in architectural smells across a large number of system versions. These files have a high potential to introduce new issues. Figure~\ref{fig:4_struts} shows examples of such files from Hadoop and Struts2. The x-axes in both plots indicate system versions, while the y-axes indicate the numbers of smells in which each of the files is involved. From the collected data, such as that depicted in Figure~\ref{fig:4_struts}, we have observed that long-lived smelly files are repeatedly involved in new issues during a system's lifetime. For example, \texttt{DFSClient.java} is mentioned in $\approx$2,900 Hadoop issues to date; \texttt{JobTracker.java} is mentioned in $\approx$2,200 Hadoop issues; \texttt{Dispatcher.java} is mentioned in $\approx$670 Struts2 issues; and so on.
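For example, the following Python sketch counts, for each file, the number of versions in which it participates in at least one smell and reports the most persistent offenders. It is illustrative only: the input format and the cut-off of ten versions are assumptions, not part of our tooling.

\begin{verbatim}
from collections import Counter

def long_lived_smelly_files(smells_by_version, min_versions=10):
    """smells_by_version: mapping from a system version to the set of
    files involved in at least one architectural smell in that version."""
    counts = Counter()
    for version, smelly_files in smells_by_version.items():
        for f in set(smelly_files):
            counts[f] += 1
    # Files that are smelly in at least `min_versions` versions.
    return [(f, n) for f, n in counts.most_common() if n >= min_versions]
\end{verbatim}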
We posit that stemming such trends and properly addressing the underlying problems will require considering the architectural causes of these issues. \subsection{Discussion} \label{sec:discussion} In this section, we first discuss the significance and implications of our empirical study's results. We then consider architectural smells from the perspective of technical debt. \section{Empirical Study Setup} \label{sec:exp_setup} This section describes our study setup. Our hypothesis and research questions are described in Section \ref{subsec:Research_questions}. We then describe how we pre-processed the raw data in Section~\ref{subsec:extending_arcade}. \subsection{Research Question and Research Hypothesis} \label{subsec:Research_questions} \begin{figure*}[t] \centering \includegraphics[scale=0.5]{graphics/datapipe.pdf} \vspace{-2mm} \caption{Data processing pipeline.} \vspace{-5mm} \label{fig:datapipe} \end{figure*} Our \textit{\textbf{hypothesis}} is that \emph{it is possible to construct accurate models to predict the impact of architectural decay on a system's implementation}. To evaluate this hypothesis, we focus on the predictability of a system's issue- and change-proneness based on the identified architectural smells (i.e., the symptoms of decay). We define two research questions accordingly. \vspace{1.5mm} \textit{\textbf{RQ1.}} \emph{To what extent can the architectural smells detected in a system help to predict the issue-proneness and change-proneness of that system at a given point in time?} \vspace{1.5mm} The training data used to build the prediction models for a system is collected from different versions of that system. If these models can be shown to accurately predict issue- and change-proneness, this would indicate that architectural smells have consistent impacts on those two properties throughout a system's life span. In turn, this would confirm that the impact of architectural smells is not related to other factors, such as system size, which will change during a system's evolution. In addition, an accurate prediction model will be useful for maintainers to foresee the future issue- and change-proneness of newly smell-affected parts of a system, helping them to decide when and where they may need to refactor the system. \vspace{1.5mm} \textit{\textbf{RQ2.}} \emph{To what extent do unrelated software systems tend to share properties with respect to issue-proneness and change-proneness?} \vspace{1.5mm} \looseness-1 This question investigates whether the issue- and change-proneness of a system can be accurately predicted by a general-purpose model trained using symptoms of architectural decay from unrelated systems. If such a model can be constructed, it can be reused by developers to predict properties of systems for which historical information is not (yet) available. An affirmative answer to this question would also have a deeper implication: software systems tend to share fundamental properties regardless of system type, application domain, developers, employed tools, programming languages, execution platforms, etc. \subsection{Building the Data Pipeline} \label{subsec:extending_arcade} \looseness-1 To answer the two research questions, we build multiple prediction models based on different systems' architectural-smell data and assess the models' accuracy. 
We rely on ARCADE~\cite{leempirical} to collect the underlying raw architectural-smell data, and WEKA~\cite{weka}---a well-known ML framework---to pre-process the data, build prediction models, and evaluate their accuracy. The data pipeline we use is illustrated in Figure \ref{fig:datapipe}. Section \ref{subsec:subjects} introduces the list of subject systems and the process of recovering their architectural artifacts with ARCADE. The two main pre-processing tasks are labeling and balancing the raw data, which are discussed in Sections \ref{sec:labeling} and \ref{sec:balancing}, respectively. Creating the training and test sets, evaluating the prediction models, and determining the baseline models are discussed in Sections \ref{sec:training_test}, \ref{sec:metrics}, and \ref{sec:baseline}, respectively. \bgroup \def\arraystretch{1.15} \begin{table}[b] \vspace{3.25mm} \centering \caption{Subject systems in our study} \begin{tabular}{|l|p{2.45cm}|p{1.2cm}|p{0.9cm}|p{1.2cm}|} \hline \rowcolor[HTML]{C0C0C0} {System} & {Domain} & {{\# Versions}} & {\# Issues} & {Avg. LOC} \\ \hline Camel & Integration F-work & 78 & 9665 & 1.13M \\ \hline CXF & Service F-work & 120 & 6371 & 915K \\ \hline Hadoop & Data Proc. F-work & 63 & 9381 & 1.96M \\ \hline Ignite & In-memory F-work & 17 & 3410 & 1.40M \\ \hline Nutch & Web Crawler & 21 & 1928 & 118K \\ \hline OpenJPA & Java Persist. & 20 & 1937 & 511K \\ \hline Pig & Data Analysis F-work & 16 & 3465 & 358K \\ \hline Struts2 & Web App F-work & 36 & 4207 & 379K \\ \hline Wicket & Web App F-work & 72 & 6098 & 332K \\ \hline ZooKeeper & Config. Mgmt F-work & 23 & 1390 & 144K \\ \hline \end{tabular}% \label{tab:subject_systems_predict}% \end{table}% \egroup \vspace{1mm}\subsubsection{ARCADE and Subject Systems} \label{subsec:subjects} We collected data from ten open-source systems from the Apache Software Foundation, shown in Table~\ref{tab:subject_systems_predict}. Specifically, our study uses three types of data: (1) architectural smells detected in recovered architectures, (2) implementation issues collected from the Jira~\cite{apachejira} issue repository, and (3) code commits extracted from GitHub~\cite{github}. Using ARCADE, we recover the subject systems' architectures with the three recovery techniques---ACDC, ARC, and PKG---whose accuracy and scalability have been demonstrated by prior work (recall Section~\ref{subsec:arcade}). We then analyze the recovered architectures for the presence of smells identified in the literature (recall Section~\ref{subsec:cat_of_arch_smells} and Table~\ref{tab:catalog_of_smells}), as well as the systems' issue- and change-proneness. Those architectural artifacts are the raw data for building prediction models. \begin{comment} \begin{figure}[tb] \centering \includegraphics[scale=0.37]{graphics/process.png} \caption{ARCADE Workflow} \label{fig:arcade-workflow} \end{figure} \end{comment} \vspace{1mm}\subsubsection{Labeling the Data} \label{sec:labeling} Data labeling is a key step to ensure the success of prediction models. In our prediction problem, we are interested in two properties---issue-proneness and change-proneness. These properties can be obtained by, first, counting the raw numbers of issues and changes in a system's development history and, then, finding a way of characterizing those numbers. Specifically, we assign nominal labels based on the raw numbers of issues and changes related to source files to represent different levels of issue- and change-proneness.
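As a minimal sketch of these two steps, the Python snippet below counts issues per file and assigns the nominal labels; the input format is hypothetical, and the 80/16/4 segmentation applied in the second function is motivated and detailed in the following paragraphs.

\begin{verbatim}
def count_issues_per_file(fixed_issues):
    """fixed_issues: iterable of issues, each listing the files changed
    by its fixing commits (assumed structure, for illustration)."""
    counts = {}
    for issue in fixed_issues:
        for f in issue["files_changed_by_fixes"]:
            counts[f] = counts.get(f, 0) + 1
    return counts

def label_by_proneness(counts):
    """Map raw per-file counts to 'low'/'med'/'high' labels using an
    80/16/4 split over the files sorted by count."""
    ordered = sorted(counts, key=counts.get)        # low to high
    n = len(ordered)
    cut_low, cut_med = int(n * 0.80), int(n * 0.96)
    labels = {}
    for i, f in enumerate(ordered):
        labels[f] = ("low" if i < cut_low
                     else "med" if i < cut_med else "high")
    return labels
\end{verbatim}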
\begin{figure}[b] \vspace{2mm} \centering \includegraphics[scale=0.52]{graphics/hadoop-issue-histogram.png} \caption{Pareto chart of issues per file in Hadoop. The x-axis represents the Hadoop files grouped by the number of issues they contain, the left y-axis the number of files in same groups, and the right y-axis the cumulative percentage of groups' sizes.} \label{fig:long-tail} \end{figure} \looseness-1 Converting a set of numerical values to nominal labels~depends on the values' distribution. In our problem, the~numbers of issues and changes follow a heavy-tailed distribution~\cite{foss2011introduction}, where many files are associated with small numbers of issues and code changes, while comparatively fewer files are associated with large numbers of issues and changes. This is not an uncommon type of distribution~\cite{yamashita2015revisiting, boehm2006value}. As an illustration, the Pareto chart \cite{doi:10.1198/000313006X152243} in Figure \ref{fig:long-tail} depicts the distribution of issues per file in Hadoop: while few files are associated with a large number of issues, the arc, which represents the cumulative percentage of file-groups' sizes, shows a clear heavy-tailed pattern. \begin{figure*}[t] \centering \includegraphics[scale=0.58]{graphics/divide_sets_2.pdf} \vspace{-2mm} \caption{Creating datasets to answer RQ1 (top) and RQ2 (bottom).} \vspace{-5mm} \label{fig:divide_sets} \end{figure*} One common labeling approach is to segment a heavy-tailed distribution into head and tail segments. A more sophisticated approach is to divide the distribution into three parts---head, body, and tail---which in our case represent the three levels of proneness: low, medium, and high. We choose the latter approach because the numerical values in our study span a wide range. Having these three levels gives developers a better estimation of architectural decay's impact. To segment a dataset, we use the Pareto principle \cite{pareto1906manuale}, a popular segmentation method for heavy-tailed distributions, widely used in software engineering (e.g., \cite{boehm2006value, kiremire2011application, sayyad2013pareto}). To obtain the three segments, we apply the Pareto principle twice, as suggested in literature \cite{arthur2001six}. Specifically, we divide the original dataset into two portions. The first portion contains 80\% of the original dataset's low-end, while the second portion contains 20\% of the high-end. We apply the Pareto segmentation once more to the latter portion, thus obtaining two new portions that respectively contain the next 16\% (80\% of the 20\%) and 4\% (20\% of the 20\%) of the high-end data points. In order to collect the data regarding architectural decay, for each version of a subject system, we first collect the list of ``fixed'' issues affecting that version. Next, we collect the files that were changed when fixing the issues. For each file, we gather its associated architectural smells, the number of issues whose fixing commits changed that file (used when determining the system's \emph{issue-proneness} in Sections~{IV-RQ1-A}~and IV-RQ2-A), and the total number of changes (used when determining the system's \emph{change-proneness} in Sections~IV-RQ1-B and IV-RQ2-B). After the raw data is collected, we label it using the Pareto technique mentioned above before feeding it to supervised ML algorithms. To determine the level of issue-proneness of a source file in a system version, first, the number of issues related to that file is collected. This is one data point. 
We collect data points for all files in all available versions of a system, and then sort the dataset by the numbers of issues, from low to high. Then, the first 80\% of data points are marked with ``low'' labels; the next 16\% and 4\%, respectively, are marked with ``med(ium)'' and ``high'' labels. To determine the change-proneness of a source file in a system version, we count the number of commits related to that file and repeat a similar labeling process. \begin{table}[b!] \vspace{2mm} \centering \caption{Data samples from Hadoop} \vspace{-1mm} \label{tab:data_point_sample} \setlength\tabcolsep{4.75pt} \begin{tabular}{|l|l|l|l|l|l|l|p{0.2cm}|p{0.352cm}|} \hline \rowcolor[HTML]{C0C0C0} Vers. & Filename & CO & SF & LO & DC & ... & Iss & Chg \\ \hline 0.20.0 & dfs/DFSClient.java & 0 & 1 & 1 & 1 & ... & H & L \\ \hline 0.20.0 & mapred/JobTracker.java & 1 & 0 & 1 & 0 & ... & M & M \\ \hline 0.20.0 & tools/Logalyzer.java & 0 & 0 & 0 & 0 & ... & L & L \\ \hline ... & ... & ... & ... & ... & ... & ... & ... & ... \\ \hline \end{tabular} \end{table} Table \ref{tab:data_point_sample} shows several data samples from our datasets after labeling. The shown features, i.e., architectural smells in our case, are CO (Concern Overload), SF (Scattered parasitic Functionality), LO (Link Overload), and DC (Dependency Cycle). The output features, i.e., labels, are the levels of issue-proneness and change-proneness. The two leftmost columns show the versions and filenames of each data point. The next eleven columns are binary features that indicate the presence (1) or absence (0) of a specific smell (recall Table~\ref{tab:catalog_of_smells}) in a given file. The two rightmost columns indicate the issue-proneness (``Iss'') and change-proneness (``Chg'') of the files. For example, in version 0.20.0 of Hadoop, \texttt{DFSClient.java} has three smells: SF, LO, and DC. The file's issue-proneness is high (H), and its change-proneness is low (L). On the other hand, both the issue- and change-proneness of \texttt{JobTracker.java} are medium (M). \vspace{1mm}\subsubsection{Balancing the Data} \label{sec:balancing} \looseness-1 Due to the distribution of the data and the labeling approach, we need to balance our datasets~\cite{provost2000machine}. Recall from Section \ref{sec:labeling} that the $low:med:high$ ratio of our datasets is 80:16:4 (i.e., 20:4:1). If such a dataset were used to train a prediction model, the most likely outcome would be a model that predicts ``low'' for every data point. As we are more interested in ``high'' and ``med'' labels, such a model would be useless. It is thus important to ensure that weighted metrics are not biased by less (or more) frequent labels. We use SMOTE \cite{chawla2002smote} to balance our dataset, oversampling ``med'' by a factor of 5 and ``high'' by a factor of 20. SMOTE is a technique that synthesizes new minority samples based on nearest neighbors among existing data points. Adding new minority samples guarantees that the dataset will be balanced, i.e., that the $low:med:high$ ratio will be 1:1:1. \vspace{1mm}\subsubsection{Training and Test Sets} \label{sec:training_test} To build and test our prediction models, we use two different approaches for the two research questions, as illustrated in Figure \ref{fig:divide_sets}. In the first approach, used for RQ1, one dataset is created for each subject system with a cross-validation setup. Specifically, we use 10-fold cross-validation, where the dataset is randomly divided into ten equal-sized subsets.
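For concreteness, the balancing and per-system cross-validation setup can be sketched as follows. This is illustrative only: it uses scikit-learn and imbalanced-learn as stand-ins for the WEKA-based pipeline we actually employ, and it assumes a feature matrix \texttt{X} of binary smell indicators and a label vector \texttt{y} produced by the labeling step above.

\begin{verbatim}
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate

def evaluate_per_system(X, y):
    """Balance the labels, then run 10-fold cross-validation and
    report mean weighted precision and recall."""
    # Oversample the minority labels ('med', 'high') until the
    # low:med:high ratio is 1:1:1.
    X_bal, y_bal = SMOTE().fit_resample(X, y)
    clf = DecisionTreeClassifier()  # stand-in for a decision-based learner
    scores = cross_validate(clf, X_bal, y_bal, cv=10,
                            scoring=("precision_weighted", "recall_weighted"))
    return (scores["test_precision_weighted"].mean(),
            scores["test_recall_weighted"].mean())
\end{verbatim}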
We then sequentially select one subset and test it against the prediction model built from the other nine subsets. The final result is the mean of the ten tests' results. In the second approach, used for RQ2, we combine all subject systems and then divide them into two independent datasets: a training set, which comprises nine systems, and a test set, which comprises the single remaining system. \vspace{2mm}\subsubsection{Evaluation Metrics} \label{sec:metrics} \looseness-1 To evaluate the accuracy of our models, we use precision and recall~\cite{powers2011evaluation}. Precision is the fraction of correctly predicted labels over all predicted labels. Recall is the fraction of correctly predicted labels over all actual labels. \looseness-1 For illustration, consider the sample confusion matrix, shown in Table \ref{tab:multiclass}, that is produced after classifying 25 samples into ``high'', ``med'', and ``low''. The precision for the ``high'' label is the number of correctly predicted ``high'' samples (4) out of all samples predicted to be ``high'' (4+6+3=13), i.e., $30.8$\%; its recall is the number of correctly predicted ``high'' samples (4) out of the number of actual ``high'' samples (4+1+1=6), i.e., $66.7$\%. We can similarly calculate the precision and recall for ``med'' and ``low''. Finally, we compute the average values across all labels. If a model predicts the correct label, we consider this a true positive. On the other hand, if the model predicts any of the three labels (``high'', ``med'', or ``low'') incorrectly, we consider this a false positive. This is the standard way of measuring the accuracy of multi-label problems \cite{tsoumakas2007multi}. \begin{table}[t!] \centering \caption{Example predicted vs. actual values} \vspace{-1mm} \begin{tabular}{ll|l|l|l|} \hhline{|~~|---|} & & \multicolumn{3}{c|}{\cellcolor[HTML]{C0C0C0}True/Actual} \\ \hhline{|~~|---|} & &High & Med & Low \\ \hline \multicolumn{1}{|l|}{\cellcolor[HTML]{C0C0C0}} & High & \cellcolor[HTML]{EFEFEF}4 & 6 & 3 \\ \hhline{|~|----|} \multicolumn{1}{|l|}{\cellcolor[HTML]{C0C0C0}} & Med & 1 & \cellcolor[HTML]{EFEFEF}2 & 0 \\ \hhline{|~|----|} \multicolumn{1}{|l|}{\multirow{-3}{*}{\cellcolor[HTML]{C0C0C0}Predict}} & Low & 1 & 2 & \cellcolor[HTML]{EFEFEF}6 \\ \hline \end{tabular} \label{tab:multiclass} \end{table} \vspace{1mm}\subsubsection{Determining Baseline Models} \label{sec:baseline} To determine the effectiveness of the prediction models, we need to compare them to a baseline. In this case, we consider a baseline model to be the simplest possible prediction. The model can be obtained through different approaches. For some problems this may be a random result, and for others it may be the most common prediction. As our dataset has been balanced (Section \ref{sec:balancing}), the simplest approach is ``uniform''---generate predictions uniformly at random. This implies a prediction in which Table~\ref{tab:multiclass} has equal values in all cells, giving us a model with both precision and recall of 33.3\%. \section{Foundation} \label{sec:foundation} Our work is directly enabled by three research threads: (1)~software architecture recovery, (2) definition and analysis of architectural smells, and (3) tracking implementation issues. Figure~\ref{fig:workflow} depicts how these threads are combined to answer our research questions in this paper.
\subsection{Architecture Recovery with ARCADE} \label{subsec:arcade} Garcia et al.~\cite{garcia2013comparative} conducted a comparative evaluation of software architecture recovery techniques. Their objective was to measure the existing techniques' accuracy and scalability on a set of systems for which researchers had previously obtained ``ground-truth'' architectures~\cite{garcia2013obtaining}. To that end, the authors implemented a tool suite, named ARCADE, offering a large set of architecture recovery choices to an engineer.\footnote{The existing techniques implemented within ARCADE support structural clusterings of software systems' elements based on a range of criteria. While the resulting recovered models contain only partial architectural information for a given system, in this paper we will refer to them as ``recovered architectures''. We note that our use of this term is consistent with existing literature.} Garcia et al.'s results indicate that two techniques implemented in ARCADE consistently outperformed the rest: \emph{ACDC}\xspace~\cite{tzerpos2000} and \emph{ARC}\xspace~\cite{garcia2011enhancing}. We select these techniques for our study. \emph{ACDC}\xspace leverages a system's \emph{structural characteristics} to cluster implementation-level modules into architectural components, while \emph{ARC}\xspace focuses on the \emph{concerns} implemented by a system. \emph{ACDC}\xspace relies on static dependency analysis; \emph{ARC}\xspace uses information retrieval and machine learning. \looseness-1 \emph{PKG}\xspace is another technique implemented in ARCADE. \emph{PKG}\xspace extracts a system's implementation \emph{package structure}. The package structure of a system is considered to be a reliable view of a system's ``implementation architecture''~\cite{kruchten1995}. We use it to complement the two selected clustering-based architectural views. \subsection{Architectural Smells} \label{subsec:cat_of_arch_smells} Architectural smells are instances of poor architectural design decisions \cite{mens2004}. They negatively impact system lifecycle properties, such as understandability, testability, extensibility, and reusability \cite{garcia2009toward}. While code smells \cite{fowler1999}, anti-patterns \cite{buschmann2007pattern}, or structural design smells \cite{ganesh2013towards} originate from implementation constructs (e.g., classes, methods, variables), architectural smells stem from poor use of software architecture-level abstractions --- components, connectors, interfaces, patterns, styles, etc. Detected instances of architectural smells are candidates for restructuring \cite{bowman1999}, to help prevent architectural decay and improve system quality. Researchers have collected a growing catalog of architectural smells. Garcia et al. \cite{garcia2009toward,garcia2009identifying} identified an initial set of four smells related to connectors, interfaces, and concerns. Mo et al.~\cite{mo2013mapping} introduced a new concern-related smell. Ganesh et al. \cite{ganesh2013towards} also summarized a catalog of structural design smells, some of which are at the architecture-level. Le et al.~\cite{icsa2018duc} described 11 different architectural smells and proposed a set of algorithms to detect them. Table~\ref{tab:catalog_of_smells} summarizes a consolidated list of smells that were identified in the above references, after removing duplicates and non-architectural smells. 
\subsection{Issue Tracking Systems} \label{subsec:jira} Issue tracking systems are commonly used development tools that allow users to report different problems and concerns about a system and monitor their status. All subject systems selected for analysis in this paper use Jira~\cite{apachejira} as their issue tracking system. However, this is not a limitation; our approach can be applied to other issue trackers. \looseness-1 When reporting implementation issues, engineers categorize them into different types: \emph{bug}, \emph{new feature}, \emph{improvement}, \emph{task} to be performed, etc. We consider all issue types in our study because they may result in relevant changes to a system. In other words, any issue type or individual issue instance may have an underlying architectural cause. Note that it would be possible to perform a finer-grained analysis using the same process we employed that would focus on a specific subset of issues or types. Each issue has a status that indicates where the issue is in its lifecycle~\cite{jiraissue}. An issue starts as ``open'', progresses to ``resolved'', and finally to ``closed''. We restrict our study to closed and resolved issues that have been ``fixed'', and ignore those resolved issues that fall under ``won't fix'', ``cannot reproduce'', etc. We do so because any effects caused by the fixed issues presumably appear in certain system versions and disappear once the issue is addressed. Additionally, a fixed issue contains information that is useful for our study: (1)~\emph{affected versions} in which the issue has been found, (2)~\emph{type} of issue, and (3)~\emph{fixing commits}, i.e., the changes applied to the system to resolve the issue. Finding fixing commits is not always easy since there is no standard method for engineers to keep track of this information. Three ways of keeping track of an issue's fixing commits are commonly employed in our set of subject systems: (1) direct links to the commits, (2) specifying pull requests, and (3)~specifying patch files. Our implemented tool supports collecting data from all three methods. Based on the collected information, issues are mapped to detected smells. To do this, first, we find the system versions that the issue affects. Then we find the architectural smells present in those versions. We say the issue is infected by a given smell if and only if (1) both the issue and the smell affect the same system version and (2) the resolution of the issue changes files that are involved in the smell. Based on this relationship, we studied if the characteristics of an issue (e.g. issue type, number of fixing commits) depend on whether the issue is infected by a given smell. \looseness-1 Note that resolving an issue may not remove the smell that led to the issue in the first place. One reason is that developers could find a workaround. The smell may also~correlate with more than one issue. In general, it is difficult to identify the exact relationship between a specific architectural smell instance and a specific implementation issue. Fortunately, we do not need to do that in our work, because we are looking for prediction models that uncover smell-issue correlations across most cases. \begin{comment} \begin{figure}[b] \centering \includegraphics[scale=0.7]{graphics/Issues_to_Smells_Mapping.png} \caption{Mapping architectural smells to issues.} \label{fig:issue_smell_mapping} \end{figure} \end{comment} \section{Acknowledgments} This work is supported by the U.S. 
National Science Foundation under grants 1717963, 1823354, and 1823262 and U.S. Office of Naval Research under grant N00014-17-1-2896. \bibliographystyle{abbrv} \clearpage \section{Introduction} \label{sec:introduction} \looseness-1 Software systems change regularly, as do their architectures. Over time, a system's architecture is increasingly affected by decay, caused by careless or unintended design decisions~\cite{perry1992foundations}. Decay results in systems whose implemented architectures differ in important ways from their designed architectures. Both researchers and practitioners have recognized the negative impact of architectural decay and its role in causing technical debt. Despite this, when developers modify a system during maintenance, they often focus on code and neglect the architecture. \looseness-1 Researchers have proposed a number of techniques to analyze a system at the code level and to predict issues that are likely to appear in the system's future versions. A common approach has been to use historical artifacts, such as data from issue trackers and version control systems, to build prediction models. Early approaches \cite{Kim:2007:PFC:1248820.1248881, Moser:2008:CAE:1368088.1368114, Gyimothy2005, Coleman1994} built models to predict implementation issues based on code metrics. Later studies made use of other properties that were reckoned to be potential causes of issues, such as code dependencies ~\cite{Zimmermann:2008:PDU:1368088.1368161} and code smells~\cite{Hall:2014:CSS:2668018.2629648}. \looseness-1 In contrast to code-level techniques, analogous techniques at the architecture level have not received nearly as much attention, even though recent work has demonstrated that even simple code updates can cause system-wide architectural changes~\cite{leempirical}. Frequently, such updates introduce \emph{architectural smells} in a system (e.g., dependency cycle, ambiguous interface~\cite{icsa2018duc}). These smells may have no immediately visible effect, but they are symptoms of architectural decay and accumulated technical debt~\cite{taylor2009software, joshthesis2014, leempirical, icsa2018duc}. As decay compounds in long-lived systems, the number of architectural smells grows, creating unforeseen issues when engineers try to modify a system. In such cases, engineers are eventually likely to realize the negative effects of the incurred technical debt and the need to refactor their system. However, they usually spot deeper architectural problems only when related implementation-level issues surface. For example, issue \#1178 reported for Apache Pig indicates that developers recognize the problem of having a large number of functions in a component: ``[The component] has been an area of numerous bugs, many of which have been difficult to fix''~\cite{pig-1178}. Similarly, issue \#223 in Apache CXF acknowledges the need to refactor CXF's architecture to reduce the amount of code changes and improve extensibility~\cite{cxf-223}. \begin{figure*}[tbh] \centering \includegraphics[width=18cm]{graphics/arcade_framework_icsa2020.pdf} \vspace{-5mm} \caption{Architecture recovery pipeline used in our study and enabled by the ARCADE tool suite.} \label{fig:workflow} \vspace{-5mm} \end{figure*} Recent studies have established strong correlations between architectural smells and both (1) a system's proneness to change and (2) the emergence of certain implementation issues~\cite{icsa2018duc, le2016relating}. 
Furthermore, many bugs reported for a system have been shown to have architectural roots~\cite{Xiao:2014:DPA:2635868.2661679, 10.1007/978-3-030-00761-4_21}. Prior work has also demonstrated that identifying \emph{code} smells using existing approaches will not help to uncover the underlying architectural issues, and modifications that address the problems identified in this way run the risk of being inadequate, short-term patches~\cite{OizumiSBES6943486}. Despite this, predictive models that leverage \emph{architectural characteristics} to anticipate the implementation issues or the amount of change a system may experience have been scarce. In this paper, we propose and empirically evaluate an approach to predict a system's (1)~future implementation issues and (2)~proneness to change based on the system's current and past architectural characteristics. Our work is inspired in part by the recent finding~\cite{icsa2018duc} that architectural smells and implementation issues are strongly correlated. Specifically, we analyze 466 versions of 10 open-source software systems. For each system version, we use 3 different methods to recover its architectures from source code. We analyze the 1,398 architectural models obtained in this way to detect 11 distinct types of architectural smells. The detected smells are subsequently used as features in our prediction models. We make use of different machine learning techniques to predict a given system's issue- and change-proneness based on the collected architectural features. Our study has resulted in two principal findings regarding the predictive power of the models obtained in this manner: \begin{enumerate} \item The architectural smells detected in a system can help to accurately predict both the issue-proneness and change-proneness of that system at a given point in time. Our models yielded precision and recall scores of at least 70\% (and as high as 95\%) for specific recovered architectural views of the subject systems. This finding allows maintainers to foresee future problems involving new smell-impacted parts of a system. \item Different, independently developed software systems tend to share issue- and change-proneness characteristics. This allows developers to use models created using data from a set of existing systems to predict the issue- and change-proneness of an unrelated system for which historical data does not exist (e.g., a newly developed system). While the accuracy of such general-purpose prediction models is lower than that of the system-specific models, the loss in accuracy is moderate, typically under 10\%. Our results indicate that this is a fruitful area for further investigation, and that our models are already usable in practice for making certain types of decisions. \end{enumerate} Section \ref{sec:foundation} introduces the foundations for our study. Section \ref{sec:exp_setup} presents the research questions and describes the study setup. The results are detailed in Section \ref{sec:results}. Threats to validity, related work, conclusions, and acknowledgments round out the paper. \section{Related Work} \label{sec:related_work} \balance Predicting implementation issues and predicting code changes have been widely studied research problems in software maintenance. The main type of implementation issue that researchers were interested in early on was defects. Li et al. \cite{LI1993111} used OO metrics as predictors of software maintenance effort.
Subramanyam et al.~\cite{Subramanyam1191795} also demonstrated that a set of metrics~\cite{Chidamber:1994:MSO:630808.631131} has significant implications on software defects. Nagappan et al.~\cite{nagappan2006mining} found a representative set of code complexity measures to determine failure-prone software entities. However, the metrics considered in prior work cannot prevent defects at higher abstraction levels, such as architectural problems. Issue prediction based on bug-fixing history is also an established area. Rahman et al. \cite{Rahman:2011:BIH:2025113.2025157} developed an algorithm that ranks files by their numbers of past changes. The algorithm helps developers find hot spots in the system that need developers' attention. There are more sophisticated methods that combine historical information and software change impact analysis to increase the efficiency and accuracy of the prediction \cite{wang2014version, hata2012bug, rahman2013and}. However, as before, these approaches do not explain higher-level defects caused by architectural decay. Code changes have a close connection with defects in software. Nagappan et al. \cite{nagappan2005use} used code churn to predict the defect density of software systems. Hassan et al. \cite{hassan2009predicting} used complexity metrics based on code changes to predict faults. Code change has been used in a number of other research efforts \cite{Xiao:2016:IQA:2884781.2884822,d2008analysing,le2016relating,leempirical} to evaluate system maintainability. To predict code changes, Romano et al. proposed two approaches, relying on code metrics~\cite{romano2011using} and anti-patterns~\cite{romano2012analyzing}. Xia et al.'s approach~\cite{xia2015cross} predicts a system's change-proneness using co-change information of unrelated systems. While their approach is similar to the one we employed in the context of RQ2, it yields relatively low accuracy. Malhotra et al. \cite{malhotra2017exploratory} used hybridized techniques to identify change-prone classes. However, their empirical study is relatively small. Kouroshfar et al. \cite{kouroshfar2015study} do use architectural information to study the correlation between co-changes across architectural models and defects. However, they restrict their study to cross-module changes. \section{Empirical Study Results} \label{sec:results} In this section, for each of the two research questions we discuss the validation method and the associated findings. \vspace{3mm} \noindent\emph{\textbf{RQ1:} To what extent can the architectural smells detected in a system help to predict the issue-proneness and change-proneness of that system at a given point in time?} \vspace{1mm} In this prediction problem, all input features are binary (recall Table \ref{tab:data_point_sample}), indicating whether a file contains an architectural smell. For this reason, decision-based techniques are most~likely to yield good results \cite{lim2000comparison}. Metrics collected from a range of models we built and evaluated using four different classification techniques---decision table \cite{kohavi1995power}, decision tree \cite{quinlan2014c4}, logistic regression \cite{le1992ridge}, naive bayes \cite{john1995estimating}---confirmed this. We thus only discuss the results obtained by the decision-table models. \vspace{3mm} \noindent\emph{A. 
Issue-Proneness} \vspace{1mm} \looseness-1 Recall from Section~\ref{subsec:extending_arcade} that, to compute issue-proneness, for each file in each version of a given system, we gather the file's associated architectural smells and number of issues whose fixing commits changed the file. Table \ref{RQ1.issues_proneness} shows the precision and recall of the models for predicting the issue-proneness of our subject systems from Table \ref{tab:subject_systems_predict}. These metrics are computed using 10-fold cross-validation~\cite{kohavi1995study}. The bottom-most row shows the average values across all systems. For each system, we built different prediction models based on smells detected in the three architectural views (ACDC, ARC, and PKG). In total, 30 prediction models per system were created and evaluated. \begin{table}[t] \centering \caption{Predicting issue-proneness} \label{RQ1.issues_proneness} \vspace{-1mm} \scriptsize \begin{tabular}{|l|c|c|c|c|c|c|} \hline \rowcolor[HTML]{C0C0C0} \multirow{2}{*}{} & \multicolumn{2}{c|}{ACDC} & \multicolumn{2}{c|}{ARC} & \multicolumn{2}{c|}{PKG} \\ \cline{2-7} \rowcolor[HTML]{C0C0C0}{System} & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline Camel & 69.9\% & 68.4\% & 70.8\% & 67.0\% & 68.2\% & 62.8\% \\ \hline CXF & 78.0\% & 76.7\% & 68.9\% & 68.3\% & 64.7\% & 63.8\% \\ \hline Hadoop & 81.2\% & 80.1\% & 76.6\% & 76.6\% & 72.8\% & 73.4\% \\ \hline Ignite & 78.9\% & 78.1\% & 78.9\%& 79.1\%& 70.4\%& 71.0\% \\ \hline Nutch & 80.8\% & 71.6\% & 82.5\% & 82.7\% & 68.3\% & 52.1\% \\ \hline OpenJPA & 71.4\% & 68.3\% & 74.5\% & 73.2\% & 69.2\% & 67.9\% \\ \hline Pig & 71.7\% &69.1\% & 71.3\%& 71.1\%& 68.6\%& 69.5\%\\ \hline Struts2 & 89.2\% & 89.0\% & 95.0\% & 94.8\% & 79.1\% & 78.3\% \\ \hline Wicket & 69.2\% & 70.1\% & 76.7\% & 77.1\% & 63.7\% & 65.4\% \\ \hline ZooKeeper & 72.0\% &72.6\% & 70.8\%& 69.2\%& 68.7\%& 69.4\% \\ \hline Average & 76.2\% &74.4\% &76.6\% &75.9\% &69.4\% &67.4\% \\ \hline \end{tabular} \vspace{1mm} \end{table} \begin{table}[b] \vspace{3mm} \centering \caption{Predicting issue-proneness with "high", "med" and "low" labels under ACDC } \label{RQ1.issues_proneness_high_med} \vspace{-1mm} \scriptsize \begin{tabular}{|l|c|c|c|c|c|c|} \hline \rowcolor[HTML]{C0C0C0} \multirow{2}{*}{} & \multicolumn{2}{c|}{High} & \multicolumn{2}{c|}{Med} & \multicolumn{2}{c|}{Low} \\ \cline{2-7} \rowcolor[HTML]{C0C0C0}{System} & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline Camel & 73.9\% & 56.9\% & 57.6\% & 63.6\% & 78.2\% & 69.9\% \\ \hline CXF & 94.4\% & 83.3\% & 65.2\% & 76.0\% & 74.5\% & 70.7\% \\ \hline Hadoop & 71.2\% & 81.5\% & 78.3\% & 78.1\% & 72.1\% & 81.5\% \\ \hline Ignite & 93.8\% & 89.1\% & 66.4\%& 76.8\%& 76.6\%& 67.8\% \\ \hline Nutch & 66.9\% & 94.7\% & 90.4\% & 61.2\% & 95.0\% & 75.8\% \\ \hline OpenJPA & 69.3\% & 89.5\% & 65.9\% & 49.8\% & 79.1\% & 65.6\% \\ \hline Pig & 80.5\% &90.9\% & 72.9\%& 52.3\%& 61.8\%& 64.1\%\\ \hline Struts2 & 96.3\% & 95.7\% & 88.2\% & 81.1\% & 83.1\% & 90.4\% \\ \hline Wicket & 78.8\% & 89.6\% & 59.3\% & 60.3\% & 69.5\% & 57.3\% \\ \hline ZooKeeper & 71.0\% &88.0\% & 64.2\%& 54.2\%& 80.7\%& 75.6\% \\ \hline Average & 79.6\% &85.9\% & 70.5\% & 65.3\% & 76.1\% & 71.9\% \\ \hline \end{tabular} \end{table} \looseness-1 In general, the prediction models that relied on architectures recovered by ACDC and ARC were comparable in terms of accuracy: the average (precision, recall) for the ACDC and ARC models were (76.2\%, 74.4\%) and (76.6\%, 75.9\%) respectively. 
On the other hand, the models emerging from PKG yielded accuracy that was up to 13\% lower. The models yielded very high predictive power in the cases of certain systems. For example, the ARC-based models for Struts2 achieved $\approx$95.0\% and the ACDC-based models $\approx$90.0\% for each of the two metrics. As discussed in Section \ref{sec:balancing}, our dataset has been balanced to ensure that the trained models will accurately predict ``high'' and ``med'' labels, in which we are interested. Table \ref{RQ1.issues_proneness_high_med} shows the precision and recall of issue prediction for all three labels. While there are variations across the three labels, the average precision and recall for the ``high'' label---79.6\% and 85.9\%, respectively---outstrip the average values for the other two labels. Figure \ref{fig:compare_rq1} shows the comparison of our prediction models with the baseline model. Our prediction models are at least 1.5$\times$ better (2$\times$ in a majority of cases) than the baseline's 33.3\%, further confirming that our models are useful for predicting files with high numbers of related issues. Our results confirm that architectural smell-based models can accurately predict the issue-proneness of a system. In other words, architectural smells have a consistent impact on a system's implementation with respect to issue-proneness over the system's lifetime. This finding means that \emph{architectural decay can be a powerful indicator of the health of a system's implementation}. It serves as a direct motivator for software engineers to pay more attention to the architecture, and architectural smells, in their systems. For example, system maintainers can use our models to foresee future problems, to devise refactoring plans, to prioritize their activities, etc. \begin{figure}[t!] \vspace{-4.5mm} \centering \subfloat[Precision]{{\includegraphics[width=4.3cm]{rq_pics/rq1_labels_baseline_precision.png} }}% \hspace{-0.9cm} \qquad \subfloat[Recall]{{\includegraphics[width=4.3cm]{rq_pics/rq1_labels_baseline_recall.png} }}% \vspace{-1.5mm} \caption{Precision and Recall of issue-proneness prediction for each label in ACDC.} \label{fig:compare_rq1} \end{figure} \looseness-1 The comparatively poorer performance of PKG in answering RQ1 suggests that implementation-package structure is not effective for measuring architectural decay and it can mask deeper architectural problems. This observation is in line with previous findings~\cite{leempirical}, which showed that, compared to ACDC and ARC, PKG is markedly less useful for understanding the underlying architectural changes and their impact. This leads to another observation. Recall the categorization of architectural smells in Section \ref{subsec:cat_of_arch_smells} and Table \ref{tab:catalog_of_smells}: two of the four categories are dependency-based and concern-based smells. This suggests that ACDC (dependency-based recovery) and ARC (concern-based recovery) should inherently outperform PKG when such smells are encountered. It further suggests that targeting specific recovery techniques to specific types of smells, and then finding a way to combine their results, may yield even higher accuracy in our prediction models. We are exploring this hypothesis in our ongoing work. \vspace{3mm} \noindent\emph{B. 
Change-Proneness} \vspace{1mm} Recall from Section~\ref{subsec:extending_arcade} that, to compute change-proneness, for each file in each version of a given system we gather (1)~the file's associated architectural smells and (2)~the total number of changes to the file reflected in the implementation issues' fixing commits. We used the same approach to evaluate the accuracy of the 30 architectural models for each system in predicting change-proneness as we did for predicting issue-proneness. \looseness-1 Table \ref{RQ1.change_proneness} shows the accuracy of our models. The models based on PKG-recovered architectures again have the lowest accuracy. In some systems, e.g., CXF and Nutch, the values for PKG-recovered architectures are 10-20\% lower than the corresponding values in the other two views. The average (precision, recall) are (74.7\%, 71.6\%) and (73.6\%, 73.5\%) for the ACDC- and ARC-based architectural views, respectively. Notably, the values yielded when analyzing Struts2 are, once again, very high. A further investigation of Struts2's dataset highlighted a distinguishing characteristic: 36 of the analyzed versions are distributed across just four minor Struts2 versions: 2.0.x, 2.1.x, 2.2.x and 2.3.x. In other words, the changes in most of these 36 versions were ``patches''. It is reasonable to expect that the architectures and detected smell instances between patches within a single minor version will be very similar. The prediction model for Struts2 benefits from this similarity and thus achieves very high accuracy in the cross-validation test. This suggests a promising strategy for building prediction models: to increase the accuracy of models used to predict properties of a system version, one should \emph{select recent versions} instead of all versions across the entire system lifespan. In summary, our results confirm that the historical data of a software system regarding its architectural smells, issues, and changes can be used to develop models to accurately predict the issue- and change-proneness of that system. The results also indicate that architectural smells have a consistent impact on software system implementations throughout the systems' lifetimes. Our architecture-based prediction approach, whose performance is usually two times better than the baseline, is useful for software maintainers to foresee likely future problems in newly smell-impacted parts of their system. The approach can also help in creating maintenance plans that can help to effectively reduce the system's issue- and change-proneness. Lastly, ACDC and ARC outperform PKG, emphasizing the importance of selecting the appropriate architecture recovery techniques and targeting them to the task at hand. 
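To make the evaluation procedure concrete, the sketch below illustrates how per-label precision and recall can be computed with 10-fold cross-validation for one system and one architectural view. The per-file smell feature matrix, the label vector, and the particular learner (a random forest from scikit-learn) are illustrative assumptions rather than the exact configuration used in our study.
\begin{verbatim}
# Minimal sketch of the per-system evaluation setup (illustrative only).
# X is assumed to hold per-file smell features for one system and one
# architectural view, and y the "high"/"med"/"low" labels; the learner
# below is a stand-in, not necessarily the one used in the study.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_recall_fscore_support

def evaluate_view(X, y):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # 10-fold cross-validation: every file is predicted exactly once
    y_pred = cross_val_predict(clf, X, y, cv=10)
    p, r, _, _ = precision_recall_fscore_support(
        y, y_pred, labels=["high", "med", "low"])
    return dict(zip(["high", "med", "low"], zip(p, r)))
\end{verbatim}
The same procedure applies to both issue-proneness and change-proneness; only the labels attached to each file change.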
\begin{table}[t] \centering \caption{Predicting change-proneness} \vspace{-1mm} \scriptsize \label{RQ1.change_proneness} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \rowcolor[HTML]{C0C0C0} \multirow{2}{*}{} & \multicolumn{2}{c|}{ACDC} & \multicolumn{2}{c|}{ARC} & \multicolumn{2}{c|}{PKG} \\ \cline{2-7} \rowcolor[HTML]{C0C0C0}{System} & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline Camel & 69.9\% & 63.4\% & 68.0\% & 67.1\% & 60.3\% & 61.0\% \\ \hline CXF & 73.7\% & 70.8\% & 69.7\% & 63.4\% & 60.8\% & 63.4\% \\ \hline Hadoop & 78.1\% & 73.2\% & 74.9\% & 74.8\% & 67.4\% & 70.0\% \\ \hline Ignite & 77.5\% & 76.1\% & 75.8\% & 76.1\% & 68.7\% & 69.1\%\\ \hline Nutch & 73.1\% & 66.8\% & 76.3\% & 78.0\% & 62.2\% & 46.1\% \\ \hline OpenJPA & 78.3\% & 77.7\% & 74.3\% & 70.0\% & 68.2\% & 62.1\% \\ \hline Pig & 70.1\% &67.4\% & 69.6\% & 70.2\% & 65.9\% & 66.5\%\\ \hline Struts2 & 89.3\% & 85.8\% & 87.8\% & 96.7\% & 71.2\% & 73.7\% \\ \hline Wicket & 66.6\% & 65.3\% & 72.1\% & 71.8\% & 62.7\% & 59.0\% \\ \hline ZooKeeper & 69.9\% &69.6\% & 67.8\% & 67.2\% & 65.5\% & 64.4\%\\ \hline Average & 74.7\% &71.6\% &73.6\% &73.5\% &65.3\% &63.5\% \\ \hline \end{tabular} \end{table} \vspace{3mm} \noindent\emph{\textbf{RQ2:} To what extent do unrelated software systems tend to share properties with respect to issue- and change-proneness?} \vspace{1mm} The results obtained in answering RQ1 showed that architectural smells consistently impact the issue- and change-proneness of a software system during its lifetime. In that sense, RQ2 can be considered an extension of RQ1: we aim to understand whether architectural smells have consistent impacts across \emph{unrelated} software systems, more specifically, whether the issue- and change-proneness of a system can be accurately predicted by models trained with data from unrelated systems. More deeply, this research question tries to assess whether there are fundamentally shared traits across software systems, regardless of their developers and development processes, implementation features, application domains, underlying designs, etc. \looseness-1 To answer this question, instead of using 10-fold cross-validation, we selected each subject system as the test system and used its dataset as the test set; the training set was then created by combining datasets of the remaining nine systems. For reference, we also built a prediction model by combining all ten systems, i.e., including the test systems. Note that the datasets of different subject systems have different sizes; we had to resample those datasets to the same size before combining them. \vspace{3mm} \noindent\emph{A. Issue-Proneness} \vspace{1mm} \looseness-1 Tables \ref{tab:rq2_acdc_p}, \ref{tab:rq2_arc_p}, and \ref{tab:rq2_pkg_p} summarize the precision and recall values of \emph{RQ2} experiments with regard to predicting issue-proneness under ACDC, ARC, and PKG, respectively. The left-most columns of these tables show the lists of systems. The precision and recall values are presented for three different cases: \begin{enumerate} \item ``10-fold'' column -- 10-fold cross-validation on the test set. We reproduce this result from RQ1 for easy reference. \item ``All 10'' column -- Models trained by datasets from all 10 systems, including the test set. \item ``9 Others'' column -- Models trained by 9 other systems' datasets, not including the test set. 
\end{enumerate} \looseness-1 \noindent In total, beside the {300} issue-proneness prediction models per system that emerged from RQ1's analysis, we built and evaluated 60 additional issue-proneness models to answer RQ2. \def\rowswitch#1\\{} \begin{table}[] \centering \caption{Predicting issue-proneness -- precision~(top) and recall (bottom) under ACDC} \vspace{-1mm} \label{tab:rq2_acdc_p} \begin{tabular}{|l|p{1.7cm}|p{1cm}|p{1cm}|} \hline \rowcolor[HTML]{C0C0C0} System & 10-fold (RQ1) & All 10 & 9 Others \\ \hline Camel & 69.9\% & 64.8\% & 53.6\% \\ \hline CXF & 78.0\% & 71.4\% & 66.4\% \\ \hline Hadoop & 81.2\% & 71.1\% & 62.8\% \\ \hline Ignite & 78.9\% & 73.9\% & 60.2\% \\ \hline Nutch & 80.8\% & 74.9\% & 59.6\% \\ \hline OpenJPA & 71.4\% & 68.8\% & 63.9\% \\ \hline Pig & 71.7\% & 66.8\% & 61.4\% \\ \hline Struts2 & 89.2\% & 77.1\% & 69.1\% \\ \hline Wicket & 69.2\% & 66.7\% & 55.0\% \\ \hline ZooKeeper & 72.0\% & 65.4\% & 56.0\% \\ \end{tabular} \begin{tabular}{|l|p{1.7cm}|p{1cm}|p{1cm}|} \hline \hline \hline Camel & 68.4\% & 57.5\% & 46.7\% \\ \hline CXF & 76.7\% & 71.3\% & 65.7\% \\ \hline Hadoop & 80.1\% & 69.2\% & 62.9\% \\ \hline Ignite & 78.1\% & 73.5\% & 59.3\% \\ \hline Nutch & 71.6\% & 68.8\% & 54.4\% \\ \hline OpenJPA & 68.3\% & 63.0\% & 57.3\% \\ \hline Pig & 69.1\% & 64.1\% & 58.8\% \\ \hline Struts2 & 89.0\% & 76.4\% & 68.8\% \\ \hline Wicket & 70.1\% & 66.0\% & 54.9\% \\ \hline ZooKeeper & 72.6\% & 60.3\% & 56.9\% \\ \hline \end{tabular} \end{table} \begin{table}[] \centering \caption{Predicting issue-proneness -- precision~(top) and recall (bottom) under ARC} \vspace{-1mm} \label{tab:rq2_arc_p} \begin{tabular}{|l|p{1.7cm}|p{1cm}|p{1cm}|} \hline \rowcolor[HTML]{C0C0C0} System & 10-fold (RQ1) & All 10 & 9 Others \\ \hline Camel & 70.8\% & 64.9\% & 59.7\% \\ \hline CXF & 68.9\% & 55.2\% & 49.0\% \\ \hline Hadoop & 76.6\% & 67.6\% & 59.6\% \\ \hline Ignite & 78.9\% & 66.9\% & 62.3\% \\ \hline Nutch & 82.5\% & 64.6\% & 62.3\% \\ \hline OpenJPA & 74.5\% & 66.9\% & 63.9\% \\ \hline Pig & 71.3\% & 62.1\% & 61.7\% \\ \hline Struts2 & 95.0\% & 76.1\% & 63.8\% \\ \hline Wicket & 76.7\% & 63.3\% & 62.0\% \\ \hline ZooKeeper & 70.8\% & 66.3\% & 50.4\% \\ \end{tabular} \begin{tabular}{|l|p{1.7cm}|p{1cm}|p{1cm}|} \hline \hline \hline Camel & 67.0\% & 59.4\% & 48.5\% \\ \hline CXF & 68.3\% & 62.3\% & 54.5\% \\ \hline Hadoop & 76.6\% & 67.4\% & 59.4\% \\ \hline Ignite & 79.1\% & 66.5\% & 61.6\% \\ \hline Nutch & 82.7\% & 58.1\% & 53.9\% \\ \hline OpenJPA & 73.2\% & 65.5\% & 62.0\% \\ \hline Pig & 71.1\% & 62.5\% & 61.1\% \\ \hline Struts2 & 94.8\% & 75.7\% & 63.7\% \\ \hline Wicket & 77.1\% & 65.3\% & 63.6\% \\ \hline ZooKeeper & 69.2\% & 67.1\% & 56.4\% \\ \hline \end{tabular} \end{table} \begin{table}[] \centering \caption{Predicting issue-proneness -- Precision~(top) and recall (bottom) under PKG } \vspace{-1mm} \label{tab:rq2_pkg_p} \begin{tabular}{|l|p{1.7cm}|p{1cm}|p{1cm}|} \hline \rowcolor[HTML]{C0C0C0} System & 10-fold (RQ1) & All 10 & 9 Others \\ \hline Camel & 68.2\% & 59.5\% & 46.0\% \\ \hline CXF & 64.7\% & 62.7\% & 59.1\% \\ \hline Hadoop & 72.8\% & 61.8\% & 50.2\% \\ \hline Ignite & 70.4\% & 70.2\% & 62.6\% \\ \hline Nutch & 68.3\% & 66.9\% & 51.9\% \\ \hline OpenJPA & 69.2\% & 71.2\% & 53.1\% \\ \hline Pig & 68.6\% & 68.0\% & 53.6\% \\ \hline Struts2 & 79.1\% & 92.4\% & 67.6\% \\ \hline Wicket & 63.7\% & 66.1\% & 60.2\% \\ \hline ZooKeeper & 68.7\% & 66.3\% & 44.0\% \\ \end{tabular} \begin{tabular}{|l|p{1.7cm}|p{1cm}|p{1cm}|} \hline \hline \hline Camel & 62.8\% & 50.9\% & 
43.5\% \\ \hline CXF & 63.8\% & 60.0\% & 44.7\% \\ \hline Hadoop & 73.4\% & 61.5\% & 50.3\% \\ \hline Ignite & 71.0\% & 69.5\% & 62.3\% \\ \hline Nutch & 62.1\% & 54.1\% & 50.9\% \\ \hline OpenJPA & 67.9\% & 68.3\% & 39.2\% \\ \hline Pig & 69.5\% & 68.0\% & 44.5\% \\ \hline Struts2 & 78.3\% & 92.0\% & 67.1\% \\ \hline Wicket & 65.4\% & 66.1\% & 58.9\% \\ \hline ZooKeeper & 69.4\% & 66.8\% & 42.7\% \\ \hline \end{tabular} \end{table} We found several consistent trends across all three architectural views. First, a prediction model built by combining data sets of multiple different software systems, even if the test system itself is included, has lower accuracy than the model built for that specific test system. This can be seen in all three Tables \ref{tab:rq2_acdc_p}, \ref{tab:rq2_arc_p}, and \ref{tab:rq2_pkg_p}, where the ``All 10'' columns have lower values for precision and recall than the corresponding ``10-fold'' (results from RQ1) columns. More interesting is the case where the test system is excluded and the model is trained on the datasets from the remaining nine systems (the ``9 others'' column). This represents the scenario of using a generic predictive model comprising entirely different systems. The precision and recall values predictably decrease further across all three architectural views. These results are reflective of the intuition that using datasets from different systems can create a more general-purpose model, but is also likely to add noise and reduce the model's ability to predict the properties of a specific system. Therefore, if a sufficiently large dataset for a given system is available, the system's prediction models should be trained only on that dataset. At the same time, it is interesting to note that the loss of accuracy between the ``10-fold'' and ``9~Others" models is relatively moderate: with few exceptions, it is on the order of 10-20\%. On the lower end, one example exception is PKG's precision for Wicket's issue-proneness (Table~\ref{tab:rq2_pkg_p}-top), where the discrepancy is only 3.5\%. On the higher end, an interesting exception are the precision and recall values obtained by ARC for Struts2 (Table~\ref{tab:rq2_arc_p}), which are both more than 30\% lower for the ``9~Others'' models. This ties to the above discussion of the limited types of smells that exist in Struts2: its uniqueness decreased the ability of other systems to predict its issue-proneness, just like it helped ensure highly accurate models when using only its own historical data. \begin{figure}[t] \vspace{-4mm} \centering \subfloat[Precision]{{\includegraphics[width=4.3cm]{rq_pics/rq2_precision_issueness.png} }}% \qquad \hspace{-.9cm} \subfloat[Recall]{{\includegraphics[width=4.3cm]{rq_pics/rq2_recall_issueness.png} }}% \vspace{-1mm} \caption{Predicting issue-proneness under ACDC.} \label{fig:compare_rq2} \end{figure} Figure \ref{fig:compare_rq2} shows a comparison of precision and recall between different combinations of ACDC based models. We observe that using data from ``9~Others'' systems can yield a relatively good prediction model with at least 50\% improvement compared to the baseline (0.5 vs. 0.33). In addition, the accuracy of ``All~10'' models lends support to a hypothesis that if a system has a short history of development, then including generic data can help improve predictive performance. We are currently evaluating this hypothesis more extensively. \vspace{3mm} \noindent\emph{B. 
Change-Proneness} \vspace{1mm} We observed analogous trends to those discussed above in the experiments that attempt to predict the change-proneness using unrelated systems' datasets. We elide this data for space. In summary, the results of the experiments conducted in the context of \emph{RQ2} confirm that software systems tend to share properties with respect to issue- and change-proneness. The accuracy of general-purpose models is lower than that of specific models, but the gap is not prohibitive. Our results suggest that developers can use general-purpose models to get an overall sense of the likely issue- and change-proneness of a new software system in the early stages of its development, before sufficiently large numbers of system versions become available. Similarly, developers can use such models to predict important properties of any existing systems for which historical data is missing, spotty, or unreliable. An interesting question is whether restricting general-purpose models to systems that are likely to share certain key characteristics can improve the models' predictive power. This is something we have not done in our current study: while the set of test systems we used share some characteristics (e.g., Java-based enterprise systems and Apache Projects), they are also inherently different systems targeting a variety of domains. Our ongoing work is investigating whether taking into account factors such as the role of the employed development processes, off-the-shelf frameworks, system design principles and patterns, application domains, etc. can be used to increase the accuracy of the general-purpose models. \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{rq_pics/rq4_hadoop_new.png} \includegraphics[width=7.5cm]{rq_pics/rq4_struts_new.png} \vspace{-1mm} \caption{Top-5 long-lived smelly files \\ in Hadoop (top) and Struts2 (bottom).} \label{fig:4_struts} \end{figure} \section{Detecting Architectural Smells} \label{sec:smells_detection} This section describes (1) three smell detection strategies and (2) corresponding detection algorithms for the 11 architectural smells defined in Section \ref{sec:smell_cata}. A detection strategy is a general approach to detect a group of smells. All architectural-smell detection algorithms have been implemented and applied to several systems. Each system has a large number of versions, allowing us to study the smells over the evolution of the system. Specifically, to study the effectiveness, performance, and scalability of our algorithms, we applied them to a total of 421 versions of eight widely used Apache open-source systems: Camel, Continuum, CXF, Hadoop, Nutch, OpenJPA, Struts2, and Wicket. The total amount of code analyzed by our algorithms was 376 MSLOC. Different systems' individual versions ranged in size from 118 KSLOC (Nutch) to 1.96 MSLOC (Hadoop). Average analysis times of our algorithms per system version ranged from few seconds (for \textbf{\textbf{detectUI}\xspace} shown in Algorithm \ref{alg:detectui_ub} below) to 10 minutes (for \textbf{\textbf{detectSD}\xspace} shown in Algorithm \ref{alg:detectsd}). For illustration purposes in this section, given the space constraints, we will only highlight one instance of each smell per system. To that end, we selected two Apache systems that contain examples of all 11 smells defined above: CXF, a widely used open source web services framework, and Nutch, a large open source web crawler, the parent to Hadoop. 
Table \ref{tab:subject_systems} shows the two subject systems, the respective numbers of versions we analyzed, and the average sizes of each version. \bgroup \def\arraystretch{1.25} \vspace{-5mm} \begin{table}[h] \centering \caption{Subject systems analyzed in our study} \scriptsize \begin{tabular}{|p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \toprule \textbf{System} & \textbf{Domain} & {\textbf{No. of Versions}} & \textbf{Average SLOC} \\ \midrule Apache CXF & Service Framework & 120 & 915K \\ Apache Nutch & Web Crawler & 21 & 118K \\ \bottomrule \end{tabular}% \label{tab:subject_systems}% \end{table}% \egroup \vspace{-10mm} \vspace{-4mm} \subsection{Smell Detection Algorithms} \vspace{-2mm} \noindent\textbf{Concern-based smells} \textit{\underline{Scattered Parasitic Functionality (SPF)}}: Algorithm \ref{alg:detectspf}, \textbf{detectSPF}\xspace, returns a map \textit{SPFsmells} where each key is a scattered concern $z$, and each value is a set of more than $\mathit{th_{spf}}$ components that exhibit the corresponding concern $z$ above the threshold $\mathit{th_{z_c}}$. Lines \ref{alg:detectspf:count_topics:start}-\ref{alg:detectspf:count_topics:end} create a map where keys are concerns and values are the number of components that have that concern above threshold $\mathit{th_{z_c}}$. The $\textit{getConcernsOfComponent}$ function in Line \ref{alg:detectspf:get_concern} returns the topic distribution of the component, which is computed by MALLET. Line \ref{alg:detectspf:th_spf} calculates the threshold $\mathit{th_{z_c}}$ for each component $c$, which helps determine representative topics of a component (Line \ref{alg:detectspf:th_z_c}). Line \ref{alg:detectspf:th_tc} calculates the threshold $\mathit{th_{spf}}$ on the number of components across which a concern may spread before it is considered scattered. Both thresholds are determined dynamically by the InterQuartile method. Lines \ref{alg:detectspf:detect:start}-\ref{alg:detectspf:detect:end} identify SPF instances by checking if concern \textit{z} appears in more than $th_{spf}$ components (Line \ref{alg:detectspf:th_spf_}).
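As an illustrative companion to the pseudocode in Algorithm \ref{alg:detectspf}, the following Python sketch mirrors its main steps. The representation of components as mappings from concerns to probabilities and the upper-inner-fence definition of the threshold function ($Q_3 + 1.5\,\mathrm{IQR}$) are simplifying assumptions made for readability; the sketch is not the tool's actual implementation.
\begin{verbatim}
# Illustrative re-implementation of detectSPF (not the tool's actual code).
# concern_probs: dict mapping component -> {concern: P(concern | component)}.
# get_high_threshold: upper inner fence (Q3 + 1.5 * IQR), assumed here as
# the "InterQuartile method" referred to in the text.
import numpy as np

def get_high_threshold(values):
    q1, q3 = np.percentile(list(values), [25, 75])
    return q3 + 1.5 * (q3 - q1)

def detect_spf(concern_probs, concerns):
    th_zc = {c: get_high_threshold(p.values())
             for c, p in concern_probs.items()}           # per-component threshold
    concern_counts = {z: 0 for z in concerns}
    for c, probs in concern_probs.items():                # count components per concern
        for z, p in probs.items():
            if p > th_zc[c]:
                concern_counts[z] += 1
    th_spf = get_high_threshold(concern_counts.values())  # scatter threshold
    spf_smells = {}
    for z, count in concern_counts.items():
        if count > th_spf:
            spf_smells[z] = {c for c, probs in concern_probs.items()
                             if probs.get(z, 0.0) > th_zc[c]}
    return spf_smells
\end{verbatim}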
\label{subsec:detection_algos} \begin{algorithm}[!th] \algsetup{linenosize=\tiny} \scriptsize \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \caption{\textbf{detectSPF}\xspace} \label{alg:detectspf} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $T$: a set of system concerns} \KwOut{$\mathit{SPFsmells}: $ a map where keys are concerns and values are components} $\mathit{SPFsmells} \leftarrow$ initialize map as empty\; $\mathit{concernCounts} \leftarrow$ initialize all concern counts to 0\; \For{$c \in C$}{ \label{alg:detectspf:count_topics:start} $T_c \leftarrow \mathit{getConcernsOfComponent(c)}$\; \label{alg:detectspf:get_concern} $\mathit{th_{z_c}} \leftarrow \mathit{getHighThreshold(P(T_c))}$\; \label{alg:detectspf:th_spf} \For{$z \in T_c$}{ \If{$P(z \mid c) > \mathit{th_{z_c}}$} { \label{alg:detectspf:th_z_c} $\mathit{concernCounts}[z] \leftarrow \mathit{concernCounts}[z] + 1$\; \label{alg:detectspf:count_topics:end} } } } $\mathit{th_{spf}} \leftarrow \mathit{getHighThreshold(concernCounts)}$\; \label{alg:detectspf:th_tc} \For{$z \in T$}{ \label{alg:detectspf:detect:start} \If{$\mathit{concernCounts[z]} > \mathit{th_{spf}}$} { \For{$c \in C$}{ \If{$P(z \mid c) > \mathit{th_{z_c}}$} { \label{alg:detectspf:th_spf_} $\mathit{SPFsmells[z]} \leftarrow \mathit{SPFsmells[z]} \cup \{c\}$\;\label{alg:detectspf:detect:end} } } } } \end{algorithm} In most versions of CXF, under the ARC view, we found a Scattered Parasitic Functionality instance where the $org.apache.cxf.BusFactory$ entity and its subclasses are scattered across different components. Even their parent packages are different. Although those classes address the same high-level concern, namely ``bus'', the subclasses implement different specific concerns. This is the main reason why ARC assigns them to different components. \textit{\underline{Concern Overload (CO)}}: Algorithm \ref{alg:detectbco}, \textbf{detectCO}\xspace, determines which components in the system have CO. The algorithm operates in a manner similar to \textbf{detectSPF}\xspace. \textbf{detectCO}\xspace begins by creating a map, $componentConcernCounts$, where keys are components and values are the number of relevant concerns in the component (Lines \ref{alg:detectbco:count_topics:start}-\ref{alg:detectbco:count_topics:end}). While creating the map, threshold $\mathit{th_{z_c}}$ is dynamically computed for each component (Line \ref{alg:detectbco:compute_threshold}) and used to determine prevalent concerns in each component. Later, \textbf{detectCO}\xspace uses that map to compute threshold $th_{co}$ in Line \ref{alg:detectbco:th_t}, which is then used to determine which components have the CO smell (Lines \ref{alg:detectbco:detect:start}-\ref{alg:detectbco:detect:end}). As with Scattered Parasitic Functionality, we also found a long-lived Concern Overload instance under the ARC view. The affected component contains most of the classes from the $org.apache.cxf.phase$ package. This component implements different steps of information processing in CXF, which include reading a message, transforming it, processing headers and validating the message. Although all these steps are related to message handling, putting all of them into a single component results in the CO smell.
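An analogous illustrative sketch of the CO detection logic is given below; it makes the same simplifying assumptions about the input data and the inner-fence threshold as the \textbf{detectSPF}\xspace sketch above and, likewise, is not the tool's actual implementation.
\begin{verbatim}
# Illustrative re-implementation of detectCO (not the tool's actual code).
# concern_probs: dict mapping component -> {concern: P(concern | component)};
# the upper inner fence (Q3 + 1.5 * IQR) is again assumed for the thresholds.
import numpy as np

def get_high_threshold(values):
    q1, q3 = np.percentile(list(values), [25, 75])
    return q3 + 1.5 * (q3 - q1)

def detect_co(concern_probs):
    # Count, for each component, how many concerns are prevalent in it.
    counts = {}
    for c, probs in concern_probs.items():
        th_zc = get_high_threshold(probs.values())
        counts[c] = sum(1 for p in probs.values() if p > th_zc)
    # A component has CO if it hosts an outlying number of prevalent concerns.
    th_co = get_high_threshold(counts.values())
    return {c for c, n in counts.items() if n > th_co}
\end{verbatim}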
\begin{algorithm}[!b] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectCO}\xspace} \label{alg:detectbco} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $T$: a set of system concerns} \KwOut{$\mathit{COsmells}: $ a set of Component Concern Overload\xspace instances } $\mathit{COsmells} \leftarrow \emptyset$\; $\mathit{componentConcernCounts} \leftarrow$ initialize all component concern counts to 0\; \For{$c \in C$}{ \label{alg:detectbco:count_topics:start} $T_c \leftarrow \mathit{getConcernsOfComponent(c)}$\; $\mathit{th_{z_c}} \leftarrow \mathit{getHighThreshold(P(T_c))}$\; \label{alg:detectbco:compute_threshold} \For{$z \in T_c$}{ \If{$P(z \mid c) > \mathit{th_{z_c}}$} { $\mathit{componentConcernCounts}[c] \leftarrow \mathit{componentConcernCounts}[c] + 1$\; \label{alg:detectbco:count_topics:end} } } } $\mathit{th_{co}} \leftarrow \mathit{getHighThreshold(componentConcernCounts)}$\; \label{alg:detectbco:th_t} \For{$c \in C$}{ \label{alg:detectbco:detect:start} \If{$\mathit{componentConcernCounts[c]} > \mathit{th_{co}}$} { $\mathit{COsmells} \leftarrow \mathit{COsmells} \cup \{c\}$\;\label{alg:detectbco:detect:end} } } \end{algorithm} \noindent\textbf{Dependency-based smells} ~ \textit{\underline{Dependency Cycle (DC)}}: We detect DC smells by identifying \emph{strongly connected components} in a software system's architectural graph $G = (C,L)$. A strongly connected component is a graph or subgraph where each vertex is reachable from every other vertex. Each strongly connected component in $G$ is a Component Dependency Cycle\xspace. Any algorithm that detects strongly connected components \cite{dijkstra1976discipline,leiserson2001introduction} can then be used to identify DC. Therefore, we do not include the detection algorithm for DC in this paper. Both Nutch and CXF have had one instance of the Dependency Cycle smell since their early versions. These instances persist throughout both systems' life cycles. As both systems evolve, the cycles increase in size by involving more components. This observation holds across all three architectural views. \textit{\underline{Link Overload (LO)}}: Algorithm \ref{alg:detectlo}, \textbf{detectLO}\xspace, extracts the LO variants for a set of components $C$ by examining their links $L$. The algorithm first determines the number of incoming, outgoing, and combined links per component (Lines \ref{alg:detectlo:numlinks:start}-\ref{alg:detectlo:numlinks:end}). \textbf{detectLO}\xspace sets the threshold $th_{\mathit{lo}}$ for each variant of LO by computing it separately for incoming, outgoing, and combined links (Lines \ref{alg:detectbco:th_lo:start}-\ref{alg:detectbco:th_lo:end}).
The last part of \textbf{detectLO}\xspace identifies each component and the directionality that indicates the variant of LO the component suffers from (Lines \ref{alg:detectlo:detect:start}-\ref{alg:detectlo:detect:end}). \vspace{-7mm} \begin{algorithm}[!th] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectLO}\xspace} \label{alg:detectlo} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of components, $L$: links between components} \KwOut{$\mathit{LOsmells}: $ a set of Link Overload\xspace instances } $\mathit{LOsmells} \leftarrow \emptyset$\; $\mathit{numLinks} \leftarrow$ initialize map as empty\; $\mathit{directionality} \leftarrow \{``in",``out",``both"\}$\; \For{$c \in C$}{ \label{alg:detectlo:numlinks:start} \For{$d \in directionality$} { $numLinks[(c,d)] \leftarrow numLinks[(c,d)] + \mathit{getNumLinks(c,d,L)}$\;\label{alg:detectlo:numlinks:end} } } \For{$d \in directionality$} { \label{alg:detectbco:th_lo:start} $\mathit{th_{lo}[d]} \leftarrow \mathit{getHighThreshold(numLinks,d,C)}$\; \label{alg:detectbco:th_lo:end} } \For{$c \in C$}{ \label{alg:detectlo:detect:start} \For{$d \in directionality$} { \If{$\mathit{getNumLinks(c,d,L)} > \mathit{th_{lo}[d]}$} { $\mathit{LOsmells} \leftarrow \mathit{LOsmells} \cup \{(c,d)\}$\;\label{alg:detectlo:detect:end} } } } \end{algorithm} \vspace{-8mm} In Nutch, we found that some components suffered from Link Overload, especially those related to web user interfaces, such as \seqsplit{$org.apache.nutch.webui.pages.crawls$} or \seqsplit{$org.apache.nutch.webui.pages.instances$} in the PKG view. These components were created with many inner classes within the main classes. \noindent\textbf{Interface-based smells} \textit{\underline{Unused Interface (UI)}} and \textit{\underline{Unused Component (UC):}} As we mentioned in Section \ref{subsec:formal_arch_smells}, UC is the extreme case of UI. Algorithm \ref{alg:detectui_ub}, \textbf{detectUI}\xspace, allows us to detect both of these smells. \textbf{detectUI}\xspace uses the set of links $L$ to determine whether an interface has been used or not. The algorithm checks every entity in each component (Lines \ref{alg:detectui_ub:numlinks:start}-\ref{alg:detectui_ub:numlinks:end}), and if an entity has a public interface but no link, then the entity and its parent component are added to the UI instances list. Line \ref{alg:detectui_ub:checkub} uses a boolean flag, \textit{isUC}, to mark that a component does not have UC if at least one of its entities' public interfaces is actually used. Line \ref{alg:detectui_ub:addub} checks and adds UC instances to the smell list. In Nutch, we found that the \textit{SequenceFileInputFormat} class was unused in some 1.x versions (in each view, the parent component of this class was affected by the Unused Interface smell). This class had been removed in version 2.0. We assume that developers noticed and decided to remove this unused class. We could not find any instances of Unused Component in Nutch, but we did find one in CXF. Under the PKG view, the $org.apache.cxf.simple$ component had been unused from version 2.0.6 to version 2.2.9. Later, it was used again from version 2.2.10.
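The following illustrative sketch mirrors the UI/UC logic of Algorithm \ref{alg:detectui_ub} on a simplified component model; the attribute names (\texttt{entities}, \texttt{num\_interfaces}, \texttt{num\_links}) are assumptions made for readability and do not correspond to the tool's actual data structures.
\begin{verbatim}
# Illustrative re-implementation of detectUI (not the tool's actual code).
# Each component is assumed to expose .entities, and each entity its number
# of public interfaces (num_interfaces) and of links (num_links).
def detect_ui_uc(components):
    ui_smells, uc_smells = [], []
    for c in components:
        is_uc = True
        for e in c.entities:
            if e.num_interfaces > 0:
                if e.num_links == 0:
                    ui_smells.append((c, e))   # public interface, never used
                else:
                    is_uc = False              # at least one interface is used
        if is_uc:
            uc_smells.append(c)                # no interface of c is used at all
    return ui_smells, uc_smells
\end{verbatim}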
\begin{algorithm}[t] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectUI}\xspace} \label{alg:detectui_ub} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of bricks, $L$: links between components} \KwOut{$\mathit{UIsmells}: $ a set of Unused Interface instances, $\mathit{UCsmells}: $ a set of Unused Brick instances } $\mathit{UIsmells} \leftarrow \emptyset$, $\mathit{UCsmells} \leftarrow \emptyset$\; \For{$c \in C$}{ \label{alg:detectui_ub:numlinks:start} $isUC \leftarrow true$ \\ \For{$e \in c.E$} { \If{$\mathit{getNumInterfaces(e.I)} > 0$} { \If{$getNumLinks(e.L)= 0$} { $\mathit{UIsmells} \leftarrow \mathit{UIsmells} \cup \{(c,e)\}$\; } \label{alg:detectui_ub:numlinks:end} \Else{ $isUC \leftarrow false$ \label{alg:detectui_ub:checkub} } } } \If{$isUC$} { $\mathit{UCsmells} \leftarrow \mathit{UCsmells} \cup \{(c)\}$\; \label{alg:detectui_ub:addub} } } \end{algorithm} \textit{\underline{Sloppy Delegation (SD):}} Algorithm \ref{alg:detectsd}, \textbf{detectSD}\xspace, requires a threshold $th_{sd}$, which defines the minimum number of in-links to consider a delegation appropriate. The algorithm checks every link in each entity (Lines \ref{alg:detectsd:numlinks:start}-\ref{alg:detectsd:numlinks:end}), and if a link has a \textit{`dst'} entity which satisfies the checking condition of SD (defined in Section \ref{subsec:formal_arch_smells}), then the \textit{`dst'} entity and its parent component are added to the list of SD instances (Line \ref{alg:detectsd:addsd}). In Nutch, we found that $org.apache.nutch.crawl$ and \seqsplit{$org.apache.nutch.scoring.webgraph$} are two components that are heavily affected by the Sloppy Delegation smell. Out of the three architectural views, ACDC provides a clustering that yields the fewest SD smells. However, those two components still need refactoring. \vspace{-8mm} \begin{algorithm}[!bh] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectSD}\xspace} \label{alg:detectsd} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of bricks, $L$: links between components, ~~~~~~~~$\mathit{th_{sd}}$: threshold for relevance delegation} \KwOut{$\mathit{smells}: $ a set of Sloppy Delegation instances } $\mathit{smells} \leftarrow \emptyset$\; \For{$c_1 \in C$}{ \label{alg:detectsd:numlinks:start} \For{$e_1 \in c_1.E$} { \For{$l \in e_1.L$}{ \If{$l.src=e_1$}{\label{alg:detectsd:numlinks:end} $e_2 \leftarrow l.dst$\; $c_2 \leftarrow getParent(e_2)$\; \If{$(e_1 \not\equiv e_2) \land (getOutLink(e_2)=0) $\; ~~~~~~~~~~~~~~$\land (getInLink(e_2) <th_{sd})$}{ $\mathit{smells} \leftarrow \mathit{smells} \cup \{((c_1,e_1),(c_2,e_2))\}$\; } \label{alg:detectsd:addsd} } } } } \end{algorithm} \vspace{-7mm} \textit{\underline{Function Overload (FO)}} and \textit{\underline{Lego Syndrome (LS):}} These two smells come as a pair, indicating overloaded and underloaded functionality in components, respectively. Algorithm \ref{alg:detectfo_ls}, \textbf{detectFO\_LS}\xspace, allows us to detect both smells in one run. The algorithm first creates a map between components and their numbers of interfaces (Lines \ref{alg:detectfo_ls:countI:start}-\ref{alg:detectfo_ls:countI:end}). This map is then used to compute two thresholds, $th_{fo}$ and $th_{ls}$, i.e., the high (Line \ref{alg:detectfo_ls:computefo}) and low values (Line \ref{alg:detectfo_ls:computels}) of the inner fences, respectively.
Finally, the algorithm revisits each component, checking and adding detected smell instances into two smell lists (Lines \ref{alg:detectfo_ls:check:start}-\ref{alg:detectfo_ls:check:end}). Across all 1.x versions of Nutch under the PKG view, we found that one component, \textit{org.apache.nutch.crawl}, has Function Overload and two other components, \textit{org.apache.nutch.net.protocols} and \textit{org.apache.nutch.tool.arc}, have Lego Syndrome. \vspace{-2mm} \begin{algorithm}[!t] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectFO\_LS}\xspace} \label{alg:detectfo_ls} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of bricks, $L$: links between components} \KwOut{$\mathit{FOsmells}: $ a set of Functionality Overload , $\mathit{LSsmells}: $a set of Lego Syndrome instances } $\mathit{FOsmells} \leftarrow \emptyset$, $\mathit{LSsmells} \leftarrow \emptyset$\; $\mathit{numInterfaces} \leftarrow$ initialize map as empty\; \For{$c \in C$}{ \label{alg:detectfo_ls:countI:start} \For{$e \in c.E$} { $numInterfaces[c] \leftarrow numInterfaces[c] + \mathit{getNumInterfaces(e.I)}$\;\label{alg:detectdf_cc:numlinks:end} } } \label{alg:detectfo_ls:countI:end} $\mathit{th_{fo}} \leftarrow \mathit{getHighThreshold(numInterfaces,C)}$\; \label{alg:detectfo_ls:computefo} $\mathit{th_{ls}} \leftarrow \mathit{getLowThreshold(numInterfaces,C)}$\; \label{alg:detectfo_ls:computels} \For{$c \in C$}{ \label{alg:detectfo_ls:check:start} \If{$\mathit{numInterfaces[c]} > \mathit{th_{fo}}$} { $\mathit{FOsmells} \leftarrow \mathit{FOsmells} \cup \{(c)\}$\;\label{alg:detectfo_ls:addfo} } \ElseIf{$\mathit{numInterfaces[c]} < \mathit{th_{ls}}$} { $\mathit{LSsmells} \leftarrow \mathit{LSsmells} \cup \{(c)\}$\;\label{alg:detectfo_ls:addls} \label{alg:detectfo_ls:check:end} } } \end{algorithm} \begin{algorithm}[!t] \renewcommand{\AlCapSty}[1]{\small\small{\textbf{#1}}\unskip} \scriptsize \caption{\textbf{detectCC}\xspace} \label{alg:detectdf_cc} \SetAlgoVlined \LinesNumbered \DontPrintSemicolon \SetInd{0.3em}{0.6em} \KwIn{$C$: a set of bricks, $Cp$: couplings between components} \KwOut{$\mathit{DFsmells}: $ a set of Duplicate Functionality , $\mathit{CCsmells}: $a set of Co-change Coupling instances } $\mathit{DFsmells} \leftarrow \emptyset$, $\mathit{CCsmells} \leftarrow \emptyset$, \; $\mathit{numDu} \leftarrow$ initialize map as empty\; $\mathit{numCo} \leftarrow$ initialize map as empty\; \For{$c \in C$}{ \label{alg:detectdf_cc:numCouplings:start} \For{$e \in e.E$} { $numDu[c] \leftarrow numDu[c] + \mathit{getNumDu(c,e.Du)}$\; $numCo[c] \leftarrow numCo[c] + \mathit{getNumCo(c,e.Co)}$\; } \label{alg:detectdf_cc:numCouplings:end} } $\mathit{th_{df}} \leftarrow \mathit{getHighThreshold(numDu,C)}$\label{alg:detectdf_cc:computedf}\; $\mathit{th_{cc}} \leftarrow \mathit{getHighThreshold(numCo,C)}$\label{alg:detectdf_cc:computecc}\; \For{$c \in C$}{ \label{alg:detectdf_cc:detect:start} \If{$\mathit{numDu[c]} > \mathit{th_{df}}$} { $\mathit{DFsmells} \leftarrow \mathit{DFsmells} \cup \{(c)\}$\; } \If{$\mathit{numCo[c]} > \mathit{th_{cc}}$} { $\mathit{CCsmells} \leftarrow \mathit{CCsmells} \cup \{(c)\}$\; } \label{alg:detectdf_cc:detect:end} } \end{algorithm} \vspace{2mm} \noindent\textbf{Coupling-based smells} \textit{\underline{Duplicate Functionality (DF)}} and \textit{\underline{Co-changes Coupling}} \textit{\underline{(CC):}} Detecting coupling-based smells is similar to detecting Link Overload. 
The difference is that the detection algorithms use couplings instead of links as their input. Detecting DF depends on \textit{duplicates}, and detecting CC depends on \textit{co-changes}. Algorithm \ref{alg:detectdf_cc}, \textbf{detectCC}\xspace, shows how to detect these two types of smells. The algorithm first creates two maps between components and their numbers of duplicates as well as co-changes (Lines \ref{alg:detectdf_cc:numCouplings:start}-\ref{alg:detectdf_cc:numCouplings:end}). \textbf{detectCC}\xspace uses these maps to compute two thresholds, $th_{df}$ and $th_{cc}$, which are the high inner-fence values (Lines \ref{alg:detectdf_cc:computedf}-\ref{alg:detectdf_cc:computecc}). Finally, the algorithm visits each component again, checks and adds detected smell instances into two smell lists (Lines \ref{alg:detectdf_cc:detect:start}-\ref{alg:detectdf_cc:detect:end}). Under the PKG view of CXF, we found that the component \textit{org.apache.cxf.interceptor} has the Co-changes Coupling smell along with 6 other components in version 2.x. Later on, one CC instance has been removed in version 3.x. However, new CC smell instances were also introduced. We also found \seqsplit{$org.apache.cxf.ws.security.wss4j.policyvalidators$} and $org.apache.cxf.jibx$ to be strongly affected by Duplicate Functionality. Almost all entities in those two components have duplications with entities in other components. \section{Selecting, Capturing, and Detecting Smells} \label{sec:smell_cata} \section{Threats to Validity} \label{sec:thread_to_validity} \looseness-1 The key threats to \textbf{external validity} include our subject systems. Most of the steps in our data gathering process are automated. However, manual intervention is required since each system has different implementation conventions. Due to the manually-intensive data gathering process, we have used data from ten subject systems in our dataset. We mitigate a possible threat stemming from the number of systems by using data from their 466 versions and evaluating 720 prediction models. All our subject systems are Apache projects, implemented in Java, and use the Jira issue tracking system. The reason for this is that it helped to simplify our data gathering and analysis workflow. In our on-going work, we are expanding our analysis beyond Apache. The diversity of the chosen systems, however, helps to reduce this threat, as does the wide adoption of Apache software, Java, and Jira. Further, all the recovery techniques and smell definitions in this paper are language-independent. % Our study's \textbf{construct validity} is threatened by (1)~the accuracy of the recovered architectural views, (2) the detection of architectural smells, and (3) the relevance of implementation issues. To mitigate the first threat, we applied three architecture recovery techniques (ACDC, ARC, and PKG) that had previously exhibited the greatest usefulness in an extensive comparative analysis of available techniques \cite{garcia2013comparative} and in a study of architectural change during system evolution \cite{leempirical, behnamghader2016large, msr2018}. The three techniques were developed independently and use different strategies for recovering a system's architecture. To mitigate the second threat, we selected architectural smell types that were previously studied on a smaller scale \cite{macia2012automatically,mo2013mapping,le2016architectural, garcia2009identifying,garcia2009toward}, and were shown to be strong indicators of architectural problems. 
Finally, to mitigate the third threat, we only collected ``resolved'' and ``closed'' issues, i.e., those issues that have been independently verified and fixed by developers. The primary threat to our study's \textbf{internal validity} and \textbf{conclusion validity} concerns the predictive relationship between reported implementation issues and architectural smells. Our prediction models are built on significant correlations between architectural smells and implementation issues, which have been confirmed in other work~\cite{icsa2018duc}. Although correlation does not imply causality, we have shown examples of the causal relationship's existence. Prior work has also confirmed the causality between implementation issues and architectural smells via manual inspection \cite{Xiao:2014:DPA:2635868.2661679, 10.1007/978-3-030-00761-4_21}. In addition, our observations are consistent across the ten systems.
\section{Introduction} \label{sec:intro} The successful launch of the James Webb Space Telescope (JWST, \citealt{2006SSRv..123..485G}) and current progress in building of the ground-based extremely large telescopes (ELTs) are opening a new era of in-depth studies of extrasolar planets \citep{2014PASP..126.1134B,2015PASP..127..311C,2016ApJ...817...17G}. Specifically, detailed atmospheric characterization of transiting temperate terrestrial exoplanets will soon be possible \citep{deWit2013,2013ApJ...764..182S,2017ApJ...850..121M}, and it will provide us with the first opportunity to detect chemical traces of life beyond our solar system \citep{Kaltenegger2009, Seager2009}. This opportunity relies on the discovery of suitable targets, i.e., habitable terrestrial planets transiting a host star bright and small enough to lead to adequate signal-to-noise ratios for detection of biosignatures through eclipse spectroscopy \citep{2016AsBio..16..465S}. In this framework, the best target for biosignatures detection would be a habitable terrestrial planet transiting one of the nearest ultracool dwarfs (UCDs), i.e., very-low-mass stars and brown dwarfs with effective temperatures lower than 2700\,K, luminosities smaller than $\mathrm{10^{-3}\,L_{\odot}}$, where $\mathrm{L_{\odot}}$ is the solar luminosity, and spectral types later than M6 \citep{2005ARA&A..43..195K, 2011ApJ...743...50C}. One of the first surveys, which tried to explore transiting exoplanets around UCDs was conducted using the Peters Automated InfRared Imaging TELescope (PAIRITEL), which observed a small set of 13 UCDs in 2004 and 2005 for a period of 10 months \citep{2008PASP..120..860B}. This project did not report any detections, most likely because of the small number of explored targets and the small number of accumulated observing hours per target. Though not uniquely focused on UCDs, APACHE \citep{2013EPJWC..4703006S} and MEarth \citep{2015csss...18..767I} ground-based surveys have been exploring mid-to-late M-dwarfs for the presence of transiting exoplanets since 2010s. The first rocky exoplanet amenable for atmospheric research was discovered around an M4.5 dwarf star GJ1132 by MEarth in 2015 \citep{2015Natur.527..204B}. TESS (Transiting Exoplanet Survey Satellite; \citealt{2014SPIE.9143E..20R}) is designed for detecting planets around G to mid M-dwarf stars and its transit detection sensitivity is low for M-dwarf stars later than M5 due to their faintness in TESS filter. However, TESS exoplanet transit detection is possible for the brightest late M-dwarfs (see two super-Earths discovered by TESS around M5 dwarf star LHS~3844 \citep{2019ApJ...871L..24V} and M6 dwarf star LP~791-18 \citep{2019ApJ...883L..16C}). Ongoing surveys, such as ExTrA \citep{Bon2015} and EDEN \citep{2020AJ....159..169G} targeting specifically late M-dwarfs, and PINES \citep{2022AJ....163..253T} targeting L- and T-type dwarfs have recently started their operations and have not reported any detections yet. To date, there is only one known system with transiting planets orbiting a UCD: it is the TRAPPIST-1 system with seven exoplanets that form a unique near-resonant chain. Remarkably, three of the TRAPPIST-1 planets are located in the habitable zone of the host star \citep{2016Natur.533..221G,2017Natur.542..456G,2017NatAs...1E.129L,2021PSJ.....2....1A}. This planetary system was discovered in the context of the TRAPPIST UCD transit survey \citep{2013EPJWC..4703001G,2020MNRAS.497.3790L}. 
This survey has observed 50 brightest southern UCDs for about 100\,hr each with the TRAPPIST-South telescope located at the La Silla Observatory in Chile \citep{2011Msngr.145....2J,2011EPJWC..1106002G}. Most importantly, the survey served as a prototype for a more ambitious search for exoplanets around UCDs -- SPECULOOS. SPECULOOS stands for \textbf{S}earch for habitable \textbf{P}lanets \textbf{EC}lipsing \textbf{UL}tra-c\textbf{OO}l \textbf{S}tars and it is a ground-based transit survey led by the University of Li\`ege (Belgium), which consists of six identical 1-m robotic telescopes. The immediate goal of the project is to detect temperate terrestrial planets transiting nearby ($< 40\,\mathrm{pc}$) UCDs, which are bright enough in the near-IR to make possible the atmospheric characterization of their planets with JWST and ELTs. Upon completion of the survey, we will be able to reach the ultimate goal: determining the frequency of short-period Earth-sized planets around UCDs and further constrain planet formation theories, which currently predict UCD planets ranging from metal-rich Mercury-sized planets to volatile-rich Earth-sized planets (e.g., \citealt{2007ApJ...669..606R,2009Icar..202....1M,2020MNRAS.491.1998M}). SPECULOOS consists of three nodes: four telescopes are installed at the ESO Paranal Observatory (Atacama Desert, Chile) and they compose the SPECULOOS Southern Observatory (SSO), which has been operational since January 2019. The second node is SPECULOOS Northern Observatory (SNO), which is currently composed of one 1m-aperture telescope that is located at the Teide Observatory (Canary Islands, Spain) and which has been operational since June 2019. The third node is the SAINT-EX telescope (Search And characterIsatioN of Transiting EXoplanets; \citealt{2020A&A...642A..49D}). It is a 1-m telescope located at the National Astronomical Observatory of Mexico (San Pedro M\'artir, Mexico), which contributes to the SPECULOOS project by observing UCD targets for 80\% of its observation time since March 2019. For the next 10 years, these 6 telescopes aim to observe 1700 nearby UCDs brighter than $K$=12.5\,mag to search for TRAPPIST-1-like systems. \begin{figure}[ht!] \begin{center} \includegraphics[height=0.5\textwidth]{Images/artemis12.jpeg} \caption{The Artemis telescope. Note a set of lightweight black metallic petals attached to the optical tube assembly, which protect the primary mirror when the telescope is not observing.} \end{center} \label{fig:sno} \end{figure} \begin{figure*}[ht!] \begin{center} \includegraphics[height=5.75cm]{Images/artemis13.png} \includegraphics[height=5.75cm]{Images/artemis_from_loperz.png} \end{center} \caption{Left: Technical drawing of the facility (credit: Astelco Systems): 1 -- control room, 2 -- dome, 3 -- pillar, 4 -- counterweight, 5 -- direct drive NTM-1000 mount, 6 -- CCD camera and filter wheel, 7 -- primary mirror, 8 -- secondary mirror, 9 -- focuser. Right: View of the Artemis dome with the Teide volcano in the background (credit: D.~López).} \label{fig:sno-outside} \end{figure*} \cite{2018NatAs...2..344G}, \cite{2018Msngr.174....2J}, and \cite{2018haex.bookE.130B} provide a general overview of the SPECULOOS project, while the work of \cite{2018SPIE10700E..1ID} provides technical details of the survey and evaluates preliminary photometric performance of two newly installed telescopes of SSO. 
Performance of all four SSO telescopes during the first year of operations and a dedicated SPECULOOS photometric data reduction pipeline are presented in \cite{2020MNRAS.495.2446M}. The survey target list and observational strategy for different SPECULOOS programs are presented in \cite{2021A&A...645A.100S}, whereas the latest developments of the project as a whole can be found in the work of \cite{2020SPIE11445E..21S}. In this paper, we report the developments of SNO and present its performance during the first three years of operations from mid-2019 to mid-2022. The paper is organized as follows: in Section~\ref{sec:SNO_facility}, we present technical information about the observatory and its operations. Section~\ref{sec:weather} is dedicated to the discussion of statistics of observations. In Section~\ref{sec:photometry}, we describe the photometric performance of the first SNO telescope and provide examples of early scientific results. We review some of the complementary science projects in Section~\ref{sec:complementary}. In Section~\ref{sec:discuss_conclusions}, we discuss the results from the first three years of operations and outline future prospects of the observatory. \section{SPECULOOS North}\label{sec:SNO_facility} SNO is located at the Teide Observatory on the island of Tenerife (Canary Islands, Spain). It is envisioned as a twin observatory to SSO, and it is operated by the Massachusetts Institute of Technology (MIT, USA) and the University of Li\`ege (ULi\`ege, Belgium), in collaboration with the Instituto de Astrof\'isica de Canarias (IAC, Spain). Currently, SNO is composed of one telescope, which is named Artemis\footnote{Following the Greek-mythology naming used for the SPECULOOS South telescopes (where the telescopes were named after the Galilean moons -- Europa, Io, Callisto and Ganymede), the first SPECULOOS North telescope is named after Artemis -- the goddess of wilderness, vegetation and hunting.} (see Fig.~\ref{fig:sno} and Fig.~\ref{fig:sno-outside}). It was installed in April and commissioned in June 2019. All technical information is summarized in Table~\ref{tab:general_info}. \begin{table}[h] \centering \caption{General information about the Artemis telescope.} \begin{tabular}{ll} \hline \hline Geographical coordinates & 28$^\circ$18\arcm01.44\arcm\arcm\,N,\\ & 16$^\circ$30\arcm41.04\arcm\arcm\,W,\\ & 2440\,m\\ Diameter of the primary mirror & 1\,m\\ Diameter of the secondary mirror & 0.28\,m\\ Focal length & 8\,m\\ Focal ratio & f/8\\ Mount & Direct-drive German\\ & equatorial NTM-1000\\ CCD name & Andor iKon-L\\ & BEX2-DD-9TW\\ CCD size & $\mathrm{2K\times2K}$\\ CCD pixel size & 13.5\,$\mu$m\\ CCD pixel scale & 0.35\,$\mathrm{arcsec\,pixel}^{-1}$\\ Field of view & $12\,\times12\,\mathrm{arcmin^2}$\\ Filter wheel & FLI CFW3-10\\ Installed filters & Sloan-$g'$, -$r'$, -$i'$, -$z',$\\ & $I+z'$, blue-blocking,\\ & z cut\\ \hline \end{tabular} \label{tab:general_info} \end{table} Artemis is a Ritchey-Chr\'etien telescope with 1-m primary and 0.28-m secondary mirrors, and a correction lens, which together provide a focal length of 8\,m (focal ratio f/8). Both mirrors are coated with pure aluminium. The telescope was manufactured in Germany by Astelco Systems\footnote{\url{www.astelco.com}}. In contrast to SSO telescopes, Artemis has a set of lightweight metallic petals attached to the optical tube assembly (OTA; see Fig.~\ref{fig:sno}).
These petals are closed during the daytime and protect the primary mirror from dust particles and various debris coming from the local flora and fauna. The OTA is installed on the German equatorial mount NTM-1000 with direct-drive motors, which provides precise pointing and smooth tracking with no periodic errors and no need for hardware guiding. However, the software autoguiding algorithm \texttt{DONUTS} is used to keep stars on the same spots of the CCD (charge-coupled device) with sub-pixel precision to improve photometric performance \citep{2013PASP..125..548M}. Thanks to the special mount pillar, the telescope has no meridian flip and allows night-long tracking of targets (see the left panel of Fig.~\ref{fig:sno-outside}). The telescope is enclosed in a circular 6.25-m classic robotic Astelco dome, with its controls integrated into the telescope's control software (TCS). Adjacent to the circular dome building, a control room houses the telescope's electrical cabinet and control computers. The control room of the Artemis telescope is also designed to serve as a hub of operations for planned additional SNO telescopes. Similar to other SPECULOOS telescopes, Artemis is equipped with an Andor\footnote{\url{www.andor.oxinst.com}} iKon-L Peltier-cooled deep-depletion fringe-suppressed CCD camera (BEX2-DD-9TW). Its size is $\mathrm{2K\times2K}$ with a pixel size of 13.5~$\mu$m. The field of view is $12\,\times12\,\mathrm{arcmin^2}$ and the corresponding pixel scale is 0.35\,$\mathrm{arcsec\,pixel}^{-1}$. It is usually operated in 1\,MHz readout mode, at -60$^\circ$C with a gain of 1\,e$^{-}$/ADU, read-out noise of 6\,e$^{-}$, and dark current of 0.1 e$^{-}$/s/pixel. The CCD detector is sensitive from the near-UV to the near-IR (350-950\,nm), with a maximum quantum efficiency of 94\% at both 420 and 740\,nm. The CCD camera is coupled with a Finger Lake Instruments\footnote{\url{www.flicamera.com}} CFW3-10 filter wheel for ten 50-mm square filters. The following set of filters is currently installed at Artemis: Sloan-$g'$, -$r'$, -$i'$, -$z'$ filters, custom exoplanet filters $I+z'$ (transmittance $>$90\% from 750\,nm to beyond 1000\,nm) and ``blue-blocking'' (transmittance $>$90\% from 500\,nm to beyond 1000\,nm), and a ``z cut'' filter, which suppresses the effect of atmospheric water absorption (transmittance $>90\%$ from 860\,nm to 1100\,nm). This setup is optimized to observe UCDs up to $J$=14\,mag, and to obtain their high-precision light curves ($\sim$0.1\%) with a sampling time of a few minutes. We note that the wide near-IR $I+z'$ filter is our standard filter and is used for observing the majority of the SPECULOOS targets. Usage of the $I+z'$ filter allows us to minimize the effect of atmospheric extinction, which is most prominent in the blue wavelength range, and to collect as many photons as possible from UCDs, as their spectral energy distributions peak at near- and mid-IR wavelengths (see Fig.~7 from \citealt{2018SPIE10700E..1ID} for the $I+z'$ filter transmission curve). \subsection{Auxiliary devices} SNO has a broad set of auxiliary devices, which enable smooth and safe operations of the facility. Specifically, the Boltwood\footnote{\url{www.diffractionlimited.com/product/boltwood-cloud-sensor-ii/}} Cloud Sensor II is used as a weather station. It is installed on a mast near the telescope building (Fig.~\ref{fig:mast}).
It measures the presence of rain droplets, proximity to the dew point, ambient temperature, cloud cover using an IR sensor, wind speed using a friction sensor, relative humidity, and the amount of sunlight. The Boltwood sensors trigger weather alerts, which are received by a control computer. It also features a direct connection to the dome, which enables closure of the slit in case the control computer is not operational. Additionally, an independent rain sensor is installed on the telescope building. It can trigger dome closure if the Boltwood weather station is not working. The mast also has a GPS receiver (164DHS from Meinberg Radio Clocks GmbH), which is used as a source of precise time, and the Alcor System Cyclope seeing monitor\footnote{\url{www.alcor-system.com/new/SeeingMon/Cyclope.html}}, which is used to measure the seeing using Polaris. An outside webcam and the SBIG 340 all sky camera\footnote{\url{www.diffractionlimited.com/product/all-sky-340-cameras/}} are installed on the mast as well and are used by the telescope operators to assess weather conditions. An uninterruptible power supply (UPS) keeps Artemis running for at least 30\,min during an electrical power cut; an emergency shutdown is triggered at the end of this period. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.4\textwidth]{Images/mast.png} \end{center} \caption{The mast with auxiliary devices: weather station, seeing monitor, GPS receiver, all sky camera, webcam.} \label{fig:mast} \end{figure} \subsection{Operations}\label{subsec:operations} The Artemis telescope is controlled by the \texttt{ACP} Observatory Control Software\footnote{\url{www.acp.dc3.com}}, which provides a high level of automation. A human telescope operator is needed only during startup: the operator ensures the successful start of an \texttt{ACP} observing sequence before the local sunset. After that, \texttt{ACP} handles all the following operations: opening the dome after sunset, taking flat images using the twilight sky, and executing science observations, which start when the Sun's altitude is -9 degrees. At the end of the night, morning twilight flat images are taken, and when the slit of the dome is closed, calibration images are taken (bias and dark). We use a custom-made addition to \texttt{ACP}, which enables automatic restart of the observations (with no need for human intervention) after a period of safe conditions following a weather trigger (high wind and/or clouds). Observing commands for \texttt{ACP} are contained in text scripts, which are created automatically before the beginning of the night using \texttt{SPOCK}\footnote{\url{www.github.com/educrot/SPOCK}} (SPeculoos Observatory sChedule maKer; \citealt{2020SPIE11445E..21S}). Typically, the telescope observes 1-2 targets per night (with an airmass upper limit of 2.4), aiming to accumulate 100-200 hours for each SPECULOOS target. To ensure continuous operations of SNO, the facility is regularly checked. Under the agreement with the IAC, its staff visually checks that the slit is closed every day before sunrise. Every two weeks, IAC staff enters the facility to inspect it for any signs of technical problems that cannot be seen remotely. \subsection{Data flow}\label{subsec:data} Depending on the exposure time (and thus on the number of images), Artemis typically produces 4-16\,GB of data every night. After the end of the local night, science and calibration images (bias, dark and flat) are transferred to an archive at the University of Li\`ege.
From there, the images are transferred to a storage server at the University of Cambridge (UK), where they are processed by the SPECULOOS South data reduction pipeline \citep{2020MNRAS.495.2446M}. The same set of images is also transferred from the archive at the University of Li\`ege to a storage server at MIT, where it is processed by the \texttt{Prose}\footnote{\url{www.github.com/lgrcia/prose}} pipeline \citep{2022MNRAS.509.4817G} to produce an independent set of light curves. Both pipelines perform standard image reduction steps (bias, dark, and flat-field corrections) and use an automated iterative differential photometry algorithm. The raw light curve of a target star is divided by an `artificial' comparison light curve, which is constructed by weighting sufficiently bright comparison stars according to their variability and distance to the target. Once the last night's data are processed, they are available at the SPECULOOS PORTAL \citep{2020SPIE11445E..21S}. Inspection of the light curves (including the ones from other SPECULOOS telescopes) at the PORTAL is done visually, as we aim at detecting single transit events. Processed data are ready for inspection before the beginning of the next night. Such timely data processing allows us to spot any problems with the telescope and/or to trigger follow-up observations of targets of interest. When observations of a target from the SPECULOOS catalog are fully completed, a global light curve is created by applying the differential photometry algorithm to the entire time series at once (which can span several weeks or months). Global light curves are used to search for transit-like signals with the Box-fitting Least Squares method (BLS; \citealt{2002A&A...391..369K}) after removal of systematic effects and stellar variability from the data. \section{Statistics of observations}\label{sec:weather} Over a period of three years, from June 2019 to June 2022, the Artemis telescope observed for 6000\,hr (on-sky time, not the total exposure time). Almost 90\% of all the targets are from SPECULOOS Program 1 (96 UCDs), whereas the remaining targets are split between Program 2, TESS follow-up, solar system small body follow-up observations, and educational programs. Program 1 features 365 UCDs, which are nearby and small enough for atmospheric studies of their possible planets with JWST. Program 2 focuses on UCDs later than M5, for which terrestrial planet detection should be within reach of TESS. The distribution of stellar magnitudes in the $J$ band of all the observed targets is presented in Fig.~\ref{fig:target_dist}, along with the distribution of their effective temperatures. We refer the reader to the work of \citealt{2021A&A...645A.100S} for further details about the SPECULOOS target list and its programs. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Images/targets_mag_dist.png} \includegraphics[width=0.5\textwidth]{Images/Teff_dist.png} \caption{Top: the distribution of $J$-band magnitudes of all the targets observed by the Artemis telescope from June 2019 to June 2022. Bottom: the distribution of their effective temperatures.} \label{fig:target_dist} \end{figure} Artemis' downtime during the first three-year period, defined as the fraction of the total night time during which the telescope was not on sky, was 40\% (see Fig.~\ref{fig:downtime}). The percentage of nights with clement observing conditions, according to the Boltwood weather station measurements, was 76\% (hence, the downtime according to the Boltwood station alone was 24\%).
These values are in agreement with the percentage of clear nights presented in Table~2 of \citealt{2021AJ....162...25A}, although the downtime is higher than that recorded at the IAC telescopes\footnote{\url{http://research.iac.es/OOCC/about/statistics/}} (17-18\% for the same period, which also includes downtime due to dust). We define an observing night as clement if the following conditions apply: there is no rain and no cloud coverage (the difference between the sky temperature and the ambient temperature is lower than -38\,$^{\circ}\mathrm{C}$), the relative humidity is lower than 70\%, the ambient temperature is at least 5\,$^{\circ}\mathrm{C}$ above the dew point, and the wind speed is lower than 45\,$\mathrm{km}\,\mathrm{hr^{-1}}$. If any weather parameter reaches its threshold, \texttt{ACP} closes the dome automatically and reopens it once conditions have been clement for at least 1\,hr. The difference in downtime between the IAC telescopes and Artemis is due to the more conservative weather thresholds employed at Artemis (in comparison with the local decisions made by the human operators of the IAC telescopes). \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Images/downtime.png} \caption{Artemis telescope downtime in hours starting from June 1st 2019. The maximum length of a bar depends on the duration of the astronomical night, which reaches its minimum in summer ($\sim$8\,hr) and its maximum in winter ($\sim$12\,hr). Major blocks of downtime: technical interventions (block centered on day 150), observatory shutdown because of the COVID-19 pandemic (block centered on day 310), rain and snowstorms in winter seasons (blocks centered on days 580, 900, 960).} \label{fig:downtime} \end{figure*} In contrast to the SPECULOOS telescopes in Chile and the SAINT-EX telescope in Mexico, SNO suffers additional, relatively small, downtime because of dust storms (\emph{calima} in Spanish). These events happen when strong seasonal winds carry sand and dust from the Sahara desert across the Canary Islands archipelago. At the altitude of the Teide Observatory, these events are sporadic and mainly occur in the summer months. Such events manifest themselves as an increase of the dust concentration in the air (which is \textit{not} necessarily accompanied by strong wind). The \emph{calima} is strongly stratified with height in the lower atmosphere, with only $\sim$20\% of the total intrusions reaching the altitude of the observatory \citep{lak16}. To monitor the concentration of dust particles in the atmosphere, we have relied on a dust sensor\footnote{\url{stella-archive.aip.de/stella/status/status.php}} installed at the neighboring STELLA telescope \citep{2001AN....322..287S}. For calibration purposes we have also used the $\leq$10 micrometer particle (PM10) measurements carried out by the Spanish State Meteorological Agency (AEMet) at the neighboring atmospheric observatory and managed by the IAC Sky Quality team\footnote{\url{http://research.iac.es/OOCC/ciai-pm10}}. We also plan to install and use a commercial Purple Air~PA-II\footnote{\url{www2.purpleair.com}} dust sensor in June 2022, which will be placed on the roof of the Artemis control building. Our dust concentration threshold value from the STELLA sensor is 0.025 units, which corresponds to a concentration of 150\,$\mu\mathrm{g/m^3}$ of calibrated PM10. We employ this threshold to prevent damage to the equipment, which can happen after accumulation of dust particles on the rails of the slit, inside the motors of the slit and dome, etc.
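The gating logic described above can be summarized in a short sketch. The function below is purely illustrative (the actual checks are performed by the Boltwood station, \texttt{ACP}, and the operators, not by this code), and all names and the interface are our own assumptions:
\begin{verbatim}
# Illustrative summary of the observing-condition thresholds quoted in the
# text; not the actual ACP/Boltwood implementation.
def weather_is_clement(rain, sky_minus_ambient_C, humidity_pct,
                       ambient_C, dewpoint_C, wind_kmh):
    return (not rain
            and sky_minus_ambient_C < -38.0       # cloud-free sky
            and humidity_pct < 70.0
            and (ambient_C - dewpoint_C) > 5.0
            and wind_kmh < 45.0)

# ACP closes the dome as soon as any threshold is violated and reopens it
# only after conditions have been clement for at least 1 hr.  The dust
# check (STELLA sensor value < 0.025, i.e. ~150 ug/m^3 of calibrated PM10)
# is currently applied by the operators at the start of the night.
\end{verbatim}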
During the three-year period of operations, we experienced 48 nights when the concentration of dust was above the threshold for at least 30\,min. We have not operated the telescope on nights when the dust concentration was close to or above the threshold at the beginning of the local night. This contributed to the overall downtime by increasing it from 24\% (Boltwood-only) to 30\%. In almost half of these cases, the dust level changed from clear to above 0.025 units in less than 10 hours. As found by \citealt{1998NewAR..42..521J} and \citealt{1998NewAR..42..543F}, the extinction by Saharan dust is gray in the visible (450-870\,nm) and in the near-IR ($J$ and $H$ bands), meaning that moderately dusty nights should not pose a major obstacle for differential photometry. Besides the 30\% weather-related downtime, an additional 10\% time loss was associated with technical problems (see subsection~\ref{subsec:tech_downtime}). Our major blocks of downtime are caused by several factors (Fig.~\ref{fig:downtime}): \begin{itemize} \item Technical interventions by Astelco Systems during the first year after the installation of the telescope (e.g., the block centered on day 150 since June 1st 2019). \item Observatory shutdown because of the COVID-19 pandemic (the block centered on day 310). \item Rain and snowstorms in winter seasons (the blocks centered on days 580, 900, 960). \end{itemize} \subsection{Seeing}\label{subsec:seeing} Regarding the seeing conditions, we rely on measurements\footnote{All measurements were obtained using Cyclope Software version 1.1.12 build 54} made with the Cyclope seeing monitor installed on the mast. The monitor is fixed and continuously observes Polaris 50 times per second, using a 1/125\,sec exposure time in a green filter. Dedicated software then measures the jitter of Polaris and translates it into line-of-sight and zenith seeing values. We found good agreement between the Cyclope and IAC Differential Image Motion Monitor (DIMM) seeing measurements made in April-June 2019 in terms of trend, behavior, and log-normal statistical distribution, but with a bias of around 0.8\,arcsec in the median values. According to our seeing observations from April 2019 until June 2022, the most frequent zenith seeing value (mode) was 1.3 arcsec, the median value was 1.6 arcsec, and the standard deviation was 0.9 arcsec (see Fig.~\ref{fig:seeing}). The minimum registered seeing was 0.5\,arcsec. The same values obtained with the IAC DIMM (period February 2019 to January 2020; IAC Sky Quality Team, private communication) were 0.59\,arcsec (mode), 0.77\,arcsec (median), 0.48\,arcsec (standard deviation), and 0.19\,arcsec (minimum registered seeing), which are in agreement with equivalent values obtained with other techniques, such as Scintillation Detection and Ranging (SCIDAR; \citealt{gar11a, gar11b}). The discrepancies between the Cyclope monitor and DIMM measurements may come from different factors, including a much larger contribution of the turbulent surface layer \citep{ver94} to the Cyclope monitor measurements (due to its proximity to the ground). On the other hand, the 62\,degrees of zenith distance imposed by Polaris also imply that the turbulence models used for the zenith correction are pushed to the edge of their validity (standard DIMMs at observatories usually measure up to 30\,degrees of zenith distance). In any case, for the purpose of this paper and the operation of SPECULOOS, the absolute seeing values are not as relevant as the seeing behavior.
Further investigation of the precision and accuracy of the Cyclope seeing monitor will be carried out in the near future in collaboration with the IAC Sky Quality Team. The median FWHM (full width at half maximum) of the stellar point spread functions (PSFs) measured by the SSO data reduction pipeline from the Artemis data over the course of three years is 1.2\,arcsec. The discrepancy between the median values from the Cyclope seeing monitor and from the FWHM of stars comes from the fact that the seeing monitor records seeing whenever Polaris is visible (including at times when observing conditions do not allow the telescope to observe). Another factor is the difference in observing filters: the Cyclope monitor records seeing in a green filter (central wavelength 550\,nm), while the majority of observations with Artemis are conducted in the $I+z'$ filter (central wavelength $\sim900$\,nm). \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{Images/cyclope_seeing2.png} \end{center} \caption{The distribution of seeing measurements from April 2019 to April 2022 made by the Alcor Cyclope seeing monitor. The \textit{black} solid vertical line shows the median of the distribution (1.6\,arcsec), and the \textit{black} dashed vertical line shows its mode (1.3\,arcsec). For comparison, values obtained with the IAC DIMM are shown (period February 2019 to January 2020; IAC Sky Quality Team, private communication): the \textit{blue} dotted vertical line shows the median of that distribution (0.77\,arcsec), and the \textit{blue} dash-dotted vertical line shows its mode (0.59\,arcsec). See Subsection~\ref{subsec:seeing} for more details.} \label{fig:seeing} \end{figure} \subsection{Major technical problems}\label{subsec:tech_downtime} Though periodic telescope maintenance decreases the total number of technical problems, sporadic major issues still occur. The Artemis telescope suffered from excessive spherical aberration, which was fixed by Astelco Systems at the end of 2019. A failure of a thermo-electric cooling component of the Andor CCD camera forced us to use a replacement CCD camera for a period of 6 months in 2020 (until the science camera was repaired and reinstalled at the telescope). Because of the harsh winter conditions at the Teide Observatory, the control room and dome areas experienced minor water leaks during the first winter of operations (2019-2020). The problematic areas were sealed and waterproofed. Other weather-related incidents were caused by ice forming on the slit of the dome (which prevented the slit from opening) and inside the slit rails (which prevented the slit from closing in April 2022). Ice formation and frozen precipitation can happen from October to May, with a maximum probability in March \citep{cas18}. \section{Photometric performance}\label{sec:photometry} All SPECULOOS telescopes aim at detecting single transits of terrestrial planets around UCDs. Depending on the relative sizes of the planet and its host star, transit depths range from a few 0.1\% up to several per cent. Reaching this goal requires a photometric precision of $\sim$0.1\% (1\,mmag), which is routinely obtained with the Artemis telescope. As an illustration of this precision, we provide light curves of a known transiting exoplanet around a UCD (TRAPPIST-1) and of some of the targets from the SPECULOOS input catalog.
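As a rough, purely illustrative estimate (the radii below are generic assumptions, not measurements from this work), the depth of a transit is set by the squared radius ratio,
\[
\delta \simeq \left(\frac{R_{\rm p}}{R_{*}}\right)^{2} \approx \left(\frac{1\,R_{\oplus}}{0.1\,R_{\odot}}\right)^{2} \approx 0.8\%,
\]
so an Earth-sized planet transiting a $\sim$0.1\,$R_{\odot}$ UCD produces a signal roughly a hundred times deeper than the same planet in front of a Sun-like star, which is what brings such transits within reach of a 1-m class telescope at $\sim$0.1\% precision.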
TRAPPIST-1\,b is the shortest-period planet in the TRAPPIST-1 system ($J$=11.4\,mag, $I$=14.0\,mag), with a radius of $1.12\pm0.014\,\mathrm{R_{E}}$, where $\mathrm{R_{E}}$ is the radius of Earth \citep{2021PSJ.....2....1A}. Its transit was observed on 31 Oct 2021 in the $I+z'$ filter with a 23\,sec exposure time. Its light curve and the evolution of systematics throughout the observing run (shift of the stars' positions on the CCD along the X and Y axes, change of airmass, sky background, and FWHM of the PSFs) are presented in Fig.~\ref{fig:trappist-1h_LC}. The transit is clearly detected, with a significance of 12$\sigma$. The light curve was detrended (linear trend removal) in order to minimize the RMS of the best-fit residuals, which is 620\,ppm per 7.2\,min bin. Note the sub-pixel precision of the tracking coupled with the \texttt{DONUTS} software guiding algorithm. Sp1256-1257 (VHS 1256-1257; $J$=11.0\,mag, $I$=14.3\,mag) is a triple brown dwarf system \citep{2015ApJ...804...96G,2016ApJ...818L..12S}. The primary of this system is an equal-magnitude binary whose components are spectroscopically determined to be M7.5$\pm$0.5. It shows complex photometric variability, including flaring activity. One of its light curves is presented in Fig.~\ref{fig:var_flare}, where a 3\,\% flare is visible. Such light curves make possible the precise determination of periodic features (e.g., rotation periods of UCDs) and the exploration of the possible relationship between flaring activity and rotation period (e.g., see \citealt{2022MNRAS.tmp.1043M}). \begin{figure}[h] \centering \includegraphics[width=0.475\textwidth]{Images/TRAPPIST-1b_SNO_3.pdf} \caption{Differential light curve of TRAPPIST-1\,b (top panel: before detrending; second-to-top panel: after detrending; the solid red line represents the best-fit model). The data were obtained with the Artemis telescope on 31 Oct 2021 in the $I+z'$ filter with a 23\,sec exposure time. The evolution of systematics throughout the observing run is presented in the remaining panels: shift of the stars' positions on the CCD along the X and Y axes (dx, dy), change of airmass and elevation, and evolution of the sky background and FWHM (full width at half maximum of the stellar point spread function).} \label{fig:trappist-1h_LC} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.475\textwidth]{Images/SP1256-1257_SNO_2.pdf} \caption{Light curve of Sp1256-1257 (VHS 1256-1257; $J$=11.0\,mag, $I$=14.3\,mag) obtained with the Artemis telescope on 8 April 2022. Sp1256-1257 is a triple brown dwarf system \citep{2015ApJ...804...96G,2016ApJ...818L..12S}.} \label{fig:var_flare} \end{figure} We also analyzed the individual light curves, processed by the SSO data reduction pipeline, of every SPECULOOS target observed with the Artemis telescope in the $I+z'$ band over the last three years. For every light curve from each night spanning more than 120\,min, we calculated the flux standard deviation after differential photometry and normalization (with no data binning, sigma-clipping, or trend removal). The global set of flux standard deviations as a function of target stellar magnitude in the Cousins $I$ band is presented in Fig.~\ref{fig:flux_stdv}. Points aligned vertically in the plot correspond to flux standard deviations of the same target on different nights. The vertical scatter of these points is explained either by the intrinsic variability of the target (e.g., flares, rotational modulations) or by variations in the observing conditions from night to night.
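The per-night scatter statistic described above can be reproduced with a few lines of code; the sketch below is illustrative only (the array names and interface are our own), and assumes a differential light curve of one target on one night as input:
\begin{verbatim}
import numpy as np

# Illustrative computation of the per-night flux standard deviation used in
# the text: no binning, sigma-clipping, or detrending is applied, and only
# runs spanning more than 120 min are kept.
def nightly_scatter(time_hours, diff_flux):
    if time_hours[-1] - time_hours[0] <= 2.0:
        return None                               # run too short
    flux = diff_flux / np.median(diff_flux)       # normalization
    return float(np.std(flux))                    # flux standard deviation
\end{verbatim}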
Our typical photometric precision is $0.5\%$ (the median of the set of flux standard deviations of the observed targets). For bright non-variable targets, we are able to reach flux standard deviations of $0.2\%$ with a typical exposure time of 25\,sec. We note that second-order extinction effects due to highly variable absorption by atmospheric water vapor can cause changes in the differential flux of very red targets, mimicking or hiding transits and affecting long-term photometric variability studies of our targets (\citealt{2020MNRAS.495.2446M}, Pedersen et al. 2022, submitted). This problem is mitigated at SSO by using precise high-cadence (every 2 minutes) precipitable water vapor (PWV) measurements from a microwave radiometer optimized for measuring PWV in dry conditions (from 0\,mm to a saturation value of 20\,mm, with an accuracy of 0.1\,mm). Such PWV measurements are used to correct differential light curves from SSO as part of the automatic pipeline. Lower-cadence (every 30 minutes) PWV measurements with a precision of 1\,mm are available from a GPS-based system located at the Teide Observatory \citep{2016SPIE.9910E..0PC}. Much higher cadence (every 1.5 minutes) and more precise PWV values will also be available from mid-2022, after the testing of a radiometer newly installed at the Teide Observatory by the Japanese company Furuno\footnote{\url{www.furuno.com}}. Currently, data from the Artemis telescope do not undergo any PWV correction. However, we plan to compare the PWV corrections based on the data from the GPS system and from the Furuno radiometer, and to correct differential light curves as part of the automatic pipeline. While most of the observing time is dedicated to targets from the SPECULOOS catalog, Artemis also performs follow-up observations of other noteworthy exoplanets, including ones from TESS and Kepler/K2 \citep{2014PASP..126..398H}. We refer the reader to the work by \cite{2022MNRAS.514.4120G} featuring Artemis' light curve of a sub-Neptune orbiting the mid-M dwarf TOI-2136, and to the work by \cite{2020AJ....160..172N} showing a transit of an Earth-sized planet around an M3.5 dwarf observed with Artemis and several SSO telescopes. Artemis also performs follow-up observations of solar system minor bodies, e.g., photometric monitoring of asteroids that display cometary activity \citep{2021MNRAS.505..245D} and observations of occultations of stars by asteroids \citep{2021DPS....5350305F}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{Images/flux_stdv_Imag.png} \caption{Flux standard deviation of all UCD light curves observed with the Artemis telescope in the $I + z'$ filter from June 2019 to June 2022. Points aligned vertically correspond to flux standard deviations of the same target on different nights.} \label{fig:flux_stdv} \end{figure} \section{Complementary science}\label{sec:complementary} Upon completion of the survey, the SPECULOOS telescopes will have observed almost 70\,$\mathrm{deg^2}$ of sky for 100-200\,hr each with remarkable photometric performance and high cadence. In order to fully utilize the potential of this data set, we initiated an automatic detection of ``moving objects'', i.e., objects that cross the field of view of the telescope. Such moving objects are typically asteroids and comets (small bodies), which are pristine remnants of the solar system's formation.
To find these objects, we employ digital tracking (also known as synthetic tracking, or the shift-and-stack method; \citealt{1995ApJ...455..342C,2014ApJ...782....1S,2014ApJ...792...60Z,2015AJ....150..125H}). This approach relies on shifting and stacking individual astronomical images according to the motion vector of a moving body in order to increase the object's signal in the data. It allows the detection of smaller and/or more distant objects in the solar system compared to the conventional technique, which determines whether an object exhibits consistent movement from one image to the next (and where the moving object's signal-to-noise ratio is limited to that of a single exposure). Our dedicated GPU-accelerated (graphics processing unit) digital tracking pipeline is based on the \texttt{Tycho Tracker} software\footnote{\url{www.tycho-tracker.com}}. The pipeline searches for moving objects in the archival images and in the last night's data (to be able to trigger follow-up of noteworthy objects during the next observing night). The pipeline is able to confidently detect moving objects as faint as $V\sim$23.0\,mag. The first results of applying the pipeline to a set of archival SPECULOOS fields will be presented in Burdanov et al. 2022, submitted. \section{Discussion and conclusions}\label{sec:discuss_conclusions} We have presented Artemis, the first telescope of the SPECULOOS Northern Observatory (SNO), and its development over the course of the first three years of operations at the Teide Observatory on the island of Tenerife. According to our weather station measurements, the percentage of nights with clement observing conditions was 76\%. However, our actual downtime was 40\% because of additional time losses associated with technical problems, nights lost due to dust storms, and the observatory shutdown caused by the COVID-19 pandemic. We plan to use our own dust sensor and to implement automatic slit closure and re-opening based on its data. Thanks to this, and to fewer major technical problems, we expect the downtime to decrease over the next years. The Artemis telescope demonstrates remarkable photometric precision and is ready to fulfill its main goal -- finding new transiting terrestrial exoplanets around UCDs. Over the period of the first three years after the installation, we observed 96 objects from the SPECULOOS target list for 6000\,hours with a typical photometric precision of $0.5\%$, reaching a precision of $0.2\%$ for relatively bright non-variable targets with a typical exposure time of 25\,sec. We expect further improvements of our photometric precision after applying a PWV correction, based on the data from the GPS system and from the Furuno radiometer, to the differential light curves as part of the automatic pipeline. Though no new planets transiting UCDs have been confirmed yet, the SPECULOOS survey continues, as currently less than 10\% of the targets have been fully observed. \acknowledgments \textit{Acknowledgments:} J.d.W. and MIT gratefully acknowledge financial support from the Heising-Simons Foundation, Dr. and Mrs. Colin Masson and Dr. Peter A. Gilman for Artemis, the first telescope of the SPECULOOS network situated in Tenerife, Spain.
The ULiege contribution to SPECULOOS has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) (grant Agreement n$^\circ$ 336480/SPECULOOS), from the Balzan Prize and Francqui Foundations, from the Belgian Scientific Research Foundation (F.R.S.-FNRS; grant n$^\circ$ T.0109.20), from the University of Liege, and from the ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation. The authors would like to thank the anonymous reviewer for their time and attention; the constructive comments we received helped us to improve the quality of the paper. The SPECULOOS North consortium would like to thank the IAC telescope operators (Técnico de Operaciones Telescópicas), the general and instrumental maintenance teams for their support on site, the IAC Sky Quality Team for providing useful comments and access to PWV measurements, and the THEMIS solar telescope team and Dr.~Carlos Dominguez for their invaluable help during the installation of the Artemis telescope. \facilities{SPECULOOS Northern Observatory, SPECULOOS Southern Observatory, Teide Observatory} \software{\texttt{DONUTS} \citep{2013PASP..125..548M}, \texttt{ACP} (\url{www.acp.dc3.com}), \texttt{SPOCK} (\url{www.github.com/educrot/SPOCK}), \texttt{Prose} (\url{www.github.com/lgrcia/prose}), \texttt{Tycho Tracker} (\url{www.tycho-tracker.com})}
\section{Introduction} Because of the difficulty of calculating the physical accretion rate onto the BH in an AGN, a dimensionless accretion rate can be defined and estimated based on the bolometric luminosity and the BH mass, $\dot{m} = \frac{L_{bol}}{L_{Edd}}$. This parameter can be used as a substitute for the actual accretion rate. Moreover, the dimensionless accretion rate is an important parameter in the scheme of the so-called unified model for AGN (Antonucci 1993, Quintilio \& Viegas 1997, Urry \& Padovani 1995), and also in the 4D Eigenvector 1 Scheme (e.g., Dultzin-Hacyan et al. 2007). The difference in accretion rate leads to the principal difference between high luminosity QSOs and low luminosity Seyfert galaxies. It also seems to be one of the main physical parameters underlying the 4D Eigenvector 1 Scheme, as explained recently in Dultzin-Hacyan et al. (2007). In order to obtain the dimensionless accretion rate $\dot{m}$, two other parameters must be calculated first: the BH mass $M_{BH}$ and the bolometric luminosity $L_{bol}$. The common method to estimate the bolometric luminosity $L_{bol}$ of AGN is based on the continuum luminosity from the nucleus: $L_{bol}\sim 9\times L_{5100\AA}$, given by Kaspi et al. (2000) and confirmed by Shang et al. (2005) for QSOs. Recently, the relation was used by Bonning et al. (2007) to study the correlation between the accretion disk temperatures and the continuum colors in QSOs. However, it should be stressed that the relation does not hold for all kinds of AGN, in particular for low luminosity AGN (Ho et al. 1997a, 1997b, Ho 1999), because of their different spectral energy distribution (SED) (i.e., the lack of the Big Blue Bump). Several methods are used to estimate the BH masses of AGN. The most reliable method is based on the stellar velocity dispersion of the bulge of the host galaxy, first presented by Ferrarese \& Merritt (2000) and Gebhardt et al. (2000), and then confirmed by Tremaine et al. (2002), Merritt \& Ferrarese (2001), etc.: \begin{equation} M_{BH} = 10^{8.13\pm0.06}(\frac{\sigma}{200{\rm km\cdot s^{-1}}})^{4.02\pm0.32} {\rm M_{\odot}} \end{equation} which indicates a strong correlation between BH masses and bulge masses (H\"{a}ring \& Rix 2004, Marconi \& Hunt 2003, McLure \& Dunlop 2002, Laor 2001, Kormendy 2001, Wandel 1999, etc.). However, we should note that the $M_{BH} - \sigma$ relation is obtained from nearby inactive galaxies. Whether the relation can be applied to distant active galaxies is an interesting question. So far, there are a few dynamical mass estimates of the central black holes of broad line AGN, and these BH masses are consistent with the masses estimated from the $M_{BH} - \sigma$ relation, although the uncertainties are still large. In addition, we should say that the objects in our sample, described in the following section, are not high luminosity, high redshift QSOs; thus, the correlation between the central black hole and the bulge of the host galaxy can reasonably be considered to hold. The other methods are based on the assumption of virialization of the Broad Line Emitting Regions (BLRs), or at least part of them (Peterson et al. 2004, Onken et al. 2004, Sulentic et al. 2006, Dultzin-Hacyan et al. 2007). In order to calculate the dimensionless accretion rate, the BH mass is necessary. However, for high luminosity and high redshift AGN, it is difficult to measure the stellar velocity dispersions, so the assumption of virialization is applied for QSOs.
In order to estimate the BH masses of QSOs based on the assumption of virialization, the most convenient way is to use the equation: \begin{equation} \begin{split} M_{BH} &= f\times\frac{R_{BLRs}\times\sigma_{b}^2}{G} \\ &= 2.15\times10^8(\frac{\sigma_b}{3000{\rm km\cdot s^{-1}}})^2(\frac{L_{5100\AA}}{10^{44}{\rm erg\cdot s^{-1}}})^{0.69} {\rm M_{\odot}} \end{split} \end{equation} There are, however, some caveats with this method. First, there is the question of whether the relation $R_{BLRs}\sim L_{5100\AA}^{0.69}$ found by Kaspi et al. (2000, 2005) can be applied to all AGN, in particular to high redshift ones. In an attempt to answer this question, we have found that the relation is not valid for some special kinds of AGN, such as low luminosity AGN (Zhang, Dultzin-Hacyan \& Wang 2007a, Wang \& Zhang 2003) and AGN with double-peaked low ionization emission lines (Zhang, Dultzin-Hacyan \& Wang 2007b). Second, the estimation of the BH masses of high redshift AGN by means of Equation (2) leads to BH masses larger than $10^{10}{\rm M_{\odot}}$ (meaning $\sigma>600{\rm km\cdot s^{-1}}$), which implies unreasonable bulge masses larger than $10^{13}{\rm M_{\odot}}$ (Netzer 2003, Sulentic et al. 2006). For this reason, finding another observationally determined parameter related to the dimensionless accretion rate is an important task, and it is the main objective of this paper. The accretion rate is determined by two properties: the continuum luminosity $L_{5100\AA}$ and the BH mass $M_{BH}$. The continuum luminosity can be calculated from the observed spectra, as discussed in the next section. Thus, in order to obtain a reliable result, we select Equation (1) rather than Equation (2) to estimate the central BH masses of AGN. The accretion disk model has been widely accepted as the standard model for AGN. In the NLTE (non-local thermodynamic equilibrium) accretion disk model, the generated SED (spectral energy distribution) is determined by three main parameters: the BH mass $M_{BH}$, the accretion rate $\dot{M}$, and the viscosity parameter $\alpha$. An expected result is that there should be a correlation between the spectral index and the accretion rate $\dot{m}$. In this paper, we answer the question of whether the observed spectral index can be used to trace the dimensionless accretion rate. In Section II, we present the data sample. Section III gives the results. Finally, the discussion and conclusions are given in Section IV. In this paper, the cosmological parameters $H_{0}=70{\rm km\cdot s}^{-1}{\rm Mpc}^{-1}$, $\Omega_{\Lambda}=0.7$ and $\Omega_{m}=0.3$ have been adopted. \section{Data Sample} We select objects from SDSS DR4 (Adelman-McCarthy et al. 2006) to make up our sample according to the following two criteria. First, and most important, the objects' spectra must present absorption features (here we focus on the MgI$\lambda5175\AA$ absorption line), in order to measure the stellar velocity dispersion of the bulge. Second, both Balmer emission lines, H$\alpha$ and H$\beta$, must also be present, in order to obtain the intrinsic continuum luminosity from the nuclei after the correction of internal reddening effects using the Balmer decrement. In order to perform accurate measurements of the lines mentioned above, several procedures have to be followed. To obtain the continuum luminosity from the nuclei, we must first subtract the contribution of stellar light.
An efficient way to subtract the stellar light is the PCA (principal component analysis) method described by Li et al. (2005) and Hao et al. (2005), using eigenspectra from pure absorption galaxies from SDSS or eigenspectra from stars in STELIB (Le Borgne et al. 2003), because PCA provides an effective way to compress the most relevant information from a series of spectra of stars or galaxies into several eigenspectra. Here, we used the method from Hao et al. (2005). The eigenspectra are calculated by a KL (Karhunen-Loeve) transformation of about 1500 pure absorption galaxies selected from SDSS DR4. Then, the first eight eigenspectra and the spectrum of an A star (which is used to account for star formation) selected from STELIB (Le Borgne et al. 2003) are used to fit the stellar properties of the observed spectra. After this, rather than a power law, a third-order polynomial function is used to fit the featureless continuum, because studies of composite spectra of AGN show that the continuum is best fitted by two power laws with a break at $\sim5000\AA$ (Francis et al. 1991, Zheng et al. 1997, Vanden Berk et al. 2001). After this step, the featureless continuum and the stellar components are obtained by the Levenberg-Marquardt least-squares minimization method. After the subtraction of the stellar components and the continuum emission, the line parameters of the emission lines can be measured by Levenberg-Marquardt least-squares minimization: one Gaussian function for each forbidden emission line, and two Gaussian functions (one broad and one narrow) for each permitted emission line. For [OIII]$\lambda4959,5007\AA$, we use an extra Gaussian function for the extended wings, as shown in Greene \& Ho (2005a). We then select the objects with reliable broad H$\alpha$ and broad H$\beta$ according to the following criteria: $\sigma(B)\ge3\times\sigma(B)_{err}$, $flux(B)\ge3\times flux(B)_{err}$ and $\sigma(B)\ge600{\rm km\cdot s^{-1}}$, where 'B' represents the values for the broad Balmer components, 'err' denotes the measured error of the value, and $\sigma$ is the second moment of the broad Balmer emission lines. It is then necessary to measure the stellar velocity dispersions of the selected objects. However, the accurate measurement of the stellar velocity dispersion is an open question, because of the known problems with template mismatch. A commonly used method is to select spectra of several kinds of stars (commonly G and K) as templates, and then broaden the templates by the same velocity to fit the stellar features, leaving the contributions from the different kinds of stars as free parameters (Rix \& White 1992). However, including more stellar information in the templates should lead to a more accurate measurement of the stellar velocity dispersion. Following the method described above to subtract the stellar components, we therefore created a new set of templates rather than using several spectra of G or K stars. Thus, we apply the PCA method to all 255 spectra of different kinds of stars in STELIB. Selecting the first several eigenspectra and a third-order polynomial function for the background as templates, the value of the stellar velocity dispersion can be measured by the minimum $\chi^2$ method applied to the absorption features around MgI$\lambda5175\AA$ within the wavelength range from 5100$\AA$ to 5300$\AA$ (Zhang, Dultzin-Hacyan \& Wang 2007c).
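A schematic version of this fit is sketched below; it is not the code used in this work, and the grid search, masking, and variable names are simplifying assumptions (in particular, a logarithmic wavelength grid is assumed so that a velocity broadening corresponds to a constant width in pixels):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 299792.458

# Schematic velocity-dispersion fit: broaden stellar eigenspectra (one per
# row of `eigenspectra`), add a third-order polynomial background, and
# minimize chi^2 over 5100-5300 A.  Illustrative only.
def fit_sigma_star(wave, galaxy, error, eigenspectra, sigma_grid_kms):
    mask = (wave > 5100.0) & (wave < 5300.0)
    dlnlam = np.median(np.diff(np.log(wave)))     # pixel size in ln(lambda)
    x = (wave - wave.mean()) / (wave.max() - wave.min())
    poly = np.vander(x, 4).T                      # cubic background terms
    best_chi2, best_sigma = np.inf, None
    for sigma in sigma_grid_kms:
        sigma_pix = sigma / C_KMS / dlnlam        # km/s -> pixels
        templates = gaussian_filter1d(eigenspectra, sigma_pix, axis=1)
        design = np.vstack([templates, poly])[:, mask].T
        coeff, *_ = np.linalg.lstsq(design / error[mask, None],
                                    galaxy[mask] / error[mask], rcond=None)
        chi2 = np.sum(((galaxy[mask] - design @ coeff) / error[mask]) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_sigma = chi2, sigma
    return best_sigma
\end{verbatim}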
We then keep only the objects for which the measured values of the stellar velocity dispersion are at least three times larger than the measured errors. Finally, we select 193 AGN, with redshifts from 0.015 to 0.25 and with observed featureless continuum luminosities within the range from $10^{41.23} {\rm erg\cdot s^{-1}}$ to $10^{43.79} {\rm erg\cdot s^{-1}}$, out of about 400000 objects classified as galaxies in SDSS DR4. These objects have reliable stellar velocity dispersions and reliable broad Balmer emission lines. \section{Results from the database of SDSS} Before proceeding further, a brief discussion about the origin of the featureless continuum emission is given. Basically, the possibility that it is dominated by nebular emission can be rejected. According to the luminosity of the recombination lines, the nebular continuum emission at $5100\AA$ can be simply estimated as $L_{5100\AA, Nebulae}\sim0.1\times L_{H\beta}$, if an electron temperature of $T = 10^4{\rm K}$ is adopted. Thus the effects of nebular emission can be neglected. Furthermore, we check the correlation between the continuum luminosity and the luminosity of the Balmer emission lines found by Greene \& Ho (2005b). The result is shown in Figure 1. We should note that the continuum luminosity and the luminosity of H$\alpha$ are the values before the internal reddening correction, as in Greene \& Ho (2005b). The consistent correlation between $L_{5100\AA}$ and $L_{H\alpha}$ (including the narrow component) indicates that our method to subtract the stellar components is reliable to some extent. Based on the correlation $L_{H\alpha} \propto L_{5100\AA}^{1.157}$ for AGN with high continuum luminosity, we can estimate the effects of star formation on the continuum luminosity. For AGN with low luminosity and low redshift, there are two components in the narrow H$\alpha$ emission, one from the AGN, $L_{H\alpha,AGN}$, and the other one from star formation, $L_{H\alpha,SF}$ (Kauffmann et al. 2003). Moreover, we assume that the continuum luminosity also includes two components, one from the AGN, $L_{5100\AA,AGN}$, and the other one from star formation, $L_{5100\AA,SF}$. Furthermore, there is a strong correlation between the line and continuum luminosities shown in Figure 1, $L_{H\alpha,SF}+L_{H\alpha,AGN}\propto(L_{5100\AA,SF}+L_{5100\AA,AGN})^{1.157}$. We can then simply estimate the effects of star formation on the continuum luminosity, if we accept that $L_{H\alpha,SF} = s\times L_{H\alpha,AGN}$ and $L_{H\alpha,AGN}\propto L_{5100\AA,AGN}^{1.157}$ (because the relation applies better to high luminosity AGN, which are less affected by star formation): \begin{equation} \begin{split} &L_{H\alpha,SF}+L_{H\alpha,AGN}\sim(L_{5100\AA,SF}+L_{5100\AA,AGN})^{1.157} \\ &L_{H\alpha,AGN}\sim L_{5100\AA,AGN}^{1.157} \\ &1 + s \sim (1 + \frac{L_{5100\AA,SF}}{L_{5100\AA,AGN}})^{1.157} \end{split} \end{equation} If we accept that star-forming regions contribute 65\% of the narrow H$\alpha$ flux, as described in Kauffmann et al. (2003), the parameter $s$ can be determined as: \begin{equation} \begin{split} s &= \frac{L_{H\alpha,SF}}{L_{H\alpha,AGN}} \\ &= \frac{0.65\times L_{H\alpha,N}}{L_{H\alpha,B}+0.35\times L_{H\alpha,N}} \end{split} \end{equation} where 'N' and 'B' represent the narrow and broad components of H$\alpha$. The mean value of $L_{H\alpha,B}/L_{H\alpha,N}$ is about 3.96 for the objects in our sample. We can then determine that $L_{5100\AA,SF}/L_{5100\AA,AGN}\sim0.12$, i.e., the star-forming regions contribute about 10\% of the observed continuum luminosity.
Thus, in the following, we can ignore the effects of star formation. In order to obtain a reliable intrinsic continuum shape, the internal reddening effects must be corrected. The common way to correct them is through the Balmer decrement. Here, we assume an intrinsic Balmer decrement of 3.1 for H$\alpha$/H$\beta$, as expected from Case B recombination with some contribution from collisional excitation (although it is debatable whether this value can be applied to the broad lines). Then, the value of E(B-V) can be determined from the Balmer decrement: \begin{equation} {\rm E(B-V)} = -0.97615448+1.9866313\log(\frac{{\rm H}\alpha}{{\rm H}\beta}) \end{equation} where $\frac{{\rm H}\alpha}{{\rm H}\beta}$ is the observed flux ratio. This equation, derived from the R-dependent Galactic extinction curve presented by Fitzpatrick (1999), gives the value of E(B-V) directly from the Balmer decrement. In the remainder of the paper, the continuum luminosity and the luminosity of H$\alpha$ are the values after the correction for the BLR extinction. After the correction of the internal reddening effects, the spectral index can be determined. Here we select three spectral indices: $\frac{F_{4400\AA}}{F_{5100\AA}}$, $\frac{F_{5100\AA}}{F_{6800\AA}}$ and $\frac{F_{4400\AA}}{F_{6800\AA}}$. The BH masses are calculated using Equation (1). The intrinsic continuum luminosity after the correction of the internal reddening can also be calculated. Thus it is easy to check the correlation between the dimensionless accretion rate and the spectral index. Here we should note that the dimensionless accretion rate for low luminosity AGN, as discussed in the introduction and in the next section, is also calculated as $\dot{m}=\frac{L_{bol}}{L_{Edd}}\sim\frac{9\times L_{5100\AA}}{L_{Edd}}$, although the bolometric luminosity of low luminosity AGN cannot be correctly calculated as $L_{bol}\sim9\times L_{5100\AA}$, as shown in Ho (1999). However, to some extent, we can accept that, if there is also a simple relation $L_{bol}\sim k\times L_{5100\AA}$ for low luminosity AGN, the calculated $\frac{9\times L_{5100\AA}}{L_{Edd}}$ can be used as a substitute for the accretion rate, and it is convenient for comparing the properties of low luminosity and normal AGN. The correlations are shown in Figure 2. The Spearman Rank Correlation Coefficient is 0.68 with $P_{null}\sim1.51\times10^{-27}$, 0.65 with $P_{null}\sim1.42\times10^{-24}$ and 0.67 with $P_{null}\sim1.94\times10^{-26}$ for $\frac{F_{5100\AA}}{F_{6800\AA}}$, $\frac{F_{4400\AA}}{F_{5100\AA}}$ and $\frac{F_{4400\AA}}{F_{6800\AA}}$, respectively. In order to check the effects of internal reddening, we also show the correlation between the dimensionless accretion rate and the spectral index $\frac{F_{5100\AA}}{F_{6800\AA}}$ in the bottom-right panel of Figure 2, without the internal reddening correction. The Spearman Rank Correlation Coefficient for the correlation without reddening correction is about 0.56 with $P_{null}\sim2.01\times10^{-17}$.
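For concreteness, the quantities entering these correlations can be assembled as in the following minimal sketch. The Eddington luminosity per solar mass ($1.26\times10^{38}\,{\rm erg\,s^{-1}}$) is the standard value and is not quoted in the text, and the function names and example numbers are purely illustrative:
\begin{verbatim}
import numpy as np

L_EDD_PER_MSUN = 1.26e38      # erg/s per solar mass (standard value)

def ebv_from_balmer(f_halpha, f_hbeta):
    # Eq. (5): E(B-V) from the observed broad-line Balmer decrement
    return -0.97615448 + 1.9866313 * np.log10(f_halpha / f_hbeta)

def black_hole_mass(sigma_star_kms):
    # Eq. (1): M_BH from the stellar velocity dispersion, in solar masses
    return 10.0 ** 8.13 * (sigma_star_kms / 200.0) ** 4.02

def accretion_rate(L5100_dereddened, sigma_star_kms):
    # Dimensionless accretion rate 9 * L5100 / L_Edd; L5100 must already be
    # corrected for reddening using the E(B-V) value above.
    L_edd = L_EDD_PER_MSUN * black_hole_mass(sigma_star_kms)
    return 9.0 * L5100_dereddened / L_edd

# Example (illustrative numbers): sigma_* = 150 km/s and
# L5100 = 1e43 erg/s give M_BH ~ 4e7 M_sun and 9*L5100/L_Edd ~ 0.017.
\end{verbatim}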
The unweighted best-fit results for the correlations between the spectral indices and the dimensionless accretion rates are shown as solid lines and correspond to: \begin{equation} \begin{split} &\log(\frac{F_{5100\AA}}{F_{6800\AA}}) = 0.56 + 0.18\times\log(\frac{9\times L_{5100\AA}}{L_{Edd}}) \\ &\log(\frac{F_{4400\AA}}{F_{5100\AA}}) = 0.35 + 0.14\times\log(\frac{9\times L_{5100\AA}}{L_{Edd}}) \\ &\log(\frac{F_{4400\AA}}{F_{6800\AA}}) = 0.92 + 0.33\times\log(\frac{9\times L_{5100\AA}}{L_{Edd}}) \\ &\log(\frac{F_{5100\AA}}{F_{6800\AA}}) (uncorr)= 0.37 + 0.14\times\log(\frac{9\times L_{5100\AA}}{L_{Edd}}) \end{split} \end{equation} The last expression is for the correlation between the spectral index $\frac{F_{5100\AA}}{F_{6800\AA}}$ and the dimensionless accretion rate without the internal reddening correction. Furthermore, we are interested in the absolute scatter of the spectral index, which can be calculated as: \begin{equation} \Delta_Y = \sqrt{\frac{\sum_{i=1}^{N}(Y_i-Y_{i,fit})^2}{N}} \end{equation} where $Y_i$ and $Y_{i,fit}$ are the measured value of the spectral index and the value fitted by the relations listed in Equation (6). Finally, we obtain the following scatters: $\Delta_{\log(\frac{F_{5100\AA}}{F_{6800\AA}})}\sim0.128$, $\Delta_{\log(\frac{F_{4400\AA}}{F_{5100\AA}})}\sim0.124$, $\Delta_{\log(\frac{F_{4400\AA}}{F_{6800\AA}})}\sim0.227$ and $\Delta_{\log(\frac{F_{5100\AA}}{F_{6800\AA}})(uncorr)}\sim0.123$. We also show the correlation between the line width of broad H$\alpha$ and the line width of broad H$\beta$ in Figure 3. The Spearman Rank Correlation Coefficient is about 0.82 with $P_{null}\sim0$. The correlation between the line widths of the broad Balmer emission lines is: \begin{equation} \sigma_{H\beta_B} = (1096.56\pm124.74)\times(\frac{\sigma_{H\alpha_B}}{10^3 {\rm km\cdot s^{-1}}})^{1.01\pm0.02} {\rm km\cdot s^{-1}} \end{equation} where $\sigma$ is the line width measured by fitting a Gaussian function to the broad emission lines. This correlation is similar to the one for QSOs found by Greene \& Ho (2005b), $FWHM_{H\beta}\propto FWHM_{H\alpha}^{1.03\pm0.03}$, which indicates that the measurement of the line parameters of the broad Balmer emission lines is reliable. In addition, it is instructive to compare the two kinds of BH masses estimated from Equation (1) and Equation (2). Before proceeding further, we should note that Equation (2) cannot be applied to the low luminosity AGN discussed in the next section, because the correlation between the size of the BLRs and the continuum luminosity found by Kaspi et al. (2000, 2005) does not hold for these objects (Wang \& Zhang 2003, Zhang, Dultzin-Hacyan \& Wang 2007a). Thus, here, we select the normal AGN in our sample to estimate the virial BH masses with Equation (2). In the end, there are 155 objects, shown in Figure 4, for which the two kinds of BH masses estimated by Equation (1) and Equation (2) are compared. The Spearman Rank correlation coefficient is about 0.36 with $P_{null}\sim4.8\times10^{-6}$, after the internal reddening corrections. This correlation also indicates that the measured stellar velocity dispersions, the measured intrinsic continuum luminosities, and the line widths are reliable. \section{Discussion and Conclusions} There are 38 low luminosity AGN with $L_{H\alpha}<10^{41}{\rm erg\cdot s^{-1}}$ (Ho et al. 1997a, 1997b and Ho 1999), which are shown as solid circles in Figure 2. From the figure, we can see that there is no difference in the correlation between the spectral index and $\frac{9\times L_{5100\AA}}{L_{Edd}}$ between normal AGN and low luminosity AGN.
If the bolometric luminosity of low luminosity AGN were different from $9\times L_{5100\AA}$, all the low luminosity AGN would deviate from the correlation for normal AGN, due (probably) to different accretion modes. Even if there is a different accretion mode for low luminosity AGN (as suggested by the lack of the big blue bump in their spectra, as shown in Ho (1999)), the bolometric luminosity of low luminosity AGN can still be calculated using $L_{bol}\sim k\times L_{5100\AA}$. Otherwise, we would not find the same correlation between the spectral index and $\frac{9\times L_{5100\AA}}{L_{Edd}}$ for low luminosity AGN and normal AGN. According to the accretion disk model, the output SED also depends on other parameters, such as the central BH mass, the viscosity in the disk, and the inclination angle. However, there is no correlation between the spectral index and the central BH masses, as shown in Figure 5. The Spearman Rank Correlation Coefficient is less than 0.1 with $P_{null}>60\%$ for all objects in our sample. An interesting result is that there is actually a negative trend (anticorrelation) between the BH masses and the spectral indices for the 38 low luminosity AGN. The coefficient is about -0.54 with $P_{null}\sim4.97\times10^{-4}$, -0.52 with $P_{null}\sim8.62\times10^{-4}$ and -0.55 with $P_{null}\sim3.98\times10^{-4}$ for $\frac{F_{5100\AA}}{F_{6800\AA}}$, $\frac{F_{4400\AA}}{F_{5100\AA}}$ and $\frac{F_{4400\AA}}{F_{6800\AA}}$, respectively. Because of the positive correlation between the spectral index and the accretion rate, a negative correlation between the spectral index and the central BH masses could be expected for all AGN. However, our results indicate that this expectation is only valid for low luminosity AGN. The reason is probably related to the correlation between the BH masses and the continuum luminosity. For normal AGN, there is a strong correlation between the BH masses and the continuum luminosity (Peterson et al. 2004). However, for low luminosity AGN, this correlation is much weaker (Zhang, Dultzin-Hacyan \& Wang 2007a). The correlation between the central BH masses and the intrinsic continuum luminosity is shown in Figure 6. The coefficient is about 0.47 with $P_{null}\sim9.05\times10^{-10}$; however, it is only 0.12 with $P_{null}\sim49\%$ for the 38 low luminosity AGN. The same result for low luminosity AGN can be found in Panessa et al. (2006). In their paper, they selected all the low luminosity Seyfert galaxies from Ho, Filippenko \& Sargent (1997a, 1997b) and found that there is no correlation between the X-ray or optical emission line luminosities (especially the [OIII]$\lambda5007\AA$ line) and the BH masses. This result also supports the conclusion that there is a different accretion mode for normal and low luminosity AGN. Estimating the effects of the inclination angle of the accretion disk is difficult. However, under the assumption that the narrow line emission region is isotropic, we can check the correlation between the continuum luminosity and the luminosity of the narrow emission lines. If the objects had very different inclination angles of the accretion disk, a loose correlation between the continuum luminosity and the luminosity of the narrow emission lines would be expected. Here we show the correlation between $L_{5100\AA}$ and the luminosity of narrow H$\alpha$ in Figure 7.
Although it is more common to use [OIII]$\lambda5007\AA$ as an isotropic estimator of the AGN luminosity, we prefer to use the narrow component of H$\alpha$ for the following reason. The [OIII] emission line frequently cannot be fitted by a single Gaussian function, because it has extended wings (Greene \& Ho 2005a). The two components are not emitted from the same region: the extended component is probably emitted from the far side of the BLRs. Thus, when we fit the [OIII] line, two Gaussian functions are applied, as described in Section II. We therefore select the narrow component of H$\alpha$ rather than [OIII] to test the effects of the inclination angle. A strong correlation can be confirmed. The Spearman Rank Correlation Coefficient is about 0.89 with $P_{null}\sim0$ for normal AGN, and about 0.67 with $P_{null}\sim4.04\times10^{-26}$ for the 38 low luminosity AGN. The best fit to the correlation (after considering the error in the determination of the luminosity of narrow H$\alpha$) is given by: \begin{equation} \log(L_{H\alpha_N}) = (1.373\pm0.032) + (0.915\pm0.003)\times\log{L_{5100\AA}} {\rm erg\cdot s^{-1}} \end{equation} This result indicates that the effects of the inclination angle can be neglected for the correlation between the spectral index and the dimensionless accretion rate. The correlation between the spectral index and the dimensionless accretion rate found in this work provides another independent method to estimate the central BH masses of AGN. The spectral index and the continuum luminosity can be directly determined from the observed optical spectrum, and subsequently the Eddington luminosity, i.e., the BH mass, can be determined by means of the correlation we found. This method has the advantage of being independent of the correlations between the size of the BLRs and the continuum luminosity entering Equation (2). Also, this method can be applied when it is not possible to measure the stellar velocity dispersion of the bulge. In future work, we will estimate the BH masses of QSOs at higher redshift using this method, to address the problem that virial BH masses of QSOs estimated by Equation (2) can be larger than $10^{10} {M_{\odot}}$, while observational results (fortunately) seem to contradict such values (Dultzin-Hacyan et al. 2007). Finally, a brief summary is as follows. We first select 193 AGN from SDSS DR4 with both broad H$\alpha$ and broad H$\beta$, and with apparent MgI$\lambda5175\AA$ absorption. Then, after the determination of the spectral index (after the correction of internal reddening effects through the Balmer decrement of the broad Balmer emission lines, and after the subtraction of the stellar component) and the dimensionless accretion rate ($\frac{9\times L_{5100\AA}}{L_{Edd}}$), we find a strong correlation between these parameters for AGN, which provides another independent method to estimate the central BH masses of AGN. \section*{Acknowledgements} ZXG gratefully acknowledges the postdoctoral scholarships offered by la Universidad Nacional Autonoma de Mexico (UNAM). D. D. acknowledges support from grant IN100507 from PAPIIT, DGAPA, UNAM. This paper has made use of the data from the SDSS projects. Funding for the creation and the distribution of the SDSS Archive has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society.
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Princeton University, the United States Naval Observatory, and the University of Washington.
\section{Introduction} The transformer \citep{vaswani2017attention} is a deep learning architecture commonly used to process sequences of data, such as text. Thanks to multi-head self-attention, the transformer builds contextual embeddings by capturing the relationships between the sequence elements. While being most famous for their state-of-the-art performance in natural language processing \citep{brown2020gpt3, devlin2019bert}, transformers are also used in computer vision \citep{chen2020imagegpt, dosovitskiy2021vit, jiang2021transgan, strudel2021segmenter, wu2020visualtransformers}, reinforcement learning \citep{chen2021decisiontransformer, kumar2020adaptive}, as well as audio \citep{gong2021ast, huang2018musictransformer, payne2019musenet} and video \citep{yan2021videogpt} processing, yielding impressive results. The Bayesian learning paradigm provides a theoretical framework to obtain predictive uncertainty, select the optimal model, and improve its calibration. Furthermore, by designing an informative prior for the parameters, Bayesian models offer a principled way to incorporate assumptions about the inferred distribution, thus providing regularization. Finally, recent work \citep{kristiadi2020bayesian, mitros2019validity} has shown that Bayesian neural networks (BNN) are often better calibrated than standard neural networks. If transformers and Bayesian deep learning are both so popular, why have we not seen any successful Bayesian transformer models? By attempting to implement such models, we make the following contributions: (i) We find that weight space inference in transformers does not provide any improvements over a model trained by maximum likelihood. (ii) We show that the prior is at least partially at fault for this. (iii) We propose to perform inference on the attention weights rather than on the parameters, and present a novel variational method for this using the Dirichlet distribution. \section{Background} \vspace{-0.75em} \subsection{Bayesian deep learning} Bayesian inference computes the posterior distribution as \begin{equation} \mathit{P}(\theta \mid y_{1:N}, x_{1:N}) = \mathit{P}(y_{1:N} \mid \theta, x_{1:N}) \mathit{P}(\theta) / \mathit{P}(y_{1:N} \mid x_{1:N}) \label{eq:posterior} \end{equation} with neural network parameters $\theta$, training data $\{(x_i, y_i)\}_{i=1}^N$, likelihood function $\mathit{P}(y_{1:N} \mid \theta, x_{1:N})$, prior $\mathit{P}(\theta)$, and evidence $\mathit{P}(y_{1:N} \mid x_{1:N})$. The predictive distribution of a new target $y^*$ given $x^*$ is then obtained by \begin{equation} P(y^* \mid x^*, y_{1:N}, x_{1:N}) = \mathrm{E}_{ \theta \sim \mathit{P}(\theta \mid y_{1:N}, x_{1:N})} [\mathit{P}(y^* \mid \theta, x^*)] \label{eq:predicitve} \end{equation} Applied to neural networks, both \cref{eq:posterior} and \cref{eq:predicitve} are intractable and need to be estimated using approximate inference methods, such as variational inference or Monte Carlo sampling. \subsection{Bayesian neural network weight space inference} \begin{wrapfigure}{r}{0.5\linewidth} \vspace{-10pt} \includegraphics[width=\linewidth]{figures/logistic_mle_full_plot.pdf} \caption{Plot of MLE and VI transformer. 
MLE captures the mean of the generative process well (MSE: 0.66), but VI does not (MSE: 9.58).} \label{fig:m3_vi_plots} \vspace{-10pt} \end{wrapfigure} The most commonly used weight space inference methods in BNNs are variational inference (VI) \citep{blundell2015weight, dusenberry2020efficient, gal2017concretedropout, louizos2016maxtrixgaussianposteriors, louizos2017multiplicativenfvi, mishkin2019slang}, the Laplace method \citep{daxberger2021bayesian, immer2021glm, kristiadi2020bayesian}, and Markov Chain Monte Carlo (MCMC) \citep{chen2014stochastichamiltonianmcmc, neal2011hamiltonianmcmc, welling2011sgld}. While MCMC methods directly sample from the (unnormalized) posterior, VI and Laplace approximate the posterior by another distribution. As MCMC methods are computationally and memory-wise expensive, we restrict our focus to VI and Laplace. When applying these methods to transformers, we find that weight space inference fails to improve data fit, calibration and predictive uncertainty compared to a model trained by likelihood maximization (see~\cref{fig:m3_vi_plots}). \subsection{Empirical weight study} To understand why weight space inference fails, we study the empirical weight distribution of transformers trained with stochastic gradient descent (SGD), hoping to obtain better priors. We follow the framework proposed by \citet{fortuin2021bayesian}. We first examine the marginal weight distributions, focusing in particular on their tailedness and modality. We also identify the best-fitting distribution and its parameters within the Gaussian, Student, Logistic, Cauchy, and Laplace families. Furthermore, we investigate the correlation among layer weights by comparing the empirical covariance matrix and the distribution of off-diagonal covariance elements against samples from an isotropic Gaussian. \section{Methods} \vspace{-0.75em} \subsection{Variational attention} As an alternative to weight-space inference in transformers, we propose to treat self-attention weights as random variables and approximate their posterior distribution using VI. Some previous attention weight inference methods rely on sampling \citep{an2020repulsive}, while others explicitly parameterize the attention weights with a particular distribution \citep{bahuleyan2018variationalattention, deng2018latentalignment, fan2020bam}. Parameters of explicitly reparameterizable distributions such as the Gaussian \cite{bahuleyan2018variationalattention}, Weibull, and Lognormal distributions \citep{fan2020bam} are learned via VI, while others such as the Dirichlet \citep{deng2018latentalignment} require using REINFORCE gradient estimators \citep{sutton2000reinforce}. We implement two baselines for our comparison: \begin{inparaenum} \item \textit{Gaussian attention}, where the attention logits are parameterized with a Gaussian distribution and its parameters are inferred via VI, and \item \textit{DD}, a data dependent configuration where the variational variances of the Gaussian distribution are amortized in order to support input-dependent (i.e., \emph{heteroscedastic}) uncertainties. \end{inparaenum} \subsection{Implicitly reparameterized Dirichlet attention} Alternatively, we propose to directly parameterize the attention weights of each position $i$ by a Dirichlet distribution with parameter $\boldsymbol{\alpha} = a\mathbf{A_i}$, where $a$ is the sharpness parameter and $\mathbf{A_i}$ the i\textsuperscript{th} row of the scaled dot-product attention weights. We then infer $a$ using VI.
Samples are obtained by drawing from independent Gamma distributions $X_k \sim \textrm{Gamma}(\alpha_k, 1)$ and normalizing $(\sum_{k=1}^K X_k)^{-1} \mathbf{X} \sim \textrm{Dirichlet}(\boldsymbol{\alpha})$. We further use contextual Gamma priors such that $\boldsymbol{\hat{\alpha}} \propto \mathbf{A_{i}}$, yielding an analytical KL divergence as done by \citet{joo2019dirichlet}. To obtain gradients of a Gamma random variable with respect to $\boldsymbol{\alpha}$, we use the implicit gradient reparametrization \citep{figurnov2019implicit}: \begin{equation} \nabla_{\alpha} z = - (q_{\alpha}(z))^{-1} \nabla_{\alpha}F(z|\alpha) \end{equation} where $q_{\alpha}(z)$ is the Gamma density function and $F(z|\alpha)$ its CDF. Like Gaussian attention, we consider a variation where the sharpness parameter depends on the input, referred to as data dependent. \section{Experiments} \vspace{-1em} We run experiments using the transformer \cite{vaswani2017attention} and vision transformer \cite{dosovitskiy2021vit} on MNIST image classification \citep{lecun2010mnist}, Universal Dependencies part-of-speech (POS) tagging \citep{nivreuniversaldependencies} and on synthetic datasets (M1, M2). We evaluate our models using test log-likelihood, predicted variance mean squared error and the expected mean square error on the synthetic dataset. The test log-likelihood, accuracy, F1-score, and expected calibration error (ECE) \citep{guo2017calibration} are used for experiments on the POS tagging and MNIST datasets. We compare the results obtained by our methods with a transformer (MLE) and an ensemble of 30 transformers both trained by maximum likelihood. Further details are given in \cref{sec:imp_details}. \subsection{Result 1: Weight-space inference does not improve over MLE} \label{sec:weight_vi} \paragraph{Different posteriors do not help.} We find that all weight-space VI methods are outperformed by both maximum-likelihood baselines with respect to all metrics and on all datasets (see \cref{table:main_results}). Interestingly, changing the posterior distribution does not significantly influence the performance, considering the large gap between the scores of the VI methods and baselines (see also~\cref{table:full_weight_inference_results} in the appendix). Furthermore, no variational posterior systematically outperforms the others. \newline Linearized Laplace inference (either on all parameters or just the final layer) shows much better results than VI. However, it still underperforms our baselines. Finally, even concrete dropout improves over VI and Laplace inference and is more competitive with our baselines. \begin{table}[t] \caption{VI and Laplace inference in weight-space compared to maximum likelihood models and concrete dropout. We see that the Bayesian transformers do not outperform the baselines.} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{c l c c c c c c c c} \textbf{Dataset} & \textbf{Metric} & \textbf{MLE} & \textbf{Ensemble} & \textbf{Gaussian VI} & \textbf{Laplace} & \textbf{Final Laplace} & \textbf{Concrete DP} & \textbf{Gauss. Attention} & \textbf{Dir. Attention} \\ \hline \parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{M1}}} & Log-like. & -26.206 $\pm$ 0.000 & -26.011 $\pm$ 0.007 & -27.23 $\pm$ 0.01 & -26.282 $\pm$ 0.014 & -26.219 $\pm$ 0.003 & -25.767 $\pm$ 0.008 & -26.1623 $\pm$ 0.0006 & \textbf{-22.04 $\pm$ 0.01}\\ & Var. 
MSE & 0.014 $\pm$ 0.000 & 0.0081 $\pm$ 0.0002 & 0.082 $\pm$ 0.004 & 0.021 $\pm$ 0.002 & 0.020 $\pm$ 0.003 & \textbf{0.007 $\pm$ 0.000} & 0.029 $\pm$ 0.000 & 0.430 $\pm$ 0.002 \\ & MSE & \textbf{0.996 $\pm$ 0.000} & 1.0143 $\pm$ 0.0002 & 1.078 $\pm$ 0.001 & 1.0432 $\pm$ 0.0009 & 1.043 $\pm$ 0.002 & 1.0175 $\pm$ 0.0001 & 1.007 $\pm$ 0.000 & 1.0263 $\pm$ 0.0002 \\ \hline \parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{M2}}} & Log-like. & -26.5670 $\pm$ 0.000 & -28.592 $\pm$ 0.009 & -35.43 $\pm$ 0.03 & -32.92 $\pm$ 0.05 & -32.469 $\pm$ 0.01 & -27.11 $\pm$ 0.04 & -26.374 $\pm$ 0.002 & \textbf{-24.841 $\pm$ 0.007} \\ & Var. MSE & \textbf{16.943 $\pm$ 0.000} & 23.45 $\pm$ 0.09 & 110.57 $\pm$ 3.25 & 47.56 $\pm$ 0.06 & 47.07 $\pm$ 0.04 & 21.85 $\pm$ 0.08 & 20.9010 $\pm$ 0.0007 & 17.93 $\pm$ 0.03 \\ & MSE & \textbf{1.170 $\pm$ 0.000} & 1.3552 $\pm$ 0.0003 & 2.95 $\pm$ 0.02 & 1.9943 $\pm$ 0.0008 & 1.972 $\pm$ 0.002 & 1.192 $\pm$ 0.001 & 1.2015 $\pm$ 0.0002 & 1.1928 $\pm$ 0.0006\\ \hline \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{POS}}} & Log-like. & \textbf{-3.707 $\pm$ 0.000} & -4.240 $\pm$ 0.006 & -17.86 $\pm$ 0.03 & -4.539 $\pm$ 0.000 & -4.539 $\pm$ 0.000 & -8.2004 $\pm$ 0.0001 & -3.9692 $\pm$ 0.0008 & -3.9682 $\pm$ 0.0003\\ & Acc. & 0.9706 $\pm$ 0.0000 & \textbf{0.9708 $\pm$ 0.0001} & 0.871 $\pm$ 0.002 & 0.959 $\pm$ 0.000 & 0.958 $\pm$ 0.000 & 0.964 $\pm$ 0.000 & 0.969 $\pm$ 0.000 & 0.968 $\pm$ 0.000\\ & F1 & \textbf{0.971 $\pm$ 0.000} & \textbf{0.971 $\pm$ 0.000} & 0.852 $\pm$ 0.000 & 0.959 $\pm$ 0.000 & 0.959 $\pm$ 0.000 & 0.964 $\pm$ 0.000 & 0.969 $\pm$ 0.000 & 0.968 $\pm$ 0.000\\ & ECE & 0.03 $\pm$ 0.00 & \textbf{0.0261 $\pm$ 0.0001} & 0.052 $\pm$ 0.001 & 0.048 $\pm$ 0.000 & 0.048 $\pm$ 0.000 & 0.031 $\pm$ 0.000 & 0.0271 $\pm$ 0.0000 & 0.0287 $\pm$ 0.0000 \\ \hline \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{MNIST}}} & Log-like. & -0.074 $\pm$ 0.000 & -0.1133 $\pm$ 0.0008 & -3.18 $\pm$ 0.04 & -0.088 $\pm$ 0.000 & -0.09 $\pm$ 0.00 & \textbf{-0.064 $\pm$ 0.000} & -0.0720 $\pm$ 0.0001 & -0.1045 $\pm$ 0.0005 \\ & Acc. & 0.979 $\pm$ 0.000 & \textbf{0.9825 $\pm$ 0.0003} & 0.101 $\pm$ 0.002 & 0.972 $\pm$ 0.000 & 0.972 $\pm$ 0.000 & 0.981 $\pm$ 0.000 & 0.9790 $\pm$ 0.0002 & 0.9738 $\pm$ 0.0003\\ & F1 & 0.979 $\pm$ 0.000 & \textbf{0.982 $\pm$ 0.000} & 0.092 $\pm$ 0.000 & 0.972 $\pm$ 0.000 & 0.972 $\pm$ 0.000 & 0.981 $\pm$ 0.000 & 0.9786 $\pm$ 0.0000 & 0.9736 $\pm$ 0.0000\\ & ECE & 0.022 $\pm$ 0.000 & 0.0326 $\pm$ 0.0004 & 0.097 $\pm$ 0.009 & 0.035 $\pm$ 0.000 & 0.038 $\pm$ 0.000 & \textbf{0.020 $\pm$ 0.000} & 0.0227 $\pm$ 0.0002 & 0.0305 $\pm$ 0.0003\\ \end{tabular} } \vspace{-1em} \label{table:main_results} \end{table} \paragraph{The prior is (at least partially) at fault.} \label{sec:weight_empirical_study} In our attempt to understand the poor performance of weight-space VI in transformers, we conduct an empirical weight distribution study. We find that the marginal weight distributions are essentially uni-modal, except for some embedding and projection layers which tend to have two or three less significant modes. \begin{wraptable}{r}{0.5\linewidth} \caption{Improvement of VI with improved priors relative to Gaussian priors.} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l c c c c c} \hline \textbf{Dataset} & \textbf{Gauss. 
VI} & \textbf{Laplace VI} & \textbf{Logistic VI} & \textbf{Cauchy VI} & \textbf{Student VI} \\ \hline M1 & 1.40\% & 3.80\% & 4.12\% & 1.85\% & 2.79\% \\ M2 & 2.85\% & 3.06\% & 2.76\% & 4.36\% & 2.70\% \\ POS & 0.12\% & 2.05\% & 2.16\% & 0.87\% & -0.32\%\\ MNIST & 26.95\% & 33.31\% & 31.36\% & 5.66\% & 26.94\% \\ \hline \end{tabular} } \label{fig:improved_priors} \end{wraptable} Furthermore, other than a decrease in tailedness (high degree of freedom) of the last layer, no recurrent pattern in the tailedness of the weight distribution appears across the considered datasets (\cref{fig:tailedness} in the appendix). Likewise, no single distribution seems to universally fit the empirical distributions of the weights across all datasets (see \cref{fig:attention_queries_qq_plots} and \cref{fig:attention_mlp_qq_plots} in the appendix). This suggests that the shape of the weight distribution strongly depends on the considered dataset. Using the observations from this weight distribution study, we choose more appropriate ("improved") priors. \Cref{fig:improved_priors} shows systematic likelihood improvements. Moreover, we find that the performance of VI critically depends on the prior parameters (see \cref{fig:ll_sensitivity_prior_scale} in the appendix). \subsection{Results 2: Variational attention is better than weight-space inference} \label{sec:variational_attetnion} \paragraph{Dirichlet attention works well.} Unlike weight-space inference, we find that inference on the attention weights works competitively. Indeed, Dirichlet attention strongly outperforms our baselines in terms of likelihood on the synthetic data and lies between both baselines on the POS tagging and MNIST (\cref{table:main_results}). However, the data dependent configuration does not systematically outperform its standard counterpart (see \cref{table:variational_attention} in the appendix). Moreover, Dirichlet attention outperforms Gaussian attention in terms of log-likelihood on the toy data and POS tagging, but not on MNIST. \paragraph{Variational attention leads to more consistent prior entropies.} While investigating the entropy of the predictive distribution when sampling weights from the priors, we find that non-improved priors yield highly variable entropy distributions, ranging from low values around $1$ to higher values around $2.3$ bits. However, when sampling from improved priors selected by our weight distribution analysis, the entropy distribution concentrates very strongly around a high value of $2.3$ bits. This same behavior is observed when sampling from the Gaussian and our proposed Dirichlet attention. This is desirable as the prior predictive should show high uncertainty in function space. 
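The entropy diagnostic used here can be sketched as follows: draw weights from the prior, push training inputs through the network, and record the entropy of the resulting class probabilities. The snippet below is a minimal illustration assuming an isotropic Gaussian weight prior and a classification model; the function name, the prior scale, and the number of samples are assumptions rather than the exact experimental configuration.
\begin{verbatim}
# Minimal sketch of the prior-predictive entropy diagnostic (illustrative only).
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def prior_predictive_entropies(model, x, prior_scale=1.0, n_samples=100):
    # model(x) is assumed to return logits of shape (batch, n_classes).
    # Note: this overwrites the model's parameters in place.
    n_params = parameters_to_vector(model.parameters()).numel()
    entropies = []
    with torch.no_grad():
        for _ in range(n_samples):
            theta = prior_scale * torch.randn(n_params)   # weight draw from N(0, prior_scale^2 I)
            vector_to_parameters(theta, model.parameters())
            probs = torch.softmax(model(x), dim=-1)
            entropies.append(torch.distributions.Categorical(probs=probs).entropy())
    return torch.cat(entropies)  # one entropy value per (prior sample, input)
\end{verbatim}
Histogramming these values over the training set yields plots such as those in the figure below; other weight-space priors only change how the weight draw is generated, while for the attention models the sampling happens over the attention weights instead.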
\begin{figure}[H] \vspace{-1em} \resizebox{\linewidth}{!}{ \begin{tabular}{cccccccc} \subfloat[Gaussian]{\includegraphics[width=0.15\linewidth]{figures/VI_functional_effect_output_experiment/gaussian_predictive_dist_entropy.pdf}} & \subfloat[Laplace]{\includegraphics[width=0.15\linewidth]{figures/VI_functional_effect_output_experiment/laplace_predictive_dist_entropy.pdf}} & \subfloat[Logistic]{\includegraphics[width=0.15\linewidth]{figures/VI_functional_effect_output_experiment/logistic_predictive_dist_entropy.pdf}} & \subfloat[Cauchy]{\includegraphics[width=0.15\linewidth]{figures/VI_functional_effect_output_experiment/cauchy_predictive_dist_entropy.pdf}} & \subfloat[Student]{\includegraphics[width=0.15\linewidth]{figures/VI_functional_effect_output_experiment/student_predictive_dist_entropy.pdf}} & \subfloat[Improved]{\includegraphics[width=0.169\linewidth]{figures/VI_functional_effect_output_experiment/improved_predictive_dist_entropy.pdf}} & \subfloat[Gauss. att.]{\includegraphics[width=0.169\linewidth]{figures/VA_functional_effect_output_experiment/gaussian_va_predictive_dist_entropy.pdf}} & \subfloat[Dir. att.]{\includegraphics[width=0.169\linewidth]{figures/VA_functional_effect_output_experiment/dirichlet_va_predictive_dist_entropy.pdf}} \end{tabular} } \caption{Prior predictive entropy distributions on MNIST train data. Improving the weight-space priors and using variational attention both lead to more consistently high entropies.} \label{fig:output_dist_entropy_sample_prior} \end{figure} \section{Related work} \paragraph{Bayesian Transformers.} Previous attempts to make the transformer Bayesian have used VI to perform inference on a subset of layers \citep{tran2019bayesian, xue2021bayesian}. While both methods claim state-of-the-art performance on their respective benchmarks, \citet{tran2019bayesian} do not provide any quantitative results, and \citet{xue2021bayesian} initialize their priors to a maximum estimate of the weights, which is not strictly Bayesian. Alternatively, \citet{fan2020bam} parameterize the attention logits of a transformer by a Gaussian distribution and finetune the deterministic self-attention of language models pretrained on large corpora. However, they only consider finetuning and not full training with variational attention. Orthogonally, \citet{martin2020monte} consider attention keys, queries, values, and weights as unobserved random variables and use sequential Monte Carlo methods to sample them. \paragraph{Bayesian neural network inference.} BNN inference has recently advanced in terms of VI methods with more expressive posteriors \citep{dusenberry2020efficient, louizos2016maxtrixgaussianposteriors, louizos2017multiplicativenfvi, mishkin2019slang, tomczak2020lowrankgaussianvi}, more efficient inference \citep{gal2016mcdropout, gal2017concretedropout, swiatkowski2020ktied}, and greater stability \citep{kingma2015lrt, wen2018flipout}. Likewise, Laplace inference for BNNs has improved in scalability using further GGN approximations \citep{immer2021scalable, immer2021glm, kristiadi2020bayesian, ritter2018scalable, ritter2018scalablelaplace} and sub-network inference \citep{daxberger2021bayesian, kristiadi2020bayesian}.
Orthogonally, MCMC methods for BNNs have been improved \citep{fortuin2021bnnpriors, garriga2021exact, wenzel2020good, zhang2019cyclical}, better BNN priors have been studied \citep{fortuin2021priors, fortuin2021bayesian}, and even deep ensembles \cite{lakshminarayanan2017simple} have been cast as approximate inference \citep{ciosek2019conservative, d2021repulsive, d2021stein, izmailov2021bayesian, pearce2018bayesian, pearce2020uncertainty, wilson2020bayesian}. \section{Conclusion} We have shown that weight space inference in Bayesian transformers does not work well, regardless of the choice of posterior. We also found that choosing priors according to an empirical weight distribution analysis improved the performance, suggesting that priors are at least partially at fault. However, we have not found the right priors to make the method competitive. Moreover, we found evidence that na\"ive weight-space priors lead to low prior predictive entropy, and therefore do not reflect our true beliefs about the output distribution. In order to move closer to the function-space distribution, we suggested to perform inference on the attention weights rather than on parameters. We proposed a novel method based on the implicit reparameterization of the Dirichlet distribution to apply variational inference on the attention weights, which performed competitively with respect to our baselines. \newpage \bibliographystyle{plainnat}
\section*{Introduction} Let $X$ be a complex algebraic variety. We call a regular map $p\colon X \to X^{\rm qp}$ from $X$ to a quasi--projective variety $X^{\rm qp}$ a {\it quasi--projective reduction} of $X$ if every regular map $f \colon X \to Z$ to a quasi--projective variety $Z$ factors uniquely through $p$, i.e., there exists a unique regular map $\t{f} \colon X^{\rm qp} \to Z$ such that $f = \t{f} \circ p$. As a first result of the present article we characterize when a given toric variety $X$ has a quasi--projective reduction. In Section 1 we construct a {\it toric quasi--projective reduction} of $X$, i.e., a toric morphism $q\colon X \to X^{\rm tqp}$ to a quasi--projective toric variety $X^{\rm tqp}$ such that every toric morphism from $X$ to a quasi--projective toric variety factors uniquely through $q$. Then we prove (see Section 2): \bigskip \noindent{\bf Theorem 1.}\enspace {\it A toric variety $X$ has a quasi--projective reduction if and only if its toric quasi--projective reduction $q\colon X \to X^{\rm tqp}$ is surjective. If $q$ is surjective, then it is the quasi--projective reduction of $X$.} \bigskip The above theorem implies in particular that every complete toric variety has a projective reduction. But as we show by an explicit example (see \ref{nonsurj}), the quasi--projective reduction need not exist in general. We apply Theorem 1 to obtain a complete answer to the following problem, posed by A. Bia\l ynicki-Birula: Let $X$ be a quasi--projective toric variety with acting torus $T$ and let $H \subset T$ be a subtorus. When does the action of $H$ admit a quotient in the category of quasi--projective varieties, i.e., an $H$-invariant regular map $s\colon X \to Y$ to a quasi--projective variety $Y$ such that every $H$-invariant regular map from $X$ to a quasi--projective variety factors uniquely through $s$? In order to state our answer, let $s_{1} \colon X \to X \tq H$ denote the toric quotient (see \cite{acha}). Recall that $s_{1}$ is universal with respect to $H$-invariant toric morphisms. Moreover, let $q\colon X \tq H \to Y$ be the toric quasi--projective reduction. Then our result is the following (for the proof see Section 2): \goodbreak\bigskip \noindent{\bf Theorem 2.}\enspace {\it The action of $H$ on $X$ admits a quotient in the category of quasi--projective varieties if and only if $s := q \circ s_{1}$ is surjective. If $s$ is surjective, then it is the quotient for the action of $H$ on $X$.} \bigskip Examples of quasi--projective toric varieties with a subtorus action admitting a quotient in the category of quasi--projective varieties are obtained from Mumford's Geometric Invariant Theory. In \ref{noquot} and \ref{subtil} we discuss examples of subtorus actions that have no such quotient. \section*{Notation} A {\it toric variety\/} is a normal algebraic variety $X$ endowed with an effective regular action of an algebraic torus $T$ that has an open orbit. We refer to $T$ as the {\it acting torus\/} of $X$. For every toric variety $X$ we fix a base point $x_{0}$ in the open orbit. A regular map $f \colon X \to X'$ of toric varieties with base points $x_{0}$ and $x_{0}'$, respectively, is called a {\it toric morphism\/} if $f(x_{0}) = x'_{0}$ and there is a homomorphism $\varphi \colon T \to T'$ of the acting tori such that $f(t \mal x) = \varphi(t) \mal f(x)$ holds for every $(t,x) \in T \times X$.
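A simple instance of the last notion is the squaring map of the one--dimensional torus: for $X = X' = {\mathbb C}^{*}$ with acting tori $T = T' = {\mathbb C}^{*}$ and base points $x_{0} = x_{0}' = 1$, the regular map
$$ f \colon {\mathbb C}^{*} \to {\mathbb C}^{*}, \quad z \mapsto z^{2}, $$
is a toric morphism, since $f(x_{0}) = 1 = x_{0}'$ and $f(t \mal z) = \varphi(t) \mal f(z)$ holds with the homomorphism $\varphi \colon T \to T'$, $\varphi(t) := t^{2}$.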
The basic construction in the theory of toric varieties is to associate to a given fan $\Delta$ in an $n$-dimensional lattice an $n$-dimensional toric variety $X_{\Delta}$. The assignment $\Delta \mapsto X_{\Delta}$ is in fact an equivalence of categories (see e.g. \cite{Fu} or \cite{Od}). For our construction of toric quasiprojective reductions we need the following generalization of the notion of a fan: Let $N$ denote a $n$-dimensional lattice and set $N_{\RR} := \RR \otimes_{\ZZ} N$. A {\it quasi--fan} in $N$ is a finite set $\Delta$ of rational convex polyhedral cones in $N_{\RR}$ such that for each $\sigma \in \Delta$ also every face of $\sigma$ is an element of $\Delta$ and any two cones of $\Delta$ intersect in a common face. So a quasi--fan is a fan if all its cones are strictly convex. For a quasi--fan $\Delta$, we denote by $\Delta^{\max}$ the set of its maximal cones and by $\vert \Delta \vert := \bigcup_{\sigma \in \Delta} \sigma$ its support. For a homomorphism $F \colon N \to N'$ of lattices, let $F_{\RR}$ denote the associated homomorphism of real vector spaces. A {\it map of quasi--fans} $\Delta$ in $N$ and $\Delta'$ in $N'$ is a lattice homomorphism $F \colon N \to N'$ such that for every $\sigma \in \Delta$ there is a $\sigma' \in \Delta'$ with $F_{\RR}(\sigma) \subset \sigma'$. As mentioned above, every map $F \colon \Delta \to \Delta'$ of fans gives rise to a toric morphism $f \colon X_{\Delta} \to X_{\Delta'}$. Every quasi--fan $\Delta$ in $N$ defines in a canonical manner a fan: Let $V$ denote the intersection of all cones of $\Delta$. Then $V$ is a linear subspace of $N_{\RR}$. Set $L := V \cap N$ and let $Q\colon N \to \t{N} := N/L$ denote the projection. Then the cones $Q_{\RR}(\sigma)$, $\sigma \in \Delta^{\max}$, are the maximal cones of a fan $\t{\Delta}$ in $\t{N}$. We call $\t{\Delta}$ the {\it quotient fan} of $\Delta$. By construction, $Q$ is a map of the quasi--fans $\Delta$ and $\t{\Delta}$. \section{Construction of the Toric Quasi-Projective \\ Reduction} The construction of the toric quasi--projective reduction is done in the category of fans. Toric morphisms from a complete toric variety $X_{\Delta}$ to projective spaces are related to concave support functions of the fan $\Delta$. Since we also want to consider non-complete fans it is more natural to work with the following notion instead of support functions: \goodbreak Let $\Delta$ be a quasi--fan in a lattice $N$. A finite family ${\mathfrak U} := (u_{i})_{i \in I}$ of linear forms $u_{i} \in M := \Hom(N,\ZZ)$ is called {\it $\Delta$-concave}, if it satisfies the following condition: for every $\sigma \in \Delta^{\max}$ there is an index $i(\sigma)$ such that $$u_{i(\sigma)}\lower 4pt \hbox{$\vert \sigma$} \; \le \; u_{i}\lower 4pt \hbox{$\vert \sigma$} \quad \mbox{for all } i \in I.$$ Note that for two given $\Delta$-concave families ${\mathfrak U} := (u_{i})_{i \in I}$ and ${\mathfrak U}' := (u'_{j})_{j \in J}$ of linear forms the {\it sum family} $$ {\mathfrak U} + {\mathfrak U}' := (u_{i} + u'_{j})_{(i,j) \in I \times J}$$ is again a $\Delta$-concave family. For a $\Delta$-concave family ${\mathfrak U}$ let $P_{{\mathfrak U}}$ denote the convex hull of ${\mathfrak U}$. Then $P_{{\mathfrak U}}$ is a lattice polytope in $M_{\RR}$. Let $\Sigma_{\mathfrak U}$ denote the normal quasi--fan of $P_{\mathfrak U}$ in $N$. 
Recall that the faces $P'$ of $P$ correspond order-reversingly to the cones of $\Sigma_{\mathfrak U}$ by $$ P' \mapsto \tau_{P'} := \{v \in N_{\RR}; \; p'(v) \le p(v) \hbox{ for all } (p',p) \in P' \times P\}.$$ If $u_{1}, \ldots, u_{r}$ denote the vertices of $P_{\mathfrak U}$, then the family $(u_{i})_{i = 1, \ldots, r}$ is {\it strictly $\Sigma_{\mathfrak U}$-concave}, i.e., on every relative interior $\tau_{\{u_{i}\}}^{\circ}$ the linear form $u_{i}$ is strictly smaller than the forms $u_{j}$ with $j \ne i$. Now assume that $\Delta$ is a fan. Call a subset $R$ of the set $\Delta^{(1)}$ of extremal rays of $\Delta$ {\it indecomposable}, if for every $\Delta$-concave family ${\mathfrak U}$ the set $R$ is contained in some maximal cone of $\Sigma_{\mathfrak U}$. Let $R_{1}, \ldots, R_{k}$ be the maximal indecomposable subsets of $\Delta^{(1)}$. \begin{lemma}\label{generic} There exists a $\Delta$-concave family ${\mathfrak U}$ such that every $R_{i}$ is the intersection of $\Delta^{(1)}$ with some maximal cone of $\Sigma_{\mathfrak U}$. \end{lemma} \proof For every decomposable subset $S$ of $\Delta^{(1)}$ choose a $\Delta$-concave family ${\mathfrak U}_{S}$ such that $S$ is not contained in any maximal cone of $\Sigma_{{\mathfrak U}_{S}}$. Let ${\mathfrak U}$ be the sum of these families ${\mathfrak U}_{S}$. Then $P_{\mathfrak U}$ is the Minkowski-Sum of the $P_{{\mathfrak U}_{S}}$. Consequently $\Sigma_{\mathfrak U}$ is the common refinement of the $\Sigma_{{\mathfrak U}_{S}}$. This readily yields the claim. \endproof \bigskip A $\Delta$-concave family ${\mathfrak U}$ with the property of Lemma \ref{generic} will be called {\it generic}. As a consequence of the above lemma we obtain the following statement for the sets $$ \varrho_{i} := \conv\biggl(\bigcup_{\varrho \in R_{i}} \varrho \biggr). $$ \begin{remark}\label{Sigma} The $\varrho_{i}$ are the maximal cones of a quasi--fan $\Sigma$ in $N$. The lattice homomorphism $\id_{N}$ is a map of the quasi--fans $\Delta$ and $\Sigma$. Moreover, if ${\mathfrak U}$ is a generic $\Delta$-concave family, then $\id_{N}$ is an affine map of the quasi--fans $\Sigma$ and $\Sigma_{\mathfrak U}$. \endproof \end{remark} Here a map $F$ of quasi--fans $\Delta$ in $N$ and $\Delta'$ in $N'$ is called {\it affine} if for every maximal cone $\sigma'$ of $\Delta'$ the set $F_{\RR}^{-1}(\sigma') \cap \vert \Delta \vert$ is a (maximal) cone of $\Delta$. Note that a map of fans is affine if and only if the associated toric morphism is affine. Now we construct the quasi--projective toric reduction of a toric variety $X_{\Delta}$ defined by the fan $\Delta$. Let $V$ denote the minimal cone of the quasi--fan $\Sigma$ determined by $\Delta$ as in Remark \ref{Sigma}. Set $L := N \cap V$, let $Q \colon N \to \t{N} := N/L$ be the projection and denote by $\t{\Delta}$ the quotient-fan of $\Sigma$. \begin{proposition} The toric morphism $q \colon X_{\Delta} \to X_{\t{\Delta}}$ associated to $Q$ is the toric quasi--projective reduction of $X_{\Delta}$. \end{proposition} \proof First we show that $X_{\t{\Delta}}$ is in fact quasi--projective. Choose a generic $\Delta$-concave family ${\mathfrak U} = (u_{\sigma})_{\sigma \in \Delta^{\max}}$. Let $V_1$ denote the minimal cone of the quasi--fan $\Sigma_{\mathfrak U}$. Set $L_1 := N \cap V_1$, let $P \colon N \to \b{N} := N/L_{1}$ be the projection and denote the quotient-fan of $\Sigma_{\mathfrak U}$ by $\b{\Delta}$. 
Since ${\mathfrak U}$ induces a strictly $\b{\Delta}$-concave family, the associated toric variety $X_{\b{\Delta}}$ is projective. The minimal cone $V$ of $\Sigma$ is contained in $V_1$, so we obtain a lattice homomorphism $G \colon \t{N} \to \b{N}$ with $G \circ Q = P$. By construction, $G$ is an affine map of the fans $\t{\Delta}$ and $\b{\Delta}$. So the associated toric morphism $g \colon X_{\t{\Delta}} \to X_{\b{\Delta}}$ is affine. Since $X_{\b{\Delta}}$ is projective we can use \cite{EGA}, Chap. II, Th. 4.5.2, to conclude that $X_{\t{\Delta}}$ is quasi--projective. Now we verify the universal property of $q$. Let $f \colon X_{\Delta} \to X'$ be a toric morphism to a quasi--projective toric variety $X'$. We may assume that $f$ arises from a map $F \colon N \to N'$ of fans $\Delta$ and $\Delta'$. Choose a polytopal completion $\Delta''$ of $\Delta'$. By suitable stellar subdivisions (see \cite{Ew}, p. 72) we achieve that every maximal cone of $\Delta''$ contains at most one maximal cone of $\Delta'$. Let $(u_{\sigma''})_{\sigma'' \in {\Delta''}^{\max}}$ be a strictly $\Delta''$-concave family. Then the linear forms $u_{\sigma''} \circ F$ form a $\Delta$-concave family. Let $\sigma \in \Sigma^{\max}$. By construction, $\sigma$ is mapped by $F_{\RR}$ into some cone of $\Delta''$. Moreover, $\sigma$ is the convex hull of certain extremal rays of $\Delta$, so $F_{\RR}(\sigma)$ is in fact contained in a maximal cone of $\Delta'$. Hence $F$ is a map of the quasi--fans $\Sigma$ and $\Delta'$. In particular we have $F(L) = 0$. Thus there is a map $\t{F} \colon \t{N} \to N'$ of the fans $\t{\Delta}$ and $\Delta'$ with $F = \t{F} \circ Q$. The associated toric morphism $\t{f} \colon X_{\t{\Delta}} \to X'$ yields the desired factorization of $f$. \endproof \section{Proof of the Theorems} Let $X$ be a toric variety with acting torus $T$ and assume that $H \subset T$ is an algebraic subgroup. Let $Z$ be an arbitrary quasi--projective variety. We need the following decomposition result for regular maps: \begin{proposition}\label{factmor} Let $f \colon X \to Z$ be an $H$-invariant regular map. Then there exist a locally closed subvariety $W$ of some $\PP_{r}$, an $H$-invariant toric morphism $g\colon X \to \PP_{r}$ with $g(X) \subset W$ and a regular map $h\colon W \to Z$ such that $f = h \circ g$. \end{proposition} \proof In a first step we consider the special case that $Z = \PP_{m}$ and $X$ is an open toric subvariety of some ${\mathbb C}} \def\ZZ{{\mathbb Z}^{n}$. Then there are polynomials $f_0, \ldots, f_m\in{\mathbb C}} \def\ZZ{{\mathbb Z}[z_1, \ldots, z_n]$ having no common zero in $X$ such that $f(z) = [f_0(z), \ldots, f_m(z)]$ holds for every $z \in X$. Clearly we may assume that the $f_{i}$ have no non-trivial common divisor. Since $f$ is $H$-invariant, every $f_i/f_j$ is an $H$-invariant rational function on ${\mathbb C}} \def\ZZ{{\mathbb Z}^{n}$. Thus, using $1 \in \gcd(f_0(z), \ldots, f_m(z))$ we can conclude that there is a character $\chi \colon H \to {\mathbb C}} \def\ZZ{{\mathbb Z}^*$ satisfying $f_i(h \mal x) = \chi(h) f_i(x)$ for every $i$ and every $(h,x)$ in $H \times X$. Now every $f_i$ is a sum of monomials $q_{i1}, \ldots, q_{ir_i}$. Note that also each of the monomials $q_{ij}$ is homogeneous with respect to the character $\chi$. Moreover, since the $f_i$ have no common zero in $X$, neither have the $q_{ij}$. Set $r := \sum r_i$ and define a toric morphism $$ g \colon X \to \PP_{r}, \quad x \mapsto [q_{01}(x), \ldots, q_{0r_0}(x), \ldots, q_{m1}(x), \ldots, q_{mr_m}(x)]. 
$$ Then $g$ is $H$-invariant. In order to define an open subset $W$ of $\PP_{r}$ and a regular map $h \colon W \to \PP_{m}$ with the desired properties, consider the linear forms $$ L_{i} \colon [z_{01}, \ldots, z_{0r_{0}}, \ldots, z_{m1}, \ldots, z_{mr_{m}}] \mapsto z_{i1} + \ldots + z_{ir_{i}}$$ on $\PP_{r}$. Set $W := \PP_{r} \setminus V(\PP_{r}; L_{0}, \ldots, L_{m})$ and $$h \colon W \to \PP_{m}, \quad [z] \mapsto [L_{0}(z), \ldots, L_{m}(z)].$$ Since the $f_{i}$ have no common zero in $X$, we obtain $g(X) \subset W$. Moreover, by construction we have $f = h \circ g$. So the assertion is proved for the case that $Z = \PP_{m}$ and $X$ is an open toric subvariety of some ${\mathbb C}^{n}$. In a second step assume that $Z$ is arbitrary but $X$ again is an open toric subvariety of some ${\mathbb C}^{n}$. Choose a locally closed embedding $\imath \colon Z \to \PP_{m}$. By the first step we obtain a decomposition of $f' := \imath \circ f$ as $f' = h' \circ g$, where $g \colon X \to \PP_{r}$ is an $H$-invariant toric morphism such that $g(X)$ is contained in an open subset $W'$ of $\PP_{r}$ and $h' \colon W' \to \PP_{m}$ is regular. Then $W := {h'}^{-1}(\imath(Z))$ is a locally closed subvariety of $W'$. Moreover, we have $g(X) \subset W$ and there is a unique regular map $h \colon W \to Z$ with $h' = \imath \circ h$. It follows that $f = h \circ g$ is the desired decomposition. Finally, let also $X$ be arbitrary. As described in \cite{cox}, there is an open toric subvariety $U$ of some ${\mathbb C}^n$ and a surjective toric morphism $p \colon U \to X$ such that $p$ is the good quotient of $U$ by some algebraic subgroup $H_{0}$ of $({\mathbb C}^*)^n$. Consider $f' := f \circ p$. Then $f'$ is invariant by the action of $H' := \pi^{-1}(H)$, where $\pi$ denotes the homomorphism of the acting tori associated to $p$. By the first two steps we can decompose $f'$ as $f' = h \circ g'$ with an $H'$-invariant toric morphism $g' \colon U \to \PP_{r}$ and a regular map $h \colon W \to Z$, where $W \subset \PP_{r}$ is locally closed with $g'(U) \subset W$. Since $g'$ is $H'$-invariant, it is also invariant by the action of $H_{0}$. Thus there is a unique toric morphism $g\colon X \to \PP_{r}$ such that $g' = g \circ p$. Since $p$ is surjective, $g$ is $H$-invariant and we have $f = h \circ g$, which is the desired decomposition of $f$. \endproof \bigskip \noindent{\it Proof of Theorem 1.}\enspace Let us first assume that the toric quasi--projective reduction $q\colon X \to X^{\rm tqp}$ is surjective. Let $f\colon X\to Z$ be a regular map to a quasi--projective variety $Z$. We have to show that $f$ factors uniquely through $q$. By Proposition \ref{factmor} there is a toric morphism $g\colon X \to X'$ to a projective toric variety $X'$, and a rational map $h\colon X' \to Z$ which is regular on $g(X)$ such that $f = h \circ g$. Now there is a toric morphism $\t{g} \colon X^{\rm tqp} \to X'$ such that $g = \t{g} \circ q$. Since $q$ was assumed to be surjective, we have $\t{g}(X^{\rm tqp}) = g(X)$, and hence $f$ factors through $q$. Now suppose that $p\colon X \to X^{\rm qp}$ is a quasi--projective reduction. Then clearly $p$ is surjective and $X^{\rm qp}$ is normal. Moreover, there is an induced action of the torus $T$ on $X^{\rm qp}$ making $p$ equivariant.
We claim that this action is regular: According to Proposition \ref{factmor}, choose a toric morphism $g\colon X \to X'$ to a projective toric variety $X'$, and a rational map $h\colon X' \to X^{\rm qp}$ such that $g(X)$ is contained in the domain $W'$ of definition of $h$ and $p=h\circ g$. By the universal property of the toric quasi--projective reduction $q \colon X \to X^{\rm tqp}$ there is a toric morphism $\t{g} \colon X^{\rm tqp} \to X'$ such that $g = \t{g} \circ q$. Moreover, by the universal property of $p$, there is a regular map $\alpha \colon X^{\rm qp} \to X^{\rm tqp}$ such that $q = \alpha \circ p$. Note that $\alpha(X^{\rm qp}) \subset q(X)$ and $\t{g}(q(X)) \subset W'$. So, using surjectivity of $p$ and equivariance of $p$ and $q$, we obtain for a given pair $(t,y) \in T \times X^{\rm qp}$ the equality $$t \mal y = h(\t{g}(t \mal \alpha(y))).$$ This implies regularity of the induced $T$-action on $X^{\rm qp}$. It follows that $X^{\rm qp}$ is in fact a toric variety and $p$ is a toric morphism. Thus we obtain a toric morphism $\beta \colon X^{\rm tqp} \to X^{\rm qp}$ with $p = \beta \circ q$. By uniqueness of the factorizations we obtain that $\alpha$ and $\beta$ are inverse to each other, i.e., $q$ is also a quasi--projective reduction of $X$. \endproof \bigskip \noindent{\it Proof of Theorem 2.}\enspace Suppose first that $s \colon X \to Y$ is a quotient for the action of $H$ on $X$ in the category of quasi--projective varieties. As above we see that $Y$ is a toric variety and $s$ is a surjective toric morphism. The universal property of the toric quotient $s_{1} \colon X \to X \tq H$ yields a toric morphism $q \colon X \tq H \to Y$ such that $s = q \circ s_{1}$. Clearly $q$ satisfies the universal property of the toric quasi--projective reduction of $X \tq H$. Now let $s_{1} \colon X \to X \tq H$ denote the toric quotient, $q \colon X \tq H \to Y$ the toric quasi--projective reduction, and assume that $q \circ s_{1}$ is surjective. Let $f \colon X \to Z$ be an $H$-invariant regular map to a quasi--projective variety. Choose a decomposition $f = h \circ g$ as in Proposition \ref{factmor}. Then, by the universal properties of toric quotient and quasi--projective reduction, there is a regular map $\t{g}$ with $g = \t{g} \circ q \circ s_{1}$. Since $q \circ s_{1}$ is surjective, $h$ is defined on $\t{g}(Y)$. Thus $f = (h \circ \t{g}) \circ (q \circ s_{1})$ is the desired factorization of $f$. \endproof \bigskip The above proofs yield in fact the following generalization of Theorems 1 and 2: Let $X$ be any toric variety and let $H$ be an algebraic subgroup of the acting torus $T$ of $X$. Call an $H$-invariant regular map $p \colon X \to X^{\rm qp}_H$ to a quasi--projective variety $X^{\rm qp}_H$ an {\it $H$-invariant quasi--projective reduction} if it is universal with respect to $H$-invariant regular maps from $X$ to quasi--projective varieties. Now write $H = \Gamma H^0$ with a finite subgroup $\Gamma$ and a subtorus $H^0$ of $T$. Let $g \colon X \to X'$ denote the geometric quotient for the action of $\Gamma$ on $X$. Then $g$ is a toric morphism. Hence there is an induced action of $H^0$ on $X'$. Let $s_1 \colon X' \to X' \tq H^0$ be the toric quotient for this action and let $q \colon X' \tq H^0 \to Y$ be the quasi--projective toric reduction.
Then we obtain: \bigskip \noindent{\bf Theorem 3.}\enspace{\it $X$ has an $H$-invariant quasi--projective reduction if and only $q \circ s_1$ is surjective. If so, then $q \circ s_1 \circ g$ is the $H$-invariant quasi--projective reduction of $X$. \endproof} \section{Examples} We first give an example of a $3$-dimensional toric variety $X_{\Delta}$ that admits no quasi--projective reduction. This variety is an open toric subvariety of the minimal example for a smooth complete but non-projective toric variety presented in \cite{Od}, Section 2.3. \begin{example}\label{nonsurj} Let $e_{1}$, $e_{2}$ and $e_{3}$ denote the canonical basis vectors of the lattice $\ZZ^{3}$. Consider the vectors $$\begin{array}{ll} v_{1} := -e_{1}, & \qquad v_{1}' := e_{2} + e_{3}, \\ v_{2} := -e_{2}, & \qquad v_{2}' := e_{1} + e_{3}, \\ v_{3} := -e_{3}, & \qquad v_{3}' := e_{1} + e_{2}. \\ \end{array}$$ Let $\Delta$ be the fan in $\ZZ^3$ with the maximal cones $$ \tau_1:=\cone(v_1,v_{3}'), \quad \tau_2:=\cone(v_2,v_{1}')\quad \hbox{and} \quad \tau_3:=\cone(v_3,v_{2}').$$ \begin{center} \input{delta.pstex_t} \end{center} We claim that the toric quasi--projective reduction $q$ of $X_{\Delta}$ is the toric morphism associated to $\id_{N}$ interpreted as a map from $\Delta$ to the fan $\t{\Delta}$ having as its maximal cones $$ \sigma_1:=\cone(v_1,v_3,v_{1}',v_{3}'), \quad \sigma_2:=\cone(v_1,v_2,v_{1}',v_{2}')\quad\hbox{and}\quad \sigma_3:=\cone(v_2,v_3,v_{2}',v_{3}').$$ \begin{center} \input{tdelta.pstex_t} \end{center} Note that $q$ is not surjective. In order to prove that $q$ is the toric quasi--projective reduction of $X_{\Delta}$, we have to show that every $\Delta$-concave family $(u_{i})_{i=1,2,3}$ can be extended to a $\t{\Delta}$-concave family. Note that $v_{1} + v_{3}'$ equals $v_{3} + v_{1}'$ and hence we have $$\begin{array}{ccccc} u_{1}(v_{1}) + u_{1}(v_{3}') & = & u_{1}(v_{3}) + u_{1}(v_{1}') & \ge & u_{3}(v_{3}) + u_{2}(v_{1}'). \\ \end{array}$$ Similarly we obtain $$\begin{array}{ccccc} u_{2}(v_{2}) + u_{2}(v_{1}') & = & u_{2}(v_{1}) + u_{2}(v_{2}') & \ge & u_{1}(v_{1}) + u_{3}(v_{2}'), \\ u_{3}(v_{3}) + u_{3}(v_{2}') & = & u_{3}(v_{3}') + u_{3}(v_{2}) & \ge & u_{1}(v_{3}') + u_{2}(v_{2}). \\ \end{array}$$ Summing over these three inequalities, we arrive at an identity, and therefore the inequalities are in fact equalities. This implies $$\begin{array}{ll} u_{1}(v_{1}) = u_{2}(v_{1}), & \qquad u_{1}(v_{1}') = u_{2}(v_{1}'), \\ u_{1}(v_{3}) = u_{3}(v_{3}), & \qquad u_{1}(v_{3}') = u_{3}(v_{3}'), \\ u_{2}(v_{2}) = u_{3}(v_{2}), & \qquad u_{2}(v_{2}') = u_{3}(v_{2}'). \quad \diamondsuit \\ \end{array}$$ \end{example} In the above example the quasi--projective toric reduction has a trivial kernel, and the variety $X_{\t{\Delta}}$ has the same dimension as $X_{\Delta}$. For the complete case we have more generally: \begin{remark} Let $\Delta$ be a complete fan in a lattice $N$. Then $\dim X_{\Delta} = \dim X_{\Delta}^{\rm qp}$ holds if and only if $\Delta$ can be defined via a subdivision of a lattice polytope in $N_{\RR}$.\endproof \end{remark} The next example is taken from the book of Fulton. It shows that in general a complete toric variety is very far from its projective reduction. \begin{example} Consider the complete fan in $\ZZ^{3}$ obtained by taking the cones over the faces of the standard cube with vertices $(\pm 1,\pm 1,\pm 1)$. Deform this fan into a new complete fan $\Delta$ in $\ZZ^{3}$ by moving the vertex $(1,1,1)$ to $(1,2,3)$. 
The only support functions of $\Delta$ are the linear functions in $M$ (see \cite{Fu}, p. 26). So $X_{\Delta}^{\rm qp}$ is a point. \quad $\diamondsuit$ \end{example} Now we turn to quotients of a quasi--projective toric variety $X$ with acting torus $T$ by subtori $H \subset T$. Examples of such quotients are obtained by Mumford's Geometric Invariant Theory: For the sake of simplicity assume $X=\PP_{n}$. Then the choice of a lifting of the $T$-action to ${\mathbb C}} \def\ZZ{{\mathbb Z}^{n+1}$ yields a notion of $H$-semistability. The set $X^{\rm ss} \subset X$ of $H$-semistable points is $T$-invariant and there is a quotient $X^{\rm ss} \to Y$ in the category of quasi--projective varieties for the action of $H$ on $X^{\rm ss}$ (see \cite{Mu}, also \cite{KaStZe} and \cite{BBSw}). \begin{example}\label{noquot} Let $\Delta$ be the fan in $\RR^{4}$ that has $\sigma_{1} := \cone(e_{1},e_{2})$ and $\sigma_{2} := \cone(e_{3},e_{4})$ as maximal cones. Then $X_{\Delta}$ is an open toric subvariety of ${\mathbb C}} \def\ZZ{{\mathbb Z}^{4}$ with acting torus $T = {{\mathbb C}} \def\ZZ{{\mathbb Z}^{*}}^{4}$. Define a projection $S_{1} \colon \ZZ^{4} \to \ZZ^{3}$ by setting $$S_{1}(e_{1}) := e_{1}, \quad S_{1}(e_{2}) := e_{2}, \quad S_{1}(e_{3}) := e_{3}, \quad S_{1}(e_{4}) := e_{1} + e_{2}.$$ Then $S_{1}(e_{1}), \ldots, S_{1}(e_{4})$ generate $\tau := \cone(e_{1}, e_{2}, e_{3}) \subset \RR^{3}$. The faces $\cone(e_{1}, e_{3})$ and $\cone(e_{2}, e_{3})$ of $\tau$ are not containd in $S_{1}(\vert \Delta \vert)$. \begin{center} \input{nonsurj.pstex_t} \end{center} By \cite{acha}, the toric morphism $s_{1} \colon X_{\Delta} \to X_{\tau}$ defined by $S_{1}$ is the toric quotient for the action of the subtorus $H \subset T$ corresponding to the sublattice $\ker(S_{1})$ of $\ZZ^{4}$. In particular, $s_{1}$ is not surjective. So the action of $H$ on $X_{\Delta}$ has no quotient in the category of quasi--projective varieties. \quad $\diamondsuit$ \end{example} Note that surjectivity of the toric quotient $s_{1} \colon X \to X \tq H$ does not imply the existence of a quotient in the category of quasi--projective varieties: \begin{example}\label{subtil} Let $\Delta'$ be the fan in $\RR^{3}$ with the maximal cones $$ \tau_{1} := \cone(e_{1},e_{2}), \quad \tau_{2} := \cone(e_{3},e_{4}), \quad \tau_{3} := \cone(e_{5},e_{6}).$$ Then the associated toric variety $X_{\Delta'}$ is an open toric subvariety of ${\mathbb C}} \def\ZZ{{\mathbb Z}^{6}$. In the notation of Example \ref{nonsurj}, define a projection $S_{1} \colon \ZZ^{6} \to \ZZ^{3}$ by $$\begin{array}{ll} S_{1}(e_{1}) := -e_{1}, & \qquad S_{1}(e_{2}) := v_{1}', \\ S_{1}(e_{3}) := -e_{2}, & \qquad S_{1}(e_{4}) := v_{2}', \\ S_{1}(e_{5}) := -e_{3}, & \qquad S_{1}(e_{6}) := v_{3}'. \\ \end{array}$$ Then $S_{1}$ is a map of the fan $\Delta'$ and the fan $\Delta$ of \ref{nonsurj}, in fact the (surjective) toric morphism $s_{1} \colon X_{\Delta'} \to X_{\Delta}$ associated to $S_{1}$ is the toric quotient of the action of the subtorus $H$ of ${{\mathbb C}} \def\ZZ{{\mathbb Z}^{*}}^{6}$ corresponding to the sublattice $\ker(S_{1}) \subset \ZZ^{6}$. Since the quasi--projective reduction of $X_{\Delta}$ is not surjective, there is no quotient in the category of quasi--projective varieties for the action of $H$ on $X_{\Delta'}$. \end{example}
\section{Introduction\label{s1}} Let $\sR^-$ and $\sR^+$ be, respectively, the subsets of all negative and all positive real numbers, and let $\chi_E$ refer to the characteristic function of the subset $E$ of the set of real numbers $\sR$, i.e. $$ \chi_E(t):=\left \{ \begin{array}{ll} 1 & \text{ if } t\in E, \\ 0 & \text{ if } t\in \sR\setminus E. \\ \end{array} \right. $$ By $L^p(\sR^+):=\chi_{\sR^+} \, L^p(\sR)$ and $L^p(\sR^-):=\chi_{\sR^-}\, L^p(\sR)$ we denote the subspaces of $L^p(\sR)$, $1\leq p \leq\infty$ which contain all functions vanishing on $\sR^-$ and $\sR^+$, cor\-respondingly. Consider the set $G$ of functions defined on the real line $\sR$ and having the form \begin{equation}\label{cst1} a(t)=\sum_{j=-\infty}^\infty a_j e^{i\delta_j t} +\int_{-\infty}^\infty k(s) e^{its} \, ds, \quad -\infty<t<\infty, \end{equation} where $\delta_j$ are pairwise distinct real numbers and \begin{equation* \sum_{j=-\infty}^\infty |a_j|<\infty, \quad \int_{-\infty}^\infty |k(s)| \, ds <\infty. \end{equation*} Each element $a$ of $G$ generates three operators $W^0(a):L^p(\sR)\to L^p(\sR)$ and $W(a), H(a):L^p(\sR^+)\to L^p(\sR^+)$, \begin{align* (W^0(a)f)(t)&:=\sum_{j=-\infty}^\infty a_j f(t-\delta_j) +\int_{-\infty}^\infty k(t-s) f(s)\,ds, \\ W(a)&:=PW^0(a) , \nn \\ H(a)&:= PW^0(a)QJ, \n \end{align*} where $P: f\to \chi_{\sR^+} f$ and $Q:=I-P$ are the projections on the subspaces $L^p(\sR^+)$ and $L^p(\sR^-)$, correspondingly, and the operator $J:L^p(\sR)\to L^p(\sR)$ is defined by $J\vp := \widetilde{\vp}$ with $\widetilde{\vp}(t):=\vp(-t)$. Note that $W^0(a), W(a)$ and $H(a)$ are bounded linear operators on the corresponding spaces. The function $a$ is called the generating function or the symbol for each of the operators $W^0(a)$, $W(a)$ and $H(a)$. Wiener-Hopf and Hankel operators are closely connected. Thus for any $a,b\in G$, one has \begin{equation}\label{cst4} \begin{aligned} W(ab)&=W(a)W(b)+H(a) H(\widetilde{b}),\\ H(ab)&= W(a)H(b)+H(a)W(\widetilde{b}). \end{aligned} \end{equation} The Fredholm theory for the operators $W^0(a)$, $a\in G$ is relatively simple. An operator $W^0(a)$ is semi-Fredholm if and only if $a$ is invertible in $G$. The study of the operators $W(a)$ is more involved. Nevertheless, for various classes of generating functions $a$, Wiener-Hopf operators $W(a)$ are well studied (see, for example, \cite{BS:2006,BKS:2002,CD:1969,Du:1973, Du:1977,Du:1979,GF1974}). In particular, Fredholm properties of such operators are known and a description of the kernel is available. On the other hand, Wiener-Hopf plus Hankel operators, i.e. the operators of the form $$ B=B(a,b)= W(a)+H(b) $$ remains less studied. Fredholm properties of such operators can be derived by reducing the initial operator to a Wiener-Hopf operator with a matrix symbol, and there is a number of works where this idea is successfully implemented \cite{MR2400132,CN:2009,CS2010d,CS:2011}. However, these works mainly deal with generating functions $a$ and $b$ satisfying the condition $a=b$ and consider the operators acting in an $L^2$-space. If $a\neq b$, then a rarely verifiable assumption about special matrix factorization is used. A different approach to the study of the operators of the form $I+H(b)$ has been employed in \cite{KS2000, Karapetiants2001}, where the essential spectrum and the index of such operators have been found. On the other hand, no information is available about the kernel elements of the operators $W(a)+H(b)$ even in the above mentioned cases $a=1$ and $a=b$. 
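Before describing our approach, let us record, for later use, how the first identity in \eqref{cst4} can be verified directly from the definitions. One only needs the elementary relations $W^0(ab)=W^0(a)W^0(b)$, $J^2=I$, $JP=QJ$ (hence $JQ=PJ$) and $JW^0(b)J=W^0(\widetilde{b})$. They yield
$$
JQW^0(b)P=PJW^0(b)P=PW^0(\widetilde{b})\,JP=PW^0(\widetilde{b})\,QJ=H(\widetilde{b}),
$$
so that $QW^0(b)P=J\,H(\widetilde{b})$. Since $H(\widetilde{b})$ maps $L^p(\sR^+)$ into itself, one obtains
$$
PW^0(a)QW^0(b)P=PW^0(a)\,J\,H(\widetilde{b})=PW^0(a)\,JP\,H(\widetilde{b})=PW^0(a)QJ\,H(\widetilde{b})=H(a)H(\widetilde{b}),
$$
and therefore
$$
W(ab)=PW^0(a)W^0(b)P=PW^0(a)PW^0(b)P+PW^0(a)QW^0(b)P=W(a)W(b)+H(a)H(\widetilde{b}).
$$
The second identity in \eqref{cst4} is obtained analogously.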
The goal of this work is to present an efficient description of the space $\ker B(a,b)$ when the generating functions $a$ and $b$ belong to the Banach algebra $G$ and satisfy a specific algebraic relation. Point out that our approach does not involve factorization of any matrix function but only the one of scalar functions. Let $a,b\in L^\infty(\sR)$. We say that the duo $(a,b)$ is a matching pair if \begin{equation}\label{cst9} a \widetilde{a}= b \widetilde{b}, \end{equation} where $\widetilde{a}:=a(-t)$. The relation \eqref{cst9} is called matching condition. In the following we always assume that $a$, and therefore $b$, is invertible in $G$. For each matching pair $(a,b)$, consider the pair $(c,d)$ with $$ c:=\widetilde{b}\widetilde{a}^{-1}, \quad d:= b \widetilde{a}^{-1}. $$ It is easily seen that $(c,d)$ is also a matching pair. This pair is called the subordinated pair for $(a,b)$ or just the subordinated pair. The elements $c$ and $d$ of the subordinated pair possess a specific property, namely $$ c\tilde{c}=1, \quad d\widetilde{d}=1. $$ Throughout this paper any function $g\in G$ satisfying the condition \begin{equation*} g \widetilde{g}=1, \end{equation*} is called matching function. Note that the matching functions $c$ and $d$ can be also expressed as \begin{equation*} c=ab^{-1}, \quad d= a \widetilde{b}^{-1}. \end{equation*} Further, if $(c,d)$ is the subordinated pair for $(a,b)$, then $(\overline{d},\overline{c})$ is the subordinated pair for the matching pair $(\overline{a}, \overline{\widetilde{b}})$. Moreover, if $p\in[1,\infty)$, then $\overline{a}$ and $\overline{\widetilde{b}}$ are generating functions for the operator adjoint to the Wiener-Hopf plus Hankel operator $W(a)+H(b):L^p(\sR^+)\to L^p(\sR^+)$, i.e. \begin{equation}\label{cst10} (W(a)+H(b))^*=W(\overline{a})+H(\overline{\widetilde{b}}). \end{equation} The Wiener-Hopf operators with matching generating symbols possess a number of remarkable properties. In particular, the kernels of such operators can be structured in a special way and this structurization can be used in the description of the kernels of Wiener-Hopf plus Hankel operators. More precisely, let $g$ be a matching function and let $\mathbf{P}(g)$ be the operator defined on the kernel $\ker W(g)$ by \begin{equation}\label{Pro1} \mathbf{P}(g):=J QW^0(g)P\left |_{\ker W(g)}\right .. \end{equation} One can easily check that $\mathbf{P}(g)$ maps $\ker W(g)$ into $\ker W(g)$ and $\mathbf{P}^2(g)=I$ (see \cite{DS:2014b} for more details). Therefore, the operators \begin{equation}\label{Pro2} \mathbf{P}^{-}(g):=(1/2)(I- \mathbf{P}(g)), \quad \mathbf{P}^{+}(g):=(1/2)(I+ \mathbf{P}(g)), \end{equation} considered on the space $\ker W(g)$, are complementary projections generating a decomposition of $\ker W(g)$, i.e. \begin{equation* \ker W(g)=\im \mathbf{P}^{-}(g)\dotplus\im\mathbf{P}^{+}(g).\\ \end{equation*} Consider now the Wiener-Hopf plus Hankel operators $W(a)+H(b)$, generating functions of which constitute a matching pair. In this case the elements of the subordinated pair $(c,d)$ are matching functions. Assume that the operator $W(c)$ is right-invertible and let $W_r^{-1}$ be a right inverse for $W(c)$. By $\vp_{\pm}$ we denote the operators defined on the kernel of the operator $W(d)$ by \begin{equation}\label{eqphi} 2\vp_{\pm}(s):=W_r^{-1}(c)W(\widetilde{a}^{-1}) s \mp J Q W^0(c)P W_r^{-1}(c)W(\widetilde{a}^{-1})s \pm J Q W^0(\widetilde{a}^{-1}) s, \end{equation} where $\widetilde{a}^{-1}=a^{-1}(-t)$. 
It was shown in \cite{DS:2014b} that for any $s\in \ker W(d)$ one has $\vp_{\pm}(s)\in \ker (W(a)\pm H(b))$, and the operators $\vp_+$ and $\vp_-$ are injections on the spaces $\im \mathbf{P}^+(d)$ and $\im \mathbf{P}^-(d)$, respectively. Moreover, the following result is true. \begin{prop}[{see \cite[Proposition 2.3]{DS:2014b}}]\label{p2} Let $(c,d)$ be the subordinated pair for a matching pair $(a,b)\in G \times G$. If the operator $W(c)$ is right-invertible, then \begin{equation}\label{cst14} \begin{aligned \ker(W(a)+H(b))& =\vp_{+}(\im \mathbf{P}^+(d)) \dotplus\im \mathbf{P}^-(c), \\ \ker(W(a)-H(b))& =\vp_{-}(\im \mathbf{P}^-(d)) \dotplus\im \mathbf{P}^+(c). \end{aligned} \end{equation} \end{prop} Thus to describe the kernels of the Wiener-Hopf plus/minus Hankel operators, one needs to find an efficient description of the images of the projections $\mathbf{P}^{\pm}(c)$ and $\mathbf{P}^{\pm}(d)$. Notice that the above statements do not depend on $p$. This paper is organized as follows. In Section \ref{s2} we present a decomposition of the kernel of $W(g)$ with a generating matching function $g$. These results are used in Section \ref{s3} in order to derive an efficient description of the kernels $\ker(W(a)\pm H(b))$, $p\in [1,\infty]$ and the cokernels $\coker (W(a)\pm H(b))$, $p\in [1,\infty)$. Similar results for Toeplitz plus Hankel operators have been obtained in \cite{DS:2014a, DS:2014}, and generalized Toeplitz plus Hankel operators are considered in \cite{DS:2013i}. However, all the relevant operators in \cite{DS:2014a, DS:2014, DS:2013i} are Fredholm. On the other hand, the really new feature of the present study is the consideration of situations where the operators $W(c)$ and $W(d)$ can have infinite-dimensional kernels or co-kernels. \section{Kernels of Wiener-Hopf operators with a matching generating function.\label{s2}} Our aim now is to describe the subspaces $\im \mathbf{P}^{\pm}(g)\subset \ker W(g)$. For, let us recall certain results of Fredholm theory for Wiener-Hopf operators with generating functions from the Banach algebra $G$. As we know, any element $a\in G$ can be represented in the form $a=b+k$, where $b$ belongs to the algebra $AP_w$ of all almost periodic functions with absolutely convergent Fourier series and $k$ is in the algebra $\cL_0$ of all Fourier transforms of functions from $L^1(\sR)$. If $a=b+k$, $b\in AP_w, k\in \cL_0$ is an invertible element of $G$, then $b$ is invertible in $AP_w$ and one can define the numbers $\nu=\nu(a)$ and $n=n(a)$ by \begin{equation* \nu(a):=\lim_{l\to\infty}\frac{1}{2l} [\arg b(t)]_{-l}^l,\quad n(a):=\frac{1}{2\pi} [\arg (1+b^{-1}(t)k(t)]_{t=-\infty}^\infty. \end{equation*} Recall that $a\in G$ is invertible in $G$ if and only if $\inf_{t\in\sR} |a(t)|>0$ and $\cL_0$ forms a closed two-sided ideal in $G$. \begin{thm}[Gohberg/Feldman \cite{GF1974}]\label{t1} Let $1\leq p\leq \infty$ and $g\in G$. The operator $W(g)$ is one-sided invertible in the space $L^p(\sR^+)$ if and only if $g$ is invertible in $G$. Further, if $g\in G$ is invertible in $G$, then the following assertions are true: \sloppy \begin{enumerate} \item If $\nu(g)<0$, then the operator $W(g)$ is invertible from the right and $\dim \ker W(g)=\infty$. \item If $\nu(g)=0$ and $n(g)\geq 0$ ($\nu(g)=0$ and $n(g)\leq 0$), then the operator $W(g)$ is invertible from the left (from the right) and \begin{equation*} \dim \coker W(g)=n(g) \quad (\dim \ker W(g)=-n(g)). 
\end{equation*} \item If $\nu(g)>0$, then the operator $W(g)$ is invertible from the left and $\dim \coker W(g)=\infty$. \item If $g\in G$ is not invertible in $G$, then $W(g)$ is not a semi-Fredholm operator. \end{enumerate} \end{thm} The proof of this theorem is based on the fact that every invertible function $a\in G$ admits a factorization of the form \begin{equation}\label{EqFactor} g(t)=g_-(t) e^{i\nu t} \left (\frac{t-i}{t+i} \right )^n g_+(t), \quad -\infty<t<\infty, \end{equation} where $g_{+}^{\pm1}\in G^+$, $g_{-}^{\pm1}\in G^-$, $\nu=\nu(g)$ and $n=n(g)$. Recall that $G^+(G^-)$ is defined as follows: $G^+(G^-)$ consists of all functions \eqref{cst1} such that all indices $\delta_j$ are non-negative (non-positive) and the function $k$ vanishes on the negative (positive) semi-axis. It is clear that functions from $G^+$ and $G^-$ admit holomorphic extensions to the upper and to the lower half-plane, correspondingly, and the intersection of the algebras $G^+$ and $G^-$ consists of constant functions only. Note that under the condition $g_-(0)=1$, the factorization \eqref{EqFactor} is unique. Moreover, for $a\in G^-, b \in G$ and $c\in G^+$, the first identity from \eqref{cst4} leads to the relation $$ W(abc)=W(a) W(b) W(c). $$ Combined with the factorization \eqref{EqFactor}, this relation leads to the following representation of the operator $W(g)$, $$ W(g)= W(g_-) W\left ( e^{i\nu t} \left ( \frac{t-i}{t+i}\right )^n \right ) W(g_+). $$ Therefore, theory of the Wiener-Hopf operators $W(g)$ with invertible symbol $g$ is based on the study of the middle factor of this factorization (see \cite[Chapter VII]{GF1974}). Thus the operator $W(a)$ has a kernel containing non-zero elements in the two cases--viz. if $\nu<0$, then $\dim\ker W(g)=\infty$, or if $\nu=0$ and $n<0$, then $\dim\ker W(g)=|n|$. In what follows we consider all possible situations separately. Let us note that $\ker W(a)$ do not depend on $p$. Assume that $g$ is a matching function. Then, as was pointed out in \cite{DS:2014b}, the factorization \eqref{EqFactor} comes down to the following one \begin{equation}\label{Eq1} g(t)= \boldsymbol\sigma(g)\, \widetilde{g}_+^{-1}(t) e^{i\nu t} \left (\frac{t-i}{t+i} \right )^n g_+(t) \end{equation} where $\boldsymbol\sigma(g)=(-1)^n g(0)$, $\widetilde{g}_+^{\pm1}(t)\in G^-$ and $g_-(t)=\boldsymbol\sigma(g)\,\widetilde{g}_+^{-1}(t)$. In passing note that $\boldsymbol\sigma(g)=\pm1$. Our goal now is to describe the projections $\mathbf{P}^{\pm}(g)$ from \eqref{Pro2}. Let us start with the case where the parameters $\nu$ and $n$ in the factorization \eqref{Eq1} satisfy the relations $\nu=0$, $n<0$. It is known \cite{GF1974} that in this case \begin{equation* \ker W(g)=\left \{ W(g_+^{-1}) \left ( \sum_{j=0}^{|n|-1}c_j t^j e^{-t} \right ): c_j\in \sC \right \}. \end{equation*} Thus the functions $W(g_+^{-1})t^j e^{-t}$, $j=0,1,\cdots,|n|-1$ form a basis in $\ker W(g)$. However, the space $\ker W(g)$ has another basis, namely, \begin{equation}\label{Eq1.5} \{ W(g_+^{-1}) \psi_j(t): \quad j=0,1, \cdots, |n|-1\}, \end{equation} where \begin{align* \psi_j(t)&:= \left \{ \begin{array}{ll} \sqrt{2} e^{-t} \Lambda_j(2t),& \text{ if } t>0,\\ 0, & \text{ if } t<0,\\ \end{array} \right .,& \quad j=0,1,\cdots\,,\phantom{--} \end{align*} and $\Lambda_j$ are the normalized Laguerre polynomials. 
Moreover, for $j=-1,-2, \ldots,$ one can define the functions $\psi_j$ by \begin{align} \psi_j(t) &:=-\psi_{-j-1}(-t), &\quad j=-1,-2, \cdots \,, \label{Eq3} \end{align} The functions $\psi_j$, $j\in \sZ$ can be also expressed in the form \begin{equation}\label{Eq4}\begin{aligned \psi_j(t)&=(U^j \psi_0)(t), \quad j=\pm1,\pm2, \cdots \,,\\ \psi_0(t)&= \left \{ \begin{array}{ll} \sqrt{2} e^{-t}, & \text{ if } t>0\\ 0, & \text{ if } t<0, \end{array} \right .\, . \end{aligned} \end{equation} where $U:=W^0((\lambda-i)/(\lambda+i))$. Note that the operators $U^j,j\in \sZ$ are unitary operators on $L^2(\sR)$. Thus, the functions $\psi_j, j\in\sZ$ form an orthonormal basis on this space. Indeed, it is shown in \cite[Chapter 3, \S 3.2]{GF1974} that for $j>0$, one has $$ (U^j \psi_0)(t)=\psi_j(t), $$ and applying \eqref{Eq3} one gets the result. Note that the relation \eqref{Eq3} can be obtained by using the Fourier transform. Indeed, let us recall the formula $$ (\cF \psi_n)(\lambda)=\int_0^\infty \psi_n (t)\,e^{i\lambda t}\,dt=\int_0^\infty U^n \psi_0 (t)\,e^{i\lambda t}\,dt= \left (\frac{\lambda-i}{\lambda+i}\right )^n \frac{i\sqrt{2}}{\lambda+i}, \quad n\in\sZ_+, $$ where $\cF$ is the Fourier transform \cite{GF1974} and $\sZ_+$ refers to the set of all non-negative integers. Consider the operator $J:L^p(\sR)\to L^p(\sR)$ defined by $(Jf)(t)=f(-t)$. If $n\in \sN$, then one has \begin{align}\label{psi1} \cF(-J\psi_{n-1})(\lambda)=-J\cF(\psi_{n-1})(\lambda) \nn \\ =-J\left ( \left (\frac{\lambda-i}{\lambda+i} \right )^{n-1}\frac{i\sqrt{2}}{\lambda+i} \right ) &=-\left ( \left (\frac{-\lambda-i}{-\lambda+i} \right )^{n-1}\frac{i\sqrt{2}}{-\lambda+i} \right )\nn \\ &= \left (\frac{\lambda+i}{\lambda-i} \right )^{n-1}\frac{i\sqrt{2}}{\lambda-i}. \end{align} On the other hand, if $n\leq-1$, then \begin{align}\label{psi2} (\cF\psi_n)(\lambda)& =\int_0^\infty \psi_n(t)e^{i\lambda t}\, dt =\int_0^\infty U^n \psi_0(t)e^{i\lambda t}\, dt \nn\\ &= \left (\frac{\lambda+i}{\lambda-i} \right )^{|n|}\frac{i\sqrt{2}}{\lambda+i} =\left (\frac{\lambda+i}{\lambda-i} \right )^{|n|-1}\frac{i\sqrt{2}}{\lambda-i}. \end{align} Comparing \eqref{psi1} and \eqref{psi2}, one obtains that $$ \cF(\psi_n (t))=\cF(-\psi_{|n|-1}(-t)) $$ and one has to use the injectivity of the Fourier transform to complete the proof. Let $g$ be a matching function. In order to describe the corresponding projections $\mathbf{P}^{\pm}(g)$ of \eqref{Pro1}-\eqref{Pro2}, we will study how the operator $\mathbf{P}(g)$ interacts with the basis elements \eqref{Eq1.5}. Thus \begin{align* \mathbf{P}(g) W(g_+^{-1}) \psi_j (t)& =JQW^0(g)P W(g_+^{-1})\psi_j(t)\\ &=JQW^0 \left(\boldsymbol\sigma(g)\, \widetilde{g}_+^{-1} \left (\frac{t-i}{t+i} \right )^{-|n|} g_+ \right ) W(g_+^{-1}) \psi_j \\ &=\boldsymbol\sigma(g)\,JQW^0(\widetilde{g}_+^{-1} ) W^0\left (\left (\frac{t-i}{t+i} \right )^{-|n|}\right ) \psi_j . \end{align*} Considering the elements $W^0\left (\left ((t-i)/(t+i) \right )^{-|n|}\right ) \psi_j$, $j=0,1,\cdots, |n|-1$ and using relations \eqref{Eq3} and \eqref{Eq4}, we get \begin{align* W^0\left (\left (\frac{t-i}{t+i} \right )^{-|n|}\right ) \psi_j &= W^0\left (\left (\frac{t-i}{t+i} \right )^{-|n|}\right )W^0\left (\left (\frac{t-i}{t+i} \right )^j \right ) \psi_0\\ &=W^0\left (\left (\frac{t-i}{t+i} \right )^{-|n|+j} \right ) \psi_0 =\psi_{-|n|+j}=-J\psi_{|n|-j-1}. 
\end{align*} Hence, $$ \boldsymbol\sigma(g)\,JQW^0(\widetilde{g}_+^{-1} ) W^0\left (\left (\frac{t-i}{t+i} \right )^{-|n|}\right ) \psi_j =-\boldsymbol\sigma(g)\,PW^0(g_+^{-1} )\psi_{|n|-j-1}. $$ Now one can proceed similarly to \cite[Section 5]{DS:2014} and obtain the following result. \begin{thm}\label{thm1} Let $g\in G$ be a matching function such that the operator $W(g):L^p(\sR^+)\to L^p(\sR^+)$ is Fredholm and $n:=\ind W(g)>0$. If $$ g(t)= g_-(t) \left ( \frac{t-i}{t+i} \right)^{-n}g_+(t)= \boldsymbol\sigma(g)\, \widetilde{g}_+^{-1}(t) \left (\frac{t-i}{t+i} \right )^{-n} g_+ (t)\, , \quad g_-(0)=1, $$ is the related Wiener-Hopf factorization of the function $g$, then the following systems $\fB_{\pm}(g)$ of functions $W(g_+^{-1}) \psi_{j}$ form bases in the spaces $\im \mathbf{P}^{\pm}(g)$: \begin{enumerate} \item If $n=2m$, $m\in\sN$, then $$ \fB_{\pm}(g)=\{W(g_+^{-1}) \left ( \psi_{m-k-1}\mp \boldsymbol\sigma(g)\psi_{m+k}\right ): k=0,1,\cdots, m-1\}, $$ and $$ \dim\im \mathbf{P}^{\pm}(g)=m. $$ \item If $n=2m+1$, $m\in\sZ_+$, then $$ \fB_{\pm}(g)=\{W(g_+^{-1})\left ( \psi_{m+k}\mp \boldsymbol\sigma(g)\psi_{m-k}\right ): k=0,1,\cdots, m\}, $$ and $$ \dim\im \mathbf{P}^{\pm}(g)=m+ \frac{1\mp\boldsymbol\sigma(g)}{2}. $$ \end{enumerate} \end{thm} \begin{rem}\label{rem1} It is worth mentioning that the zero element belongs to exactly one of the sets $\{W(g_+^{-1})(\psi_{m+k}- \boldsymbol\sigma (g)\psi_{m-k}):k=0,1,\cdots, m \}$ or $\{W(g_+^{-1})(\psi_{m+k}+ \boldsymbol\sigma (g)\psi_{m-k}):k=0,1,\cdots, m \}$. Namely, for $k=0$ exactly one of the terms $\psi_m(1\pm\boldsymbol\sigma(g))$ is equal to zero. \end{rem} Consider now the case $\nu<0$ and $n=0$. Then \begin{equation*} \ker W(g)= \left \{ W(g_+^{-1})f: f\in L^p(\sR^+) \text{ and } f(t)=0 \text{ for } t>|\nu|\right \}, \end{equation*} (see \cite[Chapter VII, \S 2.4]{GF1974}). \begin{thm}\label{thm2} Let $g\in G$ be a matching function such that the function $g$ possesses the Wiener-Hopf factorization $$ g(t)= g_-(t) e^{i\nu t} g_+(t)= \boldsymbol\sigma(g)\,\widetilde{g}_+^{-1}(t) e^{i\nu t}\,g_+ (t) , \quad \nu<0 \text{ and } g_-(0)=1, $$ and let $h\in \ker W(g)$, that is, $h=W(g_+^{-1})f$ with $f\in L^p(\sR^+)$ such that $f(t)=0$ for $t>|\nu|$. Then $$ JQW^0(g)Ph=\boldsymbol\sigma(g)\,W(g_+^{-1}) \cR_{|\nu|} f, $$ where \begin{equation}\label{Eq4.5} (\cR_{|\nu|}f)(t)=\left \{ \begin{array}{cc} f(|\nu|-t), & \text{ if } 0<t< |\nu|,\\ 0, & \text{ if } t> |\nu|, \\ \end{array} \right ., \end{equation} and $$ \mathbf{P}^{\pm}(g)h= \frac{h \pm \boldsymbol\sigma(g)\,W(g_+^{-1}) \cR_{|\nu|}\, f}{2}. $$ \end{thm} The proof of this result runs similarly to the proof of Theorem \ref{thm4} below, where a more general factorization of the corresponding matching function $g$ has to be used. Next we consider the situation $\nu<0$ and $n<0$. In this case the function $g$ admits the Wiener-Hopf factorization of the form \begin{equation}\label{Eq5} g=\boldsymbol\sigma(g)\,\widetilde{g}_+^{-1} e^{i\nu t}\left (\frac{t-i}{t+i} \right )^{n} g_+ \,. \end{equation} As is shown in \cite[Chapter VII]{GF1974}, the kernel of the operator $W(g)$ is the direct sum of the kernels of the operators $W(g_+((t-i)/(t+i))^n)$ and $W(g_+ e^{i\nu t})$. Thus $$ \ker W(g)= \ker W\left (g_+\left (\frac{t-i}{t+i}\right )^n \right )\dotplus \ker W\left (g_+ e^{i\nu t}\right ).
$$ Therefore, in order to characterize the projections $\mathbf{P}^{\pm}(g):\ker W(g)\to \ker W(g)$, one can describe their action on the subspaces $\ker W(g_+((t-i)/(t+i))^n)$ and $ \ker W\left (g_+ e^{i\nu t}\right )$ separately. To this aim, let us use the following representations of the function $g$: \begin{align* g=e^{i\nu t} g_1, \quad g_1:=\boldsymbol\sigma (g)\widetilde{g}_+^{-1} \left ( \frac{t-i}{t+i} \right )^n g_+,\\ g=\left ( \frac{t-i}{t+i} \right )^n g_2, \quad g_2:=\boldsymbol\sigma (g)\widetilde{g}_+^{-1} e^{i\nu t} g_+. \label{Eqg2} \end{align*} Moreover, observe that $JQW^0(g)P=H(\widetilde{g})$. \begin{thm}\label{thm3} Assume that $g$ is a matching function of the form \eqref{Eq5}. \begin{enumerate} \item If $h\in \ker W(g_+((t-i)/(t+i))^n)$, then $$ \mathbf{P}^{\pm}(g)h= \frac12 \left [I\pm \left ( W(e^{i|\nu| t})(\mathbf{P}^+(g_1)-\mathbf{P}^-(g_1))\right ) \right ] h. $$ \item If $h\in \ker W(g_+e^{i\nu t})$, then $$ \mathbf{P}^{\pm}(g)h= \frac12\left [I\pm \left ( W\left (\left ( \frac{t-i}{t+i} \right )^{|n|}\right)(\mathbf{P}^+(g_2)-\mathbf{P}^-(g_2))\right ) \right ] h. $$ \end{enumerate} \end{thm} \textbf{Proof.} Let us start with assertion (i). Using \eqref{cst4} we obtain $$ JQW^0(g)P=PW^0(\widetilde{g})QJ=H(\widetilde{g})= W \left (\widetilde{e^{i\nu t}}\right )H(\widetilde{g}_1)+H\left (\widetilde{e^{i\nu t}}\right ) W(g_1), $$ and the relation $W (g_1)h=0$ implies that $$ H(\widetilde{g})h=W \left (e^{i|\nu| t}\right )H(\widetilde{g}_1)h. $$ Therefore, \begin{align* \mathbf{P}^{\pm}(g)h&=\left [ \frac{I\pm H(\widetilde{g})}{2}\right ]h=\left [ \frac{I\pm W \left (e^{i|\nu| t}\right )H(\widetilde{g}_1))}{2}\right ]h \\ &=\frac12 \left [I\pm \left ( W(e^{i|\nu| t})(\mathbf{P}^+(g_1)-\mathbf{P}^-(g_1))\right ) \right ] h, \end{align*} so the assertion (i) is proved. The proof of assertion (ii) is similar to that of (i). It is based on the formula $$ H(\widetilde{g})=W \left ( \widetilde{\left ( \frac{t-i}{t+i} \right )^{|n|}}\right) H(\widetilde{g}_2) + H\left ( \widetilde{\left ( \frac{t-i}{t+i} \right )^{|n|}}\right) W(g_2), $$ and is left to the reader. \rbx \begin{rem}\label{rem2} Recall that the projections $\mathbf{P}^{\pm}(g_1)$ and $\mathbf{P}^{\pm}(g_2)$ acting, respectively, on the subspaces $\ker W(g_1)$ and $\ker W(g_2)$ are described by Theorem \ref{thm1} and Theorem \ref{thm2}. Besides, $$ \boldsymbol\sigma(g)=\boldsymbol\sigma(g_1)=\boldsymbol\sigma(g_2). $$ \end{rem} Finally, let us consider the case $\nu<0$ and $n>0$, i.e. now we assume that the Wiener-Hopf factorization of the matching function $g$ is \begin{equation}\label{Eq6} g=\boldsymbol\sigma(g)\,\widetilde{g}_+^{-1} e^{i\nu t}\left (\frac{t-i}{t+i} \right )^{n} g_+ \,. \end{equation} If this is the case, then according to \cite[Chapter VII]{GF1974} the kernel of the operator $W(g)$ consists of the functions $h$ having the form \begin{equation}\label{Eq7} h=W(g_+^{-1})W\left ( \left ( \frac{t-i}{t+i}\right )^{-n} \right )\vp, \end{equation} where $\vp\in L^p(\sR^+)$ is such that \begin{equation}\label{Eq8} \vp(t)=0 \text{ for all } t>|\nu| \text{ and } \int_0^\infty \vp(t)\, t^j e^{-t}\,dt=0, \quad j=0,1,\cdots, n-1. \end{equation} \begin{thm}\label{thm4} Let $g\in G$ be a matching function such that the function $g$ possesses the Wiener-Hopf factorization \eqref{Eq6}. Assume that $h\in \ker W(g)$. 
Then it can be represented in the form \eqref{Eq7}--\eqref{Eq8} and $$ JQW^0(g)Ph=\boldsymbol\sigma(g)\,W(g_+^{-1}) \cR_{|\nu|} \vp, $$ where $\cR_{|\nu|}$ is defined by \eqref{Eq4.5} and $$ \mathbf{P}^{\pm}(g)h= \frac{h \pm \boldsymbol\sigma(g)\,W(g_+^{-1}) \cR_{|\nu|}\, \vp}{2}. $$ \end{thm} \textbf{Proof.} Consider the expression $JQW^0(g)Ph$. One has \begin{align*} JQW^0(g)Ph &=\boldsymbol\sigma(g) JQ W^0(\widetilde{g}_+^{-1}) W^0(e^{i\nu t}) W^0 \left ( \left (\frac{t-i}{t+i}\right )^n\right ) W^0(g_+) Ph \\ &=\boldsymbol\sigma(g) JQ W^0(\widetilde{g}_+^{-1}) W^0(e^{i\nu t}) P\vp =\boldsymbol\sigma(g) P W^0(g_+^{-1}) W^0(e^{i|\nu| t}) J P\vp\\ &=\boldsymbol\sigma(g) P W(g_+^{-1}) \cR_{|\nu|} \vp. \end{align*} Application of the relation $$ \mathbf{P}^{\pm}(g)h= \frac{h\pm JQW(g)P h}{2}, $$ completes the proof. \rbx \section{Kernels and cokernels of Wiener-Hopf plus Hankel operators. Specification.\label{s3}} In this section we study the kernels and cokernels of Wiener-Hopf plus Hankel operators in the case where the generating functions $a,b\in G$ satisfy the matching condition \eqref{cst9} and $a$ is invertible in $G$. Then according to Theorem \ref{t1}, the operators $W(c)$ and $W(d)$ are one-sided invertible in $L^p(\sR^+)$, $1\leq p \leq \infty$. Using results of Section \ref{s2}, we derive an explicit description for the kernels and cokernels of the operators mentioned. As before, we again have to consider several cases. \subsection{The Case I: $\nu(c)=\nu(d)=0$.\label{ss2.1}} This case is also used as a model case in order to show how to handle all other situations. If the indices $\nu(c)$ and $\nu(d)$ are equal to zero, then the operators $W(c)$ and $W(d)$ are Fredholm. Using the relations (2.4) and (2.7) of \cite{DS:2014b}, one obtains that the operators $W(a)\pm H(b)$ are Fredholm. Set $\kappa_1:=\ind W(c)$, $\kappa_2:=\ind W(d)$ and let $\sZ_-$ and $\sZ_+$ refer to the sets of all negative and non-negative integers, respectively. \begin{thm}\label{thm5} Assume that $\nu(c)=\nu(d)=0$. \begin{enumerate} \item If $(\kappa_1,\kappa_2)\in \sZ_+\times \sN$, then for all $p\in [1, \infty]$ the operators $W(a)\pm H(b):L^p \to L^p$ are invertible from the right and \begin{equation*} \begin{aligned} \ker(W(a)+H(b))& =\im \mathbf{P}^-(c) \dotplus \vp_{+}(\im \mathbf{P}^+(d)), \\ \ker(W(a)-H(b))& =\im \mathbf{P}^+(c) \dotplus\vp_{-}(\im \mathbf{P}^-(d)), \end{aligned} \end{equation*} where the spaces $\im \mathbf{P}^{\pm}(c)$, $\im \mathbf{P}^{\pm}(d)$ are described in Theorem \ref{thm1} and the mappings $\vp_{\pm}$ are defined by \eqref{eqphi}. \item If $(\kappa_1,\kappa_2)\in \sZ_-\times (\sZ\setminus \sN)$, then for all $p\in [1, \infty]$ the operators $W(a)\pm H(b):L^p \to L^p$ are invertible from the left and for all $p\in [1, \infty)$ one has \begin{equation*} \begin{aligned} \coker(W(a)+H(b))& =\im \mathbf{P}^-(\overline{d}) \dotplus \vp_{+}(\im \mathbf{P}^+(\overline{c})), \\ \coker(W(a)-H(b))& =\im \mathbf{P}^+(\overline{d}) \dotplus\vp_{-}(\im \mathbf{P}^-(\overline{c})), \end{aligned} \end{equation*} with $\im \mathbf{P}^{\pm}(\overline{d})=\{0\}$ if $\kappa_2=0$. \item If $(\kappa_1,\kappa_2)\in \sZ_+ \times (\sZ\setminus \sN)$, then for all $p\in [1, \infty]$ one has \begin{equation*} \begin{aligned} \ker(W(a)+H(b))& =\im \mathbf{P}^-(c), \\ \ker(W(a)-H(b))& =\im \mathbf{P}^+(c), \end{aligned} \end{equation*} and for all $p\in [1, \infty)$, \begin{equation*} \begin{aligned} \coker(W(a)+H(b))& =\im \mathbf{P}^-(\overline{d}), \\ \coker(W(a)-H(b))& =\im \mathbf{P}^+(\overline{d}).
\end{aligned} \end{equation*} \end{enumerate} \end{thm} \textbf{Proof.} Let us note that all results concerning the kernels of the corresponding operators follow immediately from Proposition \ref{p2} and from Theorem \ref{thm1}. As far as the cokernel structure is concerned, one has to take into account the already mentioned relation \eqref{cst10} and the fact that $(\overline{d},\overline{c})$ is the subordinated pair for the duo $(\overline{a},\overline{b})$. \rbx It remains to consider the case $(\kappa_1,\kappa_2)\in \sZ_- \times \sN$. This situation is more involved. In order to formulate the next result, we need a special representation for the index of the operator $W(c)$. Thus chose $k\in \sN$ such that \begin{equation*} 1\geq 2k+\kappa_1\geq 0. \end{equation*} Such a number $k$ is uniquely defined and \begin{equation*} 2k+\kappa_1 =\left\ \begin{array}{ll} 0, & \hbox{if\;} \kappa_1 \; \hbox{is even,} \\ 1, &\hbox{if\;} \kappa_1 \; \hbox{is odd.} \\ \end{array \right. \end{equation*} Now the operators $W(a)\pm H(b)$ can be represented in the form \begin{equation}\label{eq6.1} W(a)\pm H(b)= \left( W \left (a\left (\frac{t-i}{t+i} \right )^{-k} \right )\pm H \left (b\left (\frac{t-i}{t+i} \right )^k \right )\right )W \left (\left (\frac{t-i}{t+i} \right )^k \right ). \end{equation} Observe that $\left (a\left (\frac{t-i}{t+i} \right )^{-k}, b\left (\frac{t-i}{t+i} \right )^k \right )$ is a matching pair with the subordinated pair $\left (c\left (\frac{t-i}{t+i} \right )^{-2k}, d\right )$. Therefore, the operators $W\left (a\left (\frac{t-i}{t+i} \right )^{-k}\right )\pm H \left (b\left (\frac{t-i}{t+i} \right )^k \right )$ are subject to assertion (i) of Theorem \ref{thm5}. Thus they are right-invertible, and if $\kappa_1$ is even, then \begin{equation}\label{eq6.2} \begin{aligned} \ker \left (W\left (a\left (\frac{t-i}{t+i} \right )^{-k}\right)+ H \left (b\left (\frac{t-i}{t+i} \right )^k\right)\right )=\vp_+(\im \mathbf{P}^{+}(d)),\\ \ker \left (W\left (a\left (\frac{t-i}{t+i} \right )^{-k}\right)- H \left (b\left (\frac{t-i}{t+i} \right )^k\right) \right)=\vp_-(\im \mathbf{P}^{-}(d)), \end{aligned} \end{equation} and if $\kappa_1$ is odd, then \begin{equation}\label{eq6.3} \begin{aligned} \ker \left (W\left (a\left (\frac{t-i}{t+i} \right )^{-k}\right ) \right . & + \left .H \left (b\left (\frac{t-i}{t+i} \right )^k \right) \right) \\ &= \frac{1-\boldsymbol\sigma(c)}{2}W(c_+^{-1})\,\{\sC \psi_0\} \dotplus \vp_+(\im \mathbf{P}^{+}(d)),\\ \ker \left (W\left (a\left (\frac{t-i}{t+i} \right )^{-k}\right)\right .&- \left. H \left (b\left (\frac{t-i}{t+i} \right )^k\right) \right) \\ &=\frac{1+\boldsymbol\sigma(c)}{2}W(c_+^{-1})\,\{\sC \psi_0\} \dotplus \vp_-(\im \mathbf{P}^{-}(d)), \end{aligned} \end{equation} where the function $\psi_0$ is defined by \eqref{Eq4} and the mappings $\vp_{\pm}$ depend on the functions $a\left (\frac{t-i}{t+i} \right )^{-k}$ and $b\left (\frac{t-i}{t+i} \right )^k$. \begin{thm}\label{t4} Let $(\kappa_1,\kappa_2)\in \sZ_-\times \sN$ and $p\in [1,\infty)$. Then \begin{enumerate} \item If $\kappa_1$ is odd, then \begin{align* &\ker(W(a)\pm H(b)) = W \left (\left (\frac{t-i}{t+i} \right )^{-k}\right )\\ &\;\times\left (\left\{ \frac{1\mp\boldsymbol\sigma(c)}{2}W(c_+^{-1})\,\{\sC \psi_0\} \dotplus \vp_{\pm}(\im \mathbf{P}^{\pm}(d))\right\} \cap \im W\! \left (\!\left (\frac{t-i}{t+i} \right )^k \right )\right )\\ &\;=\left\{\!\!\psi\in \!\left \{ W \!\!\left (\left (\frac{t-i}{t+i} \right )^{-k}\right )u\right \}\!: \! 
u\in \left \{\frac{1\mp\boldsymbol\sigma(c)}{2}W(c_+^{-1})\,\{\sC \psi_0\} \dotplus \vp_{\pm}(\im \mathbf{P}^{\pm}(d))\right \} \right . \\ &\qquad\qquad \qquad\qquad\qquad\text{and}\left. \int_0^\infty u(t) e^{-t}t^j\,dt=0 \text{ for all } j=0,1,\cdots, k-1, \right \}, \end{align*} where the mappings $\vp_{\pm}$ depend on the functions $a\left (\frac{t-i}{t+i} \right )^{-k}$ and $b\left (\frac{t-i}{t+i} \right )^k$. The last means that the functions $a,b$ and $c$ in the expression \eqref{eqphi} have to be, respectively, replaced by $a\left (\frac{t-i}{t+i} \right )^{-k}, b\left (\frac{t-i}{t+i} \right )^k$ and $c\left (\frac{t-i}{t+i} \right )^{-2k}$. \item If $\kappa_1$ is even, then \begin{multline* \ker(W(a)\pm H(b))\!\! = \!\! W \left (\left (\frac{t-i}{t+i} \right )^{-k}\right)\!\!\!\!\left ( \left\{\vp_{\pm}(\im \mathbf{P}^{\pm}(d))\right\} \cap \im W \left (\left (\frac{t-i}{t+i} \right )^k\right ) \right )\\ = \left\{ \psi\in \{ W \left (\left (\frac{t-i}{t+i} \right )^{-k}\right)u\}: u\in \left \{\{\sC \psi_0\}\dotplus \vp_{\pm}(\im \mathbf{P}^{\pm}(d)) \, \right \}\right . \text{and} \\ \left. \int_0^\infty u(t) e^{-t}t^j\,dt=0 \text{ for all } j=0,1,\cdots, k-1, \right \}, \end{multline*} and the mappings $\vp_{\pm}$ again depend on $a\left (\frac{t-i}{t+i} \right )^{-k}$ and $b\left (\frac{t-i}{t+i} \right )^k$. \end{enumerate} \end{thm} \textbf{Proof.} It follows immediately from the representations \eqref{eq6.1}--\eqref{eq6.3}. \rbx Theorem \ref{t4} can also be used to derive representations for the cokernels of the operators $W(a)\pm H(b)$ in the situation where $(\kappa_1,\kappa_2)\in \sZ_-\times \sN$. Indeed, recalling that for $p\in [1,\infty)$, the adjoint operator $(W(a)\pm H(b))^*$ can be represented in the form \eqref{cst10} and $(\overline{d}, \overline{c})$ is the subordinated pair for $(\overline{a},\widetilde{\overline{b}})$, one can observe that the operators $W(\overline{d})$ and $W(\overline{c})$ are also Fredholm and \begin{equation*} \ind W(\overline{d})=-\kappa_2, \quad \ind W(\overline{c})=-\kappa_1, \end{equation*} so $(-\kappa_2,-\kappa_1)\in \sZ_-\times \sN$. Therefore, Theorem \ref{t4} applies and one can formulate the following result. \begin{thm}\label{t5} Let $(\kappa_1,\kappa_2)\in \sZ_-\times \sN$, and let $m\in\sN$ satisfy the requirement \begin{equation*} 1\geq 2m-\kappa_2\geq0. \end{equation*} Then \begin{enumerate} \item If $\kappa_2$ is odd, then \begin{align* & \coker(W(a)\pm H(b))= W \left (\!\left (\frac{t-i}{t+i} \right )^{-m}\right)\\ &\quad \times \left (\left\{ \frac{1\mp\boldsymbol\sigma(\overline{d})}{2}W(\overline{d_-^{-1}})\, \{\sC \psi_0\} \dotplus \vp_{\pm}(\im \mathbf{P}^{\pm}(\overline{c}))\right\} \cap \im W \left (\left (\frac{t-i}{t+i} \right )^{m}\right) \right ). \end{align*} \item If $\kappa_2$ is even, then \begin{align* &\coker(W(a)\pm H(b))= \\ &= W\left (\left (\frac{t-i}{t+i} \right )^{-m}\right)\left (\left\{\vp_{\pm}(\im \mathbf{P}^{\pm}(\overline{c})\right\} \cap \im W\left (\left (\frac{t-i}{t+i} \right )^{m}\right) \right ), \end{align*} and the mappings $\vp_{\pm}$ depend on $\overline{a}\left (\frac{t-i}{t+i} \right )^{-m}$ and $\widetilde{\overline{b}}\left (\frac{t-i}{t+i} \right )^{m}$. \end{enumerate} \end{thm} \subsection{The Case II: $\nu(c)\neq 0$ and $\nu(d)\neq 0$.\label{ss2.2}} According to Theorem \ref{t1}, the operators $W(c)$ and $W(d)$ are one-sided invertible. 
In this situation the pair $(W(c),W(d))$ belongs to one of the classes $(r,r)$, $(l,l)$, $(l,r)$ or $(r,l)$, where the letter $r$ or $l$ means that the corresponding operator is right- or left-invertible. It is worth mentioning that if the pair $(W(c),W(d))$ belongs to the class $(r,l)$, then the operator $W(a)+H(b)$ is normally solvable but it is not semi-Fredholm. Further, if $(W(c),W(d))\in (l,r)$ then, generally, it is not known whether $W(a)+H(b)$ is normally solvable or not. If $(W(c),W(d))$ belongs to one of the classes $(r,r)$ or $(r,l)$, then the kernels of the operators $W(a)+H(b)$ and $W(a)-H(b)$ can be described using results of Section \ref{s2}. For the description of the cokernels of the operators $W(a)+H(b)$ and $W(a)-H(b)$ in the cases $(l,l)$ and $(r,l)$, one has to assume that $p\in[1,\infty)$ and use the relation \eqref{cst10}. If $(W(c),W(d))$ belongs to the class $(r,l)$, then one can proceed similarly to Subsection \ref{ss2.1}. More precisely, we have to consider three situations, namely, \begin{enumerate} \item The index $\nu(c)<0$ and the index $n(c)>0$. \item The index $\nu(c)<0$ and the index $n(c)=0$. \item The index $\nu(c)<0$ and the index $n(c)<0$. \end{enumerate} Since in these situations the operator $W(c)$ is right-invertible, the kernels of the operators $W(a)+H(b)$ and $W(a)-H(b)$ can be described by Proposition \ref{p2} and subsequent use of Theorems \ref{thm2}, \ref{thm3} and \ref{thm4}. As was already mentioned, if the pair $(W(c),W(d))$ belongs to the class $(l,r)$, then it is not known whether the operators $W(a)\pm H(b)$ are normally solvable. Nevertheless, the kernels and cokernels of these operators can still be described. However, it is worth noting that Proposition \ref{p2} cannot be directly used. Thus let us sketch the idea of how to proceed in this situation. We have to deal with the following cases: \begin{enumerate} \item The index $\nu(c)>0$ and the index $n(c)>0$. \item The index $\nu(c)>0$ and the index $n(c)=0$. \item The index $\nu(c)>0$ and the index $n(c)<0$. \end{enumerate} In these situations the operators $W(a)\pm H(b)$ admit the factorization \begin{align*} W(a)\pm H(b) &= \left ( W \left ( ae^{-i\nu t/2} \left ( \frac{t-i}{t+i} \right)^{-k} \right )\pm H\left ( be^{i\nu t/2} \left ( \frac{t-i}{t+i} \right)^{k} \right ) \right ) \\ & \qquad\qquad \times W \left ( e^{i\nu t/2} \left ( \frac{t-i}{t+i} \right)^{k} \right ),\\ W(a)\pm H(b) &= \left ( W \left ( ae^{-i\nu t/2} \right )\pm H\left ( be^{i\nu t/2} \right ) \right ) W \left ( e^{i\nu t/2} \right ), \\ W(a)\pm H(b) &= \left ( W \left ( ae^{-i\nu t/2} \right )\pm H\left ( be^{i\nu t/2} \right ) \right ) W \left ( e^{i\nu t/2} \right ), \end{align*} where $\nu=\nu(c)$ and $k$ are defined as in Subsection \ref{ss2.1}.
Let us consider, for definiteness, the operator $W(a)+H(b)$ and note that the respective subordinated pairs for the first operators in the right-hand sides of the last representations are \begin{equation*} \left ( ce^{-i\nu t} \left ( \frac{t-i}{t+i} \right)^{-2k} , d \right ), \left ( ce^{-i\nu t}, d \right), \text{ and } \left ( ce^{-i\nu t}, d \right) \end{equation*} with the respective indices $\nu$ and $n$ defined as \begin{align*} & \nu \left ( ce^{-i\nu t} \left ( \frac{t-i}{t+i} \right)^{-2k} \right ) =0 \text{ and } n\left ( c e^{-i\nu t} \left ( \frac{t-i}{t+i}\right )^{-2k}\right ) =-2k+n(c), \\ & \nu \left ( ce^{-i\nu t} \right ) =0 \text{ and } n\left ( c e^{-i\nu t} \right ) =0,\\ & \nu \left ( ce^{-i\nu t} \right ) =0 \text{ and } n\left ( c e^{-i\nu t} \right ) =n(c). \end{align*} Now using the corresponding results of Section \ref{s2} and those obtained in Subsection \ref{ss2.1}, one can get a complete description for the kernels and cokernels of the operators $W(a)+H(b)$ and $W(a)-H(b)$. \subsection{The Case III.\label{ss2.3}} Assume that only one of the indices $\nu(c)$ or $\nu(d)$ is equal to zero. This case can be handled similarly to Cases~I and II without any new features. Therefore, we omit detailed formulations here. However, it is worth mentioning that in this case the operators $W(a)\pm H(b)$ are semi-Fredholm but not Fredholm. \section*{Acknowledgements} The authors thank an anonymous referee for very careful reading of the manuscript and suggesting a number of improvements and corrections.
\section{Conclusion and Future Work} This study demonstrates that SE-MoE, a distributed MoE training and inference system, can satisfy the requirements of NLP and CV tasks well. It not only addresses the training and inference issues of large models, but also achieves competitive performance. In the future, we will explore a unified sparse training and inference system that takes parameter servers into account and schedules in multiple dimensions. The unified system will be effective in improving sparse training to overcome communication, computing and storage bottlenecks. Besides, how to utilize sparse training for large-scale models to obtain better convergence in various tasks is still an attractive topic. We will further research efficient methods for sparse training in SE-MoE. Finally, we will enhance our unified system by collaborating with the resource platform to perform lower-carbon and more environmentally friendly research. \section{Efficient methods on MoE model} \label{sec:eff_methods} The unique architecture of the MoE model introduces new intrinsic problems in training and inference. Elastic MoE Training has been designed to address the challenge of load imbalance due to uneven input data. Moreover, considering that MoE involves quite a bit of cross-machine communication, we explore Resource-aware Communication to speed up communication across different clusters. Finally, to overcome the limitation of storage caused by the oversized vocabulary used in most tasks, we design and implement a novel embedding partition method in data parallelism, different from that in tensor-slicing parallelism. \subsection{Elastic MoE Training} \label{sec:elastic_training} Load imbalance largely affects the overall training performance, especially for multi-task training based on the MoE architecture. For example, in the UFO task, due to the different amounts of input data for each task, the computing time is not uniform, causing serious load imbalance. On the one hand, the unbalanced load can exceed the memory limit because a single-task node processes a larger batch size after collecting data from another node. On the other hand, synchronous communication has to wait for the slowest node, which is known as the "Cask Effect" and results in a decline in computing utilization. \begin{figure}[!htb] \centering \subfloat[Original]{\includegraphics[scale=1.2]{figures/load_imbalance} \label{fig:Original}} \hfil \subfloat[Scale down]{\includegraphics[scale=1.2]{figures/scale_down_imbalance} \label{fig:Scale down}} \hfil \subfloat[Scale up]{\includegraphics[scale=1.2]{figures/scale_up_imbalance} \label{fig:Scale up}} \caption{Different methods supported by elastic MoE training: (a) the original training with load imbalance, in which the ratio of node data quantities is 1:1:2; (b) combining multiple nodes with light-duty tasks, in which the ratio of node data quantities is 2:2; (c) adding extra nodes to handle heavy-duty tasks, in which the ratio of node data quantities is 1:1:1:1.} \label{fig:load imbalance} \end{figure} According to the per-task workload estimated in advance, the elastic MoE training method flexibly adjusts the number of training nodes to ensure the load balance of each node. In general, combining multiple nodes with light-duty tasks can better utilize resources on the premise that storage is not the bottleneck, as shown in Figure~\ref{fig:Scale down}.
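Both this adjustment and the opposite one described next amount to reallocating GPUs roughly in proportion to each task's workload. The following minimal sketch is a hypothetical allocation rule for illustration only, not SE-MoE's actual scheduler:
\begin{verbatim}
def allocate_gpus(batch_sizes, total_gpus):
    """Assign GPUs to tasks roughly in proportion to their batch sizes.

    A simplified illustration of elastic load balancing: heavy tasks get
    extra GPUs (scale up) while light tasks keep a single one (scale down).
    This is an assumed rule, not the actual SE-MoE scheduling logic.
    """
    total_batch = sum(batch_sizes)
    # Ideal proportional share per task, with at least one GPU each.
    shares = [max(1, round(total_gpus * b / total_batch)) for b in batch_sizes]
    # Adjust so the shares sum exactly to the available GPU count.
    while sum(shares) > total_gpus:
        shares[shares.index(max(shares))] -= 1
    while sum(shares) < total_gpus:
        shares[shares.index(min(shares))] += 1
    return shares

# Example: per-task batch sizes 512/256/128/128 on 8 GPUs give a 4/2/1/1
# split, matching the setting used later in the multi-task experiments.
print(allocate_gpus([512, 256, 128, 128], 8))  # -> [4, 2, 1, 1]
\end{verbatim}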
In contrast, adding extra nodes to handle heavy-duty tasks reduces the per-node workload, as the same task is processed by more computing resources. At the same time, the input data of the heavy-duty task are divided to balance the input, and data parallelism is employed to ensure parameter synchronization, as shown in Figure~\ref{fig:Scale up}. The above two methods of elastic training can effectively alleviate the performance degradation caused by the unbalanced load; the specific performance comparison is shown in Section~\ref{exp:ela_train}. \subsection{Resource-Aware Communication} During training and inference of the MoE model, a large amount of AlltoAll communication is required between devices in expert parallelism. This can become a performance bottleneck, as multiple AlltoAll operations compete for limited network resources at the same time. After analyzing the network topology of our clusters, we find that data interaction across clusters is much slower than that between devices within one cluster because of the longer message path and higher traffic cost. \begin{figure*}[htb] \centering \includegraphics[scale=0.7]{figures/network_topology} \caption{Network topology and message path of data movement} \label{fig:network_topology} \end{figure*} Through NVLink, intra-node communication takes little time and few resources without crossing any ToR bridge or switch. However, when communication happens between different nodes in one cluster or across clusters, a longer message path crossing ToR bridges and switches is required, which costs more time in traffic scheduling. Suppose that there are $m$ clusters in the network and $p$ nodes sharing the same series of ToR bridges in one cluster. All leaf switches (LE) and spine switches (SP) are divided into $n$ and $m$ groups, respectively. As shown in Figure~\ref{fig:network_topology}, leaf switches of the $i$-th group are directly connected only to the ToR bridges with rank $i$ from different clusters, and the spine switches are used for interaction across leaf switches. Since the bandwidth of the spine switches is less than that of the leaf switches, data exchange should utilize leaf switches as much as possible for better performance. For example, supposing that all GPU0s are connected to $ToR1$ and all GPU7s are connected to $ToRn$, we see that data movement between GPU0 of Node1 from Cluster A and GPU7 of Node2 from Cluster B traverses the switch routing path $[LE1, SPq, LE1]$, as marked by the red lines, which incurs a larger communication cost and the potential for resource competition with other interactions. A better way to implement the above communication is a two-step process: data movement from GPU0 to GPU7 within Node1 through NVLink, followed by cross-cluster communication between the pair of corresponding ToR bridges with rank 7 without crossing any switch except $LE1$, as marked by the blue lines. This enables full utilization of NVSwitch bandwidth and optimized network traffic. \begin{figure*}[htb] \centering \includegraphics[width=\columnwidth]{figures/hierarchical_alltoall} \caption{Hierarchical AlltoAll} \label{fig:hierarchical_alltoall} \end{figure*} Therefore, data exchange between GPUs with the same in-node rank outperforms that between GPUs with different in-node ranks. Based on the properties of the network topology, we suggest an optimized Hierarchical AlltoAll communication with resource awareness in training and inference.
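Before walking through the data flow in Figure~\ref{fig:hierarchical_alltoall}, the routing idea can be illustrated with a small, self-contained toy simulation; the function below is purely illustrative (it assumes the global rank equals the node index times $p$ plus the in-node rank) and stands in for the collective primitives used in practice, not for SE-MoE's communication code:
\begin{verbatim}
def simulate_hierarchical_alltoall(data, p, n_nodes):
    """Single-process simulation of the two-step (hierarchical) AlltoAll.

    data[src][dst] is the payload GPU `src` wants to deliver to GPU `dst`,
    where global rank = node * p + in-node rank.  Step 1 moves every chunk
    to the local GPU whose in-node rank matches the destination; step 2
    only exchanges data between GPUs that share an in-node rank, so no
    cross-rail switch hop is required.  Illustration only.
    """
    world = p * n_nodes
    # Step 1: intra-node exchange over NVSwitch.
    staged = {}
    for src in range(world):
        for dst in range(world):
            holder = (src // p) * p + dst % p   # same node as src, rank of dst
            staged.setdefault(holder, []).append((src, dst, data[src][dst]))
    # Step 2: inter-node exchange among same-rank GPUs only.
    result = [[None] * world for _ in range(world)]
    for holder, items in staged.items():
        for src, dst, payload in items:
            assert holder % p == dst % p        # only same-rank traffic crosses nodes
            result[dst][src] = payload
    return result

# 2 nodes with 4 GPUs each: the outcome equals that of a direct AlltoAll.
p, n_nodes = 4, 2
data = [[(s, d) for d in range(p * n_nodes)] for s in range(p * n_nodes)]
out = simulate_hierarchical_alltoall(data, p, n_nodes)
assert all(out[d][s] == (s, d) for s in range(8) for d in range(8))
\end{verbatim}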
As shown in Figure~\ref{fig:hierarchical_alltoall}, to avoid cross-node communication between GPUs with different ranks, we first implement intra-node AlltoAll through the NVSwitch connection to gather the data. Then we categorize GPUs with the same rank into a group for inter-node AlltoAll and communicate across machines without the unnecessary cost caused by crossing rails. Besides, in this way, each peer-to-peer communication across nodes is increased by a factor of $p$, where $p$ is the number of GPUs in one node, which makes it possible to fully utilize the inter-node bandwidth. \subsection{Embedding Partition in Data Parallelism} In the implementation of ultra-large-scale model training, the embedding table is often the largest parameter tensor in the whole model, so the storage of the embedding table restricts the achievable model scale. Many works have studied embedding partitioning. Megatron~\cite{shoeybi2019megatron} has long applied the row-wise partitioned embedding table in tensor-slicing parallelism to reduce training memory, and EmbedRace~\cite{li2021embrace} has proposed a column-wise partitioning method for the embedding table to achieve more balanced communication. However, there is no efficient way to handle embedding partitioning when the input data of each process is inconsistent. \begin{figure*}[htb] \centering \subfloat[The forward stage]{\includegraphics[scale=0.65]{figures/dp_embedding_fp} \label{fig:fp_dp_embedding}} \hfil \centering \subfloat[The backward and optimization stage]{\includegraphics[scale=0.7]{figures/dp_embedding_bp_opt} \label{fig:bp_dp_embedding}} \caption{Example data flow of Embedding Partition in data parallelism. The embedding table is row-wise partitioned among processes. In the forward stage, AlltoAll communication is called twice: once for exchanging input data and once for exchanging embedding lookup results. In the backward stage, AlltoAll is called once to exchange the gradients of the embedding table, and the gradients are then used to update the embedding table.} \label{fig:dp_embedding} \end{figure*} As shown in Figure~\ref{fig:dp_embedding}, in this work, we focus on embedding partition in data parallelism. Suppose we partition an embedding table with dimension $[V, H]$ among $N$ training processes; the row-wise method distributes a $[\frac{V}{N}, H]$ shard to each worker, which means that every process only holds the embedding representations of a part of the vocabulary. Therefore, before querying the embedding table, the input data of the processes have to be exchanged by AlltoAll communication so that each process can look up the embeddings of its local partial vocabulary. Afterward, to obtain the correct results for each worker's own input data, the embedding results are exchanged again by AlltoAll communication, which can be regarded as the inverse of the previous exchange. In the backward stage, the gradients are exchanged once more to recover the embedding table gradient. This is an effective approach to reducing embedding table storage under data parallelism: it introduces only three AlltoAll communications and removes the AllReduce synchronization for embedding table gradients. \section{Experiment} \label{sec:exp} In this section, we give a comprehensive evaluation of the SE-MoE system using experiments on MoE models from the perspective of training and inference. First, the training efficiency is tested on MoE-based GPT models with different configurations.
Next, the inference performance is measured and the offloading strategy based on ring memory is evaluated on the different model sizes. Lastly, taking the UFO model as an example, various efficient methods of SE-MoE are tested. \subsection{Large-Scale MoE Training} We train GPT models~\cite{brown2020language, narayanan2021efficient} based on the MoE architecture on A100 GPUs(80 GB) by combining data parallelism and expert parallelism. Besides, we adopt Gshard~\cite{gross2017hard} and top1-gating for evaluation. Simultaneously, we choose pure fp16 precision and the AdamW~\cite{loshchilov2018decoupled} optimizer for training. The results of throughput with different configurations are demonstrated in Table~\ref{tab:exp_training}. From the table, compared with the state-of-the-art MoE system, DeepSpeed~\footnote{\url{https://github.com/microsoft/Megatron-DeepSpeed}}, SE-MoE obtains almost 28\% speedup in single-node training and at least 33\% speedup in multiple-node training for the MoE models with over-100-billions parameters. Meanwhile, SE-MoE decreases the employed GPU memory of each rank by nearly 12 GB. \begin{table*}[htb] \centering \caption{Results for MoE models on A100 GPUs in the different configurations} \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|cc|cc|} \hline \multirow{2}{*}{\textbf{Parameters(B)}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Attention\\ heads\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Hidden\\ size\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Vocab\\ size\end{tabular}}} & \multirow{2}{*}{\textbf{Layers}} & \multirow{2}{*}{\textbf{Experts}} & \multirow{2}{*}{\textbf{GPUs}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Batch\\ size\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Speed(tokens/s)}} & \multicolumn{2}{c|}{\textbf{Memory(GB)}} \\ \cline{9-12} & & & & & & & & \multicolumn{1}{l|}{DeepSpeed} & \multicolumn{1}{l|}{SE-MoE} & \multicolumn{1}{l|}{DeepSpeed} & \multicolumn{1}{l|}{SE-MoE} \\ \hline 13.9 & \multirow{5}{*}{64} & \multirow{5}{*}{4096} & \multirow{5}{*}{50304} & \multirow{5}{*}{12} & 8 & 8 & 8 & \multicolumn{1}{c|}{24165} & \textbf{31085} & \multicolumn{1}{c|}{68.9} & \textbf{56.8} \\ \cline{1-1} \cline{6-12} 26.8 & & & & & 16 & 16 & 16 & \multicolumn{1}{c|}{43691} & \textbf{59136} & \multicolumn{1}{c|}{66.2} & \textbf{53.9} \\ \cline{1-1} \cline{6-12} 52.6 & & & & & 32 & 32 & 32 & \multicolumn{1}{c|}{82957} & \textbf{113456} & \multicolumn{1}{c|}{66.8} & \textbf{54.5} \\ \cline{1-1} \cline{6-12} 104.1 & & & & & 64 & 64 & 64 & \multicolumn{1}{c|}{157728} & \textbf{209970} & \multicolumn{1}{c|}{66.3} & \textbf{54.4} \\ \cline{1-1} \cline{6-12} 207.2 & & & & & 128 & 128 & 128 & \multicolumn{1}{c|}{283706} & \textbf{376968} & \multicolumn{1}{c|}{66.4} & \textbf{54.3} \\ \hline \end{tabular} } \label{tab:exp_training}% \end{table*}% \subsection{MoE Inference} \label{sec:infer_exp} The experiments about inference include two parts: one shows the performance of the MoE inference system on the different models with billions of parameters, and the other one shows the effectiveness of the offloading strategy we propose in Section~\ref{sec:ring_memory}. \paragraph*{Effective Inference on MoE} Substantially, inference requires less memory than training. So it's easy to process downstream tasks with a 10-billion-parameter MoE model on a single GPU. We measure the inference performance of large-scale MoE models on the text generation task. 
As shown in Table~\ref{tab:exp_inference}, compared with DeepSpeed, SE-MoE obtains an almost 13\% speedup on MoE models with over 200 billion parameters. \begin{table*}[htb] \centering \caption{Results for performance of MoE inference on A100 GPUs} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{|c|c|c|cc|} \hline \multirow{2}{*}{\textbf{Parameters(B)}} & \multirow{2}{*}{\textbf{GPUs}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Batch\\ size\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Speed(tokens/s)}} \\ \cline{4-5} & & & \multicolumn{1}{l|}{DeepSpeed} & \multicolumn{1}{l|}{SE-MoE} \\ \hline 10.0 & 1 & 1 & \multicolumn{1}{c|}{4303} & \textbf{4551} \\ \hline 106.5 & 8 & 8 & \multicolumn{1}{c|}{27215} & \textbf{29681} \\ \hline 209.6 & 16 & 16 & \multicolumn{1}{c|}{35310} & \textbf{40059} \\ \hline \end{tabular} } \label{tab:exp_inference}% \end{table*}% \paragraph*{Ring Memory Offloading} Using 16 A100 (40G) GPUs, we measure the inference performance of the expert offloading strategy based on ring memory for an MoE model with 32 experts and 58.2B parameters. We also report the time spent on computation in GPU memory and on expert movement from CPU memory. As shown in Figure~\ref{fig:exp_ring_memory}, the performance of the overlapped MoE inference system is almost unaffected by CPU offloading. According to the results, we see that this strategy keeps a relatively good balance between computation and data movement, and reduces the GPU memory consumption of the MoE inference system by at least 30\% compared with inference without ring memory offloading. \begin{figure*}[htb] \centering \includegraphics[scale=0.6]{figures/exp_ring_memory_offload} \caption{Performance of MoE inference w/ and w/o overlapped offloading} \label{fig:exp_ring_memory} \end{figure*} \subsection{Multi-Task Training with MoE} \label{exp:ela_train} The experiments below show the advantages of the efficient methods illustrated in Section~\ref{sec:eff_methods} on an actual large model. \paragraph*{Elastic MoE Training} We train the UFO model on A100 (80G) GPUs based on the MoE architecture to evaluate the efficiency of elastic MoE training. There are four tasks in total, and the batch sizes of the tasks are 512, 256, 128, and 128, respectively, which is imbalanced for training. Following the elastic sparse training described in Section~\ref{sec:elastic_training}, we adjust the entire training load by adding extra computing nodes, so that we choose 4 GPUs for Task-1 and 2 GPUs for Task-2. For the sake of fairness, we calculate the average speed of each GPU card to eliminate the impact caused by the increased number of nodes. As shown in Table~\ref{tab:exp_load_balance}, compared with load imbalance, the throughput of each card obtains an 18.2\% speedup.
\begin{table*}[ht] \centering \caption{Results for elastic MoE training on A100 GPUs(80G)} \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \textbf{Task number} & \textbf{Parameters(M)} & \textbf{\begin{tabular}[c]{@{}c@{}}Total\\ batch size\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Batch size \\ per task\end{tabular}} & \textbf{GPUs} & \textbf{\begin{tabular}[c]{@{}c@{}}GPUs\\ per task\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Total Speed\\ (samples/s)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Speed per card\\ (samples/s)\end{tabular}} \\ \hline Load imbalance & \multirow{2}{*}{4} & \multirow{2}{*}{83} & \multirow{2}{*}{1024} & \multirow{2}{*}{512/256/128/128} & 4 & 1/1/1/1 & 250.4 & 62.6 \\ \cline{1-1} \cline{6-9} Load balance & & & & & 8 & 4/2/1/1 & 591.9 & 74.0 \\ \hline \end{tabular}% } \label{tab:exp_load_balance}% \end{table*} \paragraph*{Resource-Aware Communication} In this experiment, we train the MoE models on the different number of nodes and model sizes. From Figure~\ref{fig:exp_hierarchical_alltoall}, we see that after the Hierarchical AlltoAll training is adopted, the computation time does not increase significantly, but the communication time decreases dramatically. On the MoE model with 80.7B parameters in four nodes with 32 GPUs, the overall end-to-end training performance is improved by 10.3\%, while the communication obtains speedup by 15.5\% using Hierarchical AlltoAll. \begin{figure*}[!htb] \centering \subfloat[2 nodes with 16 GPUs]{\includegraphics[width=0.48\columnwidth]{figures/hiera_alltoall_16cards} \label{fig:adaptive_original}} \hfil \subfloat[4 nodes with 32 GPUs]{\includegraphics[width=0.48\columnwidth]{figures/hiera_alltoall_32cards} \label{fig:adaptive_fault}} \caption{MoE Training Time Breakdown} \label{fig:exp_hierarchical_alltoall} \end{figure*} \paragraph*{Embedding Partition in Data Parallelism} For the scene with a large vocabulary size, we train the MoE model with embedding partition in data parallelism. It can be seen from the experimental results that adopting embedding partition strategy in a single machine can effectively reduce the GPU memory consumption of large vocabulary size. Since each rank updates the partial vocabulary in parallel, the training performance will also be improved. As shown in Table~\ref{tab:embedding_partition}, We take the non-segmented embedding under data parallelism as the baseline. With the increase of the hidden size, our method reduces GPU memory by 22.4\%, 24.2\% and 26.3\%, while increasing the throughput by 4.2\%, 11.2\%, and 15.6\% respectively. 
\begin{table}[htb] \centering \caption{Embedding Partition in Data Parallelism on V100 GPUs} \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|cc|cc|} \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Batch\\ size\end{tabular}}} & \multirow{2}{*}{\textbf{GPUs}} & \multirow{2}{*}{\textbf{Experts}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Vocab\\ size\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Hidden\\ size\end{tabular}}} & \multirow{2}{*}{\textbf{Parameter(M)}} & \multicolumn{2}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Memory\\ (GB)\end{tabular}}} & \multicolumn{2}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Speed\\ (tokens/s)\end{tabular}}} \\ \cline{7-10} & & & & & & \multicolumn{1}{l|}{\textbf{Baseline}} & \multicolumn{1}{l|}{\textbf{Embedding Partition}} & \multicolumn{1}{l|}{\textbf{Baseline}} & \multicolumn{1}{l|}{\textbf{Embedding Partition}} \\ \hline \multirow{3}{*}{8} & \multirow{3}{*}{8} & \multirow{3}{*}{8} & \multirow{3}{*}{50304} & 2048 & 100 & \multicolumn{1}{c|}{7.46} & \textbf{5.78} & \multicolumn{1}{c|}{144159} & \textbf{150161} \\ \cline{5-10} & & & & 4096 & 300 & \multicolumn{1}{c|}{12.80} & \textbf{9.70} & \multicolumn{1}{c|}{86237} & \textbf{95890} \\ \cline{5-10} & & & & 8192 & 700 & \multicolumn{1}{c|}{27.80} & \textbf{20.49} & \multicolumn{1}{c|}{40605} & \textbf{46938} \\ \hline \end{tabular}% } \label{tab:embedding_partition} \end{table} \section{Introduction} \label{sec:intro} In recent years, large-scale neural networks have excelled in many machine learning tasks, such as natural language processing(NLP)~\cite{kaplan2020scaling, brown2020language, devlin2018bert} and computer vision(CV)~\cite{dosovitskiy2020image}. At the same time, the parameter scale of the model has expanded from tens of billions of parameters, such as the GPT-3 model with 175B parameters~\cite{brown2020language, narayanan2021efficient}, Ernie3.0 Titan with 260B parameters~\cite{wang2021ernie} and Megatron-Turing NLG with 530B parameters~\cite{smith2022using}. However, these densely activated models require abundant computing resources and massive training time. Simultaneously, the inference performance of the super large-scale models is difficult to satisfy the actual demands. For example, by December 2021, the largest single densely activated model, Megatron-Turing NLG with 530B, had taken 3 months to train on over 2000 NVIDIA A100 GPUs~\cite{smith2022using}, making it more costly and further prevent from developing into a model with a larger parameter scale. To optimize training for large-scale models, the Click-Through Rate(CTR) prediction~\cite{zhao2019aibox} model contains numerous sparse feature embeddings to exploit the huge parameters~\cite{zhao2020distributed}. However, it only utilizes one special layer to process high-dimensional input data to scale up the model. Beyond that, differently from the densely activated models, multi-task learning~\cite{caruana1997multitask} is proposed in pre-trained language models~\cite{xue-etal-2021-mt5, aharoni-etal-2019-massively, wang-etal-2020-multi} about multilingual neural machine translation. Nevertheless, these methods require numerous computing resources to obtain state-of-the-art models. To solve the above issues~\cite{shazeer2017outrageously, lepikhin2020gshard, fedus2021switch}, the sparsely activated neural networks based on Mixture-of-Experts(MoE) are proposed to train larger models with limited or no additional computing resources and achieve better training effects. 
In contrast to densely activated models, the MoE architecture selectively activates a subset of parameters for training according to the input data. Given the sparsity, the computing cost increases sub-linearly with respect to the size of the model. For example, the largest version of GLaM~\cite{du2021glam} has 1.2T parameters and 64 experts per MoE layer in total, but each token from the input batch activates only a subnetwork of 95B (8\% of 1.2T) parameters. Compared with training GPT-3 (175B), two-thirds of the electricity cost is saved, and only half of the computing resources are required for inference. Despite all these benefits, MoE models present their own challenges and limitations in computation, communication, and storage. \subsection{Computation Challenges} Although the computing cost remains constant when the parameter scale is expanded by increasing the number of experts, computation in training and inference is confronted with the following limitations. On the one hand, MoE tends to degrade training quality because of the imbalance in expert selection~\cite{fedus2021switch}. Therefore, many solutions have been proposed, such as adding an auxiliary loss~\cite{lepikhin2020gshard}, using stochastic experts~\cite{zuo2021taming} and adopting a noisy routing strategy~\cite{fedus2021switch}. Besides, limiting the capacity of experts can also avoid inefficient training and the waste of computing resources. On the other hand, the calculation of routing selection and auxiliary loss at each layer involves more scheduling than computing, which places a heavier burden on CPU devices while underutilizing high-speed computing devices such as GPUs. At the same time, many redundant operations are introduced into the computation, such as Host2Device (H2D) and Device2Host (D2H) copies~\cite{he2021fastmoe}. \subsection{Communication Challenges} Most of the ongoing research has paid more attention to the imbalance of routing strategies caused by the gating network learning~\cite{shazeer2017outrageously, lepikhin2020gshard,fedus2021switch, yang2021m6}. However, because the activation of parameters in MoE is closely tied to the input data, the notorious load imbalance still often occurs when the data is unbalanced, despite efficient routing methods. For cross-device communication, the load imbalance results in inconsistent paces across devices, causing mutual waiting in synchronous communication, especially in multi-task training. At the same time, taking Switch Transformer~\cite{fedus2021switch} as an example, each MoE layer requires AlltoAll communication four times across the forward and backward stages, which frequently crosses nodes or clusters. When the underlying network topology cannot be perceived, inter-machine communication is more prone to routing conflicts and blocking, causing rapid performance degradation. \subsection{Storage Limitations} The MoE architecture is significantly limited by the memory capacity of computing devices. For densely activated models, the model scale is often restricted by the training time rather than memory. For example, a dense model with 1 trillion parameters takes around 3 months to train on 450 billion tokens with 3072 A100 GPUs~\cite{narayanan2021efficient}. In contrast, an MoE model with trillions of parameters only requires a few weeks to train on the same amount of tokens because its computing cost increases sub-linearly. However, the achievable model scale depends on whether the device memory can accommodate the model states.
Although all available storage of the devices contains High-Bandwidth Memory(HBM)~\cite{keckler2011gpus} in GPUs, CPU Memory, SSDs(solid-state-drives) and so on, the I/O latency among them is different from each other, making computation wait for parameter and others. It is challenging to construct a unified and efficient storage management to support sparsely activated training to break the memory wall. \subsection{Proposed Solution} This paper introduces a novel unified framework based on a open-source platform for MoE training and inference, which outperforms the state-of-the-art dense model on NLP and CV tasks. To overcome the challenges and limitations of MoE, some related research papers (Section~\ref{sec:related}) are presented, and their insights are optimized: \begin{itemize} \item A novel distributed system named SE-MoE, capable of scaling MoE models to trillions of parameters, fully utilizes the clusters including HBM, CPU memory and even SSDs to go beyond the memory wall and achieves efficient training scheduling. Moreover, using 2D prefetch scheduling and fusion communication are to improve heterogeneous storage efficiency (Section~\ref{sec:Sparse training}). \item A new inference method based on the ring memory is employed by dynamic graph scheduling, which can overlap the computation and communication as much as possible and further obtain more efficient inference performance without using additional machines for larger-scale MoE models (Section~\ref{sec:inference}). \item Some effective training methods to scale up multi-task learning without extra memory and improve performance are employed by SE-MoE in NLP and CV tasks. These methods include load balancing, embedding partition, and resource-aware communication (Section~\ref{sec:eff_methods}). \end{itemize} The training and inference performance of models with different scales is our concern and the effective training recipes are adopted to train the CV task called UFO. Details about the experiments are presented in Section~\ref{sec:exp}. \section{MoE Inference Design} \label{sec:inference} There has been lots of research~\cite{du2021glam, artetxe2021efficient} showing that MoE models are significantly more efficient to train than dense models. However, for inference, numerous parameters (mostly ineffective parameters) introduce a larger storage burden compared to the dense model. For one thing, knowledge distillation~\cite{fedus2021switch, hinton2015distilling, shleifer2020pre, sanh2019distilbert} is popular in the reduction of model size and accuracy preservation. DeepSpeed~\cite{rajbhandari2022deepspeed} has proposed Mixture-of-Students(MoS) architecture to enhance student model accuracy. For another thing, to achieve low latency and high throughput at a large scale for MoE, diverse parallelism techniques are designed~\cite{rajbhandari2022deepspeed}, including expert-slicing, expert parallelism, tensor-slicing, and so on. However, multiple storage devices are not considered in the inference of MoE at an unprecedented scale when the number of machines is delimited. Below, for SE-MoE, we introduce an assembled process from train to inference deployment to achieve the purpose of high efficiency and low carbon. And we show innovations in the MoE inference architecture based on ring memory that supports inference to go beyond the memory wall and maintain efficient performance as much as possible. 
\subsection{Efficient Inference on MoE} The training part of SE-MoE adopts dynamic graph training, which offers better debuggability and flexibility. In contrast, to achieve stability and efficiency, the static graph is used in the inference and deployment stage. As shown in Figure~\ref{fig:inference_overall}, the whole inference process is divided into six steps: (1) Graph Fusion. For the model from ultra-large-scale distributed training, the original graph is merged with the corresponding distributed strategy, which is used to eliminate parameter redundancy. (2) Distillation and Compression. Compress the numerous experts of the teacher network through distillation and compression to get the student network with fewer experts. (3) Graph Conversion. Convert the dynamic graph into a static graph for subsequent optimization and deployment. (4) Graph Segmentation. According to the inference resources and the actual demand, a rational distributed strategy is chosen manually or automatically to split the static graph into multiple distributed sub-graphs and add the extra communication. (5) Optimization. For the distributed sub-graphs, pertinent IR pass optimizations such as kernel fusion are used to further improve inference performance. (6) Deployment. Deploy the optimized sub-graphs to the server to provide service. \begin{figure*}[htb] \centering \includegraphics[width=0.85\textwidth]{figures/moe_inference} \caption{MoE Inference} \label{fig:inference_overall} \end{figure*} It is worth mentioning that SE-MoE combines highly optimized transformer kernels as well as MoE-related kernels. We use the optimized methods that have been used in NVIDIA's BERT implementation in MLPerf 1.1~\cite{mattson2020mlperf}, such as Fused Multi-head Attention, which effectively reduces kernel launch time. For the MoE model, dedicated kernels are implemented to improve H2D/D2H time by using CUDA pinned memory, and AlltoAll communication is customized to minimize the number of layer transitions as much as possible. Specific inference performance experiments are shown in Section~\ref{sec:infer_exp}. \subsection{Ring Memory Offloading} \label{sec:ring_memory} To enable inference of a large-scale MoE model with limited resources, it is necessary to adopt an offloading strategy to solve the storage problem. Nevertheless, the speed of data movement inevitably becomes the bottleneck of inference performance. Therefore, many methods try to hide the data movement behind the inference computation as much as possible, so that the waiting time of the computation can be reduced. We design a dynamic scheduling strategy for offloading sparse parameters, such as the expert parameters in the MoE model, trying to preserve efficient performance by overlapping the parameter movement from CPU memory with the inference computation in GPU memory. \begin{figure*}[htb] \centering \includegraphics[width=0.78\textwidth]{figures/infer_model} \caption{MoE Inference Model} \label{fig:infer_model} \end{figure*} As shown in Figure~\ref{fig:infer_model}, each layer is independent of the others from the perspective of parameters in MoE models such as the Switch Transformer architecture~\cite{fedus2021switch}, which can be exploited to stagger computation and offloading to achieve overlap. Assuming there are $N$ decoder layers in the MoE inference model, we store the expert parameters of all $N$ layers on the CPU device and put other parameters such as embeddings in the dense buffer of the GPU device.
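The scheduling that maintains this arrangement is described in the next paragraph; as a rough sketch, with hypothetical \texttt{load\_experts\_async} and \texttt{compute\_layer} helpers that stand in for the real copy and compute routines (this is not SE-MoE's actual implementation), the per-layer loop looks roughly as follows:
\begin{verbatim}
def moe_inference_with_ring_offload(inputs, n_layers, k_slots,
                                    load_experts_async, compute_layer):
    """Sketch of ring-memory expert offloading for MoE inference.

    Expert weights of all n_layers stay in CPU memory; only k_slots layers'
    experts are resident in a ring of GPU buffers at a time.  The assumed
    helper load_experts_async(layer) returns a handle whose .wait() yields
    the GPU-resident experts (e.g. a copy issued on a side CUDA stream),
    and compute_layer runs one decoder layer.
    """
    # Warm-up: fill the K ring slots with the first K layers' experts.
    ring = [load_experts_async(layer) for layer in range(min(k_slots, n_layers))]
    hidden = inputs
    for layer in range(n_layers):
        experts = ring[layer % k_slots].wait()   # blocks only if the copy lags
        hidden = compute_layer(layer, hidden, experts)
        # Release this slot and immediately prefetch layer + K, overlapping
        # the host-to-device copy with the compute of later layers.
        nxt = layer + k_slots
        if nxt < n_layers:
            ring[layer % k_slots] = load_experts_async(nxt)
    return hidden

# Tiny CPU-only demo with dummy helpers (no real GPU copies involved).
class FakeHandle:
    def __init__(self, layer): self.layer = layer
    def wait(self): return "experts[%d]" % self.layer

out = moe_inference_with_ring_offload(
    "tokens", n_layers=6, k_slots=2,
    load_experts_async=FakeHandle,
    compute_layer=lambda l, h, e: "%s|L%d(%s)" % (h, l, e))
\end{verbatim}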
At the same time, the GPU device also caches $K$ copies of the expert parameters. As shown in Figure~\ref{fig:infer_scheduling}, once all the computation related to the $i$-th layer is finished, the corresponding $P_i$ parameter in GPU memory can be released and start loading the $S_{K+i}$ expert parameter of the $(K+i)$-th layer from CPU memory asynchronously to occupy the space of $P_i$. In this way, the fixed $K$ copies of expert parameters on the GPU device are maintained by calculation-released-load, and they are stored in the ring memory to alleviate memory fragmentation. By using different CUDA streams, expert loading from CPU and computation can be partially overlapped, as illustrated in Figure~\ref{fig:infer_stream}. And when the MoE inference model has more decoder layers and the ring memory size is sufficient, overlapping can be greatly maximized. Specific inference performance experiments using the ring memory are shown in Section~\ref{sec:infer_exp}. \begin{figure}[htb] \centering \subfloat[Scheduling based on the ring memory]{\includegraphics[scale=0.6]{figures/infer_ring_buffer} \label{fig:infer_scheduling}} \hfil \subfloat[Timelines of different scheduling]{\includegraphics[scale=0.6]{figures/infer_stream} \label{fig:infer_stream}} \caption{The scheduling and timeline of the ring memory offloading. The essential steps of scheduling: \Circled{1} load $N$ copies of parameters from files in SSD memory, \Circled{2} load $K$ copies of parameters from CPU memory, \Circled{3} execute the $i$-th layer computation, \Circled{4} release the $i$-th parameter and trigger asynchronous copy to replace $P_i$ with $S_{K+i}$.} \label{fig:infer} \end{figure} \section{MoE Training Design} \label{sec:Sparse training} In the field of deep learning, there are two key factors affecting the performance and effect of model training: the model scale and the data size. It's quite a challenge for all scientific institutions and enterprises to explore further due to the requirements of massive resources for computation and storage. To solve this problem, a new training method has been proposed and introduced to the industry in recent years. Different from the densely activated model putting all the parameters into computation, the sparsely activated model adaptively selects a subset of its parameters for training according to the input data, and the parameters can be increased linearly without increasing the amount of calculation, which make larger models based on MoE architecture more feasible and efficient. To train models with considerable parameters with as few hardware resources as possible, it's an appropriate solution to adopt the offloading strategy and train larger-scale models with the extreme utilization of storage resources in devices. Recently, DeepSpeed has presented an efficient method named Zero-infinity~\cite{rajbhandari2021zero} and trained over 30 trillion parameters using 512 V100 GPUs in NVIDIA DGX-2 nodes. This method breaks through the limitation of memory and enables training of a super large-scale model on one single device by making full use of the storage space including High Bandwidth Memory(HBM) in GPU, CPU memory, and SSDs. Meanwhile, both the Zero strategy~\cite{rajbhandari2020zero, rajbhandari2021zero} and the parameter prefetching strategy are applied to the reduction of storage occupation and the improvement of the training performance respectively. 
However, SSDs have a limited lifetime number of writes and also slow down as they approach full storage capacity~\cite{ssd}. We therefore propose an optimized MoE training design that addresses both the limitations of SSDs and the scheduling challenges of large-model training. Firstly, the parameters of the MoE model are classified into two categories according to how they are activated: sparse parameters, which are selectively activated during training, such as the expert parameters in a switching FFN layer; and dense parameters, which are always activated during training, such as the parameters in a multi-head attention layer. Generally, the sparse parameters account for a large proportion of an MoE model and easily exceed the limits of GPU storage. Then, as shown in Figure~\ref{fig:training_overall}, we specifically redesign the architecture of the MoE training system by combining various storage devices to provide abundant memory for sparse and dense parameters. At the same time, to alleviate the degradation of training performance caused by data movement between different devices, we propose a novel strategy named 2D prefetch scheduling. In the following, we introduce our training design in detail from three aspects: hierarchical storage, 2D prefetch scheduling, and fusion communication. \begin{figure*}[!htb] \centering \includegraphics[width=1\columnwidth]{figures/muti_task-moe_overall.pdf} \caption{Overall MoE training: an example of MoE training with four devices. According to the properties of the parameter states in the MoE model, the parameter states are stored on GPU and SSD respectively. With this heterogeneous storage, NVLink and PCIe bandwidth can be used simultaneously in two dimensions.} \label{fig:training_overall} \end{figure*} \subsection{Hierarchical Storage} In large-scale MoE models, storage becomes a major bottleneck of model training as the parameter scale increases. Normally, the stored parameter states consist of three parts: the trainable parameters, the parameter gradients, and the corresponding optimizer states. According to the storage medium, the storage devices are classified into three tiers that hold the parameter states: GPU-Node, CPU-Node, and SSD-Node. Because dense parameters are used intensively for computation and do not account for the majority of storage space, their parameter states are all stored on the GPU-Node to avoid frequent data movement. In contrast, since sparse parameters are selectively activated in training and occupy far more storage space than dense parameters, their parameter states are placed on the SSD-Node and transported to the GPU-Node for calculation at the appropriate time. By placing the corresponding parameter states on hierarchical storage according to the computation and storage characteristics of the parameters, the storage of all devices can be utilized as fully as possible. In addition, considering the capacity limits of the storage nodes, we give several theoretical storage formulas that describe the relationship between each storage tier and the parameter states it holds when training with ADAM~\cite{kingma2014adam}. By default, there are eight GPUs per device. Suppose $D$ and $S$ represent the total number of dense and sparse parameters, respectively, and $L$ represents the number of MoE layers. Then, the total size of SSD memory, CPU memory, and GPU memory on one device is $M_{SSD}$, $M_{CPU}$, and $M_{GPU}$, respectively.
Next, $N$ represents the number of devices, and we use $\alpha$ to denote the probability that a sparse parameter is activated over the whole training process, where $0 \leq \alpha \leq 1$. The GPU-Node stores the dense parameter states used in forward propagation (FWD), backward propagation (BWD), and parameter updating (param fp16, grad fp16, master param fp32, momentum fp32, variance fp32: $2D$+$2D$+$4D$+$4D$+$4D$=$16D$ bytes), as well as the sparse parameters and sparse parameter gradients ($2\alpha S/L$+$2\alpha S/L$=$4\alpha S/L$ bytes). The CPU-Node is used as a cache that holds the frequently activated sparse parameter states ($16 \alpha S$ bytes), and the SSD-Node stores all the sparse parameter states on the device (master param fp32, momentum fp32, variance fp32: $12S$ bytes). \begin{equation} \begin{aligned} &\textbf{GPU-Node}:& 16D + 4\alpha S/L &\leq M_{GPU} \cdot N\\ &\textbf{CPU-Node}:& 16\alpha S &\leq M_{CPU} \cdot N \\ &\textbf{SSD-Node}:& 12S &\leq M_{SSD} \cdot N \end{aligned} \label{equ:Node storage} \end{equation} The scale of the entire MoE model is: \begin{equation} \begin{aligned} P = S + D \end{aligned} \label{equ:parameters} \end{equation} As described above, the sparse parameters are stored on SSDs as files. Owing to the limitations of the flash medium, PCIe bandwidth, and the NVMe protocol, SSDs have high latency and a finite number of erase cycles. It is therefore challenging to use SSDs in MoE training, which involves frequent write operations. To avoid this problem, we further turn our attention to Intel Optane Persistent Memory (Optane PMem)~\cite{Optane}, a new storage medium that provides byte addressing like DRAM and persistent storage like SSDs. Optane PMem connects to the integrated memory controller (IMC) of the CPU through the DIMM (Dual Inline Memory Module) interface and uses DDR-T (a protocol on top of the DDR4 electrical/mechanical interface) for communication. It supports byte-wise addressing with CPU instructions to achieve higher bandwidth and lower latency. Optane PMem provides two modes: Memory mode and App Direct mode. Because we only need to store the parameter files on the Optane PMem, we choose App Direct mode and configure the namespace type as FSDAX. With the help of Ext4, the data can be accessed directly with load/store instructions and moved to the GPU while bypassing the page cache and the kernel, without interrupts or context switches. \subsection{2D Prefetch Scheduling} Since we adopt hierarchical storage for the sparse and dense parameter states, transferring parameter states between different devices takes an increasingly significant amount of time during MoE training. Therefore, we propose a 2D prefetch scheduling strategy and apply it to MoE training so that the computation on parameters can be overlapped with the scheduling, as shown in Algorithm~\ref{alg:2D Prefetch Scheduling}. For the dense parameter slices produced by the ZeRO-3 strategy, as shown in Figure~\ref{fig:training_overall}, the complete dense parameters can be prefetched by communication among the ranks in the horizontal dimension over the high-speed NVLink bandwidth, achieving the desired effect of data parallelism. Similarly, sparse parameters can be prefetched over PCIe bandwidth in the vertical dimension of the device. Considering that sparse parameters are stored on SSDs, we reduce accesses of sparse parameter states to the SSDs and establish a corresponding cache in CPU memory, similar to an LFU mechanism~\cite{sokolinsky2004lfu}.
The CPU caches are responsible for storing the selectively activated sparse parameter states used for FWD/BWD computation and parameter updating. When a prefetch request is received, the requested sparse parameters are preferentially retrieved from the CPU caches and are fetched from the SSDs only if they are not cached. When the CPU caches are full or the sparse parameter update cycle is reached, we use the sparse parameter states in the CPU caches to update the corresponding parameter states on the SSDs. Because the CPU memory of each machine only caches some frequently activated sparse parameters, we only need to prefetch the parameters of the one or more expert layers that are cached in CPU memory to the corresponding GPU memory in advance. By prefetching parameters ahead of time, the waiting time of the computation can be greatly reduced. From a global perspective, by exploiting NVLink and PCIe bandwidth in two dimensions, we can prefetch dense and sparse parameters at the same time, reducing the scheduling gap caused by heterogeneous storage and greatly increasing training efficiency. \IncMargin{1.0em} \begin{algorithm}[t] \caption{2D Prefetch Scheduling Algorithm} \label{alg:2D Prefetch Scheduling} \SetAlgoLined \SetKwProg{Fn}{Function}{}{end} \KwData{$p_{s}$: Sparse parameter states \\ \quad \quad $d_{slice}$: Dense parameter state slices \\ \quad \quad $caches_{cpu}$: CPU caches \\ \quad \quad $CPU_{size}$: The size of sparse parameter states that the CPU can cache\\ \quad \quad $hits$: Hash table recording hit counts of sparse parameters \\ \quad \quad $threshold$: Hit threshold \\ \quad \quad $\beta $: Attenuation coefficient\\ \quad \quad $K$: Moving average steps\\ \quad \quad $steps=0$: Cycle steps\\ \quad \quad $acc_{caches}=0$: Cumulative caches \algorithmiccomment{Record the number of sparse parameters on CPU} \\} \SetInd{0.61em}{0.61em} \BlankLine \SetKwFunction{FDense}{DenseSchedule} \SetKwProg{Fn}{Function}{:}{} \Fn{\FDense{}}{ Do $AllGather$($d_{slice}$) \algorithmiccomment{Gather the complete parameters of the next layers} } \textbf{End Function} \BlankLine \SetKwFunction{FSparse}{SparseSchedule} \SetKwProg{Fn}{Function}{:}{} \Fn{\FSparse{}}{ \uIf{$p_{s}$ in $caches_{cpu}$}{Get $p_{s}$ from $caches_{cpu}$ \\ $hits[p_{s}]$ += 1} \ElseIf{$acc_{caches}$ + 1 < $CPU_{size}$}{$hits[p_{s}]$ = 1 \\ $acc_{caches}$ += 1 \\ Fetch $p_{s}$ from SSDs to $caches_{cpu}$} \Else{ \ForEach{$p_{a}$ in $hits$}{ $hit_{a} = hits[p_{a}]$ \\ \uIf{$hit_{a} \geq threshold$ and $\min(hits.values()) == hit_{a}$}{ Update the states of $p_{a}$ on SSDs \\ Delete the states of $p_{a}$ in $caches_{cpu}$ \\ Delete $hits[p_{a}]$ \\ Fetch $p_{s}$ from SSDs to $caches_{cpu}$}} } $steps$ += 1 \\ \uIf{$steps$ == $K$}{ $hits \leftarrow hits \cdot \beta$ \algorithmiccomment{Moving average}\\ $steps = 0$ } $p_{s} \longrightarrow GPU$ \algorithmiccomment{Transfer $p_{s}$ to the corresponding GPU} \\ } \textbf{End Function} \BlankLine \SetKwBlock{DoParallel}{Do in parallel}{end} \DoParallel{ DenseSchedule() \\ SparseSchedule() \\ Do FWD/BWD calculation \algorithmiccomment{Use the current parameter states to do FWD/BWD calculation} \\ } \textbf{End Parallel} \end{algorithm} \DecMargin{1.0em} Next, as shown in Algorithm~\ref{alg:2D Prefetch Scheduling}, we introduce the CPU cache mechanism in detail. We additionally maintain the historical hit information of each sparse parameter, recorded in a hash table called $hits$ (a compact code rendering is given below).
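For concreteness, the following is a minimal Python rendering of the sparse-scheduling branch of Algorithm~\ref{alg:2D Prefetch Scheduling}; the helper callables, the constants, and the slightly simplified eviction bookkeeping are placeholders rather than the actual implementation.
\begin{verbatim}
# Hypothetical names; a compact sketch of SparseSchedule in Algorithm 1.
hits = {}            # hit counts per sparse parameter id
cpu_cache = {}       # parameter id -> states cached in CPU memory
CPU_SIZE = 1024      # max number of cached sparse parameter states (illustrative)
THRESHOLD, BETA, K_STEPS = 4, 0.5, 100
steps = 0

def sparse_schedule(p, fetch_from_ssd, update_ssd, to_gpu):
    global steps
    if p in cpu_cache:                        # cache hit
        hits[p] = hits.get(p, 0) + 1
    elif len(cpu_cache) + 1 < CPU_SIZE:       # room left in the CPU cache
        hits[p] = 1
        cpu_cache[p] = fetch_from_ssd(p)
    else:                                     # evict the coldest entry above the threshold
        victim = min(hits, key=hits.get)
        if hits[victim] >= THRESHOLD:
            update_ssd(victim, cpu_cache.pop(victim))   # write states back to SSD
            del hits[victim]
            hits[p] = 1
            cpu_cache[p] = fetch_from_ssd(p)
    steps += 1
    if steps == K_STEPS:                      # decay hit counts (moving average)
        for key in hits:
            hits[key] *= BETA
        steps = 0
    to_gpu(p)                                 # transfer the states to the owning GPU
\end{verbatim}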
Specifically, if the parameter $p_{s}$ is requested and it has been used in a previous FWD pass, we increase its count in $hits$. If the CPU caches have reached their maximum capacity, we write back the sparse parameter states that have the lowest hit frequency while exceeding the hit threshold, release them, and then move the states of parameter $p_{s}$ from the SSDs into the CPU caches. Every $K$ training steps, we apply a moving-average decay to balance the hit frequencies of the sparse parameters and increase the efficiency of the CPU caches. \subsection{Fusion Communication} \paragraph{Fusion parameters:} As described for 2D prefetch scheduling, we use ZeRO-3 data parallelism~\cite{rajbhandari2021zero} for the dense parameters. As a result, communication is requested many times during the FWD and BWD computation of MoE training, which enlarges the scheduling gaps and thus reduces training efficiency. To reduce this fragmented communication, we adopt a fusion strategy for the parameters that require communication. Through a parameter management unit, parameter slices are combined into a larger one before communication and then cut back into the corresponding smaller ones afterwards. As shown in Figure~\ref{fig:Parameter coalesced}, we manage the current parameter slices on each rank and fuse them as needed. After communication, we rebuild the whole parameter states according to the recorded slice indices. By decreasing the number of communication operations, the reduced latency overhead allows better scaling to a large number of devices. \begin{figure}[!htb] \centering \subfloat[Parameter coalesced in forward]{\includegraphics[scale=0.8]{figures/muti_task-param_coalesced} \label{fig:Parameter coalesced}} \hfil \subfloat[Gradient coalesced in backward]{\includegraphics[scale=0.73]{figures/muti_task-gradient_bucket} \label{fig:Gradient buckets}} \caption{Fusion communication of dense parameters and gradients} \end{figure} \paragraph{Gradient Buckets:} In backward propagation, communicating gradients one by one would undoubtedly increase the number of communication operations and the idle time between them. At the same time, there is a potential risk of disordered communication between different ranks. To avoid these problems, we build a bucket unit especially for gradient communication~\cite{rajbhandari2020zero}. As shown in Figure~\ref{fig:Gradient buckets}, we allocate the bucket space in advance so that it can accommodate the gradients of $N$ parameters. The communication is not triggered until the backward propagation of all parameters involved in the bucket is finished. Backward communication through gradient buckets largely avoids inconsistent gradient aggregation orders and reduces GPU memory fragmentation, which improves GPU memory utilization. \section{Related Work} \label{sec:related} \paragraph{Sparsely Activated Models:} Large-scale models based on Mixture-of-Experts have shown prominent advantages in natural language processing. Many papers~\cite{zuo2021taming, gross2017hard, lewis2021base} have focused on adapting the routing strategy to improve model quality and performance. Toward low-carbon and energy-efficient training, GLaM~\cite{du2021glam} showed that its largest model, with 1.2T parameters, consumes only about 1/3 of the energy used to train GPT-3.
\paragraph{Large-Scale Models with MoE:} There has been much research on increasing model size over the last few years, motivated by the scaling law~\cite{kaplan2020scaling}. Based on the MoE architecture, models with billions and even trillions of parameters, such as CPM-2~\cite{zhang2021cpm}, M6-T~\cite{yang2021m6}, M6-10T~\cite{lin2021m6}, and GLaM~\cite{du2021glam}, have been shown to generalize better on natural language processing and multi-modal tasks. Besides, Baidu has proposed UFO~\cite{ufo}, a unique MoE-based model that takes the deployment efficiency of large models into account and makes full use of big data and large models. With the introduction of a super network, the UFO model is composed of many subtasks, each of which is a path in the super network; one subtask is selected for training through the routing strategy. \paragraph{MoE Training and Inference Systems:} As the MoE training paradigm becomes popular, many scientific research institutions and enterprises have open-sourced MoE training frameworks and systems. DeepSpeed-MoE~\cite{kim2021scalable, rajbhandari2022deepspeed} combines MoE parallelism with a variety of distributed parallel methods, including data parallelism, tensor slicing~\cite{shoeybi2019megatron}, and ZeRO data parallelism~\cite{rajbhandari2020zero}, to train larger models. For MoE inference, DeepSpeed designs a novel sparsely activated model named PR-MoE together with model compression techniques to reduce the MoE model size, as well as an efficient communication method to optimize latency~\cite{rajbhandari2022deepspeed}. FastMoE~\cite{he2021fastmoe} is a distributed MoE training system that provides a hierarchical interface and simple instructions for use with Megatron-LM~\cite{shoeybi2019megatron} and Transformer-XL~\cite{dai2019transformer}, based on data parallelism and tensor-slicing parallelism. Different from the DeepSpeed implementation, FastMoE employs a carefully optimized method to reduce network traffic. In addition, the inference system INFMoE~\cite{zhang2021cpm} derives an optimal order of computation and parameter offloading with a greedy algorithm to address workload imbalance, aiming to fully hide the cost of data movement caused by CPU offloading and to guarantee computational efficiency. Fairseq-MoE~\cite{ott2019fairseq, artetxe2021efficient} is a sequence modeling framework for training custom models for summarization, translation, and language modeling, and Tutel~\cite{tutel} has further optimized the Fairseq system in communication and computation, improving its performance by about 40 percent. Moreover, the optimizations in Tutel have been integrated into DeepSpeed to facilitate MoE model training. \section*{Appendix} \end{document}
\section{Introduction} The complexity and diversity of today's media landscape provide many challenges for researchers studying news. In this paper, we introduce a broad news benchmark data set, called the NEws LAndscape (NELA2017) data set, to facilitate the study of many problems in this domain. The data set includes articles on U.S. politics from a wide range of news sources that includes well-established news sources, satire news sources, hyper-partisan sources (from both ends of the political spectrum), as well as sources that have been known to distribute maliciously fake information. At the time of writing, this data set contains 136K news articles from 92 sources between April 2017 and October 2017. As news producers and distributors can be established quickly with relatively little effort, there is limited prior data on the reliability of many of these sources, even though the information they provide can end up being widely disseminated due to algorithmic and social filtering in social media. It has been argued that the traditionally slow fact-checking process and journalistically trained ``gatekeepers'' are insufficient to counteract the potentially damaging effect these sources have on the public~\cite{mele2017combating}~\cite{buntain2017automatically}. As a result, there is a great deal of early research in automatically identifying different writing styles and persuasion techniques employed by news sources~\cite{popat2016credibility}~\cite{potthast2017stylometric}~\cite{horne2017just}~\cite{chakraborty2016stop} ~\cite{singhania20173han}. Hence, a broad data set including many different types of sources is especially useful in further refining these methods. To this end, we include 130 content-based features for each article, in addition to the article meta-data and full text. The feature set contains almost all the features used in the related literature on identifying misinformation, political bias, clickbait, and satire. Furthermore, we include Facebook engagement statistics for each article (the number of shares, comments, and reactions). While much of recent research has focused on automatic news characterization methods, there are many other news publishing behaviors that are not well-studied. For instance, there are many sources that have been in existence for a long time. These sources enjoy a certain level of trust by their audience, sometimes despite their biased and misleading reporting, or potentially because of it. Hence, trust for sources and content cannot be studied independently. While misinformation in news has attracted a lot of interest lately, it is important to note that many sources mix true and false information in strategic ways, not only to distribute false information, but also to create mistrust of other sources. This mistrust and uncertainty may be created by writing specific narratives and having other similar sources copy that information verbatim~\cite{lytvynenko}. In some cases, sources may copy information with the intention to misrepresent it and undermine its reliability. In other cases, a source may copy information to gain credibility itself. Similarly, the coverage of topics in sources can be highly selective or may include well-known conspiracy theories. Hence, it may be important to study a source's output over time and compare it to other sources publishing news in the same time frame. This can sometimes be challenging as sources are known to remove articles that attract unwanted attention.
We have observed this behavior with many highly shared false articles during the 2016 U.S. election. These open research problems are the primary reasons we have created the NELA2017 data set. Instead of concentrating on specific events or specific types of news, this data set incorporates all political news production from a diverse group of sources over time. While many news data sets have been published, none of them have the broad range of sources and time frame that our data set offers. Our hope is that our data set can help serve as a starting point for many exploratory news studies, and provide a better, shared insight into misinformation tactics. Our aim is to continuously update this data set, expand it with new sources and features, as well as maintain completeness over time. In the rest of the paper, we describe the data set in detail and provide a number of motivating use cases. The first describes how we can characterize the news sources using the features we have provided. In the second, we show how social media engagement differs across groups of sources. We then illustrate content copying behavior among the sources and how the sources covered different narratives around two events. \section{Related Work} There are several recent news data sets, specifically focused on fake news. These data sets include the following. {\bf Buzzfeed 2016} contains a sample of 1.6K fact-checked news articles from mainstream, fake, and political blogs shared on Facebook during the 2016 U.S. Presidential Election~\footnote{\url{github.com/BuzzFeedNews/2017-12-fake-news-top-50}}. It was later enhanced with metadata by Potthast et al.~\cite{potthast2017stylometric}. This data set is useful for understanding the false news spread during the 2016 U.S. Presidential Election, but it is unknown how generalizable results will be over different events. {\bf LIAR} is a fake news benchmark data set of 12.8K hand-labeled, fact-checked short statements from \url{politifact.com}~\cite{wang2017liar}. This data set is much larger than many previous fake news data sets, but focuses on short statements rather than complete news articles or sources. {\bf NECO 2017} contains a random sample of three types of news during 2016: fake, real, and satire. Each source was hand-labeled using two online lists. It contains a total of 225 articles~\cite{horne2017just}. While the ground truth is reasonably grounded, the data set is very small and time-specific. {\bf BS Detector} contains approximately 12K ``fake news'' articles collected using the browser extension BS Detector, which labels news based on a manually compiled source dictionary (\url{http://bsdetector.tech/}) and is publicly available on \url{kaggle.com}. The reliability of this source dictionary is unknown. Additionally, there are much larger, general news data sets that are focused on events, topics, and location. These include the following. {\bf GDELT} contains a wide range of online publications, including news and blogs, in over 100 languages. The collection is based on world events, focusing on location, temporal, and network features. GDELT provides a useful visual knowledge graph that indexes images and visuals used in news. While this data set provides news data over an extended period of time, it is focused on news surrounding external events, and may not capture many ``fake'' news sources.
In addition, Kwak and An~\cite{kwak2016revealing} point out that there is concern as to how biased the GDELT data set is, as it does not always align with other event-based data sets. {\bf Unfiltered News} (\url{unfiltered.news}) is a service built by Google Ideas and Jigsaw to address filter bubbles in online social networks. Unfiltered News indexes news data for each country based on mentioned topics. This data set does not focus on raw news articles or necessarily false news, but on location-based topics in news, making it extremely useful for analyzing media attention across time and location. Data from Unfiltered News is analyzed in~\cite{an2017convergence}. There are many more data sets that focus on news or claims in social networks. {\bf CREDBANK} is a crowdsourced data set of 60 million tweets between October 2015 and February 2016. Each tweet is associated with a news event and is labeled for credibility by Amazon Mechanical Turkers~\cite{mitra2015credbank}. This data set does not contain raw news articles, only news article related tweets. {\bf PHEME} is a data set similar to CREDBANK, containing tweets surrounding rumors. The tweets are annotated by journalists~\cite{zubiaga2016analysing}. Once again, this data set does not contain raw news articles, but focuses on tweets spreading news and rumors. Both PHEME and CREDBANK are analyzed in~\cite{buntain2017automatically}. {\bf Hoaxy} is an online tool that visualizes the spread of claims and related fact checking~\cite{Shao:2016:HPT:2872518.2890098}. Claim-related data can be collected using the Hoaxy API. Once again, data from this tool is focused on the spread of claims (which can be many things: fake news articles, hoaxes, rumors, etc.) rather than news articles themselves. Other works use study-specific data sets collected from a few sources. Some of these data sets are publicly available. Piotrkowicz et al. use 7 months of news data collected from The Guardian and The New York Times to assess headline structure's impact on popularity~\cite{piotrkowicz2017headlines}. Reis et al. analyze sentiment in 69K headlines collected from The New York Times, BBC, Reuters, and Dailymail~\cite{reis2015breaking}. Qian and Zhai collect news from CNN and Fox News to study unsupervised feature selection on text and image data from news~\cite{qian2014unsupervised}. Saez-Trumper et al. explore different types of bias in news articles from the top 80 news websites during a two-week period~\cite{saez2013social}. There are 3 core issues with these data sets that we address with the NELA2017 data set: \begin{enumerate} \item Small in size and sources - The current data sets that focus on news producers contain very few sources, typically concentrate on one type of source (mainstream, fake, etc.), and have a small number of data points. \item Event specific - Many of the current data sets are focused on small time frames or specific events (e.g., the 2016 Presidential Election). To ensure current results can be generalized and to track how the news is changing, researchers need data across time and events. \item Engagement specific - The majority of these data sets contain only highly engaged or shared articles. While it can be argued that these are the more important data points, they lack the complete picture of news producer behavior. In order to understand how news producers publish, specifically hyper-partisan and malicious sources, researchers need to explore both the viral articles and the articles that are never seen.
\end{enumerate} Hence, our goal for the NELA2017 data set is to create a large, near-complete news article data set, across the various types of sources, in hopes of providing a more complete view of how news producers behave. \begin{table*} \begin{center} \fontsize{7.95}{8}\selectfont \hspace*{-0.0in}\begin{tabular}{|c|c||c|c||c|c||c|c|} \hline \textbf{Source} & \textbf{Complete} & \textbf{Source} & \textbf{Complete} & \textbf{Source} &\textbf{Complete} & \textbf{Source} & \textbf{Complete} \\ \hline AP & 50\% & Freedom Daily & 100\% & Observer & 100\% & Duran & 71\% \\ Activist Post & 100\% & Freedom Outpost & 100\% & Occupy Democrats & 93\% & Fiscal Times & 71\% \\ Addicting Info & 57\% & FrontPage Mag & 100\% & PBS & 100\% & Gateway Pundit & 100\% \\ Alt Media Syn & 78\% & Fusion & 86\% & Palmer Report & 50\% & The Guardian & 100\% \\ BBC & 100\% & Glossy News & 100\% & Politicus USA & 100\% & The Hill & 100\% \\ Bipartisan Report & 100\% & Hang the Bankers & 72\% & Prntly & 71\% & Huffington Post & 100\% \\ Breitbart & 100\% & Humor Times & 100\% & RT & 71\% & The Inquisitr & 100\% \\ Business Insider & 100\% & Infowars & 100\% & The Real Strategy & 100\% & New York Times & 100\% \\ BuzzFeed & 100\% & Intellihub & 100\% & Real News Right Now & 100\% & The Political Insider & 100\% \\ CBS News & 100\% & Investors Biz Daily & 100\% & RedState & 100\% & Truthfeed & 79\% \\ CNBC & 100\% & Liberty Writers & 100\% & Salon & 100\% & The Right Scoop & 100\% \\ CNN & 100\% & Media Matters & 100\% & Shareblue & 50\% & The Shovel & 100\% \\ CNS News & 100\% & MotherJones & 36\% & Slate & 100\% & The Spoof & 100\% \\ Conservative Trib & 100\% & NODISINFO & 86\% & Talking Points Memo & 50\% & TheBlaze & 100\% \\ Counter Current & 100\% & NPR & 100\% & The Atlantic & 100\% & ThinkProgress & 100\% \\ Daily Buzz Live & 86\% & National Review & 100\% & The Beaverton & 100\% & True Pundit & 100\% \\ Daily Kos & 100\% & Natural News & 100\% & Borowitz Report & 93\% & Washington Examiner & 100\% \\ Daily Mail & 100\% & New York Daily & 100\% & Burrard Street Journal & 86\% & USA Politics Now & 36\% \\ Daily Stormer & 72\% & New York Post & 100\% & The Chaser & 100\% & USA Today & 100\% \\ Drudge Report & 79\% & NewsBiscuit & 100\% & ConservativeTreeHouse & 100\% & Veterans Today & 100\% \\ Faking News & 100\% & NewsBusters & 72\% & D.C. Clothesline & 93\% & Vox & 100\% \\ Fox News & 86\% & Newslo & 93\% & Daily Beast & 100\% & Waking Times & 100\% \\ World News Politics & 93\% & Xinhua & 36\% & Yahoo News & 100\% & Young Conservatives & 93\% \\ \hline \end{tabular} \caption{Approximate completion percentage of all sources in the data set. Since each news source publishes at different rates, we compute completion as having more than 1 article published in each 2 week period of the data set.}\label{sources} \end{center} \end{table*} \section{Data set creation} In creating our data set, we target a collection of sources to include both well-established news companies, political blogs, and satire websites ,as well as many alternative news sources that have published misinformation in the past or have relatively unknown veracity. To select these sources, we used a 3-step process: \begin{enumerate*}\item We select well-known sources using Wikipedia lists to capture many mainstream and well-established sources. \item We randomly select sources from the \url{opensources.co} lexicon. 
OpenSources is an expert-curated news source lexicon containing 12 different types of sources: fake, satire, extreme bias, conspiracy, rumor, state, junk science, hate speech, clickbait, unreliable, political, and reliable. This step captures many niche sources and those that have spread fake news in the past. \item We hand-select sources cited by previously selected sources (based on reading random articles). \end{enumerate*} This third step provides even more diversity across intentions and political leanings. To ensure that we have a balance of left- and right-leaning sources, we review selected sources using the crowd-sourced bias-checking service \url{mediabiasfactcheck.com}. Once we have the set of news sources, we create article scrapers for each source. Each scraper collects news articles at 12:00pm EST and 9:00pm EST each day. This near real-time collection allows us to retain news articles that are later deleted, a common practice among maliciously fake news sources. Some sources can be collected using standard RSS feed scrapers, while others, especially the less credible sources, need custom web scrapers to collect articles. For news sources with available RSS feeds, we use the Python library feedparser~\footnote{pythonhosted.org/feedparser/}; for news sources with standard HTML structure, we use python-goose~\footnote{github.com/grangier/python-goose}; and for news sources with difficult-to-parse HTML structures, we use a mix of BeautifulSoup~\footnote{www.crummy.com/software/BeautifulSoup/bs4/doc/} and feedparser to create site-specific scrapers. Of the 100 sources selected, there were 8 that our scrapers could not consistently collect, leaving us with 92 sources. To control for topic, we only collect political news from each source. For the majority of sources, controlling for topic is very easy, as their websites are divided into topic-based feeds. It is important to note that some topic-based feeds are less strict than others, specifically on fake news sites. Thus, in the political news feed, some pseudo-science and odd-topic conspiracy articles are mixed in. We choose to collect these occasional off-topic articles as well, as they may provide insight into these fake news sources. \noindent Each scraper collects the following information: \\ \indent \textbf{content} - the text from the body of the article \\ \indent \textbf{title} - the text from the title of the article \\ \indent \textbf{source} - the source of the article \\ \indent \textbf{author} - the journalist who wrote the article, if the information is available in the web page meta data \\ \indent \textbf{published} - the UTC time stamp of publication according to the web page \\ \indent \textbf{link} - the url used to scrape the article (RSS feed or web page) \\ \indent \textbf{html} - the full HTML of the article page stored as unicode \\ This information is stored for each article in a JSON dictionary, with keys of the same names as above. Using this process, we obtain almost 100\% of the articles produced during the 7-month time period. The approximate completion percentage for each source over the 7 months of collection can be found in Table~\ref{sources}.
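To make the collection pipeline concrete, the following is a simplified sketch of an RSS-based scraper that produces records with the keys listed above; the feed URL, parsing choices, and body-text extraction are ours for illustration and differ from the actual site-specific scrapers.
\begin{verbatim}
# A simplified RSS-based collector; field names follow the JSON keys in the text.
import json, time
import feedparser, requests
from bs4 import BeautifulSoup

def collect(source_name, rss_url):
    articles = []
    for entry in feedparser.parse(rss_url).entries:
        html = requests.get(entry.link, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        articles.append({
            "content": " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p")),
            "title": entry.get("title", ""),
            "source": source_name,
            "author": entry.get("author", None),
            "published": entry.get("published", time.strftime("%Y-%m-%d %H:%M:%S")),
            "link": entry.link,
            "html": html,
        })
    return articles

if __name__ == "__main__":
    # hypothetical feed URL for illustration only
    with open("articles.json", "w") as f:
        json.dump(collect("ExampleSource", "https://example.com/politics/rss"), f)
\end{verbatim}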
\begin{figure*}[h] \begin{center} \begin{tabular}{cc} \\ \small{(a) Top 10 Most Subjective Writing Style (on average)} & \small{(b) Top 10 Hardest to Read (on average)}\\ \includegraphics[width=200pt,keepaspectratio=true]{figures/top20_subj.png}& \includegraphics[width=200pt,keepaspectratio=true]{figures/top20_smog.png}\\ \small{(c) Top 10 Most Clickbait Titles (\% of articles)} & \small{(d)Top 10 Longest Title (on average)}\\ \includegraphics[width=200pt,keepaspectratio=true]{figures/top10_clickbait.png}& \includegraphics[width=200pt,keepaspectratio=true]{figures/top10_WCtitle.png}\\ \small{(e) Top 10 Most Negative Sources (on average)} & \small{(f) Top 10 Most Lexically Redundant Sources (on average)}\\ \includegraphics[width=200pt,keepaspectratio=true]{figures/top10_negemo.png}& \includegraphics[width=200pt,keepaspectratio=true]{figures/top10_ttr.png}\\ \end{tabular} \vspace*{-0.2in} \caption{Top 10 sources for a selection of features.\label{top10}} \end{center} \end{figure*} \begin{figure*}[ht] \begin{center} \hspace*{-0.1in}\begin{tabular}{cccc} \\ \includegraphics[width=115pt,keepaspectratio=true]{figures/subj_dist.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/negemo_dist.png} &\includegraphics[width=115pt,keepaspectratio=true]{figures/wctitle_dist.png} &\includegraphics[width=115pt,keepaspectratio=true]{figures/smog_dist.png} \end{tabular} \vspace*{-0.2in} \caption{Feature distributions across different articles from specific sources\label{dists}} \end{center} \end{figure*} \begin{figure*}[th] \begin{center} \hspace*{-0.1in}\begin{tabular}{cccc} \\ {\small Median} & {\small Max} & {\small Median} & {\small Max} \\ \includegraphics[width=115pt,keepaspectratio=true]{figures/political_med.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/political_max.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/political+_med.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/political+_max.png} \\ \multicolumn{2}{c}{(a) Self-Proclaimed Political biased groups} & \multicolumn{2}{c}{(b) Self-Proclaimed Political + previous behavior groups} \\ \end{tabular} \vspace*{-0.1in} \caption{Facebook shares for source groups over time. The median or the max of the shares is measure every two weeks. Hence, 0 is the beginning of April, and 14 is the end of October. \label{fb}} \end{center} \end{figure*} \begin{table*}[thb!] \centering \begin{minipage}[t]{3.2in} \hspace*{-0.15in}\begin{tabular}{p{0.6in}p{2.3in}} \bf{Abbr.} &\bf{Description} \\ \hline &\\ POS & normalized count of each part of speech (36 feats)\\ linguistic & \# function words, pronouns, articles, prepositions, verbs, etc. using LIWC lexicons (24 features)\\ clickbait & clickbait title classification using models built in~\cite{chakraborty2016stop}\\ \hline \multicolumn{2}{c}{\bf(a) Structure Features} \\\\ \hline sentiment & negative, positive, and neutral sentiment scores from VADER~\cite{hutto2014vader} (3 features)\\ emotion & positive, negative, affect, etc. 
words using LIWC and strong/weak emotion words from lexicons in~\cite{recasens2013linguistic} (13 features) \\ Happiness & happiness score using~\cite{mitchell2013geography} Happiness lexicon\\ &\\ \hline \multicolumn{2}{c}{\bf(b) Sentiment Features} \\ \\ \hline Facebook engagement & \# of shares, comments, reactions collected using Facebook API\\ \hline \multicolumn{2}{c}{\bf(c) Engagement Features} \\ bio & biological processes from LIWC lexicon (5 features)\\ relativity & motion, time, and space words from LIWC lexicon (4 features)\\ personal concerns & work, home, leisure, etc. from LIWC lexicon (6 features)\\ &\\ \hline \multicolumn{2}{c}{\bf(d) Topic-dependent Features} \\ \end{tabular} \end{minipage} \begin{minipage}[t]{3.2in} \hspace*{-0.1in}\begin{tabular}{p{0.6in}p{2.3in}} \bf{Abbr.} &\bf{Description} \\ \hline TTR & Type-Token Ratio, also known as lexical diversity or redundancy, computed as $\frac{\# unique words}{total words}$\\ &\\ FKE & Standard readability measure computed by $0.39 * (\frac{total words}{total sentences}) + 11.8 * (\frac{total syllables}{total words}) - 15.59$\\ SMOG & Standard readability measure computed by $1.0430 * \sqrt{\#polysyllables * \frac{30}{\#sentences}} + 3.1291$\\ &\\ wordlen & average \# characters in a word\\ WC & word count\\ cogmech & \# cognitive process words (includes cause, insight, etc.) from LIWC lexicons (7 features)\\ \hline \multicolumn{2}{c}{\bf(e) Complexity Features} \\ \\ \hline bias & several bias lexicons from~\cite{recasens2013linguistic} and~\cite{mukherjee2015leveraging} (14 features)\\ subjectivity & probability of subjective text using a Naive Bayes classifier trained on 10K subjective and objective sentences from~\cite{pang2004sentimental} used in~\cite{horne2017just}\\ \hline \multicolumn{2}{c}{\bf(f) Bias Features} \\ \\ \hline Moral & features based on Moral Foundation Theory~\cite{graham2009liberals} and lexicons used in~\cite{lin2017acquiring} (10 features)\\ &\\ \hline \multicolumn{2}{c}{\bf(g) Morality Features} \\ \\ \end{tabular} \end{minipage} \caption{\label{tbl:features} Different features implemented on the data set. Each feature is computed on the title and body text separately.} \end{table*} {\bf Feature set creation} Next, to facilitate content-based analysis and writing style research on these articles, we compute 130 content-based features and collect 3 Facebook engagement statistics on each news article. These features come from a wide range of literature on false news detection~\cite{potthast2017stylometric}~\cite{horne2017just}~\cite{horne2018accessing}, political bias detection~\cite{recasens2013linguistic}, content popularity~\cite{piotrkowicz2017headlines}~\cite{horne2017identifying}, clickbait detection~\cite{chakraborty2016stop}, and general text characterization~\cite{loper2002nltk}. We break these features down into 7 categories: structure, complexity, sentiment, bias, morality, topic, and engagement. All 130 features are computed on the title and the body text separately, giving us 260 content-based features in total. Due to the wide range of literature these features are borrowed from, some are highly correlated, but all are computed differently. To allow researchers even more flexibility, we provide all of the feature code in one easy-to-use Python script. All feature code and implementation details are available at: \url{https://github.com/BenjaminDHorne/Language-Features-for-News}. Descriptions of these features can be found in Table~\ref{tbl:features}.
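As a concrete illustration of the complexity features in Table~\ref{tbl:features}, the following sketch re-implements TTR and the two readability scores directly from the formulas given there; the naive syllable counter is our stand-in and may differ from the one used in the released feature script.
\begin{verbatim}
# Illustrative re-implementation of three complexity features (TTR, FKE, SMOG).
import re
from math import sqrt

def count_syllables(word):
    # crude approximation: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def complexity_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words or not sentences:
        return {}
    syllables = [count_syllables(w) for w in words]
    ttr = len(set(w.lower() for w in words)) / len(words)
    fke = 0.39 * (len(words) / len(sentences)) \
          + 11.8 * (sum(syllables) / len(words)) - 15.59
    polysyllables = sum(1 for s in syllables if s >= 3)
    smog = 1.0430 * sqrt(polysyllables * 30 / len(sentences)) + 3.1291
    return {"TTR": ttr, "FKE": fke, "SMOG": smog}
\end{verbatim}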
Due to lack of space, we leave major implementation details to the data set and code documentation. \section{Potential use cases of the NELA2017 data set} There is a variety of news credibility research strands that can benefit from this data set. In particular, we argue that this data set can not only test the generality of previous results in computational journalism, but also spark research in lesser-studied areas. In this section, we present 4 use cases with varying levels of granularity, including: general news source characterization, highly engaged article characterization, content attribution and copying, and analyzing specific news narratives. \subsection{News Source Characterization} The most obvious and general use of the NELA2017 data set is news source characterization and comparison. With the increasing public attention on news sources, many maps of the media landscape have been offered to show how different sources compare to each other. Often these maps are based on a subjective evaluation of these sources. Our features make it possible to draw such comparisons based on algorithms with transparent criteria. We first show the top 10 sources in Figure~\ref{top10} according to their average behavior with respect to: (a) subjectivity based on writing style, (b) grade-level readability, (c) the clickbait nature of titles, (d) length of titles, (e) negative sentiments expressed, and (f) the amount of lexical redundancy, i.e., repetition in articles. Past research shows fake news articles are generally easier to read and more repetitive, but are not necessarily clickbait~\cite{horne2017just}. It is also well-studied that many highly engaged fake articles and conspiracy theories express negative emotions~\cite{Bessi:2015hg}. All of these previous results are accurately supported by the rankings based on our features. For example, the subjectivity feature accurately captures a number of highly partisan sources in our list, and the clickbait predictions point to well-known clickbait sources. However, these clickbait sources are not necessarily among the sources with very long titles or repetitive content. The sources with the highest grade-level readability include some sources that translate from other languages (Xinhua) and more niche-domain sources (The Fiscal Times). Additionally, we also look at how consistent sources are over time. Sources may show higher variation in these distributions due to a lack of editorial standards, as well as different types of content mixing (made-up content or content copied from other sources). In Figure~\ref{dists}, we show select feature distributions over the full 7 months of data for four news sources: Liberty Writers, Newslo, The New York Times, and PBS. We can clearly see that both Liberty Writers and Newslo have very wide distributions, whereas The New York Times and PBS have much narrower distributions, illustrating consistency. These features are not only useful for quick source comparison, but also have predictive power, as shown in prior work~\cite{popat2016credibility}~\cite{horne2017just}. Given our feature set is a superset of all the features from these different literature threads, we expect them to achieve accuracy as good as or better than reported. Due to lack of space, we do not provide examples of prediction.
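As an illustration of how rankings like those in Figure~\ref{top10} can be reproduced from per-article feature files, the following is a minimal sketch; the file and column names are hypothetical and may differ from the actual release.
\begin{verbatim}
# Rank sources by average subjectivity (column names are illustrative).
import pandas as pd

features = pd.read_csv("nela2017_features.csv")   # one row per article
top10 = (features.groupby("source")["subjectivity"]
                 .mean()
                 .sort_values(ascending=False)
                 .head(10))
print(top10)
\end{verbatim}
The same pattern applies to any other feature column, or to per-source variance when studying consistency over time.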
\begin{figure*}[ht] \centering \hspace*{-0.5in}\begin{tabular}{cc} \small{(a) May 1st-14th 2017} & \small{(b) July 1st-14th 2017}\\ \includegraphics[width=9cm]{figures/May_week1and2.png}& \includegraphics[width=9cm]{figures/July_week1and2.png} \end{tabular} \vspace*{-0.2in} \caption{Article similarity graphs during two different two-week periods. The weighted in-degree is the number of articles copied from a source. The weight is indicated by the size of the arrow. The in-degree of a source is shown by the size of the node. The color of a node indicates the community it belongs to based on modularity.} \label{attrib_nets} \end{figure*} \subsection{Engagement Characterization} While the NELA2017 data set does not contain labels, such as which articles are fake and which are not, we can make labeled subgroups of the data set using external labeling or unsupervised clustering over different features described in the previous use case. For space reasons, we provide an example of external labeling only. There are many ways to label news articles and sources in the NELA2017 data set such as based on ownership, self-proclaimed political leaning, reliability (using a lexicon like \url{opensources.co}), or the age of the news source. To explore this method, we group sources by their self-proclaimed political leaning as conservative or liberal and exclude satire news sources and any news source that does not clearly claim a political ideology. These subgroups contain 16 liberal sources and 17 conservative sources. While there are certainly other politically biased news sources in the data set, we are strictly looking at self-proclaimed leaning. We can break down these groups even further by using previously known reporting behavior. Specifically, we ask ``has the source published a completely false article in the past?'' To do this, we manually use 3 online fact-checkers: (\url{snopes.com}, \url{politifact.com} or \url{factcheck.org}). In this division, we do not include sources that have published partially false articles, only completely false. This labeling can be thought of as source-level reliability rather than article-level correctness. With these newly labeled subgroups of the NELA2017 data set, we explore Facebook shares over time. In Figure~\ref{fb}a, we see that, on average, politically-left leaning news sources had higher shares over the 7 month time period and these shares increased over time. When looking at the max number of shares, rather than the median, we see politically-right leaning news sources were often shared slightly more. In Figure~\ref{fb}b, when splitting by previously publishing a false article, false politically-left sources were shared more than true politically-left news sources in the first 3 months of the time slice, but decrease significantly in the last 4 months of the time slice. In contrast, false right-leaning sources are shared more than true right-leaning source over the full 7 month time slice. While this simple analysis does not conclude that false news articles were more highly shared than true news articles during this time, it does illustrate differences in engagement with political news sources that have published false articles in the past. \subsection{Attribution and Content Copying} A lesser studied area that can benefit from the NELA2017 data set is news attribution, which has been studied in journalism, but not in the context of today's news ecosystem. 
In the context of today's news environment, Jane Lytvynenko of BuzzFeed News points out that the conspiracy news site Infowars copied thousands of articles from other sources without attribution over the past 3 years~\cite{lytvynenko}. Most notably, Infowars copied from Russia Today (RT), Sputnik, CNN, BBC, The New York Times, Breitbart, CNS News, and The Washington Post. This article sheds light on the potential content-mixing methods of fake and conspiracy news sources that publish original material with a specific message and also report ``real'' content from other sources to increase their perceived credibility. To provide an example of this, we extract highly similar articles from several two-week intervals. We do this using the cosine similarity between TFIDF (Term-Frequency Inverse Document-Frequency) article vectors, a standard technique in information retrieval. For every article pair from different sources, if the cosine similarity is above 0.90 (meaning the articles are almost verbatim), we extract the article pair and compare time stamps to see which source published the article first. Over each two-week interval, we use the time stamp comparison to create a weighted directed graph, in which a node's in-degree is the number of articles copied from that node and its out-degree is the number of articles the node copies. In Figure~\ref{attrib_nets}, we show networks from two time frames: May 1st-14th and July 1st-14th. In each figure, the edge weight is represented by the size of the arrow, each node's in-degree is shown by the size of the node, and each node is colored based on the community it belongs to (using modularity). Note that, since this is a pairwise analysis, there may be redundant links if the same story is copied by many sources. For example, if several sources copy a story from AP, the network will not only point to AP, but also to the sources that published that story earlier than another source. While there are many potential types of content copying, this analysis only explores near-exact content copying. Specifically, sources that mix false and true content would not be captured by this high cosine-similarity threshold. In each graph, there are multiple connected components and clear communities of who copies from whom. In particular, we see well-known mainstream sources copy from each other (primarily from AP, a news wire service) and known conspiracy sources copy from each other. In some cases, these two communities are completely disconnected, and at other times there is a path between them. For example, in Figure~\ref{attrib_nets}a, there exists a path between USA Politics Now and Fox News (through Liberty Writers and The Gateway Pundit). In other time slices (not shown), we see a direct path between Infowars and Fox News (Fox News copying from Infowars and vice versa). In addition to these two larger communities, we see many separate smaller communities of sources, including satire, left-wing, and right-wing communities. We see very similar community structure and attribution patterns throughout the data set. Overall, the community structure we observe in content similarity networks is very similar to that of the news ecosystem on Twitter~\cite{starbird2017examining}, where alternative news sources form tight-knit communities with few connections to mainstream news.
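A minimal sketch of this network construction is given below, using scikit-learn and NetworkX; it assumes the article dictionaries described earlier (with comparable \texttt{published} timestamps) and is a simplified stand-in for the full pipeline.
\begin{verbatim}
# Near-duplicate detection via TFIDF cosine similarity, plus a
# time-stamp-directed copying graph (copier -> earlier publisher).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def copying_network(articles, threshold=0.90):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(
        [a["content"] for a in articles])
    sims = cosine_similarity(tfidf)
    graph = nx.DiGraph()
    for i in range(len(articles)):
        for j in range(i + 1, len(articles)):
            a, b = articles[i], articles[j]
            if a["source"] != b["source"] and sims[i, j] > threshold:
                first, second = (a, b) if a["published"] < b["published"] else (b, a)
                # edge points from the presumed copier to the earlier publisher,
                # so in-degree counts how often a source is copied from
                w = graph.get_edge_data(second["source"], first["source"],
                                        {"weight": 0})["weight"]
                graph.add_edge(second["source"], first["source"], weight=w + 1)
    return graph
\end{verbatim}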
We further categorize the types of content copying we see into three primary categories: {\bf Proper Attribution, Different Title.} Many sources publish full, word-for-word articles from The Associated Press (AP), but provide clear citations such as ``2017 The Associated Press. All Rights Reserved.'' or ``The Associated Press contributed to this report.'' Specifically, we see this citation behavior in sources like CBS News, PBS News, Fox News, Breitbart, The Talking Points Memo, and The Huffington Post. More interestingly, while the content is almost exactly the same, the titles can be very different. For example, the title for an AP article was ``Scholars White Houses name gaffe not helping US-China ties,'' whereas the Fox News title for the same article was ``Chinese scholars rip White House staff after name mix up.'' Relatedly, we see that True Pundit directly copies many full articles from The Daily Caller (60 copied articles between April 14th and May 14th). At the end of each article The Daily Caller writes: ``Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience.'' Thus, True Pundit's copying can be considered legitimate attribution. Infowars similarly takes articles from The Daily Caller. {\bf Same Author, Different Source.} Surprisingly, we find the majority of highly similar articles are written by the same author for different sources. There are many examples of this behavior. We see The D.C. Clothesline and Freedom Outpost commonly publish articles written by Tim Brown. The D.C. Clothesline also has articles written by Jay Syrmopoulos, who writes for Activist Post and The Free Thought Project. The Daily Caller, Infowars, and The Real Strategy all have word-for-word identical articles written by Luke Rosiak. The Waking Times and Activist Post have articles written by Alex Pietrowski. Salon and Media Matters for America have multiple articles written by Cydney Hargis. In satire news, Rodger Freed writes the same articles for The Spoof, Humor Times, and Glossy News, usually publishing on The Spoof first. In another example, a series of stories about a ``George Soros backed Trump resistance fund'' was published word for word on both Infowars and Fox News, all written by Joe Schoffstal. None of these articles clearly attributes one source or the other, despite being exact copies, and each article appeared on Infowars days prior to its publication on Fox News. This example is particularly surprising, as Fox News captures a wide, mainstream audience and Infowars is a well-known conspiracy source, creating a clear path between a well-established news source and conspiracy/false news. Note that while many of these articles are clearly written by the same author, as the authors state that they contribute to both sources, there are others that may simply be copied with the author's name included. For example, The D.C. Clothesline seems to have many authors that contribute elsewhere, but there is no indication in the authors' biographical information (on the other sources they contribute to) that they contribute to The D.C. Clothesline. Hence, while these authors write for multiple sources, it is unclear whether they knowingly contribute to The D.C. Clothesline. {\bf No Attribution.} We also see several sources, particularly those who have been caught spreading false news in the past, copying news articles with no citation.
In particular, we found that both Veterans Today and Infowars copied multiple articles directly from Russia Today (RT) with no citation, similar to the behavior pointed out by Jane Lytvynenko~\cite{lytvynenko}. \subsection{Issue framing and narrative slant} In addition to ``big picture'' analysis, NELA2017 can also be used to study specific events. To illustrate this, we explore differing narratives reported around a specific event. While many sources may cover the same topic, they may not report all sides of a story or may have an imbalanced quantity of coverage~\cite{lin2011more}. This type of coverage bias has been explored in terms of political party slant in U.S. Congress stories~\cite{lin2011more}, and similar notions of bias, including framing and agenda-setting bias, have been explored in various media studies~\cite{entman2007framing}~\cite{pan1993framing}. There is more recent work on ideological bias in news stories caused by journalists' Twitter networks~\cite{wihbey2017exploring}. However, there is little to no recent work on the specific dynamics of differing news narratives. Further, since the NELA2017 data set covers many different political events, it is ideal for tracking publishing and reporting behavior over a wide range of time, something that has also not been explored in the literature. To provide an example of event extraction from the NELA2017 data set, we perform a simple extraction technique on two different events: \begin{enumerate*} \item the U.S. national anthem protests~\footnote{\url{en.wikipedia.org/wiki/U.S._national_anthem_protests}}, \item the dismissal of James Comey~\footnote{\url{en.wikipedia.org/wiki/Dismissal_of_James_Comey}}\end{enumerate*}. The U.S. national anthem protests were protests in which athletes, specifically NFL players, kneeled during the singing of the U.S. national anthem to protest police brutality and racial inequality in the U.S. These protests began in 2016, but became widespread in late 2017 as U.S. President Donald Trump called for NFL team owners to fire any player who kneeled. This event caused a debate over whether NFL players were being disrespectful to the U.S. flag and military. Hence, two sides of the story emerged: racial inequality and disrespecting the military. A similar two-sided story is the dismissal of James Comey. James Comey was the 7th director of the Federal Bureau of Investigation (FBI), who was dismissed by U.S. President Donald Trump in May 2017. This dismissal came at a controversial time, as President Trump's administration was under FBI investigation for alleged Russian interference in the 2016 election. At the same time, James Comey had been widely criticized for the way he handled the earlier Hillary Clinton email controversy~\footnote{\url{en.wikipedia.org/wiki/Hillary_Clinton_email_controversy}}. The Trump administration publicly stated that Comey's dismissal was due to the recommendation of then-Attorney General Jeff Sessions and Comey's handling of the earlier email investigation. The media framed the question as two sides: did President Trump dismiss Comey because of the Russia investigation or because of the Clinton email investigation? Therefore, in both of these events there are clear sides to which news sources may or may not give fair coverage. To do this analysis, we first select the dates of each event and extract all articles from several days before and after the event.
With these articles extracted, we filter by a set of event keywords and manually ensure that all extracted articles are reporting on the appropriate event. We then modify a simple slant score technique used in~\cite{lin2011more} to quantify the narrative slant. In~\cite{lin2011more}, the slant score is measured by the log-odds-ratio of the number of times source $i$ refers to party $k$ (specifically, refers to a member of said party), where the baseline probability is 50\% (meaning an article has a 50-50 chance to refer to each party). We perform a similar analysis, but instead of counting party references, we count narrative keyword references. These narrative keywords are manually generated. While there are more sophisticated methods to measure bias, this method provides a basic understanding of coverage bias within these stories; a minimal sketch of the computation is given below.

\begin{figure*} \begin{center} \hspace*{-0.1in}\begin{tabular}{cccc} \\ \multicolumn{2}{c}{(a) NFL Protests (Sept 20th 2017 to Sept 30th 2017)} & \multicolumn{2}{c}{(b) Comey Firing (May 10th 2017 to May 15th 2017)} \\ \includegraphics[width=115pt,keepaspectratio=true]{figures/flag_narrative_references_to_issue_slant.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/flag_narrative_neg_to_issue_slant.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/comey_narrative_references_to_issue_slant.png} & \includegraphics[width=120pt,keepaspectratio=true]{figures/comey_narrative_neg_to_issue_slant.png} \\ \includegraphics[width=115pt,keepaspectratio=true]{figures/flag_narrative_subj_to_issue_slant.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/flag_narrative_we_to_issue_slant.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/comey_narrative_subj_to_issue_slant.png} & \includegraphics[width=115pt,keepaspectratio=true]{figures/comey_narrative_we_to_issue_slant.png} \\ \end{tabular} \vspace*{-0.2in} \caption{Issue slant score computed using the log-odds-ratio of narrative keywords. Each dot in the scatter plots represents a source, the x-axis is the slant score, and the y-axis is the overall number of references or a feature from the NELA2017 feature set. A score of 0, indicated by the vertical line, corresponds to perfectly balanced coverage. In (a), sources with higher scores report more about what the players are protesting (police brutality) than about the disrespect for the flag and military (and vice versa for lower scores). In (b), sources with higher scores report more about the Russia collusion than about the Clinton email scandal (and vice versa for lower scores). \label{slants}} \end{center} \end{figure*}

{\bf U.S. national anthem protests.} For the U.S. national anthem protests, we use the following keywords for side 1: \textit{Kaepernick, racism, race, racist, police, brutality, African American, and prejudice}, and the following for side 2: \textit{respect, stand, disrespect, flag, troops, military}, and \textit{anti-American}. In Figure~\ref{slants}a, we show a scatter plot in which each point represents a source and the x-axis shows the computed slant score. If a source reports both sides equally, it receives a slant score of 0 (indicated by the vertical dotted line). In this case, the higher the score, the more coverage of side 1 (police brutality), and the lower the score, the more coverage of side 2 (disrespect of the flag). On the y-axis we show either the overall number of keyword references or a feature selected from the NELA2017 feature set.
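To make this computation concrete, the following is a minimal Python sketch of the keyword-based slant score described above. It is an illustrative reconstruction rather than the released NELA2017 feature code: the regex-based word matching, the 0.5 smoothing constant, and the toy example articles are our own assumptions, while the keyword lists are the side-1 and side-2 lists given in the text.

{\small
\begin{verbatim}
import math
import re

# Side-1 and side-2 keyword lists for the U.S. national anthem protests
# (copied from the text above); matching and smoothing choices are assumptions.
SIDE1 = ["kaepernick", "racism", "race", "racist", "police",
         "brutality", "african american", "prejudice"]
SIDE2 = ["respect", "stand", "disrespect", "flag", "troops",
         "military", "anti-american"]

def keyword_count(text, keywords):
    """Count case-insensitive, word-bounded occurrences of the keywords."""
    text = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(k) + r"\b", text))
               for k in keywords)

def slant_score(articles, side1=SIDE1, side2=SIDE2, smoothing=0.5):
    """Log-odds-ratio of side-1 vs side-2 keyword references for one source.

    0 corresponds to balanced coverage (the 50-50 baseline); positive values
    mean more side-1 references, negative values more side-2 references.
    The smoothing constant avoids log(0) when one side is never mentioned.
    """
    c1 = sum(keyword_count(a, side1) for a in articles)
    c2 = sum(keyword_count(a, side2) for a in articles)
    return math.log((c1 + smoothing) / (c2 + smoothing)), c1 + c2

# Toy example: a source whose articles lean toward the "respect the flag"
# framing receives a negative score.
score, total_refs = slant_score([
    "Players who kneel disrespect the flag and the military.",
    "Fans want athletes to stand for the anthem and respect the troops.",
])
print(score, total_refs)
\end{verbatim}
}

In the full analysis the per-source article lists come from the event-extraction step described above; the smoothing constant only matters for sources that mention one side rarely or not at all.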
We can see right away that there are sources across the spectrum of coverage slant, though more of them lean toward side 1 (police brutality). Despite more sources covering side 1, we see more extreme slant (further from balanced) toward side 2 (disrespect of the flag), meaning these sources mention keywords corresponding to side 2 much more than side 1. When inspecting the sources with this extreme slant, we see several cases with no mention of side 1 at all, whereas even the sources most slanted toward side 1 still mention the debate over respecting the flag. Of those sources that only report the disrespecting-the-flag narrative, we see that they are more subjective in their writing and slightly more negative than the sources that are near balanced. On the other side, those who report more of the police brutality message use first-person plural words (like we, us, or our) more.

{\bf Dismissal of James Comey.} For the dismissal of James Comey, we use the following keywords for side 1: \textit{Russia, Trump-Russia, collusion, election}, and \textit{meddling}, and the following for side 2: \textit{Hilary, Clinton, Democrats, dems, email}, and \textit{server}. In Figure~\ref{slants}b, we show the same four scatter plots as in Figure~\ref{slants}a discussed above. In this case, the higher the score, the more coverage of side 1 (Russia), and the lower the score, the more coverage of side 2 (Clinton emails). In this story, we can see that the vast majority of sources give balanced coverage, receiving slant scores close to 0. In fact, there is only one source that reported the event in an extremely one-sided way. When inspecting this source, we find that it did not mention anything about the Russia investigation, only the Clinton email scandal. This one extreme source was much more negative, more subjective, and used first-person plurals more than the other sources.

\section{Conclusions}
In this paper, we presented the NELA2017 data set, which contains articles from 92 news sources over 7 months, as well as 130 content-based features that have been used throughout the news literature. Together with the data set, we include the source code for computing the features (\url{goo.gl/JsxGt}). We also illustrated potential research directions with a number of use cases, showing the data set's use in studying individual sources, comparing sources to each other, and studying sources over a specific event. We are continuing to expand and collect data for future releases. As we update, we will release the data set in versions; thus, NELA2017 will remain an unchanged version corresponding to the metadata in this paper. All data can be requested at \url{nelatoolkit.science}. \bibliographystyle{aaai} \input{references.bbl} \end{document}
\section{Introduction} In \cite{ParSht2}, two of the authors of this paper have obtained the complete power asymptotic expansion of the integrated density of states of Schr\"odinger operators \begin{equation*} H= -\Delta +V \end{equation*} acting in $ \mathbb R^d$ assuming that the real-valued potential $V$ is either smooth periodic, or generic quasi-periodic, or belongs to a reasonably wide class of almost-periodic functions (see \cite{ParSht2} for a complete set of conditions on $V$ as well as the previous history of the subject). The main aim of the current paper is to extend the results of \cite{ParSht2} to a more general class of operators. We give a detailed description of this new class in the next section; here, we list the main properties of the operators belonging to it. (i) We consider perturbations of the Laplacian, or any positive power of the Laplacian. More precisely, we work with operators of the form \begin{equation*} H= (-\Delta)^w +B, \end{equation*} where $B$ is a differential or pseudo-differential operator of order $\kappa<2w$. Here $H$ is self-adjoint and belongs to the standard algebra of almost-periodic pseudo-differential operators, see e.g. \cite{Shu0} and \cite{Shu}. (ii) If $B$ is a differential operator, we assume that its coefficients satisfy the same conditions the potential $V$ had to satisfy in \cite{ParSht2} (for example, the coefficients can be smooth periodic, or generic quasi-periodic functions). In particular, periodic magnetic Schr\"odinger operators are covered by our results. (iii) If $B$ is pseudo-differential, we assume that it is a classical pseudo-differential operator, or, more generally, the operator of classical type. By the latter we mean that the symbol of $B$ admits an asymptotic decomposition in powers of $|\bxi|$ when $|\bxi|\to\infty$; however, these powers do not have to be integer. Note that operators with the relativistic kinetic energy $\sqrt{(-i\nabla+ \mathbf{A})^2+ m^2}$ are admissible for (almost-)periodic smooth $\mathbf{A}$ and $m\geqslant 0$. Under these assumptions we prove that the integrated density of states $N(\lambda)$ has the complete asymptotic expansion \eqref{eq:main_thm1}. This expansion contains powers of $\lambda$ and powers of $\ln\lambda$; the values of the exponents in the powers of $\lambda$ depend on the form of $B$, whereas logarithms are raised to integer powers smaller than $d$. Sometimes (as in the case of the magnetic Schr\"odinger operator) we can guarantee that the logarithmic terms are absent (i.e., the corresponding coefficients are zero). \begin{rem} The main reason why we need assumption (iii) is to match asymptotic expansions in different intervals $I_n$ in Section \ref{reduction section}. If we did not have assumption (iii), we would have obtained the asymptotic expansions containing the general `phase volumes' (like in \cite{Sob}), and it is not clear how to relate the expansions obtained in different intervals $I_n$. \end{rem} One immediate and slightly unexpected corollary of \eqref{eq:main_thm1} is as follows: \begin{cor} Suppose, $H =(-\Delta)^w +B$ with $B$ being {\it periodic} and either differential, or pseudo-differential operator of classical type. Then for sufficiently large $\lambda$ the spectrum of $H$ is purely absolutely continuous. \end{cor} \begin{proof} Since $H$ is periodic, the general Floquet-Bloch theory implies that the spectrum of $H$ is absolutely continuous with the possible exception of eigenvalues of finite multiplicity. 
If $\lambda$ is such an eigenvalue, the integrated density of states has a jump of at least $|\Gamma^{\dagger}|$ at $\lambda$, where $\Gamma^{\dagger}$ is the lattice dual to the lattice of periods of $H$. Due to \eqref{eq:main_thm1}, this cannot happen for large $\lambda$. \end{proof} The approach of our paper is similar to that of \cite{ParSht2}. In particular, we use the method of gauge transform developed in \cite{Sob}, \cite{Sob1}, and \cite{ParSob}. Nevertheless, there are plenty of new (mostly technical, but sometimes ideological) difficulties arising because the operator $B$ is no longer bounded and no longer local. One example of the new methods employed in this paper is the proof of Lemma \ref{key_lemma}: not only does this proof work for unbounded $B$, it also makes Condition D from \cite{ParSht2} redundant. The biggest increase in technical difficulties comes in Section \ref{contribution section}, where we express the contribution to the density of states from various regions in the momentum space as certain complicated integrals and then try to compute these integrals. As a result, our paper is technically more complicated than \cite{ParSht2} (which already was quite difficult to read). Thus, we have reluctantly abandoned the idea of making our paper completely self-contained; we will skip all parts of the argument which are identical (or close) to the corresponding parts of \cite{ParSht2} and refer the reader to that paper. Nevertheless, we will present all the definitions and properties of the important objects. \begin{rem}\label{no proofs remark} Throughout the article we employ the convention that, if some statement is given without a proof, then an analogous statement can be found in \cite{ParSht2}, and the proof is the same up to obvious modifications. It goes without saying that the reader is strongly encouraged to read the article \cite{ParSht2} first, before attempting to read this paper. \end{rem}

{\bf Acknowledgements.} SM and LP were partially supported by the EPSRC grant EP/F029721/1. SM was also supported by the Lundbeck Foundation and the European Research Council under the European Community's Seventh Framework Program (FP7/2007--2013)/ERC grant agreement 202859. RS was partially supported by the NSF grant DMS-0901015. The authors would like to thank Gerassimos Barbatis for participation in preliminary discussions which led to this paper. SM would like to express his thanks to the University of Athens and ESI Vienna, where part of this work was done, for their hospitality.

\section{Preliminaries}\label{introduction section} For $w> 0$ we consider the operator \begin{equation}\label{eq:Sch} H =(-\Delta)^w+ B \end{equation} acting in $\plainL2( \mathbb R^d)$. The action of the pseudo-differential operator $B$ on functions from the Schwarz class $\textup{{\textsf{S}}}( \mathbb R^d)$ is defined by the formula \begin{equation*} (Bf)(\bx):= (2\pi)^{-d/2} \int b(\bx, \bxi)e^{i\bxi \bx} (\mathcal Ff)(\bxi) d\bxi. \end{equation*} Here $\mathcal F$ is the Fourier transform \begin{equation*} (\mathcal F f)(\bxi):= (2\pi)^{-d/2} \int e^{-i\bxi\bx}f(\bx) d\bx,\qquad \bxi\in \mathbb R^d, \end{equation*} the integration is over $ \mathbb R^d$, and $b$ is the symbol of $B$.
We assume that $b(\bx, \bxi)$, $\bx, \bxi\in \mathbb R^d$, is a smooth complex-valued function, almost-periodic in $\bx$, and, moreover, that for some countable set $\Bth$ of frequencies we have \begin{equation}\label{eq:sumf1} b(\bx, \bxi) = \sum\limits_{\bth\in\Bth}\hat{b}(\bth, \bxi)\be_{\bth}(\bx) \end{equation} where \begin{equation}\label{e_theta} \be_{\bth}(\bx):=e^{i\bth\bx}, \end{equation} and \begin{equation*} \hat{b}(\bth, \bxi):=\BM_\bx\big(b(\bx,\bxi)\be_{-\bth}(\bx)\big) \end{equation*} are the Fourier coefficients of $b$ (here $\BM_\bx$ is the mean of an almost-periodic function of $\bx$). We assume that the series \eqref{eq:sumf1} converges absolutely, and that $b$ satisfies the symmetry condition \begin{equation*} \hat b(\bth, \bxi) = \overline{\hat b(-\bth, \bxi+\bth)}, \end{equation*} so that the operator $B$ is formally self-adjoint. For $R>0$ let $\Id_{\mathcal B_R}$ be the indicator function of the ball $\mathcal B_R:= \big\{\bxi: |\bxi|< R\big\}$. We assume that there exists a constant $C_0$ such that \begin{equation*} \|b\Id_{\mathcal B_{C_0}}\|_{L_\infty( \mathbb R^d\times \mathbb R^d)}< \infty, \end{equation*} and that \begin{equation}\label{symbol series} \big(1- \Id_{\mathcal B_{C_0}}(\bxi)\big)b(\bx, \bxi)= \sum_{\iota\in J}|\bxi|^\iota b_\iota\big(\bx, \bxi/|\bxi|\big), \end{equation} where $J$ is a discrete subset of $(-\infty, \varkappa]$ with \begin{equation}\label{condition on kappa} 0\leqslant \varkappa <2w \end{equation} (the first inequality here is assumed for convenience without loss of generality), and $b_\iota(\bx, \boldeta)$ are smooth functions on $ \mathbb R^d\times\mathbb S^{d- 1}$, almost-periodic with respect to $\bx$. Let \begin{equation}\label{tilde w} \tilde w :=(w+ \varkappa)/2. \end{equation} We introduce $\chi\in \plainC\infty( \mathbb R_+)$ so that \begin{equation}\label{tau function} \chi(r)= \begin{cases}r, & r\geqslant C_0,\\ 0, &r\leqslant C_0/2.\end{cases} \end{equation} \begin{rem} Increasing $C_0$ if necessary, we can guarantee that for any $\widetilde J\subset J$ and any $\widetilde\Bth \subset\Bth$ the operator $\widetilde B$ with the symbol $\tilde b$ given by \begin{equation}\label{tilde b} \tilde b(\bx, \bxi):= \sum_{\iota\in \widetilde J}\Big(\chi\big(|\bxi|\big)\Big)^\iota\sum_{\bth\in \widetilde\Bth}\hat b_\iota\big(\bth, \bxi/|\bxi|\big)\be_{\bth}(\bx) \end{equation} satisfies \begin{equation}\label{tilde B estimate} (-\Delta)^{\tilde w}- |\widetilde B|\geqslant 0. \end{equation} \end{rem} We also assume that the coefficients in the expansion \begin{equation}\label{b_iota} b_\iota(\bx, \boldeta)= \sum_{\bth\in\Bth}\hat{b}_\iota(\bth, \boldeta)\be_{\bth}(\bx), \qquad \bx\in \mathbb R^d, \quad \boldeta\in\mathbb S^{d- 1}, \quad \iota\in J \end{equation} can be represented by a series \begin{equation}\label{series for b} \hat{b}_\iota(\bth, \eta_1, \dots, \eta_d)= \sum_{\tau\in \mathbb N_0^d}\hat{b}_\iota^{(\tau)}(\bth)\eta_1^{\tau_1}\cdots\eta_d^{\tau_d} \end{equation} which converges absolutely in a ball in $ \mathbb R^d$ of radius greater than one. Under the above assumptions $H$ is a self-adjoint operator on the Sobolev space $\plainH{2w}( \mathbb R^d)$. We are interested in the asymptotic behaviour of its integrated density of states $N(\lambda)$ as the spectral parameter $\lambda$ tends to infinity. \begin{defn} Let $e(\lambda;\bx,\by)$ be the kernel of the spectral projection of $H$. We define the integrated density of states as \begin{equation*} N(\lambda) :=\BM_{\bx}\big(e(\lambda;\bx,\bx)\big).
\end{equation*} \end{defn} It was proved in Theorem 4.1 of \cite{Shu} that for differential operators this definition agrees with the traditional one (at least at its continuity points). The following lemma is proved at the end of Section~4 of \cite{ParSht2}. \begin{lem}\label{norms lemma} \begin{enumerate} \item If $A\ge B$, then $N(\lambda; A)\le N(\lambda; B)$. \item Suppose, $A= a(\bx, D)$ and $U= u(\bx, D)$ are two pseudo-differential operators with almost-periodic coefficients. Let the operator $A$ be elliptic and self-adjoint and the operator $U$ be unitary. Then $N(\lambda; A)= N(\lambda; U^{-1}AU)$. \end{enumerate} \end{lem} Without loss of generality we assume that $\Bth$ (recall \eqref{eq:sumf1}) spans $ \mathbb R^d$, contains $\mathbf 0$ and is symmetric about $\mathbf 0$; we also put \begin{equation}\label{eq:algebraicsum} \Bth_k :=\Bth +\Bth +\dots +\Bth \end{equation} (algebraic sum taken $k$ times) and $\Bth_{\infty}:=\cup_k\Bth_k=Z(\Bth)$, where for a set $S\subset \mathbb R^d$ by $Z(S)$ we denote the set of all finite linear combinations of elements in $S$ with integer coefficients. The set $\Bth_\infty$ is countable and non-discrete (unless $B$ is periodic). We will need \medskip \paragraph{\bf Condition A} {\it Suppose that $\bth_1,\dots,\bth_d\in \Bth_\infty$. Then $Z(\bth_1,\dots,\bth_d)$ is discrete.} \medskip It is easy to see that this condition can be reformulated like this: suppose, $\bth_1,\dots,\bth_d\in \Bth_\infty$. Then either $\{\bth_j\}$ are linearly independent, or $\sum_{j=1}^d n_j\bth_j=0$, where $n_j\in\mathbb Z$ and not all $n_j$ are zero. This reformulation shows that Condition A is generic: indeed, if we are choosing frequencies of $b$ one after the other, then at each step we have to avoid choosing a new frequency from a countable set of hyperplanes, and this is obviously a generic restriction. Condition A is obviously satisfied for periodic $B$, but it becomes meaningful if $B$ is quasi-periodic (i.e., if it is a linear combination of finitely many exponentials). If $\Bth$ and $J$ are finite, Condition A is all we need. If, however, any (or both) of these sets is infinite, we need other conditions which describe how well $B$ can be approximated by operators with quasi-periodic symbols. In the proof we are going to work with quasi-periodic approximations of $B$, and we need these conditions to make sure that all estimates in the proof are uniform with respect to these approximations. We introduce \begin{equation*} \textsf{b}_\iota(\bth):= \sup_{|\boldeta|= 1}\big|\hat b_\iota(\bth, \boldeta)\big|, \quad \bth\in \Bth. \end{equation*} \medskip \paragraph{\bf Condition B} {\it Let $k$ be a positive integer. Then there exists $R_0\geqslant C_0$ such that for each $\rho> R_0$ there exist a finite symmetric set $\widetilde\Bth\subset\big(\Bth\cap \mathcal B(\rho^{1/k})\big)$ (where $\mathcal B(r)$ is the ball of radius $r$ centered at $0$) and a finite subset $\widetilde J\subset J$ with \begin{equation}\label{tilde J cardinality} \card \widetilde J\leqslant \rho^{1/k} \end{equation} such that \begin{equation}\label{eq:condB2} \sum_{(\bth, \iota)\in (\Bth\times J)\setminus (\widetilde\Bth\times \widetilde J)}\big(1+ |\bth|^2\big)^{\varkappa/4}|R_0|^{\iota- \varkappa}\textup{\textsf{b}}_\iota(\bth)\leqslant \rho^{- k}. \end{equation}} The last condition we need is a version of the Diophantine condition on the frequencies of $b$. First, we need some definitions.
We fix a natural number $\tilde k$ (the choice of $\tilde k$ will be determined later by the order of the remainder in the asymptotic expansion) and denote $\widetilde\Bth'_{\tilde k}:= \widetilde\Bth_{\tilde k}\setminus\{0\}$ (see \eqref{eq:algebraicsum} for the notation). We say that $\GV$ is a quasi-lattice subspace of dimension $m$, if $\GV$ is the linear span of $m$ linearly independent vectors $\bth_1,\dots,\bth_m$ from $\widetilde\Bth_{\tilde k}$. Obviously, the zero space (which we will denote by $\GX$) is a quasi-lattice subspace of dimension $0$, and $ \mathbb R^d$ is a quasi-lattice subspace of dimension $d$. We denote by $\mathcal V_m$ the collection of all quasi-lattice subspaces of dimension $m$ and put $\mathcal V:=\cup_m\mathcal V_m$. If $\bxi\in \mathbb R^d$ and $\GV$ is a linear subspace of $ \mathbb R^d$, we denote by $\bxi_{\GV}$ the orthogonal projection of $\bxi$ onto $\GV$, and put $\GV^\perp$ to be the orthogonal complement of $\GV$, so that $\bxi= \bxi_{\GV}+ \bxi_{\GV^\perp}$. Let $\GV,\GU\in\mathcal V$. We say that these subspaces are {\it strongly distinct}, if neither of them is a subspace of the other one. This condition is equivalent to stating that if we put $\GW:=\GV\cap\GU$, then $\dim \GW$ is strictly less than the dimensions of $\GV$ and $\GU$. We put $\phi= \phi(\GV, \GU)\in [0, \pi/2]$ to be the angle between them, i.e. the angle between $\GV\ominus\GW$ and $\GU\ominus\GW$, where $\GV\ominus\GW$ is the orthogonal complement of $\GW$ in $\GV$. This angle is non-zero iff $\GV$ and $\GU$ are strongly distinct. We put $s= s(\rho)= s(\widetilde\Bth_{\tilde k}):= \inf\sin\big(\phi(\GV,\GU)\big)$, where the infimum is over all strongly distinct pairs of subspaces from $\mathcal V$, $R= R(\rho):= \sup_{\bth\in\widetilde\Bth_{\tilde k}}|\bth|$, and $r= r(\rho):= \inf_{\bth\in\widetilde\Bth'_{\tilde k}}|\bth|$. Obviously, \begin{equation}\label{R(rho)} R(\rho)= O(\rho^{1/k}), \end{equation} where the implied constant can depend on $k$ and $\tilde k$. \medskip \paragraph{\bf Condition C} {\it For each fixed $k$ and $\tilde k$ the sets $\widetilde\Bth_{\tilde k}$ can be chosen in such a way that for sufficiently large $\rho$ the number of elements in $\widetilde\Bth_{\tilde k}$ satisfies $\card\widetilde\Bth_{\tilde k}\le\rho^{1/k}$ and we have \begin{equation}\label{eq:condC1} s(\rho)\ge\rho^{-1/k} \end{equation} and \begin{equation} r(\rho)\ge\rho^{-1/k}, \end{equation} where the implied constant (i.e. how large $\rho$ should be) can depend on $k$ and $\tilde k$.} \medskip \begin{rem}\label{rem:condB} Note that Condition C is automatically satisfied for quasi-periodic and smooth periodic $B$; see \cite{ParSht2} for further discussion of this condition. \end{rem} Condition A implies the following statement, which will be used crucially in our constructions. \begin{cor}\label{D type corollary} Suppose, $\bth_1, \dots, \bth_{l}\in \widetilde\Bth_{\tilde k}$, $l\leqslant d- 1$. Let $\GV$ be the span of $\bth_1, \dots, \bth_{l}$. Then each element of the set $\widetilde\Bth_{\tilde k}\cap\GV$ is a linear combination of $\bth_1, \dots, \bth_{l}$ with rational coefficients. Since the set $\widetilde\Bth_{\tilde k}\cap\GV$ is finite, this implies that the set $Z(\widetilde\Bth_{\tilde k}\cap\GV)$ is discrete and is, therefore, a lattice in $\GV$. \end{cor} From now on, we always assume that $B$ satisfies all the conditions from this section; we will also denote \begin{equation*} \rho:= \lambda^{1/2w}. \end{equation*} Now we can formulate our main theorem.
\begin{thm}\label{main_thm} Let $H$ be an operator \eqref{eq:Sch} satisfying Conditions {\rm A, B} and {\rm C}. Then for each $K\in \mathbb R$ there exists a finite positive integer $L$ and a finite subset $J_0\subset J$ such that \begin{equation}\label{eq:main_thm1} \begin{split} &N(\rho^{2w})\\ &= \sum_{q= 0}^{d- 1}\sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\sum_{j= 0}^{[K+ d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h]}C_{q\, h\, j}^{\iota_1\cdots \iota_h}\rho^{d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}\ln^q\rho+ O(\rho^{-K}). \end{split} \end{equation} as $\rho\to\infty$. \end{thm} \begin{rem}\label{spurious remark} The powers of $\rho$ present in \eqref{eq:main_thm1} are equal to $d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j$, and the first impression is that there are far too many of them (indeed, a priori the set of all such powers can be dense in $ \mathbb R$, for instance). However, many of these powers are, in fact, spurious (i.e. the corresponding coefficients $C_{q\, h\, j}^{\iota_1\cdots \iota_h}$ are zero). This happens, for example, when $d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j>d$ (for obvious reasons). Equally obviously, these powers do not `multiply' when we increase $K$. This means that if $K_1<K_2$, then expansion \eqref{eq:main_thm1} with $K=K_2$ does not contain extra terms with $d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j>-K_1$, compared to this expansion for $K=K_1$. \end{rem} In the case of magnetic Schr\"odinger operators, Theorem \ref{main_thm} and calculations similar to those of \cite{HitPol} and \cite{ParSht2} imply that most of the terms in \eqref{eq:main_thm1} will indeed disappear: \begin{cor} For each $K\in \mathbb N$ we have: \begin{equation}\label{eq:main_cor1} N(\lambda)=\lambda^{d/2}\bigg(C_d+\sum\limits_{j=1}^{K}e_j\lambda^{-j}+o(\lambda^{-K})\bigg) \end{equation} as $\lambda\to\infty$. \end{cor} \begin{rem} By taking the Laplace transform of \eqref{eq:main_thm1}, one can obtain an asymptotic expansion of the (regularised) heat trace as $t\to 0$. However, it seems that using the approach of \cite{HitPol} and \cite{HitPol1}, it is possible to obtain even stronger results (the pointwise asymptotic expansion of the heat kernel). \end{rem} \begin{rem} Of course, formula \eqref{eq:main_lem1} cannot be differentiated; moreover, we do not even know if in the almost periodic case $N(\lambda)$ is strictly increasing. However, in the periodic Schr\"odinger case there are some results on the high-energy behaviour of the (non-integrated) density of states, see e. g. \cite{MorParPch}. \end{rem} Given Conditions B and C, we want to introduce the following definition. We say that a non-negative function $f= f(\rho)= f(\rho; k, \tilde k)$ satisfies the estimate $f(\rho)\le \rho^{0+}$ (resp. $f(\rho)\ge \rho^{0-}$), if for each positive $\varepsilon$ and for each $\tilde k$ we can achieve $f(\rho)\le \rho^{\varepsilon}$ (resp. $f(\rho)\ge \rho^{-\varepsilon}$) for sufficiently large $\rho$ by choosing parameter $k$ from Conditions B and C sufficiently large. For example, we have \begin{equation}\label{bound on R} R(\rho)\le\rho^{0+}, \end{equation} $\card \widetilde\Bth\le\rho^{0+}$, $s(\rho)\ge\rho^{0-}$, and $r(\rho)\ge\rho^{0-}$. Throughout the paper, we always assume that the value of $k$ is chosen sufficiently large so that all inequalities of the form $\rho^{0+}\le\rho^{\varepsilon}$ or $\rho^{0-}\ge\rho^{-\varepsilon}$ we encounter in the proof are satisfied. The next statement proved in \cite{ParSht2} is an example of how this new notation is used. 
\begin{lem}\label{lem:coefficients} Suppose, $\bth, \bmu_1,\dots,\bmu_d\in\widetilde\Bth'_{\tilde k}$, the set $\{\bmu_j\}$ is linearly independent, and $\bth=\sum_{j=1}^db_j\bmu_j$. Then each non-zero coefficient $b_j$ satisfies \begin{equation*} \rho^{0-}\le |b_j| \le \rho^{0+}. \end{equation*} \end{lem} In this paper, by $C$ or $c$ we denote positive constants, the exact value of which can be different each time they occur in the text, possibly even in the same formula. On the other hand, the constants which are labeled (like $C_1$, $c_3$, etc.) have their values fixed throughout the text. Given two positive functions $f$ and $g$, we say that $f\gtrsim g$, or $g\lesssim f$, or $g=O(f)$ if the ratio $g/f$ is bounded. We say $f\asymp g$ if $f\gtrsim g$ and $f\lesssim g$. We will also need a number of auxiliary constants. Let us choose numbers $\{\a_j\}_{j= 1}^d$, $\b$, $\vartheta$, and $\varsigma$ satisfying \begin{equation}\label{beta and alphas} \max\{1- w + \varkappa/2, 1/2\}< \b< \a_1< \a_2< \cdots < \a_d< \vartheta< \varsigma< 1 \end{equation} (recall \eqref{condition on kappa}), and set \begin{equation}\label{alpha} \a:= \varkappa/\b. \end{equation} \section{Reduction to a finite interval of spectral parameter}\label{reduction section} To begin with, we choose sufficiently large $\rho_0> C_0$ (to be fixed later on) and for $n\in \mathbb N$ put $\rho_n:= 2\rho_{n- 1}= 2^n\rho_0$. We also define the intervals $I_n:= [\rho_n, 4\rho_n]$. The proof of Theorem~\ref{main_thm} will be based on the following lemma: \begin{lem}\label{main_lem} For each $M\in \mathbb R$ there exist $L> 0$ and a finite subset $J_0\subset J$ such that for every $n\in \mathbb N$ and $\rho\in I_n$ \begin{equation}\label{eq:main_lem1} N(\rho^{2w})= \sum_{q= 0}^{d- 1}\sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\sum_{j= 0}^{[\frac{d+ M}{1- \varsigma}]}C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)\rho^{d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}\ln^q\rho+ O(\rho_n^{-M}). \end{equation} Here, $C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)$ are some real numbers satisfying \begin{equation}\label{eq:main_lem2} C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)= O(\rho_n^{-2\b h+ \varsigma j}). \end{equation} The constants in the $O$-terms do not depend on $n$ (but they may depend on $M$). \end{lem} \begin{rem}\label{rem:new1} Note that \eqref{eq:main_lem1} is not a `proper' asymptotic formula, since the coefficients are allowed to grow with $n$ (and, therefore, with $\rho$). \end{rem} Some of the powers of $\rho$ on the right hand side of \eqref{eq:main_lem1} may coincide. To avoid the ambiguity, let us redefine the coefficients $C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)$ in such a way that, for any given values of $q$ and $d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j$, only the coefficient with the minimal possible value of $h$ and maximal possible values of $j$, $\iota_1, \dots, \iota_h$ (in this order) is nonzero. Note that these new coefficients still satisfy \eqref{eq:main_lem2}. Let us prove Theorem \ref{main_thm} assuming that we have proved Lemma \ref{main_lem}. Let $M$ be fixed. Denote the sum on the right hand side of \eqref{eq:main_lem1} by $N_n(\rho^{2w})$.
Then, for $n\geqslant 1$, whenever $\rho\in I_{n-1}\cap I_n=[\rho_n,2\rho_n]$, we have: \begin{equation}\label{difference of Ns} \begin{split} &N_n(\rho^{2w})- N_{n- 1}(\rho^{2w})\\ &= \sum_{q= 0}^{d- 1}\sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\sum_{j= 0}^{[\frac{d+ M}{1- \varsigma}]} t_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)\rho^{d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}\ln^q\rho+ O(\rho_n^{-M}), \end{split} \end{equation} where \begin{equation*} t_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M):= C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)- C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n- 1, M). \end{equation*} On the other hand, since for $\rho\in I_{n- 1}\cap I_n$ we have both $N(\rho^{2w})= N_n(\rho^{2w})+ O(\rho_n^{-M})$ and $N(\rho^{2w})= N_{n- 1}(\rho^{2w})+ O(\rho_n^{-M})$, this implies that \begin{equation}\label{sum is O} \sum_{q= 0}^{d- 1}\sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\sum_{j= 0}^{[\frac{d+ M}{1- \varsigma}]} t_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)\rho^{d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}\ln^q\rho= O(\rho_n^{-M}). \end{equation} \begin{cla} For each combination of indices present on the right hand side of \eqref{difference of Ns} we have: \begin{equation}\label{claim on t} t_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)= O(\rho_n^{j- M- d+ (2w- 2)h- \iota_1- \cdots- \iota_h}\ln^{d- 1- q}\rho_n). \end{equation} \end{cla} \begin{proof} Put $y:= \rho_n/\rho$ and let \begin{equation}\label{taus} \tau_{p\, h\, j}^{\iota_1\cdots \iota_h}(n, M):= \rho_n^{M+ d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}\sum_{q= p}^{d- 1}\binom qp(-1)^p t_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)\ln^{q- p}\rho_n. \end{equation} Then by \eqref{sum is O} for $y\in [1/2, 1]$ \begin{equation}\label{Cramer} P(y):= \sum_{p= 0}^{d- 1}\sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\sum_{j= 0}^{[\frac{d+ M}{1- \varsigma}]}\tau_{p\, h\, j}^{\iota_1\cdots \iota_h}(n, M)y^{j- d+ (2w- 2)h- \iota_1- \cdots- \iota_h}\ln^p y= O(1). \end{equation} Let us denote by $h_1, \dots, h_T$ the functions $y^{j- d+ (2w- 2)h- \iota_1- \cdots- \iota_h}\ln^p y$ entering the sum in \eqref{Cramer} with non-zero coefficients; these functions are linearly independent on the interval $[1/2, 1]$. Therefore, there exist points $y_1,...,y_{T}\in [1/2, 1]$ such that the determinant of the matrix $\big(h_j(y_l)\big)_{j, l= 1}^{T}$ is non-zero. Now \eqref{Cramer} and the Cramer's Rule imply that the values of $\tau_{p\, h\, j}^{\iota_1\cdots \iota_h}(n, M)$ are fractions with a bounded expression in the numerator and a fixed non-zero number in the denominator. Therefore, \begin{equation}\label{tau is O(1)} \tau_{p\, h\, j}^{\iota_1\cdots \iota_h}(n, M)= O(1). \end{equation} Thus, choosing $p= d- 1$ in \eqref{taus}, we obtain \[ t_{d- 1\, h\, j}^{\iota_1\cdots \iota_h}(n, M)= O(\rho_n^{j- M- d+ (2w- 2)h- \iota_1- \cdots- \iota_h}). \] Now we can put $p= d- 2$ into \eqref{tau is O(1)} and obtain \[ t_{d- 1\, h\, j}^{\iota_1\cdots \iota_h}(n, M)= O(\rho_n^{j- M- d+ (2w- 2)h- \iota_1- \cdots- \iota_h}\ln\rho_n). \] Continuing this process until $p= 0$, we obtain \eqref{claim on t}. 
\end{proof} Thus, for $j< M+ d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h$, the series $\sum_{m=0}^\infty t_{q\, h\, j}^{\iota_1\cdots \iota_h}(m, M)$ is absolutely convergent; moreover, for such $j$ we have: \begin{equation*} \begin{split} & C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)= C_{q\, h\, j}^{\iota_1\cdots \iota_h}(0, M)+ \sum_{m= 1}^n t_{q\, h\, j}^{\iota_1\cdots \iota_h}(m, M)\\ &= C_{q\, h\, j}^{\iota_1\cdots \iota_h}(0, M)+ \sum_{m= 1}^\infty t_{q\, h\, j}^{\iota_1\cdots \iota_h}(m, M)+ O(\rho_n^{j- M- d+ (2w- 2)h- \iota_1- \cdots- \iota_h}\ln^{d- 1- q}\rho_n)\\ & =: C_{q\, h\, j}^{\iota_1\cdots \iota_h}(M)+ O(\rho_n^{j- M- d+ (2w- 2)h- \iota_1- \cdots- \iota_h}\ln^{d- 1- q}\rho_n), \end{split} \end{equation*} where we have denoted $C_{q\, h\, j}^{\iota_1\cdots \iota_h}(M):= C_{q\, h\, j}^{\iota_1\cdots \iota_h}(0, M)+ \sum_{m =1}^\infty t_{q\, h\, j}^{\iota_1\cdots \iota_h}(m, M)$. For bigger values of $j$ we use \eqref{eq:main_lem2} and \eqref{beta and alphas} to obtain \begin{equation*} \begin{split}\label{intermediate js} &\sum_{q= 0}^{d- 1}\sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\sum_{\substack{j\geqslant M+ d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h}}^{[\frac{d+ M}{1- \varsigma}]} \big|C_{q\, h\, j}^{\iota_1\cdots \iota_h}(n, M)\big|\rho^{d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}\ln^q\rho\\ &\lesssim\sum_{q= 0}^{d- 1} \sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\rho_n^{\varsigma d+ (2\varsigma- 2\beta- 2\varsigma w+ \varsigma\varkappa)h- (1- \varsigma)M}\ln^q\rho_n\lesssim \rho_n^{\varsigma d- (1- \varsigma)M}\ln^{d- 1}\rho_n. \end{split} \end{equation*} Thus, when $\rho\in I_n$, we have: \begin{equation} \begin{split} N(\rho^{2w})&= \sum_{q= 0}^{d- 1}\sum_{h= 0}^{L}\sum_{\iota_1, \dots, \iota_h\in J_0}\sum_{j= 0}^{[M+ d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h]} C_{q\, h\, j}^{\iota_1\cdots \iota_h}(M)\rho^{d+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}\ln^q\rho\\ &+ O(\rho^{-M}\ln^{d- 1}\rho)+ O(\rho^{\varsigma d- (1- \varsigma)M}\ln^{d- 1}\rho). \end{split} \end{equation*} Since the constants in $O$ terms do not depend on $n$, it is sufficient to choose \[ M:= \big[(\varsigma d+ K)/(1- \varsigma)\big]+ 1 \] to get \eqref{eq:main_thm1} for all $\rho\geqslant \rho_0$. The rest of the paper is devoted to proving Lemma \ref{main_lem}. The first step of the proof is fixing $n$ and fixing large $\tilde k$ and $k$. The precise value of $\tilde k$ will be chosen later; the only restriction on it will be to satisfy inequality \eqref{eq:kM} (it says that the more asymptotic terms we want to have in \eqref{eq:main_lem1}, the bigger $\tilde k$ we need to choose; note that the choice of $\tilde k$ does not depend on $k$). We will have several requirements on how large $k$ should be (most of them will be of the form $\rho_n^{0+}< \rho_n^{\varepsilon}$ or $\rho_n^{0-}>\rho_n^{-\varepsilon}$); each time we have such an inequality, we assume that $k$ is chosen sufficiently large to satisfy it. \begin{rem}\label{k remark} Our choice of $k$ will only depend on $M$, $w$, $\varkappa$, and the constants introduced in \eqref{beta and alphas}. The set $J_0$ in Lemma~\ref{main_lem} can be chosen to be \begin{equation}\label{tilde J condition} J_0:= J\cap [\varkappa -d -M -1, \varkappa]. \end{equation} \end{rem} The first requirement on $k$ we have is that \begin{equation}\label{k fist condition} k> d+ M+ \varkappa(d+ M)/(w- \varkappa) -2w. \end{equation} After fixing $\tilde k$ and $k$ we get $R_0$ from Condition B. 
Then, taking \begin{equation}\label{rho_0 condition} \rho_0\geqslant R_0 \end{equation} and fixing $n$, we choose $\widetilde\Bth$ and $\widetilde J$ so that Conditions B and C are satisfied for $\rho:= 4\rho_n$. Without loss of generality we may assume that $\widetilde J \supset J_0$. Then we introduce an auxiliary pseudo--differential operator $\widetilde B$ with the symbol $\tilde b$ given by \eqref{tilde b}. From now on we prove Lemma~\ref{main_lem} for $B= \widetilde B$ and with $J_0$ replaced by $\widetilde J$. However, in view of \eqref{tilde J cardinality} and \eqref{beta and alphas}, the results with $\widetilde J$ and $J_0$ are equivalent. Afterwards, in Section~\ref{final section} we will prove that the asymptotics \eqref{eq:main_lem1} for the original $B$ follows from Condition B and \eqref{eq:main_lem1} for $\widetilde B$. \section{Pseudo-differential operators}\label{PsDO section} Most of the material in this and several subsequent sections is very similar to the corresponding sections of \cite{ParSht2} and \cite{ParSob}, as are the proofs of most of the statements. Therefore, we will often omit the proofs, instead referring the reader to \cite{ParSht2}, \cite{Sob}, and \cite{ParSob}. \subsection{Classes of PDO's}\label{classes:subsect} Before we define the pseudo-differential operators (PDO's), we introduce the relevant classes of symbols. Let $b = b(\bx, \bxi)$, $\bx, \bxi\in \mathbb R^d$, be an almost-periodic (in $\bx$) complex-valued function and, moreover, for some countable set $\hat{\Bth}$ of frequencies (we always assume that $\hat\Bth$ is symmetric and contains $0$; starting from the middle of this section, $\hat\Bth$ will be assumed to be finite) \begin{equation}\label{eq:sumf} b(\bx, \bxi) = \sum\limits_{\bth\in\hat{\Bth}}\hat{b}(\bth, \bxi)\be_{\bth}(\bx), \end{equation} where \begin{equation*} \hat{b}(\bth, \bxi):=\BM_\bx\big(b(\bx,\bxi)\be_{-\bth}(\bx)\big) \end{equation*} are the Fourier coefficients of $b(\cdot, \bxi)$ (recall that $\BM$ is the mean of an almost-periodic function). We always assume that \eqref{eq:sumf} converges absolutely. Let us now define the classes of symbols we will consider and operators associated with them. For $\bxi\in \mathbb R^d$ let $\langle \bxi \rangle := \sqrt{1+|\bxi|^2}$. We notice that \begin{equation}\label{weight:eq} \langle \bxi + \boldeta\rangle\le 2\langle\bxi\rangle \langle\boldeta\rangle, \ \forall \bxi, \boldeta\in \mathbb R^d. \end{equation} We say that a symbol $b$ belongs to the class $\BS_{\a}= \BS_{\a}(\beta)= \BS_{\a}(\beta, \hat{\Bth})$, if for any $l\ge 0$ and any non-negative $s\in\mathbb Z$ the conditions \begin{equation}\label{1b1:eq} \1 b \1^{(\a)}_{l, s}:= \max_{|\bs| \le s}\sum\limits_{\bth\in\hat{\Bth}}\langle \bth\rangle^{l}\sup_{\bxi}\langle\bxi\rangle^{(-\a+ |\bs|)\beta}\big|\BD_{\bxi}^{\bs}\hat b(\bth, \bxi)\big|< \infty, \quad |\bs|= s_1+ s_2+ \dots+ s_d, \end{equation} are fulfilled. The quantities \eqref{1b1:eq} define norms on the class $\BS_\a$. Note that $\BS_\a$ is an increasing function of $\a$, i.e. $\BS_{\a}\subset\BS_{\gamma}$ for $\a < \gamma$. Given $\bth\in \mathbb R^d$, let us introduce a linear map $\nabla_{\bth}$ on symbols which acts according to the rule \begin{equation}\label{Delta} \widehat{(\nabla_{\bth} a)}(\bphi, \bxi):= \hat a(\bphi, \bxi+ \bth)- \hat a(\bphi, \bxi). \end{equation} If the Fourier transform of the symbol is factorized, i.e. 
\[ \hat a(\bphi, \bxi)= \prod_{q= 1}^Q\hat a_q(\bphi, \bxi), \] then the action of $\nabla_{\bth}$ can be written as a sum of actions on each factor separately: \begin{equation}\label{nabla of product} \widehat{(\nabla_{\bth} a)}(\bphi, \bxi)= \sum_{q= 1}^Q\prod_{l= 1}^{q- 1}\hat a_l(\bphi, \bxi+ \bth)\widehat{(\nabla_{\bth} a_q)}(\bphi, \bxi)\prod_{s= q- 1}^Q\hat a_s(\bphi, \bxi). \end{equation} For later reference we mention here the following convenient bound that follows from definition \eqref{1b1:eq} and property \eqref{weight:eq}: \begin{equation} \sum\limits_{\bth\in\hat{\Bth}}\langle\bth\rangle^{l}\sup_{\bxi}\, \langle\bxi\rangle^{(-\a+ s+ 1)\b}\Big(\big|\BD^{\bs}_{\bxi}\widehat{(\nabla_{\boldeta} b)}(\bth, \bxi)\big|\Big)\le C\1 b\1^{(\a)}_{l, s+ 1} \langle\boldeta\rangle^{|\a- s- 1|\b} |\boldeta|, \ s= |\bs|, \label{differ:eq} \end{equation} with a constant $C$ depending only on $\a, s$, and $\beta$. The estimate \eqref{differ:eq} implies that for all $\boldeta$ with $|\boldeta|\le C$ we have a uniform bound \begin{equation* \1 \nabla_{\boldeta} b\1^{(\a-1)}_{l, s}\le C \1 b\1^{(\a)}_{l, s+1}|\boldeta|. \end{equation*} Now we define the PDO $\op(b)$ in the usual way: \begin{equation}\label{eq:deff} \op(b)u(\bx) = (2\pi)^{-d/2} \int b(\bx, \bxi) e^{i\bxi \bx} (\mathcal Fu)(\bxi) d\bxi, \end{equation} the integral being over $ \mathbb R^d$. Under the condition $b\in\BS_\a$ the integral on the r.h.s. is clearly finite for any $u$ from the Schwarz class $\textup{{\textsf{S}}}( \mathbb R^d)$. Moreover, the property $b\in \BS_0$ guarantees the boundedness of $\op(b)$ in $\plainL2( \mathbb R^d)$, see Proposition~\ref{bound:prop}. Unless otherwise stated, from now on $\textup{{\textsf{S}}}( \mathbb R^d)$ is taken as a natural domain for all PDO's when they act in $\plainL2( \mathbb R^d)$. Applying the standard regularization procedures to definition \eqref{eq:deff} (see, e.g., \cite{Shu0}), we can also consider the action of $\op(b)$ on the exponentials $\be_{\bnu}$, $\bnu\in \mathbb R^d$. Namely, we have \begin{equation} \op(b)\be_{\bnu}=\sum_{\bth\in\hat\Bth}\hat{b}(\bth, \bnu)\be_{\bnu+\bth}. \end{equation*} This action can be extended by linearity to all quasi-periodic functions (i.e. finite linear combinations of $\be_{\bnu}$ with different $\bnu$). By taking the closure, we can extend this action of $\op(b)$ to the Besicovitch space $\textup{{\textsf{B}}}_2( \mathbb R^d)$. This is the space of all formal sums \begin{equation*} \sum_{j=1}^\infty a_{j}\be_{\bth_j}(\bx), \quad\textrm{with}\quad \sum_{j=1}^\infty |a_{j}|^2<+\infty. \end{equation*} It is known (see \cite{Shu0}) that the spectra of $\op(b)$ acting in $\plainL2( \mathbb R^d)$ and $\textup{{\textsf{B}}}_2( \mathbb R^d)$ are the same, although the types of the spectra can be entirely different. It is very convenient, when working with the gauge transform constructions, to assume that all the operators involved act in $\textup{{\textsf{B}}}_2( \mathbb R^d)$, although in the end we will return to operators acting in $\plainL2( \mathbb R^d)$. This trick (working with operators acting in $\textup{{\textsf{B}}}_2( \mathbb R^d)$) is similar to working with fibre operators in the periodic case in the sense that we can freely consider the action of an operator on one, or finitely many, exponentials \eqref{e_theta}, despite the fact that these exponentials do not belong to our original function space. 
Moreover, if the order $\alpha=0$ then by continuity this action can be extended to all of $\textup{{\textsf{B}}}_2( \mathbb R^d)$, and the extension has the same norm as $\op(b)$ acting in $\plainL2$ (see \cite{Shu0}). Thus, in what follows, when we speak about a pseudo-differential operator with almost-periodic symbol acting in $\textup{{\textsf{B}}}_2$, we mean that its domain is either whole $\textup{{\textsf{B}}}_2$ (when the order is non-positive), or the space of all quasi-periodic functions (for operators with positive order). And, when we make a statement about the norm of a pseudo-differential operator with almost-periodic symbol, we will not specify whether the operator acts in $\plainL2( \mathbb R^d)$ or $\textup{{\textsf{B}}}_2( \mathbb R^d)$, since these norms are the same. \subsection{Some basic results on the calculus of almost-periodic PDO's} We begin by listing some elementary results for almost-periodic PDO's. The proofs are very similar (with obvious changes) to the proof of analogous statements in \cite{Sob}. \begin{prop}\label{bound:prop} Suppose that $b$ satisfies \eqref{eq:sumf} and that $\1 b\1^{(0)}_{0, 0}<\infty$. Then $\op(b)$ is bounded in both $\plainL2( \mathbb R^d)$ and $\textup{{\textsf{B}}}_2( \mathbb R^d)$ and $\big\|\op(b)\big\|\le \1 b \1^{(0)}_{0, 0}$. \end{prop} In what follows, {\it if we need to calculate a product of two (or more) operators with some symbols $b_j\in\BS_{\a_j}(\hat{\Bth}_j)$ we will always consider that $b_j\in\BS_{\a_j}(\sum_j\hat{\Bth}_j)$ where, of course, all extra terms are assumed to have zero coefficients in front of them}. Since $\op(b) u\in\textup{{\textsf{S}}}( \mathbb R^d)$ for any $b\in\BS_{\a}$ and $u\in \textup{{\textsf{S}}}( \mathbb R^d)$, the product $\op(b) \op(g)$, $b\in \BS_{\a}(\hat{\Bth}_1), g\in \BS_{\gamma}(\hat{\Bth}_2)$, is well defined on $\textup{{\textsf{S}}}( \mathbb R^d)$. A straightforward calculation leads to the following formula for the symbol $b\circ g $ of the product $\op(b)\op(g)$: \begin{equation*} (b\circ g)(\bx, \bxi) = \sum_{\bth\in\hat{\Bth}_1,\, \bphi\in\hat{\Bth}_2} \hat b(\bth, \bxi +\bphi) \hat g(\bphi, \bxi) e^{i(\bth+\bphi)\bx}, \end{equation*} and hence \begin{equation}\label{prodsymb:eq} \widehat{(b\circ g)}(\boldsymbol\chi, \bxi) = \sum_{\bth +\bphi = \boldsymbol\chi} \hat b (\bth, \bxi +\bphi) \hat g(\bphi, \bxi),\ \boldsymbol\chi\in\hat{\Bth}_1+\hat{\Bth}_2,\ \bxi\in \mathbb R^d. \end{equation} We have \begin{prop}\label{product:prop} Let $b\in\BS_{\a}(\hat{\Bth}_1)$,\ $g\in\BS_{\gamma}(\hat{\Bth}_2)$. Then $b\circ g\in\BS_{\a+\gamma}(\hat{\Bth}_1+\hat{\Bth}_2)$ and \begin{equation*} \1 b\circ g\1^{(\a+\gamma)}_{l,s} \le C \1 b\1^{(\a)}_{l,s} \1 g\1^{(\gamma)}_{l+(|\a|+s)\beta,s}, \end{equation*} with the constant $C$ depending only on $l$, $\alpha$, and $s$. \end{prop} We are also interested in the estimates for symbols of commutators. For PDO's $A, \Psi_l, \ l = 1, 2, \dots ,N$, denote \begin{gather*} \ad(A; \Psi_1, \Psi_2, \dots, \Psi_N):= i\bigl[\ad(A; \Psi_1, \Psi_2, \dots, \Psi_{N-1}), \Psi_N\bigr],\\ \ad(A; \Psi):= i[A, \Psi],\quad \ad^N(A; \Psi):= \ad(A; \Psi, \Psi, \dots, \Psi),\quad \ad^0(A; \Psi):= A. \end{gather*} For the sake of convenience, we use the notation $\ad(a; \psi_1, \psi_2, \dots, \psi_N)$ and $\ad^N(a, \psi)$ for the symbols of multiple commutators. Let \[ \supp\hat b:= \big\{\bth\in \mathbb R^d: \hat b(\bth, \cdot)\not\equiv 0\big\}. 
\] It follows from \eqref{prodsymb:eq} that the Fourier coefficients of the symbol $\ad(b,g)$ are given by \begin{equation}\label{comm:eq} \widehat{\ad(b, g)}(\boldsymbol\chi, \bxi)= i\!\!\!\sum_{\bth\in (\supp\hat b)\cup(\boldsymbol\chi- \supp\hat g)}\!\!\!\bigl[\widehat{(\nabla_{\boldsymbol\chi- \bth} b)}(\bth, \bxi)\hat g(\boldsymbol\chi- \bth, \bxi)- \hat b(\bth, \bxi)\widehat{(\nabla_{\bth}g)}(\boldsymbol\chi- \bth, \bxi)\bigr]. \end{equation} \begin{prop}\label{commut0:prop} Let $b\in \BS_{\a}(\hat{\Bth})$ and $g_j\in\BS_{\gamma_j}(\hat{\Bth}_j)$,\ $j = 1, 2, \dots, N$. Then\\ $\ad(b; g_1, \dots, g_N) \in\BS_{\gamma}(\hat{\Bth}+\sum_j\hat{\Bth}_j)$ with $$ \gamma = \a+\sum_{j=1}^N(\gamma_j-1), $$ and \begin{equation* \1 \ad(b; g_1, \dots, g_N)\1^{(\gamma)}_{l,s} \le C \1 b\1^{(\a)}_{p,s +N} \prod_{j =1}^N \1 g_j\1^{(\gamma_j)}_{p, s +N -j +1}, \end{equation*} where $C$ and $p$ depend on $l, s, N, \a$ and $\gamma_j$. \end{prop} \section{Resonant regions} We now define resonant regions and mention some of their properties. This material is essentially identical to Section~5 of \cite{ParSht2}, where the reader can find the proofs of all the statements of this section. Recall the definition of the set $\Bth= \widetilde\Bth$ as well as of the quasi-lattice subspaces from Section~\ref{introduction section}. As before, by $\Bth_{\tilde k}$ we denote the algebraic sum of $\tilde k$ copies of $\Bth$; remember that we consider $\tilde k$ fixed. We also put $\Bth'_{\tilde k}:=\Bth_{\tilde k}\setminus\{0\}$. For each $\GV\in\mathcal V$ we put $S_{\GV}:= \big\{\bxi\in\GV,\ |\bxi|=1\big\}$. For each non-zero $\bth\in \mathbb R^d$ we put $\bn(\bth):=\bth|\bth|^{-1}$. Let $\GV\in\mathcal V_m$. We say that $\GF$ is a {\it flag} generated by $\GV$, if $\GF$ is a sequence $\GV_j\in\mathcal V_j$ ($j= 0, 1, \dots, m$) such that $\GV_{j- 1}\subset\GV_j$ and $\GV_m= \GV$. We say that $\{\bnu_j\}_{j= 1}^m$ is a sequence generated by $\GF$ if $\bnu_j\in\GV_j\ominus\GV_{j- 1}$ and $\|\bnu_j\|= 1$ (obviously, this condition determines each $\bnu_j$ up to multiplication by $-1$). We denote by $\mathcal F(\GV)$ the collection of all flags generated by $\GV$. We put \begin{equation}\label{L_j} L_j:= \rho_n^{\alpha_j}, \end{equation} recall \eqref{beta and alphas}. Let $\bth\in\Bth'_{\tilde k}$. The {\it resonant region} generated by $\bth$ is defined as \begin{equation}\label{eq:1} \Lambda(\bth):= \Big\{\bxi\in \mathbb R^d,\ \big|\langle\bxi, \bn(\bth)\rangle\big|\le L_1\Big\}. \end{equation} Suppose, $\GF\in\mathcal F(\GV)$ is a flag and $\{\bnu_j\}_{j= 1}^m$ is a sequence generated by $\GF$. We define \begin{equation}\label{eq:2} \Lambda(\GF):= \Big\{\bxi\in \mathbb R^d,\ \big|\langle\bxi,\bnu_j\rangle\big|\le L_j\Big\}. \end{equation} If $\dim\GV= 1$, definition \eqref{eq:2} is reduced to \eqref{eq:1}. Obviously, if $\GF_1\subset\GF_2$, then $\Lambda(\GF_2)\subset\Lambda(\GF_1)$. Suppose, $\GV\in\mathcal V_j$. We denote \begin{equation} \Bxi_1(\GV) :=\cup_{\GF\in\mathcal F(\GV)}\Lambda(\GF). \end{equation*} Note that $\Bxi_1(\GX)= \mathbb R^d$ and $\Bxi_1(\GV)= \Lambda(\bth)$ if $\GV\in\mathcal V_1$ is spanned by $\bth$. Finally, we put \begin{equation}\label{eq:Bxi} \Bxi(\GV):= \Bxi_1(\GV)\setminus\big(\cup_{\GU\supsetneq\GV}\Bxi_1(\GU)\big)= \Bxi_1(\GV) \setminus\big(\cup_{\GU\supsetneq\GV}\cup_{\GF\in\mathcal F(\GU)}\Lambda(\GF)\big). \end{equation} We call $\Bxi(\GV)$ the resonance region generated by $\GV$. Very often, the region $\Bxi(\GX)$ is called the non-resonance region. 
We, however, will omit using this terminology since we will treat all regions $\Bxi(\GV)$ in the same way. The first set of properties follows immediately from the definitions. \begin{lem}\label{lem:propBUps} (i) We have \begin{equation*} \cup_{\GV\in\mathcal V}\Bxi(\GV) = \mathbb R^d. \end{equation*} (ii) $\bxi\in\Bxi_1(\GV)$ iff $\bxi_{\GV}\in\Omega(\GV)$, where $\Omega(\GV)\subset\GV$ is a certain bounded set (more precisely, $\Omega(\GV) =\Bxi_1(\GV)\cap\GV\subset \mathcal B(m L_m)$ if $\dim\GV =m$). (iii) $\Bxi_1( \mathbb R^d) =\Bxi( \mathbb R^d)$ is a bounded set, $\Bxi( \mathbb R^d)\subset \mathcal B(d L_d)$; all other sets $\Bxi_1(\GV)$ are unbounded. \end{lem} Now we move to slightly less obvious properties. From now on we always assume that $\rho_0$ (and thus $\rho_n$) is sufficiently large. We also assume, as we always do, that the value of $k$ is sufficiently large so that, for example, $L_j\rho_n^{0+}< L_{j+ 1}$. \begin{lem}\label{lem:intersect} Let $\GV, \GU\in\mathcal V$. Then $\big(\Bxi_1(\GV)\cap\Bxi_1(\GU)\big)\subset \Bxi_1(\GW)$, where $\GW:= \GV+ \GU$ (algebraic sum). \end{lem} \begin{cor} (i) We can re-write definition \eqref{eq:Bxi} like this: \begin{equation} \Bxi(\GV) :=\Bxi_1(\GV)\setminus\big(\cup_{\GU\not\subset\GV}\Bxi_1(\GU)\big). \end{equation*} (ii) If $\GV\ne\GU$, then $\Bxi(\GV)\cap\Bxi(\GU) =\emptyset$. (iii) We have $ \mathbb R^d =\sqcup_{\GV\in\mathcal V}\Bxi(\GV)$ (the disjoint union). \end{cor} \begin{lem}\label{lem:verynew} Let $\GV\in\mathcal V_m$ and $\GV\subset\GW\in\mathcal V_{m +1}$. Let $\bmu$ be (any) unit vector from $\GW\ominus\GV$. Then, for $\bxi\in\Bxi_1(\GV)$, we have $\bxi\in\Bxi_1(\GW)$ if and only if the estimate $\big|\langle\bxi,\bmu\rangle\big|= \big|\langle\bxi_{\GV^{\perp}},\bmu\rangle\big|\le L_{m +1}$ holds. \end{lem} \begin{lem}\label{lem:newXi} We have \begin{equation} \Bxi_1(\GV)\cap\cup_{\GU\supsetneq\GV}\Bxi_1(\GU)= \Bxi_1(\GV)\cap\cup_{\GW\supsetneq\GV, \ \dim\GW= 1+ \dim\GV}\Bxi_1(\GW). \end{equation*} \end{lem} \begin{cor} We can re-write \eqref{eq:Bxi} as \begin{equation}\label{eq:Bxibbis} \Bxi(\GV):=\Bxi_1(\GV)\setminus\big(\cup_{\GW\supsetneq\GV, \dim\GW =1 +\dim\GV}\Bxi_1(\GW)\big). \end{equation} \end{cor} \begin{lem}\label{lem:Upsilon} Let $\GV\in\mathcal V$ and $\bth\in\Bth_{\tilde k}$. Suppose that $\bxi\in\Bxi(\GV)$ and both points $\bxi$ and $\bxi+\bth$ are inside $\Lambda(\bth)$. Then $\bth\in\GV$ and $\bxi+\bth\in\Bxi(\GV)$. \end{lem} \begin{defn} \label{reachability:defn} Let $\bth, \bth_1, \bth_2, \dots, \bth_l$ be some vectors from $\Bth'_{\tilde k}$, which are not necessarily distinct. \begin{enumerate} \item \label{1} We say that two vectors $\bxi, \boldeta\in \mathbb R^d$ are \textsl{$\bth$-resonant congruent} if both $\bxi$ and $\boldeta$ are inside $\L(\bth)$ and $(\bxi - \boldeta) =l\bth$ with $l\in\mathbb Z$. In this case we write $\bxi \leftrightarrow \boldeta \mod \bth$. \item\label{2} For each $\bxi\in \mathbb R^d$ we denote by $\BUps_{\bth}(\bxi)$ the set of all points which are $\bth$-resonant congruent to $\bxi$. For $\bth\not = \mathbf 0$ we say that $\BUps_{\bth}(\bxi) = \varnothing$ if $\bxi\notin\L(\bth)$. \item\label{3} We say that $\bxi$ and $\boldeta$ are \textsl{$\bth_1, \bth_2, \dots, \bth_l$-resonant congruent}, if there exists a sequence $\bxi_j\in \mathbb R^d, j=0, 1, \dots, l$ such that $\bxi_0 = \bxi$, $\bxi_l = \boldeta$, and $\bxi_j \in\BUps_{\bth_j}(\bxi_{j-1})$ for $j =1, 2, \dots, l$. 
\item We say that $\boldeta\in \mathbb R^d$ and $\bxi\in \mathbb R^d$ are \textsl{resonant congruent}, if either $\bxi=\boldeta$ or $\bxi$ and $\boldeta$ are $\bth_1, \bth_2, \dots, \bth_l$-resonant congruent with some $\bth_1, \bth_2, \dots, \bth_l \in\Bth_{\tilde k}'$. The set of \textbf{all} points, resonant congruent to $\bxi$, is denoted by $\BUps(\bxi)$. For points $\boldeta\in\BUps(\bxi)$ (note that this condition is equivalent to $\bxi\in\BUps(\boldeta)$) we write $\boldeta\leftrightarrow\bxi$. \end{enumerate} \end{defn} Note that $\BUps(\bxi)= \{\bxi\}$ for any $\bxi\in\Bxi(\GX)$. Now Lemma \ref{lem:Upsilon} immediately implies \begin{cor}\label{cor:Upsilon} For each $\bxi\in\Bxi(\GV)$ we have $\BUps(\bxi)\subset\Bxi(\GV)$ and thus \begin{equation*} \Bxi(\GV)= \sqcup_{\bxi\in\Bxi(\GV)}\BUps(\bxi). \end{equation*} \end{cor} \begin{lem}\label{lem:diameter} The diameter of $\BUps(\bxi)$ is bounded above by $mL_m$, if $\bxi\in\Bxi(\GV)$, $\GV\in\mathcal V_m$. \end{lem} \begin{lem}\label{lem:finiteBUps} For each $\bxi\in\Bxi(\GV),\ \GV\ne \mathbb R^d$, the set $\BUps(\bxi)$ is finite, and $\card\BUps(\bxi)$ is bounded uniformly in $\bxi\in \mathbb R^d\setminus\BXi( \mathbb R^d)$. \end{lem} \section{Description of the approach}\label{description section} We first prove \eqref{eq:main_lem1} assuming that the symbol $b$ of $B$ is replaced by $\tilde b$ which satisfies \eqref{tilde b}. In particular, it belongs to the class $\BS_\alpha$. At the end, in Section~\ref{final section}, we will use \eqref{eq:condB2} to show that Theorem~\ref{main_thm} holds as stated. For any set $\mathcal C\subset \mathbb R^d$ by $\mathcal P(\mathcal C)$ we denote the orthogonal projection onto $\mathrm{span}\{\be_{\bxi}\}_{\bxi\in\mathcal C}$ in $\textup{{\textsf{B}}}_2( \mathbb R^d)$ and by $\mathcal P^{L}(\mathcal C)$ the same projection considered in $\plainL2( \mathbb R^d)$, i.e. \begin{equation}\label{CP} \mathcal P^{L}(\mathcal C)=\mathcal F^*\Id_{\mathcal C}\mathcal F, \end{equation} where $\mathcal F$ is the Fourier transform and $\Id_{\mathcal C}$ is the operator of multiplication by the indicator function of $\mathcal C$. Obviously, $\mathcal P^{L}(\mathcal C)$ is a well-defined (respectively, non-zero) projection iff $\mathcal C$ is measurable (respectively, has non-zero measure). Let us fix sufficiently large $n$, and denote (recall that $\lambda_n =\rho_n^{2w}$) \begin{equation}\label{X} \mathcal X_n:= \Big\{\bxi\in \mathbb R^d,\,|\bxi|^{2w}\in \big[(5/6)^{2w}\lambda_n, 5^{2w}\lambda_n\big]\Big\}. \end{equation} We also put \begin{equation*} \mathcal A= \mathcal A_n:= \cup_{\bxi\in\mathcal X_n}\BUps(\bxi). \end{equation*} Lemma \ref{lem:diameter} implies that, if $\rho_0$ is big enough, \begin{equation}\label{bxi in CA} \textrm{for each $\bxi\in\mathcal A$ we have $|\bxi|^{2w}\in\big[(2/3)^{2w}\lambda_n, 6^{2w}\lambda_n\big]$.} \end{equation} In particular, we have \begin{equation}\label{eq:CARd} \mathcal A\cap\BXi( \mathbb R^d)= \varnothing. \end{equation} Let us define \begin{equation*} \hat\mathcal A :=\big\{\bxi\not\in\mathcal A,\ |\bxi|^{2w} <\lambda_n\big\} \end{equation*} and \begin{equation}\label{check A} \check\mathcal A:=\big\{\bxi\not\in\mathcal A,\ |\bxi|^{2w}>\lambda_n\big\}. \end{equation} We now plan to apply the gauge transform as in Sections 8 and 9 of \cite{ParSht2} to the operator $H$. 
The details of this procedure will be explained in Sections~\ref{Partition section} and~\ref{Gauge transform section}; here, we just mention that we are going to introduce two operators: $H_1$ and $H_2$. The operator $H_1$ is unitarily equivalent to $H$: $H_1= U^{-1}HU$, where $U=e^{i\Psi}$ with a bounded pseudo-differential operator $\Psi$ with almost-periodic coefficients (then Lemma \ref{norms lemma} implies that the densities of states of $H$ and $H_1$ are the same). Moreover, $H_1= H_2+ R_{\tilde k}$, where \begin{equation}\label{bound on remainder} \|R_{\tilde k}\|\lesssim \rho_n^{-M+ 2w- d} \end{equation} and $H_2= (-\Delta)^{w}+ W_{\tilde k}$ is a self-adjoint pseudo-differential operator with symbol $|\bxi|^{2w}+ w_{\tilde k}(\bx, \bxi)$ which satisfies the following property: \begin{equation}\label{eq:b3} \hat w_{\tilde k}(\bth, \bxi)= 0, \ \mathrm{if} \ \big(\bxi\not\in\Lambda(\bth)\ \&\ \bxi\in\mathcal A\big), \ \mathrm{or} \ \big(\bxi+\bth\not\in\Lambda(\bth)\ \&\ \bxi\in\mathcal A\big), \ \mathrm{or} \ (\bth\not\in\Bth_{\tilde k}). \end{equation} We can now use a simple statement which follows from Lemma~\ref{norms lemma} and Remark~\ref{spurious remark}: \begin{lem}\label{H1H2 lemma} Suppose $H_1$ and $H_2$ are two elliptic self-adjoint pseudo-differential operators with almost-periodic coefficients such that $\|H_1- H_2\|\lesssim \rho_n^{-M+ 2w- d}$. Suppose that $N(H_2; \rho^{2w})$ satisfies the asymptotic expansion \eqref{eq:main_lem1}. Then $N(H_1; \rho^{2w})$ also satisfies \eqref{eq:main_lem1} with the same coefficients. \end{lem} This means that it is enough to establish the asymptotic expansion \eqref{eq:main_lem1} for the operator $H_2$ instead of $H$. Condition \eqref{eq:b3} implies that for each $\bxi\in\mathcal A$ the subspace $\mathcal P\big(\BUps(\bxi)\big)\textup{{\textsf{B}}}_2( \mathbb R^d)$ is an invariant subspace of $H_2$; its dimension is finite by Lemma~\ref{lem:finiteBUps}. We put \begin{equation*} H_2(\bxi):= H_2|_{\mathcal P(\BUps(\bxi))\textup{{\textsf{B}}}_2( \mathbb R^d)}. \end{equation*} Note that the subspaces $\mathcal P(\hat\mathcal A)\textup{{\textsf{B}}}_2( \mathbb R^d)$ and $\mathcal P(\check\mathcal A)\textup{{\textsf{B}}}_2( \mathbb R^d)$ are invariant as well; by $H_2(\hat\mathcal A)$ and $H_2(\check\mathcal A)$ we denote the restrictions of $H_2$ to these subspaces; we also denote by $H_2(\mathcal A)$ the restriction of $H_2$ to $\mathcal P(\mathcal A)\textup{{\textsf{B}}}_2( \mathbb R^d)$. If we consider the operator $H_2$ acting in $\plainL2( \mathbb R^d)$, then $\mathcal P^L(\hat\mathcal A)\plainL2( \mathbb R^d)$, $\mathcal P^L(\check\mathcal A)\plainL2( \mathbb R^d)$, and $\mathcal P^L(\mathcal A)\plainL2( \mathbb R^d)$ are still invariant subspaces. It follows from \eqref{X} -- \eqref{check A} that $UH_2(\hat\mathcal A)U^*< (5/6)^{2w}\lambda_n I$ and $UH_2(\check\mathcal A)U^*> 5^{2w}\lambda_n I$. For each $\bxi\in\mathcal A$ the operator $H_2(\bxi)$ is a finite-dimensional self-adjoint operator, so its spectrum is purely discrete; we denote its eigenvalues (counting multiplicities) by $\lambda_1(\bxi)\le \lambda_2(\bxi)\le\dots\le \lambda_{\card\BUps(\bxi)}(\bxi)$. Next, we list all points $\boldeta\in\BUps(\bxi)$ in increasing order of their absolute values; thus, we have assigned to each point $\boldeta\in\BUps(\bxi)$ a natural number $t= t(\boldeta)$ so that $t(\boldeta)< t(\boldeta')$ if $|\boldeta|< |\boldeta'|$.
If two points $\boldeta= (\eta_1,\dots,\eta_d)$ and $\boldeta'= (\eta'_1, \dots, \eta'_d)$ have the same absolute values, we put them in the lexicographic order of their coordinates, i.e. we say that $t(\boldeta)< t(\boldeta')$ if $\eta_1< \eta'_1$, or $\eta_1= \eta'_1$ and $\eta_2< \eta'_2$, etc. Now we define the map $g: \mathcal A\to \mathbb R$ which assigns to each point $\boldeta\in\mathcal A$ the number $\lambda_{t(\boldeta)}\big(\BUps(\boldeta)\big)$. This map is an injection from $\mathcal A$ onto the set of eigenvalues of $H_2$, counting multiplicities (recall that we consider the operator $H_2$ acting in $\textup{{\textsf{B}}}_2( \mathbb R^d)$, so there is nothing miraculous about its spectrum consisting of eigenvalues and their limit points). Moreover, all eigenvalues of $H_2$ inside the interval $\big[(7/8)^{2w}\lambda_n, (9/2)^{2w}\lambda_n\big]$ have a pre-image under $g$. We define \begin{equation}\label{g outside A} g(\bxi):= |\bxi|^{2w}, \quad\textrm{for} \quad \bxi\in \mathbb R^d\setminus \mathcal A. \end{equation} Arguments similar to the ones used in \cite{ParSob} show that $g$ is a measurable function. We introduce \begin{equation*} \mathcal G_{\lambda}:= \big\{\bxi\in \mathbb R^d,\,g(\bxi)\le\lambda\big\}. \end{equation*} \begin{lem}\label{volume lemma} For every continuity point $\lambda\in[\lambda_n, 4^{2w}\lambda_n]$ of $N(\lambda; H_2)$ we have: \begin{equation}\label{eq:densityh3} N(\lambda; H_2)= (2\pi)^{-d}\vol \mathcal G_{\lambda}. \end{equation} \end{lem} Since points of continuity of $N(\lambda)$ are dense, {\it the asymptotic expansion proven for such $\lambda$ can be extended to all $\lambda\in[\lambda_n, 4^{2w}\lambda_n]$ by taking the limit}. Thus, our next task is to compute $\vol \mathcal G_{\lambda}$. Let us put \begin{equation*} \mathcal A^+(\rho):= \big\{\bxi\in \mathbb R^d,\,g(\bxi)<\rho^{2w}<|\bxi|^{2w}\big\} \end{equation*} and \begin{equation*} \mathcal A^-(\rho):= \big\{\bxi\in \mathbb R^d,\,|\bxi|^{2w}<\rho^{2w}<g(\bxi)\big\}. \end{equation*} \begin{lem}\label{lem:new1} \begin{equation}\label{eq:46} \vol(\mathcal G_{\lambda})= \omega_d\rho^d+ \vol\mathcal A^+(\rho)- \vol\mathcal A^-(\rho), \end{equation} where $\omega_d$ is the volume of the unit ball in $ \mathbb R^d$. \end{lem} \begin{proof} We obviously have $\mathcal G_{\lambda}= \big(\mathcal B(\rho)\cup \mathcal A^+(\rho)\big)\setminus \mathcal A^-(\rho)$. Since $\mathcal A^-(\rho)\subset \mathcal B(\rho)$ and $\mathcal A^+(\rho)\cap \mathcal B(\rho)=\emptyset$, this implies \eqref{eq:46}. \end{proof} \begin{rem}\label{rem:nnew1} Properties of the mapping $g$ imply that $\mathcal A^+(\rho)\cup\mathcal A^-(\rho)\subset \mathcal A$. Thus, in order to compute $N(\lambda)$, we need to analyze the behavior of $g$ only inside $\mathcal A$. \end{rem} We will compute the volumes of $\mathcal A^{\pm}(\rho)$ by integrating their characteristic functions in a specially chosen set of coordinates. The next section is devoted to introducing these coordinates. \section{Coordinates}\label{coordinates section} In this section, we do some preparatory work before computing $\vol\mathcal A^{\pm}(\rho)$. Namely, we are going to introduce a convenient set of coordinates in $\Bxi(\GV)$. Let $\GV\in\mathcal V_m$ be fixed; since $\mathcal A^{\pm}(\rho)\cap\BXi( \mathbb R^d)= \emptyset$, we will assume that $m<d$. Then, as we have seen, $\bxi\in\Bxi_1(\GV)$ if and only if $\bxi_{\GV}\in\Omega(\GV)$.
Let $\{\GU_j\}$ be a collection of all subspaces $\GU_j\in\mathcal V_{m+ 1}$ such that each $\GU_j$ contains $\GV$. Let $\bmu_j= \bmu_j(\GV)$ be (any) unit vector from $\GU_j\ominus\GV$. Then it follows from Lemma~\ref{lem:verynew} that for $\bxi\in\Bxi_1(\GV)$, we have $\bxi\in\Bxi_1(\GU_j)$ if and only if the estimate $\big|\langle\bxi, \bmu_j\rangle\big|= \big|\langle\bxi_{\GV^{\perp}}, \bmu_j\rangle\big|\le L_{m+ 1}$ holds. Thus, formula \eqref{eq:Bxibbis} implies that \begin{equation*} \Bxi(\GV)= \Big\{\bxi\in \mathbb R^d,\ \bxi_{\GV}\in\Omega(\GV)\ \&\ \forall j \ \big|\langle\bxi_{\GV^{\perp}},\bmu_j(\GV)\rangle\big| > L_{m+ 1}\Big\}. \end{equation*} The collection $\big\{\bmu_j(\GV)\big\}$ obviously coincides with \begin{equation*} \big\{\bn(\bth_{\GV^{\perp}}),\ \bth\in\Bth_{\tilde k}\setminus\GV\big\}. \end{equation*} The set $\Bxi(\GV)$ is, in general, disconnected; it consists of several connected components which we will denote by $\big\{\Bxi(\GV)_p\big\}_{p=1}^P$. Let us fix a connected component $\Bxi(\GV)_p$. Then for some vectors $\big\{\tilde\bmu_j(p)\big\}_{j=1}^{J_p}\subset \{\pm\bmu_j\}$ we have \begin{equation*} \Bxi(\GV)_p= \big\{\bxi\in \mathbb R^d,\ \bxi_{\GV}\in\Omega(\GV)\ \&\ \forall j \ \langle\bxi_{\GV^{\perp}},\tilde\bmu_j(p)\rangle > L_{m +1}\big\}; \end{equation*} we assume that $\big\{\tilde\bmu_j(p)\big\}_{j =1}^{J_p}$ is the minimal set with this property, so that each hyperplane $$ \big\{\bxi\in \mathbb R^d,\ \bxi_{\GV}\in\Omega(\GV)\ \ \&\ \ \langle\bxi_{\GV^{\perp}}, \tilde\bmu_j(p)\rangle= L_{m+ 1}\big\},\ j= 1, \dots, J_p $$ has a non-empty intersection with the boundary of $\Bxi(\GV)_p$. It is not hard to see that $J_p\ge d- m$. Indeed, otherwise $\Bxi(\GV)_p$ would have non-empty intersection with $\Bxi_1(\GV')$ for some $\GV'$, $\GV\subsetneq\GV'$. We also introduce \begin{equation*} \tilde\Bxi(\GV)_p:= \big\{\bxi\in\GV^{\perp},\ \forall j \ \langle\bxi,\tilde\bmu_j(p)\rangle > 0\big\}. \end{equation*} Note that our assumption that $\Bxi(\GV)_p$ is a connected component of $\Bxi(\GV)$ implies that for any $\bxi\in\tilde\Bxi(\GV)_p$ and any $\bth\in\Bth_{\tilde k}\setminus\GV$ we have \begin{equation*} \langle\bxi,\bth\rangle= \langle\bxi,\bth_{\GV^{\perp}}\rangle\ne 0. \end{equation*} We also put \begin{equation*} K:= d- m- 1. \end{equation*} Without loss of generality we may (and will) assume that the number $J_p$ of `defining planes' is the minimal possible, i.e. $J_p= K+ 1$. Indeed, the argument presented in Section~11 of \cite{ParSht2} explains how to derive the result for arbitrary $\BXi(\GV)_p$, assuming we have proved it in the case $J_p= K+ 1$. If $J_p= K+ 1$, then the set $\big\{\tilde\bmu_j(p)\big\}_{j= 1}^{K+ 1}$ is linearly independent. Let $\ba= \ba(p)$ be a unique point from $\GV^\perp$ satisfying the following conditions: $\langle\ba,\tilde\bmu_j(p)\rangle= L_{m+ 1}$, $j= 1, \dots, K+ 1$. Then, since the determinant of the Gram matrix of vectors $\tilde\bmu_j(p)$ is $\gtrsim\rho_n^{0-}$ by \eqref{eq:condC1}, we have \begin{equation}\label{bound on a} |\ba|\lesssim L_{m+ 1}\rho_n^{0+}= \rho_n^{\alpha_{m+ 1}+ 0+}. \end{equation} We introduce shifted cylindrical coordinates in $\Bxi(\GV)_p$. These coordinates will be denoted by $\bxi= (r; \mathbf\Phi; \mathbf X)$. Here, $\mathbf X= (X_1, \dots, X_m)$ is an arbitrary set of cartesian coordinates in $\Omega(\GV)$. These coordinates do not depend on the choice of the connected component $\Bxi(\GV)_p$.
The rest of the coordinates $(r, \mathbf\Phi)$ are shifted spherical coordinates in $\GV^{\perp}$, centered at $\ba$. This means that \begin{equation*} r(\bxi)= |\bxi_{\GV^{\perp}}- \ba| \end{equation*} and \begin{equation*} \mathbf\Phi= \bn(\bxi_{\GV^{\perp}}- \ba)\in S_{\GV^{\perp}}. \end{equation*} More precisely, $\mathbf\Phi\in \mathcal M_p$, where $\mathcal M_p:= \big\{\bn(\bxi_{\GV^{\perp}}- \ba),\ \bxi\in\Bxi(\GV)_p\big\}\subset S_{\GV^{\perp}}$ is a $K$-dimensional spherical simplex with $K+ 1$ sides. Note that \begin{equation*} \begin{split} \mathcal M_p&= \big\{\bn(\bxi_{\GV^{\perp}}-\ba),\ \bxi\in\Bxi(\GV)_p\big\}= \big\{\bn(\bxi_{\GV^{\perp}}-\ba),\ \forall j \ \langle\bxi_{\GV^{\perp}},\tilde\bmu_j(p)\rangle > L_{m+1}\big\}\\ &= \big\{\bn(\boldeta),\ \boldeta:= \bxi_{\GV^{\perp}}-\ba\in\GV^\perp,\ \forall j \ \langle\boldeta,\tilde\bmu_j(p)\rangle > 0\big\}= S_{\GV^{\perp}}\cap\tilde\Bxi(\GV)_p. \end{split} \end{equation*} We will denote by $d\mathbf\Phi$ the spherical Lebesgue measure on $\mathcal M_p$. For each non-zero vector $\bmu\in\GV^{\perp}$, we denote \begin{equation*} \mathcal W(\bmu):= \big\{\boldeta\in\GV^\perp,\ \langle\boldeta,\bmu\rangle= 0\big\}. \end{equation*} Thus, the sides of the simplex $\mathcal M_p$ are intersections of $\mathcal W\big(\tilde\bmu_j(p)\big)$ with the sphere $S_{\GV^{\perp}}$. Each vertex $\bv= \bv_t$, $t= 1, \dots, K+ 1$ of $\mathcal M_p$ is an intersection of $S_{\GV^{\perp}}$ with $K$ hyperplanes $\mathcal W\big(\tilde\bmu_j(p)\big)$, $j= 1, \dots, K+ 1$, $j\ne t$. This means that $\bv_t$ is a unit vector from $\GV^{\perp}$ which is orthogonal to $\big\{\tilde\bmu_j(p)\big\}$, $j= 1, \dots, K+ 1$, $j\ne t$; this defines $\bv$ up to multiplication by $-1$. \begin{lem}\label{lem:newangles} Let $\GU_1$ and $\GU_2$ be two strongly distinct subspaces each of which is spanned by some of the vectors from $\big\{\tilde\bmu_j(p)\big\}$. Then the angle between them is not smaller than $s(\rho_n)$. In particular, all non-zero angles between two sides of any dimensions of $\mathcal M_p$ as well as all the distances between two vertices $\bv_t$ and $\bv_{\tau}$, $t\ne\tau$, are bounded below by $s(\rho_n)$. \end{lem} \begin{lem}\label{lem:sign} Let $p$ be fixed. Suppose $\bth\in\Bth_{\tilde k}\setminus\GV$ and $\bth_{\GV^{\perp}}=\sum_{j=1}^{K+1} b_j\tilde\bmu_j(p)$. Then either all coefficients $b_j$ are non-positive, or all of them are non-negative. \end{lem} By taking sufficiently large $\tilde k$ we can ensure that the diameter of $\mathcal M_p$ does not exceed $(100d^2)^{-1}$. We put $\Phi_q:= \frac{\pi}{2}-\phi\big(\bxi_{\GV^\perp}- \ba, \tilde\bmu_q(p)\big)$, $q= 1, \dots, K+ 1$. The geometrical meaning of these coordinates is simple: $\Phi_q$ is the spherical distance between $\mathbf\Phi= \bn(\bxi_{\GV^{\perp}}-\ba)$ and $\mathcal W\big(\tilde\bmu_q(p)\big)$. The reason why we have introduced $\Phi_q$ is that in these coordinates some important objects will be especially simple (see e.g. Lemma~\ref{lem:products} below), which is very convenient for integration. At the same time, the set of coordinates $\big(r, \{\Phi_q\}\big)$ contains $K+ 2$ variables, whereas we only need $K+ 1$ coordinates in $\GV^{\perp}$. Thus, we have one constraint on the variables $\Phi_j$. Namely, let $\{\be_j\}$, $j= 1, \dots, K+ 1$ be a fixed orthonormal basis in $\GV^{\perp}$ chosen in such a way that the $(K+1)$-st axis is directed along $\ba$, and thus passes through $\mathcal M_p$.
Then we have $\be_j= \sum_{l= 1}^{K+ 1}a_{jl}\tilde\bmu_l$ with some matrix $\{a_{jl}\}$, $j,l= 1, \dots, K+ 1$, and $\tilde\bmu_l= \tilde\bmu_l(p)$. Therefore (recall that we denote $\boldeta:= \bxi_{\GV^{\perp}}- \ba$), \begin{equation*} \eta_j= \langle\boldeta,\be_j\rangle= r\sum_{q= 1}^{K+ 1}a_{jq}\sin\Phi_q \end{equation*} and, since $r^2(\bxi)=|\boldeta|^2=\sum_{j=1}^{K+1}\eta_j^2$, this implies that \begin{equation*} \sum_{j= 1}^{K+ 1}\Big(\sum_{q= 1}^{K+ 1} a_{jq}\sin\Phi_q\Big)^2= 1, \end{equation*} which is our constraint. Let us also put \begin{equation}\label{eq:etajdash} \eta_j':= \frac{\eta_j}{|\boldeta|}= \sum_{q= 1}^{K+ 1}a_{jq}\sin\Phi_q. \end{equation} Then we can write the surface element $d\mathbf\Phi$ in the coordinates $\{\eta_j'\}$ as \begin{equation*} d\mathbf\Phi= \frac{d\eta_1'\dots d\eta_K'}{\eta_{K+ 1}'}= \frac{d\eta_1'\dots d\eta_K'}{\big(1- \sum_{j= 1}^K(\eta_{j}')^2\big)^{1/2}}, \end{equation*} where the denominator is bounded below by $1/2$ by our choice of the basis $\{\be_j\}$. It follows from our choice of the coordinates and \eqref{eq:etajdash} that \begin{equation}\label{a_dot_eta} \langle\ba, \mathbf\Phi\rangle= \langle\ba, \bn(\boldeta)\rangle= |\ba|\eta_{K+ 1}'= |\ba|\sum_{q= 1}^{K+ 1}a_{K+ 1\, q}\sin\Phi_q. \end{equation} \begin{lem}\label{lem:Al} For each $p,l$ we have $|a_{pl}|\le s(\rho_n)^{-1}$. \end{lem} \begin{lem}\label{lem:anglebelow} We have $\max_j\sin\Phi_j(\boldeta)\ge s(\rho_n) d^{-3/2}$. \end{lem} The next lemma describes the dependence on $r$ of all possible inner products $\langle\bxi,\bth\rangle$, $\bth\in\Bth_{\tilde k}$, $\bxi\in\Bxi(\GV)_p$. \begin{lem}\label{lem:products} Let $\bxi\in\Bxi(\GV)_p$, $\GV\in\mathcal V_m$, and $\bth\in\Bth_{\tilde k}$. (i) If $\bth\in\GV$, then $\langle\bxi,\bth\rangle$ does not depend on $r$. (ii) If $\bth\not\in\GV$ and $\bth_{\GV^{\perp}}=\sum_{q}b_q\tilde\bmu_q(p)$, then \begin{equation*} \langle\bxi,\bth\rangle=\langle \mathbf X,\bth_{\GV}\rangle+L_{m+1}\sum_{q}b_q+r(\bxi)\sum_{q}b_q\sin\Phi_q. \end{equation*} In the case (ii) all the coefficients $b_q$ are either non-positive or non-negative and each non-zero coefficient $b_q$ satisfies \begin{equation}\label{eq:n10} \rho_n^{0-}\lesssim |b_q| \lesssim \rho_n^{0+}. \end{equation} \end{lem} \section{Partition of the perturbation}\label{Partition section} The symbols we are going to construct in this section will depend on $\rho_n$; this dependence will usually be omitted from the notation. Let $\varpi\in \plainC\infty( \mathbb R)$ be such that \begin{equation}\label{eta:eq} 0\le\varpi\le 1,\ \ \varpi(z)= \begin{cases} & 1,\ z \le 1;\\ & 0,\ z \ge 21/20. \end{cases} \end{equation} For $\bth\in \Bth'$ we define several $\plainC\infty$-cut-off functions: \begin{equation}\label{el:eq} \begin{cases} e_{\bth}(\bxi)&:= \varpi\Big(\big|2|2\bxi+ \bth|/\rho_n- 15\big|/13\Big),\\ \ell^{>}_{\bth}(\bxi)&:= 1- \varpi\Big(\big(2|2\bxi+ \bth|/\rho_n- 15\big)/13\Big),\\ \ell^{<}_{\bth}(\bxi)&:= 1- \varpi\Big(\big(15- 2|2\bxi+ \bth|/\rho_n\big)/13\Big), \end{cases} \end{equation} and \begin{equation}\label{phizeta:eq} \begin{cases} \zeta_{\bth}(\bxi)&:= \varpi\biggl(\dfrac{\big|\langle\bth, \bxi+ \bth/2\rangle\big|}{\rho_n^\beta|\bth|}\biggr),\\ \varphi_{\bth}(\bxi)&:= 1- \zeta_{\bth}(\bxi). \end{cases} \end{equation} \begin{rem}\label{partition support remark} Note that $e_{\bth}+ \ell^{>}_{\bth}+ \ell^{<}_{\bth}= 1$.
The function $\ell^{>}_{\bth}$ is supported on the set $|\bxi+ \bth/2|\geqslant 7\rho_n$, and $\ell^{<}_{\bth}$ is supported on the set $|\bxi+ \bth/2|\leqslant \rho_n/2$. The function $e_{\bth}$ is supported in the shell $\rho_n/3\le |\bxi+ \bth/2|\le 8\rho_n$. \end{rem} Using the notation $\ell_{\bth}$ for any of the functions $\ell^{>}_{\bth}$ or $\ell^{<}_{\bth}$, we point out that \begin{equation*} \begin{cases} e_{\bth}(\bxi)= e_{-\bth}(\bxi+ \bth), &\ell_{\bth}(\bxi)= \ell_{-\bth}(\bxi+ \bth),\\ \varphi_{\bth}(\bxi)= \varphi_{-\bth}(\bxi+ \bth), &\zeta_{\bth}(\bxi)= \zeta_{-\bth}(\bxi+ \bth). \end{cases} \end{equation*} Note that the above functions satisfy the estimates \begin{equation}\label{varphi:eq} \begin{cases} \big|\BD^{\bs}_{\bxi}e_{\bth}(\bxi)\big|+ \big|\BD^{\bs}_{\bxi}\ell_{\bth}(\bxi)\big|\lesssim \rho_n^{-|\mathbf s|},\\ \big|\BD^{\bs}_{\bxi}\varphi_{\bth}(\bxi)\big|+ \big|\BD^{\bs}_{\bxi}\zeta_{\bth}(\bxi)\big|\lesssim \rho_n^{-\beta|\bs|}. \end{cases} \end{equation} Now for any symbol $b\in\BS_{\a}(\b)$ we introduce five new symbols: \begin{equation*} \begin{split} b^{{\mathcal {L E}}}(\bx, \bxi; \rho_n)&:= \sum_{\bth\in\Bth'}\hat b(\bth, \bxi)\ell^{>}_{\bth}(\bxi)e^{i\bth \bx}, \\ b^{\natural}(\bx, \bxi; \rho_n)&:= \sum_{\bth\in\Bth'}\hat b(\bth, \bxi)\varphi_{\bth}(\bxi)e_{\bth}(\bxi) e^{i\bth \bx}, \\ b^{\flat}(\bx, \bxi; \rho_n)&:= \sum_{\bth\in\Bth'}\hat b(\bth, \bxi)\zeta_{\bth}(\bxi)e_{\bth}(\bxi)e^{i\bth \bx}, \\ b^{\downarrow}(\bx, \bxi; \rho_n)&:= \sum_{\bth\in\Bth'}\hat b(\bth, \bxi)\ell^{<}_{\bth}(\bxi)e^{i\bth \bx}, \\ b^o(\bx, \bxi; \rho_n)&= b^o(\bxi; \rho_n):= \hat b(0, \bxi). \end{split} \end{equation*} The superscripts here are chosen to mean, respectively: `large energy', `non-resonant', `resonant', `small energy' and $0$-th Fourier coefficient. The corresponding operators are denoted by $B^{{\mathcal {L E}}}$, $B^{\natural}$, $B^{\flat}$, $B^{\downarrow}$, and $B^o$. By definitions \eqref{eta:eq}, \eqref{el:eq} and \eqref{phizeta:eq} \begin{equation}\label{b as a sum} b= b^o+ b^{\downarrow}+ b^{\flat}+ b^{\natural}+ b^{{\mathcal {L E}}}. \end{equation} The role of each of these operators is easy to explain. Note that on the support of the functions $\hat b^{\natural}(\bth, \cdot; \rho_n)$ and $\hat b^{\flat}(\bth, \cdot; \rho_n)$ we have (using \eqref{bound on R}) \begin{equation*} \rho_n/3 - O(\rho_n^{0+})\le |\bxi|\le 8\rho_n + O(\rho_n^{0+}). \end{equation*} On the support of $\hat b^{\downarrow}(\bth, \cdot; \rho_n)$ we have \begin{equation}\label{supportell<:eq} |\bxi|\le \rho_n/2 + O(\rho_n^{0+}). \end{equation} On the support of $\hat b^{{\mathcal {L E}}}(\bth, \cdot; \rho_n)$ we have \begin{equation}\label{supportell>:eq} |\bxi|\ge 7\rho_n- O(\rho_n^{0+}). \end{equation} The introduced symbols play a central role in the proof of Lemma~\ref{main_lem}. As we have seen in Section~\ref{description section}, due to \eqref{supportell<:eq} and \eqref{supportell>:eq} the symbols $b^\downarrow$ and $b^{\mathcal {L E}}$ make only a negligible contribution to the spectrum of the operator $H$ near $\l= \rho^{2w}$ for $\rho\in I_n$. The only significant components of $b$ are the symbols $b^\natural, b^{\flat}$ and $b^o$. The symbol $b^o$ will remain as it is, and the symbol $b^\natural$ will be transformed in the next section to another symbol, independent of $\bx$.
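As an elementary illustration of the decomposition \eqref{b as a sum} (a consistency check only; it is not used in what follows), consider a symbol with a single non-zero Fourier mode, $b(\bx, \bxi)= \hat b(\bth, \bxi)e^{i\bth\bx}$ with $\bth\ne \mathbf 0$. Then $b^o= 0$ and, since $\zeta_{\bth}+ \varphi_{\bth}= 1$ and $e_{\bth}+ \ell^{>}_{\bth}+ \ell^{<}_{\bth}= 1$,
\begin{equation*}
b^{\downarrow}+ b^{\flat}+ b^{\natural}+ b^{{\mathcal {L E}}}= \hat b(\bth, \bxi)\big(\ell^{<}_{\bth}+ (\zeta_{\bth}+ \varphi_{\bth})e_{\bth}+ \ell^{>}_{\bth}\big)(\bxi)\, e^{i\bth\bx}= \hat b(\bth, \bxi)e^{i\bth\bx}= b(\bx, \bxi),
\end{equation*}
so the five pieces indeed recombine into $b$, as claimed in \eqref{b as a sum}.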
Under the condition $b\in\BS_{\a}(\b)$ the above symbols belong to the same class $\BS_{\a}(\b)$ and the following bounds hold: \begin{equation*} \1 b^{\flat}\1^{(\a)}_{l, s}+ \1 b^{\natural}\1^{(\a)}_{l, s}+ \1 b^{{\mathcal {L E}}}\1^{(\a)}_{l, s}+ \1 b^{o}\1^{(\a)}_{l, s}+ \1 b^{\downarrow}\1^{(\a)}_{l, s}\lesssim \1 b\1^{(\a)}_{l, s}. \end{equation*} If $b$ is symmetric, then so are the symbols on the right hand side of \eqref{b as a sum}. Let us mention some other elementary properties of the introduced operators. In the lemma below we use the projection $\mathcal P(\mathcal C)$, $\mathcal C\subset \mathbb R^d$, which was defined in Section~\ref{description section}. \begin{lem}\label{smallorthog:lem} Let $b\in \BS_{\a}(\b)$ with some $\a\in \mathbb R$. Then: \begin{itemize} \item[(i)] The operator $B^{\downarrow}$ is bounded and \begin{equation*} \|B^{\downarrow}\|\lesssim \1 b \1^{(\a)}_{0, 0} \rho_n^{\b\max(\a, 0)}. \end{equation*} Moreover, \begin{equation*} \Big(I- \mathcal P\big(\mathcal B(2\rho_n/3)\big)\Big) B^{\downarrow}= B^{\downarrow}\Big(I- \mathcal P \big(\mathcal B(2\rho_n/3)\big)\Big)= 0. \end{equation*} \item[(ii)] The operator $B^\flat$ satisfies the relations \begin{equation*} \mathcal P\big(\mathcal B(\rho_n/6)\big)B^{\flat}= B^{\flat}\mathcal P\big(\mathcal B(\rho_n/6)\big)= \Big(I- \mathcal P\big(\mathcal B(9\rho_n)\big)\Big)B^{\flat}= B^{\flat}\Big(I- \mathcal P\big(\mathcal B(9\rho_n)\big)\Big)= 0, \end{equation*} and similar relations hold for the operator $B^{\natural}$ as well. Moreover, $b^{\natural}, b^{\flat}\in \BS_{\gamma}$ for any $\gamma\in \mathbb R$, and for all $l$ and $s$ \begin{equation*} \1 b^{\natural}\1^{(\gamma)}_{l, s} + \1 b^{\flat}\1^{(\gamma)}_{l, s} \lesssim \rho_n^{\b(\a - \gamma)}\1 b\1^{(\a)}_{l, s}, \end{equation*} with the implied constant independent of $b$ and $n\ge 1$. In particular, the operators $B^{\natural}, B^{\flat}$ are bounded and \begin{equation*} \|B^{\natural}\|+ \|B^{\flat}\|\lesssim \rho_n^{\b\a}\1 b\1^{(\a)}_{0, 0}. \end{equation*} \item[(iii)] \begin{equation*} \mathcal P\bigl(\mathcal B(6\rho_n)\bigr)B^{{\mathcal {L E}}}= B^{{\mathcal {L E}}}\mathcal P\bigl(\mathcal B(6\rho_n)\bigr) = 0. \end{equation*} \end{itemize} \end{lem} \section{Operators $H_1$ and $H_2$}\label{Gauge transform section} \subsection{Preparation} As mentioned at the end of Section~\ref{reduction section}, we assume that the symbol $b$ of $B$ satisfies \eqref{tilde b}, and thus belongs to the class $\BS_\alpha(\b)$ with $\alpha$ defined in \eqref{alpha}. Our strategy is to find a unitary operator which reduces $H= H_0+ B$, $H_0:= (-\Delta)^w$, to another PDO whose symbol, essentially, depends only on $\bxi$. More precisely, we want to find operators $H_1$ and $H_2$ with the properties discussed in Section~\ref{description section}.
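Before stating the outcome of the gauge transform, let us recall, purely formally (ignoring all questions of domains and convergence), the mechanism behind it. Conjugating by $U= e^{i\Psi}$ and expanding in iterated commutators gives
\begin{equation*}
U^{-1}HU= e^{-i\Psi}He^{i\Psi}= H+ i[H, \Psi]+ \frac{i^2}{2!}\big[[H, \Psi], \Psi\big]+ \dots,
\end{equation*}
and the operators $\Psi_p$ entering $\Psi= \sum_p\Psi_p$ are then chosen order by order so that, at each order, the commutator with $H_0$ cancels the non-resonant part of the terms produced at that order; this is precisely the content of the commutator equations \eqref{psi1:eq} and \eqref{psil:eq} below.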
Repeating the calculations of Subsection~9.1 of \cite{ParSht2} we find that $H$ is unitarily equivalent to \begin{equation}\label{H_1} H_1 =H_0 +Y^{(o)}_{\tilde k} +Y_{\tilde k}^{\flat} +Y_{{\tilde k}}^{\downarrow, {\mathcal {L E}}} +R_{\tilde k}, \end{equation} where \begin{align} Y_{\tilde k} &:=\sum_{l =1}^{{\tilde k}}B_l +\sum_{l =2}^{{\tilde k}}T_l, \label{Y_tilde k}\\ B_1 &:=\op(b),\notag\\ B_l &:=\sum_{j =1}^{l -1}\frac{1}{j!}\sum_{k_1+ k_2+ \dots+ k_j= l- 1} \ad\big(\op(b); \Psi_{k_1}, \Psi_{k_2}, \dots, \Psi_{k_j}\big),\ l\ge 2, \label{bl:eq}\\ T_l &:=\sum_{j =2}^l \frac{1}{j!} \sum_{k_1+ k_2 +\dots +k_j =l} \ad(H_0; \Psi_{k_1}, \Psi_{k_2}, \dots, \Psi_{k_j}),\ l\ge 2 \label{tl:eq},\\ R_{\tilde k} &:=\int_0^1dt_1\int_0^{t_1}dt_2\cdots \int_0^{t_{\tilde k}}\exp(-it\Psi)\ad^{{\tilde k}+ 1}(H; \Psi)\exp(it\Psi)dt\notag \\ &+\sum_{j =1}^{\tilde k} \frac{1}{j!}\sum_{\substack{k_1+ k_2 +\dots +k_j\ge {\tilde k} +1,\\ k_q\leqslant \tilde k, \ q =1, \dots, j}} \ad(H; \Psi_{k_1}, \Psi_{k_2}, \dots, \Psi_{k_j}),\notag\\ \Psi &:= \sum_{p =1}^{\tilde k}\Psi_p.\notag \end{align} The symbols $\psi_j$ of the PDO $\Psi_j$ are found from the following system of commutator equations: \begin{gather} \ad(H_0; \Psi_1) +B_1^{\natural} =0,\label{psi1:eq}\\ \ad(H_0; \Psi_l) +B_l^{\natural} +T_l^{\natural} =0,\ l\ge 2.\label{psil:eq} \end{gather} By Lemma~\ref{smallorthog:lem}(ii), the operators $B_l^{\natural}$, $T_l^{\natural}$ are bounded. This, in view of \eqref{psi1:eq} and \eqref{psil:eq}, implies the boundedness of the commutators $\ad(H_0; \Psi_l)$, $l\geqslant 1$. Below we denote by $y_{\tilde k}$ the symbol of the PDO $Y_{\tilde k}$. \subsection{Commutator equations} Put \begin{equation}\label{chi} \tilde{\chi}_{\bth}(\bxi) :=e_{\bth}(\bxi)\varphi_{\bth}(\bxi)\big(|\bxi +\bth|^{2w} -|\bxi|^{2w}\big)^{-1} \end{equation} when $\bth\not ={\bf 0}$, and $\tilde{\chi}_{\bf 0}(\bxi) :=0$. We have \begin{lem} \label{commut:lem} Let $A =\op(a)$ be a symmetric PDO with $a\in\BS_{\omega}$. Then the PDO $\Psi$ with the Fourier coefficients of the symbol $\psi(\bx, \bxi)$ given by \begin{equation}\label{psihat:eq} \hat\psi(\bth, \bxi) :=i\,{\hat a}(\bth, \bxi)\tilde{\chi}_{\bth}(\bxi) \end{equation} solves the equation \begin{equation*} \ad(H_0; \Psi) +\op(a^{\natural})= 0. \end{equation*} Moreover, the operator $\Psi$ is bounded and self-adjoint, its symbol $\psi$ belongs to $\BS_{ \gamma}$ with any $\gamma \in \mathbb R$ and the following bound holds: \begin{equation*} \1\psi\1^{(\gamma)}_{l, s} \lesssim\rho_n^{\b(\omega -\gamma -1)- 2w+ 2}r(\rho_n)^{-1}\1 a\1^{(\omega)}_{l -1, s} \lesssim\rho_n^{\b(\omega -\gamma -1)- 2w+ 2+ 0+}\1 a\1^{(\omega)}_{l -1, s}. \end{equation*} \end{lem} The proof of this lemma is analogous to that of Lemma~4.1 of \cite{ParSob} and is based on the estimate \begin{equation*} |\bxi +\bth|^{2w} -|\bxi|^{2w}= |\bxi|^{2w}\Big(\big(1+ |\bxi|^{-2}(2\bxi+ \bth)\cdot\bth\big)^w- 1\Big)\asymp \rho^{2w- 2}\big|\bth\cdot(\bxi+ \bth/2)\big| \end{equation*} which holds for $\bxi$ in the support of $e_{\bth}\varphi_{\bth}$. Using Propositions~\ref{bound:prop}, \ref{product:prop}, \ref{commut0:prop}, Lemma~\ref{commut:lem}, and repeating arguments from the proof of Lemma 4.2 from \cite{ParSob} (with $\sigma_j:= j\big(\a- 2- (2w- 2)\b^{-1}\big)+ 1$), we obtain the following \begin{lem}\label{estimateskm:lem} Let $b\in\BS_{\alpha}(\b)$ be a symmetric symbol.
Suppose that $k$ is large enough so that $r(\rho_n)^{-1}\lesssim\rho_n^{0+}\lesssim\rho_n^{w+ \b- \frac{\alpha\b}2 -1}$ and $\tilde k$ satisfies \begin{equation}\label{eq:kM} {\tilde k}> 2(M+ \alpha\b+ d- 2w)/(2w+ 2\b- \alpha\b -2). \end{equation} Then $\psi_j,\, b_j,\,t_j\in\BS_\gamma(\beta)$ for any $\gamma\in \mathbb R$ and there exists sufficiently large $\rho_0$, such that \begin{equation}\label{R_tilde k estimate} \|R_{\tilde k}\|\lesssim\rho_n^{-M+ 2w- d}. \end{equation} \end{lem} \begin{rem} Note that the expression in the denominator of \eqref{eq:kM} is positive by \eqref{beta and alphas} and \eqref{alpha}. \end{rem} Now Lemmas~\ref{H1H2 lemma} and \ref{estimateskm:lem} imply that the contribution of $R_{\tilde k}$ to the integrated density of states can be neglected. More precisely, let $W_{\tilde k}$ be the operator with symbol \begin{equation}\label{eq:newy} w_{\tilde k}(\bx, \bxi):= y_{\tilde k}(\bx, \bxi)- y_{\tilde k}^{\natural}(\bx, \bxi),\ \ \hbox{i.e.}\ \ \hat w_{\tilde k}(\bth, \bxi)= \hat y_{\tilde k}(\bth, \bxi)\big(1- e_{\bth}(\bxi)\varphi_{\bth}(\bxi)\big). \end{equation} We introduce $H_2:= (-\Delta)^w+ W_{\tilde k}$. Then, by \eqref{H_1} and \eqref{R_tilde k estimate}, $\|H_1- H_2\|\lesssim\rho_n^{-M+ 2w- d}$ and, moreover, the symbol $w_{\tilde k}$ satisfies \eqref{eq:b3}. This means that all the constructions of Section~\ref{description section} are valid, and all we need to do is to compute $\vol \mathcal G_{\lambda}$. Until this point, the material in our paper was quite similar to the corresponding parts of \cite{ParSht2}. From now on, the differences will be substantial. \subsection{Computing the symbol of the operator after gauge transform} The following lemma provides us with a more explicit form of the symbol $\hat{y}_{\tilde k}$. \begin{lem}\label{symbol} We have $\hat{y}_{\tilde k}(\bth,\bxi)=0$ for $\bth\not\in\Bth_{\tilde k}$. Otherwise, \begin{equation}\label{symbol equation} \begin{split} &\hat{y}_{\tilde k}(\bth, \bxi)= \hat{b}(\bth, \bxi)\\ &+ \sum\limits_{s= 1}^{\tilde k- 1}\sum_{\substack{\bth_j, \bth_{s+ 1}\in\Bth\\ \bphi_j, \bphi_{s+ 1}, \bphi_j'\in \Bth_{s+ 1}\\ \bth_j'\in \Bth_{s+ 1}'\\ 1\leqslant j\leqslant s}}\sum_{p= 1}^s\sum_{\substack{\bth_q'', \bphi_q''\in \Bth'_{s+ 1}\\ 1\leqslant q\leqslant p- 1}} \sum_{\substack{\nu_1, \dots, \nu_{2s+ p}\geqslant 0\\ \sum\nu_i= s}}\prod_{q= 1}^{p- 1}\widehat{(\nabla^{\nu_q}e_{\bth_q''}\varphi_{\bth_q''})}(\bxi+ \bphi_q'')\\ &\times\widehat{(\nabla^{\nu_{p}}b)}(\bth_{s+ 1}, \bxi+ \bphi_{s+ 1})\prod\limits_{j= 1}^s \widehat{(\nabla^{\nu_{p+ j}}b)}(\bth_j, \bxi+ \bphi_j)\widehat{(\nabla^{\nu_{p+ s+ j}}\tilde{\chi}_{\bth_j'})}(\bxi+ \bphi_j'). \end{split} \end{equation} Here for $\nu\in \mathbb N$ \begin{equation}\label{Delta-power} \nabla^\nu:= \sum_{\boldeta_1, \dots, \boldeta_\nu\in\Bth}C^{(s, p)}_{\boldeta_1, \dots, \boldeta_\nu}\big(\{\bth, \bphi\}\big)\nabla_{\boldeta_1}\cdots\nabla_{\boldeta_\nu}; \quad \nabla^0:= C^{(s, p)}\big(\{\bth, \bphi\}\big), \end{equation} and, for $\bth\in \mathbb R^d$, the action of $\nabla_{\bth}$ on symbols of PDO is defined in \eqref{Delta}, whereas for any function $f$ on $ \mathbb R^d$ \[ (\nabla_{\bth}f)(\bxi):= f(\bxi+ \bth)- f(\bxi). 
\] The coefficients $C^{(s, p)}\big(\{\bth, \bphi\}\big)$ and $C^{(s, p)}_{\boldeta_1, \dots, \boldeta_\nu}\big(\{\bth, \bphi\}\big)$ depend on $s,\ p$ and all vectors $\bth$, $\bth_j$, $\bth_{s+ 1}$, $\bphi_j$, $\bphi_{s+ 1}$, $\bth_j'$, $\bphi_j'$, $\bth_q''$, $\bphi_q''$ (and on $\boldeta_1, \dots, \boldeta_\nu$ if these subscripts are present). Moreover, these coefficients can differ for each particular $\nabla^\nu$, $\nu \in \mathbb N_0$. At the same time, they are uniformly bounded by a constant which depends on $\tilde k$ only. We apply the convention that $\prod_{q= 1}^{0}\widehat{(\nabla^{\nu_q}e_{\bth_q''}\varphi_{\bth_q''})}(\bxi+ \bphi_q'')= 1$. \end{lem} \begin{proof} We will prove the lemma by induction. Namely, let $\ell\geqslant2$. We claim that: 1) For any $m= 1, \dots, \ell- 1$, $\hat{\psi}_m(\bth, \bxi)= 0$ for $\bth\not\in\Bth_m$. Otherwise, \begin{equation}\label{inds1} \begin{split} \hat{\psi}_m(\bth, \bxi)&= \sum_{\substack{\bth_j\in\Bth\\ \bphi_j, \bphi_j'\in \Bth_{m}\\ \bth_j'\in \Bth_{m}'\\ 1\leqslant j\leqslant m}}\sum_{p= 1}^{m}\sum_{\substack{\bth_q'', \bphi_q''\in \Bth'_{m}\\ 1\leqslant q\leqslant p- 1}} \sum_{\substack{\nu_1, \dots, \nu_{2m+ p- 1}\geqslant 0\\ \sum\nu_i= m- 1}}\prod_{q= 1}^{p- 1}\widehat{(\nabla^{\nu_q}e_{\bth_q''}\varphi_{\bth_q''})}(\bxi+ \bphi_q'')\\ &\times\prod\limits_{j= 1}^m \widehat{(\nabla^{\nu_{p- 1+ j}}b)}(\bth_j, \bxi+ \bphi_j)\widehat{(\nabla^{\nu_{p- 1+ m+ j}}\tilde{\chi}_{\bth_j'})}(\bxi+ \bphi_j'). \end{split} \end{equation} 2) For any $s= 1, \dots, \ell- 1$ and any $k_1, \dots, k_p$ $(p\geqslant1)$ such that $k_1+ \dots+ k_p= s$, $\widehat{\ad\big(\op(b); \Psi_{k_1}, \dots, \Psi_{k_p}\big)}(\bth, \bxi)= 0$ for $\bth\not\in\Bth_{s+ 1}$. Otherwise, \begin{equation}\label{inds2} \begin{split} &\widehat{\ad\big(\op(b); \Psi_{k_1}, \dots, \Psi_{k_p}\big)}(\bth, \bxi)\\ &= \sum_{\substack{\bth_j, \bth_{s+ 1}\in\Bth\\ \bphi_j, \bphi_{s+ 1}, \bphi_j'\in \Bth_{s+ 1}\\ \bth_j'\in \Bth_{s+ 1}'\\ 1\leqslant j\leqslant s}}\sum_{p= 1}^s\sum_{\substack{\bth_q'', \bphi_q''\in \Bth'_{s+ 1}\\ 1\leqslant q\leqslant p- 1}} \sum_{\substack{\nu_1, \dots, \nu_{2s+ p}\geqslant 0\\ \sum\nu_i= s}}\prod_{q= 1}^{p- 1}\widehat{(\nabla^{\nu_q}e_{\bth_q''}\varphi_{\bth_q''})}(\bxi+ \bphi_q'')\\ &\times\widehat{(\nabla^{\nu_{p}}b)}(\bth_{s+ 1}, \bxi+ \bphi_{s+ 1})\prod\limits_{j= 1}^s \widehat{(\nabla^{\nu_{p+ j}}b)}(\bth_j, \bxi+ \bphi_j)\widehat{(\nabla^{\nu_{p+ s+ j}}\tilde{\chi}_{\bth_j'})}(\bxi+ \bphi_j'). \end{split} \end{equation} 3) For any $s= 2, \dots, \ell$ and any $k_1, \dots, k_p$ $(p\geqslant2)$ such that $k_1+ \dots+ k_p= s$, $\widehat{\ad(H_0; \Psi_{k_1}, \dots, \Psi_{k_p})}(\bth, \bxi)= 0$ for $\bth\not\in\Bth_{s}$. Otherwise, \begin{equation}\label{inds3} \begin{split} &\widehat{\ad(H_0; \Psi_{k_1}, \dots, \Psi_{k_p})}(\bth, \bxi)\\ &= \sum_{\substack{\bth_j, \bth_s\in\Bth\\ \bphi_j, \bphi_s, \bphi_j'\in \Bth_{s}\\ \bth_j'\in \Bth_{s}'\\ 1\leqslant j\leqslant s- 1}}\sum_{p= 1}^s\sum_{\substack{\bth_q'', \bphi_q''\in \Bth_{s}\\ 1\leqslant q\leqslant p- 1}} \sum_{\substack{\nu_1, \dots, \nu_{2s+ p- 2}\geqslant 0\\ \sum\nu_i= s- 1}}\prod_{q= 1}^{p- 1}\widehat{(\nabla^{\nu_q}e_{\bth_q''}\varphi_{\bth_q''})}(\bxi+ \bphi_q'')\\ &\times\widehat{(\nabla^{\nu_{p}}b)}(\bth_{s}, \bxi+ \bphi_{s})\prod\limits_{j= 1}^{s- 1} \widehat{(\nabla^{\nu_{p+ j}}b)}(\bth_j, \bxi+ \bphi_j)\widehat{(\nabla^{\nu_{p+ s- 1+ j}}\tilde{\chi}_{\bth_j'})}(\bxi+ \bphi_j'). \end{split} \end{equation} For $\ell =2$ assumptions 1)--3) can be easily checked. 
Indeed, by \eqref{psihat:eq}, \eqref{comm:eq} and \eqref{nabla of product}, \begin{equation*} \hat{\psi}_1(\bth,\bxi) =i\hat{b}(\bth,\bxi)\tilde{\chi}_{\bth}(\bxi), \end{equation*} \begin{equation*} \begin{split} &\widehat{\ad\big(\op(b); \Psi_1\big)}(\bth, \bxi)= \sum_{\boldsymbol\chi\in\Bth\cup(\bth- \Bth)}\big(\hat{b}(\bth, \bxi)\hat{b}(\bth- \boldsymbol\chi, \bxi+ \boldsymbol\chi)\widehat{(\nabla_{\boldsymbol\chi}\tilde{\chi}_{\bth- \boldsymbol\chi})}(\bxi)\\ &+ \hat{b}(\bth, \bxi)\widehat{(\nabla_{\boldsymbol\chi}b)}(\bth- \boldsymbol\chi, \bxi)\tilde{\chi}_{\bth- \boldsymbol\chi}(\bxi)- \widehat{(\nabla_{\bth- \boldsymbol\chi}b)}(\boldsymbol\chi, \bxi)\hat b(\bth- \boldsymbol\chi, \bxi)\tilde\chi_{\bth- \boldsymbol\chi}(\bxi)\big), \end{split} \end{equation*} \begin{equation*} \begin{split} \widehat{\ad(H_0; \Psi_1, \Psi_1)}(\bth, \bxi)&= \sum_{\boldsymbol\chi\in\Bth\cup(\bth- \Bth)}\big(\hat b(\boldsymbol\chi, \bxi+ \bth- \boldsymbol\chi)\widehat{(\nabla_{\bth- \boldsymbol\chi}\varphi_{\boldsymbol\chi} e_{\boldsymbol\chi})}(\bxi)\hat b(\bth- \boldsymbol\chi, \bxi)\tilde\chi_{\bth- \boldsymbol\chi}(\bxi)\\ &+ \widehat{(\nabla_{\bth- \boldsymbol\chi}b)}(\boldsymbol\chi, \bxi)\varphi_{\boldsymbol\chi}(\bxi)e_{\boldsymbol\chi}(\bxi)\hat b(\bth- \boldsymbol\chi, \bxi)\tilde\chi_{\bth- \boldsymbol\chi}(\bxi)\\ &- \varphi_{\bth}(\bxi)e_{\bth}(\bxi)\hat{b}(\bth, \bxi)\hat{b}(\bth- \boldsymbol\chi, \bxi+ \boldsymbol\chi)\widehat{(\nabla_{\boldsymbol\chi}\tilde{\chi}_{\bth- \boldsymbol\chi})}(\bxi)\\ &- \varphi_{\bth}(\bxi)e_{\bth}(\bxi)\hat{b}(\bth, \bxi)\widehat{(\nabla_{\boldsymbol\chi}b)}(\bth- \boldsymbol\chi, \bxi)\tilde{\chi}_{\bth- \boldsymbol\chi}(\bxi)\big). \end{split} \end{equation*} Now, we complete the induction in several steps. Step 1. First of all, notice that due to \eqref{bl:eq}, \eqref{tl:eq}, for any $m =2, \dots, \ell$ the symbol of $B_m$ admits a representation of the form \eqref{inds2} with $s= m- 1$, and the symbol of $T_m$ admits a representation of the form \eqref{inds3} with $s= m$. Then it follows from Lemma~\ref{commut:lem} and \eqref{psil:eq} that $\Psi_\ell$ admits a representation of the form \eqref{inds1}. Step 2. Proof of \eqref{inds2} with $s =\ell$. Let $k_1 +\dots +k_p =\ell$. If $p \geqslant2$, then $$ \ad\big(\op(b); \Psi_{k_1}, \dots, \Psi_{k_p}\big)= \ad\Big(\ad\big(\op(b); \Psi_{k_1}, \dots, \Psi_{k_{p- 1}}\big); \Psi_{k_p}\Big). $$ Since $k_1+ \dots+ k_{p- 1}\leqslant \ell- 1$ and $k_p\leqslant \ell- 1$ we can apply \eqref{inds1} and \eqref{inds2}. Combined with \eqref{comm:eq} it gives a representation of the form \eqref{inds2}. If $p= 1$ then $\ad\big(\op(b); \Psi_\ell\big)$ satisfies \eqref{inds2} because of \eqref{comm:eq} and step 1. Step 3. Proof of \eqref{inds3} with $s= \ell+ 1$. Let $k_1+ \dots+ k_p= \ell+ 1$, $p\geqslant2$. If $p\geqslant3$, then (cf. step 2) $$ \ad(H_0; \Psi_{k_1}, \dots, \Psi_{k_p})= \ad\big(\ad(H_0; \Psi_{k_1}, \dots, \Psi_{k_{p- 1}}); \Psi_{k_p}\big). $$ Since $k_1+ \dots+ k_{p- 1}\leqslant \ell$, $p- 1\geqslant 2$ and $k_p\leqslant \ell- 1$ we can apply \eqref{inds1} and \eqref{inds3}. Together with \eqref{comm:eq} it gives a representation of the form \eqref{inds3}. If $p= 2$ then (see \eqref{psil:eq}) $$ \ad(H_0; \Psi_{k_1}, \Psi_{k_2}) =\ad\big(\ad(H_0; \Psi_{k_1}); \Psi_{k_2}\big) = -\ad(B_{k_1}^{\natural} +T_{k_1}^{\natural}; \Psi_{k_2}). $$ Since $k_1 \leqslant\ell$ and $k_2 \leqslant\ell$, the representation of the form \eqref{inds3} follows from \eqref{comm:eq} and step 1.
(Formally exceptional case $k_1 =1$, $k_2 =\ell$ can be treated separately in the same way using \eqref{psi1:eq} instead of \eqref{psil:eq}.) Induction is complete. Now, \eqref{inds2}, \eqref{inds3} and \eqref{Y_tilde k}, \eqref{bl:eq}, \eqref{tl:eq} prove the lemma. \end{proof} \section{Contribution from various resonant regions}\label{contribution section} Let us fix a subspace $\GV \in\mathcal V_m$, $m <d$, and a component $\Bxi_p$ of the resonant region $\Bxi(\GV)$. Our aim is to compute the contribution to the density of states from each component $\Bxi_p$. Therefore, we define \begin{equation}\label{Apm} \mathcal A^+_p(\rho):= \mathcal A^+(\rho)\cap\Bxi_p\quad \textrm{and} \quad \mathcal A^-_p(\rho):= \mathcal A^-(\rho)\cap\Bxi_p \end{equation} and try to compute \begin{equation}\label{eq:n3} \vol\mathcal A^+_p(\rho)- \vol\mathcal A^-_p(\rho). \end{equation} Since formulas \eqref{eq:46} and \eqref{eq:CARd} obviously imply that \begin{equation}\label{eq:n2} \vol(\mathcal G_{\lambda})= \omega_d\rho^d+ \sum_{m= 0}^{d -1}\sum_{\GV\in \mathcal V_m}\sum_{p}\bigl(\vol\mathcal A^+_p(\rho)- \vol\mathcal A^-_p(\rho)\bigr), \end{equation} Lemma~\ref{main_lem} would be proved if we manage to compute \eqref{eq:n3} (or at least prove that this expression admits a complete asymptotic expansion in $\rho$). Note that if $\bxi\in\Bxi_p$, then we also have that $\BUps(\bxi)\subset \Bxi_p$. We denote \begin{equation*} H_2(\bxi) :=H_2|_{\plainH{}_{\bxi}}, \quad \plainH{}_{\bxi} :=\mathcal P\big(\BUps(\bxi)\big)\textup{{\textsf{B}}}_2( \mathbb R^d) \end{equation*} (recall that $\plainH{}_{\bxi}$ is an invariant subspace of $H_2$ acting in $\textup{{\textsf{B}}}_2( \mathbb R^d)$). Suppose now that two points $\bxi$ and $\boldeta$ have the same coordinates $\mathbf X$ and $\mathbf\Phi$ and different coordinates $r$. Then $\bxi\in\Bxi_p$ implies $\boldeta \in\Bxi_p$ and $\BUps(\boldeta)= \BUps(\bxi)+ (\boldeta- \bxi)$. This shows that two spaces $\plainH{}_{\bxi}$ and $\plainH{}_{\boldeta}$ have the same dimension and, moreover, there is a natural isometry $F_{\bxi,\boldeta}: \plainH{}_{\bxi}\to\plainH{}_{\boldeta}$ given by $F: \be_{\bnu}\mapsto\be_{\bnu+ (\boldeta- \bxi)}$, $\bnu \in\BUps(\bxi)$. This isometry allows us to `compare' operators acting in $\plainH{}_{\bxi}$ and $\plainH{}_{\boldeta}$. Thus, abusing slightly our notation, we can assume that $H_2(\bxi)$ and $H_2(\boldeta)$ act in the same (finite dimensional) Hilbert space $\plainH{}(\mathbf X, \mathbf\Phi)$. We will fix the values $(\mathbf X, \mathbf\Phi)$ and study how these operators depend on $r$. Thus, we denote by $H_2(r) =H_2(r; \mathbf X, \mathbf\Phi)$ the operator $H_2(\bxi)$ with $\bxi =(\mathbf X, r, \mathbf\Phi)$, acting in $\plainH{}(\mathbf X, \mathbf\Phi)$. Let $W_{\tilde k}(r)$ be the operator in $\plainH{}(\mathbf X, \mathbf\Phi)$ with the symbol $w_{\tilde k}\big(\mathbf x,\boldsymbol\xi(\mathbf X, r, \mathbf\Phi)\big)$. According to formula \eqref{a_dot_eta}, for any $s\leqslant \tilde k- 1$ and $\bth\in\Bth_{s+ 1}$ \begin{equation}\label{xi+ phi modulus squared} |\bxi+ \bphi|^2= r^2+ 2r|\ba|\sum_{q= 1}^{K+ 1}a_{K+ 1\, q}\sin\Phi_q+ 2\langle\bxi, \bphi\rangle+ |\mathbf X|^2+ |\ba|^2+ |\bphi|^2. 
\end{equation} This, together with \eqref{symbol series}, \eqref{b_iota} and \eqref{series for b}, implies that for $|\bxi+ \bphi|> C_0$ the coefficients $\hat{b}(\bth,\bxi+ \bphi)$ can be represented as the absolutely convergent series \begin{equation}\label{b series} \begin{split} \hat{b}(\bth,\bxi+ \bphi)= \sum_{\iota\in \widetilde J}\sum_{l= 0}^\infty\sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\leqslant l\\ j_1, \dots, j_d\geqslant 0\\ j_1+ \cdots+ j_d\leqslant l}}C^{\iota\, j_1\cdots j_d}_{l\, n_1\cdots n_{K+ 1}}(\mathbf X; \bth)r^{\iota- l}\phi_1^{j_1}\cdots\phi_d^{j_d}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}, \end{split} \end{equation} where the coefficients satisfy \begin{equation*} \big|C^{\iota\, j_1\cdots j_d}_{l\, n_1\cdots n_{K+ 1}}(\mathbf X; \bth)\big|\lesssim \rho_n^{(l- j_1- \cdots- j_d)(\alpha_{m+ 1}+ 0+)}. \end{equation*} In the next lemma, to facilitate the expansion of the RHS of \eqref{symbol equation} in a suitable form, we transform the denominator of $\tilde\chi_{\bth'}$ (recall \eqref{chi}). In the subsequent calculations we will use the generalized binomial coefficients: \begin{equation}\label{binomial coefficient} \binom{p}{j}:= \begin{cases} 1, & j= 0;\\ \displaystyle \frac1{j!}\prod_{k= 0}^{j- 1}(p- k), & j\in \mathbb N. \end{cases} \end{equation} \begin{lem}\label{denominator lemma} For $s\leqslant \tilde k- 1$, $\bphi'\in\Bth_{s+ 1}$, $\bth'\in\Bth_{s+ 1}'$, and $\bxi$ in the support of $e_{\bth'}\varphi_{\bth'}$ let \begin{equation*} \begin{split} D&:= \frac1w\sum_{j= 2}^\infty\binom{w}{j}r^{2- 2j}\sum_{k= 0}^{j- 1}\binom{j}{k}\Big(2r|\ba|\sum_{q= 1}^{K+ 1}a_{K+ 1\, q}\sin\Phi_q+ 2\langle\bxi, \bphi'\rangle+ |\mathbf X|^2+ |\ba|^2+ |\bphi'|^2\Big)^{k}\\ &\times\big(2\langle\bxi, \bth'\rangle+ 2\langle\bphi', \bth'\rangle+ |\bth'|^2\big)^{j-k- 1}. \end{split} \end{equation*} Then $|D|\lesssim \rho_n^{-1+ \alpha_{m+ 1}+ 0+}$ and \begin{equation}\label{denominator formula} \big(|\bxi+ \bphi'+ \bth'|^{2w}-|\bxi+ \bphi'|^{2w}\big)^{-1}= w^{-1}r^{2- 2w}\big(2\langle\bxi, \bth'\rangle+ 2\langle\bphi', \bth'\rangle+ |\bth'|^2\big)^{-1}\sum_{a= 0}^\infty (-D)^a. \end{equation} \end{lem} \begin{proof} We introduce a shorthand \begin{equation*} N:= 2r|\ba|\sum_{q= 1}^{K+ 1}a_{K+ 1\, q}\sin\Phi_q+ 2\langle\bxi, \bphi'\rangle+ |\mathbf X|^2+ |\ba|^2+ |\bphi'|^2. \end{equation*} Then by the (generalized) binomial formula and \eqref{xi+ phi modulus squared} we obtain \begin{equation}\label{D+ N} \begin{split} &|\bxi+ \bphi'+ \bth'|^{2w}-|\bxi+ \bphi'|^{2w}\\ &= \big(|\bxi|^2+ 2\langle\bxi, \bphi'+ \bth'\rangle+ |\bphi'+ \bth'|^2\big)^w- \big(|\bxi|^2+ 2\langle\bxi, \bphi'\rangle+ |\bphi'|^2\big)^w\\ &= \big(r^2+ N+ 2\langle\bxi, \bth'\rangle+ 2\langle\bphi', \bth'\rangle+ |\bth'|^2\big)^w- (r^2+ N)^w\\ &= r^{2w}\sum_{j= 1}^\infty\binom{w}{j}r^{-2j}\Big(\big(N+ 2\langle\bxi, \bth'\rangle+ 2\langle\bphi', \bth'\rangle+ |\bth'|^2\big)^j- N^j\Big)\\ &= wr^{2w- 2}\big(2\langle\bxi, \bth'\rangle+ 2\langle\bphi', \bth'\rangle+ |\bth'|^2\big)(1+ D). \end{split} \end{equation} The estimate on $|D|$ follows from estimates \eqref{bound on R} and \eqref{bound on a}, and Lemmas~\ref{lem:Al} and \ref{lem:propBUps}. Now \eqref{denominator formula} follows from \eqref{D+ N}.
\end{proof} As we have seen from the previous sections, the symbol of the operator $H_2$ satisfies \begin{equation}\label{eq:nn1} h_2(\bx,\bxi)= |\bxi|^{2w}+ {w}_{\tilde k}(\bx,\bxi)= \big(r^2+ 2r\langle\ba, \mathbf\Phi\rangle+ |\ba|^2+ |\mathbf X|^2\big)^w+ {w}_{\tilde k}(\bx,\bxi), \end{equation} where $w_{\tilde k}$ is given by \eqref{eq:newy} and \eqref{symbol equation}. \begin{rem}\label{cutoff remark} In this section we assume that $\bxi\in\mathcal A$, so by \eqref{bxi in CA} $2\rho_n/3\le |\bxi|\le 6\rho_n$, and by Remark~\ref{partition support remark} all functions $e_{\bth}(\bxi+ \cdot)$ from \eqref{eq:newy} and \eqref{symbol equation} are equal to $1$. Note that if $\bth\in\Bth_{\tilde k}$, $\bphi\in\Bth_{\tilde k}$, and $\bth\not\in\GV$, then (see Lemma \ref{lem:products} and \eqref{phizeta:eq}) $\varphi_{\bth}(\bxi+ \bphi)= 1$. This means that all cut-off functions from \eqref{eq:newy} and \eqref{symbol equation} are equal to $1$ unless $\bth\in\GV$. If, on the other hand, $\bth\in\GV$, then $\varphi_{\bth}(\bxi+ \bphi)$ depends only on the projection $\bxi_{\GV}$ and thus is a function only of the coordinates $\mathbf X$. \end{rem} By Proposition~\ref{bound:prop}, \eqref{eq:newy}, Lemma~\ref{symbol}, formulas \eqref{b series} and \eqref{denominator formula}, Lemma~\ref{lem:products}, and Remark~\ref{cutoff remark}, for $r\asymp \rho_n$ \begin{equation}\label{early derivative estimate} \Big\|\frac{d^l}{dr^l}W_{\tilde k}(r)\Big\|\lesssim\rho_n^{\varkappa- l+ 0+}, \qquad l\geqslant 0. \end{equation} This, together with \eqref{eq:nn1}, implies \begin{lem}\label{monotonicity of H_2(r) lemma} The operator $H_2(r)$ is monotonically increasing in $r$; in particular, all its eigenvalues $\lambda_j\big(H_2(r)\big)$ are increasing in $r$. \end{lem} Thus the function $g\big(\bxi(\mathbf X, r, \mathbf\Phi)\big)$ (defined in Section~\ref{description section}) is an increasing function of $r$ if we fix the other coordinates of $\bxi$, so the equation \begin{equation*} g(\bxi)= \rho^{2w} \end{equation*} has a unique solution for fixed values of $\mathbf X$ and $\mathbf\Phi$; we denote the $r$-coordinate of this solution by $\tau= \tau(\rho)= \tau(\rho; \mathbf X, \mathbf\Phi)$, so that \begin{equation}\label{eq:tau1} g\big(\bxi(\mathbf X, \tau, \mathbf\Phi)\big)= \rho^{2w}. \end{equation} By $\tau_0= \tau_0(\rho)= \tau_0(\rho; \mathbf X, \mathbf\Phi)$ we denote the value of $\tau$ for $(-\Delta)^w$, i.e. $\tau_0$ is the unique solution of the equation \begin{equation*} \big|\bxi(\mathbf X, \tau_0, \mathbf\Phi)\big|= \rho. \end{equation*} Obviously, we can write down a precise analytic expression for $\tau_0$ (and we have done this in \cite{ParSht} in the two-dimensional case) and show that it admits an expansion in powers of $\rho$ and $\ln\rho$, but we will not need it. The definition \eqref{Apm} of the sets $\mathcal A^{\pm}_p(\rho)$ implies that the intersection \begin{equation*} \mathcal A^+_p(\rho)\cap\big\{\bxi(\mathbf X, r, \mathbf\Phi),\ r\in \mathbb R_+\big\} \end{equation*} consists of points with $r$-coordinate belonging to the interval $\big[\tau_0(\rho), \tau(\rho)\big]$ (where we assume the interval to be empty if $\tau_0> \tau$). Similarly, the intersection \begin{equation*} \mathcal A^-_p(\rho)\cap\big\{\bxi(\mathbf X, r, \mathbf\Phi),\ r\in \mathbb R_+\big\} \end{equation*} consists of points with $r$-coordinate belonging to the interval $\big[\tau(\rho), \tau_0(\rho)\big]$.
Therefore, \begin{equation*} \mathcal A^+_p(\rho)= \Big\{\bxi= \bxi(\mathbf X, r, \mathbf\Phi), \mathbf X\in\Omega(\GV), \mathbf\Phi\in \mathcal M_p, r\in\big[\tau_0(\rho; \mathbf X, \mathbf\Phi), \tau(\rho; \mathbf X, \mathbf\Phi)\big]\Big\} \end{equation*} and \begin{equation*} \mathcal A^-_p(\rho)= \Big\{\bxi= \bxi(\mathbf X, r, \mathbf\Phi), \mathbf X\in\Omega(\GV), \mathbf\Phi\in \mathcal M_p, r\in\big[\tau(\rho; \mathbf X, \mathbf\Phi),\tau_0(\rho; \mathbf X, \mathbf\Phi)\big]\Big\}. \end{equation*} This implies that (recall that $K= d- m- 1$) \begin{equation}\label{eq:n4} \begin{split} &\vol\mathcal A^+_p(\rho)- \vol\mathcal A^-_p(\rho) =\int_{\Omega(\GV)}d\mathbf X\int_{\mathcal M_p}d\mathbf\Phi\int_{\tau_0(\rho; \mathbf X, \mathbf\Phi)}^{\tau(\rho; \mathbf X, \mathbf\Phi)}r^{K}dr\\ &= (K+ 1)^{-1}\int_{\mathcal M_p}d\mathbf\Phi\int_{\Omega(\GV)}d\mathbf X\big(\tau(\rho; \mathbf X, \mathbf\Phi)^{K+ 1}- \tau_0(\rho; \mathbf X, \mathbf\Phi)^{K+ 1}\big). \end{split} \end{equation} \begin{rem}\label{K= 0 remark} Note that in the case $K= 0$ the simplex $\mathcal M_p$ is degenerate and there is no integration in $d\mathbf\Phi$. \end{rem} Obviously, it is enough to compute the part of \eqref{eq:n4} containing $\tau$, since the second part (containing $\tau_0$) can be computed analogously. We start by considering \begin{equation}\label{eq:n5} \int_{\Omega(\GV)}\tau(\rho; \mathbf X, \mathbf\Phi)^{K+ 1}d\mathbf X. \end{equation} First of all, we notice that if $\bxi,\boldeta\in\BXi(\GV)$ are resonant congruent points, then, according to Lemma~\ref{lem:Upsilon}, all vectors $\bth_j$ from Definition~\ref{reachability:defn} of equivalence belong to $\GV$. This naturally leads to the definition of equivalence for projections $\bxi_{\GV}$ and $\boldeta_{\GV}$. Namely, we say that two points $\bnu$ and $\bmu$ from $\Omega(\GV)$ are $\GV$-equivalent (and write $\bnu\leftrightarrow_{\GV}\bmu$) if $\bnu$ and $\bmu$ are equivalent in the sense of Definition~\ref{reachability:defn} with the additional requirement that all $\bth_j\in\GV$. Then $\bxi \leftrightarrow\boldeta$ implies $\bxi_{\GV} \leftrightarrow_{\GV}\boldeta_{\GV}$. For $\bnu \in\Omega(\GV)$ we denote by $\BUps_{\GV}(\bnu)$ the equivalence class of $\bnu$ generated by $\leftrightarrow_{\GV}$. Then $\BUps_{\GV}(\bxi_{\GV})$ is the projection of $\BUps(\bxi)$ to $\GV$ and is, therefore, finite. Since $\BUps_{\GV}(\bnu)$ is a finite set for each $\bnu\in\Omega(\GV)$, we can re-write \eqref{eq:n5} as \begin{equation}\label{eq:n6} \int_{\Omega(\GV)}\tau(\rho; \mathbf X, \mathbf\Phi)^{K+ 1}d\mathbf X= \int_{\Omega(\GV)}\big(\card\BUps_{\GV}(\bnu)\big)^{-1}\sum_{\mathbf X\in\BUps_{\GV}(\bnu)}\tau(\rho; \mathbf X, \mathbf\Phi)^{K+ 1}d\bnu \end{equation} and try to compute \begin{equation*} \sum_{\mathbf X\in\BUps_{\GV}(\bnu)}\tau(\rho; \mathbf X, \mathbf\Phi)^{K+ 1}. \end{equation*} Remark~\ref{cutoff remark}, together with equations \eqref{eq:nn1}, \eqref{eq:newy}, and \eqref{symbol equation}, shows that $H_2(r)$ depends on $r$ analytically, so we can and will consider the family $H_2(z)$ with complex values of the parameter $z$ with $\Re e~z\asymp \rho$. Likewise, we analytically continue the function $\bxi(\mathbf X, r, \mathbf\Phi)$ to \begin{equation}\label{extended xi} \bxi(\mathbf X, z, \mathbf\Phi):= \mathbf X+ \ba+ z\mathbf\Phi.
\end{equation} We also introduce the analytic continuation $|\cdot|_{\mathbb C}$ of the modulus of vectors, so that \begin{equation}\label{modulus complexified} |\bxi|_{\mathbb C}^{2}:= z^2+ 2z\langle\ba, \mathbf\Phi\rangle+ |\ba|^2+ |\mathbf X|^2. \end{equation} Formulas \eqref{eq:newy} and \eqref{symbol equation} give matrix elements of $H_2(z)$ in an orthonormal basis even for complex $z$. We choose a contour \begin{equation}\label{contour} \gamma:= \bigg\{z\in\mathbb C:\, |z- \rho|= t\rho_n:= \Big(8\max\big\{(2w- 2)/3, 1\big\}\Big)^{-1}\rho_n\bigg\} \end{equation} to be a circle in the complex plane going in the positive direction. Estimates \eqref{early derivative estimate} remain valid after the analytic continuation: for all $z$ inside and on $\gamma$ \begin{equation}\label{derivative estimate} \Big\|\frac{d^l}{dz^l}W_{\tilde k}(z)\Big\|\lesssim\rho_n^{\varkappa- l+ 0+}, \qquad l\geqslant 0. \end{equation} \begin{lem}\label{key_lemma} For $\rho\in I_n= [\rho_n, 4\rho_n]$ all $\tau(\rho; \mathbf X, \mathbf\Phi)$ lie inside $\gamma$. These are the only zeros of the function $\det\big(H_2(z)- \rho^{2w}I\big)$ inside the contour. \end{lem} \begin{proof} Let $r:= \Re e~z$, $y:= \Im m~z$. For $y= 0$ the operator $H_2(r)$ is self-adjoint. Thus it has $\card\BUps_{\mathfrak V}(\boldsymbol\nu)$ real eigenvalues. Now for $r\geqslant \rho+ t\rho_n\geqslant (1+ t/4)\rho$ relations \eqref{eq:nn1}, \eqref{bound on a}, Lemma~\ref{lem:propBUps}(ii), and \eqref{derivative estimate} imply \begin{equation*} H_2(r)\geqslant \Big(\big((1+ t/4)\rho\big)^{2w}\big(1- O(\rho^{\alpha_{m+ 1}- 1+ 0+})\big)- O(\rho^{\varkappa+ 0+})\Big)I. \end{equation*} Thus by \eqref{beta and alphas} and \eqref{alpha} for large $\rho$ no eigenvalue of $H_2(r)$ can coincide with $\rho^{2w}$. Likewise for $r\leqslant \rho- t\rho_n\leqslant (1- t/4)\rho$ for large $\rho$ we have \begin{equation*} H_2(r)\leqslant \Big(\big((1- t/4)\rho\big)^{2w}\big(1+ O(\rho^{\alpha_{m+ 1}- 1+ 0+})\big)+ O(\rho^{\varkappa+ 0+})\Big)I, \end{equation*} and no eigenvalue of $H_2(r)$ can coincide with $\rho^{2w}$. This implies that all real zeros of the function $\det\big(H_2(r)- \rho^{2w}I\big)$ lie in the interval $(\rho- t\rho_n, \rho+ t\rho_n)$. By \eqref{eq:tau1} and Lemma~\ref{monotonicity of H_2(r) lemma} these zeros coincide with $\big\{\tau(\rho; \mathbf X, \mathbf\Phi): \mathbf X\in\BUps_{\GV}(\bnu)\big\}$. It remains to show that $H_2(r+ iy)- \rho^{2w}I$ is invertible for any nonzero $y$ such that $r+ iy$ is inside or on $\gamma$. Relation \eqref{extended xi}, Lemma~\ref{lem:propBUps}(ii), definition \eqref{L_j}, and bound \eqref{bound on a} imply that inside and on the contour \begin{equation*} |\bxi|_{\mathbb C}= (r+ iy)\big(1+ O(\rho_n^{-1+ \alpha_{m+ 1}+ 0+})\big) \end{equation*} and \begin{equation*} \mathrm{arg\,}|\boldsymbol\xi|_{\mathbb C}\leqslant \big(1+ o(1)\big)\arcsin(t\rho_n/\rho)\leqslant \big(1+ o(1)\big)\arcsin t\leqslant t\big(1+ o(1)\big). \end{equation*} Hence \begin{equation*} \big||\boldsymbol\xi|_{\mathbb C}^{2w}\big|= \big||\boldsymbol\xi|_{\mathbb C}^2\big|^w\asymp \rho^{2w}\quad \textrm{and}\quad \mathrm{arg\,}|\boldsymbol\xi|_{\mathbb C}^{2w}= w\arcsin\frac{2y\big(r+ \langle\mathbf a, \mathbf\Phi\rangle\big)}{\big||\boldsymbol\xi|_{\mathbb C}^2\big|}\asymp y\rho^{-1}, \end{equation*} which implies that \begin{equation}\label{Im of main symbol} \Big|\mathrm{Im}\big(|\boldsymbol\xi|_{\mathbb C}^{2w}\big)\Big|\gtrsim |y|\rho^{2w- 1}.
\end{equation} Now for any $\Psi\in\plainH{}(\mathbf X, \mathbf\Phi)$ with $\|\Psi\|= 1$ we have by \eqref{Im of main symbol} and \eqref{derivative estimate} \begin{equation*}\label{idea} \begin{split} \Big\|\big(H_2(z)- \rho^{2w}I\big)\Psi\Big\|&\geqslant \Big|\textrm{Im}\langle\big(H_2(z)- \rho^{2w}I\big)\Psi, \Psi\rangle\Big|\\ &\geqslant \Big|\textrm{Im}\big(|\boldsymbol\xi|_{\mathbb C}^{2w}\big)\Big|- |y|\underset{t\in[0, y]}{\textrm{sup}}\big\|W'(r+ it)\big\|\gtrsim |y|\rho^{2w- 1}, \end{split} \end{equation*} where we have used that for $y= 0$ the quadratic form of $W(z)$ is real-valued. So the kernel of $H_2(r+ iy)- \rho^{2w}I$ is trivial for $y\neq 0$. \end{proof} \begin{lem}\label{z-denominator lemma} For $z\in\gamma$ and $l\in \mathbb N$ \begin{equation}\label{denominator representation} (z^{2w}- \rho^{2w})^{-l}= \rho^{-2wl}\sum_{j= 0}^\infty A_{l\, j}\Big(\frac{z- \rho}\rho\Big)^{j- l}, \end{equation} where \begin{equation*} A_{l\, j}=\begin{cases} (2w)^{-l}, & j= 0;\\ \displaystyle\frac1{(2w)^l}\sum_{p= 1}^j\frac1{(2w)^{p}}\binom{-l}{p}\sum_{\substack{q_1, \dots, q_p\geqslant 1\\ q_1+ \cdots +q_p= j}}\binom{2w}{q_1+ 1}\binom{2w}{q_2+ 1}\cdots\binom{2w}{q_p+ 1}, & j> 0. \end{cases} \end{equation*} The series in \eqref{denominator representation} converges absolutely. \end{lem} \begin{proof} A straightforward calculation gives \begin{equation}\label{before progression} \begin{split} (z^{2w}- \rho^{2w})^{-l}&= \frac1{\rho^{2wl}}\bigg(\Big(1+ \frac{z- \rho}{\rho}\Big)^{2w}- 1\bigg)^{-l}= \frac1{\rho^{2wl}}\bigg(\sum_{q= 1}^\infty\binom{2w}{q}\Big(\frac{z- \rho}{\rho}\Big)^q\bigg)^{-l}\\ &= \frac{\rho^{-2wl}}{(2w)^l}\Big(\frac{z- \rho}{\rho}\Big)^{-l}\bigg(1+ \frac1{2w}\sum_{q= 1}^\infty\binom{2w}{q+ 1}\Big(\frac{z- \rho}{\rho}\Big)^q\bigg)^{-l}. \end{split} \end{equation} If $2w\in\mathbb N$, then the series on the right hand side is finite. Otherwise, by \eqref{contour} and \eqref{binomial coefficient}, for $z\in\gamma$ the ratio of absolute values of any two consecutive terms of the series satisfies \begin{equation*} \bigg|\frac{z-\rho}\rho\binom{2w}{q+ 2}\binom{2w}{q+ 1}^{-1}\bigg|= \Big|\frac{z-\rho}\rho\Big|\frac{|2w- q- 1|}{q+ 2}\leqslant \frac18, \qquad q\geqslant 1. \end{equation*} So, again by \eqref{contour} and \eqref{binomial coefficient}, we have \begin{equation*} \bigg|\sum_{q= 1}^\infty\binom{2w}{q+ 1}\Big(\frac{z- \rho}{\rho}\Big)^q\bigg|< \bigg|\binom{2w}{2}\bigg|\frac{|z- \rho|}\rho\sum_{q= 0}^\infty\frac1{8^q}\leqslant \frac{4w}7. \end{equation*} Thus we can decompose the expression on the right hand side of \eqref{before progression} into an absolutely convergent series, obtaining \begin{equation*} \begin{split} &(z^{2w}- \rho^{2w})^{-l}= \frac{\rho^{-2wl}}{(2w)^l}\Big(\frac{z- \rho}{\rho}\Big)^{-l}\sum_{p= 0}^\infty\binom{-l}{p}\frac1{(2w)^p}\bigg(\sum_{q= 1}^\infty\binom{2w}{q+ 1}\Big(\frac{z- \rho}{\rho}\Big)^q\bigg)^p\\ &= \frac{\rho^{-2wl}}{(2w)^l}\Big(\frac{z- \rho}{\rho}\Big)^{-l}\bigg(1+ \sum_{j= 1}^\infty\Big(\frac{z- \rho}{\rho}\Big)^{j}\sum_{p= 1}^j\frac1{(2w)^p}\binom{-l}{p}\sum_{\substack{q_1, \dots, q_p\geqslant 1\\ q_1+ \cdots +q_p= j}}\binom{2w}{q_1+ 1}\cdots\binom{2w}{q_p+ 1}\bigg), \end{split} \end{equation*} which finishes the proof. \end{proof} Let $S(z):= H_2(z)- z^{2w}I$ in $\plainH{}(\mathbf X, \mathbf\Phi)$.
Then by \eqref{eq:nn1} on $\gamma$ the symbol of $S(z)$ admits the representation \begin{equation}\label{symbol for S} s(z)= \sum_{v= 1}^\infty\binom{w}{v}z^{2w- v}\Big(2\langle\ba, \mathbf\Phi\rangle+ z^{-1}\big(|\ba|^2+|\mathbf X|^2\big)\Big)^v+ w_{\tilde k}(z). \end{equation} Relations \eqref{symbol for S}, \eqref{derivative estimate}, \eqref{bound on a}, Lemma~\ref{lem:propBUps}(ii), and \eqref{beta and alphas} imply that everywhere inside and on $\gamma$ \begin{equation}\label{eq:newS2} \Big\|\frac{d^l}{dz^l}S(z)\Big\|\lesssim \rho_n^{2w- 1+ \alpha_{m+ 1}- l+ 0+},\ \ l\ge 0. \end{equation} A version of Jacobi's formula states that for any differentiable invertible matrix-valued function $F(z)$ we have $$ \tr\big[F'(z)F^{-1}(z)\big]=\Big(\det\big[F(z)\big]\Big)'\Big(\det\big[F(z)\big]\Big)^{-1} $$ (it can be proved, for example, using the expansion of the determinant along rows and induction on the size of $F$). Then by Lemma~\ref{z-denominator lemma} and the residue theorem \begin{equation}\label{eq:residues} \begin{split} &\sum_{\mathbf X\in\BUps_{\GV}(\bnu)}\tau(\rho;\mathbf X,\mathbf\Phi)^{K+ 1}\\ &= \frac{1}{2\pi i}\oint_\gamma z^{K+ 1}\Big(\det\big[H_2(z)- \rho^{2w}I\big]\Big)'\Big(\det\big[H_2(z)- \rho^{2w}I\big]\Big)^{-1}dz\\ &= \frac{1}{2\pi i}\oint_\gamma \tr\Big[z^{K+ 1} H_2'(z)\big(H_2(z)- \rho^{2w}I\big)^{-1}\Big]dz\\ &= \frac{1}{2\pi i}\oint_\gamma \tr\Big[\big(2wz^{2w+ K}I+ z^{K+ 1}S'(z)\big)\sum_{l=0}^\infty (-1)^lS^l(z)(z^{2w}- \rho^{2w})^{-1- l}\Big]dz\\ &= \frac{1}{2\pi i}\oint_\gamma \tr\Big[\big(2wz^{2w+ K}I+ z^{K+ 1}S'(z)\big)\\ &\qquad\times\sum_{l= -\infty}^\infty(z- \rho)^{-1- l}\sum_{j= 0}^\infty(-1)^{l+ j}A_{1+ l+ j\, j}\rho^{1+ l- 2w(1+ l+ j)}S^{l+ j}(z)\Big]dz\\ &= \sum_{l= 0}^\infty\frac1{l!}\tr\frac{d^l}{dr^l}\Big[\big(2wr^{2w+ K}I+ r^{K+ 1}S'(r)\big)\sum_{j= 0}^\infty(-1)^{l+ j}A_{1+ l+ j\, j}\rho^{1+ l- 2w(1+ l+ j)}S^{l+ j}(r)\Big]\Big|_{r= \rho}. \end{split} \end{equation} We can restrict the summation on the RHS of \eqref{eq:residues} to \begin{equation*} l+ j\leqslant l_0:= \big(M+ K+ d+ 1+ (d- 1)\alpha_{d- 1}- 2w\big)/(1- \alpha_{m+ 1}). \end{equation*} Indeed, using the trivial fact that for any linear operator $A$ in the finite dimensional Hilbert space spanned by $\be_{\bth}$ with $\bth\in \BUps_{\GV}(\bnu)$ \begin{equation*} |\tr A|\leqslant \|A\|\card\BUps_{\GV}(\bnu), \end{equation*} estimate \eqref{eq:newS2}, and relation \eqref{beta and alphas} we can see that the sum of the terms in \eqref{eq:residues} with $l+ j> l_0$ contributes only to the order $O(\rho_n^{-M+ 2w- d})$ in \eqref{eq:n6}, and thus after integration in $\mathbf\Phi$ the corresponding term can be included into the remainder $R_{\tilde k}$ of Section~\ref{description section}. Formula \eqref{eq:n4} shows that in order to compute the contribution to the density of states from $\Bxi(\GV)_p$, we need to integrate the RHS of \eqref{eq:residues} against $d\bnu$ and $d\mathbf\Phi$.
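For orientation we note that in \eqref{eq:residues} the term with $l= j= 0$ coming from the summand $2wr^{2w+ K}I$ equals \begin{equation*} 2w\rho^{2w+ K}A_{1\, 0}\rho^{1- 2w}\card\BUps_{\GV}(\bnu)= \rho^{K+ 1}\card\BUps_{\GV}(\bnu), \end{equation*} which is exactly the value the left hand side of \eqref{eq:residues} would take if every $\tau(\rho; \mathbf X, \mathbf\Phi)$ were equal to $\rho$; every other term contains at least one factor of $S$ or $S'$ and is of lower order in view of \eqref{eq:newS2} and \eqref{beta and alphas}.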
We are going to integrate against $d\mathbf\Phi$ first: \begin{equation}\label{integration in Phi} \begin{split} &\int_{\mathcal M_p}d\mathbf\Phi\int_{\Omega(\GV)}\big(\card\BUps_{\GV}(\bnu)\big)^{-1}\sum_{\mathbf X\in\BUps_{\GV}(\bnu)}\tau(\rho; \mathbf X, \mathbf\Phi)^{K+ 1}\\ &= \int_{\Omega(\GV)}d\bnu\big(\card\BUps_{\GV}(\bnu)\big)^{-1}\sum_{l= 0}^{l_0}\sum_{j= 0}^{l_0- l}\frac{(-1)^{l+ j}}{l!}A_{1+ l+ j\, j}\rho^{1+ l- 2w(1+ l+ j)}\\ &\times \tr\frac{d^l}{dr^l}\Big[\int_{\mathcal M_p}d\mathbf\Phi\big(2wr^{2w+ K}I+ r^{K+ 1}S'(r)\big)S^{l+ j}(r)\Big]\Big|_{r= \rho}+ O(\rho_n^{-M+ 2w- d})\\ &= O(\rho_n^{-M+ 2w- d})+ \int_{\Omega(\GV)}\frac{d\bnu}{\card\BUps_{\GV}(\bnu)}\sum_{l= 0}^{l_0}\sum_{j= 0}^{l_0- l}\frac{(-1)^{l+ j}}{l!}A_{1+ l+ j\, j} \rho^{1+ l- 2w(1+ l+ j)}\\ &\times\tr\bigg[\frac{d^l}{dr^l}\Big(2wr^{2w+ K}\int_{\mathcal M_p} S^{l+ j}(r)d\mathbf\Phi - \frac{(K+1)r^{K}}{l+ j+ 1}\int_{\mathcal M_p}S^{l+ j+ 1}(r)d\mathbf\Phi\Big)\\ &+ \frac{d^{l+ 1}}{dr^{l+ 1}}\Big(\frac{r^{K+ 1}}{l+ j+ 1}\int_{\mathcal M_p}S^{l+ j+ 1}(r)d\mathbf\Phi\Big)\bigg]\bigg|_{r= \rho}. \end{split} \end{equation} We will prove that the integrand of the exterior integral in \eqref{integration in Phi} is a convergent series of products of powers of $\rho$ and $\ln\rho$. The coefficients in front of all terms will be bounded functions of $\mathbf X$, so afterwards we will just integrate these coefficients to obtain the desired asymptotic expansion. Let us discuss, how $S(r)$ depends on $\rho$, $\mathbf X$ and $\mathbf\Phi$. In order to do this, we first look again at \eqref{symbol equation}. As follows from Remark~\ref{cutoff remark}, the product $e_{\bth_q''}\varphi_{\bth_q''}$ does not depend on $r$ and $\mathbf\Phi$, and by \eqref{varphi:eq} \begin{equation}\label{Delta cutoffs} \|\widehat{\nabla^{\nu}e_{\bth_q''}\varphi_{\bth_q''}}\|_{\plainL\infty( \mathbb R^d)}\lesssim \rho_n^{-\nu\b}. \end{equation} For any $\boldeta\in\Bth_{s+ 1}$ the application of the finite difference operator $\nabla_{\boldeta}$ to a polynomial decreases its degree by $1$. Hence formula \eqref{b series} ensures that \begin{equation}\label{Delta b} \begin{split} \widehat{(\nabla^{\nu}b)}(\bth, \bxi+ \bphi)= \sum_{\iota\in \widetilde J}\sum_{i= \nu}^\infty\sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\leqslant i\\ j_1, \dots, j_d\geqslant 0\\ j_1+ \cdots+ j_d\leqslant i- \nu}}\widetilde C^{\iota\, j_1\cdots j_d}_{i\, n_1\cdots n_{K+ 1}}(\mathbf X; \bth)r^{\iota- i}\phi_1^{j_1}\cdots\phi_d^{j_d}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}. \end{split} \end{equation} Here $\widetilde C^{\iota\, j_1\cdots j_d}_{i\, n_1\cdots n_{K+ 1}}(\mathbf X; \bth)$ depend on the coefficients of \eqref{Delta-power} and satisfy a uniform estimate \begin{equation* \big|\widetilde C^{\iota\, j_1\cdots j_d}_{i\, n_1\cdots n_{K+ 1}}(\mathbf X; \bth)\big|\lesssim \rho_n^{(i- \nu- j_1- \cdots- j_d)(\alpha_{m+ 1}+ 0+)}. \end{equation*} Now \begin{equation* \widehat{(\nabla^{\nu}\tilde\chi_{\bth})}(\bxi)= \sum_{\tilde\nu= 0}^{\nu}\widehat{(\nabla^{\tilde\nu}e_{\bth}\varphi_{\bth})}\Big(\bxi+ \sum_{p= 1}^{\tilde\nu}\boldeta_k\Big)\widehat{\Big(\nabla^{\nu- \tilde\nu}\big(|\cdot+ \bth|_{\mathbb C}^{2w}- |\cdot|_{\mathbb C}^{2w}\big)^{-1}\Big)}(\bxi). \end{equation*} The factors $\widehat{(\nabla^{\tilde\nu}e_{\bth}\varphi_{\bth})}$ satisfy the estimate \eqref{Delta cutoffs}. 
For $\boldeta\in \Bth_{s+ 1}$ we have \begin{equation* \begin{split} &\widehat{\Big(\nabla_{\boldeta}\big(|\cdot+ \bth|_{\mathbb C}^{2w}- |\cdot|_{\mathbb C}^{2w}\big)^{-1}\Big)}(\bxi)\\ &= \big(|\bxi+ \boldeta+ \bth|_{\mathbb C}^{2w}- |\bxi+ \boldeta|_{\mathbb C}^{2w}\big)^{-1}\big(|\bxi+ \bth|_{\mathbb C}^{2w}- |\bxi|_{\mathbb C}^{2w}\big)^{-1}G(\bxi; \bth, \boldeta), \end{split} \end{equation*} where \begin{equation* \begin{split} &G(\bxi; \bth, \boldeta):= |\bxi+ \bth|_{\mathbb C}^{2w}- |\bxi|_{\mathbb C}^{2w}- |\bxi+ \boldeta+ \bth|_{\mathbb C}^{2w}+ |\bxi+ \boldeta|_{\mathbb C}^{2w}\\ &= -2w\langle\boldeta, \bth\rangle|\bxi|_{\mathbb C}^{2w- 2}\\ &+ \sum_{j= 2}^\infty\binom wj|\bxi|_{\mathbb C}^{2w- 2j}\Big(\big(2\langle\bxi, \bth\rangle+ |\bth|^2\big)^j- \big(2\langle\bxi, \boldeta+ \bth\rangle+ |\boldeta+ \bth|^2\big)^j+ \big(2\langle\bxi, \boldeta\rangle+ |\boldeta|^2\big)^j\Big). \end{split} \end{equation*} In analogy to \eqref{Delta b} we have \begin{equation* \begin{split} \widehat{\big(\nabla^{\nu}G(\cdot; \bth, \boldeta)\big)}(\bxi)= \sum_{i= \nu}^\infty\sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\leqslant i+ 2}}\widetilde C^i_{n_1\cdots n_{K+ 1}}(\mathbf X; \bth, \boldeta)r^{2w- 2- i}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}, \end{split} \end{equation*} with \begin{equation}\label{Delta G coefficients} \big|\widetilde C^i_{n_1\cdots n_{K+ 1}}(\mathbf X; \bth, \boldeta)\big|\lesssim \rho_n^{(i- \nu)(\alpha_{m+ 1}+ 0+)+ 0+}. \end{equation} Altogether, applying relations \eqref{Delta cutoffs} -- \eqref{Delta G coefficients} to \eqref{eq:newy} and \eqref{symbol equation} we obtain \begin{equation}\label{W series} \begin{split} &w_{\tilde k}(\bth, \bxi)\\ &= \sum_{s= 0}^{\tilde k- 1}\sum_{\iota_0, \dots, \iota_s\in \widetilde J}\sum_{\mu= 0}^s\sum_{\substack{\boldeta_1, \dots, \boldeta_{s+ \mu}\in \Bth_{s+ 1}\\ \bth_1, \dots, \bth_{s+ \mu}\in \Bth'_{s+ 1}}}\sum_{p= 0}^{s- \mu}\sum_{i= 0}^\infty\sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\leqslant 2\mu+ p+ i}} C_{s\, \mu\, p\, i\, \iota_0\cdots \iota_s\, n_1\cdots n_{K+ 1}}^{\boldeta_1\cdots \boldeta_{s+ \mu}\, \bth_1\cdots \bth_{s+ \mu}}(\mathbf X; \bth)\\ &\times r^{(2w- 2)\mu+ \iota_0+ \cdots+ \iota_s- p- i}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}\prod_{v= 1}^{s+ \mu}\big(|\bxi+ \boldeta_v+ \bth_v|_{\mathbb C}^{2w}- |\bxi+ \boldeta_v|_{\mathbb C}^{2w}\big)^{-1}, \end{split} \end{equation} where \begin{equation* \big|C_{s\, \mu\, p\, i\, \iota_0\cdots \iota_s\, n_1\cdots n_{K+ 1}}^{\boldeta_1\cdots \boldeta_{s+ \mu}\, \bth_1\cdots \bth_{s+ \mu}}(\mathbf X; \bth)\big|\lesssim \rho_n^{i(\alpha_{m+ 1}+ 0+)- (s- \mu- p)\b+ 0+}. \end{equation*} According to Lemma~\ref{denominator lemma}, \begin{equation}\label{old to new denominator} \begin{split} &\big(|\bxi+ \boldeta_v+ \bth_v|_{\mathbb C}^{2w}- |\bxi+ \boldeta_v|_{\mathbb C}^{2w}\big)^{-1}\\ &= r^{2- 2w}\big(2\langle\bxi, \bth_v\rangle+ 2\langle\boldeta_v, \bth_v\rangle+ |\bth_v|^2\big)^{-1}\\ &\times\sum_{i= 0}^\infty \sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\leqslant i}}C_{n_1\cdots n_{K+ 1}}^i(\mathbf X; \boldeta_v, \bth_v)r^{-i}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}, \end{split} \end{equation} and here \begin{equation* \big|C_{n_1\cdots n_{K+ 1}}^i(\mathbf X; \boldeta_v, \bth_v)\big|\lesssim \rho_n^{i(\alpha_{m+ 1}+ 0+)}. 
\end{equation*} If we now substitute \eqref{old to new denominator} into \eqref{W series}, we obtain \begin{equation}\label{new W series} \begin{split} &w_{\tilde k}(\bth, \bxi)\\ &= \sum_{s= 0}^{\tilde k- 1}\sum_{\iota_0, \dots, \iota_s\in \widetilde J}\sum_{\mu= 0}^s\sum_{\substack{\boldeta_1, \dots, \boldeta_{s+ \mu}\in \Bth_{s+ 1}\\ \bth_1, \dots, \bth_{s+ \mu}\in \Bth'_{s+ 1}}}\sum_{p= 0}^{s- \mu}\sum_{i= 0}^\infty\sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\\ \leqslant 2\mu+ p+ i}} C_{s\, \mu\, p\, i\, \iota_0\cdots \iota_s\, n_1\cdots n_{K+ 1}}^{\boldeta_1\cdots \boldeta_{s+ \mu}\, \bth_1\cdots \bth_{s+ \mu}}(\mathbf X; \bth, \boldeta_v, \bth_v)\\ &\times r^{(2- 2w)s+ \iota_0+ \cdots+ \iota_s- p- i}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}\prod_{v= 1}^{s+ \mu}\big(2\langle\bxi, \bth_v\rangle+ 2\langle\boldeta_v, \bth_v\rangle+ |\bth_v|^2\big)^{-1}, \end{split} \end{equation} with \begin{equation*} \big|C_{s\, \mu\, p\, i\, \iota_0\cdots \iota_s\, n_1\cdots n_{K+ 1}}^{\boldeta_1\cdots \boldeta_{s+ \mu}\, \bth_1\cdots \bth_{s+ \mu}}(\mathbf X; \bth, \boldeta_v, \bth_v)\big|\lesssim \rho_n^{i(\alpha_{m+ 1}+ 0+)- (s- \mu- p)\b+ 0+}. \end{equation*} The first sum in \eqref{symbol for S} can be written in the form \begin{equation}\label{first sum for S} \begin{split} &\sum_{v= 1}^\infty\binom{w}{v}z^{2w- v}\Big(2\langle\ba, \mathbf\Phi\rangle+ z^{-1}\big(|\ba|^2+ |\mathbf X|^2\big)\Big)^v\\ &= \sum_{i= 0}^\infty\sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\leqslant i+ 1}}C^i_{n_1\cdots n_{K+ 1}}(\mathbf X)z^{2w- 1- i}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}, \end{split} \end{equation} where \begin{equation*} \big|C^i_{n_1\cdots n_{K+ 1}}(\mathbf X)\big|\lesssim \rho_n^{(i+ 1)(\alpha_{m+ 1}+ 0+)}. \end{equation*} Substituting \eqref{new W series} and \eqref{first sum for S} into \eqref{symbol for S} we can calculate the series for the symbol of the operator $S^f$ for $f\in \mathbb N$: \begin{equation}\label{S^f} \begin{split} &\widehat{s^f}(\bth, \bxi)= \sum_{\substack{\bth_1, \dots, \bth_f\in \Bth_{\tilde k}\\ \bphi_1, \dots, \bphi_{f}\in \Bth_{\tilde k}}}C_{\bphi_1\cdots \bphi_{f}}^{\bth_1\cdots \bth_{f}}(\bth)\prod_{g= 1}^f\hat s(\bth_g, \bxi+ \bphi_g)\\ &= \sum_{\nu= 0}^{f}\sum_{h= \nu}^{\nu\tilde k}\sum_{\iota_1, \dots, \iota_h\in \widetilde J}\sum_{\mu= 0}^{h- \nu}\sum_{\substack{\bphi_1, \dots, \bphi_{h- \nu+ \mu}\in \Bth_{\tilde k}\\ \bth_1, \dots, \bth_{h- \nu+ \mu}\in \Bth'_{\tilde k}}}\sum_{p= 0}^{h- \nu- \mu}\sum_{i= 0}^\infty\sum_{\substack{n_1, \dots, n_{K+ 1}\geqslant 0\\ n_1+ \cdots+ n_{K+ 1}\leqslant 2\mu+ p+ i+ f- \nu}}C(\mathbf X; \bth, \dots)\\ &\times r^{(2- 2w)h+ (2w- 1)f- \nu+ \iota_1+ \cdots+ \iota_h- p- i}\prod_{a= 1}^{K+ 1}(\sin\Phi_a)^{n_a}\prod_{v= 1}^{h- \nu+ \mu}\big(2\langle\bxi, \bth_v\rangle+ 2\langle\bphi_v, \bth_v\rangle+ |\bth_v|^2\big)^{-1}, \end{split} \end{equation} with \begin{equation*} \big|C(\mathbf X; \bth, \dots)\big|\lesssim \rho_n^{(f- \nu+ i)(\alpha_{m+ 1}+ 0+)- (h- \nu- \mu- p)\b+ 0+}. \end{equation*} Note that the last product on the right hand side of \eqref{S^f} is of the form \begin{equation*} \prod_{t =1}^T\Big(l_t +\rho\sum_{q} b_q^t\sin \Phi_q\Big)^{-k_t}. \end{equation*} Here we have expanded the inner products $\langle\bxi,\bth_v\rangle$ using Lemma \ref{lem:products}(ii). The coefficients $\{b_q^t\}$ in the decomposition $(\bth_v)_{\GV^{\perp}}= \sum_q b_q^t\tilde \bmu_q$ are all of the same sign and satisfy \eqref{eq:n10}.
Without loss of generality we may assume that all $b_q^t$ are non-negative. The numbers \[ l_t= l(b_1^t, \dots, b_{K+ 1}^t):= 2L_{m+ 1}\sum_{q}b_q^t+ 2\langle \mathbf X, (\bth_v)_{\GV}\rangle+ 2\langle \bphi_v, \bth_v\rangle+ |\bth_v|^2 \] satisfy $\rho_n^{\alpha_{m+1}}\rho_n^{0-}\lesssim l_t\lesssim \rho_n^{\alpha_{m+1}}\rho_n^{0+}$, since \[ \big|2\langle \mathbf X,(\bth_v)_{\GV}\rangle+ 2\langle \bphi_v, \bth_v\rangle+ |\bth_v|^2\big|\lesssim \rho_n^{\alpha_m+ 0+}. \] These numbers depend on $\mathbf X$, but not on $\mathbf\Phi$ or $\rho$. The numbers $k_t= k(b_1^t, \dots, b_{K+ 1}^t)$ are positive integers and are independent of $\bxi$. The following lemma is identical to Lemma 10.4 of \cite{ParSht2}, where for our purposes we have replaced the explicit constants $1/2$ and $2/3$ by $\vartheta$ and $\varsigma$, respectively. \begin{lem}\label{lem:integral1} For $1\leqslant K\leqslant d-1$; $n_1, \dots, n_{K+ 1}\in \mathbb N_0$; $k_1, \dots, k_T\in \mathbb N$ let $Q:= \sum_{t= 1}^Tk_t$, \begin{equation*} \hat J_K:= \int\limits_{\mathcal M_p}\frac{(\sin\Phi_1)^{n_1}\dots (\sin\Phi_K)^{n_K}(\sin\Phi_{K+ 1})^{n_{K+ 1}}\,d\mathbf\Phi}{\prod_{t= 1}^T \big(l_t+ \rho\sum_{j= 1}^{K+ 1} b_j^t \sin\Phi_j\big)^{k_t} }. \end{equation*} Then there exist positive numbers $\delta_0$, $p_K$, and $q_K$ depending only on the constants \eqref{beta and alphas} and $K$ such that \begin{equation*} \hat J_K= \sum_{q= 0}^K(\ln\rho)^q\sum_{p= 0}^\infty e(p,q){\rho}^{-p}, \end{equation*} where \begin{equation*} \big|e(p,q)\big|\lesssim\rho_n^{(\varsigma- p_K)p}\rho_n^{-Q\beta}. \end{equation*} These estimates are uniform in the following regions of variables: \begin{equation*} \rho_n^{\beta}\lesssim l_t\lesssim\rho_n^{\vartheta},\ \ \rho_n^{-\delta_0}\lesssim b_j^t\lesssim \rho_n^{\delta_0},\ \ \rho_n^{\varsigma- q_K}<{\rho}. \end{equation*} \end{lem} Now using Lemma~\ref{lem:integral1} we can compute the integrals of \eqref{S^f} over the domain $\{\mathbf\Phi\in \mathcal M_p\}$ (recall that this integration is not needed for $K= 0$ by Remark~\ref{K= 0 remark}). Substituting the result into \eqref{integration in Phi}, integrating in $d\bnu$ over $\Omega(\GV)$, and taking into account \eqref{eq:n4} and \eqref{eq:n6} we obtain in the region $2\rho_n/3< \rho< 6\rho_n$ \begin{equation*} \begin{split} &\vol\mathcal A^+_p(\rho)- \vol\mathcal A^-_p(\rho)\\ &= \sum_{q= 0}^K\sum_{h= 0}^{(l_0+ 1)\tilde k}\sum_{\iota_1, \dots, \iota_h\in \widetilde J}\sum_{j= 0}^\infty C_{q\, h\, j}^{\iota_1\cdots \iota_h}\rho^{K+ 1+ (2- 2w)h+ \iota_1+ \cdots+ \iota_h- j}(\ln\rho)^q+ O(\rho_n^{-M+ 2w- d}), \end{split} \end{equation*} with the coefficients satisfying \begin{equation*} |C_{q\, h\, j}^{\iota_1\cdots \iota_h}|\lesssim \rho_n^{-2\b h+ \varsigma j}. \end{equation*} This, together with equations \eqref{eq:n2}, \eqref{eq:densityh3}, Lemma~\ref{H1H2 lemma}, relation \eqref{beta and alphas}, Section~11 of \cite{ParSht2}, and the observation that the number of different quasi-lattice subspaces $\GV$ is $\lesssim\rho_n^{0+}$, completes the proof of Lemma \ref{main_lem} and, thus, of our main theorem in the case of $B= \widetilde B$ with the symbol satisfying \eqref{tilde b}. As explained at the end of Section~\ref{reduction section}, the summation over $\widetilde J$ may be replaced by summation over $J_0$. It remains to relax the assumptions on $B$. This will be done in the subsequent section.
\section{Approximation}\label{final section} In this section we prove Lemma~\ref{main_lem} and thus Theorem~\ref{main_thm} for general $B$ using the fact that the proof is complete for $\widetilde B$ whose symbol fulfills the extra assumption \eqref{tilde b}. \subsection*{1.} Given $B$ satisfying the hypothesis of Theorem~\ref{main_thm} and the number $M$, we fix the values of $k$ and $\tilde k$ in such a way that Lemma~\ref{main_lem} holds true for $H= (-\Delta)^w+ \widetilde B$, where the symbol $\tilde b$ of $\widetilde B$ satisfying \eqref{tilde b} is constructed at the end of Section~\ref{reduction section}. For $R> 0$ let us define (recall \eqref{CP}) \begin{equation*} \mathcal P_R:= \mathcal P^L(\mathcal B_{R}), \quad \mathcal P_R^c:= \mathcal P^L( \mathbb R^d\setminus \mathcal B_{R}) \end{equation*} We start by estimating the quadratic form of $B- \widetilde B$. For any $\psi\in \plainH{2w}( \mathbb R^d)$ \begin{equation}\label{B correction estimate} \begin{split} \big|\langle\psi, (B- \widetilde B)\psi\rangle\big|&\leqslant \big|\langle\psi, \mathcal P_{R_0}(B- \widetilde B)\mathcal P_{R_0}\psi\rangle\big|+ \big|\langle\psi, \mathcal P_{R_0}(B- \widetilde B)\mathcal P_{R_0}^c\psi\rangle\big|\\ &+ \big|\langle\psi, \mathcal P_{R_0}^c(B- \widetilde B)\mathcal P_{R_0}\psi\rangle\big|+ \big|\langle\psi, \mathcal P_{R_0}^c(B- \widetilde B)\mathcal P_{R_0}^c\psi\rangle\big|. \end{split} \end{equation} By Condition \eqref{eq:condB2}, the symbol of $(B- \widetilde B)\mathcal P_{R_0}^c$ satisfies \begin{equation*} {\,\vrule depth4pt height11pt width1pt}\,(b- \tilde b)\Id_{ \mathbb R^d\setminus \mathcal B_{R_0}}{\vrule depth4pt height11pt width1pt\,}_{\varkappa/2,\, 0}^{(\varkappa/\beta)}< \rho_n^{-k}. \end{equation*} Now Propositions~\ref{bound:prop} and \ref{product:prop} imply that \begin{equation}\label{bound from Condition B} \big\|(-\Delta+ 1)^{-\varkappa/4}(B- \widetilde B)(-\Delta +1)^{-\varkappa/4}\mathcal P_{R_0}^c\big\|\leqslant C\rho_n^{-k}. \end{equation} Hence \begin{equation}\label{one term estimated} \begin{split} &\big|\langle\psi, \mathcal P_{R_0}(B- \widetilde B)\mathcal P_{R_0}^c\psi\rangle\big|\\ &= \big|\langle(-\Delta+ 1)^{\varkappa/4}\psi, \mathcal P_{R_0}(-\Delta+ 1)^{-\varkappa/4}(B- \widetilde B)\mathcal P_{R_0}^c(-\Delta+ 1)^{-\varkappa/4}(-\Delta+ 1)^{\varkappa/4}\psi\rangle\big|\\ &\leqslant C\rho_n^{-k}\langle\psi, (-\Delta+ 1)^{\varkappa/2}\psi\rangle, \end{split} \end{equation} and the analogous estimates hold for the last two terms in \eqref{B correction estimate}. Thus \eqref{B correction estimate} implies \begin{equation}\label{estimate with B^(k)} |B- \widetilde B|\leqslant B^{(k)}, \end{equation} where $B^{(k)}$ is the operator of multiplication by the function \begin{equation}\label{b^(k)} b^{(k)}(\bxi):= \begin{cases}\|b\|_{\plainL{\infty}( \mathbb R^d\times\mathcal B_{R_0})}+ \|\tilde b\|_{\plainL{\infty}( \mathbb R^d\times\mathcal B_{R_0})}, &|\bxi|\leqslant R_0,\\ C\rho_n^{-k}\big(1+ |\bxi|^2\big)^{\varkappa/2}, &|\bxi|> R_0\end{cases} \end{equation} in the momentum space. In view of Lemma~\ref{norms lemma}(a), we conclude that \begin{equation}\label{up and down} N\big((-\Delta)^w+ B, \lambda\big)\gtrless N\big((-\Delta)^w+ \widetilde B \pm B^{(k)}, \lambda\big). \end{equation} So to prove \eqref{eq:main_lem1} it will be sufficient to show that for $\rho\in I_n$ (which we assume everywhere below) the right hand side of \eqref{up and down} does not differ from $N\big((-\Delta)^w+ \widetilde B, \rho^{2w}\big)$ by more than $O(\rho_n^{-M})$. 
By \eqref{eq:main_lem1} and Remark~\ref{spurious remark}, it is enough to prove that \begin{equation}\label{approximation goal} N\big((-\Delta)^w +\widetilde B \pm B^{(k)}, \lambda\big) =N\big((-\Delta)^w +\widetilde B, \lambda +O(\rho_n^{2w -d -M})\big). \end{equation} \subsection*{2.} We note that for \begin{equation}\label{R_*} R_*:= (4\rho_n^{d+ M})^{1/(w- \varkappa)} \end{equation} we have \begin{equation}\label{intermediate irrelevant} N\big((-\Delta)^w+ \widetilde B \pm B^{(k)}, \lambda\big)= N\big((-\Delta)^w+ \widetilde B \pm\mathcal P_{R_0}B^{(k)} \pm\mathcal P_{R_*}^cB^{(k)}, \lambda+ O(\rho_n^{2w -d -M})\big). \end{equation} Indeed, \begin{equation*} \big\|(\mathcal P_{R_*}- \mathcal P_{R_0})B^{(k)}\big\|= C\rho_n^{-k}(1+ R_*^2)^{\varkappa/2}= O(\rho_n^{2w -d -M}) \end{equation*} in view of \eqref{k fist condition}. \subsection*{3.} Now we are going to prove that \begin{equation}\label{interior removal} N\big((-\Delta)^w+ \widetilde B \pm\mathcal P_{R_0}B^{(k)} \pm\mathcal P_{R_*}^cB^{(k)}, \lambda\big)= N\big((-\Delta)^w+ \widetilde B \pm\mathcal P_{R_*}^cB^{(k)}, \lambda+ O(\rho_n^{2w -d -M})\big). \end{equation} This will be done with the help of the following lemma, which is a development of Lemma~3.1 from \cite{Par}. \begin{lem}\label{iterative decay lemma} Let $H_0$, $V$, $A$ be pseudo--differential operators with almost--periodic coefficients. Suppose that $H:= H_0+ V$ is elliptic, selfadjoint and bounded below, and there exists a collection of orthogonal projections $\{P_l\}_{l= 0}^L$ commuting with $H_0$ such that \begin{equation}\label{projector conditions} \sum_{l =0}^LP_l =I \quad \textrm{and} \quad V_{n\, l} :=P_nVP_l =0 \quad \textrm{for} \quad |l -n|> 1. \end{equation} Suppose that $A= P_0A$ and that \begin{equation*} a:= \|A\|< \infty. \end{equation*} At last, suppose that for $\lambda \in \mathbb R$ \begin{equation}\label{D_l hypothesis} D_l:= \dist\big(\lambda, \sigma(P_lHP_l)\big) -(4+ 2^{5 -L})a >0, \quad l =0, \dots, L -1 \end{equation} and \begin{equation}\label{V assumption} \max_{0 \leqslant l \leqslant L -1}\big(a+ \|V_{l\, l -1}\|+ \|V_{l\, l+ 1}\|\big)/D_l\leqslant 1/4. \end{equation} Then for \begin{equation}\label{vareps} \varepsilon:= 2^{4 -L}a \end{equation} we have \begin{equation}\label{N(H+ A)} N(H, \lambda -\varepsilon) \leqslant N(H+ A, \lambda) \leqslant N(H, \lambda +\varepsilon). \end{equation} \end{lem} \begin{proof} We will prove the first inequality; the second follows by interchanging the roles of $H_0$ and $H_0 +A$. Let $E_\lambda$ be the spectral projection of $(-\infty, \lambda]$ for $H$. By Lemma~4.1 of \cite{ParSht2} it is enough to prove that \begin{equation}\label{lambda form bound} \langle \phi, (H+ A)\phi\rangle\leqslant \lambda\|\phi\|^2 \quad \textrm{for every}\quad \phi \in E_{\lambda- \varepsilon}\plainL2( \mathbb R^d). \end{equation} Let \begin{equation}\label{delta and K} \delta :=\min\{a, 2^{-3 -L}\min_{0 \leqslant l \leqslant L -1}D_l\}, \quad K :=[2a/\delta] +2, \end{equation} so that \begin{equation}\label{K -1 bounds} 2a \leqslant (K -1)\delta \leqslant 3a \end{equation} and by \eqref{V assumption} \begin{equation}\label{K -1 estimate} K -1\leqslant 3a/\delta \leqslant3\max\{1, 2^{L +3}a/\min_{0 \leqslant l \leqslant L -1}D_l\} \leqslant 2^{L +3}. 
\end{equation} For $\phi \in E_{\lambda- \varepsilon}$ introduce \begin{equation*} \begin{split} \phi^k &:=(E_{\lambda -\varepsilon -(k -1)\delta} -E_{\lambda -\varepsilon -k\delta})\phi, \quad k =1, \dots, K -1, \\ \phi^K&:= E_{\lambda -\varepsilon -(K- 1)\delta}\phi, \quad \phi' :=\phi -\phi^K =\sum_{k =1}^{K -1}\phi^k. \end{split} \end{equation*} Then $\phi =\sum_{k =1}^K\oplus\phi^k$ and, letting \begin{equation}\label{eta^k} \eta^k :=H\phi^k -\big(\lambda -\varepsilon -(k -1)\delta\big)\phi^k, \quad k =1, \dots, K -1, \end{equation} we have \begin{equation}\label{eta norm} \|\eta^k\| \leqslant\delta\|\phi^k\|. \end{equation} Let $P_{-1} := P_{L +1} := 0$. Projecting \eqref{eta^k} with $P_l$ we obtain \begin{equation*} \eta_l^k= V_{l\, l -1}\phi_{l -1}^k+ \Big(P_lHP_l- \big(\lambda -\varepsilon -(k -1)\delta\big)\Big)\phi^k_l+ V_{l\, l +1}\phi^k_{l +1}, \quad l =0, \dots, L, \end{equation*} and thus by \eqref{V assumption}, \eqref{delta and K} and \eqref{eta norm} \begin{equation*} \begin{split} \|\phi^k_l\| &\leqslant \big(\|\eta^k_l\| +\|V_{l\, l -1}\|\|\phi_{l -1}^k\| +\|V_{l\, l +1}\|\|\phi_{l +1}^k\|\big)/D_l \\ &\leqslant 2^{-3 -L}\|\phi^k\| +\|\phi^k_{l -1}\|/4 +\|\phi^k_{l +1}\|/4, \quad l =0, \dots, L -1. \end{split} \end{equation*} By induction, starting from $l =0$ we obtain \begin{equation*} \|\phi^k_l\| \leqslant 2^{-2 -L}\|\phi^k\| +3\|\phi^k_{l +1}\|/8, \quad l =0, \dots, L -1. \end{equation*} Again by induction, using that $\|\phi^k_L\| \leqslant\|\phi^k\|$, we get $\|\phi^k_l\| \leqslant2^{l -L}\|\phi^k\|$, $l =1, \dots, L$ and thus $\|\phi^k_0\| \leqslant2^{-L}\|\phi^k\|$. Therefore, for $k =1, \dots, K -1$, \begin{equation*} \|A\phi^k\| =\|A\phi^k_0\| \leqslant2^{-L}a\|\phi^k\|, \end{equation*} and thus \begin{equation*} \|A\phi'\|\leqslant \sum_{k =1}^{K -1}\|A\phi^k\|\leqslant 2^{-L}\sqrt{K -1}a\|\phi'\| \end{equation*} and \begin{equation*} \big|\langle\phi', A\phi'\rangle\big| =\Big|\sum_{k, m =1}^{K -1}\langle\phi^k_0, A\phi^m_0\rangle\Big| \leqslant 2^{-2L}(K -1)a\|\phi'\|^2. \end{equation*} Hence \begin{equation*} \begin{split} \langle\phi, (H +A)\phi\rangle &=\langle\phi', H\phi'\rangle +\langle\phi', A\phi'\rangle +2\Re\langle\phi^K, A\phi'\rangle +\langle\phi^K, H\phi^K\rangle +\langle\phi^K, A\phi^K\rangle\\ &\leqslant (\lambda- \varepsilon)\|\phi'\|^2 +2^{-2L}(K -1)a\|\phi'\|^2+ 2^{1 -L}\sqrt{K -1}a\|\phi'\|\|\phi^K\|\\ &+ \big(\lambda -\varepsilon -(K -1)\delta\big)\|\phi^K\|^2+ a\|\phi^K\|^2\\ &\leqslant\big(\lambda -\varepsilon +2^{1 -2L}(K -1)a\big)\|\phi'\|^2 +\big(\lambda -\varepsilon -(K -1)\delta +2a\big)\|\phi^K\|^2\\ &\leqslant\lambda\|\phi\|^2, \end{split} \end{equation*} where the last inequality follows from \eqref{K -1 bounds} and \eqref{K -1 estimate}. \end{proof} We now want to apply Lemma~\ref{iterative decay lemma} to \begin{equation*} H_0^\pm :=(-\Delta)^w \pm\mathcal P_{R_*}^cB^{(k)}, \quad V :=\widetilde B, \quad A^\pm :=\pm\mathcal P_{R_0}B^{(k)}. \end{equation*} Note that \begin{equation}\label{a} a :=\|b\|_{\plainL{\infty}( \mathbb R^d\times\mathcal B_{R_0})}+ \|\tilde b\|_{\plainL{\infty}( \mathbb R^d\times\mathcal B_{R_0})} \end{equation} does not depend on $\rho_n$. 
For \begin{equation}\label{L} L :=\big[4 +\log_2a +(M +d -2w)\log_2\rho_n\big] +1 \end{equation} we let \begin{equation}\label{R_l} R_l :=R_0 +l\rho_n^{2/k}, \quad l =0, \dots, L -1, \end{equation} and introduce a family of projections \begin{equation}\label{P_l} P_0 :=\mathcal P_{R_0}, \quad P_l :=\mathcal P_{R_l} -\mathcal P_{R_{l -1}}, \quad l =1, \dots, L -1, \quad P_L :=\mathcal P_{R_{L -1}}^c. \end{equation} Let us check that the hypothesis of Lemma~\ref{iterative decay lemma} is satisfied. Relation \eqref{projector conditions} follows from \eqref{R(rho)} and \eqref{R_l}. It follows from \eqref{tilde B estimate} that for $l\leqslant L -1$ \begin{equation}\label{norm on P_l} \|P_lHP_l\| \leqslant\Big\|P_{L -1}\big((-\Delta)^w+ \widetilde B\big)P_{L -1}\Big\| \leqslant 2\big\|P_{L -1}(-\Delta)^wP_{L -1}\big\| \leqslant 2(R_{L -1})^{2w}. \end{equation} Also, for $l\leqslant L -1$ \begin{equation}\label{norms of projected V} \|V_{l\, l -1}\|+ \|V_{l\, l +1}\| \leqslant 2(R_{L -1} +\rho_n^{2/k})^{2\tilde w}. \end{equation} Since by \eqref{L} and \eqref{R_l} we have \begin{equation} R_{L -1} =R_0 +\big[4 +\log_2a +(M +d -2w)\log_2\rho_n\big]\rho_n^{2/k}\lesssim \rho_n^{2/k}\log\rho_n, \end{equation} relations \eqref{D_l hypothesis} and \eqref{V assumption} follow from \eqref{norm on P_l} and \eqref{norms of projected V} if $\rho_n$ is big enough. Applying Lemma~\ref{iterative decay lemma}, we get \eqref{N(H+ A)} with \begin{equation*} \varepsilon =2^{4- L}a \leqslant\rho_n^{-M +2w -d}, \end{equation*} which implies \eqref{interior removal}. \subsection*{4.} It remains to prove that \begin{equation}\label{exterior removal} N\big((-\Delta)^w+ \widetilde B \pm\mathcal P_{R_*}^cB^{(k)}, \lambda\big)= N\big((-\Delta)^w+ \widetilde B, \lambda+ O(\rho_n^{2w -d -M})\big). \end{equation} Choose \begin{equation}\label{vareps again} \varepsilon :=\rho_n^{-d -M}. \end{equation} In view of \eqref{tilde B estimate}, we have \begin{equation*} (-\Delta)^{\tilde w} +\widetilde B \lessgtr\mathcal P_{R_*}(1 \pm\varepsilon)\big((-\Delta)^{\tilde w} +\widetilde B\big)\mathcal P_{R_*} \oplus\mathcal P_{R_*}^c(1 \pm1/\varepsilon)\big((-\Delta)^{\tilde w} +\widetilde B\big)\mathcal P_{R_*}^c. \end{equation*} Therefore, \begin{equation}\label{R_* direct sum} \begin{split} (-\Delta)^w+ \widetilde B \pm\mathcal P_{R_*}^cB^{(k)} &\lessgtr\mathcal P_{R_*}\big((-\Delta)^w \pm\varepsilon(-\Delta)^{\tilde w} +(1 \pm\varepsilon)\widetilde B\big)\mathcal P_{R_*} \\ &\oplus\mathcal P_{R_*}^c\big((-\Delta)^w \pm(-\Delta)^{\tilde w}/\varepsilon +(1 \pm1/\varepsilon)\widetilde B \pm B^{(k)}\big)\mathcal P_{R_*}^c. \end{split} \end{equation} Using \eqref{tilde B estimate} again and recalling the definitions \eqref{vareps again}, \eqref{b^(k)} and \eqref{R_*}, we can estimate the last term on the right hand side of \eqref{R_* direct sum} from below: \begin{equation*} \begin{split} &\mathcal P_{R_*}^c\big((-\Delta)^w \pm(-\Delta)^{\tilde w}/\varepsilon +(1 \pm1/\varepsilon)\widetilde B \pm B^{(k)}\big)\mathcal P_{R_*}^c \\ &>\big((-\Delta)^w -2(-\Delta)^{\tilde w}/\varepsilon\big)\mathcal P_{R_*}^c \geqslant (R_*^{2w} -2R_*^{2\tilde w}/\varepsilon)\mathcal P_{R_*}^c \geqslant (5\rho_n)^{2w}\mathcal P_{R_*}^c, \end{split} \end{equation*} so it does not contribute to the density of states for $\rho \in I_n$. 
For the first term we have \begin{equation*} \mathcal P_{R_*}\big((-\Delta)^w \pm\varepsilon(-\Delta)^{\tilde w} +(1 \pm\varepsilon)\widetilde B\big)\mathcal P_{R_*} \lessgtr\mathcal P_{R_*}(1 \pm\varepsilon)\big((-\Delta)^w +\widetilde B\big)\mathcal P_{R_*}, \end{equation*} so \begin{equation}\label{bound with lambda rescaled} \begin{split} &N\Big((-\Delta)^w+ \widetilde B \pm\mathcal P_{R_*}^cB^{(k)}, \lambda\Big)\\ &\gtrless N\Big(\mathcal P_{R_*}\big((-\Delta)^w \pm\varepsilon(-\Delta)^{\tilde w} +(1 \pm\varepsilon)\widetilde B\big)\big|_{\mathcal P_{R_*}\plainL{2}( \mathbb R^d)}, \lambda\Big)\\ &\gtrless N\Big(\mathcal P_{R_*}\big((-\Delta)^w +\widetilde B\big)\big|_{\mathcal P_{R_*}\plainL{2}( \mathbb R^d)}, \lambda/(1 \pm\varepsilon)\Big), \end{split} \end{equation} and the same estimates hold true for $B^{(k)}$ replaced by $0$. Combining these two versions of \eqref{bound with lambda rescaled}, we obtain \begin{equation}\label{difference of IDS} \begin{split} &N\big((-\Delta)^w+ \widetilde B, \lambda\big)\lessgtr N\big((-\Delta)^w+ \widetilde B \mp\mathcal P_{R_*}^cB^{(k)}, \lambda\big)\\ &\lessgtr N\Big(\mathcal P_{R_*}\big((-\Delta)^w +\widetilde B\big)\big|_{\mathcal P_{R_*}\plainL{2}( \mathbb R^d)}, \lambda/(1 \mp\varepsilon)\Big) \lessgtr N\big((-\Delta)^w+ \widetilde B, (1 \pm\varepsilon)\lambda/(1 \mp\varepsilon)\big). \end{split} \end{equation} Recalling that $\lambda =\rho^{2w} \leqslant(4\rho_n)^{2w}$ and \eqref{vareps again}, we arrive at \eqref{exterior removal}. Combining \eqref{intermediate irrelevant}, \eqref{interior removal} and \eqref{exterior removal}, we get \eqref{approximation goal}.
\section{Introduction and description of the results}\label{s1} Non-semisimple Lie algebras play an important role in physics, where they are frequently used to study various physical systems and explain diverse physical phenomena. For example, one could mention Schr\"odinger algebras and groups, see \cite{DDM}, conformal Galilei algebras and groups, see \cite{AI, AIK, CZ, GI1, GI2, HP, NOR, LMZ}, and ageing algebras, see \cite{HP, H1, H2, H3, H4, HEP, HU, HS1, HS2, PH, HSSU}. The representation theory of finite dimensional semisimple Lie algebras is fairly well developed, see for example \cite{Hu,Ja,M} and the references therein. In contrast to this, many important methods of the representation theory of semisimple Lie algebras are either not available or not yet developed or, at best, become much more complicated in the non-semisimple case, see e.g. \cite{DLMZ}. As a consequence, much less is known in the non-semisimple case, even for Lie algebras of rather small dimension. Some recent results, see e.g. \cite{WZ,D,DLMZ,LMZ}, make some progress in studying modules over certain non-semisimple extensions of the Lie algebra $\mathfrak{sl}_2$ motivated by their applications in physics. The Schr\"odinger Lie group is the group of symmetries of the free particle Schr\"odinger equation. The classical {\em Schr\"odinger algebra} is the Lie algebra of this group. The main objects of study in this paper are the extended Schr\"odinger algebra $\mathcal{S}$ in $(1+1)$-dimensional space-time and its ageing subalgebra $\fa$, see precise definitions in Section~\ref{s2}. The name of the latter algebra comes from its use as a dynamical symmetry in physical ageing, which can be observed in strongly interacting many-body systems quenched from a disordered initial state to the co-existence regime below the critical temperature $T_c > 0$ where several equivalent equilibrium states exist, see \cite{BR} for details. Various representations of $\fa$ were constructed in the literature, see for example \cite{HS1, HS2, H1, H2, H3, H4, HU, PH, HSSU}. Both $\mathcal{S}$ and $\fa$ have natural Cartan subalgebras which allow one to define the notion of weight modules, that is, modules on which elements of the Cartan subalgebra act diagonalizably. The main objective of the present paper is to classify all simple weight modules over $\fa$. It turns out that these modules have either all their weight spaces one-dimensional, or all their weight spaces infinite dimensional. It seems that the algebra $\fa$ is the first Lie algebra having simple weight modules with infinite dimensional weight spaces for which the classification of simple weight modules is complete. As an application we classify all simple weight $\mathcal{S}$-modules that have a simple $\fa$-submodule. This provides many new examples of simple weight $\S$-modules (for other examples, see \cite{D,LMZ}). The paper is organized as follows. We start with some preliminaries in Section~\ref{s2}. In Section~\ref{s3}, by embedding our algebras into the first Weyl algebra we obtain an explicit presentation of the centralizer $U_0$ of the Cartan subalgebra in the universal enveloping algebra $U(\fa)$. This is the key observation needed to classify all simple $U_0$-modules. As a consequence, we obtain our main result, Theorem~\ref{thmmain}, which provides a classification of all simple weight modules over $\fa$.
It turns out that there are four classes of such simple modules, the first three classes consist of certain weight modules with finite dimensional (in fact one-dimensional) weight spaces, while the last class consists of modules for which all non-zero weight spaces are infinite-dimensional. As a bonus, in Section~\ref{s4} we classify all simple weight $\S$-modules that have a simple $\fa$-submodule, see Theorem~\ref{thmtwo}. \section{Preliminaries}\label{s2} In this paper, we denote by $\Z$, $\N$, $\Z_+$ and $\C$ the sets of integers, positive integers, nonnegative integers and complex numbers, respectively. All vector spaces and Lie algebras are over $\C$. For a Lie algebra $\mathfrak{g}$ we denote by $U(\mathfrak{g})$ its universal enveloping algebra. We write $\otimes$ for $\otimes_{\mathbb{C}}$. The {\em extended Schr\"odinger algebra} $\mathcal{S}$ in $(1+1)$-dimensional space-time is a complex Lie algebra with a basis $\{ f,q,h,e,p,z\}$ and the Lie bracket given as follows: \begin{equation} \label{commrelations} \begin{array}{lll} \left[h,e\right]=2e, & \left[e,f\right]=h, & \left[h,f\right]=-2f,\\ \left[e,q\right]=p, & \left[e,p\right]=0, & \left[h,p\right]=p,\\ \left[f, p\right]=q, &\left[f,q\right]=0, &\left[h,q\right]=-q,\\ \left[p,q\right]=z. & [z,\mathcal{S}]=0. \end{array} \end{equation} It is easy to see that the subspace $\fa$ of $\mathcal{S}$ spanned by $\{e,h,p,q,z\}$ is a Lie subalgebra of $\mathcal{S}$. This subalgebra is called the {\em ageing algebra}. The elements $h$ and $z$ span a Cartan subalgebra $\mathfrak{h}$ while the elements $p$, $q$ and $z$ span a copy of the Heisenberg algebra $\H$. We also denote by $\mathfrak{n}$ the subalgebra spanned by $h$, $z$, $e$ and $p$. The subalgebra of $\mathcal{S}$ spanned by $e$, $f$ and $h$ is isomorphic to the classical algebra $\sl_2$ and will be identified with the latter. By Schur's lemma, the element $z$ acts as a scalar on any simple module over $\fa$ or $\mathcal{S}$. Since we are concerned with simple modules only, we will assume that $z$ acts as a scalar $\dot{z}$ on any module in this paper. For a module $V$ we denote by $\supp(V)$ the set of $h$-eigenvalues on $V$ and will call these eigenvalues \emph{weights}. All $h$-eigenvectors will be called \emph{weight vectors}. For a weight $\dot{h}$ we denote by $V_{\dot{h}}$ the corresponding {\em weight space}, that is the space of all $h$-eigenvectors with eigenvalue $\dot{h}$. If $V$ is a simple weight $\fa$- or $\mathcal{S}$-module, then, as usual, we have $\supp(V)\subset \dot{h}+\mathbb{Z}$ for any $\dot{h}\in \supp(V)$. Let $U_0=\{x\in U(\fa) | [h,x]=0\}$ be the centralizer of $\mathfrak{h}$ in $U(\fa)$. Then, as usual, for every simple weight $\fa$-module $V$ and any $\lambda\in\supp(V)$ the $U_0$-module $V_{\lambda}$ is simple. Each $(\dot{z}, \dot{h})\in \C^2$ defines the one-dimensional $\mathfrak{n}$-module $\C w$ with the action $h w=\dot{h} w$, $z w=\dot{z} w$ and $pw=ew=0$. Using it we define, as usual, the {\em Verma} $\fa$-module $$M_{\fa}(\dot{z},\dot{h})=\Ind_{\mathfrak{n}}^{\fa} \C w.$$ Denote by $\bar{M}_{\fa}(\dot{z},\dot{h})$ the unique simple quotient of $M_{\fa}(\dot{z},\dot{h})$ (which exists by standard arguments, see e.g. \cite[Chapter~7]{Di}). Similarly we may define the Verma modules $M_{\mathcal{S}}(\dot{z}, \dot{h})$, $M_{\H}(\dot{z})$, $M_{\sl_2}(\dot{h})$ and the corresponding simple quotients $\bar{M}_{\mathcal{S}}(\dot{z}, \dot{h})$, $\bar{M}_{\H}(\dot{z})$, $\bar{M}_{\sl_2}(\dot{h})$ over $\mathcal{S}$, $\H$, and $\sl_2$, respectively. 
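For later use we describe the module $M_{\fa}(\dot{z},\dot{h})$ explicitly. By the PBW Theorem the vectors $q^kw$, $k\in\Z_+$, form a basis of $M_{\fa}(\dot{z},\dot{h})$, and an easy induction using \eqref{commrelations} gives \begin{displaymath} h\, q^kw=(\dot{h}-k)q^kw,\quad z\, q^kw=\dot{z}\, q^kw,\quad p\, q^kw=k\dot{z}\, q^{k-1}w,\quad e\, q^kw=\frac{k(k-1)}{2}\dot{z}\, q^{k-2}w. \end{displaymath} In particular, for $\dot{z}=0$ both $p$ and $e$ annihilate $M_{\fa}(0,\dot{h})$, while for $\dot{z}\ne 0$ the element $p$ maps $q^kw$ to a nonzero multiple of $q^{k-1}w$ for every $k\in\N$.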
Analogously one defines the {\em lowest weight Verma modules} $M^-_{\mathcal{S}}(\dot{z}, \dot{h})$, $M^-_{\fa}(\dot{z},\dot{h})$, $M^-_{\H}(\dot{z})$ and $M^-_{\sl_2}(\dot{h})$ and their corresponding simple quotients $\bar{M}^-_{\mathcal{S}}(\dot{z}, \dot{h})$, $\bar{M}^-_{\fa}(\dot{z},\dot{h})$, $\bar{M}^-_{\H}(\dot{z})$ and $\bar{M}^-_{\sl_2}(\dot{h})$. \section{Simple weight modules over $\fa$}\label{s3} We start by recalling some results from \cite{B} adjusted to our setup. Let $\mathcal{K}$ be the associative algebra $\C(t)[s]$ where $st-ts=1$. The algebra $\mathcal{K}$ is a non-commutative principal left and right ideal domain, in fact, it is a Euclidean domain (both left and right). The space $\C(t)$ becomes a faithful $\C(t)[s]$-module by defining the action of $\C(t)$ via multiplication and the action of $s$ via $\frac{d}{d t}$. Consider the subalgebra $R_0$ of $\mathcal{K}$ generated by $t$ and $t^2s$, and the subalgebra $R_{1}$ of $\mathcal{K}$ generated by $t$ and $ts$. For convenience of later use we set $R_{\dot z}:=R_1$ for $\dot{z}\in \C^*$. \begin{lem}\label{lem1} Let $\dot z\in\C$. {\hspace{2mm}} \begin{enumerate}[$($a$)$] \item\label{lem1.1} Let $\a=\sum_{j=0}^n \a_j s^j$, where $ \a_j\in \C(t)$ with $\a_0=1$, be an irreducible element in $\mathcal{K}$. Then we have the following: \begin{enumerate}[$($i$)$] \item\label{lem1.1.1} If the rational function $\a_j$ has a zero at $0$ of order at least $j+1$, then $R_1/(R_1\cap \mathcal{K}\a)$ is a simple $R_1$-module. Up to isomorphism, every $\C[t]$-torsion-free simple $R_1$-module arises in this way. \item\label{lem1.1.2} If the rational function $\a_j$ has a zero at $0$ of order at least $2j+1$, then $R_0/(R_0\cap \mathcal{K}\a)$ is a simple $R_0$-module. Up to isomorphism, every $\C[t]$-torsion-free simple $R_0$-module arises in this way. \end{enumerate} \item\label{lem1.2} Any simple quotient of the $R_{\dot z}$-module $R_{\dot z}/R_{\dot z} t$ is 1-dimensional. \item\label{lem1.3} For any $\lambda\in \C^*$, the $R_{\dot z}$-module $R_{\dot z}/R_{\dot z} (t-\lambda)$ is simple. \item\label{lem1.4} Let $V$ be a simple $\C[t]$-torsion-free $R_{\dot z}$-module. Then $t$ acts bijectively on $V$. \end{enumerate} \end{lem} \begin{proof} Claim~\eqref{lem1.1.1} is \cite[Theorem~2]{B}. Claim~\eqref{lem1.1.2} follows from \cite[Theorem~4.3]{B}. To prove claim~\eqref{lem1.2}, note that $R_{\dot z} t$ is a two-sided ideal of $R_{\dot z}$ for any value of $\dot z$. The algebra $R_{\dot z}/R_{\dot z} t$ is easily checked to be the commutative algebra $\C[ts]$ if $\dot z\ne0$. Similarly, if $\dot z=0$, we have that $R_{\dot z}/R_{\dot z} t\cong \C[t^2s]$. All simple modules over the latter commutative algebras have dimension one. To prove claim~\eqref{lem1.3}, observe that $\{(ts)^i|i\in\Z_+\}$ is a basis in $R_{\dot z}/R_{\dot z} (t-\lambda)$ if $\dot z\ne0$. Moreover, if $\dot z=0$, then $\{(t^2s)^i|i\in\Z_+\}$ is a basis in the quotient $R_{\dot z}/R_{\dot z} (t-\lambda)$. Using this it is easy to verify that the $R_{\dot z}$-module $R_{\dot z}/R_{\dot z} (t-\lambda)$ is simple. Claim~\eqref{lem1.4} follows from the fact that $tV$ is an $R_{\dot z}$-submodule of $V$ and hence is equal to $V$. \end{proof} Next we characterize the associative algebra $U_0$.
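Note first that the elements $eq^2$ and $pq$, which will play a key role in Lemma~\ref{lem2} below, indeed belong to $U_0$: by \eqref{commrelations} we have \begin{displaymath} [h,eq^2]=[h,e]q^2+e[h,q^2]=2eq^2-2eq^2=0 \quad\text{and}\quad [h,pq]=[h,p]q+p[h,q]=pq-pq=0. \end{displaymath}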
Denote by $\mathbf{A}$ the first Weyl algebra, which we realize as the unital subalgebra of the algebra of all linear operators on $\mathbb{C}[t]$ generated by the linear operator $\frac{d} {d t}$ and the linear operator of multiplication by $t$ (which we will denote simply by $t$). Alternatively, one can also think of the algebra $\mathbf{A}$ as the unital subalgebra of the algebra $\mathcal{K}$ generated by $t$ and $s$. In particular, we can view $R_0$ as a subalgebra of $\mathbf{A}$ generated by $t$ and $t^2\frac{d} {d t}$. Similarly, we can view $R_{1}$ as a subalgebra of $\mathbf{A}$ generated by $t$ and $t\frac{d} {d t}$. \begin{lem}\label{lem2} {\hspace{2mm}} \begin{enumerate}[$($a$)$] \item\label{lem2.1} The associative algebra $U_0$ is generated by $eq^2$, $pq$, $h$, and $z$ and $$\{(eq^2)^{i_1}(pq)^{i_2}h^{i_3}z^{i_4}|i_1,i_2,i_3,i_4\in \Z_+\}$$ is a basis of $U_0$. In particular, for any $\dot{z},\dot{h}\in \C$, the image of $\{(eq^2)^{i_1}(pq)^{i_2}|i_1,i_2\in \Z_+\}$ in $U_0/\langle h-\dot{h}, z-\dot{z}\rangle$ forms a basis there. \item\label{lem2.2} For any $\dot{z},\dot{h}\in \C$, there is a unique homomorphism of associative algebras $\phi_{\dot{z},\dot{h}}: U_0\rightarrow \mathbf{A}$ such that $\phi_{\dot{z}, \dot{h}}(z)=\dot{z}$, $\phi_{\dot{z},\dot{h}}(h)=\dot{h}$ and \begin{enumerate}[$($i$)$] \item\label{lem2.2.1} $\phi_{0, \dot{h}}(pq)=t,\,\, \phi_{0, \dot{h}}(eq^2)=t^2\frac{d}{d t}$ if $\dot{z}=0$; \item\label{lem2.2.2} $\phi_{\dot{z}, \dot{h}}(pq)=2\dot{z} t\frac{d} {d t},\,\, \phi_{\dot{z}, \dot{h}}(eq^2)=t+\dot{z}t\frac{d}{d t}+2\dot{z}(t\frac{d} {d t})^2$ if $\dot{z}\neq 0$. \end{enumerate} \item\label{lem2.3} We have $\phi_{\dot{z},\dot{h}}(U_0)=R_{\dot z}$ and $U_0/\langle h-\dot{h}, z-\dot{z} \rangle\cong R_{\dot z}$. \item\label{lem2.4} The isomorphism in \eqref{lem2.3} induces a natural bijection between isomorphism classes of simple $U_0 $-modules on which $h$ acts as $\dot{h}$ and $z$ acts as $\dot{z}$ and isomorphism classes of simple $R_{\dot z}$-modules. \end{enumerate} \end{lem} \begin{proof} To prove claim~\eqref{lem2.1} we argue that, by the PBW Theorem, the algebra $U_0$ has a basis \begin{gather*} \{e^{i_1}q^{2i_1+i_2}p^{i_2}z^{i_3}h^{i_4}|i_1,i_2,i_3,i_4\in \Z_+\}=\\ \{(e^{i_1}q^{2i_1})(q^{i_2}p^{i_2})z^{i_3}h^{i_4}|i_1,i_2,i_3,i_4\in \Z_+\}. \end{gather*} {\bf Step 1}: The element $q^{i}p^{i}$ can be written as a linear combination of elements of the form $(pq)^jz^{i-j}$ for $j\in\Z_+$. This follows by induction on $i$ from the following computation: $$q^{i+1}p^{i+1}=q^{i+1}pp^{i}=pqq^{i}p^{i}-(i+1)q^{i}p^{i}z.$$ {\bf Step 2}: The element $e^iq^{2i}$ can be written as a linear combination of elements of the form $(eq^2)^k(pq)^jz^{i-k-j}$ for $k,j\in\Z_+$ with $i\ge k+j$. This follows by induction on $i$ from the following computation: \begin{multline*} e^{i+1}q^{2(i+1)}=ee^{i}qq^{2i+1}=eqe^{i}q^{2i+1}+ie^{i}pq^{2i+1}=\\ eq^2e^{i}q^{2i}+2ie^{i}pq^{2i+1}=(eq^2)(e^{i}q^{2i})+2i((e^{i}q^{2i})(pq)+2ie^{i}q^{2i}z). \end{multline*} Claim~\eqref{lem2.1} follows easily from Steps~1 and 2. To prove claim~\eqref{lem2.2}, we only need to check $$\phi_{\dot{z},\dot{h}}([eq^2,pq])=[\phi_{\dot{z},\dot{h}}(eq^2), \phi_{\dot{z},\dot{h}}(pq)]$$ for any $\dot{z}$ and $\dot{h}$. Note that \begin{displaymath} \begin{array}{rcl} [eq^2,pq]&=&[eq^2,p]q+p[eq^2,q]\\&=&-2zeq^2+p(pq)q\\ &=&-2zeq^2+zpq +(pq)^2. 
\end{array} \end{displaymath} Therefore we have $[\phi_{0,\dot{h}}(eq^2),\phi_{0,\dot{h}} (pq)]=[t^2\frac{d}{dt},t]=t^2$ while $$\phi_{0,\dot{h}}([eq^2,pq])=\phi_{0,\dot{h}}(-2zeq^2+zpq+(pq)^2)=t^2,$$ which implies \eqref{lem2.2.1}. Similarly, for $\dot{z}\ne 0$ we have \begin{displaymath} \begin{array}{rcl} \phi_{\dot{z},\dot{h}}([eq^2,pq])&=&\phi_{\dot{z},\dot{h}} ((-2zeq^2+zpq+(pq)^2))\\&=&-2\dot{z}(t+\dot{z}t\frac{d}{d t}+2\dot{z}(t\frac{d} {d t})^2)+\dot{z}2\dot{z} t\frac{d} {d t}+(2\dot{z} t\frac{d} {d t})^2\\&=&-2\dot{z}t, \end{array} \end{displaymath} while \begin{displaymath} [\phi_{\dot{z},\dot{h}}(eq^2),\phi_{\dot{z},\dot{h}}(pq)]= [t+\dot{z}t\frac{d}{d t}+2\dot{z}(t\frac{d} {d t})^2,2\dot{z} t\frac{d}{d t}]=-2\dot{z}t, \end{displaymath} which implies \eqref{lem2.2.2}. Claim~\eqref{lem2.3} follows from claims~\eqref{lem2.1} and \eqref{lem2.2}. Claim~\eqref{lem2.4} follows from claim~\eqref{lem2.3}. \end{proof} Now we address the structure of Verma modules over $\fa$. \begin{lem}\label{lem3} {\hspace{2mm}} \begin{enumerate}[$($a$)$] \item\label{lem3.1} Let $\dot{z},\dot{h}\in\C$. Then the $\fa$-module $M_{\fa}(\dot{z},\dot{h})$ is simple if and only if $\dot{z}\ne 0$. \item\label{lem3.2} Let $\dot{h}\in\C$. Then we have $\dim \bar{M}_{\fa}(0,\dot{h})=1$. \end{enumerate} \end{lem} \begin{proof} If $\dot{z}\ne 0$, then the module $M_{\fa}(\dot{z},\dot{h})$ is obviously simple already as an $\mathcal{H}$-module, which implies \eqref{lem3.1}. If $\dot{z}= 0$, we have a simple $\fa$-module $\C v$ with action $ev=pv=qv=zv=0$ and $h v=\dot{h} v$. By the universal property of Verma modules, ${M}_{\fa}(0,\dot{h})$ surjects onto $\C v$ and hence $\bar{M}_{\fa}(0,\dot{h})=\C v$, which implies \eqref{lem3.2}. \end{proof} \begin{rem}\label{remn2} {\rm Lemma~\ref{lem3} describes all simple highest weight $\fa$-mo\-du\-les. Let $V$ be a simple highest weight $\fa$-module. Consider the decomposition $V=\oplus_{\dot{h}\in\mathbb{C}}V_{\dot{h}}$ and note that all $V_{\dot{h}}$ are finite dimensional. As usual, the space $V^{\star}=\oplus_{\dot{h}\in\mathbb{C}}\mathrm{Hom}_{\C}(V_{\dot{h}},\C)$ has the natural structure of an $\fa$-module defined using the canonical involution $x\mapsto -x$, $x\in \fa$. The module $V^{\star}$ is a simple lowest weight $\fa$-module and the correspondence $V\mapsto V^{\star}$ is a bijection between the sets of isomorphism classes of simple highest weight and simple lowest weight modules. } \end{rem} Before constructing some other weight $\fa$-modules, we have to define some automorphisms of the associative algebras $U_0$ and $R_{\dot z}$. \begin{lem}\label{lem4} For any $i\in \Z$ there is a unique $\tau_{i}\in \Aut(U_0)$ such that \begin{gather*} \tau_i(pq)=pq+iz, \,\,\,\tau_i(h)=h-i, \,\,\,\tau_i(z)=z,\\ \tau_i(eq^2)=eq^2+ipq+\frac{i(i+1)}{2}z. \end{gather*} \end{lem} \begin{proof} From Lemma~\ref{lem2}\eqref{lem2.1} we know that $U_0$ is a PBW-algebra on the generating set $\{eq^2,pq, h, z\}$. One can find the concept of PBW-algebras in \cite{BTLC}. Then we only need to verify the relations on the generating elements $eq^2$, $pq$, $h$, and $z$. We have \begin{displaymath} \begin{array}{rcl} [\tau_i(eq^2),\tau_i(pq)]&=&[eq^2+ipq+\frac{i(i+1)}{2}z,pq+iz]\\&=&[eq^2,pq]\\ &=&-2zeq^2+zpq+(pq)^2 \end{array} \end{displaymath} while \begin{displaymath} \begin{array}{rcl} \tau_i([eq^2,pq])&=&\tau_i(-2zeq^2+zpq+(pq)^2)\\& =&-2z(eq^2+ipq+\frac{i(i+1)}{2}z)+z(pq+iz)+(pq+iz)^2\\&=&-2zeq^2+zpq+(pq)^2. \end{array} \end{displaymath} Thus $[\tau_i(eq^2),\tau_i(pq)]=\tau_i([eq^2,pq])$.
All other relations are checked similarly. \end{proof} The proof of the next lemma is a straightforward computation which is left to the reader. Here we use the embedding of $R_{\dot z}$ in the first Weyl algebra $\mathbf{A}$ as in the paragraph before Lemma 2. \begin{lem}\label{lem5} For any $i\in \Z$ and $\dot{z}\in \C$ there is a unique $\tau_{i,\dot{z}}\in \Aut(R_{\dot z})$ such that \begin{gather*} \tau_{i, 0}(t)=t,\,\,\,\,\,\, \tau_{i, 0}(t^2\frac{d}{d t})=t^2\frac{d}{d t}+it \,\,\,\,\text{if} \,\,\,\,\dot{z}= 0;\\ \tau_{i,\dot{z}}(t)=t,\,\,\,\,\,\, \tau_{i,\dot{z}}( t\frac{d} {d t})=t\frac{d} {d t}+\frac{i}{2} \,\,\,\,\,\,\text{if}\,\, \dot{z}\ne 0. \end{gather*} \end{lem} The next step is to construct some weight $\fa$-modules using $R_{\dot z}$-modules. From now on in this section $N$ denotes an $R_{\dot z}$-module. We define the $\fa$-module $N_{\dot{z},\dot{h}}$ as follows: as a vector space we set $N_{\dot{z},\dot{h}}=N\otimes \mathbb{C}[x,x^{-1}]$ and then for $v\in N$ define \begin{equation}\label{A-1}q(v\otimes x^i)=v\otimes x^{i+1},\,\,\,\,\,\, p(v\otimes x^i) =(\phi_{\dot{z},\dot{h}}(pq+(i-1)z) v)\otimes x^{i-1},\end{equation} \begin{equation}\label{A-2}e (v\otimes x^{i})=(\phi_{\dot{z}, \dot{h}}(eq^2+(i-2)pq+\frac{(i-1)(i-2)}{2}z)v)\otimes x^{i-2},\end{equation} \begin{equation}\label{A-3}h (v\otimes x^i)=((\dot{h}-i) v)\otimes x^i,\,\,\,\,\,\, z(v\otimes x^i)=(\dot{z} v)\otimes x^i. \end{equation} By a straightforward computation we get the following: \begin{lem}\label{lem6} {\hspace{2mm}} \begin{enumerate}[$($a$)$] \item\label{lem6.1} Formulae \eqref{A-1}, \eqref{A-2} and \eqref{A-3} define on $N_{\dot{z},\dot{h}}$ the structure of an $\fa$-module. \item\label{lem6.2} For $\dot{z}=0$ the action in \eqref{A-1}, \eqref{A-2} and \eqref{A-3} reads as follows {\small \begin{equation}\label{A-01} \left\{\aligned&h (v\otimes x^i)=(\dot{h}-i) v\otimes x^i,\,\,\,\,z(v\otimes x^i)=0, \\ & q (v\otimes x^i)=v\otimes x^{i+1},\,\,\,\, p(v\otimes x^i)=(t v)\otimes x^{i-1},\\ & e (v\otimes x^{i})=((t^2\frac{d}{d t}+(i-2)t)v)\otimes x^{i-2}. \endaligned\right. \end{equation} } \item\label{lem6.3} For $\dot{z}\neq 0$ the action in \eqref{A-1}, \eqref{A-2} and \eqref{A-3} reads as follows {\footnotesize \begin{equation}\label{A-11} \left\{\aligned&h (v\otimes x^i)=(\dot{h}-i) v\otimes x^i,\,\,\,\,z(v\otimes x^i)=\dot{z} v\otimes x^i, \\ & q (v\otimes x^i)=v\otimes x^{i+1},\,\,\,\, p(v\otimes x^i)=((2\dot{z} t\frac{d}{d t}+(i-1)\dot{z}) v)\otimes x^{i-1},\\ &e (v\otimes x^{i})=((t+(2i-3)\dot{z}t\frac{d}{d t}+2\dot{z}(t\frac{d}{d t})^2+\frac{(i-1)(i-2)}{2}\dot{z})v) \otimes x^{i-2}.\\ \endaligned \right. \end{equation} } \end{enumerate} \end{lem} For an associative algebra $A$, an $A$-module $V$ and $\sigma\in \Aut(A)$, we denote by $V^{\sigma}$ the module obtained from $V$ via twisting the action of $A$ by $\sigma$, that is $a\cdot v=\sigma(a)v$ for $a\in A$ and $v\in V$. For each $i\in \Z$ the space $N\otimes x^i$ is naturally a $U_0$-module. We may even make $N\otimes x^i$ into an $R_{\dot z}$-module as follows: If $\dot{z}=0$, then for any $ v\in N$, we set \begin{equation} \label{z0} \aligned &t(v\otimes x^i)=(pq)(v\otimes x^i)=(tv)\otimes x^i,\\ &(t^2\frac d{dt})(v\otimes x^i)=(eq^2)(v\otimes x^i)=((t^2\frac d{dt}+it)v)\otimes x^i. 
\endaligned \end{equation} If $\dot{z}\ne0$, then for any $ v\in N$, we set \begin{equation} \label{z1} \aligned &t(v\otimes x^i)=(eq^2-\frac{pq}{2}-2\dot{z}(\frac{pq}{2\dot{z}})^2)(v\otimes x^i)=(tv)\otimes x^i,\\ &(t\frac d{dt})(v\otimes x^i)=(\frac{pq}{2\dot{z}})(v\otimes x^i)=((t\frac d{dt}+\frac i2)v)\otimes x^i. \endaligned \end{equation} The following standard result asserts that any simple weight module is uniquely determined by any of its nonzero weight spaces. \begin{lem} \label{lemma-2.4} Let $\mathfrak{g}$ be $\fa$ or $\mathcal{S}$. Let $V$ and $W$ be simple weight $\mathfrak{g}$-modules such that for some $\dot{h}\in \supp(V)$ the $U(\mathfrak{g})_0$-modules $V_{\dot{h}}$ and $W_{\dot{h}}$ are isomorphic. Then $V\cong W$. \end{lem} \begin{proof} Set $N=V_{\dot{h}}=W_{\dot{h}}$ and write $$U(\mathfrak{g})=\bigoplus_{i\in\Z}U_i\quad\text{where}\quad U_i=\{x\in U(\mathfrak{g}) |[h,x]=ix\}.$$ Every $U_i$ is a $U(\mathfrak{g})_0\text{-}U(\mathfrak{g})_0$-bimodule. Consider the induced module \begin{displaymath} M:=\mathrm{Ind}_{U(\mathfrak{g})_0}^{U(\mathfrak{g})}N= U(\mathfrak{g})\bigotimes_{U(\mathfrak{g})_0} N= \bigoplus_{i\in\Z}({U_i}\otimes_{U(\mathfrak{g})_0} N). \end{displaymath} Then, by adjunction, both $V$ and $W$ are simple quotients of $M$. The module $M$ is, clearly, a weight $\mathfrak{g}$-module with $M_{\dot{h}}\cong N$, the latter being a simple $U(\mathfrak{g})_0$-module, which implies, using standard arguments, that $M$ has a unique maximal submodule $K$ satisfying $K_{\dot{h}}=0$ and thus the unique corresponding simple quotient $M/K$. As both $V$ and $W$ have to be quotients of $M/K$ by adjunction, we get $V\cong W$. \end{proof} Now we list some properties of the $\fa$-module $N_{\dot{z},\dot{h}}$. \begin{lem}\label{lemma-2.5} {\hspace{2mm}} \begin{enumerate}[$($a$)$] \item\label{lemma-2.5.1} For every $i\in\Z$ the $U_0$-modules $N\otimes x^i$ and $N^{\tau_i}$ are isomorphic. \item\label{lemma-2.5.2} The $\fa$-module $N_{\dot{z},\dot{h}}$ is simple if and only if $N$ is a simple $R_{\dot{z}}$-mo\-dule and one of the following conditions is satisfied: \begin{enumerate}[$($i$)$] \item\label{lemma-2.5.2.1} $\dot{z}=0$ and $(\C t+\C t^2\frac{d}{d t})N\ne 0$; \item\label{lemma-2.5.2.2} $\dot{z}\ne 0$ and $\tau_{i,\dot{z}}(\C t+\C t\frac{d}{d t})N\ne0$ for all $i\in \Z$. \end{enumerate} \item\label{lemma-2.5.3} Let $\dot{z},\dot{h}, \dot{z}',\dot{h}'\in\C$. Let further $N$ be an $R_{\dot{z}}$-module and $N'$ be an $R_{\dot{z}'}$-module. Assume that $N_{\dot{z},\dot{h}}$ and $N'_{\dot{z}',\dot{h}'}$ are simple. Then we have $N_{\dot{z},\dot{h}}\cong N'_{\dot{z}',\dot{h}'}$ if and only if $\dot{z}=\dot{z}'$, $i:=\dot{h}-\dot{h}'\in \Z$ and $N'\cong N^{\tau_{i,\dot{z}}}$ as $R_{\dot{z}}$-modules. \end{enumerate} \end{lem} \begin{proof} Let $\psi:N\otimes x^i\rightarrow N^{\tau_i}$ be the linear map defined as $\psi(v\otimes x^i)=v$ for all $v\in N$. It is straightforward to verify that $$\aligned &\psi(pq (v\otimes x^i))=\psi(((pq+i\dot{z})v)\otimes x^i)=(pq+iz)v=\tau_i(pq) v,\\ &\psi((eq^2) (v\otimes x^i))=\psi((eq^2+ipq+\frac{i(i+1)}{2}\dot{z})v\otimes x^i)\\&=(eq^2+ipq+\frac{i(i+1)} {2}\dot{z})v=\tau_i(eq^2)\psi(v\otimes x^i),\\ &\psi(h (v\otimes x^i))=\psi((\dot{h}-i)v\otimes x^i)=(\dot{h}-i)v=\tau_i(h) \psi(v\otimes x^i),\\ &\psi(z (v\otimes x^i))=\psi(\dot{z}v\otimes x^i)=\tau_i(z) \psi(v\otimes x^i).\endaligned$$ This implies claim~\eqref{lemma-2.5.1}.
To prove claim~\eqref{lemma-2.5.2}, we observe that simplicity of $N_{\dot{z},\dot{h}}$ clearly requires simplicity of $N$. Now suppose that $N$ is a simple $R_{\dot{z}}$-module. We study simplicity of $N_{\dot{z},\dot{h}}$ using a case-by-case analysis. {\bf Case~1.} Assume $\dot{z}=0$ and $(\C t+\C t^2\frac{d}{d t})N= 0$. In this case, from \eqref{A-01} we get $e N_{0,\dot{h}}=pN_{0,\dot{h}}=0$. Hence each nonzero weight element of $N_{0,\dot{h}}$ generates a proper highest weight submodule. {\bf Case~2.} Assume $\dot{z}=0$ and $(\C t+\C t^2\frac{d}{d t})N\ne 0$. Since $N$ is a simple $R_{\dot{z}}$-module, we have $(\C t+\C t^2\frac{d}{d t})v\ne 0$ for any nonzero $v\in N$. From \eqref{A-01} it follows that for each $i\in \Z$ we either have $p(v\otimes x^i)\ne 0$ or $e(v\otimes x^{i})\ne 0$. As the action of $q$ on $N_{0,\dot{h}}$ is injective, it follows that any nonzero submodule $V$ of $N_{0,\dot{h}}$ has support $\dot{h}+\Z$. Now from claim~\eqref{lemma-2.5.1} we get $V=N_{0,\dot{h}}$, that is any nonzero submodule $V$ coincides with $N_{0,\dot{h}}$ and thus $N_{0,\dot{h}}$ is simple. {\bf Case~3.} Assume $\dot{z}\ne 0$ and $\tau_{i,\dot{z}}(\C t+\C t\frac{d}{d t})N=0$ for some $i\in \Z$. In this case, we have $N=\C v$ with $t v=0, (t\frac{d} {dt})v=-\frac{i}{2}v$. From \eqref{A-11}, we have $p(v\otimes x^{i+1})=e(v\otimes x^{i+1})=0$ and hence $v\otimes x^{i+1}$ generates a proper highest weight submodule of $N_{\dot{z},\dot{h}}$. Therefore $N_{\dot{z},\dot{h}}$ is not simple. {\bf Case~4.} Assume $\dot{z}\ne 0$ and $\tau_{i,\dot{z}}(\C t+\C t\frac{d}{d t})N\ne0$ for all $i\in \Z$. Since $N$ is a simple $R_{\dot{z}}$-module, we have $\tau_{i,\dot{z}}(\C t+\C t\frac{d}{d t})v\ne 0$ for any nonzero $v\in N$. From \eqref{A-11} it follows that for each $i\in \Z$ we have either $p(v\otimes x^{i+1})\ne 0$ or $e(v\otimes x^{i+2})\ne 0$ for some $v\in N$. Similarly to Case~2 we deduce that $N_{\dot{z},\dot{h}}$ is simple. This completes the proof of claim~\eqref{lemma-2.5.2}. To prove claim~\eqref{lemma-2.5.3}, first assume $N_{\dot{z},\dot{h}}\cong N'_{\dot{z}',\dot{h}'}$. Then we have $i=\dot{h}-\dot{h}'\in \Z$, $\dot{z}=\dot{z}'$ and $N'\cong N\otimes x^i$ as $U_0$-modules. From \eqref{z0} and \eqref{z1} it follows that $N'\cong N^{\tau_{i,\dot{z}}}$ as $R_{\dot{z}}$-modules. Now suppose that $\dot{z}=\dot{z}'$, $i=\dot{h}-\dot{h}'\in \Z$ and $N'\cong N^{\tau_{i,\dot{z}}}$ as $R_{\dot{z}}$-modules. From \eqref{z0} and \eqref{z1} it follows that $N'\cong N\otimes x^i$ as $U_0$-modules. From Lemma \ref{lemma-2.4} we thus get $N_{\dot{z},\dot{h}}\cong N'_{\dot{z}',\dot{h}'}$. This completes the proof. \end{proof} \begin{rem}\label{remn1} {\rm Let $N$ be a simple $R_{\dot{z}}$-module. If we have $\dot{z}=0$ and $(\C t+\C t^2\frac{d}{d t})N=0$, then $\dim N=1$. If $\dot{z}\ne 0$ and $\tau_{i,\dot{z}}(\C t+\C t\frac{d}{d t})N=0$ for some $i\in \Z$, then $\dim N=1$. Thus, if $N$ is an infinite dimensional simple $R_{\dot{z}}$-module, then $N_{\dot{z},\dot{h}}$ is a simple weight $\fa$-module. } \end{rem} Now we are ready to prove our main result on the classification of all simple weight $\fa$-modules. \begin{thm}\label{thmmain} Each simple weight $\fa$-module is isomorphic to one of the following simple modules: \begin{enumerate}[$($i$)$] \item\label{thmmain.1} A simple highest or lowest weight module.
\item\label{thmmain.2} The module $Q(\lambda,\dot{h})=\C[x,x^{-1}]$, where $\dot{h}\in \C$ and $\lambda\in \C^*$, with the action given by: $$z x^i=p x^i=0,\,\,\,\, qx^i=x^{i+1},\,\,\,\,ex^i=\lambda x^{i-2},\,\,\,\,h x^i=(\dot{h}-i)x^i.$$ \item\label{thmmain.3} The module $Q'(\dot{z},\dot{h},\lambda)=\C[x,x^{-1}]$, where $\dot{h}\in \C,\dot{z}\in \C^*$ and $\lambda \in \C\backslash \Z$, with the action given by: $$\aligned &q x^i=x^{i+1},\,\,\,\, p x^i=\dot{z}(\lambda+i)x^{i-1},\,\,\,\,z x^i= \dot{z} x^i,\\& h x^i=(\dot{h}-i)x^i,\,\,\,\, e x^i=\frac{\dot{z}}{2}(\lambda+i)(\lambda+i-1)x^{i-2}.\endaligned$$ \item\label{thmmain.4} The module $N_{\dot{z},\dot{h}}$, where $N$ is an infinite dimensional simple $R_{\dot{z}}$-module. \end{enumerate} \end{thm} \begin{proof} It is straightforward to verify that $Q(\lambda,\dot{h})$ in \eqref{thmmain.2} and $Q'(\dot{z},\dot{h},\lambda)$ in \eqref{thmmain.3} are simple $\fa$-modules. From Remark~\ref{remn1} we know that the module $N_{\dot{z},\dot{h}}$ in \eqref{thmmain.4} is a simple $\fa$-module. Let $V$ be any simple weight $\fa$-module. Assume that $\dot{h}\in \supp(V)$ and that $z$ acts on $V$ as the scalar $\dot{z}$. Then $V_{\dot{h}}$ is both, a simple $U_0$-module and a simple $R_{\dot{z}}$-module. By Lemma~\ref{lem1} we have that $V_{\dot{h}}$ either is a one-dimensional module with $tV_{\dot{h}}=0$ or is an infinite dimensional module with $t$ acting bijectively on it. If $N=V_{\dot{h}}$ has infinite dimension, then $V\cong N_{\dot{z},\dot{h}}$ by Lemma~\ref{lemma-2.4}. Therefore it remains to consider the case when all nontrivial weight spaces of $V$ have dimension one. If $V$ has a nonzero element annihilated by $q$, then $V$ is clearly a lowest weight module. Therefore we may assume that $q$ acts injectively on $V$. Let $0\ne w\in V_{\dot{h}}$ and consider first the case $\dot z=0$. In this case, we have $tw=pqw=0$ and hence $pw=0$ since the action of $q$ is injective. This implies that the ideal $\C p+\C z$ of $\fa$ annihilates $V$. Let $\mathfrak{a}=\fa/(\C p+\C z)$. Then $V$ is a simple $\mathfrak{a}$-module. Denote by $\bar{x}$ the image of $x\in \fa$ in $\mathfrak{a}$. Note that $\bar{e}\bar{q}^2$ is a central element in $U(\mathfrak{a})$. By Schur's lemma, $\bar{e}\bar{q}^2$ acts on $V$ as a scalar, say $\lambda\in\mathbb{C}$. If $\lambda=0$, then $e (q^2w)=0$ and $V$ is a highest weight module. If $\lambda\ne 0$, then it is easy to check that $V\cong Q(\lambda,\dot{h})$. Finally, consider the case $\dot z\ne 0$. In this case, $w$ generates an $\H$-submodule of $V$ with nonzero central charge and one-dimensional weight spaces. Consequently, $V$ contains a simple $\C h+\H$ submodule $M$ with nonzero central charge. We make $M$ into an $\fa$-module by setting $ev=\frac{1}{2\dot{z}} p^2 v$ and denote the resulting module by $M^{\fa}$. Let $\C v$ be the one-dimensional trivial $\C h+\H$-module. Then \cite[Lemma~8 and Theorem~7]{LZ1} yield that $V$ is isomorphic to a simple quotient of $$\Ind_{\C h+\H}^{\fa}(M\otimes \C v)\cong M^{\fa}\otimes \Ind_{\C h+\H}^{\fa} \C v,$$ which is of the form $M^{\fa}\otimes X$, where $X$ is a simple quotient of $\Ind_{\C h+\H}^{\fa} \C v$. Since $M^{\fa}\otimes X$ is a weight module, we get that $X$ is the trivial module and hence $V\cong M^{\fa}$. This gives that $V$ is either a highest weight module or is isomorphic to $Q'(\dot{z},\dot{h},\lambda)$ with $\lambda\in\C\setminus\Z$. This completes the proof. \end{proof} The following result is a direct consequence of Theorem~\ref{thmmain}. 
\begin{cor}\label{cor21} Let $V$ be a simple weight $\fa$-module and $\dot{h}\in \supp(V)$. Then either $\dim V_{\dot{h}+i}\le 1$ for all $i\in \Z$ or $\dim V_{\dot{h}+i}=\infty$ for all $i\in \Z$. \end{cor} Next we present a nontrivial example of a simple weight $\fa$-module with infinite dimensional weight spaces. \begin{exa}\label{ex22} {\rm Let $\a=1-t^3(t-1)\frac{d}{dt}\in \mathcal{K}$. It is easy to see that $\alpha$ is irreducible in $\mathcal{K}$. Then $N=R_0/(R_0\cap \mathcal{K}\a)$ is a simple $R_0$-module such that $t$ acts bijectively on $N$. This implies that $N$ is also a simple $R_1$-module and a simple module over the localization of $\mathbf{A}$ at $t$. In fact, we have $N=(t-1)^{-1}\C[t,t^{-1}]$ with the following action: $$\frac{d}{d t}\cdot g(t)=\frac{ d g(t)}{d t}+\frac{g(t)}{t^3(t-1)},\,\,\,\, t \cdot g(t)=tg(t)\,\,\text{ for all } g(t)\in N. $$ Thus we have the simple $\fa$-module $N_{\dot{z},\dot{h}}=(t-1)^{-1}\C[t,t^{-1},x,x^{-1}]$ with the action \begin{gather*} h:=\dot{h}-\partial_x,\,\,\,\,z:=0,\,\,\,\, q:=x,\,\,\,\, p:=t x^{-1},\\ e:=x^{-2}t\partial_t+\frac{1}{t(t-1)x^2}+tx^{-2}\partial_x-2tx^{-2} . \end{gather*} if $\dot{z}=0$, and the action \begin{gather*} h :=\dot{h}-\partial_x,\,\,\,\,z:=\dot{z}, \,\,\,\, q:=x,\,\,\,\, p:=x^{-1}\dot{z}(2\partial_t+\frac{2}{t^2(t-1)}+\partial_x-1),\\ e:=\dot{z}x^{-2}\left(\frac{t}{\dot{z}}+(2\partial_x-3)(\partial_t+\frac{1}{t^2(t-1)}) +2\partial_t^2+\frac{4}{t^2(t-1)}\partial_t-\right.\\\left. \hspace{4cm} \frac{2+2t+6t^2}{t^4(t-1)^2} +\frac{(\partial_x-1)(\partial_x-2)}{2}\right), \end{gather*} if $\dot{z}\ne 0$ (here $\partial_x=x\frac{\partial}{\partial x}$ and $\partial_t=t\frac{\partial}{\partial t}$). } \end{exa} \section{Simple weight $\mathcal{S}$-modules having a simple $\fa$-submodule}\label{s4} In this section we classify all simple weight $\S$-modules that have a simple $\fa$-submodule. Let $V$ be a simple weight $\fa$-module. We have the induced weight $\S$-module $H(V):=\Ind_{\fa}^{\S} V$ which can be identified with $\C[f]\otimes V$ as a vector space. In this section we classify all simple quotient $\S$-modules of $H(V)$. From \cite[Corollary~8]{DLMZ} we know that the center $Z(U(\S))$ of $U(\S)$ equals $\C[z,c]$, where \begin{equation}\label{eq55} c:=(fp^2-eq^2-hpq)-\frac{1}{2}(h^2+h+4fe)z. \end{equation} Recall some simple $\S$-module constructed in \cite{LMZ} and \cite{D}. Let $V$ be any simple module over $\H$ with nonzero central charge $\dot{z}$. The module $V$ becomes an $\S$-module by setting \begin{equation}\label{eq57} e v=\frac{1}{2\dot{z}} p^2 v,\,\,\,\, f v=-\frac{1}{2 \dot{z}}q^2 v,\,\,\,\, h v=(-\frac{pq}{\dot{z}}+\frac{1}{2})v, \text{ where } v\in V. \end{equation} The resulting module will be denoted by $V^{\S}$. Any simple $\sl_2$-module $W$ becomes an $\S$-module by setting $\H W=0$. The resulting module will also be denoted by $W^{\S}$. Next let us define a class of simple weight $\S$-modules. Let $N$ be an infinite dimensional simple $R_{\dot{z}}$-module. Let $\mathbf{A}_{(t)}$ denote the localization of $\mathbf{A}$ at powers of $t$, which is actually the differential operator algebra $\C[t, t^{-1}, \frac d{dt}]$. Then $N$ is also naturally an $\mathbf{A}_{(t)}$-module since $t$ acts bijectively on $N$, see Lemma~\ref{lem1}\eqref{lem1.4}. Let $\dot{c},\dot{z},\dot{h}\in \C$. Then we have the simple weight $\fa$-module $N_{\dot{z},\dot{h}}$ given by Lemma~\ref{lem6}. 
We extend this $\fa$-module $N_{ \dot{z},\dot{h}}$ to an $\S$-module $N_{\dot{c}, \dot{z},\dot{h}}$ as follows: For $\dot{z}=0$ we set: \begin{equation}\label{eq77} \begin{array}{l} h (v\otimes x^i)=(\dot{h}-i) v\otimes x^i,\,\,\,\, z(v\otimes x^i)=0, \\ q (v\otimes x^i)=v\otimes x^{i+1},\,\,\,\, p(v\otimes x^i)=(t v)x^{i-1},\\ e (v\otimes x^{i})=((t^2\frac{d}{d t}+(i-2)t)v)\otimes x^{i-2},\\ f(v\otimes x^i)=((\frac{d}{d t}+(\dot{h}-2 )t^{-1}+\dot{c}t^{-2})v)\otimes x^{i+2}. \end{array} \end{equation} For $\dot{z}\neq 0$ we set: \begin{equation}\label{eq75} \begin{array}{l} h (v\otimes x^i)=(\dot{h}-i) v\otimes x^i,\,\,\,\, z(v\otimes x^i)=\dot{z} v\otimes x^i,\\ q (v\otimes x^i)=vx^{i+1},\,\,\,\, p(v\otimes x^i)=((2\dot{z} t\frac{d}{d t}+(i-1)\dot{z}) v)\otimes x^{i-1},\\ e (v\otimes x^{i})=((t+(2i-3)\dot{z}t\frac{d}{d t}+ 2\dot{z}(t\frac{d}{d t})^2+\frac{(i-1)(i-2)}{2}\dot{z})v)\otimes x^{i-2},\\ f(v\otimes x^i)=((-\frac{1}{2\dot{z}}-(\dot{h}-\frac{1}{2})\frac{d}{ dt}-t(\frac{d}{dt})^2-(\frac{(\dot{h}-1)(\dot{h}-2)}{4}+\frac{\dot{c}}{2\dot{z}})t^{-1})v)\otimes x^{i+2}. \end{array} \end{equation} \begin{lem}\label{lem15} Formulae~\eqref{eq77} and \eqref{eq75} indeed define the structure of an $\S$-module, moreover, the element $c$ acts on $N_{\dot{c}, \dot{z},\dot{h}}$ as the scalar $\dot{c}$. \end{lem} \begin{proof} If $\dot{z}=0$, then from \eqref{A-01} we know that $p=tx^{-1}$ which acts bijectively on $N_{ \dot{z},\dot{h}}$. From \eqref{eq55} we obtain that $f=(eq^2+hpq+\dot{c})p^{-2}$, that is $f(v\otimes x^i)=((\frac{d}{d t}+(\dot{h}-2 )t^{-1}+\dot{c}t^{-2})v)\otimes x^{i+2}$. This indicates the formulae for $\dot{z}=0$ and they are checked by a (long but) straightforward computation. If $\dot{z}\ne 0$, then from \eqref{A-11} we know that $$\aligned p^2-2ze=&(x^{-1}(2\dot{z}\partial_t+(\partial_x-1)\dot{z}))^2-\\ &-2 x^{-2}\dot{z}\left(t+\dot{z}(2\partial_x-3))\partial_t+2\dot{z} \partial_t^2+\frac{(\partial_x-1)(\partial_x-2)}2\dot{z}\right)\\ =&-2\dot{z}tx^{-2}\endaligned $$ which is bijective on $A_{ \dot{z},\dot{h}}$. From \eqref{eq55} we obtain that $$f=(eq^2+hpq+z(h^2+h)/2+\dot{c})(p^2-2ze)^{-1},$$ that is $f(v\otimes x^i)$ equals {\small $$ \left(\left(-\frac{1}{2\dot{z}}-\big(\dot{h}-\frac{1}{2}\big)\frac{d}{dt}-t\big(\frac{d}{dt}\big)^2 -\big(\frac{(\dot{h}-1)(\dot{h}-2)}{4}+\frac{\dot{c}}{2\dot{z}}\big)t^{-1}\right)v\right)\otimes x^{i+2}.$$ }\hspace{-2mm} This indicates the formulae for $\dot{z}\neq 0$ and they are checked by a (long but) straightforward computation. \end{proof} Let $\dot{z}\in \C^*$ and $\lambda \in \C\backslash \Z$. For convenience, we define the simple $\H$-module $G(\dot{z},\lambda)=\C[x,x^{-1}]$ as follows (see \cite{LMZ}): $$\aligned &q x^i=x^{i+1},\,\,\,\, p x^i=\dot{z}(\lambda+i)x^{i-1},\,\,\,\,z x^i=\dot{z} x^i.\endaligned$$ Note that the $\H$-module $G(\dot{z},\lambda)$ is an $\H$-submodule of the simple $\fa$-module $Q'(\dot{z}, \dot{h},\lambda)$ for any $\dot{h}\in\C$. Now we can formulate the main result of this section. \begin{thm}\label{thmtwo} Let $V$ be a simple weight $\fa$-module. \begin{enumerate}[$($a$)$] \item\label{thmtwo.1} If $V$ is a highest weight module over $\fa$, then $H(V)$ is a highest weight $\S$-module, which has a unique simple quotient. \item\label{thmtwo.2} If $V\cong \bar{M}_{\fa}^-(\dot{z},\dot{h})$ with $\dot{z}\ne 0$, then \begin{displaymath} H(V)\cong M_{\H}^-(\dot{z})^{\S}\otimes M_{\sl_2}(\dot{h}-\frac{1}{2})^{\S}. 
\end{displaymath} The latter module has a unique simple quotient and this simple quotient is isomorphic to $M_{\H}^-(\dot{z})^{\S}\otimes \bar{M}_{\sl_2}(\dot{h}-\frac{1}{2})^{\S}$. \item\label{thmtwo.3} If $V\cong Q(\lambda,\dot{h})$ for some $\dot{h}\in \C$ and $\lambda\in \C^*$, then $H(V)$ is simple. \item\label{thmtwo.4} If $V\cong Q'(\dot{z}, \dot{h},\lambda)$ for some $\dot{h}\in \C,\dot{z}\in \C^*,\lambda \in \C\backslash \Z$, then \begin{displaymath} H(V)\cong G(\dot{z},\lambda)^{\S}\otimes M_{\sl_2}(\lambda+\dot{h}+\frac{1}{2})^{\S}. \end{displaymath} The latter module has a unique simple quotient and this simple quotient is isomorphic to $G(\dot{z},\lambda)^{\S}\otimes \bar{M}_{\sl_2}(\lambda+\dot{h}+\frac{1}{2})^{\S}$. \item\label{thmtwo.5} If $V\cong N_{\dot{z},\dot{h}}$, where $N$ is an infinite dimensional simple $R_{\dot{z}}$-module, then any simple quotient of $H(V)$ is isomorphic to $N_{\dot{c}, \dot{z},\dot{h}}$ for some $\dot{c}\in \C$. \end{enumerate} \end{thm} \begin{proof} Claim~\eqref{thmtwo.1} is clear. To prove claim~\eqref{thmtwo.2}, we note that the module $V=\bar{M}_{\fa}^-(\dot{z},\dot{h})$ has a simple $\H$-submodule $M_{\H}^-(\dot{z})$. Then, using \eqref{eq57}, we can extend the action of $\H$ on $M_{\H}^-(\dot{z})$ to a lowest weight $\fa$-module $\bar{M}_{\fa}^-(\dot{z},-1/2)$. Let $\C v$ be the $1$-dimensional $\fa$-module given by $ev=pv=qv=zv=0$ and $hv=(\dot{h}-\frac{1}{2})v$. Then we have $V\cong \C v\otimes \bar{M}_{\fa}^-(\dot{z},-1/2)$. From \cite[Theorem~3]{LMZ} we know that $H(V)\cong M_{\H}^-(\dot{z})^{\S}\otimes \bar{M}_{\sl_2}(\dot{h}-\frac{1}{2})^{\S}$. The rest of claim~\eqref{thmtwo.2} now follows easily. Let $M$ be a nonzero submodule of $H(V)$. Then $M$ is a weight module. Choose $0\ne v_n=\sum_{i=0}^k c_i f^i \otimes x^{n-2i} \in M$ such that $k$ is minimal. If $k>0$, then $0\ne p v_n=-\sum_{i=1}^k ic_i f^{i-1} \otimes x^{n-2i-1}\in M$ which contradicts our choice of $v_n$. Hence $k=0$ and $v_n\in 1\otimes V$. Now $M$ has to coincide with $H(V)$ since $V$ is a simple $\fa$-module. Claim~\eqref{thmtwo.3} follows. To prove claim~\eqref{thmtwo.4}, we note that $V=Q'(\dot{z}, \dot{h},\lambda)$ has a simple $\H$-sub\-module $G(\dot{z},\lambda)$. Then we apply \eqref{eq57} to make $G(\dot{z},\lambda)$ into an $\fa$-module $G(\dot{z},\lambda)^{\fa}$ with $h(x^i)=-(\lambda+i+1/2)x^i$. Let $\C v$ be the one-dimensional $\fa$-module given by $ev=pv=qv=zv=0$ and $hv=(\dot{h}+\lambda+\frac{1}{2})v$. Then $V\cong \C v\otimes G(\dot{z},\lambda)_{\fa}$. From \cite[Theorem~3]{LMZ} we know that $H(V)\cong C(\dot{z},\lambda)^{\S}\otimes M_{\sl_2}(\lambda+\dot{h}+\frac{1}{2})^{\S}$. The rest of claim~\eqref{thmtwo.4} now follows easily. Finally, we prove claim~\eqref{thmtwo.5}. During the proof of Lemma~\ref{lem15} we computed that in the module $N_{\dot{z},\dot{h}}$ we have $p^2 (v\otimes x^i)=(t^2v)\otimes x^{i-2}$ if $\dot{z}=0$ and $(p^2-2ez)(v\otimes x^i)=(-2\dot{z} tv)\otimes x^{i-2}$ if $\dot{z}\ne 0$. Recall that $t$ acts bijectively on $N$ since $N$ is infinite dimensional. Thus $p^2-2ez$, which is the coefficient at $f$ in $c$, acts bijectively on $N_{\dot{z},\dot{h}}$ in this case. Let $M$ be any simple quotient of $H(V)$. Then $c$ acts as the scalar $\dot{c}$ on $M$. Now we have $f (1\otimes V)\subset 1\otimes V$ in $M$, and the action of $f$ is uniquely determined by $\dot{c}$. Thus we have $M\cong N_{\dot{c},\dot{z},\dot{h}}$. This completes the proof. 
\end{proof} Let $\fa^{op}$ be the parabolic subalgebra of $\mathcal{S}$ spanned by $\{f,h,p,q,z\}$, which is isomorphic to $\fa$. There are actually many simple weight $\S$-modules that do not contain any simple $\fa$-submodule or any simple $\fa^{op}$-submodule. For example, if we take $V$ to be a simple weight $\sl_2$-module that is not highest or lowest weight module and $\dot{c}\ne0$, then the simple $\S$-module $V^{\S}\otimes M_{\H}(\dot{z})^{\S}$ contains neither simple $\fa$-submodules nor simple $\fa^{op}$-submodules. Taking into account the results of this paper it is natural to ask whether one can classify all simple weight $\S$-modules or all simple $\fa$-module. \vspace{5mm} \noindent {\bf Acknowledgments.} The research presented in this paper was carried out during the visit of the first author to University of Waterloo and of the second author to Wilfrid Laurier University. The first author thanks professors Wentang Kuo and Kaiming Zhao for sponsoring his visit, and University of Waterloo for providing excellent working conditions. The second author thanks Wilfrid Laurier University for hospitality. We thank the referee for helpful comments and nice suggestions. R.L. is partially supported by NSF of China (Grant 11371134) and Jiangsu Government Scholarship for Overseas Studies (JS-2013-313). \\ V.M. is partially supported by the Swedish Research Council.\\ K.Z. is partially supported by NSF of China (Grant 11271109) and NSERC.
\section{Introduction} In outsourcing the data storage to cloud servers, a mechanism, known as access control, is required to guarantee users' access to the appropriate data. For individual use, access control can be simply designed using asymmetric cryptography: data encrypted by a public key can be decrypted by the user with the corresponding private key. However, in some cases, the data recipient is not known at encryption time, or the data is intended for a group of users. For example, the users of an industrial cloud may include sales enterprises, consulting firms, manufacturing enterprises, logistics enterprises, and scientific research institutions, where each group may be granted access to some specific data~\cite{song2019efficient}. Attribute-based access control can be considered a solution for these circumstances, where only users whose attributes satisfy a pre-specified access policy can access the data. In attribute-based access control, a user with a certain set of attributes is eligible to gain access to specific data. In a central solution, one authority is responsible for verifying users' attributes. The concern is that the central authority will learn all the attributes of the users, which raises serious privacy issues. For example, consider the case where a patient within a certain income range and with a particular disease is eligible to access some information. However, for various reasons, he does not want a central authority to know both his income and his disease. This concern can be resolved by delegating the task of attribute verification to multiple authorities. For example, one authority, say a financial organization, \emph{only} observes and verifies the income attribute of the patient, and another authority \emph{only} observes and verifies the disease attribute, without learning anything about any other attribute of the patient. Access is granted if both attributes have been verified. There are algorithms for non-centralized attribute-based access control based on cryptographic primitives, e.g., bilinear mappings, hash functions, and encryption algorithms~\cite{wang2011hierarchical,jung2013privacy}. In this paper, we propose an information theoretic framework for the problem of distributed attribute-based private access control (DAPAC), and investigate its fundamental limits. \textbf{\textit{Related Works}:} The idea of identity-based encryption was first proposed by Shamir~\cite{shamir1984identity}. In~\cite{boneh2001identity}, the first fully functional identity-based encryption scheme was introduced. In~\cite{sahai2005fuzzy}, attribute-based encryption systems were introduced as a special case of identity-based encryption systems, where each user is specified by an attribute vector. Among its many applications, attribute-based access control has been proposed for personal health record services~\cite{qian2015privacy, zhang2019hidden}. In the scheme of~\cite{sahai2005fuzzy}, it is assumed that there is an authority that verifies all attributes of the user. Systems with multiple authorities to verify the attributes are studied in~\cite{chase2007multi,jung2013privacy}. To the best of our knowledge, all existing works on the attribute-based access control problem utilize cryptographic primitives. In contrast, in this work, we take an information theoretic approach.
Our work is mainly inspired by the results on information theoretic private information retrieval (PIR)~\cite{sun2017capacity, sun2018capacity, banawan2019private, banawan2020capacity, ulukus2022private}. Here, we elaborate on the similarities and the differences between the PIR and DAPAC problems. In terms of similarities, in both problems: (i) The user intends to retrieve a message from a set of replicated servers. (ii) The user wishes to keep some information about the index of the desired message private from each server. (iii) In DAPAC, the user should gain no information about the non-requested messages, as in symmetric PIR~\cite{sun2018capacity}. Despite the above similarities, there are some intrinsic differences between these two problems: (i) In PIR, the index of the requested message is kept entirely private from all the servers. However, in DAPAC, the index of the message is an attribute vector, and each server is supposed to observe and verify one of the attributes, without learning any information about the other attributes. (ii) In PIR, all the files in each server are in principle accessible to the user; of course, only the one that the user asks for is revealed to the user, following the protocol. However, in DAPAC, when server $n$ verifies the $n$'th attribute of the user, only the messages with index vectors whose $n$'th entry matches will be accessible to the user at that server. Thus, after attribute verification at each server, the contents of the servers are not replicated from the user's perspective. Because of these differences, the solutions of the PIR problem are not applicable to the DAPAC problem. \textbf{\textit{Our Contribution}:} We propose an information theoretic framework for the DAPAC problem. In the proposed model, there is a user with $N$ attributes, denoted by attribute vector $\mathbf{v}^*=(v_1^*, ..., v_N^*)$, where each attribute has $K$ possible values. The user has the right (and wishes) to access the message $W^{\mathbf{v}^*}$, with access policy $\mathbf{v}^*$. There are $N$ replicated servers, containing all messages, where Server $n$ can verify $v_n^*$ and, in response, release a function of its content. We consider the access control and privacy constraints. The access control constraint assures that the user is able to retrieve his intended message $W^{\mathbf{v}^*}$ from what he receives from the servers (correctness), and that he gains no information about other messages (data secrecy). The (user's attribute) privacy constraint guarantees that each server gains no information about the other attributes of the user, except the one that the server is responsible for verifying. The goal is to minimize the download cost. The capacity of the DAPAC problem is defined as the ratio of the file size to the aggregate size of the responses, maximized over all feasible schemes. We obtain a lower bound on the capacity of this problem by proposing an achievable algorithm. In the proposed algorithm, the user proves his $n$-th attribute to Server~$n$ and, after verifying $v_n^*$, Server~$n$ authorizes the user's access to the message set $\mathcal{W}^{v^*_n}$. To retrieve the desired message $W^{\mathbf{v}^*}\in \mathcal{W}^{v^*_n}$, the user sends queries to Server~$n$ in the form of linear combinations of messages that have two attributes in common; obviously, one of them is $v_n^*$.
Considering any two servers, the user downloads two linear combinations of messages with the same access policy and the same message indices, e.g., $L_m$ and $L_n$ from Servers $m$ and $n$, respectively. Although the servers add an independent part of common randomness to each of the requested linear combinations to guarantee the data secrecy, the added randomness is the same for the linear combinations $L_m$ and $L_n$. Therefore, the user can subtract these two linear combinations to retrieve a chunk of the desired message $W^{\mathbf{v}^*}$. The user can make ${N \choose 2}$ such linear combinations to completely retrieve the message $W^{\mathbf{v^*}}$. In this scheme, to guarantee privacy, at each Server~$n\in [N]$, the distribution on the other attributes of the user (by observing the queries) is uniform, because all the linear combinations of messages that have two attributes in common (one is $v_n^*$) are requested. So each server learns nothing about the other attributes of the user. The rest of the paper is as follows. Section~\ref{System_model} formally introduces our proposed information theoretic framework. Section~\ref{Main_results} presents main results. Section~\ref{achievable} presents the achievable algorithm, and Section~\ref{proof} contains the proofs. \section {System Model} \label{System_model} As shown in Fig.~\ref{system_model}, we consider a system, including $N\geq 2$ non-colluding semi-honest servers, each storing an identical copy of a database of messages $\mathcal{W}$ and a set of common randomness $\mathcal{C}$, and a user with $N$ attributes, shown by attribute vector $\mathbf{v^*}=(v^*_1, ..., v^*_N)$, who wishes to download a message from the servers that corresponds to his attribute vector. Each server is responsible for verifying one of the attributes, i.e., Server $n$ is responsible for verifying $v_n^*$. The user can show the evidence of possessing attribute $v^*_n$ to Server $n$ and he cannot falsify the possessing of any other attribute $v_n$, $\forall v_n\neq v_n^*$ and $v_n \in \mathcal{V}_n$. There are $N$ disjoint attribute sets $\mathcal{V}_1, ..., \mathcal{V}_N$ and $|\mathcal{V}_n|=K\geq 2$ for $n\in [N]$. So, $v_n$, attribute $n$, can take one of $K\geq 2$ values from the set $\mathcal{V}_n$. Moreover, the user has access to an independent uniform permutation $\mathcal{P}$. We define an access policy for each message such that if the access policy of a message is $(v_1, ..., v_N)$, then this message is shown as $W^{(v_1, ..., v_N)}$, and only users with attribute vector $(v_1, ..., v_N)$ have the right to access it. All messages have equal length, so for $v_n \in \mathcal{V}_n$, $n \in [N]$, we have \begin{equation} H(W^{(v_1, ..., v_N)})=L. \end{equation} The messages of different access policies are independent. Let $\mathcal{V}^N\doteq\mathcal{V}_1\times \mathcal{V}_2\times...\times \mathcal{V}_N$, so for each $\mathcal{V}\subset\mathcal{V}^N$, and $\Tilde{\mathcal{W}}=\{W^{(v_1, ..., v_N)}, (v_1, ..., v_N)\in \mathcal{V}\}$, we have \begin{align} H(\Tilde{\mathcal{W}})=\sum_{(v_1, ..., v_N)\in \mathcal{V}}H(W^{(v_1, ..., v_N)}). 
\end{align} \begin{figure}[tb] \centering \includegraphics[trim={6cm .1cm .5cm 0cm},clip,scale=.9]{system_model} \captionsetup{justification=centering} \caption{System model of the DAPAC} \label{system_model} \end{figure} The user can send queries for messages with different access polices $\mathbf{v}^{(n)}$ for $\forall n \in[N]$, and \begin{align} \mathbf{v}^{(n)}=(v_1^{(n)}, ..., v_n^*, ..., v_N^{(n)})\in \mathcal{V}^N. \end{align} The user sends query $Q_n^{\mathbf{v}^{(n)}}$, and his $n$-th attribute, $v^*_n$, to Server~\textit{n} as a pair $(Q_n^{\mathbf{v}^{(n)}}, v_n^*)$. The server verifies the attribute $n$ of the user correctly (by verifying the possession evidence) and let the user access the messages $\mathcal{W}^{v^*_n}=\{W^{(v_1,\ldots,v_n,\ldots,v_N)}\in\mathcal{W}|v_n=v^*_n\}$, if the user is verified to have attribute $v_n^*$. When a user wants to retrieve his corresponding message labelled with $\mathbf{v^*}=(v^*_1, ..., v^*_N)\in \mathcal{V}^N$, he sends query pairs $(Q_n^{\mathbf{v}^*}, v_n^*)$ to Server $n$. The Server verifies the attribute $n$ of the user correctly and let the user access the messages $\mathcal{W}^{v^*_n}$, if the user is verified to have attribute $v_n^*$. The queries are generated with no knowledge about the messages. So, for each $\mathbf{v}=(v_1, ..., v_N)\in \mathcal{V}^N$, we have \begin{equation} I(Q_1^{\mathbf{v}}, Q_2^{\mathbf{v}}, ..., Q_N^{\mathbf{v}}; \mathcal{W})=0, \end{equation} where $Q_n^{\mathbf{v}}$ is the query sent to Server $n$ to access the message with access policy $\mathbf{v}$. The queries are deterministic functions of the user's attributes and the randomness $\mathcal{P}$, used by the user to generate queries. So, for each $\mathbf{v}=(v_1, ..., v_N)\in \mathcal{V}^N$, \begin{equation} H(Q_1^{\mathbf{v}}, Q_2^{\mathbf{v}}, ..., Q_N^{\mathbf{v}}|\mathcal{P}, v_1, v_2, ..., v_N)=0. \end{equation} The Server~\textit{n}, after receiving the query for the message ${W}^{(\mathit{v}_1,\ldots, \mathit{v}^*_n, \ldots, \mathit{v}_N)}$ (i.e., $Q_n^{\mathbf{v}}$, $\mathbf{v}=(v_1, \ldots, v_n^*, \ldots, v_N)$), generates the answer set $A_n^{\mathbf{v}}$ based on the received query, attribute $n$ of the user, $v_n^*$, the messages that correspond to the attribute $v_n^*$ (i.e., $\mathcal{W}^{v_n^*}$), and the common randomness between servers, i.e., $\mathcal{C}$. So, for each $n\in[N]$, \begin{equation} H(A^{\mathbf{v}}_n|Q^{\mathbf{v}}_n, v^*_n, \mathcal{W}^{{v}^*_n}, \mathcal{C})=0. \end{equation} Now, we define the constraints to guarantee the access control and the privacy of the other attributes of the user. \textbf{Access control}: To ensure the access control in our setup, each user must correctly retrieve his own message (correctness), while preventing the leakage about the other messages to him (data secrecy). Hence, it is required that for $\mathcal{A}:= \{A_1^{\mathbf{v}^{(1)}}, ..., A_N^{\mathbf{v}^{(N)}}\}$, and $\mathcal{Q}:= \{ (Q_1^{\mathbf{v}^{(1)}}, v^*_1), ..., (Q_N^{\mathbf{v}^{(N)}}, v^*_N)\}$: (i) The user can retrieve his message: \begin{align}[Correctness] \label{correctness} \:\:H(W^{\mathbf{v^*}}|\mathcal{A} ,\mathcal{Q},\mathcal{P})=0. \end{align} (ii) Secrecy of the other messages is preserved: \begin{align}[Data\text{ }Secrecy] \label{data_sec_com} \:\:I(\mathcal{W}\backslash W^{\mathbf{v}^*}; \mathcal{A},\mathcal{Q},\mathcal{P}|{W}^{\mathbf{v}^*})=0. \end{align} \textbf{Privacy}: To preserve the user's privacy, it is required that the attribute vector be kept hidden from each Server~$n$ except $v^*_n$. 
Thus, (iii) In Server $n$, for $n\in[N]$: \begin{align}[Privacy] \label{poa} \:\:H(\{v^*_i: i\in[N], i\neq n\}|Q_n^{\mathbf{v}^{(n)}}, \mathcal{C}, \mathcal{W}, v^*_n) =H(\{v^*_i: i\in[N], i\neq n\}| \mathcal{C}, \mathcal{W}, v^*_n). \end{align} An $(N, K)$ DAPAC scheme for the above setup and for a set of vectors $\{\mathbf{v}^{(1)},\ldots,\mathbf{v}^{(N)}\}$ consists of query-answer functions $(Q_n^{\mathbf{v}^{(n)}},A_n^{\mathbf{v}^{(n)}})$ for $n\in[N]$, and the corresponding decoding functions that map them to $W^{\mathbf{v^*}}$, common randomness $\mathcal{C}$, and random permutation $\mathcal{P}$. The retrieval rate of this code is the ratio of bits of the desired message (\textit{L}) to the total download cost from all servers in bits, i.e., $D=\sum_{n=1}^{N}H(A_n^{\mathbf{v}^{(n)}})$ and is defined as, \begin{align} R:=\frac{L}{D}. \label{eq_r0} \end{align} \begin{Definition} A rate $R$ is achievable if a DAPAC scheme with the retrieval rate greater than or equal to $R$ exists that satisfies the constraints of correctness \eqref{correctness}, secrecy of other messages \eqref{data_sec_com}, and privacy \eqref{poa} for all $\mathbf{v}^*=(v_1^*, v_2^*, ..., v_N^*)\in \mathcal{V}^N$. The capacity of the DAPAC problem is defined as, \begin{align} C:=\sup\{R:\text{ }R\text{ }\text{is achievable}\}. \end{align} \end{Definition} We also define a parameter to show the number of equations downloaded from all servers for the desired message retrieval. The download complexity is defined as, \begin{align} \label{download_comp} DC:=\sum_{n=1}^{N}|A_n^{\mathbf{v}^{(n)}}|. \end{align} \section{Main Results} \label{Main_results} The first theorem presents a lower bound on the capacity of the DAPAC problem, and the second theorem presents the minimum common randomness required in Theorem~\ref{thoerem_1}. \begin{Theorem}\label{thoerem_1} In an $(N, K)$ DAPAC system, with at least two attributes ($N\geq 2$), where each has at least two values ($K=|\mathcal{V}_n|\geq 2$), the following rate is achievable with download complexity of $O(KN^2)$, \begin{align} \label{r_poly} R=\frac{1}{2K}\leq C. \end{align} \end{Theorem} \begin{IEEEproof} To prove this lower bound, we propose an achievable algorithm with the rate $R=\frac{1}{2K}$ in Section~\ref{achievable}, and the rest of proof is provided in Section~\ref{theorem_1_proof}. \end{IEEEproof} \begin{Remark} Consider an $(N, K)$ DAPAC system, we can run an $(N-1,N)$ secret sharing on the messages with different access policies, and then download all accessible messages from each server. This is a naive solution that jointly satisfies \eqref{correctness}, \eqref{data_sec_com}, and \eqref{poa}. In this scheme, the download complexity is $NK^{N-1}$ and the size of each secret share is $L$ bits. So the achievable rate is $R_\mathsf{Naive}=\frac{1}{NK^{N-1}}$, and thus, \begin{align} \frac{R}{R_\mathsf{Naive}}=\frac{NK^{N-1}}{2K}=\frac{NK^{N-2}}{2}. \end{align} We observe that: (i) For a fixed $K$, the proposed DAPAC scheme has an exponential gain over the naive scheme as $N$ increases. (ii) For a fixed $N$, the proposed DAPAC scheme has a polynomial gain over the naive scheme as $K$ increases. (iii) The download complexity of the proposed DAPAC scheme is less than the naive scheme. \end{Remark} \begin{Remark} To guarantee the privacy, the user should hide the value of his other attributes in all their possible values (from each server). The larger the alphabet of attribute be, the harder is to provide the privacy. 
This result is reflected from \eqref{r_poly} as the achievable rate decreases when $K$ increases. \end{Remark} \begin{Remark} The surprising fact about the achievable rate \eqref{r_poly} is that it is independent of the number of attributes $N$. The reason is that, in our achievable scheme, we split the messages into $\frac{N(N-1)}{2}$ equal chunks, each with length $\frac{L}{\frac{N(N-1)}{2}}$ bits. Then, we download $KN(N-1)$ linear combinations of these chunks to retrieve the desired message. So, the total download is $2KL$ bits, which is independent of $N$. \end{Remark} To guarantee the data secrecy constraint \eqref{data_sec_com}, we need a minimum amount of independent common randomness $\mathcal{C}$ between servers. The following theorem presents the minimum common randomness required in the proposed achievable scheme, and its proof is provided in Subsection~\ref{theorem_2_proof}. \begin{Theorem} \label{thoerem_2} In the proposed $(N, K)$ DAPAC scheme, the lower bound on the amount of common randomness is as, \begin{align} H(\mathcal{C})\geq K^2L. \end{align} \end{Theorem} \section{Achievable algorithm} \label{achievable} In this section, we first present the key ideas of our proposed achievable scheme by a motivating example. Then we present the general achievable algorithms. \textit{\textbf{Motivating Example:}} Consider a $(3, 2)$ DAPAC system. Let $\mathcal {V}_1=\{\mathsf{M}, \mathsf{P}\}$, where $\mathsf{M}$ and $\mathsf{P}$ indicate the MSc and PhD, respectively, $\mathcal {V}_2=\{\mathsf{E}, \mathsf{C}\}$ where $\mathsf{E}$ and $\mathsf{C}$ indicate the Electrical Engineering and Computer Science, respectively, and $\mathcal {V}_3=\{\mathsf{S}, \mathsf{F}\}$ where $\mathsf{S}$ and $\mathsf{F}$ indicate Spring intake and Fall intake, respectively. $\forall (v_1, v_2, v_3) \in \mathcal{V}^3$, we split the message $W^{(v_1, v_2, v_3)}$ into three equal chunks as $W^{(v_1,v_2,v_3)}=w_1^{v_1v_2v_3}||w_2^{v_1v_2v_3}||w_3^{v_1v_2v_3}$. There are three servers; Each is responsible for verifying one of the attributes and giving messages to the user based on his attribute and requests. Suppose a user who needs to access the message $W^{(\mathit{\mathsf{M}, \mathsf{E}, \mathsf{S}})}$. The user commits $v^*_1=\mathsf{M}$, $v^*_2=\mathsf{E}$, and $v^*_3=\mathsf{S}$, in servers $1$, $2$, and $3$, respectively. Using a uniform random permutation $\mathcal{P}$, the user permutes the index of different chunks of messages and accesses the couple of messages as shown in Table~\ref{access_table}. 
\begin{table}[tb] \centering \caption{Access table for the motivating example} \label{access_table} \scalebox{1}{ \begin{tabular}{|c|c|c|} \hline $S\mathit{\{\mathsf{M}, \mathsf{P}\}}$ & $S\mathit{\{\mathsf{E}, \mathsf{C}\}}$ & $S\mathit{\{\mathsf{F}, \mathsf{S}\}}$ \\ \hline $\mathbf{w}_{\mathsf{M}}^{(1),1}=(w^{\mathsf{MES}}_{1},w^{\mathsf{MCS}}_{1})$ &$\mathbf{w}_{\mathsf{E}}^{(2),1}=(w^{\mathsf{MES}}_{2},w^{\mathsf{MEF}}_{1})$ & $\mathbf{w}_{\mathsf{S}}^{(3),1}=(w^{\mathsf{MES}}_{1},w^{\mathsf{MCS}}_{1})$ \\ $\mathbf{w}_{\mathsf{M}}^{(1),2}=(w^{\mathsf{MES}}_{2},w^{\mathsf{MEF}}_{1})$ &$\mathbf{w}_{\mathsf{E}}^{(2),2}=(w^{\mathsf{PES}}_{1}, w^{\mathsf{MES}}_{3})$ & $\mathbf{w}_{\mathsf{S}}^{(3),2}=(w^{\mathsf{PES}}_{1}, w^{\mathsf{MES}}_{3})$ \\ $\mathbf{w}_{\mathsf{M}}^{(1),3}=(w^{\mathsf{MCS}}_{2},w^{\mathsf{MCF}}_{1})$ &$\mathbf{w}_{\mathsf{E}}^{(2),3}=(w^{\mathsf{PES}}_{2},w^{\mathsf{PEF}}_{1})$ & $\mathbf{w}_{\mathsf{S}}^{(3),3}=(w^{\mathsf{MCS}}_{3},w^{\mathsf{PCS}}_{1})$ \\ $\mathbf{w}_{\mathsf{M}}^{(1),4}=(w^{\mathsf{MCF}}_{2},w^{\mathsf{MEF}}_{2})$ &$\mathbf{w}_{\mathsf{E}}^{(2),4}=(w^{\mathsf{MEF}}_{3},w^{\mathsf{PEF}}_{2})$ & $\mathbf{w}_{\mathsf{S}}^{(3),4}=(w^{\mathsf{PCS}}_{2},w^{\mathsf{PES}}_{3})$ \\ \hline \end{tabular}} \end{table} The user generates $12$ vectors $\mathbf{a}^{(n)}_i$ for $ i \in [4], n \in [3]$, each a $1\times 2$ binary vector; Nine of them have random elements with independent uniform distribution in $\{0,1\}$ and the rest are: \begin{align} \mathbf{a}^{(2)}_1&=\mathbf{a}^{(1)}_2\oplus(1,0),\\ \mathbf{a}^{(3)}_1&=\mathbf{a}^{(1)}_1\oplus(1,0),\\ \mathbf{a}^{(3)}_2&=\mathbf{a}^{(2)}_2\oplus(0,1). \end{align} Then, the user sends queries for message $W^{(\mathit{\mathsf{M}, \mathsf{E}, \mathsf{S}})}$ as shown in Table \ref{table}, where $\forall i\in[9]$, ${s}_{i}$ is an independent part of $\mathcal{C}$. \begin{table}[tb] \centering \caption{Request table for the motivating example} \label{table} \scalebox{1}{ \begin{tabular}{|c|c|c|} \hline $S\mathit{\{\mathsf{M}, \mathsf{P}\}}$ & $S\mathit{\{\mathsf{E}, \mathsf{C}\}}$ & $S\mathit{\{\mathsf{F}, \mathsf{S}\}}$ \\ \hline $\mathbf{a}^{(1)}_{1}.\mathbf{w}_{\mathsf{M}}^{(1),1}+s_1$ &$\mathbf{a}^{(2)}_{1}.\mathbf{w}_{\mathsf{E}}^{(2),1}+s_2$ & $\mathbf{a}^{(3)}_{1}.\mathbf{w}_{\mathsf{S}}^{(3),1}+s_1$ \\ $\mathbf{a}^{(1)}_{2}.\mathbf{w}_{\mathsf{M}}^{(1),2}+s_2$ &$\mathbf{a}^{(2)}_{2}.\mathbf{w}_{\mathsf{E}}^{(2),2}+s_3$ & $\mathbf{a}^{(3)}_{2}.\mathbf{w}_{\mathsf{S}}^{(3),2}+s_3$ \\ $\mathbf{a}^{(1)}_{3}.\mathbf{w}_{\mathsf{M}}^{(1),3}+s_4$ &$\mathbf{a}^{(2)}_{3}.\mathbf{w}_{\mathsf{E}}^{(2),3}+s_6$ & $\mathbf{a}^{(3)}_{3}.\mathbf{w}_{\mathsf{S}}^{(3),3}+s_8$ \\ $\mathbf{a}^{(1)}_{4}.\mathbf{w}_{\mathsf{M}}^{(1),4}+s_5$ &$\mathbf{a}^{(2)}_{4}.\mathbf{w}_{\mathsf{E}}^{(2),4}+s_7$ & $\mathbf{a}^{(3)}_{4}.\mathbf{w}_{\mathsf{S}}^{(3),4}+s_9$ \\ \hline \end{tabular}} \end{table} Obtaining the corresponding answers, the user can retrieve the message $W^{(\mathit{\mathsf{M}, \mathsf{E}, \mathsf{S}})}=w^{\mathsf{MES}}_{1}||w^{\mathsf{MES}}_{2}||w^{\mathsf{MES}}_{3}$ correctly, because: \begin{align} w_1^{\mathsf{MES}}= \mathbf{a}^{(3)}_{1}.\mathbf{w}_{\mathsf{S}}^{(3),1}+s_1-(\mathbf{a}^{(1)}_{1}.\mathbf{w}_{\mathsf{M}}^{(1),1}+s_1),\\ w_2^{\mathsf{MES}}= \mathbf{a}^{(2)}_{1}.\mathbf{w}_{\mathsf{E}}^{(2),1}+s_2-(\mathbf{a}^{(1)}_{2}.\mathbf{w}_{\mathsf{M}}^{(1),2}+s_2),\\ w_3^{\mathsf{MES}}= \mathbf{a}^{(3)}_{2}.\mathbf{w}_{\mathsf{S}}^{(3),2}+s_3-(\mathbf{a}^{(2)}_{2}.\mathbf{w}_{\mathsf{E}}^{(2),2}+s_3). 
\end{align} The user gains no information about the other messages, since we have used an independent part of common randomness $\mathcal{C}$, i.e., $s_i$, for each linear combination that includes the other messages. The user attributes privacy is also preserved. The reason follows. Server $1$ verifies $\mathsf{M}$, and the user wants to hide his other attributes from Server~1, i.e., $\mathsf{E}$ and $\mathsf{S}$. (i) The structure of message vectors requested from Server $1$, $\{\mathbf{w}_{\mathsf{M}}^{(1),i}:\forall i \in [4]\}$, is independent of the second and the third attributes of the user, since it is composed of all message vectors that have two attributes in common and one of these common attributes is $\mathsf{M}$. (ii) The index of chunks of messages is determined by applying the random permutation $\mathcal{P}$ on the index of message chunks, so the index of messages reveals no information about the attributes of the user. (iii) The elements of the coefficients vectors, $a_i^{(1)}$, $\forall i \in [4]$, have an independent uniform distribution on $\{0, 1\}$, and reveal no information about the attributes of the user. Therefore, the privacy of the user is preserved in Server $1$. A similar argument can be applied in Servers $2$ and $3$. The retrieval rate of DAPAC is $R=\frac{L}{12\times \frac{L}{3}}=\frac{1}{4}=\frac{1}{2K}$. \label{ex_poly_3} To show the general achievable algorithm, we need to define the concept of \emph{type} of messages first. \begin{Definition} Consider $V_1$ as a vector of messages where $J\geq 1$, and $\forall j \in [J]$, $\mathbf{v}^{(j)}$ is the access policy of each message and $i_j$s are the message indices: $${V_1}=(w^{\mathbf{v}^{(1)}}_{i_1}, ..., w^{\mathbf{v}^{(J)}}_{i_J}).$$ We define the type of messages in $V_1$, as a set $T(V_1)$, composing of the access policy of the messages in $V_1$. Thus, \begin{align} T(V_1) = \{\mathbf{v}^{(1)}, \mathbf{v}^{(2)}, ..., \mathbf{v}^{(J)}\}. \end{align} \end{Definition} \begin{Definition} Two vector of messages like $V_1$ and $V_2$ are of the same type if and only if, \begin{align} T(V_1) = T(V_2). \end{align} \end{Definition} \begin{Definition} \label{def_4} In Server $n$, $n \in [N]$, for each $k \in [K]$, first cast the set $\mathcal{V}_n$ into a list $\Tilde{\mathcal{V}_n}$:$\Tilde{\mathcal{V}_n} =$ List$(\mathcal{V}_n)$, then for $j\neq n$ define $U^{(n)}$, with entries \begin{align} U^{(n)}(k, j) := \{\mathbf{v} = (v_1, ..., v_N)\in \mathcal{V}^{N}| v_n = v_n^*, v_j = \mathcal{\Tilde{V}}_j(k)\}. \end{align} \end{Definition} \begin{algorithm}[b] \caption{Initializing Algorithm for an $(N, K)$ DAPAC} \label{alg_ini_poly} \begin{algorithmic}[1] \STATE Consider an $(N,K)$ DAPAC system with $N\geq2$, $K\geq 2$.\nolinebreak[4] \STATE Split each message into $\frac{N(N-1)}{2}$ equal chunks. \STATE For each subset of $K^{N-2}$ messages that have two attributes in common, assign an independent part of common randomness $\mathcal{C}$, e.g., $s_i$ with length $\frac{L}{\frac{N(N-1)}{2}}$. \end{algorithmic} \end{algorithm} The achievable DAPAC scheme comprises three algorithms; Initializing algorithm, user-side algorithm, and server-side algorithm. The initializing algorithm is run by the operator of the system or the servers themselves and is described in Algorithm \ref{alg_ini_poly}. 
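Before turning to the user-side and server-side algorithms, the motivating example above can be replayed numerically. The following short Python sketch is purely illustrative (it assumes, for simplicity, that every chunk is a random 32-bit string and that both $+$ and $-$ act as bitwise XOR over $\mathrm{GF}(2)$; all variable names are ours); it checks that the three differences listed after Table~\ref{table} indeed recover $w_1^{\mathsf{MES}}$, $w_2^{\mathsf{MES}}$ and $w_3^{\mathsf{MES}}$.
\begin{verbatim}
import random

random.seed(1)
rnd = lambda: random.getrandbits(32)        # one 32-bit message chunk

# chunks w[(v1,v2,v3)][i], i = 1,2,3, for all eight access policies
policies = [(a, b, c) for a in "MP" for b in "EC" for c in "SF"]
w = {v: {i: rnd() for i in (1, 2, 3)} for v in policies}
s = {i: rnd() for i in range(1, 10)}        # common randomness s_1,...,s_9

def combo(a, vec):                          # binary combination over GF(2)
    out = 0
    for coeff, chunk in zip(a, vec):
        out ^= coeff * chunk
    return out

bit = lambda: random.getrandbits(1)
a11, a12, a22 = (bit(), bit()), (bit(), bit()), (bit(), bit())
a21 = (a12[0] ^ 1, a12[1])                  # a^(2)_1 = a^(1)_2 + (1,0)
a31 = (a11[0] ^ 1, a11[1])                  # a^(3)_1 = a^(1)_1 + (1,0)
a32 = (a22[0], a22[1] ^ 1)                  # a^(3)_2 = a^(2)_2 + (0,1)

MES, MCS, MEF, PES = ("M","E","S"), ("M","C","S"), ("M","E","F"), ("P","E","S")
# the six answers of the request table that are used for decoding
A11 = combo(a11, (w[MES][1], w[MCS][1])) ^ s[1]
A12 = combo(a12, (w[MES][2], w[MEF][1])) ^ s[2]
A21 = combo(a21, (w[MES][2], w[MEF][1])) ^ s[2]
A22 = combo(a22, (w[PES][1], w[MES][3])) ^ s[3]
A31 = combo(a31, (w[MES][1], w[MCS][1])) ^ s[1]
A32 = combo(a32, (w[PES][1], w[MES][3])) ^ s[3]

assert A31 ^ A11 == w[MES][1]               # recovers w_1^{MES}
assert A21 ^ A12 == w[MES][2]               # recovers w_2^{MES}
assert A32 ^ A22 == w[MES][3]               # recovers w_3^{MES}
print("recovered W^(M,E,S):", w[MES][1], w[MES][2], w[MES][3])
\end{verbatim}
The same cancellation pattern, namely that the two answers of an aligned pair differ only in the coefficient of the desired chunk while the common randomness cancels, is what drives the general scheme described next.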
When a user with attribute vector $\mathbf{v}^*$ wants to retrieve his related message, he sends queries to the $N$ servers and requests for linear combinations of messages to retrieve the message with access policy $\mathbf{v}^*$. The user-side algorithm is described in Algorithm~\ref{alg_us_poly}. The server receives a query, it verifies whether the user is an authorized user and whether only one linear combination from each type of messages exists in the query. If both hold, the server responds to the query based on the server-side algorithm, described in Algorithm \ref{alg_ss_poly}. \begin{algorithm}[tb] \caption{User Side Algorithm of an $(N, K)$ DAPAC} \label{alg_us_poly} \begin{algorithmic}[1] \STATE For a user with attribute vector $\mathbf{v}^*=(v^*_1, v^*_2, ..., v^*_N)$: \STATE $Q^{\mathbf{v}^*}_n= \{\}$. \STATE Permute the index of different messages chunks with a private and uniform random permutation $\mathcal{P}$. \FOR {$n \in [N]$} \STATE $\beta = 1$. \FOR {$k \in [K]$} \FOR {$j \in [N]\setminus n$} \STATE $\mathbf{w}_{v_n^*}^{(n),\beta}$ = A vector of messages with access policies given in $U^{(n)}(k, j)$. \IF {$\exists m \in [n-1]$ and $\exists \alpha \in [K(N-1)]$ such that $T(\mathbf{w}_{v_n^*}^{(n),\beta}) == T(\mathbf{w}_{v_m^*}^{(m),\alpha})$} \STATE Set $\mathbf{a}^{(n)}_{\beta}$ and the index of chunks in $\mathbf{w}_{v_n^*}^{(n),\beta}$ same as the ones in $\mathbf{w}_{v_m^*}^{(m),\alpha}$. \STATE Set the order of messages in $\mathbf{w}_{v_n^*}^{(n),\beta}$ the same as the order of messages in $\mathbf{w}_{v_m^*}^{(m),\alpha}$. \STATE $\gamma$ = Index of the desired message in $\mathbf{w}_{v_n^*}^{(n),{\beta}}$. \STATE $\mathbf{a}^{(n)}_{\beta}(\gamma) = \mathbf{a}^{(n)}_{\beta}(\gamma)\oplus1$. \ELSE \STATE Select $\mathbf{a}_{\beta}^{(n)}= ({a}_{{\beta}, 1}^{(n)}, ..., {a}_{{\beta}, K^{N-2}}^{(n)})$ with i.i.d elements and uniform distribution $\{0, 1\}$. \STATE Assign new indices for the messages in $\mathbf{w}_{v_n^*}^{(n),{\beta}}$. \ENDIF \STATE $Q^{\mathbf{v}^*}_n = Q^{\mathbf{v}^*}_n \cup (\mathbf{a}^{(n)}_{\beta}, \mathbf{w}_{v_n^*}^{(n),{\beta}})$. \STATE $\beta = \beta+1$. \ENDFOR \ENDFOR \STATE Use $Q^{\mathbf{v}^*}_n$ to request from Server $n$. \ENDFOR \STATE Using received answers ($A^{\mathbf{v}^*}_n$, $n\in[N]$), Compute $W^{\mathbf{v}^*}$. \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{Server Side Algorithm of an $(N, K)$ DAPAC} \label{alg_ss_poly} \begin{algorithmic}[1] \STATE In Server~$n$: verify attribute $n$ of the user, i.e., $v^*_n$. \STATE Set $A^{\mathbf{v}^*}_n=\{\}$ and Type$({i})=0$, $\forall i \in U^{(n)}$. \FOR {$(\mathbf{a}^{(n)}_{\beta}, \mathbf{w}_{v_n^*}^{(n),{\beta}}) \in Q^{\mathbf{v}^*}_n$} \IF {{The attribute $n$ of all messages in $\mathbf{w}_{v_n^*}^{(n),{\beta}}$ is $v^*_n$}, and Type$(T(\mathbf{w}_{v_n^*}^{(n),{\beta}}))==0$} \STATE $s_i$ = Part of $\mathcal{C}$ assigned to the type of messages in $\mathbf{w}_{v_n^*}^{(n),{\beta}}$. \STATE $A^{\mathbf{v}^*}_n = A^{\mathbf{v}^*}_n \cup \mathbf{a}^{(n)}_{\beta}. \mathbf{w}_{v_n^*}^{(n),\beta}+ s_i$. \STATE Type$(T(\mathbf{w}_{v_n^*}^{(n),{\beta}}))=1$. \ELSE \STATE $A^{\mathbf{v}^*}_n$= \{\}. \STATE Break. \ENDIF \ENDFOR \STATE Send $A^{\mathbf{v}^*}_n$ to the user. 
\end{algorithmic} \end{algorithm} To complete the proof of Theorem~\ref{thoerem_1}, in Section~\ref{proof}, we compute the rate of the proposed scheme and prove that the proposed achievable scheme satisfies the access control and privacy constraints in \eqref{correctness}, \eqref{data_sec_com}, and \eqref{poa} for all $\mathbf{v}^*\in \mathcal{V}^N$. \section{Proofs} \label{proof} \subsection{Proof of Theorem \ref{thoerem_1}} \label{theorem_1_proof} In the proposed DAPAC scheme, we split each message into $\frac{N(N-1)}{2}$ equal chunks, and thus each message chunk has length $\frac{2L}{N(N-1)}$ bits. When the user commits $v^*_n$, he gains access to $K^{N-1}$ messages from Server $n$. However, we download these messages in the form of message vectors of length $K^{N-2}$ whose messages, in addition to $v^*_n$, share one more attribute among the other $N-1$ attributes (to satisfy the privacy constraint \eqref{poa}). So the number of linear combinations downloaded from Server~$n$ is equal to $\frac{K^{N-1}(N-1)}{K^{N-2}}$, and the total download from Server~$n$ becomes \begin{align} D_n=\frac{K^{N-1}(N-1)}{K^{N-2}}\cdot\frac{2L}{N(N-1)}=\frac{2LK}{N}. \end{align} Due to the symmetry between the servers in the scheme, the total download from the $N$ servers is \begin{align} D_{t}=ND_n=2LK, \label{d_t_poly} \end{align} which is used to retrieve a message with length $L$ bits. So the retrieval rate of the scheme is \begin{align} R=\frac{L}{2LK}=\frac{1}{2K}. \end{align} From \eqref{d_t_poly} and \eqref{download_comp}, by noting that each downloaded equation has length $\frac{2L}{N(N-1)}$ bits, the total number of equations downloaded in this scheme, which represents the download complexity, is equal to \begin{align} DC=\frac{2LK}{\frac{2L}{N(N-1)}}=KN(N-1), \end{align} so the download complexity is $O(KN^2)$. To complete the achievability proof, it is required to prove that the proposed scheme satisfies the access control and privacy constraints. 1) \textbf{Access Control:} We prove that the correctness and data secrecy constraints, \eqref{correctness} and \eqref{data_sec_com}, respectively, are satisfied. \textbf{Correctness:} Suppose that the user with attribute vector $\mathbf{v}^*$ requests the chunks of message $W^{\mathbf{v}^*}$ from all servers. In Algorithm~\ref{alg_ini_poly}, we split each message into $\frac{N(N-1)}{2}$ equal chunks, so it is required to download $\frac{N(N-1)}{2}$ different chunks of $W^{\mathbf{v}^*}$. For each $n, m \in [N]$, $n\neq m$, based on Definition~\ref{def_4}, $\exists k_n, k_m \in [K]: \Tilde{\mathcal{V}}_m(k_m)= v_m^*, \Tilde{\mathcal{V}}_n(k_n)= v_n^*$, so $U^{(n)}(k_m,m)= U^{(m)}(k_n,n)$. Without loss of generality, suppose $m>n$; then in Algorithm~\ref{alg_us_poly} the condition in line 9 becomes true, and the user downloads two linear combinations of messages with access policies given in $U^{(n)}(k_m,m)$ in Servers $n$ and $m$ with aligned interference, so the user can subtract these two linear combinations and retrieve one chunk of the desired message. This argument holds for each of the ${N \choose 2}$ pairs of servers. So the user can retrieve $\frac{N(N-1)}{2}$ different chunks of the desired message $W^{\mathbf{v}^*}$ (based on line 16 of Algorithm~\ref{alg_us_poly}); therefore, we have \begin{align} H(W^{\mathbf{v^*}}| A_1^{\mathbf{v^*}}, ..., A_N^{\mathbf{v^*}}, (Q_1^{\mathbf{v^*}}, v^*_1), ..., (Q_N^{\mathbf{v^*}}, v^*_N), \mathcal{P})=0, \end{align} and the correctness of the scheme is guaranteed.
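As a complementary check on this counting argument, the sketch below (our own illustration; attribute values, chunk indices, and the random permutation are abstracted away, and the starred value of every attribute is arbitrarily taken to be index $0$) enumerates the classes $U^{(n)}(k,j)$ of Definition~\ref{def_4} for several $(N,K)$ pairs. It confirms that each server answers $K(N-1)$ linear combinations, that exactly ${N \choose 2}$ classes are queried at two servers (one aligned pair, hence one recovered chunk, per pair of servers), and that the resulting rate equals $\frac{1}{2K}$, since every downloaded combination has the same length as one chunk.
\begin{verbatim}
def dapac_counts(N, K):
    # a class U^(n)(k, j) is identified by its two fixed (position, value)
    # pairs; the user's own (starred) value at every position is taken as 0
    queried = {}                       # class -> set of servers answering it
    per_server = [0] * N
    for n in range(N):
        for j in range(N):
            if j == n:
                continue
            for k in range(K):
                cls = frozenset({(n, 0), (j, k)})
                queried.setdefault(cls, set()).add(n)
                per_server[n] += 1
    aligned = sum(1 for srv in queried.values() if len(srv) == 2)
    total = sum(per_server)
    rate = (N * (N - 1) // 2) / total  # chunks needed / combinations downloaded
    return per_server, aligned, total, rate

for N, K in [(2, 2), (3, 2), (4, 3), (5, 4)]:
    per_server, aligned, total, rate = dapac_counts(N, K)
    assert all(c == K * (N - 1) for c in per_server)
    assert aligned == N * (N - 1) // 2          # one aligned class per server pair
    assert abs(rate - 1 / (2 * K)) < 1e-12      # matches R = 1/(2K) of Theorem 1
    print(N, K, total, rate)
\end{verbatim}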
\textbf{Data Secrecy:} First, we prove the data secrecy when the user with attribute vector $\mathbf{v}^*$ sends queries for the message with access policy $\mathbf{v}^*$ to all servers. Next, we prove that if the user sends a query for a message with a different access policy even to one of the servers, then he cannot retrieve the message $W^{\mathbf{v}^*}$ correctly. Without loss of generality, suppose that the indices of common randomness variables used for the types of messages that comprise $W^{\mathbf{v}^*}$, are in the range of $[\frac{N(N-1)}{2}]$; So, we have \begin{align} &I(\mathcal{W}\backslash W^{\mathbf{v}^*}; A_1^{\mathbf{v}^*}, ..., A_N^{\mathbf{v}^*}, (Q_1^{\mathbf{v}^*}, v^*_1), ..., (Q_N^{\mathbf{v}^*}, v^*_N), \mathcal{P}|{W}^{\mathbf{v}^*})\nonumber\\ &=I(\mathcal{W}\backslash W^{\mathbf{v}^*}; \mathbf{a}^{(1)}_1.\mathbf{w}_{v_1^*}^{(1),1}+s_1,..., \mathbf{a}^{(N)}_{N-1}.\mathbf{w}_{v_N^*}^{(N),N-1}+s_{\frac{N(N-1)}{2}}, ..., \mathbf{a}^{(N)}_{K(N-1)}.\mathbf{w}_{v_N^*}^{(N),K(N-1)}\nonumber\\ &+s_{N(N-1)(K-\frac{1}{2})}, (Q_1^{\mathbf{v}^*}, v^*_1), ..., (Q_N^{\mathbf{v}^*}, v^*_N), \mathcal{P}|{W}^{\mathbf{v}^*})=0,\label{data_sec_poly} \end{align} where \eqref{data_sec_poly} follows from the independence of common randomness, messages, and queries. So the user gains no information about the messages with different access policies. Now, we show that if the user sends queries for a message rather than $W^{\mathbf{v}^*}$, then the correctness constraint is violated. Without loss of generality, suppose that the user with attribute vector $\mathbf{v}^*$ sends a query for message $\mathbf{\bar{v}}$ to server $N$. We consider two possible cases below: (i) Except the attribute $v_N^*$, two vectors $\mathbf{\bar{v}}$ and $\mathbf{v^*}$ have no other attribute in common. Then in server $N$, the coefficients used for the vectors of messages that include the message with access policy $\mathbf{v^*}$ are random, and the user cannot use these equations to retrieve useful chunks of the message $W^{\mathbf{v}^*}$. (ii) In addition to the attribute $v_N^*$, two vectors $\mathbf{\bar{v}}$ and $\mathbf{v^*}$ have at least one common attribute. Then linear combinations that include messages with access policies $\mathbf{v^*}$ and $\mathbf{\bar{v}}$, cannot be used to retrieve a useful chunk of the message $W^{\mathbf{v}^*}$. Because the coefficients of these linear combinations are tailored to the retrieval of message $W^{\mathbf{\bar{v}}}$. From the above two cases, we conclude that if the user with attribute vector $\mathbf{v}^*$ sends a query for a message with access policy $\mathbf{\bar{v}}$, then the user cannot retrieve $W^{\mathbf{v}^*}$, completely. This completes the proof of access control constraints. 2) \textbf{Privacy:} From \eqref{poa}, to preserve the privacy of the other attributes of the user, it is required that $\forall n \in [N]$: \begin{align} I(\{v^*_i:i\in [N],i\neq n\};Q_n^{\mathbf{v}}|v^*_n,\mathcal{C},\mathcal{W})=0, \end{align} for $\mathbf{v}=(v_1, ..., v_n^*, ..., v_N)$. In the achievable scheme, there are three features below: (i) The structure of queries from Server $n$ is the same for all users with attribute $v_n^*$. In fact, all users with attribute $v_n^*$ should download $K(N-1)$ linear combinations of messages with access policies given in $U^{(n)}(k,j)$, for each $k \in [K]$ and $j \in [N]\setminus n$. (ii) The index of messages are specified after using a uniform and random permutation $\mathcal{P}$ on the indices. 
(iii) For each $\beta \in [K(N-1)]$, each element of the coefficient vectors $a_{\beta}^{(n)}$ has an independent uniform distribution on $\{0, 1\}$. Therefore, since the structure of the equations is fixed and the coefficient vectors and indices have uniform distributions, each server learns nothing about the attributes of the user, except the one exposed to it, and the privacy of the other attributes of the user is preserved. \subsection{Proof of Theorem \ref{thoerem_2}} \label{theorem_2_proof} In the proposed DAPAC scheme, an independent part of the common randomness $\mathcal{C}$ (with length $\frac{L}{\frac{N(N-1)}{2}}$ bits) is assigned to each set of $K^{N-2}$ message chunks whose access policies are distinct but have two attributes in common. The number of such sets of cardinality $K^{N-2}$, consisting of messages with different access policies and two attributes in common, is \begin{align} {N \choose 2}K^2=\frac{N(N-1)K^2}{2}. \end{align} Since we use independent randomness for different types of messages, we have the following lower bound on the amount of common randomness $\mathcal{C}$: \begin{align} H(\mathcal{C})\geq\frac{N(N-1)K^2}{2}\cdot\frac{L}{\frac{N(N-1)}{2}}=K^2L, \end{align} which completes the proof of Theorem~\ref{thoerem_2}. \bibliographystyle{ieeetr}
\section{Introduction} Computation of classical effective lagrangians that represent motions of wave packets in theories involving field-theoretic dispersion relations with Lorentz-violating corrections has been of considerable interest in the recent literature. Specifically, the Standard Model Extension (SME) provides a self-consistent framework that leads to physically viable dispersion relations that incorporate effects due to possible Lorentz violation in theories underlying the standard model. Classical lagrangians arising from these SME dispersion relations have been computed exactly in relatively simple algebraic form only for some subsets of the parameters appearing in the general model. These lagrangians provide a tool for computing classical particle trajectories in a curved-spacetime background when the background tensors are promoted to space-time dependent forms that vary slowly over space and time. The minimal Standard Model Extension (SME) formulated in flat spacetime involves constant background fields that couple to the known particles \cite{kps,ck} through power-counting, renormalizable, gauge invariant terms. Extension of the minimal SME theory has developed in several different directions including the gravity sector\cite{kosgrav} and nonminimal terms \cite{kosmewes}. Wave packets can be constructed that have specific group velocities which lead to classical particle trajectories given specific branches of the dispersion relations \cite{brettcol}. These paths also follow from Lagrangians computed using a Legendre transformation of the implicitly defined hamiltonians in the dispersion relations \cite{kr}. Previous work on SME lagrangians includes computations involving momentum-dependent couplings \cite{colmcd1}, non-minimal terms \cite{shreck} and photons \cite{shreck2}. Much of this work has been related to Finsler geometry \cite{shen1} using either Wick rotations or restrictions to certain subspaces \cite{kosfins, kosrustso}, or in other contexts \cite{erasmo,berger,snow, zheng, vacaru, bonder, bag, gomez, yan, silva, thornberg}, and in analogous classical systems \cite{ralph}. The general procedure of computing effective classical Lagrangians leads to a covariant Lagrangian when a generalized parametrization is adopted for the four-velocity. Computation of the associated relativistic hamiltonian yields zero, as is well-known in standard covariant theories. This prevents the inversion of the expression for $p^\mu(u)$ into a formula for $u^\mu(p)$ in a manifestly re-parametrization invariant way and inhibits the natural use of hamilton's equations. Use of the extended hamiltonian formalism \cite{quantgauge} in which the dispersion relation is incorporated into the action using a Lagrange multiplier yields a relativistic formulation in which hamilton's equations follow naturally. Singular points occur in both the extended lagrangian and hamiltonian functions when the associated algebraic varieties fail to be smooth manifolds. These singular points in the lagrangian and hamiltonian varieties are seen to occur at different points along the particle trajectories. This means that the full theory involving both varieties can be given a manifold structure and the dual momentum and velocity variables desingularize each other naturally. In this paper, the CPT-violating, spin-dependent $b^\mu$ parameter will be used to illustrate the various formulas and definitions as they arise. 
The singular points are identified and the physics of the desingularization is described for a simple example of a particle trajectory in a constant gravitational field. This case is then generalized to a larger class of bipartite SME dispersion relations for which the algebraic manipulations are still simple. \section{Dirac Equation for $b$-parameter} The Dirac equation for a fermion in the presence of a CPT- and Lorentz-violating background vector field $b^\mu$ in the minimal SME is \cite{ck} \begin{equation} \left(i \gamma^\mu \partial_\mu - m - b_\mu \gamma_5 \gamma^\mu \right) \psi = 0. \label{deq} \end{equation} The corresponding dispersion relation in momentum space can be written as $R_T(p) = R_+(p) R_-(p) = 0$, where \begin{equation} R_\pm(p) = {1 \over 2} \left( p^2-m^2-b^2 \pm 2 \sqrt{(b \cdot p)^2 - b^2 p^2}\right), \label{hamiltonian} \end{equation} providing an observer-covariant (but non-unique) factorization of the dispersion relation. There is a map to a related CPT- and Lorentz-violating photon dispersion relation, $b^\mu \rightarrow k_{AF}^\mu$, and $m^2 \rightarrow m_\gamma^2 - b^2$, which is analyzed in detail in \cite{colnoord}. The plane-wave spinors $u_\pm(p)$ are particle solutions to the (off-shell) Dirac equation as \begin{equation} (\not p - m - b_\mu \gamma_5 \gamma^\mu ) \psi_\pm(x) = 2 R_\pm(p) u_\pm( p) e^{-i p \cdot x}. \end{equation} The above equation reduces upon setting $u_\pm = (\not p + m - \gamma_5 \not b) w_\pm$ to \begin{equation} \epsilon_{\mu\nu\alpha\beta}\sigma^{{\mu\nu}} p^\alpha b^\beta w_\pm = \pm 2 \sqrt{(b \cdot p)^2 - b^2 p^2} w_\pm, \label{sols} \end{equation} which is the condition that $w_\pm$ are eigenstates of the Pauli-Lubanski vector contracted with $b^\mu$. \section{Classical Lagrangian for $b$-parameter} The classical lagrangian corresponding to the field-theoretic term in Eq.\ (\ref{deq}) is calculated by performing a Legendre transformation of the associated dispersion relation in Eq.\ (\ref{hamiltonian}) and introducing an arbitrary parameterization $\lambda$, with result \cite{kr} \begin{equation} {\cal{L_\pm}} = - m \sqrt{u^2} \mp \sqrt{(b \cdot u)^2 - b^2 u^2} , \label{lagrangian} \end{equation} where $u^\mu = d x^\mu / d \lambda$, the invariant product was taken to be flat Minkowskian, and $b^\mu$ is a constant background vector field with components much smaller in magnitude than $m$ so that the theory is in a concordant frame \cite{koslehnert}. This expression may be generalized to curved-spacetime backgrounds by promoting the constant $b^\mu$ fields to slowly-varying vector fields and the Minkowski product to a covariant product (i.e., $b \cdot u \rightarrow g_{\mu\nu}(x)b^\mu(x) u^\nu$). The details of the gravity sector SME construction are described in \cite{kosgrav, kosjay}. Physically, the two solutions ${\cal L}_+$ and ${\cal L}_-$ have some relation to the helicity solutions in Eq.\ (\ref{sols}), although the specific map is not immediately obvious. The velocity and momentum variables are connected by the definitions $u^j/u^0 = - \partial p_0 / \partial p_j$ and $p^\mu = -\partial {\cal L} / \partial u_\mu$. When one restricts to regions away from the singular points, there are two disjoint sheets $R_\pm(p)=0$ which relate the momenta and velocities on the energy surface in a unique way. Note that $u^0(\lambda)$ is introduced as an arbitrary function adjustable through re-parametrization to put the lagrangian into manifestly covariant form.
This means that there is a gauge-type symmetry that must be fixed to compute the four-velocity. The Lagrangian is found by solving $R_T(p)=0$ together with setting the total derivative of $R_T$ with respect to $p^j$ equal to zero. This includes chaining the derivative through the implicit dependence of $p^0(\vec p)$ due to the constraint $R_T(p) = 0$. This procedure results in an eighth-order polynomial equation $P({\cal L}) =0$ which can be factored. The resulting two solutions given in Eq.\ ({\ref{lagrangian}}) are the ones that reduce to the correct classical form as $b^\mu \rightarrow 0$. Since the lagrangians in Eq.\ (\ref{lagrangian}) are found through factorization of an eighth-order polynomial, it is unclear which of the ${\cal L}_\pm$ functions are in correspondence with the sheets $R_\pm = 0$. The extended hamiltonian formalism that follows will solve the issue of non-invertibility of $p(u)$ and provide an explicit connection between the signs chosen in Eqs.\ ({\ref{hamiltonian}}) and ({\ref{lagrangian}}). \section{New covariant variables for dispersion relation} A naturally defined observer-covariant four-vector (where the derivatives are taken as if $p^0$ and $p^j$ were independent, the so-called `off-shell' derivatives) is given by the expression \begin{equation} w^\mu_\pm = {1 \over m}{\partial R_\pm \over \partial p_\mu} = {p^\mu \over m} \pm {(p\cdot b) b^\mu -b^2 p^\mu \over m \sqrt{(b \cdot p)^2 - b^2 p^2}}. \end{equation} A short calculation yields the remarkably simple relation \begin{equation} R_\pm = m^2 (w_\pm^2 - 1), \end{equation} indicating that the dispersion relation takes the conventional form in terms of these new variables. This relation was first noticed in \cite{colnoord}, where the massive CPT-violating photon dispersion relation takes a similar form. Inverting for the momentum gives \begin{equation} p^\mu = m w^\mu_\pm \mp \epsilon {(w_\pm \cdot b) b^\mu - b^2 w_\pm^\mu \over \sqrt{(b \cdot w_\pm)^2 - b^2 w_\pm^2}}, \end{equation} where $\epsilon = \sign \left( \sqrt{(b \cdot p)^2 - b^2 p^2} \mp b^2 \right)$ is a sign factor required near the singular points to obtain the correct relation. This looks very similar to the expression for $p^\mu$ in terms of $u^\mu$ computed using the Lagrangian in Eq.\ ({\ref{lagrangian}}), \begin{equation} p^\mu = - {\partial {\cal L}_\pm \over \partial u_\mu} = {m u^\mu \over \sqrt{u^2}} \pm {(u \cdot b) b^\mu - b^2 u^\mu \over \sqrt{(b \cdot u)^2 - b^2 u^2}}. \end{equation} Note that the $\pm$ signs are flipped due to the reversed notation used for ${\cal L}_\pm$. In fact, one can see that the two four-velocity parameters $u^\mu$ and $w^\mu$ are related by choosing the explicit parametrization for $u^\mu$ such that $u^2=1$. This can also be seen by application of the chain rule to the derivative of the dispersion relation with respect to $p^j$ used to compute the Lagrangian, \begin{equation} {dR_T \over dp_j} = {dR_+ \over d p_j} R_- + {dR_- \over d p_j}R_+ = 0. \end{equation} On-shell, on the sheet where $R_- = 0$ (away from the singular point so that $R_+ \ne 0$), we have \begin{equation} {dR_- \over d p^j } = {\partial R_- \over \partial p_0}{\partial p_0 \over \partial p^j} + {\partial R_- \over \partial p^j} = 0, \end{equation} or \begin{equation} {w_-^j \over w_-^0} = {u^j \over u^0}, \quad {\rm or} \quad w_-^\mu = {w_-^0 \over u^0} u^\mu = (e_-)^{-1} u^\mu, \end{equation} indicating that $w_-^\mu$ is in fact the velocity four-vector, up to some scalar multiple $e_-(\lambda)$.
An analogous equation holds for $w^j_+$, involving the introduction of another function $e_+(\lambda)$. Defining the Lagrangian as ${\cal L} = - p_\mu u^\mu$ and matching to Eq.\ (\ref{lagrangian}) fixes the relation $e_+ = e_- =\sqrt{u^2} $, at least away from the singular points. \section{Extended Hamiltonian Formalism} It is useful to `free up' the definition of $e(\lambda)$ as an auxiliary function to make the four components of momentum linearly independent using the extended hamiltonian formalism, originally due to Dirac \cite{quantgauge}. Doing so yields a modified form of the candidate \cite{fn1} action functionals as \begin{equation} S^*_\pm = - \int \left[ m e^{-1} u^2 \pm \sqrt{(b\cdot u)^2 - b^2 u^2} + {e \over m} R_\mp (p,x) \right ] d \lambda, \label{action} \end{equation} where $ e(\xi \lambda) = \xi e(\lambda)$ is a homogeneous function of degree one to ensure re-parametrization invariance of the modified action and $R_\mp(p,x)$ is an appropriate hamiltonian constraint function that vanishes when the equations of motion are satisfied. Note that we take $e_- = e_+ = e$ since this function can be interpreted as a metric on the world-line and should be the same for different spin particles if they are allowed to interact. By writing the action in the form \begin{equation} S^* = \int \left[ - p^\mu u_\mu - {\cal H}^* \right] d \lambda = \int L^* d \lambda, \end{equation} the extended hamiltonian can be identified as \begin{equation} {\cal H}^*_\pm = -{e \over m} R_\mp (p,x). \label{genham} \end{equation} Note that this hamiltonian is zero when the constraint is satisfied (`on-shell'), as is expected for relativistic systems that are generally covariant (re-parametrization invariance in this case...). If the constraint is written in terms of $R_\pm(p,x) = {m^2 \over 2} (w_\pm^2 - 1) = {m^2 \over 2 e^2} ( u^2 - e^2)$, then the lagrangian becomes \begin{equation} L_\pm^*[u^\mu,x,e] = -{m \over 2 e} u^2 \mp \sqrt{(b \cdot u)^2 - b^2 u^2} - {e m \over 2}. \end{equation} Variation of this lagrangian with respect to $e$ gives the condition $e = \sqrt{u^2}$, reducing to the original lagrangian of Eq.\ ({\ref{lagrangian}}) when $e$ is eliminated. Note that the functional form of the lagrangian is independent of the choice of $R_+(p)$ or $R_-(p)$ in Eq.\ (\ref{action}). The conjugate momenta are now \begin{equation} p^\mu ={\partial L^*_\pm \over \partial u_\mu} = {m u^\mu \over e} \pm {(u \cdot b) b^\mu - b^2 u^\mu \over \sqrt{(b \cdot u)^2 - b^2 u^2}}, \label{mom} \end{equation} which is now invertible for $u^\mu(p)$ as \begin{equation} u^\mu = {e \over m} \left( p^\mu \mp {(b \cdot p)b^\mu - b^2 p^\mu \over \sqrt{(b \cdot p)^2 - b^2 p^2}}\right), \label{vel} \end{equation} provided the determinant of the Hessian of $L^*$ with respect to the velocity is nonzero. The Hessian is computed as \begin{equation} h^{{\mu\nu}}_L = - {\partial^2 L^*_\pm \over \partial u_\mu \partial u_\nu} = {m \over e} \eta^{\mu\nu} \mp {b^2 \over ((b \cdot u)^2 - b^2 u^2)^{3/2}} T^{\mu\nu}(u), \end{equation} with \begin{equation} T^{\mu\nu} (u) = ((b \cdot u)^2 - b^2 u^2) \eta^{\mu\nu} + b^2 u^\mu u^\nu + u^2 b^\mu b^\nu - (b \cdot u)(b^\mu u^\nu + b^\nu u^\mu). \end{equation} The determinant is then computed by acting on a linearly independent basis of eigenvectors as \begin{equation} det(\eta \cdot h_L) = \left( {m \over e} \left[ {m \over e } \mp {b^2 \over \sqrt{(b \cdot u)^2 - b^2 u^2}}\right] \right)^2, \label{hessl} \end{equation} valid when $b$ and $u$ are not parallel. 
When $b$ and $u$ are parallel, the Lagrangian becomes independent of the Lorentz-violation parameter, producing singular behavior. An additional source of singular behavior is due to the vanishing of the determinant for the upper sign when $e$ happens to satisfy \begin{equation} e = {m \sqrt{(u \cdot b)^2 - b^2 u^2} \over b^2 }, \end{equation} which can happen for some physical values of $u$ when $b$ is time-like. Evaluation of the momentum at these points yields \begin{equation} p^\mu \rightarrow {(u \cdot b)b^\mu \over \sqrt{(u \cdot b)^2 - b^2 u^2}}, \end{equation} which is degenerate for some set of nonzero velocity four-vectors. For example, imposing the equations of motion fixes $e^2 = u^2$, and imposing the standard parametrization so that $u^2 = 1$ implies that the determinant vanishes for three-velocities satisfying \begin{equation} \gamma^2 (b^0 - \vec b \cdot \vec v)^2 = b^2(1 + {b^2 \over m^2}), \end{equation} where $\gamma = 1/\sqrt{1 - \vec v^2}$ denotes the standard relativistic factor. Solutions to this equation occur for values of the velocity of order $b$; for example, if $b^\mu = (b_0,0,0,0)$, then the Lagrangian is singular on the sphere determined by $| \vec v| = b_0 / \sqrt{m^2 + b_0^2}$, and $p^\mu = (\sqrt{m^2 + (b_0)^2},0,0,0)$. Note that the direction of $\vec v$ can be used to characterize the trajectory at points where the spatial momenta vanish. The corresponding extended hamiltonian functions are \begin{equation} {\cal H}^*_\pm = -{e \over 2m} \left(p^2 - m^2-b^2 \mp 2 \sqrt{(b \cdot p)^2 - b^2 p^2}\right), \end{equation} with Hessian matrix \begin{equation} h^{{\mu\nu}}_H = -{\partial^2 {\cal H}^* \over \partial p_\mu \partial p_\nu} = {e \over m}\left[ \eta^{\mu\nu} \pm {b^2 \over ((b \cdot p)^2 - b^2 p^2)^{3/2}} T^{\mu\nu}(p) \right], \label{hessh} \end{equation} with determinant \begin{equation} det(\eta \cdot h_H) = \left({e \over m}\right)^4 \left[ 1 \pm {b^2 \over \sqrt{(b \cdot p)^2 - b^2 p^2}}\right]^2, \end{equation} which vanishes on a set in momentum space complementary to the one in velocity space. This is useful since Hamilton's equations relate derivatives of the extended hamiltonian to the velocity covariantly as \begin{equation} {\partial {\cal H}^*_\pm \over \partial p_\mu} = - u^\mu, \quad {\partial {\cal H}^*_\pm \over \partial x^\mu} = \dot p_\mu. \label{hameq1} \end{equation} Note that the second equation becomes useful when the background metric differs from a flat Minkowskian one. Note also that it is crucial that the four components $p^\mu$ be varied independently in the proof that Hamilton's equations hold, which is now possible due to the inclusion of the auxiliary $e$ parameter into the theory. When the extended hamiltonian is expressed in terms of the velocity, it takes the conventional form \begin{equation} {\cal H}^*_\pm = - {m \over 2 e}(u^2 - e^2). \label{hamu} \end{equation} A corresponding expression for the lagrangian in terms of the momentum variables is \begin{equation} L^*_\pm = -{e \over 2 m}(p^2 + m^2 + b^2) . \end{equation} It is curious that both the extended hamiltonian and lagrangian take the conventional relativistic form when expressed in terms of the ``wrong'' variables. Note that the above formulas serve to define the theory as a one-to-one Legendre transformation provided a certain singular set at low velocities and momenta is avoided. Mathematically, this region corresponds to points where $D(p) \equiv \sqrt{(b \cdot p)^2 - b^2 p^2}$ and $D(u) \equiv \sqrt{(b \cdot u)^2 - b^2 u^2}$ fail to be in one-to-one correspondence.
These functions are related by \begin{equation} \epsilon_H D(p) - \epsilon_L {m \over e} D(u) = - b^2, \label{drel} \end{equation} where $\epsilon_L$ ($\epsilon_H$) is the sign chosen in $L^*_{(\epsilon_L = \pm)}$ ($H^*_{(\epsilon_H = \pm)}$). The Hessians are badly behaved when $D(p) \sim b^2$, indicating a lack of convexity in a small region near the points where the determinants vanish. Outside of this singular region, one is free to choose one of the $\pm$ signs in $L^*_\pm$ and use the corresponding extended hamiltonian ${\cal H}^*_\pm$, and the relation in Eq.\ (\ref{drel}) becomes \begin{equation} D(p) - {m \over e} D(u) = \mp b^2, \end{equation} and the equations relating the momentum and velocity given in Eqs.\ (\ref{mom}) and (\ref{vel}) are one-to-one, providing a well-defined Legendre transformation on an open convex subvariety of the solution space. This can be seen directly through the formula $(h_L)^{\mu \alpha} \cdot (h_H)_{\alpha \nu} = \delta^\mu_{~\nu}$, which follows directly from the chain rule. Within the singular region, it is not possible to use a single global sign choice to define the action, and some procedure is required to handle the signs of the functions appearing in Eq.\ (\ref{action}) more carefully. This topic is addressed next. \section{Behavior Near Singular Points} The determinants of the Hessian matrices in Eqs.\ (\ref{hessl}) and (\ref{hessh}) vanish when either $D(u) = \epsilon_L {e b^2 \over m}$ or $D(p) = \epsilon_H {b^2}$. In addition, when $det(\eta \cdot h_H) = 0$, the corresponding velocity function $D(u)$ vanishes and $det{(\eta \cdot h_L)}$ diverges to either $\pm \infty$. An important observation arising from Eq.\ (\ref{drel}) is that it is not possible for both $D(u)$ and $D(p)$ to simultaneously vanish (provided $b^2 \ne 0$). In order to handle the relative sign choices in Eq.\ (\ref{action}), the expressions for the extended lagrangian $L^*$ and hamiltonian ${\cal H}^*$ can be re-expressed in terms of the zero sets of the following polynomials (which define algebraic varieties): \begin{equation} f_L[L^*,u^\mu,e] = \left( L^* + {m \over 2 e} (u^2 + e^2) \right)^2 - D^2(u) = 0, \end{equation} and \begin{equation} f_H[{\cal H}^*,p^\mu,e] = \left( {\cal H}^* + {e \over 2 m}(p^2 - m^2 - b^2)\right)^2 - ({e \over m})^2 D^2(p) = 0. \end{equation} The gradients of these functions are nonzero provided $D(u) \ne 0$ and $D(p) \ne 0$, indicating that the corresponding varieties are smooth everywhere except at singular points where either $D$-function vanishes. Derivatives of $L^*$ and ${\cal H}^*$ and the corresponding Legendre transformation can therefore be defined implicitly on these varieties everywhere except at the singular points. At the singular points, the lagrangian variety can be formally blown up using an auxiliary set of variables as was demonstrated in \cite{desing} using the non-extended Lagrangian formalism. Here, it is demonstrated that the momentum-space variables can be used to parametrize the variety in velocity space near the singular point $D(u) = 0$, naturally desingularizing it. An $(n-2)$-dimensional sphere of momentum values degenerates to the same velocity value at the singular point, so by retaining this information, it is possible to define smooth paths on the Lagrangian variety through the singular points by observing that the momentum variables are continuous due to Hamilton's equations. A symmetric procedure can be used to handle paths going through the singular points $D(p) = 0$ on the hamiltonian variety.
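Before turning to an explicit example, we note that the algebraic relations above are simple to check numerically. The following minimal sketch (in Python, with the Minkowski signature $(+,-,-,-)$, the upper-sign branch of Eq.\ (\ref{mom}), the einbein fixed on-shell to $e = \sqrt{u^2}$, and purely illustrative values of $b^\mu$ and $u^\mu$) verifies that the resulting momentum lies on the sheet $R_- = 0$, that the relation $D(p) - (m/e)\,D(u) = -b^2$ holds away from the singular set, and that Eq.\ (\ref{vel}) inverts the momentum map.
\begin{verbatim}
# Numerical check of the b-parameter Legendre-transform relations (minimal sketch).
# Assumptions: metric signature (+,-,-,-); upper-sign branch of the momentum map;
# einbein e fixed on-shell to sqrt(u.u); the numerical values of b, u are illustrative.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, c: a @ eta @ c

m = 1.0
b = np.array([0.05, 0.0, 0.0, 0.02])    # small timelike background vector
u = np.array([1.3, 0.4, 0.1, -0.2])     # generic timelike four-velocity
e = np.sqrt(dot(u, u))                  # einbein from its equation of motion

D = lambda v: np.sqrt(dot(b, v)**2 - dot(b, b) * dot(v, v))

# momentum from the upper-sign relation p = (m/e) u + [(b.u) b - b^2 u] / D(u)
p = (m / e) * u + (dot(b, u) * b - dot(b, b) * u) / D(u)

# the paired dispersion-relation sheet R_-(p) vanishes on shell
R_minus = 0.5 * (dot(p, p) - m**2 - dot(b, b) - 2.0 * D(p))
print("R_-(p)                  =", R_minus)

# the relation D(p) - (m/e) D(u) = -b^2, valid away from the singular set
print("D(p) - (m/e) D(u) + b^2 =", D(p) - (m / e) * D(u) + dot(b, b))

# inversion: u = (e/m) ( p - [(b.p) b - b^2 p] / D(p) )   (sign factor = +1 here)
u_back = (e / m) * (p - (dot(b, p) * b - dot(b, b) * p) / D(p))
print("max |u - u(p)|          =", np.max(np.abs(u - u_back)))
\end{verbatim}
All three printed quantities vanish to rounding error; near the singular set the sign factor discussed above must be tracked explicitly.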
As an example, consider a particle moving vertically in a region of constant gravitational field near the surface of the Earth for which the metric is given by \begin{equation} d \tau^2 \approx \left(1 + {2 g} z \right) dt^2 - \left ( 1 - {2 g} z \right) dz^2, \end{equation} using clock time $t$ at the surface and the height from the surface $z \ll R$ as coordinates, with $b^\mu = (b^0,0,0,0)$, and constant $b^0$ in this coordinate system. Using the proper time as a parameter gives the Euler-Lagrange equation that follows from Eq.\ ({\ref{lagrangian}}) as \begin{equation} m {d v \over d t} \mp b^0 {d \over d t} \left( {\partial |v| \over \partial v} \right) = - mg, \end{equation} where the motion is taken to be non-relativistic and $v = dz/dt$ for motion along the vertical direction. Note that the singular point at $v=0$ is evident as the correction term is not defined there, but vanishes everywhere else. The geodesic through the singular point can be determined uniquely by examination of Hamilton's equation of motion in Eq.\ (\ref{hameq1}), which reduces to \begin{equation} \dot p_z \approx -mg \left( 1 \pm 2 {b^0 |p^z| \over m^2} \right), \end{equation} proving that the momentum variables remain continuous through the trajectory near the singular point. Examination of \begin{equation} p^z \approx mv \mp b^0 {\partial |v| \over \partial v}, \end{equation} demonstrates that the particle must transition from $L_\pm$ to $L_\mp$ as it passes through the singular point at $v=0$, as the term $\partial |v| / \partial v$ flips from $+1$ to $-1$ during the transition. Physically, this trajectory can be described as a particle with spin-up rising to its apex and falling again with spin remaining up. During this process, the velocity helicity changes sign in passing through the apex of the trajectory while the energy and momentum remain continuous. If, instead, the particle were to remain on the same lagrangian sheet, the spin would have to flip at the top, requiring a discontinuous change in the energy and momentum in violation of hamilton's equations. The corresponding singular point in momentum space occurs when $p^z = 0$, which happens at $v = b^0 / m$ when using $L_+$. In this case, the Euler-Lagrange equation requires the velocity to be continuous through the singular point, with the implication that the particle transitions from ${\cal H}_- = 0$ to ${\cal H}_+ = 0$ as it passes through this velocity. In this case, it is the momentum helicity that flips sign. This means that the action given in Eq.\ (\ref{action}) needs to be modified in the neighborhood of the singular point so that the appropriate hamiltonian is paired with the chosen lagrangian and vice versa. \section{Generalization to bipartite case} The above case of the $b^\mu$ parameter can be put into a compact form using $s_{\mu\nu} = b_\mu b_\nu - b^2 g_{\mu\nu}$, so that $D_s(p) = \sqrt{s_{\mu\nu} p^\mu p^\nu}$. It turns out that a class of easily solved Legendre transformations exists when the matrix $s^{\mu\nu}$ is arbitrary but still satisfies the special condition $s^2 = -\zeta s$ \cite{kosrustso}. The corresponding momentum space constraint is \begin{equation} {1 \over 4} (p^2 - m^2 - \zeta)^2 - s_{\mu\nu} p^\mu p^\nu = 0, \end{equation} which can be factored into the form $R_+ R_- = 0$, analogous to the $b$-case. This gives rise to the extended Hamiltonians \begin{equation} {\cal H}^*_\pm = -{e \over 2 m} \left( p^2 - m^2 - \zeta \mp \sqrt{p \cdot s \cdot p} \right).
\end{equation} Computation of the four-velocity gives \begin{equation} u^\mu = - {\partial {\cal H}^* \over \partial p_\mu} = {e \over m} \left( p^\mu \mp {(s \cdot p)^\mu \over \sqrt{p \cdot s \cdot p}} \right), \end{equation} which reproduces the expression in Eq.\ ({\ref{hamu}}) once the extended hamiltonian is rewritten in terms of the velocity. This simple formula gives some additional insight into why the bipartite form is so special: it leads to a conventional description of the system when written in terms of alternative variables. The extended Lagrangian becomes \begin{equation} L_\pm^*[u,x,e] = - {m \over 2 e} u^2 \mp \sqrt{u \cdot s \cdot u} - {m e \over 2} . \end{equation} Singular subspaces occur when either the momenta or the velocity vectors are killed by $s$. The relation in Eq.\ ({\ref{drel}}) generalizes to \begin{equation} \epsilon_H D_s(p) - \epsilon_L {m \over e} D_s(u) = - \zeta, \end{equation} where, again, $\epsilon_H$ and $\epsilon_L$ are the sign choices used in the extended hamiltonian and lagrangian functions. This relation implies that at the singular points where one of the $D$-functions vanishes, the other one is nonzero in a neighborhood of that point, provided $\zeta \ne 0$. This means that the momentum variables can be used to de-singularize the velocity variables and vice versa, as in the $b$-case. \section{Conclusion} Using the extended hamiltonian formalism, the classical mechanics implied by Lorentz-violating dispersion relations in the SME can be implemented using both the Euler-Lagrange and Hamilton's equations simultaneously. The formulation is manifestly covariant in that an einbein $e$ is introduced to free up the variations of the extended hamiltonian with respect to all four momentum components. In this formalism, the theory provides an explicit connection between the choice of lagrangian and the original energy surfaces from which it was derived, which allows for a physical interpretation of the states in terms of the original field-theoretic model. For example, in the $b$ case, it is the eigenstates of the Pauli-Lubanski operator contracted with $b$ that determine the energy surfaces and the associated lagrangian functions. In addition, the symmetric treatment of variables in velocity and momentum space allows for natural de-singularization when the momentum-space variables are used to parametrize the velocity-space singular points, and vice versa. As an added benefit, the particle trajectory equations can be formulated directly in momentum space, thereby removing the necessity to first convert to velocity-space lagrangians, which can be intractable algebraically in many situations. This may be particularly useful when considering interacting theories, as it is the total momentum that is conserved rather than the total velocity. Successful application of this formalism to non-bipartite SME dispersion relations remains an interesting open issue.
\section{Introduction} This paper reports on some recent progress that has been made in the analytical modeling of defect formation, far from threshold, in pattern forming physical systems. We will take a moment here to very briefly sketch the physical and mathematical background that motivates what is done in this paper. \smallskip The relevant class of pattern-forming physical systems to consider are those in which the spatial physical field can be described as planar and the first bifurcation from a homogeneous state, having arbitrary translational symmetry in the plane, produces a striped pattern which has only a discrete periodic symmetry in one direction. This \emph{symmetry-breaking} occurs at a critical threshold; above this threshold the pattern can deform and, further away, \emph{defects} can form. It is the desire to understand and model this process of defect formation that motivates our study. A good particular example of these kinds of physical systems is a high Prandtl number Rayleigh-B\'enard convection experiment. The critical threshold in this example is the critical Rayleigh number at which fluid convection is initiated from the sub-threshold homogeneous conducting state. The "striped pattern" here can be taken to be the horizontal cross-section of the temperature field at the vertical midpoint of the experimental cell in which \emph{convection rolls} have formed. Because of its periodic structure, the striped pattern can be described in terms of a periodic form function of a \emph{phase}, $\theta = \vec{k}\cdot\vec{x}$, where the magnitude of $\vec{k}$ is the wavenumber of the pattern and the orientation of $\vec{k}$ is perpendicular to the stripes. Here $\vec{x}=(x,y)$ is a physical point in the plane. Even though the striped pattern will deform far from threshold, over most of the field (and in particular away from defects) it can be locally approximated as a function of a well-defined phase, $\theta(\vec{x})$, for which a local wavevector can be defined as $\vec{k} = \nabla\theta$ which differs little from a constant vector unless one varies over distances on the order of many stripes in the pattern. This slowly-varying feature of pattern formation far from threshold motivates the introduction of a \emph{modulational} ansatz in the microscopic equations describing these physical systems from which an order parameter equation for the behavior of the phase can be formally derived. This was originally done by Cross and Newell \cite{CN}. These equations are variational and from our perspective it is advantageous to study their solutions by studying the behavior of the minimizers of the variational problem. The version of the variational problem that we study corresponds to the following energy functional on a given domain $\Omega$ with specified Dirichlet boundary values. \begin{equation} \label{eq:gl} {\mathcal{E}}^\mu(\Theta) = \mu \int_\Omega \left(\Delta_{\vec{X}} \Theta \right)^2 d\vec{X} + \frac{1}{\mu} \int_\Omega (1 - |\nabla_{\vec{X}} \Theta|^2)^2\, d\vec{X}\ , \end{equation} which is expressed in terms of \emph{slow} variables stemming from the modulational ansatz mentioned above: $\vec{X} = (X,Y) = \left(\mu x, \mu y\right); \Theta = \frac{\theta}{\mu}$. We refer to this functional as the \emph{regularized Cross-Newell} (RCN) \emph{Energy}. It consists of two parts: a non-convex functional of the gradient (the CN part) plus a quadratic functional of the Hessian matrix of $\Theta$, which is the regularizing singular perturbation. 
Without this regularization, the CN variational equations admit non-physical caustic formation. Instead, by studying the limit of minimizers of ${\mathcal{E}}^\mu$ as $\mu \to 0$, one may be able to identify the formation of a physical defect as a limiting jump discontinuity or other kind of singularity in the wavevector field associated to the $\mu$-indexed family of minimizing phase fields. For more details on what has been rather tersely outlined above, we refer the reader to \cite{EINP} where analytical results on the asymptotic limit of minimizers for RCN and their defects in certain geometries are also derived. See also \cite{ET} where further refinements and generalizations are developed. We further mention that the variational problem associated to (\ref{eq:gl}) also arises in other physical contexts (unrelated to pattern formation) where it is known as the \emph{Aviles-Giga energy} \cite{AG}. \medskip We now turn to the focus of this paper. The kinds of defects that are seen to arise far from threshold are not supported by asymptotic minimizers of (\ref{eq:gl}) if the class of functions over which one is varying is restricted to be single-valued phases. In particular, one can see for purely topological reasons that this restriction rules out \emph{disclinations} \cite{EINP}. In \cite{EINP2}, physical, numerical and experimental arguments are developed which make a strong case in support of the hypothesis that the correct order parameter model for the phase in pattern forming systems far from threshold should come from a variational problem admitting test functions which are \emph{multi-valued} and in particular \emph{two-valued}. In physical parlance this is often expressed by saying that the wavefield $\vec{k}$ should be allowed to be a \emph{director field}; i.e., an unoriented vector field. One figure (see Fig.~\ref{fig:sh-zip} below) from \cite{EINP2} will help to crystallize the issue and the focus of this paper. \begin{figure}[htbp] \centerline{\includegraphics[width = 0.9\hsize, angle=90]{figs/lzippers.eps}} \caption{The ``Swift-Hohenberg'' zippers. The patterns are determined by minimizing the Swift-Hohenberg energy functional for various choices of the angle $\alpha$ that determines the slopes of the stripe patterns as $y \rightarrow \pm \infty$.} \label{fig:sh-zip} \end{figure} This figure shows seven numerical simulations, each done in a horizontal strip, of a solution to the \emph{Swift-Hohenberg equation}, which is a generic model of microscopic equations for a pattern forming system. Each of these is run far from threshold but with differing boundary conditions imposed at the edges. In each case the boundary conditions impose a constant orientation of the stripe at the edges such that the normal to the stripe is $(\cos(\alpha), \sin(\alpha))$ along the top edge and $(\cos(\alpha), - \sin(\alpha))$ along the bottom edge. The only thing that changes from one simulation to the next is the value of $\alpha$, which in the figure is recorded on the left in each respective cell. The results of \cite{EINP} together with symmetry considerations establish that for an analogous domain and boundary values, the asymptotic minimizers of (\ref{eq:gl}), within the class of single-valued phases, should have the form shown in the bottom-most cell of Figure (\ref{fig:sh-zip}).
That is, they should have wavevectors very close to $(\cos(\alpha), \sin(\alpha))$ in the upper region of the cell and very close to $(\cos(\alpha), -\sin(\alpha))$ in the lower region of the cell, with a boundary layer around the mid-line in which the wavevector transitions smoothly but rapidly from one state to the other. These minimizers are dubbed \emph{knee solutions} in \cite{EINP} and in the limit as $\mu \to 0$, they tend to a configuration in which there is a sharp jump in the wavevector along the mid-line. This kind of defect is called a \emph{grain boundary}. In other words, the theory for (\ref{eq:gl}) with single-valued phases predicts that the grain boundary should be the limiting defect independent of the value of $\alpha$. The different result appearing in Figure (\ref{fig:sh-zip}) was one of the pieces of evidence cited in \cite{EINP2} to argue the necessity for the larger variational class of multi-valued phases, even in such simple geometries as those of Figure (\ref{fig:sh-zip}). In this paper we are going to carry out a careful analytical study of the RCN variational problem in exactly this geometry but within a larger class of two-valued phases. We will firmly establish that the form of the asymptotic minimizers in this more general model does in fact depend non-trivially on $\alpha$. In addition, the construction of test functions in section \ref{sec:u_bound} and the numerical simulations in section \ref{sec:results} give some intuitive and experimental support to the belief that the stable solutions of the RCN equations qualitatively resemble what is seen in the Swift-Hohenberg simulations. In \cite{EINP2}, the term Swift-Hohenberg ``zippers'' was coined to refer to the problem studied in Figure (\ref{fig:sh-zip}). In this paper we will be studying \emph{Cross-Newell zippers}. \section{Setup} \label{sec:prelim} We are given an angle $\alpha$ that determines the boundary conditions on the pattern as $y \rightarrow \pm \infty$ by $$ \nabla \theta \rightarrow (\cos(\alpha), \pm \sin(\alpha)) \text{ as } y \rightarrow \pm \infty. $$ Note that this differs from the setup underlying the Swift-Hohenberg zippers in that the boundary conditions are placed at $\pm \infty$ in the $y$-direction rather than at finite values of $y$. This simplifies our technical considerations in that we don't need to worry about adjusting the location of these boundaries as $\alpha$ changes. Also, all of the patterns we want to consider here are \emph{shift-periodic} in the $x$-direction. This allows us to reduce our study to domains that are periodic in $x$. We introduce the (small) parameter $\epsilon = \cos(\alpha)$ and we define the period $l = \pi/\epsilon$. We consider the following variational problem on the strip $\mathcal{S}^\epsilon \equiv \{(x,y) | 0 \leq x < l, y\geq 0\}$: Minimize $\mathcal{F}^\epsilon[\theta;a,\delta]$ given by $$ \mathcal{F}^\epsilon[\theta;a,\delta] = \iint_{\mathcal{S}^\epsilon} \left\{[ \Delta \theta]^2 + (1 - |\nabla \theta|^2)^2 \right\} dx dy $$ over all $a \in [0,1], \delta \in \mathbb{R}$ and $\theta$ satisfying the boundary conditions \begin{gather} \theta(x,0) = 0 \quad \text{ for } 0 \leq x < a l; \label{eq:bc} \\ \theta_y(x,0) = 0 \quad \text{ for } a l \leq x < l; \nonumber \\ \theta(x,y) - \epsilon x \text{ is periodic in $x$ with period $l$ for each } y \geq 0; \nonumber \\ \theta(x,y) - \left[\epsilon x + \sqrt{1 - \epsilon^2} y + \delta\right] \in H^2(\mathcal{S}^\epsilon).
\nonumber \end{gather} We take a moment here to explain the considerations that have motivated the mixed Dirichlet-Neumann boundary conditions, that is, the first two boundary conditions in (\ref{eq:bc}) above. We argued in the introduction that in order to capture the physically relevant minimizers, the RCN variational problem needed to allow for multi-valued phases in its admissible class of test functions. However, the numerical results on the Swift-Hohenberg zippers suggest that in certain symmetrical geometries the appropriate multi-valuedness can be introduced in a tractable fashion. Indeed, in the case of the SH zippers we see that the symmetry of the boundary conditions between the upper and lower edges of the domain is preserved in the symmetry of all of the exhibited solutions about the middle horizontal axis; i.e., the reflection in $y$ about the $y=0$ axis. This suggests that a single-valued phase could describe the solution in the upper half-plane with the solution in the lower half-plane given as a symmetric reflection of that in the upper half-plane about $y=0$. \begin{figure}[htbp] \centerline{\includegraphics{figs/boundaryconds.eps}} \caption{An illustration of the appropriate boundary conditions.} \label{fig:bcs} \end{figure} Figure~\ref{fig:bcs} illustrates two instances of the form that we expect these zippers to take in the infinite (in $y$) geometry. The figure on the left illustrates level curves (\textit{stripes} in the parlance of the introduction) of what we will shortly define to be a \textit{self-dual knee solution}. This is indeed symmetric about the mid-axis, which we will take to be the $y=0$ axis; moreover, one can see that its gradient field along $y=0$ is tangential to this axis. Thus the gradient field in the upper half-plane is completely symmetrical to that in the lower half-plane under reflection about $y=0$. However, for the striped pattern on the right in figure~\ref{fig:bcs}, this is not the case. There are regions, illustrated for example by the darkened interval along $y=0$, where the gradient field is tangential to this axis; but there are other regions, illustrated for example by the lightened interval along $y=0$, where the gradient field needs to be perpendicular to this axis. By reflection symmetry this field will point upwards in the upper half-plane and downward in the lower half-plane. This cannot be supported by a vector field but it is allowable for a director field. This indicates that in this region a two-valued phase is required. To get at the conditions on the phase itself we observe that patterns of the type illustrated here are analytically given in terms of a form function $F$ of the phase $\theta = \theta(x,y)$ such that $F$ is locally periodic of period $2\pi$ in $\theta$ and such that $F(\theta(x,y))$ is even in $y$ and smooth in $(x,y)$. In order to allow $\theta$ to be two-valued we also require $F$ to be even in $\theta$. (An example of a global form function having these properties is $F = \cos$.) It follows from these requirements that either $\theta(x,y)$ is even in $y$, in which case $\theta_y(x,0) = 0$, a Neumann boundary condition; or $\theta(x,y)$ is an odd function of $y$ modulo $\pi$, in which case $\theta(x,0) = n\pi$ for some integer $n$, a Dirichlet boundary condition.
Thus to realize the pattern on the right in figure~\ref{fig:bcs} in terms of a single-valued phase in the upper half-plane, we would need to take the Neumann boundary condition on the darkened interval and the Dirichlet boundary condition on the lightened interval. This is what we have done in (\ref{eq:bc}). For the self-dual knee pattern on the left we would take the entire boundary condition to be Neumann. \medskip The functional $\mathcal{F}^\epsilon$ is the RCN energy functional but with the scaling $\mu$ removed. It is appropriate to do this because the demonstration that the nature of the RCN minimizers depends on $\alpha$ is independent of this scaling. The first and the second conditions impose a mixed Dirichlet/Neumann boundary condition at $y = 0$, the third condition imposes (shifted-) periodicity in $x$ and the last condition ensures that the test functions $\theta$ approach the straight parallel roll patterns $\epsilon x + \sqrt{1 - \epsilon^2} y + \delta$ as $y \rightarrow \infty$. Note that the dependence of the functional $\mathcal{F}^\epsilon$ on the parameter $\epsilon$ is through the dependence of the domain $\mathcal{S}^\epsilon$ and the boundary conditions on $\epsilon$. The parameters $a$ and $\delta$ are determined by minimization. The parameter $a$ is a measure of the fraction of the boundary at $y = 0$ that has a Dirichlet boundary condition, and $\delta$ represents the {\em asymptotic phase shift}, that is the difference in phases between the test function $\theta$ and the roll pattern $\hat{\theta}(x,y) = \epsilon x + \sqrt{1 - \epsilon^2} y$ which satisfies $\hat{\theta}(0,0) = \theta(0,0) = 0$. The case where $a$ is set to zero is considered in earlier references \cite{EINP}. The test functions $\theta(x,y)$ satisfy a pure Neumann boundary condition at $y = 0$ and the minimizers in this case are the self-dual knee solutions $$ \theta_{neu}(x,y) = \epsilon x + \log(\cosh(\sqrt{1-\epsilon^2} y)). $$ These solutions have an asymptotic phase shift of $-\log(2)$ and the energy of the minimizers in the strip $\mathcal{S}^\epsilon$ is given by \begin{equation} \mathcal{F}^\epsilon[\theta_{neu};0,-\log(2)] = \frac{ 4 \pi \sqrt{1-\epsilon^2}}{3 \epsilon}. \label{eq:chevrons} \end{equation} The existence of $(\theta^\epsilon,a^\epsilon,\delta^\epsilon)$ minimizing $\mathcal{F}^\epsilon$ can be shown from the direct method in the calculus of variations. We also prove the following results about the minimizers, and their energy -- \begin{theorem} {\em Upper bound} There is a constant $E_0$ such that $\mathcal{F}^\epsilon[\theta^\epsilon;a^\epsilon,\delta^\epsilon] \leq E_0$ for all $\epsilon \in (0,1]$. \label{thm:u_bound} \end{theorem} We prove this result in sec.~\ref{sec:u_bound} by exhibiting an explicit test function satisfying this bound. Note the implication that the minimizers for sufficiently small $\epsilon$ cannot be the self-dual solutions, since the energy in Eq.~(\ref{eq:chevrons}) diverges as $\epsilon \rightarrow 0$. Consequently, $a^{\epsilon} > 0$ for sufficiently small $\epsilon$. \begin{theorem} {\em Lower bound} There are constants $E_1 > 0$ and $\epsilon_0 > 0$ such that, even for the optimal test function $\theta^\epsilon$ and the optimal parameter values $a^\epsilon$ and $\delta^\epsilon$, we have $\mathcal{F}^\epsilon[\theta^\epsilon;a^\epsilon, \delta^\epsilon] \geq E_1$ for all $\epsilon \leq \epsilon_0$. 
Further, there are constants $0 < \alpha_1 < \alpha_2$ such that $1 - \alpha_2 \epsilon < a^\epsilon < 1 - \alpha_1 \epsilon$ for sufficiently small $\epsilon$. \label{thm:l_bound} \end{theorem} We prove this result in sec.~\ref{sec:l_bound}. Combining this result with the preceding theorem, we obtain a rigorous scaling law for the energy of the minimizer, and for the quantity $(1-a)$ as $\epsilon \rightarrow 0$. As a corollary to Theorem~\ref{thm:l_bound}, we find that an $O(1)$ part of the energy of the minimizer concentrates on the set $a l \leq x \leq l, 0 \leq y \leq 1$. This can be interpreted as saying that a nontrivial part of the energy of the minimizer lives in the region of the convex-concave disclination pair \cite{EINP2}. \section{Upper bound} \label{sec:u_bound} We will first show an upper bound for the energy functional $\mathcal{F}^{\epsilon}$, uniform in $\epsilon$, by constructing a family of explicit test functions whose energy is uniformly bounded. The idea for the construction of these test functions comes from the self-dual ansatz \cite{EINP}, which requires that the energy density of the functional $\mathcal{F}$ should be \emph{equi-partitioned} between its two terms. Functions satisfying this ansatz solve the self-dual (resp., anti-self-dual) equation: \begin{equation} \label{eq:selfdual} \Delta\theta =\pm (1 - |\nabla\theta|^2). \end{equation} Solutions of this equation can be constructed via the logarithmic transform $$ \theta = \pm \log u $$ which reduces (\ref{eq:selfdual}) to the linear Helmholtz equation (\ref{selfdual}). We refer the reader to \cite{EINP, ET} for more background on self-dual reduction. \subsection{Self-dual test functions for the CN-Zipper problem}\label{zipper} \subsubsection{Existence} We consider the Helmholtz equation in the upper half-plane, \begin{equation}\label{selfdual} \Delta u - u = 0 \end{equation} subject to the mixed boundary conditions \begin{eqnarray} u(x,0) &=& e^{-n\pi} \,\,\, n\ell < x < (n+a)\ell \label{2} \\ u_y(x,0) &=& 0 \,\,\,\,\,\, (n+a)\ell \leq x \leq (n+1)\ell \label{3} \end{eqnarray} and with asymptotic behavior for large $y$ given by const. $\exp(-\epsilon x - \sqrt{1-\epsilon^2} y)$ where $\ell = \pi/\epsilon$ and $a \in (0,1)$. We seek a shift-periodic solution, meaning that we change variables to $w = e^{\epsilon x} u(x,y)$ and look for periodic solutions of \begin{equation}\label{shiftper} Lw = \Delta w -2\epsilon\partial_x w -(1-\epsilon^2)w=0, \end{equation} with boundary conditions of periodicity in $x$ of period $\ell$; mixed boundary conditions at $y=0$, \begin{eqnarray} w(x,0) &=& e^{\epsilon x} \,\,\,\,\,\, 0 < x < a\ell \label{5}\\ w_y(x,0) &=& 0 \,\,\,\,\,\, a\ell \leq x \leq \ell \label{6}; \end{eqnarray} and with asymptotic behavior for large $y$ given by const. $\exp(- \sqrt{1-\epsilon^2} y)$. Given such a $u$, $\theta = -\log u$ would satisfy the boundary conditions (\ref{eq:bc}). (However, for notational simplicity, in the remainder of this section we will set $\theta = \log u$.) We now let $\mathcal{S}^\epsilon$ denote the half-cylindrical domain, $\ell$-periodic in $x$ and with $y>0$. The existence of a weak solution to (\ref{shiftper}) satisfying the above boundary conditions can be established via the Lax-Milgram theorem with appropriate energy estimates. However, in order to derive uniform asymptotic energy estimates (as $\epsilon \to 0$) for the CN Zipper problem we need to go beyond existence results and try to construct a more explicit representation of the solution to (\ref{selfdual}).
Unfortunately, at present, the solutions one can construct using Greens function methods and the like do not yield sufficient a priori boundary regularity near $y=0$ to control the asymptotic behavior of the energy in this \textsl{finite} part of $\mathcal{S}^\epsilon$. We will therefore instead study solutions of a self-dual problem with modified boundary conditions (more precisely, with pure Dirichlet boundary conditions). Subsequently we will make a local modification of these solutions near the boundary to produce functions (no longer global self-dual solutions) whose asymptotic energy we can control \emph{and} which are valid test functions for the Cross-Newell Zipper problem. The modified boundary value problem we consider is (\ref{shiftper}) with (\ref{5}-\ref{6}) replaced by the pure Dirichlet boundary condition $$ w(x,0) = \left\{\begin{array}{c} e^{\epsilon x} \,\,\,\,\,\, 0 < x < a\ell \\ q_a(x) \,\,\,\,\,\, a\ell \leq x \leq \ell \\ \end{array}\right. \leqno{\begin{array}{c} (\ref{5}^\prime)\\ (\ref{6}^\prime)\\ \end{array}} $$ where $q_a(x)$ is a function which smoothly interpolates, up through second derivatives, between $e^{\epsilon x}$ at $x=a\ell$ on the left and $e^{\epsilon x - \pi}$ at $x=\ell$ on the right. There are clearly many choices for such a function; the precise choice for our purposes will be made later at the end of subsection \ref{energy}. By elliptic regularity \cite{Evans}, the solution to this boundary value problem satisfies $w(x,y) \in H^2\left(\mathcal{S}^\epsilon\right)$. In the following sections we will construct the solutions to this problem and study its asymptotics relative to the RCN energy $\mathcal{F}^\epsilon$. \subsubsection{Explicit Construction} The whole plane Green's function for the Helmholtz equation (\ref{selfdual}) is explicitly given in terms of the Bessel potential \cite{Evans}: \begin{equation}\label{Bessel} {G}(x,y;\xi,\eta)=\frac{1}{4\pi} \int_0^\infty e^{-t} \frac{dt}{t} \exp\left( -\frac{1}{4t}\{(x-\xi)^2 + (y-\eta)^2\}\right). \end{equation} In terms of this Green's function we can then represent a solution to (\ref{selfdual}), with asymptotic behavior for large $y$ given by const. $\exp(-\epsilon x - \sqrt{1-\epsilon^2} |y|)$, as \begin{eqnarray}\label{soln} u^\epsilon(x,y) &=& \int_{-\infty}^\infty \rho^\epsilon(\xi){G}(x,y;\xi,0) d\xi. \end{eqnarray} Note that \begin{eqnarray}\label{dirbase} u_y^\epsilon(x,y) &=& \int_{-\infty}^\infty \rho^\epsilon(\xi){G}_y(x,y;\xi,0) d\xi\\ \nonumber &=& -\int_{-\infty}^\infty \rho^\epsilon(\xi){G}_\eta(x,y;\xi,0) d\xi \end{eqnarray} solves (\ref{selfdual}) with respect to the standard Dirichlet boundary condition which equals minus the jump of $u_y^\epsilon$ along the $x$-axis. One may check directly (see (\ref{FT})) that in fact $\rho^\epsilon(\xi) = -2 u_y^\epsilon(\xi,0)$ almost everywhere. Integrating (\ref{dirbase}) with respect to $y$ gives \begin{eqnarray}\label{intdirbase} u^\epsilon(x,y) + f(x) &=& \int_{-\infty}^\infty \rho^\epsilon(\xi){G}(x,y;\xi,0) d\xi. \end{eqnarray} Since both $u^\epsilon(x,y)$ and the RHS of (\ref{intdirbase}) decay as $y\uparrow \infty$, it follows that $f(x)\equiv 0$. This is consistent with the ansatz (\ref{soln}), taking $\rho^\epsilon(\xi)$ to be the jump in the normal derivative of $u^\epsilon$ along $y=0$. \medskip We make the following shift-periodic ansatz for $\rho^\epsilon$, $$ \rho^\epsilon(\xi+\ell)e^{\epsilon(\xi+\ell)} = \rho^\epsilon(\xi)e^{\epsilon\xi}. 
$$ With this one can expand out (\ref{soln}) more explicitly as \noindent \,\,$u^\epsilon(x,y)=$ \begin{eqnarray} &=& \frac{1}{4\pi}\sum_{n \in \textbf{Z}} \int_0^\infty\frac{dt}{t} e^{-(t+\frac{y^2}{4t})} \int_{n\ell}^{(n+1)\ell} \rho^\epsilon(\xi) \exp\left(\frac{(x-\xi)^2}{-4t}\right) d\xi \label{11}\\ &=& \frac{1}{4\pi}\sum_{n \in \textbf{Z}} \int_0^\infty\frac{dt}{t} e^{-(t+\frac{y^2}{4t})} \int_{0}^{\ell} e^{-n\pi}\rho^\epsilon(\xi) \exp\left(\frac{(x-(\xi + n\ell))^2}{-4t}\right) d\xi \label{12}\\ &=& \frac{1}{4\pi} \int_0^\infty\frac{dt}{t} e^{-(t+\frac{y^2}{4t})} \int_{0}^{\ell}\rho^\epsilon(\xi) \sum_{n \in \textbf{Z}}e^{-n\pi} \exp\left(\frac{(x-(\xi + n\ell))^2}{-4t}\right) d\xi \label{13}\\ &=& \frac{e^{-\epsilon x}}{4\pi} \int_0^\infty\frac{dt}{t} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \int_{0}^{\ell}d\xi\rho^\epsilon(\xi)e^{\epsilon\xi}\sum_{n \in \textbf{Z}} \exp\left(\frac{((x-2\epsilon t)-(\xi + n\ell))^2}{-4t}\right) \label{14}\\ &=& \frac{e^{-\epsilon x}}{\sqrt{4\pi}} \int_0^\infty\frac{dt}{t^\frac{1}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \frac{1}{\ell}\int_{0}^{\ell} \rho^\epsilon(\xi) e^{\epsilon \xi} \,\,\vartheta_3\left(\frac{-(x - \xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right) d\xi \label{15} \end{eqnarray} In (\ref{11}) we have interchanged the order of integration which is justified by Tonelli's Theorem; in (\ref{12}) we've made the substitution $\xi = \xi_n + n\ell$ and in (\ref{13}) we've commuted the sum past the integrals which is justified by monotone convergence--all terms in the series are positive and hence the partial sums are monotonic. In (\ref{14}) we write each summand as a single exponential and then appropriately complete the square in each exponent. Finally in (\ref{15}) we apply Jacobi's identity \cite{WW}. Here $\vartheta_3$ is one of the Jacobi theta functions, in this setting explicitly given as \begin{eqnarray} \label{Jacobi} \vartheta_3\left(\frac{- x + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right) &=& 1 + 2\sum_{n=1}^\infty e^{-\left(\frac{2\pi}{\ell}\right)^2 n^2 t} \cos\left(\frac{2\pi n}{\ell}\left(x-\frac{2\pi t}{\ell}\right) \right) \end{eqnarray} Finally, from (\ref{15}) we can express our candidate for the solution to (\ref{shiftper}), ($\ref{5}^\prime - \ref{6}^\prime$) as \begin{eqnarray} {w^\epsilon}(x,y) = \frac{1}{\sqrt{4\pi}} \int_0^\infty\frac{dt}{t^\frac{1}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \frac{1}{\ell}\int_{0}^{\ell} p^\epsilon(\xi) \,\,\vartheta_3\left(\frac{-(x - \xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right) d\xi,\label{shiftpersoln} \end{eqnarray} where $p^\epsilon(\xi) = \rho^\epsilon(\xi) e^{\epsilon \xi}$. \subsubsection{Data Characterization, periodized and in Fourier Space} From the previous sections we have that $p^\epsilon(\xi)$ is periodic of period $\ell$; also ${w^\epsilon}(x,y)$ is periodic in $x$ of period $\ell$ and $=e^{\epsilon x}$ along $(0, a\ell)$ when $y=0$. Moreover, taking the Fourier transform of (\ref{shiftpersoln}) one finds that the Fourier coefficients, in $x$, must satisfy \begin{eqnarray} \label{FT} \{\widehat{w^\epsilon(x,y)}\}(n,y) &=& \frac{1}{2}\frac{1}{\sqrt{1+\epsilon^2(2n+i)^2}}{\{\widehat{p^\epsilon(\xi)}\}}(n)e^{-\sqrt{1+\epsilon^2(2n+i)^2}y} \end{eqnarray} for each value of $y$. Taking the limit as $y\to 0$ on both sides of (\ref{FT}) gives \begin{eqnarray} \label{FTsoln} \{\widehat{w^\epsilon(x,0)}\}(n) &=& \frac{1}{2}\frac{1}{\sqrt{1+\epsilon^2(2n+i)^2}}{\{\widehat{p^\epsilon(\xi)}\}}(n). 
\end{eqnarray} This is a determining condition for $p^\epsilon(\xi)$. We note that differentiating (\ref{FT}) with respect to $y$ and setting $y = 0$ demonstrates that $p^\epsilon(x) = - 2 w^\epsilon_y(x,0)$, at least in the $L^2$ sense. Since $w^\epsilon(x,0) \in H^{2}(S^1)$, it follows, by comparison, that $2\sqrt{1+ \epsilon^2(2n+i)^2}\widehat{w^\epsilon}(n)\in h^1(\mathbb{Z})$. Given this we can now define \begin{equation} \label{Besspot} p^\epsilon(x) \doteq \left\{2\sqrt{1+ \epsilon^2(2n+i)^2}\widehat{w^\epsilon}(n)\right\}^\vee(x) \end{equation} which characterizes $p^\epsilon$ as an element of $H^1(S^1)$. It follows from Sobolev's lemma \cite{Evans} that $p^\epsilon$ can be taken to be continuous. This last observation also justifies the existence of the Fourier coefficients $\{\widehat{p^\epsilon}\}(n)$ that were formally introduced in (\ref{FT}). \subsubsection{Large $y$ asymptotics}\label{largey} We now determine the large $y$ asymptotics of (\ref{soln}). By (\ref{FT}), $w^\epsilon$ has a Fourier representation given by \begin{eqnarray*} {w^\epsilon}(x,y) &=& \sum_{n \in \mathbb{Z}} \widehat{w^\epsilon}(n) e^{-\sqrt{1+\epsilon^2(2n+i)^2}y} e^{\frac{2\pi i nx}{\ell}}\\ &=& \widehat{w^\epsilon}(0) e^{-\sqrt{1-\epsilon^2}y} + \mathcal{O}\left(e^{-2 \sqrt{1 + 3\epsilon^2} y}\right). \end{eqnarray*} (We note that for large $y$ this series converges uniformly to a smooth, in fact real-analytic, function of $x$.) Moreover, $\widehat{w^\epsilon}(0) = \frac{1}{\ell}\int_{0}^{\ell}w^\epsilon(x,0) dx$ is non-zero since by the maximum principle \cite{Evans} applied to the elliptic PDE (\ref{shiftper}) on the cylinder $[0,\ell] \times \left(-\infty, \infty\right)$, the integrand, $w^\epsilon(x,0)$, is non-negative and in fact, by ($\ref{5}^\prime-\ref{6}^\prime$), non-vanishing on $[0,\ell]$ (the definition of $q_a$ which we give later will ensure that this is so). \subsection{Energy Estimates}\label{energy} We will now try to show that the regularized Cross-Newell energy of $\theta(x,y) = \log u(x,y)$ is uniformly bounded in $\epsilon$. This would establish a uniform (in $\epsilon$) upper bound for the energy minimizers. Recall that the energy is calculated by integrating the energy density over the domain $\mathcal{S}^\epsilon$. The estimate breaks naturally into the consideration of two regions: $[0, \ell] \times \{y\geq M_\epsilon\}$ and $[0, \ell] \times \{y < M_\epsilon\}$ where $M_\epsilon$ is to be determined. We remark that the so-called ``knee solution'' of the self-dual equation provides an upper bound for the energy for values of $\epsilon$ bounded away from zero. So we only need to be concerned with small values of $\epsilon$. Since $u^\epsilon(x,y) = e^{-\epsilon x}w^\epsilon(x,y)$ solves the Helmholtz equation, it will suffice to bound the density $(1-|\nabla\theta^\epsilon|^2)^2$ (since the integral of this density equals that of $(\Delta \theta^\epsilon)^2$ for self-dual solutions). \subsubsection{Estimates in $[0, \ell] \times \{y\geq M_\epsilon\}$} We begin by considering the domain for large $y$. Since \begin{eqnarray*} &&\nabla\theta^\epsilon(x,y) = \frac{\nabla u^\epsilon}{u^\epsilon}(x,y) = \left( \begin{array}{c} -\epsilon \\ 0 \\ \end{array} \right) + \frac{\nabla w^\epsilon}{w^\epsilon}(x,y), \end{eqnarray*} we may reduce our considerations to studying the asymptotics of $w^\epsilon$ and its first derivatives.
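Before carrying out these asymptotics, we record a minimal symbolic check of the self-dual reduction invoked above: if $\Delta u = u$ and $\theta = \log u$, then $\Delta\theta = \Delta u/u - |\nabla u|^2/u^2 = 1 - |\nabla\theta|^2$, so the two terms of the energy density agree pointwise. The sketch below (in Python/sympy) uses the explicit Helmholtz solution $u = e^{\epsilon x}\cosh(\sqrt{1-\epsilon^2}\,y)$, whose logarithm is the knee solution $\theta_{neu}$; the particular choice of $u$ is purely illustrative.
\begin{verbatim}
# Symbolic check of the self-dual reduction:  Delta u = u, theta = log u
#   ==>  Delta theta = 1 - |grad theta|^2  (so the two energy densities coincide).
# The specific Helmholtz solution below (the knee solution) is illustrative.
import sympy as sp

x, y, eps = sp.symbols('x y epsilon', positive=True)
s = sp.sqrt(1 - eps**2)

u = sp.exp(eps*x) * sp.cosh(s*y)                       # solves Delta u = u
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) - u))           # 0

theta = sp.log(u)                                      # eps*x + log cosh(s*y)
lap   = sp.diff(theta, x, 2) + sp.diff(theta, y, 2)
grad2 = sp.diff(theta, x)**2 + sp.diff(theta, y)**2
print(sp.simplify(lap - (1 - grad2)))                  # 0: self-dual equation holds
\end{verbatim}
The identity holds for any positive Helmholtz solution, which is what justifies bounding only one of the two energy-density terms above.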
It will be convenient to replace the convolution integral in (\ref{shiftpersoln}) by the Fourier series whose coefficients are the product of the Fourier coefficients of $p^\epsilon$ and the $\vartheta_3$ series. This results in the following alternative representation of $w^\epsilon$: \begin{eqnarray*} {w^\epsilon}(x,y) &=& \frac{1}{\sqrt{4\pi}} \sum_{n\in \mathbb{Z}}\int_0^\infty\frac{dt}{t^\frac{1}{2}} e^{- \left((1+\epsilon^2(2n+i)^2)t+\frac{y^2}{4t}\right)}\widehat{p^\epsilon} (n)e^{\frac{2\pi i n x}{\ell}}. \end{eqnarray*} With the change of variables, $$ s = \frac{t}{y} $$ this representation takes the form \begin{eqnarray}\label{asympw} {w^\epsilon}(x,y) &=& \sqrt{\frac{y}{4\pi}} \sum_{n\in \mathbb{Z}}\int_0^\infty\frac{ds}{s^\frac{1}{2}} e^{-\frac{y}{4}\left(\frac{s}{s_n^2} + \frac{1}{s}\right)}\widehat{p^\epsilon}(n)e^{\frac{2\pi i n x}{\ell}}, \end{eqnarray} where $s_n = \frac{1}{2\sqrt{1 + \epsilon^2 (2n+i)^2}}$. The critical point of the exponent is $s=s_n$ and the expansion of the exponent in the $nth$ term of the series near this critical point has the form \begin{eqnarray*} \frac{s}{s_n^2} + \frac{1}{s} &=& \frac{2}{s_n}\left(1 + \frac{(s-s_n)^2}{s_n^2}+ \mathcal{O}\left(\frac{(s-s_n)^3}{s_n^3}\right)\right). \end{eqnarray*} An asymptotic expansion in large $y$ may be developed for the integral in each term of the series (\ref{asympw}) by the method of Laplace. By the uniform convergence of the series (for large $y$), the asymptotic expansion of the series is equivalent to the sum of the asymptotic expansions from each term. We implement this strategy to find the leading order, large $y$ behavior, and next corrections, for $w^\epsilon$, $w_x^\epsilon$ and $w_y^\epsilon$: \begin{eqnarray*} w^\epsilon(x,y) &=& \sqrt{\frac{y}{4\pi}} \sum_{n\in \mathbb{Z}}s_n^\frac{1}{2} e^{-\frac{y}{s_n}}\int_{-1}^\infty \frac{dz}{(1+z)^\frac{1}{2}} e^{- \frac{y}{2s_n}\left(z^2+\mathcal{O}(z^3)\right)}\widehat{p^\epsilon}(n)e ^{\frac{2\pi i n x}{\ell}}, \\ w_x^\epsilon(x,y) &=& -2 i \epsilon \sqrt{\frac{y}{4\pi}} \sum_{n\ne 0} s_n^\frac{1}{2} e^{-\frac{y}{s_n}}\int_{-1}^\infty \frac{dz}{(1+z)^\frac{1}{2}} e^{-\frac{y}{2s_n}\left(z^2+\mathcal{O}(z^3)\right)}n \widehat{p^\epsilon}(n)e^{\frac{2\pi i n x}{\ell}}, \\ w_y^\epsilon(x,y) &=& \frac{1}{2y}w^\epsilon - \frac{1}{2}\sqrt{\frac{y}{4\pi}} \sum_{n\in \mathbb{Z}}s_n^{-\frac{1}{2}} e^{-\frac{y}{s_n}}\int_{-1}^\infty \frac{dz}{(1+z)^\frac{1}{2}} e^{- \frac{y}{2s_n}\left(z^2+\mathcal{O}(z^3)\right)}\widehat{p^\epsilon}(n)e ^{\frac{2\pi i n x}{\ell}}, \end{eqnarray*} where in the $n^{th}$ term of each series, $z=\frac{s-s_n}{s_n}$, respectively. We can now apply Laplace's method to each term and then observe that the dominant contributions for large $y$ come from the $0, +1, -1$ Fourier modes. 
Retaining just these we derive the following asymptotic behavior for $\nabla \log w^\epsilon$: \begin{eqnarray*} \frac{w_x^\epsilon}{w^\epsilon}(x,y) &=& \frac{-4\epsilon\sqrt{1-\epsilon^2}}{\widehat{p^\epsilon}(0)}e^{-2\epsilon^2y}\Im\left(\widehat{p^\epsilon}(1) e^{-2i \epsilon^2 y}\right) = \mathcal{O}\left(\epsilon e^{-2\epsilon^2 y}\right)\\ \frac{w_y^\epsilon}{w^\epsilon}(x,y) &=& \frac{1}{2y} -\sqrt{1-\epsilon^2}\frac{1 + \Re\left(\frac{\widehat{p^\epsilon}(1)}{\widehat{p^\epsilon}(0)} e^{2i(\epsilon x - \epsilon^2 y)}\right)e^{-2\epsilon^2 y} +\mathcal{O}\left(\epsilon^2e^{-2\epsilon^2 y}\right)}{1 + \Re\left(\frac{\widehat{p^\epsilon}(1)}{\widehat{p^\epsilon}(0)}e^{2i(\epsilon x - \epsilon^2 y)}\right)e^{-2\epsilon^2 y} +\mathcal{O}\left(\epsilon^2e^{-2\epsilon^2 y}\right)}\\ &=& -\sqrt{1-\epsilon^2} + \frac{1}{2y} +\mathcal{O}\left(\epsilon^2 e^{-2\epsilon^2 y}\right). \end{eqnarray*} Based on these asymptotics we can now estimate the energy in the large $y$ domain. \begin{eqnarray*} \nabla \theta^\epsilon &=& \left( \begin{array}{c} -\epsilon \\ -\sqrt{1-\epsilon^2} \\ \end{array} \right) + \left( \begin{array}{c} \mathcal{O}\left(\epsilon e^{-2\epsilon^2 y} \right) \\ \mathcal{O}\left(\frac{1}{y} + \epsilon^2 e^{-2\epsilon^2 y} \right) \\ \end{array} \right) \end{eqnarray*} from which it follows that \begin{eqnarray*} |\nabla \theta^\epsilon|^2 &=& 1 + \mathcal{O}\left(\epsilon^2 e^{-2\epsilon^2 y} \right) + \mathcal{O}\left(\frac{1}{y}\right) \end{eqnarray*} and so the energy density \begin{eqnarray*} \left(1- |\nabla \theta^\epsilon|^2\right)^2 &=& \mathcal{O}\left(\epsilon^4 e^{-4\epsilon^2 y} \right) + \mathcal{O}\left(\frac{\epsilon^2}{y} e^{-2\epsilon^2 y} \right) + \mathcal{O}\left(\frac{1}{y^2}\right). \end{eqnarray*} >From this it follows that the "large $y$" part of the total energy is bounded as $$ \mathcal{F}^\epsilon_{y\geq M_\epsilon} \lesssim \frac{1}{\epsilon M_\epsilon}. $$ Thus, if we take $M_\epsilon = c/\epsilon$, this part of the total energy will remain finite as $\epsilon \to 0$. \medskip \subsubsection{Estimates in $[0,\ell] \times \left\{y <M_\epsilon\right\}$} We next turn to consideration of the energy density in the \textsl{finite} part of the domain. To facilitate this consideration we will sometimes make the uniformizing change of variables $z = \epsilon \xi$ and $h = \epsilon x$ in the Jacobi theta function (\ref{Jacobi}): \begin{eqnarray} \vartheta_3\left(\frac{- (x-\xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right)=\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) \label{perJacobi} \end{eqnarray} In what follows we will assume that $a$ is chosen to depend on $\epsilon$ in such a way that $1-a^\epsilon = \mathcal{O}(\epsilon)$. 
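In the finite-region estimates that follow, the $t$-integral remaining after the convolution is evaluated via a Bessel-type identity (cf.\ \cite{AS}, 9.6.23), namely $\frac{y}{\sqrt{4\pi}}\int_0^\infty t^{-3/2}\, e^{-((1-\epsilon^2)t + y^2/4t)}\, dt = e^{-\sqrt{1-\epsilon^2}\,y}$. As a sanity check, a minimal numerical sketch of this identity (in Python, with illustrative values of $\epsilon$ and $y$) is:
\begin{verbatim}
# Numerical check of the Bessel-type identity used below:
#   (y/sqrt(4*pi)) * int_0^inf t^(-3/2) exp(-((1-eps^2) t + y^2/(4 t))) dt
#       = exp(-sqrt(1-eps^2) * y)
# The parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

eps, y = 0.1, 2.0
alpha = 1.0 - eps**2

integrand = lambda t: t**(-1.5) * np.exp(-(alpha * t + y**2 / (4.0 * t)))
val, err = quad(integrand, 0.0, np.inf)

lhs = y / np.sqrt(4.0 * np.pi) * val
rhs = np.exp(-np.sqrt(alpha) * y)
print(lhs, rhs, abs(lhs - rhs))   # the two values agree to quadrature accuracy
\end{verbatim}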
\medskip We will make use here of the following single-layer potential counterpart of the double-layer potential representation (\ref{shiftpersoln}), which in fact can be deduced directly from a change of variables in (\ref{asympw}): $\noindent {w^\epsilon}(x,y)=$ \begin{eqnarray} &=& \frac{-y}{\sqrt{4\pi}} \int_0^\infty\frac{dt}{t^\frac{3}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \frac{1}{\ell}\int_{0}^{\ell} w^\epsilon(\xi,0) \,\,\vartheta_3\left(\frac{-(x - \xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right) d\xi,\label{shiftpersoln2} \end{eqnarray} We study the asymptotic behavior of the convolution integral in (\ref{shiftpersoln2}) for $x \in (0, a\ell)$ and for \textit{times} $t$ of order less than $1/\epsilon$: \begin{eqnarray}\label{convoest} &&\frac{1}{\ell}\int_{0}^{\ell} w^\epsilon(\xi) \,\, \vartheta_3\left(\frac{- (x-\xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right)d\xi \\ \nonumber &=&\frac{1}{\ell}\int_{0}^{\ell} e^{\epsilon \xi} \,\, \vartheta_3\left(\frac{- (x-\xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right)d\xi\\ \nonumber &+& \frac{1}{\ell}\int_{a\ell}^{\ell} \left(w^\epsilon(\xi) - e^{\epsilon \xi}\right) \,\, \vartheta_3\left(\frac{- (x-\xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right)d\xi \\ \nonumber &=& \frac{1}{\pi}\int_{0}^{\pi} e^{z} \,\,\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz\\ \nonumber &+& \frac{1}{\pi}\int_{0}^{\pi} \left(q_a(\frac{z}{\epsilon}) - e^{z}\right) \,\, \vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi},\frac{-4\epsilon^2 t}{\pi}\right) dz \\ \nonumber &=& e^{\epsilon x} + o(\epsilon), \end{eqnarray} where in the third line above, the form of the integrals follows from making the change of variables as in (\ref{perJacobi}). In the second integral we smoothly extend $q_a(\frac{z}{\epsilon}) - e^z$ to be zero on $(0,a\pi)$. The final line follows for $t$ of order less than $1/\epsilon$ because in this regime the Jacobi theta function inside the convolution behaves as a \textit{Dirac comb} as $\epsilon \to 0$. The second term has this asymptotic behavior because $h \in (0,a\pi)$ and the support of $q_a(\frac{z}{\epsilon}) - e^z$ is complementary to this interval, so that this integral decays exponentially to zero with $\epsilon$, as with a Dirac sequence away form its support. Based on (\ref{convoest}) we can estimate $w^\epsilon$ as \begin{eqnarray} \label{w-limit} {w}^\epsilon(h,y) &=& \nonumber \frac{-y}{\sqrt{4\pi}} \left[\int_0^{1/\epsilon}\frac{dt}{t^\frac{3}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \left(e^{\epsilon x} + {o}(\epsilon)\right)\right] + \mathcal{O}\left(e^{-1/\epsilon}\right)\\ &=& e^{\epsilon x} e^{-\sqrt{1-\epsilon^2}y}+ o(\epsilon). \end{eqnarray} The evaluation of the previous integral may be deduced from a basic Bessel identity (see \cite{AS} 9.6.23). In order to estimate $\nabla\theta^\epsilon$, we also need to estimate the $x$ and $y$ derivatives of $w^\epsilon(x,y)$. 
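\medskip\noindent Before turning to these derivatives, we illustrate numerically the Dirac-comb behavior of the theta kernel used in (\ref{convoest}). The sketch below assumes the convention $\vartheta_3(u,s)=\sum_{n\in\mathbb{Z}}e^{\pi s n^2+2\pi i n u}$ (which may differ from (\ref{Jacobi}) in normalization) and uses a smooth $\pi$-periodic stand-in for the boundary data; the convolution approaches the point value as $\epsilon\to 0$ for times $t$ of order less than $1/\epsilon$.
\begin{verbatim}
import numpy as np

def theta3(u, s, nmax=100):
    # assumed convention: sum_n exp(pi*s*n^2 + 2*pi*i*n*u), with s < 0
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(np.pi * s * n**2 + 2j * np.pi * n * u))

def smoothed(f, h, eps, t, nz=2000):
    # (1/pi) int_0^pi f(z) theta_3((-(h-z)+2*eps^2*t)/pi, -4*eps^2*t/pi) dz
    z = np.linspace(0.0, np.pi, nz, endpoint=False)
    dz = np.pi / nz
    kern = np.array([theta3((-(h - zz) + 2 * eps**2 * t) / np.pi,
                            -4 * eps**2 * t / np.pi) for zz in z])
    return float(np.real(np.sum(f(z) * kern) * dz / np.pi))

f = lambda z: np.exp(np.cos(2 * z))        # smooth, pi-periodic test data
h = 1.0
for eps in (0.3, 0.1, 0.03):
    t = 0.5 / eps                          # a time of order less than 1/eps
    print(eps, smoothed(f, h, eps, t), f(h))   # convolution -> f(h) as eps -> 0
\end{verbatim}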
To this end we first consider the $x$-derivative of the internal convolution integral which equals \begin{eqnarray} &&\partial_x \frac{1}{\pi}\int_{0}^{\pi} \left( q_a(\frac{z}{\epsilon}) - e^z \right) \,\,\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz \\ &=& \epsilon \frac{1}{\pi}\int_{0}^{\pi} \left( q_a(\frac{z}{\epsilon}) - e^z \right) \,\,\partial_h \vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz \\ &=& \nonumber - \epsilon\frac{1}{\pi}\int_{0}^{\pi} \left( q_a(\frac{z}{\epsilon}) - e^z \right) \,\,\partial_z\,\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz. \end{eqnarray} Integrating by parts, the above derivative may be rewritten as \begin{eqnarray} \label{xderiv}&& \frac{1}{\pi}\int_{0}^{\pi} \epsilon\partial_z \left( q_a(\frac{z}{\epsilon}) - e^z \right)\, \,\,\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz\\ &=& \nonumber o(\epsilon)\,\, \mbox{for}\,\,\, h \in (0,a\pi), \end{eqnarray} as for the second integral in the last line of (\ref{convoest}). Thus, \begin{eqnarray} \nonumber \frac{u^\epsilon_x}{u^\epsilon} &=& -\epsilon + \frac{\frac{y}{\sqrt{4\pi}} \int_0^\infty\frac{dt}{t^\frac{3}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \frac{1}{\pi}\int_{0}^{\pi} \epsilon\partial_z \left( q_a(\frac{z}{\epsilon}) - e^z \right)\,\,\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz}{\frac{y}{\sqrt{4\pi}} \int_0^\infty\frac{dt}{t^\frac{3}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \frac{1}{\pi}\int_{0}^{\pi} \left( q_a(\frac{z}{\epsilon}) - e^z \right)\,\,\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz}\\ \label{xlogd}&=& -\epsilon + \frac{o(\epsilon)}{e^{\epsilon x} + o(\epsilon)} = \mathcal{O}(\epsilon) \end{eqnarray} For the $y$ logarithmic derivative we have \begin{eqnarray} \nonumber \frac{{u}^\epsilon_{y}}{{u}^\epsilon} &=& \frac{1}{y} - \frac{\frac{y}{2}\int_0^\infty\frac{dt}{t^\frac{5}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \frac{1}{\ell}\int_{0}^{\ell} w^{\epsilon}(\xi,0) \vartheta_3\left(\frac{-(x - \xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right) d\xi}{\int_0^\infty\frac{dt}{t^\frac{3}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})}\frac{1}{\ell}\int_{0}^\ell w^{\epsilon}(\xi,0) \vartheta_3\left(\frac{-(x- \xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right) d\xi } \\ \nonumber &=& \frac{1}{y} - \frac{\frac{y}{2}\int_0^\infty\frac{dt}{t^\frac{5}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})} \left(e^{\epsilon x} + o(\epsilon)\right)}{\int_0^\infty\frac{dt}{t^\frac{3}{2}} e^{-((1-\epsilon^2)t+\frac{y^2}{4t})}\left(e^{\epsilon x} + o(\epsilon)\right)}\\ \label{ylogd} &=& \frac{1}{y} - \frac{K_{-\frac{3}{2}}(y)}{K_{-\frac{1}{2}}(y)} + o(\epsilon) = -1 + o(\epsilon). \end{eqnarray} The last equivalence follows from a Bessel recurrence identity \cite{AS}, formula 9.6.26, together with formula 9.6.6. Thus we finally have \begin{eqnarray} \nabla \theta^\epsilon = \left( \begin{array}{c} 0 \\ -1 \\ \end{array} \right) + \mathcal{O}(\epsilon) \end{eqnarray} and hence $(1-|\nabla\theta^\epsilon|^2)^2 = \mathcal{O}(\epsilon^2)$. Since the domain $[0,a\ell] \times \left\{y < M_\epsilon\right\}$ has dimensions $1/\epsilon \times 1/\epsilon$, the total energy in this region is also asymptotically finite. 
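\medskip\noindent The Bessel-function step in (\ref{ylogd}) is easy to verify numerically: with $1-\epsilon^2$ set to $1$, the ratio of the two $t$-integrals (including the $y/2$ prefactor) equals $K_{3/2}(y)/K_{1/2}(y)$, and $1/y - K_{3/2}(y)/K_{1/2}(y) = -1$ identically. A minimal sketch:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def t_integral(power, y):
    f = lambda t: t**power * np.exp(-(t + y**2 / (4.0 * t)))
    val, _ = quad(f, 0.0, np.inf)
    return val

for y in (2.0, 5.0, 10.0):
    ratio = 0.5 * y * t_integral(-2.5, y) / t_integral(-1.5, y)
    # columns: integral ratio, Bessel ratio K_{3/2}/K_{1/2}, and 1/y - ratio
    print(y, ratio, kv(1.5, y) / kv(0.5, y), 1.0 / y - ratio)   # last column is -1
\end{verbatim}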
\subsubsection{Modification of the Self-dual Test Function} It remains to estimate the energy in the region $[a\ell, \ell] \times \left\{y < M_\epsilon\right\}$ which has dimensions $\mathcal{O}(1) \times 1/\epsilon$. The question of the finiteness of the energy of the $\theta^\epsilon$ we have been considering in this region is beside the point, since this self-dual solution does not satisfy the boundary condition (\ref{6}) in this region. As stated earlier we are going to modify the self-dual test function in this region so that the boundary condition (\ref{6}) is satisfied. To that end we fix a small value of $\delta$ and let $\mathbf{B}(\delta)$ denote the $\delta$-neighborhood of $[a\ell, \ell]$ in $\mathcal{S}^\epsilon$. We modify $w^\epsilon$ in this neighborhood as follows. Define \begin{eqnarray} \label{finalsoln} \widetilde{w^\epsilon}(x,y) &=& \phi_1(x,y) w^\epsilon(x,y) + \phi_2(x,y) w_2(x,y) \end{eqnarray} where $\{\phi_1,\phi_2\}$ is a partition of unity subordinate to the cover of $\mathcal{S}^\epsilon$ given by \begin{eqnarray*} U_1 &=& \mathcal{S}^\epsilon\backslash \mathbf{B}(\delta/2) \\ U_2 &=& \mathbf{B}(\delta) \end{eqnarray*} and $w_2(x,y) = w^\epsilon(x,0)\cosh\left( y\right)$, where $w^\epsilon(x,0)$ here is defined as in $(\ref{5}^\prime - \ref{6}^\prime)$. One has \begin{eqnarray*} \phi_1 &=& \left\{\begin{array}{cc} 1 & U_1\backslash \mathbf{B}(\delta)\\ 0 & \mathbf{B}(\delta/2) \\ \end{array}\right. \\ \phi_2 &=& \left\{\begin{array}{cc} 1 & \mathbf{B}(\delta/2) \\ 0 & U_1\backslash \mathbf{B}(\delta)\\ \end{array} \right. \end{eqnarray*} and $\phi_1 + \phi_2 \equiv 1$. It is straightforward to check that $\widetilde{w^\epsilon}(x,y)$ satisfies the boundary conditions (\ref{5}) and (\ref{6}): \begin{eqnarray*} \lim_{y\to 0} \widetilde{w^\epsilon}(x,y) &=& \phi_1(x,0) w^\epsilon(x,0) + \phi_2(x,0) w^\epsilon(x,0)\\ &=& \left(\phi_1(x,0)+ \phi_2(x,0)\right) w^\epsilon(x,0)\\ &=& w^\epsilon(x,0)\\ &=& e^{\epsilon x} \end{eqnarray*} for $x\in [0,a\ell]$. For $x \in [a\ell, \ell]$, \begin{eqnarray*} \lim_{y\to 0} \widetilde{w^\epsilon}_y(x,y) &=& \left(\phi_{1y}(x,0)+ \phi_{2y}(x,0)\right) w^\epsilon(x,0) + \phi_2(x,0) w^\epsilon(x,0)\sinh(0) \\ &=& \left(\phi_1 + \phi_2\right)_y (x,0) w^\epsilon(x,0) + 0 \\ &=& 0. \end{eqnarray*} Thus, $\log \widetilde{w^\epsilon}$ is an admissible test function for the regularized Cross-Newell variational problem. We can now estimate the energy of this test function in $\mathbf{B}(\delta)$. The energy density in this region is bounded and therefore the energy in $\mathbf{B}(\delta)$ is finite. \subsubsection{Estimates for the ``outer'' solution in $\left( [a\ell,\ell] \times \left\{y < M_\epsilon\right\}\right)$} It remains to estimate the energy in $\left( [a\ell,\ell] \times \left\{y < M_\epsilon\right\}\right) \backslash \mathbf{B}(\delta) $. To proceed with this we will need a more specific definition of $q_a$ which we now give. Note first that by our assumption that $1 - a^\epsilon = \mathcal{O}(\epsilon)$, the interval $[a\ell,\ell]$ remains of size $\mathcal{O}(1)$ for arbitrarily small values of $\epsilon$. We will now further pin this down by setting $1-a^\epsilon = c \epsilon$ for a value of $c$ that is fixed, independent of $\epsilon$. Consequently, $[a\ell,\ell]$ is now an interval of fixed length $c\pi$ which can therefore also be represented as $[\ell - c\pi,\ell]$.
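\medskip\noindent The smooth cutoffs $\{\phi_1,\phi_2\}$ used here (and the cutoffs $\{\psi_1,\psi_2\}$ used for $q_a$ below) can be realized concretely with standard bump functions. A minimal one-dimensional sketch, written in terms of the distance $d$ to $[a\ell,\ell]$ and an illustrative value of $\delta$:
\begin{verbatim}
import numpy as np

def step(t):
    # smooth step: 0 for t <= 0, 1 for t >= 1
    def rho(s):
        return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    return rho(t) / (rho(t) + rho(1.0 - t))

delta = 0.2
def phi2(d):
    # equals 1 for d <= delta/2, vanishes for d >= delta, smooth in between
    return step((delta - d) / (delta / 2.0))

def phi1(d):
    return 1.0 - phi2(d)               # partition of unity: phi1 + phi2 = 1

d = np.linspace(0.0, 2.0 * delta, 9)
print(np.round(phi1(d), 3))            # 0 near the set, 1 away from it
\end{verbatim}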
Recall that $q_a$ needs to be built so that on this interval it matches, through second order, to $e^{\epsilon x}$ at the left endpoint and similarly to $e^{\epsilon x - \pi}$ at the right endpoint. Toward this end we observe that the required leading order value on the right is $1$, independent of $\epsilon$ while on the left the leading order value limits to the stable value of $e^\pi$ as $\epsilon \to 0$. Choosing a value $\nu > 0$ that is small with respect to $c\pi$, we define a compressed \textsl{tanh}-profile that interpolates between the point $ (x_0, y_0) = (\ell - c\pi + \nu, e^\pi + \gamma)$ and the point $ (x_1, y_1) = (\ell - \nu, 1 - \gamma)$ and where $\gamma > 0$ is another chosen value required to be smaller than $1$. (This last requirement will insure that the positivity claim made at the end of subsection \ref{zipper} holds.) Explicitly this tanh-profile is given by \begin{eqnarray*} T(x) &=& \frac{e^\pi + 1}{2} + \left(\frac{e^\pi - 1}{2} + \gamma\right) \tanh \left( \frac{x - (\ell - c\frac{\pi}{2})}{\left(x-(\ell-\nu)\right)\left(x-(\ell - c\pi + \nu)\right)}\right). \end{eqnarray*} Note that the profile of $T(x)$ is independent of $\epsilon$. The only way in which $T$ depends on $\epsilon$ is that this profile translates uniformly with $\ell$ as $\epsilon$ changes. We will define $q_a(x) = T(x)$ on the subinterval $[x_0, x_1] = [\ell - c\pi + \nu,\ell - \nu]$ of $[\ell - c\pi,\ell]$. Next we will define the piece of $q_a(x)$ on the left that interpolates between the point $(\ell - c\pi, e^{\pi - \epsilon c \pi})$ and the point$(x_0, y_0)$. Choose a value $\sigma > 0$ that is small with respect to $\nu$. Consider the covering of $[\ell - c\pi,\ell - c\pi + \nu]$ by the two sets $V_1 = [\ell - c\pi,x_0 - \sigma)$ and $V_2 = (\ell - c\pi + \sigma, x_0]$ and let $\{\psi_1(x), \psi_2(x)\}$ be a partition of unity subordinate to this cover which means, in particular, that \begin{eqnarray*} \psi_1 &=& \left\{\begin{array}{cc} 1 & [\ell - c\pi, \ell - c\pi + \sigma)\\ 0 & (x_0 - \sigma, x_0] \\ \end{array}\right. \\ \psi_2 &=& \left\{\begin{array}{cc} 1 & (x_0 - \sigma, x_0] \\ 0 & [\ell - c\pi, \ell - c\pi + \sigma). \\ \end{array} \right. \end{eqnarray*} On $[\ell - c\pi, x_0]$ we define \begin{eqnarray}\label{qa} q_a(x) &=& \psi_1(x) e^{\epsilon x} + \psi_2(x) (e^\pi + \gamma). \end{eqnarray} It is straightforward to see that with these choices $q_a(x)$ is smooth throughout $[\ell - c\pi, \ell - \nu)$ and satisfies the smooth matching conditions on the left. Moreover, it is clear from the functions comprising (\ref{qa}) that $q_a$ and its derivatives remain bounded on $[\ell - c\pi, \ell - \nu)$ as $\epsilon \to 0$. A similar construction may be made on the right; i.e., on $(\ell - c\pi + \nu, \ell]$. This completes our description of $q_a(x)$. The study of the convolution integral in (\ref{shiftpersoln2}) in the region where $x \in [a\ell, \ell]$ now proceeds similarly to what was done in (\ref{convoest}) and subsequent formulae. 
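\medskip\noindent Before recording the analogue of (\ref{convoest}), the endpoint matching of the profile $T(x)$ just constructed can be checked numerically. A short sketch with illustrative values of $\epsilon$, $c$, $\nu$, and $\gamma$ (so that $\ell=\pi/\epsilon$):
\begin{verbatim}
import numpy as np

eps, c, nu, gamma = 0.1, 1.0, 0.1, 0.5
ell = np.pi / eps
x0, x1 = ell - c * np.pi + nu, ell - nu      # endpoints of the tanh piece

def T(x):
    arg = (x - (ell - c * np.pi / 2.0)) / ((x - (ell - nu)) * (x - (ell - c * np.pi + nu)))
    return (np.exp(np.pi) + 1.0) / 2.0 + ((np.exp(np.pi) - 1.0) / 2.0 + gamma) * np.tanh(arg)

# T tends to e^pi + gamma at the left endpoint and to 1 - gamma at the right endpoint
print(T(x0 + 1e-6), np.exp(np.pi) + gamma)
print(T(x1 - 1e-6), 1.0 - gamma)
\end{verbatim}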
In particular, the analogous result to (\ref{convoest}) is that for $x \in (a\ell, \ell)$ and for \textit{times} $t$ of order less than $1/\epsilon$: \begin{eqnarray}\label{convoest2} &&\frac{1}{\ell}\int_{0}^{\ell} w^\epsilon(\xi) \,\, \vartheta_3\left(\frac{- (x-\xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right)d\xi \\ \nonumber &=&\frac{1}{\ell}\int_{0}^{\ell} q_a(\xi) \,\, \vartheta_3\left(\frac{- (x-\xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right)d\xi\\ \nonumber &+& \frac{1}{\ell}\int_{a\ell}^{\ell} \left(w^\epsilon(\xi) - q_a(\xi)\right) \,\, \vartheta_3\left(\frac{- (x-\xi) + 2\epsilon t}{\ell}, \frac{-4\pi t}{\ell^2}\right)d\xi \\ \nonumber &=& \frac{1}{\pi}\int_{0}^{\pi} q_a(\frac{z}{\epsilon}) \,\,\vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi}, \frac{-4\epsilon^2 t}{\pi}\right) dz\\ \nonumber &+& \frac{1}{\pi}\int_{0}^{\pi} \left(w^\epsilon(\frac{z}{\epsilon}) - q_a(\frac{z}{\epsilon}) \right) \,\, \vartheta_3\left(\frac{-(h - z) + 2\epsilon^2 t}{\pi},\frac{-4\epsilon^2 t}{\pi}\right) dz \\ \nonumber &=& q_a(x) + o(\epsilon), \end{eqnarray} with $q_a(x)$ here bounded away from zero, independent of $\epsilon$, by our earlier choice of $\gamma$. Hence the denominators in the estimates analogous to (\ref{xlogd}) and (\ref{ylogd}) are under control. In the subsequent formulae the roles of $e^{\epsilon x}$ and $q_a(x)$ are effectively interchanged as above and all proceeds as before. The result is that the energy in $\left( [a\ell,\ell] \times \left\{y < M_\epsilon\right\}\right)\backslash \mathbf{B}(\delta)$ is asymptotically bounded like $\mathcal{O}(\epsilon)$. It thus follows that the total energy of our family of test functions is uniformly bounded in $\epsilon$. \section{Lower bound} \label{sec:l_bound} Following the ideas of Jin and Kohn \cite{JK}, we will prove {\em ansatz-free} lower bounds for the functional $\mathcal{F}^{\epsilon}$ by identifying vector fields $\Sigma(\nabla\theta)$ such that $$ \mathcal{F}^{\epsilon}[\theta;a,\delta] \geq C^{-1} \left|\iint_{\mathcal{S}^\epsilon} \nabla \cdot \Sigma(\nabla \theta) dx dy \right|. $$ This allows us to obtain information about the energy $\mathcal{F}^\epsilon$ purely in terms of the boundary conditions on $\theta$. To avoid the proliferation of symbols, here and henceforth, $C, C', C_1$, {\em etc} denote (finite) constants whose precise value is unimportant, and different occurrences of the same symbol might denote different values of the constants. $e_1,e_2,K,K_1$, {\em etc} denote constants that have the same value in all their occurrences. \begin{defn} A smooth vector function $\Sigma(p,q) = (\Sigma_1, \Sigma_2)$ is {\em subordinate to the energy} if \begin{gather} \left|\frac{\partial\Sigma_1(p,q)}{\partial p}\right| + \left|\frac{\partial\Sigma_1(p,q)}{\partial q} + \frac{\partial\Sigma_2(p,q)}{\partial p}\right| + \left| \frac{\partial\Sigma_2(p,q)}{\partial q}\right| \leq C |1 - p^2 - q^2| \label{eq:subordinate} \end{gather} for some $C < \infty$. 
\end{defn} If $\Sigma$ is subordinate to the energy, it follows that \begin{align*} |\nabla \cdot \Sigma(\nabla \theta)| & \leq \left|\frac{\partial\Sigma_1(p,q)}{\partial p}\right| |\theta_{xx}| + \left|\frac{\partial\Sigma_1(p,q)}{\partial q} + \frac{\partial\Sigma_2(p,q)}{\partial p}\right||\theta_{xy}| + \left| \frac{\partial\Sigma_2(p,q)}{\partial q}\right||\theta_{yy}| \\ & \leq C|1-\theta_x^2 - \theta_y^2| |\nabla \nabla \theta|, \end{align*} where we use the identification $p = \theta_x, q = \theta_y$ and $|\nabla \nabla \theta|^2 = \theta_{xx}^2 + 2 \theta_{xy}^2 + \theta_{yy}^2$. Consequently, \begin{align*} \mathcal{F}^{\epsilon}[\theta;a,\delta] & = \iint_{\mathcal{S}^\epsilon} \left\{[\nabla \nabla \theta]^2 + (1 - |\nabla \theta|^2)^2\right\} dx dy - 2 \int_{\partial \mathcal{S}^\epsilon} \theta_x d \theta_y \\ & \geq 2 \iint_{\mathcal{S}^\epsilon} |\nabla \nabla \theta| |1 - |\nabla \theta|^2| dx dy - 2 \int_{\partial \mathcal{S}^\epsilon} \theta_x d \theta_y \\ & \geq \frac{2}{C} \left|\iint_{\mathcal{S}^\epsilon} \nabla \cdot \Sigma(\nabla \theta) dx dy \right| - 2 \int_{\partial \mathcal{S}^\epsilon} \theta_x d \theta_y \\ & \geq C^{-1} \left|\iint_{\mathcal{S}^\epsilon} \nabla \cdot \Sigma(\nabla \theta) dx dy \right|. \end{align*} In obtaining the last inequality, we use the fact that $$ \int_{\partial \mathcal{S}^\epsilon} \theta_x d \theta_y = 0 $$ for the boundary conditions in (\ref{eq:bc}). \begin{lemma} There are constants $e_1,K_1 > 0$ such that for all $\epsilon \in (0,1]$, $a \in [0,1]$ and $\delta \in \mathbb{R}$, we have $$ \mathcal{F}^\epsilon[\theta;a,\delta] \geq \frac{e_1 \epsilon^2}{(1-a)^2} - K_1\epsilon^2. $$ \label{lem:squeeze} \end{lemma} \begin{proof} Let $\phi \geq 0$ be a smooth, compactly supported function such that \begin{align*} & \phi(0) = 1, \\ & \phi(1) < \phi(0),\\ & f(p) = p \phi(p^2) \text{ has a single maximum at } p=1. \end{align*} An explicit example of a function $\phi$ with these properties is $$ \phi(p) = \begin{cases} \exp\left[\frac{1}{2} - \frac{1}{(2-p)(p+1)} - \frac{p}{4} \right] & p \in (-1,2) \\ 0 & \text{ otherwise } \end{cases} $$ Let $b = (1-a)/\epsilon$. Define the vector field $\Sigma(p,q)$ by \begin{align*} \Sigma_2(p,q) & = p \phi(b^2 p^2) \\ \Sigma_1(p,q) & = - \int_0^q \left[\phi(b^2 (1-\eta^2)) + 2b^2(1-\eta^2)\phi'(b^2(1-\eta^2)) \right] d\eta. \end{align*} Since $\phi$ has compact support, it follows that $\Sigma$ is bounded on $\mathbb{R}^2$. An explicit calculation shows that the quantities $\Sigma_{1,p}$ and $\Sigma_{2,q}$ are zero. Also, \begin{align*} \left|\frac{\partial\Sigma_1(p,q)}{\partial q} + \frac{\partial\Sigma_2(p,q)}{\partial p}\right| & = \left|\phi(b^2 p^2) + 2 b^2 p^2 \phi'(b^2 p^2) - \phi(b^2 (1-q^2)) \right.\\ &\left. - 2b^2(1-q^2)\phi'(b^2(1-q^2))\right| \\ & \leq C b^2 |(1-p^2 - q^2)| \end{align*} where $$ C = \sup_{x,y}\left|\frac{\phi(x) + 2 x \phi'(x) - \phi(y) - 2 y \phi'(y)}{x-y}\right| \leq \sup_z |3 \phi'(z) + 2 z \phi''(z)| $$ is clearly finite since $\phi$ is compactly supported and twice differentiable. This proves that $\Sigma$ is subordinate to the energy.
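\medskip\noindent The properties assumed of $\phi$ are easily confirmed numerically for the explicit example given above; a minimal sketch:
\begin{verbatim}
import numpy as np

def phi(p):
    p = np.atleast_1d(np.asarray(p, dtype=float))
    out = np.zeros_like(p)
    inside = (p > -1.0) & (p < 2.0)
    q = p[inside]
    out[inside] = np.exp(0.5 - 1.0 / ((2.0 - q) * (q + 1.0)) - q / 4.0)
    return out

print(phi(0.0)[0], phi(1.0)[0])            # phi(0) = 1 and phi(1) = e^(-1/4) < 1
p = np.linspace(-1.4, 1.4, 200001)         # p*phi(p^2) is supported in |p| < sqrt(2)
f = p * phi(p**2)
print(p[np.argmax(f)])                     # the single maximum sits at p = 1
\end{verbatim}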
Since $\nabla \cdot \Sigma(\nabla \theta) = (\Sigma_{2,p} + \Sigma_{1,q}) \theta_{xy}$ we obtain \begin{align} \left|\iint \nabla \cdot \Sigma(\nabla \theta)\, dx dy\right| & \leq Cb^2 \iint| (1 - \theta_x^2 - \theta_y^2) \theta_{xy}| \, dx dy \nonumber \\ & \leq \frac{C b^2}{2} \left[ \iint (1 - \theta_x^2 - \theta_y^2)^2\, dx dy + \iint [\nabla \nabla \theta]^2\, dx dy \right] \nonumber \\ & = \frac{Cb^2}{2} \mathcal{F}^\epsilon[\theta;a,\delta] \label{eq:lbnd1} \end{align} Integrating by parts, we have \begin{align*} \iint \nabla \cdot \Sigma(\nabla \theta)\, dx dy = & \Sigma_2(\epsilon,\sqrt{1-\epsilon^2}) \frac{\pi}{\epsilon} - \int_0^{a \pi/\epsilon} \Sigma_2(0,\theta_y(x,0)) dx \\ & - \int_{a \pi/\epsilon}^{\pi/\epsilon} \Sigma_2(\theta_x(x,0),0)dx, \end{align*} where the contributions from the boundaries at $x = 0$ and $x = \pi/\epsilon$ cancel due to the periodicity. By construction, $\Sigma_2(p,0) = 0$ and $\Sigma_2(p,q)$ has a maximum value at $p = 1/b$. Consequently, \begin{align*} \int_0^{a \pi/\epsilon} \Sigma_2(0,\theta_y(x,0)) dx & = 0 \\ \int_{a \pi/\epsilon}^{\pi/\epsilon} \Sigma_2(\theta_x(x,0),0) dx & \leq \frac{(1 - a) \pi}{\epsilon} \frac{\phi(1)}{b} = \pi \phi(1). \end{align*} Also, $\phi(0) = 1$ and $\phi$ is Lipschitz so that $$ \Sigma_2(\epsilon,\sqrt{1-\epsilon^2}) = \epsilon \phi(b^2 \epsilon^2) = \epsilon \phi((1-a)^2) \geq \epsilon(1 - C' (1-a)^2), $$ for some finite $C'$. Combining these estimates with (\ref{eq:lbnd1}), we obtain \begin{equation} \mathcal{F}^\epsilon[\theta;a,\delta] \geq \frac{2 \pi}{C b^2}\left[\phi(0) - \phi(1) - C'(1-a)^2 \right], \end{equation} and rewriting $b$ in terms of $a$ and $\epsilon$ yields the desired conclusion. \end{proof} The above lemma shows that the energy grows without bound as the quantity $(1-a)$ becomes small. However, we do not have {\em a priori} control on the size of $(1-a)$. Consequently, to obtain a lower bound for the energy, we need a complementary estimate which shows that the energy grows as the quantity $(1-a)$ becomes large. To prove this result, we first construct a vector field $\Sigma$ subordinate to the energy functional as follows: Let $\psi \geq 0$ be a smooth, compactly supported function such that \begin{align*} & \psi(0) = 1 \\ & \int_0^{\infty} (1-\xi^2)\psi(\xi^2) d\xi = 0 \end{align*} We can always construct such a function, given $\chi \geq 0$, a compactly supported function with $\chi(0) = 1$. Observe that $$ \int_0^\infty (1 - \xi^2) \chi\left(\frac{\xi^2}{\eta^2}\right) d \xi = \eta(A_0 - \eta^2 A_1), $$ where $A_0,A_1 > 0$. Consequently, by an appropriate choice of $\eta$, we get $\psi(x) = \chi(x/\eta^2)$ with the required properties. We define the functions $\zeta(q^2)$ and $\sigma(q^2)$ by \begin{align} \zeta(q^2) & = \int_0^q (q - \eta) \psi(\eta^2) d \eta \nonumber \\ \sigma(q^2) & = \int_0^q (q - \eta) (1 - \eta^2) \psi(\eta^2) d \eta \label{eq:zeta_sigma} \end{align} Note that the functions $\zeta$ and $\sigma$ are well defined for positive values of their arguments, that is the expressions on the right hand sides of the above equations are even functions of $q$. From these expressions, we have \begin{align*} \frac{\partial^2}{\partial q^2} \zeta(q^2) = \psi(q^2); & \quad \zeta(0) = 0 \\ \frac{\partial^2}{\partial q^2} \sigma(q^2) = (1-q^2)\psi(q^2); & \quad \sigma(0) = 0 \end{align*} We will also use the same letters $\zeta$ and $\sigma$ to denote smooth extensions of the functions defined above to all of $\mathbb{R}$. 
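\medskip\noindent Before specifying these extensions, we note that the rescaling step used to produce $\psi$ from $\chi$ is easy to check numerically; a minimal sketch with an assumed bump $\chi$ (any smooth, compactly supported $\chi\geq 0$ with $\chi(0)=1$ would do, and only nonnegative arguments are needed here):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def chi(x):
    # a bump on [0, 1) with chi(0) = 1
    return np.exp(1.0 - 1.0 / (1.0 - x)) if 0.0 <= x < 1.0 else 0.0

A0, _ = quad(lambda u: chi(u**2), 0.0, 1.0)            # int chi(u^2) du
A1, _ = quad(lambda u: u**2 * chi(u**2), 0.0, 1.0)     # int u^2 chi(u^2) du
eta = np.sqrt(A0 / A1)                                 # zero of eta*(A0 - eta^2*A1)
psi = lambda x: chi(x / eta**2)

val, _ = quad(lambda xi: (1.0 - xi**2) * psi(xi**2), 0.0, eta)
print(psi(0.0), val)                                   # psi(0) = 1 and the integral is ~0
\end{verbatim}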
We will pick extensions such that the supports of $\zeta$ and $\sigma$ are contained in $[-1,\infty)$. Let $\varphi \geq 0$ be a compactly supported function and set \begin{align} V(p,q) = & \, \, \varphi(p^2)\left[\sigma(q^2) - p^2\zeta(q^2)\right] \nonumber \\ & - \int_0^p (p-\xi)\left\{ \sigma(1-\xi^2) \left[ \frac{\partial^2}{\partial \xi^2} \varphi(\xi^2) \right] - \zeta(1-\xi^2) \left[ \frac{\partial^2}{\partial \xi^2} (\xi^2 \varphi(\xi^2)) \right] \right\}\, d\xi. \label{eq:defnv} \end{align} $V$ is now an even function of $p$ and $q$. Define the vector field $\Sigma$ by \begin{equation} \Sigma(p,q) = \left( -\frac{\partial}{\partial p} V,\frac{\partial}{\partial q} V \right) \label{eq:def} \end{equation} From (\ref{eq:def}) it follows that $$ \frac{\partial\Sigma_1(p,q)}{\partial q} + \frac{\partial\Sigma_2(p,q)}{\partial p} = 0. $$ Also, $$ \left|\frac{\partial\Sigma_2(p,q)}{\partial q}\right| = \left|\frac{\partial^2 V(p,q)}{\partial q^2}\right| = |\varphi(p^2)\psi(q^2)||1-p^2-q^2| \leq C_1 |1-p^2-q^2|. $$ With $C_1 = \sup|\varphi(p^2) \psi(q^2)| < \infty$. Finally, an explicit calculation yields \begin{align} \left|\frac{\partial\Sigma_1(p,q)}{\partial p}\right| & = \left|\left[ \frac{\partial^2}{\partial p^2} \varphi(p^2) \right]\left( \sigma(q^2) - \sigma(1-p^2) \right) - \left[ \frac{\partial^2}{\partial p^2} p^2 \varphi(p^2) \right]\left( \zeta(q^2) - \zeta(1-p^2) \right)\right| \nonumber \\ & \leq C_2 |1 - p^2 - q^2| \end{align} where $C_2$ can be bounded in terms of the support of $\varphi$ and the maximum values of $|\varphi|, |\varphi'|, |\varphi''|, |\zeta'|$ and $|\sigma'|$. Clearly $\varphi$ and all it's derivatives are uniformly bounded since it is smooth and compactly supported. From (\ref{eq:zeta_sigma}), we have \begin{align*} \zeta'(q^2) & = \frac{1}{2q}\int_0^q \psi(\eta^2) d \eta \\ \sigma'(q^2) & = \frac{1}{2q}\int_0^q (1 - \eta^2) \psi(\eta^2) d \eta \end{align*} Since $\psi$ is compactly supported, these derivatives vanish as $q^2\rightarrow \infty$, implying that $\zeta'$ and $\sigma'$ are bounded for all positive values of the argument. Since $\sigma$ and $\zeta$ are smooth and are identically zero if their arguments are sufficiently negative, it follows that $C_2 < \infty$. It thus follows that the vector field $\Sigma$ is subordinate to the energy functional. For future use, let us record a few observations that follow directly from the construction: \begin{observation} $\Sigma_2(p,0) = 0$ since $\Sigma_2$ is an odd function of $q$. \end{observation} \begin{observation} $$ \Sigma_{2,q}(0,q) = V_{qq}(0,q) = \psi(q^2)(1-q^2). $$ Consequently, the non-degenerate critical points are at $ q = \pm 1$. Differentiating in $q$, we get $$\Sigma_{2,qq}(0,\pm 1) = \mp 2 \psi(1),$$ so that $\Sigma_2(0,q)$ has a maximum at $q = 1$ and a minimum at $q = -1$. Finally, $$\Sigma_2(0,q) = 2 q \sigma'(q^2) = \int_0^q (1-\xi^2) \psi(\xi^2) d \xi,$$ so that $\Sigma_2(0,1) > 0$ and $\Sigma_2(0,q) \rightarrow 0$ as $q \rightarrow \infty$. \end{observation} $M = \int_0^1 (1-\xi^2) \psi(\xi^2) d \xi$ will denote the maximum value of $\Sigma_2(0,q)$. \begin{observation} \begin{align*} \Sigma_2(\epsilon,\sqrt{1-\epsilon^2}) & = \varphi(\epsilon^2) \int_0^{\sqrt{1-\epsilon^2}} (1- \epsilon^2 -\xi^2) \psi(\xi^2) d \xi \\ & \geq M - K \epsilon^2 \end{align*} for a constant $K < \infty$. 
In obtaining the last line, we use \begin{align*} |1-\varphi(\epsilon^2)| & \leq C_1 \epsilon^2 \\ \left|\int_0^{\sqrt{1-\epsilon^2}} \psi(\xi^2) d \xi \right| & \leq C_2 \\ \left|\int_{\sqrt{1-\epsilon^2}}^1 (1-\xi^2) \psi(\xi^2) d \xi \right| & \leq C_3 \epsilon^2 \end{align*} for some bounded constants $C_1,C_2,C_3$. \label{obs:estimate} \end{observation} \begin{lemma} There are constants $e_2,K_2 > 0$ such that for all $\epsilon \in (0,1]$, $a \in [0,1]$ and $\delta \in \mathbb{R}$, we have $$ \mathcal{F}^\epsilon[\theta;a,\delta] \geq \frac{e_2(1-a)}{\epsilon} - K_2 \epsilon. $$ \label{lem:extend} \end{lemma} \begin{remark} For the case $a = 0$, corresponding to the self-dual minimizers, this estimate captures the right scaling of the minimum energy as $\epsilon \rightarrow 0$. \end{remark} \begin{proof} The proof of the lemma follows from estimating a lower bound for the functional $\mathcal{F}^\epsilon[\theta;a,\delta]$ using the vector field $\Sigma$ that we constructed above. For the vector field $\Sigma$ we have \begin{align*} \iint \nabla \cdot \Sigma(\nabla \theta)\, dx dy = & \Sigma_2(\epsilon,\sqrt{1-\epsilon^2}) \frac{\pi}{\epsilon} - \int_0^{a \pi/\epsilon} \Sigma_2(0,\theta_y(x,0)) dx \\ & - \int_{a \pi/\epsilon}^{\pi/\epsilon} \Sigma_2(\theta_x(x,0),0) dx. \end{align*} As before, the contributions from the boundaries at $x = 0$ and $x = \pi/\epsilon$ cancel due to the periodicity. By construction, $\Sigma_2(p,0) =0$ and $\Sigma_2(0,q)$ has a maximum value $M$ at $q = 1$. Consequently, $$ \int_0^{a \pi/\epsilon} \Sigma_2(0,\theta_y(x,0)) dx \leq \frac{M a \pi}{\epsilon} $$ From observation~\ref{obs:estimate}, we obtain $$ \Sigma_2(\epsilon,\sqrt{1-\epsilon^2}) \frac{\pi}{\epsilon} \geq \frac{M \pi}{\epsilon} - K \pi \epsilon, $$ Since $\Sigma$ is subordinate to the energy, \begin{equation} \mathcal{F}^\epsilon[\theta;a,\delta] \geq \frac{M \pi}{C \epsilon}\left[1 - a - K \epsilon^2 \right], \end{equation} which yields the desired conclusion. \end{proof} We can now prove theorem~\ref{thm:l_bound} using lemma~\ref{lem:squeeze} and lemma~\ref{lem:extend}. \begin{proof} Let $b$ denote the quantity $(1-a)/\epsilon$. From lemma~\ref{lem:squeeze} we get $$ \mathcal{F}^\epsilon[\theta;a,\delta] \geq \frac{e_1}{b^2} - K_1 \epsilon^2 \geq 3 \left(e_1 e_2^2\right)^{1/3} - 2 e_2 b - K_1 \epsilon^2 $$ where the last inequality comes from linearizing the convex function $e_1 b^{-2}$ at $b = (e_1/e_2)^{1/3}$. Combining this estimate with the conclusion of lemma~\ref{lem:extend}, we get $$ \mathcal{F}^\epsilon[\theta;a,\delta] \geq \max\left(3 \left(e_1 e_2^2\right)^{1/3} - 2 e_2 b - K_1 \epsilon^2, e_2 b - K_2 \epsilon \right) \geq (e_1 e_2^2)^{1/3} - \frac{2 K_2 \epsilon +K_1 \epsilon^2 }{3}. $$ If we set $$ \epsilon_* = \min\left( \frac{(e_1 e_2^2)^{1/6}}{\sqrt{K_1}}, \frac{(e_1 e_2^2)^{1/3}}{2 K_2}\right), $$ for all $\epsilon < \epsilon_*$, all $a \in [0,1]$ and all $\theta$ satisfying the boundary conditions in (\ref{eq:bc}), we have $$ \mathcal{F}^{\epsilon}[\theta;a,\delta] \geq \frac{(e_1 e_2^2)^{1/3}}{3} \equiv E_1. $$ Combining the upper bound $\mathcal{F}^{\epsilon}[\theta^\epsilon;a^\epsilon,\delta^\epsilon] \leq E_0$ in theorem~\ref{thm:u_bound} with the lower bounds for $\mathcal{F}^\epsilon$ in lemma~\ref{lem:squeeze} and lemma~\ref{lem:extend}, it follows that for $$\epsilon < \min\left(\sqrt{\frac{E_0}{K_1}},\frac{E_0}{K_2}\right),$$ we have $$ \sqrt{\frac{e_1}{2E_0}} < \frac{1-a^\epsilon}{\epsilon} < \frac{2 E_0}{e_2}. 
$$ Consequently, $$ 1 - \alpha_2 \epsilon < a^\epsilon < 1 - \alpha_1 \epsilon, $$ for sufficiently small $\epsilon$ with $\alpha_1 = \sqrt{e_1/(2E_0)}$ and $\alpha_2 = 2 E_0/e_2$. \end{proof} \section{Numerical Results} \label{sec:results} In this section, we will present the results of numerical simulations that illustrate and clarify our analysis of the energy and also the structure of the minimizers for the regularized Cross-Newell energy $\mathcal{F}^\epsilon$ within the class of functions given by (\ref{eq:bc}). For our numerical simulations, we restrict ourself to the finite domain, $\mathcal{R}^{\epsilon} = \{(x,y) \, |\, 0 \leq x \leq l = \pi/\epsilon, 0 \leq y \leq L\}$, where $L \gg 1$ is a length scale much larger than the typical wavelength of the pattern. The boundary conditions in (\ref{eq:bc}) which are appropriate for the semi-infinite strip $\mathcal{S}^\epsilon$ are modified for the finite domain as follows -- \begin{gather} \theta(x,0) = 0 \quad \text{ for } 0 \leq x < a l; \nonumber \\ \theta_y(x,0) = 0 \quad \text{ for } a l \leq x < l; \nonumber \\ \theta(x,y) - \epsilon x \text{ is periodic in $x$ with period $l$ for each } y \in [0,L]; \nonumber \\ \theta(x,L) = \left[\epsilon x + \sqrt{1 - \epsilon^2} L + \delta\right] \label{eq:num_bc} \end{gather} It is rather straightforward to show that there exist $\theta^\epsilon \in H^2(\mathcal{R}^\epsilon)$ satisfying (\ref{eq:num_bc}) for an $a^\epsilon \in [0,1)$ and $\delta^\epsilon \in \mathbb{R}$ minimizing the functional $$ \mathcal{F}^\epsilon[\theta;a,\delta] = \iint_{\mathcal{R}^\epsilon} \left\{[ \Delta \theta]^2 + (1 - |\nabla \theta|^2)^2 \right\} dx dy. $$ The existence of a minimizer is immediate from the following lemma: \begin{lemma} Let $0 \leq a < 1$, and $\delta \in \mathbb{R}$ be given. $\rho_j \in L^2(\mathcal{R}^\epsilon)$ is a sequence of functions that converges weakly to zero. $H^2_{per}$ denotes the completion of periodic (in $x$) functions on $\mathcal{R}^\epsilon$ with respect to the $H^2$ norm. If $\theta_j \in H^2_{per}(\mathcal{R}^\epsilon)$ is a sequence satisfying (in the sense of trace) \begin{gather} \Delta \theta_j = \rho_j \nonumber \\ \theta_j(x,0) = 0 \quad \text{ for } 0 \leq x < a l; \nonumber \\ \partial_y \theta_j(x,0) = 0 \quad \text{ for } a l \leq x < l; \nonumber \\ \theta(x,L) = 0 \end{gather} it follows that, up to extraction of a subsequence and relabelling, we have $\nabla \theta_j \rightarrow 0$ in $L^4(\mathcal{R}^\epsilon,\mathbb{R}^2)$. \end{lemma} \begin{proof} Elliptic regularity along with the given boundary conditions implies that the sequence $\theta_j$ is bounded in $H^2(\mathcal{R}^\epsilon)$. The compactness of the embedding $H^2(\mathcal{R}^\epsilon) \hookrightarrow W^{1,4}(\mathcal{R}^\epsilon)$ \cite{Evans} proves the lemma. \end{proof} If $\tilde{\theta}_j$ is an infimizing sequence for $\mathcal{F}^\epsilon[\theta;a,\delta]$ subject to the boundary conditions in (\ref{eq:num_bc}), then let $\theta_j = \tilde{\theta}_j -\varphi$, where $\varphi$ is a smooth function on $\mathcal{R}^\epsilon$ satisfying the boundary conditions in (\ref{eq:num_bc}). It then follows from the form of $\mathcal{F}^\epsilon$ and the fact that $\tilde{\theta}_j$ is infimizing that $\Delta \theta_j$ is a bounded sequence in $L^2$, and so converges weakly to a limit $\rho^*$. 
Applying the compactness result of the preceding lemma with reference to the sequence $\rho_j = \Delta \theta_j - \rho^*$, we obtain the existence of a minimizer for the functional $\mathcal{F}^\epsilon[\theta;a,\delta]$ for a fixed $a$ and $\delta$. Note that, for a given $a$, it is easy to construct smooth transformations $\psi_t : \mathcal{R}^\epsilon \rightarrow \mathcal{R}^\epsilon$ such that $\psi_0$ is the identity and such that, if $\theta$ satisfies the boundary conditions in (\ref{eq:num_bc}), then $\theta \circ \psi_t$ satisfies the same boundary conditions with the fraction of the boundary with a Dirichlet boundary condition equaling $a(1+t)$. Further, the energy $\mathcal{F}^\epsilon[\theta \circ \psi_t;a(1+t),\delta]$ is a smooth function of $t$ for sufficiently small $t$. A standard argument now implies that, for a given $\delta$, the map $$ a \mapsto \inf_{\theta} \mathcal{F}^\epsilon[\theta;a,\delta] $$ is continuous for $a \in (0,1)$. A similar argument shows that the map is also continuous at $a = 0$. In Lemma~\ref{lem:squeeze} we showed that $\liminf_{a \rightarrow 1} \mathcal{F}^\epsilon[\theta;a,\delta] = \infty$. Combining these results, we see that $$ \inf_{a,\theta} \mathcal{F}^\epsilon[\theta;a,\delta] = \inf_a \left[\inf_{\theta} \mathcal{F}^\epsilon[\theta;a,\delta]\right]. $$ We now consider variations $\theta \rightarrow \theta_t = \theta + t \chi(y/L)$, where $\chi$ is a smooth function vanishing identically on $[0,1/3]$ and equal to 1 on $[2/3,1]$. The functions $\theta_t$ satisfy the boundary conditions in (\ref{eq:num_bc}), except that the asymptotic phase shift is given by $\delta + t$. An argument similar to the one above shows that the map $$ \delta \mapsto \inf_{\theta} \mathcal{F}^\epsilon[\theta;a,\delta] $$ is continuous, and it is easy to see that $ \mathcal{F}^\epsilon[\theta;a,\delta] \rightarrow \infty$ as $\delta \rightarrow \pm \infty$. In particular, this proves the existence of an optimal $\delta$, and combining with the results from above, we see that the minimizer $(\theta^{\epsilon}, a^\epsilon, \delta^{\epsilon})$ can be obtained by successive minimization in each of the factors. This suggests the following discretization for the functional $\mathcal{F}^\epsilon$, which should converge as the grid spacings $\eta,\zeta \rightarrow 0$. We define a grid by $x_i = i \eta, i = 0,1,2,\ldots,m-1, y_j = j \zeta, j = 0,1,2,\ldots,n$, where $\eta = l/m, \zeta = L/n$. The discretization of the test function $\theta(x,y)$ is $$ \theta_{i,j} = \theta(i \eta,j \zeta). $$ We define the difference operator $\delta^{\pm}_x$ by $$ (\delta^{\pm}_x \theta)_{i,j} = \pm\frac{\theta_{i \pm 1,j} - \theta_{i,j}}{\eta} $$ with similar definitions for $\delta^{\pm}_y$.
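\medskip\noindent These operators have a direct array realization; a minimal numpy sketch (grid sizes are illustrative), in which the wrapped $x$-neighbours are supplied by the shift-periodicity $\theta(x+l,y)=\theta(x,y)+\pi$ implied by (\ref{eq:num_bc}):
\begin{verbatim}
import numpy as np

def dx_plus(th, eta):
    xp = np.roll(th, -1, axis=0)
    xp[-1, :] += np.pi                      # theta_{m,j} = theta_{0,j} + pi
    return (xp - th) / eta

def dx_minus(th, eta):
    xm = np.roll(th, 1, axis=0)
    xm[0, :] -= np.pi                       # theta_{-1,j} = theta_{m-1,j} - pi
    return (th - xm) / eta

def dy_plus(th, zeta):
    return (th[:, 1:] - th[:, :-1]) / zeta  # defined for j = 0, ..., n-1

# consistency check on the planar phase theta = eps*x + sqrt(1-eps^2)*y (|grad theta| = 1)
eps, L, m, n = 0.5, 4.0, 16, 32
l = np.pi / eps
eta, zeta = l / m, L / n
x = eta * np.arange(m)[:, None]
y = zeta * np.arange(n + 1)[None, :]
th = eps * x + np.sqrt(1.0 - eps**2) * y
print(np.allclose(dx_plus(th, eta), eps), np.allclose(dy_plus(th, zeta), np.sqrt(1.0 - eps**2)))
\end{verbatim}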
In terms of the discretization, the boundary conditions are \begin{gather} \theta_{i,0} = 0 \quad \text{ for } 0 \leq i < k; \nonumber \\ \delta_y^+\theta_{i,0} = 0 \quad \text{ for } k \leq i < m; \nonumber \\ \theta_{m,j} = \theta_{0,j} + \pi \quad j = 0,1,2,\ldots,n \\ \theta_{i,n} = \frac{\pi i}{m} + \sqrt{1-\epsilon^2} L + \delta \quad i = 0,1,2,\ldots m-1 \label{eq:discr_bc} \end{gather} and the energy functional is discretized as $$ \mathcal{F}^\epsilon \approx \eta \zeta \sum_{i = 0}^{m-1} \sum_{j = 0}^{n} \left[ (\delta_x^+ \delta_x^- + \delta_y^+ \delta_y^-) \theta_{i,j}\right]^2 + \left[ \frac{(\delta_x^+ \theta)_{i,j}^2 +(\delta_x^- \theta)_{i,j}^2 +(\delta_y^+ \theta)_{i,j}^2 +(\delta_y^- \theta)_{i,j}^2 }{2} - 1 \right]^2 $$ Computing this functional requires assigning values for $\theta_{i,j}$ with $i = -1, j = -1$ and $j = n+1$. The values for $i = -1$ are obtained from the shift-periodicity of $\theta$ by $\theta_{-1,j} = \theta_{m-1,j} - \pi$. The values for $j = n+1$ are assigned using the Dirichlet boundary condition $\theta_{i,n+1} = \frac{\pi i}{m} + \sqrt{1-\epsilon^2} (L+\zeta) + \delta$. This functional is minimized using MATLAB's conjugate-gradient minimization. Fig.~\ref{fig:hyster} shows the results from minimizing the RCN energy over the pattern $\theta$ and also the phase shift $\delta$, for different values of $\epsilon$, and for a range of values of $a \approx k/m$. The results do indeed suggest that the (partial) minimization with respect to the pattern and the asymptotic phase yields a functional that {\em depends continuously} on $a$. Further, this functional has a first-order (discontinuous) phase transition at a bifurcation value $\epsilon^*$, below which the global minimizer has $a \neq 0$. \begin{figure}[htbp] \centerline{\includegraphics[width = 0.7\hsize,viewport = 0 0 675 478,clip]{figs/hysterisis.eps}} \caption{Minimum energy as a function of the separation between the convex and the concave disclinations.} \label{fig:hyster} \end{figure} Fig.~\ref{fig:energy} shows the energy of the minimizer (minimizing over the pattern, asymptotic phase and the parameter $a$) as a function of $\epsilon$. Note that the minimum energy is a non-differentiable function of $\epsilon$, as one would expect for a first-order phase transition. \begin{figure}[htbp] \centerline{\includegraphics[width = 0.7\hsize,viewport = 0 0 675 478,clip]{figs/energy.eps}} \caption{Global minimum of the regularized Cross-Newell energy.} \label{fig:energy} \end{figure} Figure~\ref{fig:patterns} shows the numerically obtained minimizing patterns at various values of $\epsilon$. Note that, for sufficiently large $\epsilon$, the minimizers are the knee-solutions (\ref{eq:chevrons}) with $a = 0$, whereas for sufficiently small $\epsilon$, the minimizers have convex-concave disclination pairs, and have $a \neq 0$. \begin{figure}[htbp] \includegraphics[angle=90]{figs/cnzippers.eps} \caption{ The Cross-Newell zippers. These are numerically obtained minimizing patterns for various choices of the asymptotic angle $\alpha$. Note that $\epsilon = \cos \alpha$. The bifurcation from the knee solution to solutions with disclinations occurs between $\alpha = 0.35 \pi$ and $\alpha = 0.37 \pi$. } \label{fig:patterns} \end{figure} \bigskip \thanks{\noindent \textbf{Acknowledgements:} N. M. Ercolani was supported in part by NSF grant DMS-0073087; S.C. Venkataramani was supported in part by an NSF CAREER Award DMS--0135078.} \bibliographystyle{amsplain}
\section{Introduction} \subsection{Background and motivation} Inspiralling and coalescing binary neutron stars are key sources for ground-based gravitational wave (GW) detectors \cite{CutlerThorne}. An important science goal in the detection of such sources is to obtain robust information on the highly uncertain equation of state (EoS) of neutron star matter \cite{EOS}. The effects of the EoS on the GW signal are largest during the late inspiral and merger stages of binary evolution, at GW frequencies $\gtrsim 500$ Hz, and the strong gravity and complex hydrodynamics involved in these regimes require the use of fully relativistic numerical simulations for their study (see e.g.~Ref.~\cite{Duez} and references therein). A small but clean EoS signature will also be present in the early inspiral waveform, at frequencies $\lesssim500$ Hz within LIGO's most sensitive band, arising from the effects of tidal coupling \cite{FH}. The relative weakness of orbital gravity in this regime makes it possible to construct good approximate waveforms using post-Newtonian-based analytic models \cite{BlanchetLRR}. For point-particle models of binary inspiral, analytic gravitational waveforms have been computed to 3PN accuracy \cite{3p5}, and spin effects have been computed to 2PN accuracy \cite{2PNspin}.\footnote{ The shorthand $n$PN, for post-$n$-Newtonian, is used to describe corrections of order $c^{-2n}$ relative to Newtonian gravity, where $c$ is the speed of light.} More recent efforts to improve the analytic description of neutron star binary GW signals by including tidal effects began with Refs.~\cite{FH,H}, which used a leading-order model of the tidal coupling and GW emission to demonstrate the potential feasibility of measuring EoS effects in inspiralling neutron stars in the low frequency ($\lesssim 400$ Hz) regime with Advanced LIGO. The tidal contribution to the GW signal computed in Ref.~\cite{FH} depends on a single tidal deformability parameter $\lambda$, which characterizes the star's deformation response to a static (or adiabatically changing) tidal field and which is sensitive to the star's EoS. The quadrupolar tidal deformability $\lambda$ was defined in a fully relativistic context and calculated for a variety of EoS models in Refs.~\cite{FH,H,Hetal,DN1,BP}, and Refs.~\cite{DN1,BP} extended the analysis to include higher-multipolar tidal responses of both electric- and magnetic-type. It was found in Ref.~\cite{Hetal} that Advanced LIGO should be able to constrain the neutron stars' tidal deformability to $\lambda\lesssim (1.2\times 10^{37}\,\rm{g}\,\rm{cm}^2\,\rm{s}^2)(D/100\,\rm{Mpc})$ with 95\% confidence, for a binary of two 1.4 $M_\odot$ neutron stars at a distance $D$ from the detector, using only the portion of the signal with GW frequencies less than 400 Hz. The calculations of $\lambda$ for a 1.4 $M_\odot$ neutron star in Refs.~\cite{H,Hetal,DN1,BP}, using several different equations of state, give values in the range 0.03--1.0$\times 10^{37}\,\rm{g}\,\rm{cm}^2\,\rm{s}^2$, so nearby events may allow Advanced LIGO to place useful constraints on candidate equations of state. To detect or constrain the tidal deformability $\lambda$ will require models for the tidal contribution to the GW signal that are accurate to $\lesssim$10\%, much less than the current uncertainty in $\lambda$. 
References \cite{FH,Hetal} estimate the fractional corrections to the tidal signal at GW frequencies below 400 Hz due to several effects neglected by the model of the GW phasing used in Ref.~\cite{FH}, namely, non-adiabaticity ($\lesssim$1\%), higher-multipolar tidal coupling ($\lesssim$0.7\%), nonlinear hydrodynamic effects ($\lesssim$0.1\%), spin effects ($\lesssim$0.3\%), nonlinear response to the tidal field ($\lesssim$3\%), viscous dissipation (negligible), and post-Newtonian effects ($\lesssim$10\%). The largest expected corrections, from post-Newtonian effects in the orbital dynamics and GW emission, are thus essential for an accurate analysis of the tidal signal. These corrections will depend on the neutron star physics only through the same tidal deformability parameter $\lambda$ used in the Newtonian treatment and thus can be easily incorporated into the same data anaysis methods used in the Newtonian (tidal) case. The extension of the tidal signal calculation to 1PN order was recently discussed in Ref.~\cite{DN2} by Damour and Nagar (DN). Working within the framework of the effective-one-body (EOB) formalism, DN gave a complete description of the 1PN conservative dynamics of tidally interacting binaries in circular orbits, parametrized the forms of further 1PN corrections to the GW emission, and made comparisons with numerical simulations (see also Ref.~\cite{Baiotti}). The 1PN conservative dynamics has also been recently studied in Ref.~\cite{VF} by Vines and Flanagan (VF). Working from the formalism for 1PN celestial mechanics developed in Refs.~\cite{DSX1,DSX2} and extended by Ref.~\cite{RF}, VF found the explicit equations of motion and action principle for generic orbits and generic evolution of the bodies' quadrupoles. Specializing to adiabatically induced quadrupoles and circular orbits, the results of VF agree with those of DN for the 1PN conservative dynamics. The construction of the 1PN metric given by VF also allows for explicit computation of the binary system's 1PN-accurate mass multipole moments. In the present paper, we use the results of VF \cite{VF} to derive the 1PN-accurate GW signal from an inspiralling binary with quadrupolar tidal interactions. Working to linear order in the stars' quadrupole moments, and using adiabatically induced quadrupoles and circular orbits, we compute the binary's binding energy and GW energy flux and use them to determine the phase evolution of the emitted GW signal in the stationary phase approximation. The results presented here can be used to extend the validity of analytic GW signals to higher frequencies, and to provide useful information for hybrid schemes that attempt to bridge the gap in frequencies between analytic inspiral models and the start of numerical simulations, such as the EOB formalism of Ref.~\cite{DN2}. Our expressions for the orbital equations of motion and binding energy may also be useful for the construction of quasi-equilibrium initial data for numerical simulations \cite{initialdata}. We note that the 1PN corrections calculated here slightly improve the prospects for detection of tidal effects in binary GW signals, as they increase the tidal signal by $\sim20\%$ at GW frequencies of 400 Hz. \subsection{Organization} The organization of this paper is as follows. In Sec.~\ref{sec:cons}, we briefly state the key results of Ref.~\cite{VF} for the 1PN conservative dynamics of a binary in which one member has a mass quadrupole moment. 
We specialize to the adiabatic limit and circular orbits and compute the gauge-invariant binding energy as a function of orbital frequency. In Sec.~\ref{sec:fluxes}, we consider the gravitational radiation and obtain the 1PN tidal corrections to the radiated energy flux. We then compute the resulting 1PN tidal corrections to the phase of the Fourier transform of the waveform in the stationary phase approximation and conclude in Sec.~\ref{disc} with a short discussion of the results. \subsection{Notation and conventions} We use units where Newton's constant is $G=1$, but retain factors of the speed of light $c$, with $1/c^2$ serving as the formal expansion parameter for the post-Newtonian expansion. We use lowercase latin letters $a,b,i,j,\ldots$ for indices of spatial tensors. Spatial indices are contracted with the Euclidean metric, $v^iw^i=\delta_{ij}v^iw^j$, with up or down placement of the indices having no meaning. We use angular brackets to denote the symmetric, trace-free projection of tensors, for example $T^{<ab>}=T^{(ab)}-\frac{1}{3}\delta^{ab}T^{cc}$. \section{Conservative dynamics in the adiabatic limit}\label{sec:cons} In this section we briefly review the key results of VF \cite{VF} concerning the 1PN conservative dynamics of a binary system with quadrupolar tidal coupling. For simplicity, we consider a binary composed of one point-mass (body 1) and one deformable star (body 2). Since we consistently work to linear order in the quadrupole, our results can be easily generalized to the case of two deformable bodies by interchanging body labels. The binary's orbital dynamics can be formulated in terms of the separation (three-)vector $z^i=z^i_2-z^i_1$ between the bodies, the bodies' masses $M_1$ and $M_2$, and the quadrupole moment $Q_2^{ij}$ of body 2. The 1PN-accurate worldlines $x^i=z_1^i(t)$ and $x^i=z_2^i(t)$ of the bodies' centers of mass-energy and their separation $z^i(t)=z_2^i(t)-z_1^i(t)$ are defined in a `global' 1PN coordinate system $(t,x^i)$. The global coordinates are conformally Cartesian and harmonic, and they tend to inertial coordinates in Minkowski spacetime as $|\bm x|\to\infty$. Also, the binary system's center of mass-energy is taken to be at rest at the origin $x^i=0$ (the system's 1PN-accurate mass dipole moment is set to zero), so that the $(t,x^i)$ coordinates correspond to the center-of-mass-energy frame of the system. We use the following notation for the relative position, velocity, and acceleration: \begin{eqnarray} &z^i=z_2^i-z_1^i,\quad r=|\bm{z}|=\sqrt{\delta_{ij}z^iz^j},\quad n^i=z^i/r,& \nonumber\\\nonumber &v^i=\dot z^i,\quad \dot r=v^in^i,\quad a^i=\ddot z^i,& \end{eqnarray} with dots denoting derivatives with respect $t$. We take $M_1$ and $M_2$ to be the bodies' conserved rest masses,\footnote{Note that the mass $M_2$ used here is not the 1PN-accurate Blanchet-Damour \cite{BD} mass monopole moment (which was called $M_2$ in VF \cite{VF}); rather, the $M_2$ used here is the conserved part of the BD mass monopole (called $\,^{\scriptscriptstyle \text{n}}\!\;\! M_2$ in VF \cite{VF}). The full 1PN-accurate monopole also receives contributions from the body's internal elastic energy (and from the tidal gravitational potential energy), which for a deformable body, will vary as tidal forces do work on the body. 
The effects of these time-dependent contributions to the monopole have been separately accounted for in the Lagrangian (\ref{Lad}), and the mass $M_2$ appearing there is constant.} and we define the total mass $M$, mass fractions $\chi_1,\chi_2$, reduced mass $\mu$, and symmetric mass ratio $\eta$ by \begin{equation} M=M_1+M_2,\quad \chi_1=M_1/M,\quad \chi_2=M_2/M,\quad\mu=\eta M=\chi_1\chi_2 M. \end{equation} Note that there are only two independent parameters among these quantities; we will tend to express our results in terms of the total mass $M$ and the mass fraction $\chi_2$ of the deformable body, unless factorizations make it more convenient to use $\chi_1=1-\chi_2$ or $\eta=\chi_1\chi_2$. The tidal deformation of body 2 is described by its 1PN-accurate Blanchet-Damour \cite{BD} mass quadrupole moment $Q_2^{ij}(t)$. We will work in the limit where the quadrupole is adiabatically induced by the tidal field; i.e.~we assume that the quadrupole responds to the instantaneous tidal field according to \begin{subequations}\label{pnQG} \begin{equation}\label{pnQ} Q_2^{ij}(t)=\lambda G_2^{ij}(t). \end{equation} Here, the constant $\lambda$ is the tidal deformability,\footnote{The tidal deformability is related to the Love number $k_2$ \cite{Love} and the star's areal radius $R$ by $\lambda=2k_2R^5/3$.} and $G_2^{ij}(t)$ is the quadrupolar gravito-electric DSX \cite{DSX2} tidal moment of body 2 which encodes the leading order ($l=2$) tidal field felt by body 2. For the binary system under consideration, the tidal moment is given by \begin{eqnarray}\label{pnG} G_2^{ij}&=&\frac{3\chi_1 M}{r^3}n^{<ij>} +\frac{1}{c^2}\frac{3\chi_1 M}{r^3} \bigg[\left(2v^2-\frac{5\chi_2^2}{2}\dot{r}^2-\frac{6-\chi_2}{2}\frac{M}{r}\right)n^{<ij>} +v^{<ij>}-(3-\chi_2^2)\dot{r}n^{<i}v^{j>}\bigg] \nonumber\\ &&+O(c^{-4})+O(\lambda). \end{eqnarray} \end{subequations} With the quadrupole given by Eqs.~(\ref{pnQG}) in the adiabatic limit, the only independent degree of freedom is the binary's relative position $z^i(t)$. 
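\medskip\noindent As a consistency check, the leading (Newtonian) term of Eq.~(\ref{pnG}), $3\chi_1 M n^{<ij>}/r^3 = 3M_1 n^{<ij>}/r^3$, coincides with the familiar Newtonian tidal tensor $\partial_i\partial_j(M_1/r)$ of the companion's potential evaluated at the separation $z^i$; a short symbolic sketch:
\begin{verbatim}
import sympy as sp

x, y, z, M1 = sp.symbols('x y z M_1', positive=True)
X = (x, y, z)
r = sp.sqrt(x**2 + y**2 + z**2)
n = [xi / r for xi in X]

for i in range(3):
    for j in range(3):
        tidal = sp.diff(M1 / r, X[i], X[j])                      # d_i d_j (M_1 / r)
        delta = 1 if i == j else 0
        claimed = 3 * M1 / r**3 * (n[i] * n[j] - sp.Rational(1, 3) * delta)
        assert sp.simplify(tidal - claimed) == 0
print("Newtonian tidal tensor matches 3 M_1 n^<ij> / r^3")
\end{verbatim}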
It was shown by VF \cite{VF} that the evolution of $z^i(t)$ is governed by the Lagrangian \begin{eqnarray} \mathcal L[z^i]&=&\frac{\mu v^2}{2}+\frac{\mu M}{r}\left(1+\frac{\Lambda}{r^5}\right) +\frac{\mu}{c^2}\left\{\theta_0v^4 +\frac{M}{r}\left[v^2\left(\theta_1+\xi_1\frac{\Lambda}{r^5}\right) +\dot{r}^2\left(\theta_2+\xi_2\frac{\Lambda}{r^5}\right) +\frac{M}{r}\left(\theta_3+\xi_3\frac{\Lambda}{r^5}\right)\right]\right\} \nonumber\\ &&+O(c^{-4})+O(\lambda^2),\label{Lad} \end{eqnarray} with $\Lambda=(3\chi_1/2\chi_2)\lambda$, and with the dimensionless coefficients \begin{eqnarray}\label{Ladcs} &\theta_0=(1-3\eta)/8, \quad \theta_1=(3+\eta)/2, \quad \theta_2=\eta/2, \quad \theta_3=-1/2,& \nonumber\\ &\xi_1=(\chi_1/2)(5+\chi_2), \quad \xi_2=-3(1-6\chi_2+\chi_2^2), \quad \xi_3=-7+5\chi_2.& \end{eqnarray} The orbital equation of motion resulting from this Lagrangian, via $(d/dt)(\partial\mathcal L/\partial v^i)=\partial\mathcal L/\partial z^i$, is given by \begin{eqnarray}\label{ada} a^i&=&-\frac{Mn^i}{r}\left(1+\frac{6\Lambda}{r^5}\right) +\frac{M}{c^2r^2}\left[ v^2n^i\left(\phi_1+\zeta_1\frac{\Lambda}{r^5}\right) +\dot{r}^2n^i\left(\phi_2+\zeta_2\frac{\Lambda}{r^5}\right) +\frac{M}{r}n^i\left(\phi_3+\zeta_3\frac{\Lambda}{r^5}\right) +\dot{r}v^i\left(\phi_4+\zeta_4\frac{\Lambda}{r^5}\right)\right] \nonumber\\ &&+O(c^{-4})+O(\lambda^2), \end{eqnarray} with coefficients \begin{eqnarray} &\phi_1=-1-3\eta, \quad \phi_2=3\eta/2, \quad \phi_3=2(2+\eta), \quad \phi_4=2(2-\eta),& \nonumber\\ &\zeta_1=-3(2-\chi_2)(1+6\chi_2), \quad \zeta_2=24(1-6\chi_2+\chi_2^2), \quad \zeta_3=66+9\chi_2-19\chi_2^2, \quad \zeta_4=6(2-\chi_2)(3-2\chi_2).& \end{eqnarray} The conserved energy constructed from the Lagrangian (\ref{Lad}) is \begin{eqnarray}\label{Ead} E&=&v^i\partial\mathcal L/\partial v^i-\mathcal L \nonumber\\ &=&\frac{\mu v^2}{2}-\frac{\mu M}{r}\left(1+\frac{\Lambda}{r^5}\right) +\frac{\mu}{c^2}\left\{3\theta_0v^4 +\frac{M}{r}\left[v^2\left(\theta_1+\xi_1\frac{\Lambda}{r^5}\right) +\dot{r}^2\left(\theta_2+\xi_2\frac{\Lambda}{r^5}\right) -\frac{M}{r}\left(\theta_3+\xi_3\frac{\Lambda}{r^5}\right)\right]\right\} \nonumber\\ && +O(c^{-4})+O(\lambda^2), \end{eqnarray} which is a constant of motion of the equation of motion (\ref{ada}). The orbital equation of motion (\ref{ada}) admits solutions of the form \begin{subequations}\label{circ} \begin{equation} z^i(t)=rn^i(t)=r(\cos(\omega t),\sin(\omega t),0), \end{equation} with $ \dot r=0$, $v^2=r^2\omega^2$ and $a^i=-r\omega^2 n^i$, corresponding to circular orbits in the $x$-$y$ plane with frequency $\omega$. For later convenience, we introduce the unit vector $\phi^i$ in the direction of the velocity $v^i$, which satisfies \begin{equation} \dot z^i=v^i=r\omega\phi^i,\qquad \dot n^i=\omega \phi^i,\qquad \dot\phi^i=-\omega n^i,\qquad n^i\phi^i=0, \end{equation} \end{subequations} for circular orbits. Working to linear order both in the post-Newtonian parameter $c^{-2}$ and in the tidal deformability parameter $\lambda$, Eqs.~(\ref{ada}) and (\ref{circ}) yield the radius-frequency relationship \begin{equation}\label{rofomega} r(\omega)=\frac{M^{1/3}}{\omega^{2/3}}\left[1+\frac{3\chi_1}{\chi_2}\hat{\lambda} +\frac{\eta-3}{3}x +\frac{\chi_1}{2\chi_2}\left(-6+26\chi_2-\chi_2^2\right)x\hat{\lambda} \right]+O(c^{-4})+O(\lambda^2). 
\end{equation} Here, we have introduced the $\omega$-dependent dimensionless quantities \begin{equation}\label{lhx} \hat{\lambda}\equiv\frac{\lambda\omega^{10/3}}{M^{5/3}},\qquad\qquad x\equiv\frac{(M\omega)^{2/3}}{c^2}, \end{equation} which characterize the fractional corrections due to tidal effects and to post-Newtonian effects. Using Eqs.~(\ref{Ead}), (\ref{circ}) and (\ref{rofomega}), we can also find the gauge-invariant energy-frequency relationship for circular orbits: \begin{eqnarray}\label{eofomega} E(\omega)&=&\mu(M\omega)^{2/3}\bigg[-\frac{1}{2} +\frac{9\chi_1}{2\chi_2}\hat{\lambda} +\frac{9+\eta}{24}x +\frac{11\chi_1}{4\chi_2}(3+2\chi_2+3\chi_2^2)x\hat{\lambda}\bigg]+O(c^{-4})+O(\lambda^2).\phantom{yoy} \end{eqnarray} This expression for the binding energy can be directly compared with Eqs.~(37,38,50-57) of DN \cite{DN2}, and indicates that their parameter $\bar\alpha_1'$ giving the 1PN tidal contribution to the binding energy should have the value $\bar\alpha_1'=(11/18)(3+2\chi_2+3\chi_2^2)$ instead of $55\chi_2/18$ (note that the quantity denoted here by $\chi_2$ is denoted by $X_A$ in DN \cite{DN2}). For the case of equal masses ($\chi_1=\chi_2=1/2$, $\eta=1/4$), the binding energy (\ref{eofomega}) simplifies to \begin{equation} E_{M_1=M_2}(\omega)=-\frac{M^{5/3}\omega^{2/3}}{8}\left[1-\frac{37}{48}x -18\hat{\lambda}\left(1+\frac{209}{72}x\right)\right]+O(c^{-4})+O(\lambda^2).\label{equalmeb} \end{equation} For orbital frequencies of 200 Hz (GW frequencies of 400 Hz) and total mass $M=2.8 M_\odot$, the 1PN fractional correction to the Newtonian tidal term in the binding energy is $(209/72)x \approx 19\%$. \section{Gravitational radiation}\label{sec:fluxes} The energy flux from the binary due to gravitational radiation is determined by the time variation of the binary system's multipole moments \cite{BlanchetLRR}. The flux $\dot E$ to 3.5PN-order (or to 1PN-order relative to the leading 2.5PN flux) is given in terms of the total system's mass quadrupole moment $Q_{\rm{sys}}^{ij}(t)$, current quadrupole moment $S_{\rm{sys}}^{ij}(t)$, and mass octupole moment $Q_{\rm{sys}}^{ijk}(t)$ by \begin{equation} \dot E=-\frac{1}{5c^5} ( \partial_t^3 Q_{\rm{sys}}^{ij} )^2 -\frac{1}{c^7}\left[\frac{1}{189}(\partial_t^4 Q_{\rm{sys}}^{ijk})^2 +\frac{16}{45} (\partial_t^3 S_{\rm{sys}}^{ij})^2 \right] + O(c^{-8}), \label{eq:fluxformula} \end{equation} cf.~Eq.~(223) of Ref.~\cite{BlanchetLRR}. The binary system's multipole moments can be computed from the asymptotic form of the global metric, as in Sec.~IV of Ref.~\cite{VF}.
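\medskip\noindent Before turning to the individual multipole moments, we record a quick numerical check of the $\approx 19\%$ figure quoted at the end of the previous section (cgs units, with $G$ restored; the orbital angular frequency is $\omega=\pi f_{\rm GW}$ since $f_{\rm GW}$ is twice the orbital frequency):
\begin{verbatim}
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33      # cgs
M = 2.8 * Msun
f_gw = 400.0                                   # Hz, i.e. orbital frequency 200 Hz
omega = np.pi * f_gw                           # orbital angular frequency in rad/s
x = (G * M * omega / c**3)**(2.0 / 3.0)
print(x, (209.0 / 72.0) * x)                   # x ~ 0.067, correction ~ 0.19
\end{verbatim}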
The mass quadrupole $Q_{\rm{sys}}^{ij}$, which is needed to 1PN accuracy in the flux formula (\ref{eq:fluxformula}), can be found from Eqs.~(4.6,4.5,B4,B5,6.1) of VF \cite{VF}; the result is \begin{eqnarray} Q^{ij}_{\rm{sys}}&=&Q^{ij}_2+\mu r^2 n^{<ij>} +\frac{\mu r^2}{c^2}\bigg\{ n^{<ij>}\left[ v^2 \left(\tau_1+\sigma_1\frac{\lambda}{r^5}\right) + \dot{r}^2 \left(\tau_2+\sigma_2\frac{\lambda}{r^5}\right) + \frac{M}{r} \left(\tau_3+\sigma_3\frac{\lambda}{r^5}\right) \right] \nonumber\\ &&\phantom{Q^{ij}+\mu r^2 n^{<ij>} +\frac{\mu r^2}{c^2}\bigg\{} + v^{<ij>} \left(\tau_4+\sigma_4\frac{\lambda}{r^5}\right) + \dot{r} n^{<i}v^{j>} \left(\tau_5+\sigma_5\frac{\lambda}{r^5}\right) \bigg\} +O(c^{-4})+O(\lambda^2), \label{eq:qijt} \end{eqnarray} where the 1PN-accurate body quadrupole $Q_2^{ij}$ is given by Eqs.~(\ref{pnQG}) above and the dimensionless coefficients $\tau$ and $\sigma$ are given by \begin{eqnarray}\label{tau4} \tau_1&=&\frac{29}{42}(1-3\eta),\quad \tau_2=0,\quad \tau_3=\frac{1}{7}(8\eta-5),\quad \tau_4=\frac{11}{21}(1-3\eta),\quad \tau_5=\frac{4}{7}(3\eta-1), \\ \nonumber \sigma_1&=&\frac{13\chi_1^2}{7\chi_2},\quad \sigma_2=\frac{185\chi_1^2}{14\chi_2},\quad \sigma_3=-\frac{3\chi_1}{14\chi_2}(8+23\chi_2+13\chi_2^2),\quad \sigma_4=\frac{38\chi_1^2}{7\chi_2},\quad \sigma_5=-\frac{151\chi_1^2}{7\chi_2}. \end{eqnarray} This result holds for generic orbits (in a binary where body 2 has an adiabatically induced quadrupole). Using Eqs.~(\ref{pnQG}) for the body quadrupole, Eqs.~(\ref{circ}) to specialize to circular orbits, and the radius-frequency relationship (\ref{rofomega}), the system quadrupole simplifies to \begin{equation} Q^{ij}_{\rm{sys}}=\frac{\eta M^{5/3}}{\omega^{4/3}}\left[n^{<ij>}(1+\sigma_0\hat{\lambda}) +x\left(\tau_6 n^{<ij>} + \tau_4 \phi^{<ij>} \right)+x\hat{\lambda}\left(\sigma_6 n^{<ij>} + \sigma_7 \phi^{<ij>} \right)\right]+O(c^{-4})+O(\lambda^2),\label{Qsysf} \end{equation} with $\tau_4$ as in Eq.~(\ref{tau4}), and with \begin{displaymath} \sigma_0=\frac{3(3-2\chi_2)}{\chi_2},\:\tau_6=-\frac{85+11\eta}{42},\:\sigma_6=\frac{1}{14\chi_2}(4+56\chi_2+264\chi_2^2-219\chi_2^3),\: \sigma_7=\frac{1}{7\chi_2}(103-252\chi_2+302\chi_2^2-132\chi_2^3). \end{displaymath} The expression (\ref{Qsysf}) for the total quadrupole determines the unknown 1PN correction coefficient introduced in Eq.~(71) of DN \cite{DN2}.\footnote{The parametrization of the tidal contribution to the system quadrupole given in Eqs.~(68-71) of DN \cite{DN2} does not quite match the form given in Eq.~(\ref{Qsysf}) here, as no $\phi^{<ij>}$ term is included. Also, their parametrization leaves some dependence on the radius $r$, while ours eliminates $r$ in favor of the gauge invariant quantity $\omega$. 
Still, as the coefficients of $x\hat{\lambda} n^{<ij>}$ and $x\hat{\lambda} \phi^{<ij>}$ in our Eq.~(\ref{Qsysf}) end up additively combined in the final contribution to the energy flux, one could in principle determine an effective value for the coefficient $\beta_1$ in Eq.~(71) of DN \cite{DN2} that would lead to the correct flux $\dot E$.} Similarly, the system's mass octupole and current quadrupole, which are needed only to Newtonian order, are given by \begin{eqnarray} Q^{ijk}_{\rm{sys}}&=&\mu r^3 n^{<ijk>}\left[(\chi_1-\chi_2)+\frac{9\chi_1}{\chi_2}\frac{\lambda}{r^5}\right]+O(c^{-2})+O(\lambda^2) \nonumber\\ &=&\frac{\eta M^2}{\omega^2}n^{<ijk>}\left[(\chi_1-\chi_2)+\frac{18\chi_1^2}{\chi_2}\hat{\lambda}\right]+O(c^{-2})+O(\lambda^2), \label{eq:qijkt} \end{eqnarray} and \begin{eqnarray} S^{ij}_{\rm{sys}}&=&\mu r^2 \epsilon^{kl<i}n^{j>k}v^l\left[(\chi_1-\chi_2)+\frac{9\chi_1}{2\chi_2}\frac{\lambda}{r^5}\right]+O(c^{-2})+O(\lambda^2) \nonumber\\ &=&\frac{\eta M^2}{\omega}\epsilon^{kl<i}n^{j>k}\phi^l\left[(\chi_1-\chi_2)+\frac{9\chi_1(3-4\chi_2)}{2\chi_2}\hat{\lambda}\right]+O(c^{-2})+O(\lambda^2), \label{eq:sijt} \end{eqnarray} where the first equalities hold for generic orbits, and the second equalities hold for circular orbits. Having gathered the expressions (\ref{Qsysf}), (\ref{eq:qijkt}) and (\ref{eq:sijt}) for the system multipole moments, we can insert them into the flux formula (\ref{eq:fluxformula}). Using also Eqs.~(\ref{circ}) for the time derivatives of $n^i$ and $\phi^i$ (which are the only time-dependent quantities in the final expressions for the multipoles), and working out the STF projections and contractions (e.g.~$n^{<ijk>}n^{<ijk>}=2/5$) using the STF identities from, e.g., Ref.~\cite{RF}, we find the GW energy flux from the binary to be given by \begin{eqnarray} \dot E(\omega)&=&-\frac{32}{5}\eta^2c^5x^{\,5} \bigg[1-\left(\frac{1247}{336}+\frac{35\eta}{12}\right)x+\frac{6(3-2\chi_2)}{\chi_2}\hat{\lambda}+\frac{1}{28\chi_2}\left(-704-1803\chi_2+4501 \chi_2^2 -2170\chi_2^3\right)x\hat{\lambda} \nonumber\\ &&+O(c^{-3})+O(\lambda^2)\bigg].\label{flux} \end{eqnarray} The coefficients for the 1PN point-mass (second) and Newtonian tidal (third) terms match those given in Refs.~\cite{BlanchetLRR,FH}. Using energy balance and the stationary phase approximation \cite{TichyPhase}, the Fourier transform of the gravitational waveform can be written as $h={\cal A}e^{i\psi}$, with the phase $\psi(\omega)$ determined from the binding energy $E(\omega)$ and flux $\dot E(\omega)$ as functions of the orbital frequency $\omega$ by the relation \begin{equation} \frac{d^2\psi}{d\omega^2}=\frac{2}{\dot E}\frac{dE}{d\omega}. \end{equation} Taking $\dot E$ from Eq.~(\ref{flux}), finding $dE/d\omega$ from a derivative of Eq.~(\ref{eofomega}), and integrating twice (dropping unimportant integration constants) yields the phase: \begin{eqnarray} \psi(\omega)&=&\frac{3}{128\eta x^{\,5/2}}\left[1+\psi_{0,1}\hat{\lambda}+\psi_{1,0}x+\psi_{1,1}x\hat{\lambda}+O(c^{-3})+O(\lambda^2)\right] \\ \nonumber &=&\frac{3c^5}{128\eta(M\omega)^{5/3}}\left[1+\psi_{0,1}\frac{\lambda\omega^{10/3}}{M^{5/3}}+\psi_{1,0}\frac{(M\omega)^{2/3}}{c^2} +\psi_{1,1}\frac{\lambda\omega^4}{Mc^2}+O(c^{-3})+O(\lambda^2)\right], \end{eqnarray} with coefficients \begin{equation} \psi_{0,1}=-\frac{24}{\chi_2}(1+11\chi_1),\quad \psi_{1,0}=\frac{20}{9}\left(\frac{743}{336}+\frac{11\eta}{4}\right),\quad \psi_{1,1}=-\frac{5}{28\chi_2}\left(3179-919\chi_2-2286\chi_2^2+260\chi_2^3\right).
\end{equation} The above results concern a binary where only one body (body 2) develops a tidally induced quadrupole, with quadrupolar tidal deformability $\lambda_2=\lambda$. For the case of two deformable bodies, the contribution to the tidal signal from the other body (body 1) can simply be added to the phase by interchanging body labels $(1\leftrightarrow2)$ in the tidal terms. For the case of equal masses and identical equations of state, $M_1=M_2=M/2$ and $\lambda_1=\lambda_2=\lambda$, the phase correction is \begin{equation} \psi_{M_1=M_2}(\omega)=\frac{3}{32x^{\,5/2}}\left[1-624\hat{\lambda}+\frac{2435}{378}x-\frac{3115}{2}x\hat{\lambda} \right]. \end{equation} The 1PN correction increases the tidal signal by $\approx 17\%$ at gravitational wave frequencies of $400$ Hz for $M=2.8 M_\odot$. \bigskip From the expressions (\ref{flux}) and (\ref{eofomega}) for the gravitational wave luminosity $\dot E(\omega)$ and the binding energy $E(\omega)$, it is straightforward to construct the phase $\varphi(t)$ of the time-domain gravitational waveform based on the various PN Taylor approximants used in several approaches to interfacing analytical and numerical relativity \cite{templatecompare}. We provide here the explicit expressions for the Taylor T4 approximant, in which the function ${\cal F}\equiv \dot E/(dE/dx)$ is expanded in a Taylor series and the differential equations \begin{equation} \frac{dx}{dt}={\cal F}, \ \ \ \ \ \frac{d\varphi}{dt}=2 x^{3/2}/M, \end{equation} are integrated numerically [with $x$ as in (\ref{lhx})]. The tidal contribution to the function ${\cal F}^{\rm T4}$ adds linearly to the $3.5$PN point mass terms and is given to 1PN order by \begin{equation} {\cal F}^{\rm{T4}}_{\rm{tidal}}= \frac{32\chi_1\lambda_2}{5M^6}\left[12(1+11\chi_1)x^{10} +\left(\frac{4421}{28}-\frac{12263}{28}\chi_2+\frac{1893}{2}\chi_2^2-661\chi_2^3\right)x^{11}\right]+(1\leftrightarrow 2). \end{equation} \section{Discussion and Conclusions}\label{disc} We have provided the 1PN accurate description of quasi-circular binary inspiral with quadrupolar tidal coupling and obtained the 1PN tidal contributions to the phasing of the emitted gravitational radiation in the low-frequency, adiabatic limit. Our results show that 1PN effects increase the tidal corrections by approximately $20\%$ at gravitational wave frequencies of $400$ Hz in the case of two $1.4M_\odot$ stars. These results should be of use in constructing GW measurement templates and can easily be incorporated into the EOB formalism as discussed by DN \cite{DN2}; the unknown coefficients introduced by DN pertaining to 1PN quadrupolar tidal effects have been determined here. Our results can also be of use in comparing numerical and analytic waveforms and constructing initial data for numerical simulations. While we have restricted attention here to the case of circular orbits, the results necessary to compute the GW signal for generic orbits can all be found in this paper. This work could also be extended to consider 1PN tidal coupling at higher multipolar orders; the necessary machinery (and the template of the quadrupolar case) is fully contained in VF \cite{VF}. \begin{acknowledgments} This research was supported at Cornell by NSF Grant PHY-0757735, and at Caltech by the Sherman Fairchild Foundation. \end{acknowledgments}
1,116,691,497,464
arxiv
\section{Introduction} Over the past few decades, inflation has been established as the leading paradigm for describing the early universe. It proposes a period of rapidly accelerated expansion during the first fraction of a second after the universe came to be \cite{Guth:1980zm,Linde:1981mu,Albrecht:1982wi}. At the classical level, such an expansion can explain why the universe looks nearly identical in every direction (i.e. is homogeneous and isotropic), while at the quantum level it gives rise to the tiny density fluctuations that we observe in the cosmic microwave background radiation (CMB), which eventually grow into the large scale structure of the universe (LSS). By precisely mapping the anisotropies in the CMB, we have determined the fluctuations to be very close to Gaussian-distributed, which matches the predictions of even the simplest theories of inflation \cite{Akrami:2018odb}. However, in order to sift through the vast landscape of consistent inflationary theories we are required to look beyond such general predictions. One avenue to discriminate between theories of inflation is through the study of primordial non-Gaussianities (pnGs) (see \cite{Meerburg:2019qqi} and references therein). Signatures of pnG would appear as non-zero higher $n$-point functions of the initial conditions, where the $3$-point function, the so-called \emph{bispectrum}, is generally the most sensitive. A measurement of pnGs can tell us a great deal about the dynamics driving the expansion (see \cite{Achucarro:2022qrl} for a recent overview). Furthermore, particles (fields) present in the primordial universe leave their unique imprint in the distribution of fluctuations through pnGs, effectively making inflation a particle collider at the highest conceivable energy scale \cite{Arkani-Hamed:2015bza}. Hence, a detailed study of primordial non-Gaussianity is imperative in order to advance our understanding of the universe as a whole. While the most stringent constraints on pnGs are derived from measurements of the CMB bispectrum, future CMB experiments will be limited by its two-dimensional nature and damping of primary fluctuations. In our search for signatures of pnGs we are therefore required to look for alternative probes. Surveys of the large scale structure of the universe provide us with a huge observable volume all the way into the cosmological Dark Ages, by mapping the distribution of galaxies and neutral hydrogen. While the anisotropies in the CMB are pristine (i.e. linearly related to the primordial fluctuations), the density field has since evolved. Gravity, being intrinsically non-linear, breaks the linear relation between density fluctuations and primordial initial conditions, giving rise to a number of complications. Firstly, even if the primordial fluctuations are purely Gaussian, the non-linear gravitational evolution introduces \emph{secondary} non-Gaussianities (snGs), typically many orders of magnitude stronger than any primordial signal. Thus, an accurate modelling of snG is required in order to properly extract information about pnG. Furthermore, snGs introduce non-Gaussian covariance in the measurements, reducing the amount of \emph{unique} information present in the data.
Although the impact of non-Gaussian covariance has been appreciated at low redshifts \cite{Takahashi:2009ty,Chan:2016ehg,Chan:2017fiv,Wadekar:2019rdu,Barreira:2019icq,Gualdi:2020ymf,Oddo:2021iwq,Barreira:2021ueb,Biagetti:2021tua,Rizzo:2022lmh}, its relevance for high redshift surveys has typically been neglected \cite{Munoz:2015eqa,Chen:2016zuu,Meerburg:2016zdz,Karagiannis:2019jjx,Floss:2022grj,Yamauchi:2022fri,Karagiannis:2022ylq}. In this paper, we reassess this assumption and show that by not including non-Gaussian covariance in forecasts of the constraining power of the hydrogen bispectrum observed by a PUMA-like 21-cm intensity mapping experiment \cite{CosmicVisions21cm:2018rfq,Karagiannis:2019jjx}, one can underestimate the uncertainty in the linear regime by up to a factor of $\sim 5$ and $\sim 2$ for the local and equilateral type non-Gaussianity, respectively. \\ \\ \textbf{Conventions \& Notation} We denote spatial vectors as $\bk{i}$ and their magnitudes as $|\bk{i}| = k_i$. Sums of momenta are written as $\bk{1..n} = \sum_1^n \bk{i}$, e.g. $\bk{1}+\bk{2} = \bk{12}$. Momentum integrals are compactly written as $\int \frac{d^3\bk{i}}{(2\pi)^3} = \int_{\bk{i}}$ and $\delta_D$ denotes the Dirac delta function. In order to compare to simulations of the matter bispectrum, our cosmology equals the fiducial cosmology of the \textsc{Quijote} suite \cite{Villaescusa-Navarro:2019bje}, which closely resembles the 2018 Planck constraints \cite{Planck:2018vyg}. For the analysis of the PUMA survey, we use the 2015 Planck constraints \cite{Planck:2015fie} to match previous forecasts. \section{Theoretical Framework and Setup} In order to estimate the signal-to-noise for high-redshift survey observables, we need to introduce a few concepts. We are ultimately interested in constraining the early universe through primordial non-Gaussianities; thus we start off by defining correlations of primordial fluctuations, i.e. our signal of interest. Next, we introduce density perturbations, whose correlations at different positions in the sky are the building blocks of what we actually observe in high- (and low-) redshift surveys. Their dynamics, driven by gravity, determine the noise we need to overcome. \subsection{Initial Conditions} Quantum fluctuations during the inflationary epoch cause the expansion to end at slightly different times in different places, giving rise to tiny scalar density fluctuations $\zeta$ that source linear perturbations in the matter density field. In this way, linear fluctuations of the density field trace the primordial initial conditions of the universe. Even a small non-Gaussianity in the distribution of primordial fluctuations serves as an important way to discriminate between different models of inflation. Furthermore, it allows one to directly probe the particle content and interactions of the inflationary epoch \cite{Arkani-Hamed:2015bza,Lee:2016vti}. Since such non-Gaussianities are constrained to be small by CMB observations \cite{Akrami:2018odb}, in this work we consider only the first non-Gaussian statistic, which is the bispectrum. Hence, we require only the two lowest-order non-trivial statistical moments of the primordial density distribution.
In Fourier space, these are the power spectrum $P_{\zeta}(k)$ and bispectrum $B_{\zeta}(k_1,k_2,k_3)$, defined as \begin{eqnarray} \langle \zeta_{\bk{1}} \zeta_{\bk{2}} \rangle &=& (2\pi)^3 \delta_D(\bk{12}) P_\zeta(k_1),\\ \label{eq:zeta} \langle \zeta_{\bk{1}} \zeta_{\bk{2}} \zeta_{\bk{3}} \rangle &=& (2\pi)^3 \delta_D(\bk{123}) B_\zeta(k_1,k_2,k_3). \end{eqnarray} Different inflationary mechanisms give rise to distinct sizes and shapes of bispectra. It is customary to classify these bispectra into three main templates, the so-called local, equilateral and orthogonal templates, whose expressions are given in the Appendix, Eqs. \eqref{eq:local}, \eqref{eq:equi} and \eqref{eq:ortho}. The local shape typically arises in models of multi-field inflation and peaks in squeezed triangle configurations $k_1 \ll k_2 \sim k_3$. The equilateral shape is typically generated by self-interactions of the inflaton field and peaks for equilateral configurations $k_1 = k_2 = k_3$. Finally, the orthogonal shape, along with the equilateral one, is a natural prediction of the Effective Field Theory (EFT) of (single field) inflation \cite{Cheung:2007st} and peaks for both equilateral and flattened configurations $k_1 = k_2 + k_3$. \subsection{Matter field and correlators} The primordial initial conditions serve as the seed for the distribution of matter in the universe. We can therefore study the initial conditions of the universe by studying fluctuations of the matter density field, $\rho$, defined as $\delta(t,\textbf{x}) = \rho(t,\textbf{x})/\bar{\rho}(t)-1$, with $\bar \rho$ the mean density in a volume. Similar to the primordial case, we define correlations of $\delta(\mathbf{x})$ in Fourier space as \begin{align} \label{eq:pdelta} \langle \delta_{\bk{1}} \delta_{\bk{2}} \rangle &= (2\pi)^3 \delta_D(\bk{12}) P_\delta(k_1), \\ \langle \delta_{\bk{1}} \delta_{\bk{2}} \delta_{\bk{3}} \rangle &= (2\pi)^3 \delta_D(\bk{123}) B_\delta(\bk{1},\bk{2},\bk{3}),\label{eq:matterb}\\ \langle \delta_{\bk{1}} \delta_{\bk{2}} \delta_{\bk{3}} \delta_{\bk{4}} \rangle &= (2\pi)^3 \delta_D(\bk{1234}) T_\delta(\bk{1},\bk{2},\bk{3}, \bk{4}),\label{eq:mattert} \end{align} where we assume all fluctuations to be at equal times. In this work, we also need the $4$-point correlation function in Fourier space, known as the \emph{trispectrum}, for the computation of the non-Gaussian covariance. Even in the absence of a primordial bispectrum, or higher-order primordial correlators, fluctuations in the matter field grow via gravitational instability and become non-linear, thereby sourcing the matter bispectrum, trispectrum and higher-order correlations. The dynamical equations for $\delta$ describing this process can be solved perturbatively (see e.g.~\cite{Bernardeau:2001qr} for a review). This allows one to compute correlators analytically up to a mildly non-linear scale $k_{\rm NL}$. One way to estimate this scale is by computing \begin{eqnarray} \label{eq:knl} k_\text{NL}(z) = \left[ \frac{1}{6\pi^2} \int_0^\infty dk \; P_\delta^L(k,z)\right]^{-1/2}, \end{eqnarray} where $P_\delta^L$ is the linear matter power spectrum as defined in Eq.~\eqref{eq:matterp}. We use this scale to confine ourselves to the linear regime \footnote{Other definitions have been considered in the literature, e.g. \cite{Tomlinson:2022xud} studies the non-linear scale for the bispectrum specifically. The precise definition of $k_{\rm NL}$ does not qualitatively change the results of this paper.}.
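For orientation, Eq.~\eqref{eq:knl} is straightforward to evaluate numerically once a tabulated linear power spectrum is available. The snippet below is a minimal sketch rather than part of our analysis pipeline: the arrays \texttt{k\_grid} and \texttt{P\_lin} stand in for $P_\delta^L(k,z)$ as obtained from a Boltzmann code, and the toy spectrum used here is for illustration only.
\begin{verbatim}
import numpy as np

def k_nl(k_grid, P_lin):
    """Non-linear scale: [ (1/6 pi^2) * int dk P_lin(k) ]^(-1/2)."""
    integral = np.sum(0.5*(P_lin[1:] + P_lin[:-1])*np.diff(k_grid))  # trapezoid rule
    return (integral / (6.0*np.pi**2))**-0.5

# placeholder linear spectrum (arbitrary shape and normalisation)
k_grid = np.logspace(-4, 2, 4096)                              # h/Mpc
P_lin = 2.1e4*(k_grid/0.02)/(1 + (k_grid/0.02)**3.2)           # (Mpc/h)^3
print(k_nl(k_grid, P_lin))   # k_NL in h/Mpc for this toy spectrum
\end{verbatim}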
The induced bi- and trispectrum in this framework are presented in Eq. \eqref{eq:SPT-B} and Eq. \eqref{eq:SPT-T} of the Appendix. \\ To complement the perturbative approach, we resort to N-body simulations of the universe at large scales, solving the dynamical equations for $\delta$ numerically (see \cite{Angulo:2021kes} for a review). The advantage of N-body simulations is that they allow one to directly measure correlations of $\delta$ even at non-linear scales, and to test analytic predictions. The drawback is that they are computationally expensive to run. We make use of the publicly available \textsc{Quijote} simulations \cite{Villaescusa-Navarro:2019bje} for our estimates of signal-to-noise at low redshift (i.e. up to $z=3$). For higher redshifts, as the non-linear scale is pushed to very small scales, instead of fully solving dynamical equations we resort to \texttt{Monofonic} \cite{Michaux:2020yis}, which computes particle positions by solving third-order Lagrangian perturbation theory (3LPT) equations. Further details on how simulation data is used can be found in the Appendix. \subsection{Fisher information and estimated uncertainty} In this section, we introduce the quantities we use to estimate the uncertainty on the amplitude of primordial non-Gaussianity, $\fnl{}$, from observations of the bispectrum. \paragraph{Fisher matrix.} A common way to quantify the information content of an observable is through the Fisher matrix. It encodes both the amount of information available from a measurement to constrain a parameter and the correlation between different parameters. Given $N$ measurements of an observable, which for us will be the matter or hydrogen bispectrum, and a set of parameters we want to constrain, $\mathbf{p}$, the Fisher matrix is defined as \begin{eqnarray} \label{eq:fish} F_{ab} = \sum_{TT'}\,\frac{\partial B_T}{\partial p_a}\left(C\right)_{TT'}^{-1}\frac{\partial B_{T'}}{\partial p_b}, \end{eqnarray} where $T$ labels the triangle configurations in which the bispectra are measured or calculated, $\mathbf{B}$ is the data vector of bispectra and $C_{TT'}$ is the covariance of $\mathbf{B}$, defined as \begin{eqnarray}\label{eq:defcov} C_{TT'} = \langle B_T\,B_{T'}\rangle - \langle B_T \rangle \langle B_{T'}\rangle. \end{eqnarray} The estimated uncertainty on a parameter $p_a$ is then defined as \begin{eqnarray} \sigma_{p_a} = \left(F^{-1}\right)^{1/2}_{a\,a}, \end{eqnarray} where $F^{-1}$ indicates the matrix inverse of $F$. \paragraph{N-body measurements.} When using numerical simulations, we measure the matter bispectrum on a finite-size box with periodic boundary conditions, such that in this case $\delta_{\bk{}}$ is a discrete Fourier transform of the density contrast. The bispectrum estimator is then defined as \cite{Scoccimarro:1997st} \begin{equation}\label{eq:estB} \hat{B}(k_1,k_2,k_3) \equiv \frac{k_F^3}{N_{tr}}\sum_{\mathbf{q} \in k}\,\delta_K(\mathbf{q}_{123})\, \,\delta_{\mathbf{q}_1}\,\delta_{\mathbf{q}_2}\,\delta_{\mathbf{q}_3}, \end{equation} where $k_F=2\pi/L$ is the fundamental frequency in a cubic box of side $L$ and $N_{tr}$ gives the number of “fundamental triangles” formed by the vectors $\mathbf{q}_i$ satisfying the condition $\mathbf{q}_{123} = 0$ that belong in the “triangle bin” defined by the triplet of bin centers ($k_1, k_2, k_3$) and bin width $\Delta k$ \footnote{We also measure the power spectrum, since, as we show below, it enters in the calculation of the covariance.
The estimator of the power spectrum is \begin{equation}\label{eq:estp} \hat P(k) \equiv \frac{k^3_F}{N_k}\sum_{\mathbf{q} \in k} \delta_{\mathbf{q}}\,\delta_{-\mathbf{q}}, \end{equation} where $N_k$ gives the number of modes in each k-bin.}. The advantage of using N-body simulations is that the full covariance can be estimated numerically from a sample of simulations using Eq.~\eqref{eq:defcov}, where now the average $\langle\cdot\rangle$ is over different realisations of the same simulation. It is also straightforward to compute the Gaussian contribution only, i.e. the case where different modes are uncorrelated. This contribution is given by the product of three power spectra \footnote{The approximate equality indicates the thin shell approximation.} \begin{eqnarray} \label{eq:ppp} C^G_{TT'} \simeq \frac{(2\pi)^3\, k_F^3}{V_{123}}\,s_{123}\,\hat P(k_1)\hat P(k_2)\hat P(k_3)\, \delta_{TT'} \end{eqnarray} where $T,T'$ denote triangle bins, $V_{123} \simeq 8\pi^2 k_1 k_2 k_3 \Delta k^3$ is the volume of the bin, $s_{123}=1,2,6$ for scalene, isosceles and equilateral triangles, respectively, and $\hat P(k_i)$ are power spectrum measurements. \paragraph{Limit of infinitely thin bins.} At high redshifts, the non-linear scale $k_{\rm NL}$ is pushed to smaller scales. At fixed bin width $\Delta k$, this implies a wider range of scales explored, and consequently a larger data vector and covariance. In order to keep the calculations within reasonable computational cost, one solution is to widen the range of bins, and to sample wavenumbers in log space. Alternatively, we choose to work in the limit of infinitely thin bins and promote the sums to integrals, such that the Fisher matrix becomes \begin{equation}\label{eq:confish} F_{ab} = \int_{TT'} \frac{\partial B_T}{\partial p_a} C^{-1}_{TT'} \frac{\partial B_{T'}}{\partial p_b}, \end{equation} where now the matter, or hydrogen, bispectrum is estimated using perturbation theory, as explained in the Appendix. Calculating Eq.~\eqref{eq:confish} now implies knowledge of the dependence on triangle configurations $T,T'$ of the inverted full covariance matrix, which is typically hard to compute. In the Appendix (Eqs.~\eqref{eq:neumann} to \eqref{eq:fabappr}) we outline a strategy that is based on splitting the covariance into Gaussian and non-Gaussian contributions, $C = C_{\rm G} + C_{\rm nG}$, and expanding the inverse as a Neumann series. We then approximate this series such that the Fisher matrix in the limit of thin bins becomes: \begin{eqnarray}\label{eq:expfish} F_{ab} = \frac{\left(F_{ab}^{\rm G}\right)^2}{F_{ab}^{\rm G} + \delta F_{ab}^{\rm nG}}, \end{eqnarray} where $F_{ab}^{\rm G}$ is the Fisher matrix computed using only the Gaussian covariance $C_{\rm G}$ for the inverse covariance and $\delta F_{ab}^{\rm nG}$ is the non-Gaussian correction computed using as inverse the product of matrices $-C_{\rm G}^{-1} C_{\rm nG} C^{-1}_{\rm G}$. \paragraph{Model for the non-Gaussian covariance.} The goal of this paper is to compute how the uncertainty $\sigma$ on $f_{\rm NL}$ varies depending on whether we consider only the Gaussian term $C_{\rm G}$ or a more complete modelling of the covariance including non-Gaussian terms. As explained above, when using N-body simulations, the Gaussian and the full covariance are computed numerically. In the case of thin bins, we need to introduce a model of the bispectrum covariance.
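Before turning to that model, the following schematic sketch illustrates how the Gaussian covariance of Eq.~\eqref{eq:ppp} enters the binned Fisher sum of Eq.~\eqref{eq:fish} for the single parameter $\fnl{}$, and how a non-Gaussian correction degrades the result through the combination of Eq.~\eqref{eq:expfish}. All inputs (triangle bins, derivatives $\partial B_T/\partial \fnl{}$, the power spectrum, and the size of the correction) are placeholders and not outputs of our pipeline.
\begin{verbatim}
import numpy as np

kF, dk = 2*np.pi/1000.0, 0.01                  # toy fundamental frequency and bin width
tri = np.array([[0.02, 0.03, 0.04],
                [0.03, 0.05, 0.07],
                [0.05, 0.05, 0.05]])           # toy triangle bins (k1, k2, k3)
dB_dfnl = np.array([1.2e8, 4.0e7, 2.5e7])      # placeholder derivatives dB_T/df_NL
P = lambda k: 2.1e4*(k/0.02)/(1 + (k/0.02)**3.2)   # placeholder power spectrum

# Gaussian ('PPP') covariance: diagonal in triangle bins
s123 = np.array([6.0 if len(set(row)) == 1 else (2.0 if len(set(row)) == 2 else 1.0)
                 for row in map(tuple, tri)])
V123 = 8*np.pi**2*tri.prod(axis=1)*dk**3
C_G = (2*np.pi)**3*kF**3/V123*s123*P(tri[:, 0])*P(tri[:, 1])*P(tri[:, 2])

F_G = np.sum(dB_dfnl**2/C_G)                   # Fisher sum, Gaussian covariance only
dF_nG = 0.5*F_G                                # toy non-Gaussian correction
F = F_G**2/(F_G + dF_nG)                       # combined Fisher information
print(F_G**-0.5, F**-0.5)                      # sigma(f_NL): Gaussian vs corrected
\end{verbatim}
We now turn to the non-Gaussian terms themselves.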
Inserting Eq.~\eqref{eq:estB} into Eq.~\eqref{eq:defcov}, the computation involves the correlator of $6$ fields in Fourier space, which can be combined in four different ways: the Gaussian term is the product of three power spectra (`PPP' term), given by Eq.~\eqref{eq:ppp}. Non-Gaussian terms are represented by either the product of two bispectra (`BB' term), or the product of a power spectrum and a trispectrum (`PT' term), or finally the connected $6$-point function, the so-called \emph{pentaspectrum}. The pentaspectrum is negligible in most practical cases (see \cite{Biagetti:2021tua} for a rough estimate). The key point of this paper is to account for the `BB' and `PT' terms in signal-to-noise estimates at high redshifts. The `BB' term, again assuming that correlators are slowly varying in the $k$-shells, can be written as \begin{equation} C_{\rm nG}^{\rm BB} \simeq B_T B_{T'} (\Sigma_{TT'}^{11} + 8\,\,{\rm perm}), \end{equation} where $\Sigma^{ij}_{TT'}$ is a mode-counting factor that again depends on the shape of the triangles. The `PT' term is similarly written. We calculate these terms for the matter bispectrum predictions using Eqs. \eqref{eq:matterp}, \eqref{eq:SPT-B} and \eqref{eq:SPT-T}. For the hydrogen bispectrum we use the following model for the covariance \begin{equation}\label{eq:modelcov} C \approx C_{\rm G}+ 2\,C_{\rm nG}^{\rm BB}, \end{equation} where the `PT' term is approximated to be equal to the `BB' term, which is a good approximation for squeezed triangles for which the non-Gaussian terms are largest \cite{Biagetti:2021tua}. \section{Constraining $\fnl{}$ at high redshifts} The primary goal of this work is to show the importance of including non-Gaussian terms in the covariance when estimating the uncertainty to the primordial non-Gaussian amplitude $\fnl{}$ in high-redshift surveys. One could be tempted to neglect the non-Gaussian covariance at high redshifts on scales larger than $k_{\rm NL}$ at that redshift. In this linear regime one might expect modes of different wavelength to be mostly uncorrelated, such that the covariance is diagonal and Gaussian terms dominate. As we show in what follows, this intuition fails: off-diagonal terms become important well within what is typically considered the linear regime based on Eq.~\eqref{eq:knl}. \subsection{Uncertainty on $\fnl{}$ from the matter bispectrum}\label{sec:matter} As a testing ground, we first consider the matter bispectrum in real space as our observable and compute the estimated uncertainty of the primordial non-Gaussian amplitude $\fnl{}$ for the primordial bispectra of the local, equilateral and orthogonal type as defined previously. In this setup, $\fnl{}$ is the only parameter. When using finite-sized bins, we evaluate the derivative $\partial \mathbf{B} / \partial f_{\rm NL}$ averaging over the bins (see Eq. \eqref{eq:binavgB} of the Appendix), while in the case of infinitely thin bins the derivative is analytically computed directly from the templates Eqs. \eqref{eq:local}, \eqref{eq:equi} and \eqref{eq:ortho}. Figure \ref{fig:R_BS} shows the ratio of the estimated uncertainty computed with non-Gaussian over Gaussian covariance as a function of the maximum wavenumber $k_{\rm max}$. The uncertainty computed using the infinitely thin bin approximation is shown in solid lines, while simulation measurements are shown as diamonds. Solid lines are computed up to the non-linear scale $k_{\rm NL}$ at that redshift. 
The uncertainty on local type non-Gaussianity is most affected by the introduction of off-diagonal covariance, increasing by a factor of $\sim5$ at $k_{\rm max} \approx k_{\rm NL}$ for redshifts lower than $z=10$ and even higher at higher redshifts. This is because the off-diagonal covariance is largest for squeezed triangle configurations where the local type non-Gaussianity has most of its signal \cite{Biagetti:2021tua} \footnote{This is very similar to how lensing-induced covariance mostly affects measurements of the local bispectrum in the CMB \cite{Coulton:2019odk}}. Equilateral non-Gaussianity is less affected, since most of its signal comes from equilateral triangle configurations whose covariance is large only when approaching non-linear scales. Still, the loss is almost a factor of $2$ at $z\lesssim 10$. For a discussion and the results of the orthogonal bispectrum we refer to the Appendix. It is important to note that these results do not imply that the uncertainty does not improve overall, since we are still able to access more modes as we increase $k_\mathrm{max}{}$. Rather, our analysis shows that off-diagonal non-Gaussian covariance reduces the amount of information gained by probing smaller scales, or in other words, the signal-to-noise saturates. We show a clear representation of this fact in Figure \ref{fig:snbs} in the Appendix, where we directly plot the uncertainty of $\fnl{}$ including non-Gaussian terms as a function of $k_{\rm max}$ at different redshifts for a fictitious matter survey. \begin{figure} \includegraphics[scale=0.5]{R_BS.pdf} \caption{Estimate of relative increase in error on $f_{\rm NL}$ due to non-Gaussian covariance as a function of $k_\mathrm{max}{}$. The diamonds present results obtained using the \textsc{Quijote} simulations ($z=0,3$) or 3LPT ($z = 10$). Note the different scales on the vertical axes. The local bispectrum is expected to be significantly affected when accounting for non-diagonal covariance even at very high redshifts. Solid lines are estimated up to the non-linear scale $k_{\rm NL}$ at each redshift. For $z=0$ and $3$ the simulation results are also shown up to the non-linear scale, while for $z=10$ they are shown up to the scale at which shot-noise becomes a significant contribution to the covariance.} \label{fig:R_BS} \end{figure} \subsection{Uncertainty on $\fnl{}$ from the hydrogen bispectrum}\label{sec:hydro} To make contact with actual future observations, we consider a realistic PUMA-like intensity mapping survey setup. PUMA is a proposed 21-cm intensity mapping experiment aimed at measuring the distribution of neutral hydrogen through the 21-cm hyperfine transition between redshift 2 and 6. One of the key science drivers for PUMA is to provide better constraints on primordial non-Gaussianity with respect to the CMB \cite{CosmicVisions21cm:2018rfq} (see also Figure 5 in \cite{Achucarro:2022qrl} for a comparison to other future surveys). \\ As compared to the simplified scenario considered in Figure \ref{fig:R_BS}, the calculation of the estimated uncertainty in this case involves several complications. First of all, neutral hydrogen is a biased tracer of the matter field. This introduces additional non-linearities and we need to define a set of nuisance (bias) parameters that are fixed through observations (see \cite{Desjacques:2016bnm} for a review). Secondly, we need to compute correlators in redshift space, taking into account the survey geometry and foregrounds. 
Lastly, the presence of primordial non-Gaussianity introduces additional bias parameters. This last effect famously appears already at the power spectrum level for the local template, known as scale dependent bias (\cite{Dalal:2007cu, Matarrese:2008nc, Slosar:2008hx} and \cite{Biagetti:2019bnp} for a recent review). For these reasons, forecasts of $\sigma(f_{\rm NL})$ depend sensitively on many assumptions, and would need to include the tracer power spectrum in order to be realistic. Here we limit ourselves to calculating the uncertainty using the tracer bispectrum only, rather than performing a full forecast, since our goal is to show the loss of constraining power due to the inclusion of non-Gaussian covariance on the bispectrum \footnote{We have confirmed that our forecasts, using very similar assumptions about the PUMA survey, yield values of $\sigma(f_{\rm NL})$ that are consistent with those presented in Refs.~\cite{Karagiannis:2019jjx,Sailer:2021yzm} when neglecting non-Gaussian covariance.}. For our computations, we follow the setup presented in \cite{Karagiannis:2019jjx}. We take into account foreground noise, which effectively limits the largest scale accessible by the survey in each redshift bin, as described in the Appendix, Eqs.~\eqref{eq:kpar} and \eqref{eq:kper}. In this setup, besides $f_{\rm NL}$ the hydrogen bispectrum is a function of $7$ parameters: three bias parameters, two shot-noise parameters, the dimensionless linear growth rate $f$ and the velocity dispersion $\sigma_v$. We compute the fiducial values of these parameters as a function of redshift following \cite{Castorina:2016bfm,Karagiannis:2019jjx} and the expression for the hydrogen bispectrum is found in the Appendix, Eq.~\eqref{eq:hibis}. We also calculate the hydrogen power spectrum, given in Eq.~\eqref{eq:hipow}, as we use it to compute the Gaussian covariance. To compute the non-Gaussian covariance, we use the model of Eq. \eqref{eq:modelcov}. We then proceed to compute the Fisher matrix, which is estimated in the thin-bin form of Eq. \eqref{eq:expfish}. We marginalise over all the $7$ nuisance parameters entering the bispectrum as described in Eq. \eqref{eq:marginal} of the Appendix. Figure \ref{fig:PUMA} shows the ratio of the estimated uncertainty computed using a non-Gaussian covariance over a Gaussian approximation for the local and equilateral type non-Gaussianities as a function of redshift for a PUMA-like experiment. We compute the uncertainty for two different values of $k_{\rm max}$, corresponding to $0.5\, k_{\rm NL}$ (dashed lines) and $0.75\, k_{\rm NL}$ (solid lines), as we expect Eq. \eqref{eq:knl} to be less accurate for tracers. Our results show that even for a more conservative choice of $k_{\rm max} = 0.5\, k_{\rm NL}$, the effect is significant. The increase in uncertainty ranges from a factor of $2$ to a factor of $5$ for local type non-Gaussianity. We therefore conclude that previous forecasts on constraining $\fnl{}$ at high redshift are too optimistic \cite{Munoz:2015eqa,Meerburg:2016zdz,Karagiannis:2019jjx,Floss:2022grj,Yamauchi:2022fri} and non-Gaussian covariance will have to be considered in order to produce more realistic forecasts. A similar estimation for a generic biased tracer was performed in \cite{dePutter:2018jqk} up to $z=10$ and shows qualitative agreement with Figure \ref{fig:PUMA}.
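To make the marginalisation step explicit, the sketch below shows how the ratio plotted in Figure~\ref{fig:PUMA} is obtained once the Fisher matrices with Gaussian and non-Gaussian covariance are in hand; the $8\times8$ matrices used here (parameter order: $\fnl{}$ followed by the $7$ nuisance parameters) are random positive-definite stand-ins rather than the actual PUMA forecasts.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
F_G = A @ A.T + 8*np.eye(8)    # Fisher matrix, Gaussian covariance (placeholder)
F_nG = 0.4*F_G                 # Fisher matrix, non-Gaussian covariance (placeholder)

def sigma_fnl(F, marginalised=True):
    """sigma(f_NL) = sqrt((F^-1)_00) marginalised, or 1/sqrt(F_00) if nuisances fixed."""
    return np.sqrt(np.linalg.inv(F)[0, 0]) if marginalised else F[0, 0]**-0.5

print(sigma_fnl(F_G), sigma_fnl(F_nG),
      sigma_fnl(F_nG)/sigma_fnl(F_G))   # last number: ratio shown in the figure
\end{verbatim}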
\begin{figure} \includegraphics[scale=0.5]{PUMA_loss.pdf} \caption{Estimate of relative increase in error on the non-Gaussian amplitude $f_{\rm NL}$ due to non-Gaussian covariance of the hydrogen bispectrum, as a function of redshift for a PUMA-like experiment when marginalising over the $7$ additional parameters of the hydrogen bispectrum. We show the results for $k_\mathrm{max}{} = 0.75\, k_{\rm NL}$ (solid lines) and $k_\mathrm{max}{} = 0.5\, k_{\rm NL}$ (dashed lines). } \label{fig:PUMA} \end{figure} \section{Discussion and Conclusions} We studied the impact of non-Gaussian terms in the covariance on measurements of cosmological correlators. Specifically, we aim to quantify the effect on the estimated uncertainty of the primordial non-Gaussian amplitude $\fnl{}$ when using the bispectrum at high redshift as an observable. Because off-diagonal components are small compared to the diagonal, most studies have typically neglected this covariance. We showed that, when looking at the information content, there is a significant impact on the constraining power on primordial non-Gaussianity due to this non-Gaussian mode coupling, even at high redshifts and well below the non-linear scale as defined in Eq.~\eqref{eq:knl}. We have first computed the effect of non-Gaussian covariance on $\sigma_{\fnl{}}$ using the matter bispectrum in real space and then performed a more realistic estimation using the hydrogen bispectrum as measured from a PUMA-like experiment. This proposed 21-cm intensity mapping experiment has the potential to constrain primordial bispectra to reach beyond constraints set by the CMB. Yet, our analysis shows that not accounting for the full covariance can overestimate the constraining power of the hydrogen bispectrum measured by PUMA up to a factor of $5$ for local type non-Gaussianity and $2$ for equilateral. For local type non-Gaussianity, the primary observable is actually the tracer power spectrum, thanks to the so-called scale dependent bias, which we do not include in our analysis. Nevertheless, our results imply that combining it with the bispectrum does not help as much as it is expected to considering a Gaussian covariance only. Moreover, they motivate including the bispectrum-power spectrum cross covariance in the joint analysis, which is also a non-Gaussian contribution \cite{Biagetti:2021tua}. Overall, our result suggests we should reconsider some of the existing forecasts and make sure the projected numbers are not overly optimistic for future high redshift surveys such as PUMA, MegaMapper \cite{Schlegel:2019eqc} and the Maunakea Spectroscopic Explorer \cite{MSEScienceTeam:2019bva}. Future constraints on primordial non-Gaussianities will depend on our ability to extract information from large scale structures. Intuitively, the main obstacle to constrain primordial spectra is set by the non-linear scale which estimates when loop-corrections become important. Here we show that for the Fisher information on $\fnl{}$ it is important to account for non-Gaussian bispectrum covariance, even for modes that are still considered linear. The results are comparable to what was found for measurements of the CMB bispectrum, where lensing induced off-diagonal covariance is the main limitation as we start to measure smaller scales and increase the number of accessible modes. For the CMB, it was shown that the lensing induced covariance can be accounted for by delensing the data before applying the standard estimators \cite{Coulton:2019odk}. 
The analogy here would be to ``degravitate'' the data, a technique that is well established in studies of the Baryon Acoustic Oscillations in galaxy surveys \cite{Eisenstein:2006nk}. It might be possible to explore this option at high redshifts, where the physics is still perturbatively tractable. At lower redshifts however, it likely suggests that existing estimators are sub-optimal or that we have adopted inefficient summary statistics that need to be revisited. Similar conclusions were drawn in Ref.~\cite{Coulton:2022qbc}. Applying reconstruction methods \cite{Leclercq:2014fta} or using simulations (e.g. through simulation based inference \cite{cranmer2020frontier}) \cite{Alsing:2018eau,Alsing:2019xrx,Jeffrey:2020itg,Miller:2020hua,Cole:2021gwr}, both active fields of investigation, will certainly help to establish to what degree we have to modify our analysis tools in search for signs of primordial non-Gaussianity. The code used to produce the results in this work is publicly available \footnote{\url{https://github.com/tsfloss/pyNG}}. \section*{Acknowledgements} The authors would like to thank Emanuele Castorina, Will Coulton, Simon Foreman, Dionysios Karagiannis, Emiliano Sefusatti and Anže Slosar for useful discussions and comments on a draft. While finalising this work, we became aware of related work by Coulton et al. \cite{Coulton:2022qbc} and thank the authors for sharing their draft ahead of submission. We would also like to thank Francisco Villaescusa-Navarro and the whole \textsc{Quijote} team for making the simulation suite available. We thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster. T.F is supported by the Fundamentals of the Universe research program within the University of Groningen. M.B acknowledges support from the Netherlands Organization for Scientific Research (NWO), which is funded by the Dutch Ministry of Education, Culture and Science (OCW) under VENI grant 016.Veni.192.210. P.D.M acknowledges support from the Netherlands organization for scientific research (NWO) VIDI grant (dossier 639.042.730).
1,116,691,497,465
arxiv
\section{Introduction} In this paper, we study semi-classical quasimodes of WKB-type, formally associated with the low lying spectrum of a Schr\"odinger operator $H_\hbar$ on a vector bundle $\mathcal{E}$ over a smooth Riemannian manifold $M$. More precisely, given an operator of the form \begin{equation*} H_\hbar = \hbar^2 L + \hbar W + V \cdot \mathrm{id}_\mathcal{E} \end{equation*} acting on the space $\Gamma^\infty(M, \mathcal{E})$ of smooth sections of $\mathcal{E}$, we construct formal asymptotic eigenfunctions near non-degenerate minima of the potential $V$ in the limit $\hbar \to 0$. Operators of this type arise e.g.\ in Witten's perturbation of the de Rham complex where $H_\hbar$ is the square of the Dirac type operator \begin{equation*} \hbar \, e^{\phi/\hbar} \bigl( \mathrm{d} + \mathrm{d}^* \bigr) e^{-\phi/\hbar} = \hbar\bigl( \mathrm{d} + \mathrm{d}^* \bigr) + \mathrm{d} \phi \wedge + \mathrm{d} \phi \lrcorner\, . \end{equation*} In this particular case, the endomorphism $W$ is non-vanishing, which is the reason for us to include this (somewhat unusual) term in our considerations. We recall that the construction of semi-classical quasimodes of WKB-type is an important step in discussing tunneling problems, i.e.\ exponentially small splitting of eigenvalues for a self-adjoint realization of $H_\hbar$. In the scalar case, for $\dim M>1$, rigorous results in this field start with the seminal paper \cite{helffer-sjostrand-1} (for $M=\mathbb{R}^n$ or $M$ compact). The associated asymptotic expansion of eigenvalues was also considered in \cite{simon-1}, and somewhat weaker results on tunneling were obtained in \cite{simon-2} avoiding the use of quasimodes of WKB-type. For non-scalar operators, the bundle of exterior differential forms (for $M=\mathbb{R}^n$ or $M$ compact) has been considered in \cite{helffer-sjostrand-4} in the context of the Witten complex. All WKB-constructions in \cite{helffer-sjostrand-1} as well as in \cite{helffer-sjostrand-4} ultimately rely on the asymptotic constructions done in \cite{helffer-sjostrand-1} which proceed via a special FBI-transform. There exist several introductory texts to this field (e.g.\ \cite{dima}, \cite{helffer}, \cite{helffer2}) but none of these treats the case of eigenvalues degenerate in the harmonic approximation (in this case, the naive WKB-constructions which work for the non-degenerate eigenvalues, break down, see Remark 2.3.5 in \cite{helffer}). However, for the scalar case and $M=\mathbb{R}^n$, there exists a more elementary approach to the asymptotic WKB-constructions, avoiding the use of FBI-transform (see \cite{klein-schwarz}). This approach was also used in \cite{klein-rosen1} for the case of semi-classical difference operators on the scaled lattice $\hbar \mathbb{Z}^n$. The central point of this paper is to show that this method gives complete asymptotic solutions of WKB-type for a class of Schr\"odinger operators on general bundles. Since our results are local, we do not need further restrictions on $M$ (as e.g.\ compactness, completeness, bounded geometry) and not even a self-adjoint realization of $H_\hbar$. In particular, we do not discuss the tunneling problem for $H_\hbar$ with a multiwell potential $V$ thus avoiding the use of Agmon-type estimates for the true eigenfunctions of certain Dirichlet operators and estimates on the difference between WKB-type quasimodes and these eigenfunctions far from the well and with exponential precision. 
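Returning to the Witten example mentioned above, it may help orientation to record why it fits into this framework; this is a standard computation (see e.g.\ \cite{cycon}) and is not needed in the sequel. Writing $X=\operatorname{grad}\phi$ and using the Cartan formula $\mathrm{d}\,(\mathrm{d}\phi\lrcorner)+(\mathrm{d}\phi\lrcorner)\,\mathrm{d}=\mathcal{L}_{X}$, its adjoint, $\mathrm{d}(\mathrm{d}\phi)=0$ and $(\mathrm{d}\phi\wedge + \mathrm{d}\phi\lrcorner)^2=|\mathrm{d}\phi|^2$, one finds
\begin{equation*}
\bigl(\hbar(\mathrm{d}+\mathrm{d}^*) + \mathrm{d}\phi\wedge + \mathrm{d}\phi\lrcorner\bigr)^2
= \hbar^2(\mathrm{d}\mathrm{d}^* + \mathrm{d}^*\mathrm{d})
+ \hbar\bigl(\mathcal{L}_{X} + \mathcal{L}_{X}^*\bigr)
+ |\mathrm{d}\phi|^2 ,
\end{equation*}
which is of the form $\hbar^2 L + \hbar W + V\cdot\mathrm{id}_\mathcal{E}$ with $L$ the Hodge Laplacian, $W=\mathcal{L}_{X}+\mathcal{L}_{X}^*$ a symmetric endomorphism field of order zero determined by the Hessian of $\phi$, and $V=|\mathrm{d}\phi|^2$.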
\section{Outline of the Results} In everything what follows, let $(M, g)$ be a (smooth) Riemannian manifold and let $\mathcal{E}$ be a complex vector bundle over $M$ equipped with an inner product $\gamma$ (i.e.\ a positive definite hermitian form). This gives a unique volume density inducing an integral $\int_M$ for compactly supported continuous functions. The standard inner product on $\Gamma^\infty_c(M, \mathcal{E})$, the space of compactly supported smooth sections of $\mathcal{E}$, is then defined by \begin{equation}\label{standard_skalar} (u, v)_{\gamma} = \int_M \gamma[ u, v ]~~~~~~ u, v \in \Gamma_c^\infty(M, \mathcal{E})\, . \end{equation} Recall that a differential operator $L$ acting on sections of $\mathcal{E}$ is said to be of {\em Laplace type} if, in local coordinates $x$, it has the form \begin{equation}\label{Laplace-type} L = - \mathrm{id}_{\mathcal{E}} \sum_{ij} g^{ij}(x)\frac{\partial^2}{\partial x^i\partial x^j} + \sum_{j} b_j \frac{\partial}{\partial x^j} + c \end{equation} where $\bigl(g^{ij}(x)\bigr)$ is the inverse matrix of the metric $\bigl(g_{ij}(x)\bigr)$ and $b_j, c\in \Gamma^\infty (M, \mathrm{End} (\mathcal{E}))$ are endomorphism fields. Examples for Laplace-type operators are the Hodge-Laplacian on $p$-forms (in particular, for $p=0$, this is the Laplace-Beltrami operator) and the square of a generalized Dirac operator acting on spinors. \begin{remark} \label{RemarkLaplaceType} If a Laplace type operator $L$ is symmetric on $\Gamma^\infty_c(M, \mathcal{E})$ with respect to $(\,.\,,\,.\,)_{\gamma}$, i.e., if \begin{equation}\label{Lsymm} \int_M \gamma[ L u, v ] = \int_M \gamma[ u, Lv ] ~~~~~~~~ \text{for all} ~~~ u, v \in \Gamma_c^\infty(M, \mathcal{E}) \end{equation} then there is a unique metric connection $\nabla^\mathcal{E}$ on $\mathcal{E}$ and a unique endomorphism field $K \in \Gamma^\infty(M, \mathrm{End}(\mathcal{E}))$ such that \begin{equation} L = (\nabla^\mathcal{E})^* \nabla^\mathcal{E} + K \end{equation} (see e.g.\ \cite[Prop.\ 2.5]{bgv}). \end{remark} Our setup is the following. \begin{setup}\label{setup1} For $\hbar>0$, we consider Schr\"odinger operators $H_\hbar$ acting on $\Gamma^\infty(M, \mathcal{E})$ of the form \begin{equation}\label{schrodinger} H_\hbar = \hbar^2 L + \hbar W +V \cdot \mathrm{id}_\mathcal{E} \end{equation} where $L$ is a symmetric Laplace type operator as above, $W \in \Gamma^\infty(M, \mathrm{End}(\mathcal{E}))$ is a symmetric endomorphism field and $V \in C^\infty(M, \mathbb{R})$. Furthermore, we assume that the potential $V$ has a non-degenerate minimum at some fixed point $p\in M$ with $V(p) = 0$. \end{setup} We remark that the operator $H_\hbar$ given in \eqref{schrodinger} is not necessarily real, i.e.\ it does not commute with complex conjugation. However, under semi-classical quantization ($\xi \mapsto -i\hbar \mathrm{d}$ in some reasonable sense) its principal $\hbar$-symbol \begin{equation}\label{symbol} \sigma_H(q, \xi) = \bigl( |\xi|^2 + V(q)\bigr) \mathrm{id}_\mathcal{E} \end{equation} is both real and scalar ($|\cdot|$ denotes the norm on $T_q^*M$ induced by $g$). This is crucial for our construction. 
Thus our assumptions exclude Schr\"odinger operators with magnetic field (the operator $(i\hbar d + \alpha)^*(i\hbar d + \alpha)$, with a $1$-form $\alpha$ describing the magnetic potential, has non real principal $\hbar$-symbol, see e.g.\ \cite{helffer-kondyukov-1}) or with endomorphism valued potential $V$ as needed e.g.\ for molecular Hamiltonians in the Born-Oppenheimer approximation (see e.g.\ \cite{klein-seiler}). \begin{definition}[Local Harmonic Oscillator] \label{DefLocalHarmonicOscillator} In Setup \ref{setup1}, we associate to $H_\hbar$ the {\em local harmonic oscillator} $H_{p,\hbar}$ at the critical point $p$ of $V$. This is the differential operator on $C^\infty(T_p M, \mathcal{E}_p)$ defined by \begin{equation} H_{p,\hbar} f(X) := \Bigl( \hbar^2 \Delta_{T_p M} + \hbar \, W(p) + \frac{1}{2} \nabla^2V|_p (X, X)\Bigr) f(X), \end{equation} for $X \in T_pM$, where $\Delta_{T_p M}$ denotes the Laplacian on $T_p M$ induced by the metric $g_p$ on $T_pM$ and $\nabla^2 V|_p$ denotes the Hessian of $V$ at $p$ which is a bilinear form, so the operator $\nabla^2V|_p (X, X)$ acts by multiplication with a quadratic function. \end{definition} The local harmonic operator $H_{p,\hbar}$ can be considered as an essentially self-adjoint operator on $C^\infty(T_p M, \mathcal{E}_p)\cap L^2(T_pM, d\mu) \otimes \mathcal{E}_p$, where $d\mu$ denotes the volume density induced by the inner product $g_p$ on $T_pM$. It is well known that its spectrum scales with $\hbar$, and consists of the numbers \begin{equation} \label{LocalEigenvalues} \hbar E_{\alpha,\ell} = \hbar\bigl((2\alpha_1 +1) \lambda_1 + \dots + (2\alpha_n + 1) \lambda_n + \mu_\ell\bigr), ~~~~~ \alpha \in \mathbb{N}_0^n, ~~ \ell=1, \dots, \mathrm{rk}\,\mathcal{E} \end{equation} where $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $\frac{1}{2}\nabla^2 V|_p$ and $\mu_1, \dots, \mu_{\mathrm{rk} \mathcal{E}}$ are the eigenvalues of $W(p)$ (see e.g. \cite{cycon}, \cite[Section 8.10]{reed-simon-1}). \begin{remark}\label{polynom} It is clear that $H_{p,\hbar}$ maps polynomials to polynomials, i.e., it preserves $\mathcal{E}_p[T_p M] := \mathcal{E}_p \otimes \mathbb{C}[T_p M] \subset C^\infty(T_p M, \mathcal{E}_p)$, the space of polynomials on $T_p M$ with values in $\mathcal{E}_p$. \end{remark} To formulate our results, we need the following theorem (a proof can be found for example in \cite{helffer-sjostrand-1}). \begin{theorem}[The Eikonal Equation]\label{ThmEikonal} In Setup \ref{setup1}, for each sufficiently small neighborhood $U$ of $p$, there exists a unique function $\phi\in C^\infty(U, \mathbb{R})$ with $\phi(p) = 0$ that has a non-degenerate minimum at $p$ and satisfies the eikonal equation\footnote{We remark that the eikonal equation can be written as $\sigma_H(p, \xi) = 0$, where $\sigma$ denotes the semi-classical principal symbol, compare \eqref{symbol} above. This is important in the case of more general operators.} \begin{equation} \label{EikonalEquation} \bigl|\mathrm{d} \phi(q)\bigr|^2 = V(q), ~~~~~ q \in U, \end{equation} where $|\cdot|$ denotes the norm on $T_q^*M$ induced by $g_q$. \end{theorem} \begin{definition} \label{DefAdmissible} Let $U \subseteq M$ be an open neighborhood of $p$ and $\phi \in C^\infty(U, \mathbb{R})$. We call $(U, \phi)$ an {\em admissible pair} (with respect to $H_\hbar$), if \begin{enumerate} \item[1.)] $\phi$ is the unique positive solution of the eikonal equation \eqref{EikonalEquation} on $U$ as in Thm.\ \ref{ThmEikonal}. 
\item[2.)] $U$ is star-shaped around $p$ with respect to the vector field $\operatorname{grad} \phi$ in the following sense: If $\Phi_t$ is the flow of $\operatorname{grad} \phi$, then we have $\Phi_t(U) \subseteq U$ for all $t \leq 0$. \end{enumerate} \end{definition} Our main result is the following theorem, stating that for each eigenvalue of the local harmonic oscillator at $p$, we can obtain asymptotic eigenvectors up to any order in $\hbar$. \begin{theorem} \label{Theorem1} In Setup \ref{setup1}, let $(U, \phi)$ be an admissible pair and let $\hbar E_0$ be an eigenvalue of multiplicity $m_0$ of the local harmonic oscillator $H_{p, \hbar}$ at $p$ as given in Def.\ \ref{DefLocalHarmonicOscillator}. Set $H_{\phi, \hbar} = e^{\phi/\hbar} \circ H_\hbar \circ e^{-\phi/\hbar}$, the explicit form of which is calculated in Lemma \ref{ConjugationByPhi}. Then there exist a number $K \in \mathbb{N}_0/2$ and formal series \begin{equation*} \boldsymbol{a}_j = \hbar^{- K}\sum\nolimits_{k\in \mathbb{N}_0/2} \hbar^k a_{j,k}, ~~~~~\text{and}~~~~~ \boldsymbol{E}_j = \hbar\bigl(E_0 + \sum\nolimits_{k\in \mathbb{N}/2} \hbar^k E_{j,k}\bigr), \end{equation*} for $j=1, \dots, m_0$, where $a_{j,k} \in \Gamma^\infty(U, \mathcal{E})$ and $E_{j,k} \in \mathbb{R}$ such that in the sense of formal power series \begin{enumerate} \item[1.] we have \begin{equation*} H_{\phi, \hbar} \boldsymbol{a}_j = \boldsymbol{E}_j \boldsymbol{a}_j\, , \qquad j=1, \dots, m_0\; . \end{equation*} \item[2.] the series $\boldsymbol{a}_1, \dots, \boldsymbol{a}_{m_0}$ are asymptotically orthonormal, meaning that \begin{equation*} \mathcal{I}\Bigl( \gamma \bigl[ \chi \boldsymbol{a}_j, \chi \boldsymbol{a}_i\bigr]\Bigr) = \delta_{ji} \, , \qquad i, j=1, \dots, m_0, \end{equation*} as asymptotic series in $\hbar^{1/2}$. Here, $\chi \in \Gamma_c^\infty (U, [0,1])$ is any cutoff function with $\chi \equiv 1$ in a neighborhood of $p$ and $\mathcal{I}(\gamma[\cdot, \cdot])$ is the weighted integral defined in \eqref{extendI}. \end{enumerate} Furthermore, if $|\alpha|$ is even (or odd, respectively) in all pairs $(\alpha, \ell)$ such that $E_0 = E_{\alpha, \ell}$ defined in \eqref{LocalEigenvalues}, then no half-integer terms (or integer terms, respectively) occur in the series $ \boldsymbol{a}_j $. In both cases, no half-integer terms occur in the series $\boldsymbol{E}_j$\footnote{Results concerning the parity were first proven in \cite{helffer-sjostrand-1}, correcting a mistake in \cite{simon-1}, see \cite{simon-1-e}.}. \end{theorem} \begin{remark} The lowest order in $\hbar$ in the expansion of $\boldsymbol{a}_j$ is given by $K = \max_\alpha |\alpha|/2$ where $\alpha$ runs over all multi-indices such that $E_{\alpha,\ell} = E_0$ for some $\ell=1, \dots, \mathrm{rk}\,\mathcal{E}$. \end{remark} \begin{remark} \label{ExplainingThm1} Properties 1 and 2 in Thm.
\ref{Theorem1} are equivalent to the following statements: For each $U^\prime \subset \subset U$ open and compactly contained in $U$, each $N \in \mathbb{N}/2$ and each $\hbar_0>0$, there exist constants $C_1, C_2>0$ such that for each $\hbar < \hbar_0$ and all $1 \leq i, j \leq m_0$ we have \begin{equation} \label{Asymptotics} \Bigl| \Bigl( H_\hbar - \hbar\bigl(E_0 + \sum\nolimits_{k=1/2}^N \hbar^k E_{j,k}\bigr) \Bigr) e^{-\phi/\hbar} \sum\nolimits_{k=0}^N \hbar^k a_{j,k} \Bigr| \leq C_1 \,e^{-\phi/\hbar} \hbar^{K+N+ 3/2} \end{equation} uniformly on $U^\prime$ for each $j = 1, \dots, m_0$ and \begin{equation} \label{AsymptoticOrthonormal} \int_{U^\prime} \gamma \Bigl[\sum\nolimits_{k=0}^N \hbar^k a_{i,k}~, ~\sum\nolimits_{k=0}^N \hbar^k a_{j,k} \Bigr] e^{-2\phi/\hbar} \leq \hbar^{2K+n/2}(\delta_{ij} + C_2 \hbar^{N+1/2})\; . \end{equation} In both equations above, the sums are meant to run in half-integer steps. \end{remark} Theorem \ref{Theorem1} allows us to construct good quasimodes for $H_\hbar$, which are essential in the discussion of tunneling problems. These are derived by a Borel procedure with respect to $\hbar^{1/2}$. \begin{corollary} Under the assumptions of Thm.\ \ref{Theorem1}, for any open neighborhood $U^\prime \subset \subset U$ of $p$, there are functions\footnote{for asymptotic series with coefficients in topological vector spaces, compare \cite[Thm.\ 1.2.6]{hormander}.} $a_j \in C^\infty((0, \hbar_0), \Gamma^\infty(M, \mathcal{E}))$ and $E_j \in C^\infty((0, \hbar_0), \mathbb{R})$, $j = 1, \dots, m_0$, such that $a_j(\hbar)$ is compactly supported in $U$ for each $\hbar< \hbar_0$ and \begin{equation*} a_j (\hbar) \sim \hbar^{-n/4} e^{-\phi/\hbar}\boldsymbol{a}_j\quad\text{and}\quad E_j(\hbar) \sim \boldsymbol{E}_j \end{equation*} on $U^\prime$ as $\hbar \searrow 0$ for $\boldsymbol{a}_j$ and $\boldsymbol{E}_j$ given in Thm. \ref{Theorem1}. Moreover, we have \begin{equation} H_\hbar a_j(\hbar) = E_j(\hbar) a_j(\hbar) + o(\hbar^\infty) \end{equation} uniformly on $M$ and \begin{equation} H_\hbar a_j(\hbar) = E_j(\hbar) a_j(\hbar) + o(\hbar^\infty e^{-\phi/\hbar}) \end{equation} uniformly on $U^\prime$, as well as the asymptotic orthonormality relation \begin{equation} \bigl(a_i, a_j\bigr)_\gamma = \delta_{ij} + o(\hbar^\infty) \end{equation} for the inner product \eqref{standard_skalar}. \end{corollary} \begin{remark} We do not make any claim that the quasimodes constructed above are in any sense asymptotic to actual eigenfunctions of the Schr\"odinger operator \eqref{schrodinger}. In fact, this statement does not make sense as it stands, as one would need to specify a self-adjoint realization of the operator on a suitable Hilbert space, which need not even be unique without further assumptions on the manifold and the operator. Even though statements that formal asymptotics actually belong to eigenfunctions hold in great generality, this is beyond the scope of our paper. We refer to \cite{helffer-sjostrand-1} for a discussion in the scalar case in $\mathbb{R}^n$ or on a compact Riemannian manifold. \end{remark} \begin{remark} \label{RemarkTransport0} The coefficients $a_{j,k}$ from Thm.\ \ref{Theorem1} necessarily fulfill the recursive transport equations \begin{equation} \label{TransportEquations} (\nabla_{2\operatorname{grad}\phi}^\mathcal{E} + W + \Delta \phi - E_0){a}_{j, k} = - L {a}_{j,k-1} + \sum\nolimits_{i=1/2}^{k} E_{j, i}\,{a}_{j,k-i} \end{equation} where $\nabla^\mathcal{E}$ is the connection from Remark \ref{RemarkLaplaceType}.
Hence one could try and solve these equations in order to prove our theorem. This works indeed well in the non-degenerate case, i.e.\ when the multiplicity of the eigenvalue $\hbar E_0$ of the local harmonic oscillator is equal to one (see \cite{dima} or \cite[Ex.\ 8.3]{matthias}). However, in the degenerate case, i.e. when the multiplicity of the eigenvalue $\hbar E_0$ of the local harmonic oscillator is greater that one, this approach runs into problems . It is not clear how to derive enough expansions in this case. We stick we therefore stick to a different approach as described in the introduction. \end{remark} The paper is organized as follows. In Section \ref{Kapitel_2} we introduce the Taylor series map $\tau_p$ on sections and differential operators with respect to a fixed normal geodesic chart $x$ at $p\in M$, leading to the space $\mathcal{E}_p[[x]]$ of formal power series in $x$ with values in $\mathcal{E}_p$. We use the stationary phase method in its real form to define an inner product on $\mathfrak{S} := \mathcal{E}_p[[x]](\!(\hbar^{1/2})\!)$, the space of formal Laurent series in the variable $\hbar^{1/2}$ that have elements of $\mathcal{E}_p[[x]]$ as coefficients. In Section \ref{Section4}, we define the rescaling operator $R$, setting $x = \hbar^{1/2}y$, which maps $\mathfrak{S}$ to $\mathfrak{S}_0$, a subspace of the space $\mathcal{E}_p[y](\!(\hbar^{1/2})\!)$ of Laurent series in the variable $\hbar^{1/2}$ with coefficients in the polynomial space $\mathcal{E}_p[y]$. In Section \ref{Kapitel3} we use Taylor series and the rescaling operator to define the operator $\hbar Q= R \circ\tau_p(H_{\phi, \hbar}) \circ R^{-1}$ on $\mathfrak{S}_0$ where $H_{\phi, \hbar} = e^{\phi/\hbar} \circ H_\hbar \circ e^{-\phi/\hbar}$. To a given eigenvalue of the leading order $Q_0$, we then construct eigenfunctions and eigenvalues of $Q$ and $\tau_p(H_{\phi, \hbar})$ and prove results on the absence of integer or half-integer order terms in the expansion with respect to $\hbar$. Finally, the proof of Theorem \ref{Theorem1} is given in Section \ref{Kapitel4}. \section{Notation and first Constructions}\label{Kapitel_2} Throughout, we work in Setup \ref{setup1} and we fix an admissible pair $(U, \phi)$. With respect to a chart $x$ with $x(p) = 0$, any function $f\in C^\infty(U)$ has a Taylor series at $p$ \begin{equation}\label{taylorf} f \sim \sum\nolimits_{\alpha \in \mathbb{N}_0^n} f_\alpha x^\alpha=: \tau_{p,x}(f)\subset \mathbb{C}[[x]], ~~~~~ f_\alpha \in \mathbb{C}\, , \end{equation} which is determined by the property that for each neighborhood $U^\prime$ of $p$ compactly contained in the domain of $x$ and for each $N\in\mathbb{N}_0$, there exists a constant $C_{N,U^\prime}>0$ such that \[ \Bigl|f-\sum\nolimits_{|\alpha|\leq N} f_\alpha x^\alpha \Bigr| \leq C_{N,U^\prime} |x|^{N+1}\quad\text{ uniformly on }\, U\, .\] By $\mathbb{C}[[x]]$, we denote the space of formal power series in the $n$ variables $x^1, \ldots, x^n$ and, by abuse of notation, we identify the chart $x: U^\prime \rightarrow \mathbb{R}^n$ with variables $x^1, \ldots x^n\in \mathbb{R}^n$. We call $\tau_{p,x}(f)\in \mathbb{C}[[x]]$ defined in \eqref{taylorf} the Taylor series of $f$ at $p$ (with respect to $x$). If $x$ and $\tilde{x}$ are normal coordinates with respect to the Riemannian metric, then $x=Q\circ\tilde{x}$ for some matrix $Q\in O(n)$. On the other hand, $Q$ induces an algebra isomorphism \[ \tilde{Q}: \mathbb{C}[[x]] \rightarrow \mathbb{C}[[\tilde{x}]] \] via $\tilde{Q}(x^i) = \sum_j Q^i_j\tilde{x}^j$. 
Thus $\tau_{p,\tilde{x}} = \tilde{Q}\circ\tau_{p,x}$. \begin{definition}\label{setup3} We choose normal coordinates $x$ such that for our fixed solution of \eqref{EikonalEquation}, we have \begin{equation}\label{taupphi2} \tau_p (\phi) = \sum\nolimits_{\nu=1}^n \lambda_\nu x_\nu^2 + O\bigl(|x|^3\bigr)\, , \qquad \lambda_1, \ldots \lambda_n >0\, , \end{equation} near $p$ and write $\tau_p$ instead of $\tau_{p,x}$. We call $\tau_p$ the {\em geodesic Taylor series} map. \end{definition} All $\lambda_\nu\in \mathbb{R}$ in \eqref{taupphi2} are strictly positive since by Setup \ref{setup1}, the minimum of $V$ at $p$ is non-degenerate. \begin{remark}\label{CTpM} The map \[ x^i \mapsto dx^i|_p\in T^*_pM\cong \{\text{ homogeneous polynomials of degree 1 on } T_pM\,\} \] induces an $\mathbb{C}$-algebra-isomorphism $\Phi_x$ from $\mathbb{C}[x]$ to $\mathbb{C}[T_pM]$, the space of complex valued polynomial functions on $T_pM$. $\Phi_x$ can be extended to the respective completions with respect to the valuations by polynomial degree, $\mathbb{C}[[x]]$ and $\mathbb{C}[[T_pM]]$. It is easy to see that $\Phi_x\circ \tau_{p,x} = \Phi_{\tilde{x}}\circ \tau_{p,\tilde{x}}$ for all normal coordinates $x$ and $\tilde{x}$. \end{remark} To a section $u\in \Gamma^\infty (M, \mathcal{E})$ we associate a geodesic Taylor series at $p$ in the following way: We trivialize $\mathcal{E}$ by identifying the fibers along geodesics emanating from $p$ by parallel translation with respect to the connection $\nabla^\mathcal{E}$ given in Remark \ref{RemarkLaplaceType}. Near $p$, $u$ can then be seen as a smooth function with values in the vector space $\mathcal{E}_p$ (the fiber of $\mathcal{E}$ over the point $p$). In this sense it has a Taylor series $\tau_p(u)$ at $p$ with respect to the normal coordinates $x$ of Def.\ \ref{setup3}, which will then map to the space $\mathcal{E}_p[[x]] := \mathcal{E}_p\otimes \mathbb{C}[[x]]$. Finally, under the trivialization of $\mathcal{E}$ above, a differential operator $P$ of order $k\in\mathbb{N}$ acting on sections of $\mathcal{E}$ can be seen as a differential operator acting on $\mathcal{E}_p$-valued functions. By Taylor expansion of its coefficients, it has a geodesic Taylor series $\tau_p(P)$ of the form \begin{equation}\label{tau_p_von_P} P \sim \sum_{\natop{\alpha, \beta\in\mathbb{N}_0^n}{|\beta|\leq k}} P_{\alpha\beta} \, x^\alpha \frac{\partial^{|\beta|}}{\partial x^\beta}=: \tau_p(P)\subset \mathscr{D}(\mathcal{E}_p)[[x]] ~~~~~~~ \text{where} ~~ P_{\alpha\beta} \in \mathrm{End}(\mathcal{E}_p) \end{equation} and $\mathscr{D}(\mathcal{E}_p)[[x]]$ denotes the space of differential operators with coefficients in $\mathrm{End}(\mathcal{E}_p)[[x]]$. These operators act on the space $\mathcal{E}_p[[x]]$ by formal derivation term by term (inducing a left-module structure on that space). Let us remark that by construction, we have \begin{equation} \label{MultiplicationProperty} \tau_p(Pu) = \tau_p(P)\tau_p(u). \end{equation} In all this, we define $\tau_p$ with respect to the chart from Def.\ \ref{setup3}. \begin{definition}\label{Def_pgrading} We say that $\boldsymbol{P}\in \mathscr{D}(\mathcal{E}_p)[[x]]$ is homogeneous of degree $\deg_{\mathscr{D}} \boldsymbol{P} = j$, $j\in \mathbb{Z},$ if and only if $\boldsymbol{P}$ maps homogeneous polynomials of degree $k$ to homogeneous polynomials of degree $k+j$. 
In this case, $\boldsymbol{P}$ is a (necessarily finite) sum \[ \boldsymbol{P} = \sum_{\natop{\alpha, \beta\in \mathbb{N}_0^n}{|\alpha| - |\beta| = j}} P_{\alpha\beta}x^\alpha \frac{\partial^{|\beta|}}{\partial x^\beta} \; ,\qquad P_{\alpha\beta} \in \mathrm{End}(\mathcal{E}_p)\, . \] \end{definition} We recall Borel's Theorem which (for the case $M=\mathbb{R}^n$) is proven for example in \cite[Thm.\ 1.2.6]{hormander}. Since the statement is purely local, it generalizes to our setting and gives \begin{corollary} \label{BorelTheorem} The map $\tau_p: \Gamma^\infty (U, \mathcal{E}) \rightarrow \mathcal{E}_p [[x]]$ is surjective. \end{corollary} We define a weighted inner product on the space $\Gamma_c^\infty(U, \mathcal{E})$ of sections, compactly supported on $U$, by \begin{equation}\label{Scalar_phi} \bigl( u, w \bigr)_{\gamma,\phi} := \int_U \gamma[u, w] e^{-2\phi/\hbar}, \end{equation} where $\phi$ is our fixed solution of \eqref{EikonalEquation}. (To be precise, this defines a whole family of inner products, depending on $\hbar \in (0, \infty)$.) We introduce the space \begin{equation} \label{DefinitionKa} \mathfrak{S} := \mathcal{E}_p[[x]] (\!(\hbar^{1/2})\!) \end{equation} of formal Laurent series in $\hbar^{1/2}$ with coefficients in $\mathcal{E}_p[[x]]$ which is a vector space over the field $\mathbb{C}(\!(\hbar^{1/2})\!)$ of formal Laurent series in the variable $\hbar^{1/2}$. On $\mathfrak{S}$ we shall define a non-degenerate hermitian form $\bigl(\,\cdot\,,\,\cdot\,\bigr)_{\mathfrak{S},\phi}$ with values in $\mathbb{C}(\!(\hbar^{1/2})\!)$, by using the stationary phase approximation (see e.g.\ \cite{gs}). We remark that for each admissible pair $(U, \phi)$, there is a Morse-chart $\kappa$ defined on some neighborhood $U^\prime \subset U$ of $p$ with $\phi=|\kappa|^2$. This chart is used in the following theorem. \begin{theorem}[Method of Stationary Phase] For $\hbar>0$ and $f \in C^\infty_c(U)$, we set \begin{equation}\label{Iphi} I_\phi(f, \hbar) := \hbar^{-n/2} \int_M f \, e^{-2\phi/\hbar} \, . \end{equation} Then, at $\hbar = 0$, the function $I_\phi(f; \hbar)$ has the asymptotic expansion \begin{equation} \label{StationaryPhaseExpansion} I_\phi (f; \hbar)~~ \sim~~ \left( \pi/2 \right)^{n/2} \sum_{k=0}^\infty \hbar^k \frac{1}{k!} \, \Delta^k \Bigl[ (f\circ \kappa^{-1})(0) \cdot \det\nolimits^{1/2} \bigl(g_{ij}(0)\bigr)\Bigr] \end{equation} where $\Delta$ is the Laplacian on $\mathbb{R}^n$ and $g_{ij}$ is the matrix of the metric with respect to the Morse-chart $\kappa$. \end{theorem} In particular, we obtain a linear map \begin{equation}\label{mathcalI} \mathcal{I}: C_c^\infty(U) \longrightarrow \mathbb{C}(\!(\hbar^{1/2})\!)\, , \qquad f \longmapsto \text{right hand side of}~~\eqref{StationaryPhaseExpansion}\, . \end{equation} By the above formula, it is clear that the asymptotic expansion of $I_\phi(f, \hbar)$ only depends on the Taylor series of $f, \phi$ and $\det\,^{\!\!1/2} \bigl( g_{ij}(\kappa)\bigr)$ at $p\in M$. Thus the kernel of $\tau_p$ is contained in the kernel of $\mathcal{I}$ defined in \eqref{mathcalI}. Since $\tau_p$ is surjective by Corollary \ref{BorelTheorem}, there is a unique linear map \begin{equation}\label{hatI} \hat{\mathcal{I}}: \mathbb{C}[[x]] \rightarrow \mathbb{C}(\!(\hbar^{1/2})\!) \end{equation} such that the diagram \begin{equation} \label{diagram} \xymatrix{ C_c^\infty(U) \ar[rr]^{\mathcal{I}} \ar[d]_{\tau_p} & &\mathbb{C}(\!(\hbar^{1/2})\!)\\ \mathbb{C}[[x]] \ar[urr]_{\hat{\mathcal{I}}} } \end{equation} commutes. 
We use this construction for $\gamma[u, w]\in C_c^\infty (U)$ where $u, w \in \Gamma_c^\infty(U, \mathcal{E})$. The inner product $\gamma$ on $\mathcal{E}$ has a Taylor series \[ \tau_p(\gamma) = \sum_{\delta\in \mathbb{N}_0^n} \gamma_\delta x^\delta \in (\mathcal{E}_p^* \otimes \mathcal{E}_p^*)[[x]] \] which can be interpreted as a positive definite hermitian map $\tau_p(\gamma): \mathcal{E}_p[[x]] \times \mathcal{E}_p[[x]] \longrightarrow \mathbb{C}[[x]]$ by setting \begin{equation}\label{taupgamma} \tau_p(\gamma)\left[ \sum_\alpha u_\alpha x^\alpha, \sum_{\beta} w_\beta x^\beta \right] := \sum_{\alpha,\beta,\delta\in\mathbb{N}_0^n} \gamma_\delta[u_\alpha, w_\beta] x^{\alpha + \beta + \delta}. \end{equation} By sesquilinearity, $\tau_p(\gamma)$ extends to a positive definite hermitian map \begin{equation}\label{taupgammadef} \tau_p (\gamma): \mathfrak{S} \times \mathfrak{S} \longrightarrow \mathbb{C}[[x]](\!(\hbar^{1/2})\!), \end{equation} where $\mathfrak{S}$ was defined in \eqref{DefinitionKa}. By construction, we have \begin{equation} \label{CommutativityScalarProduct} \tau_p(\gamma)[\tau_p(u), \tau_p(w)] = \tau_p(\gamma[u, w]). \end{equation} Now we define the hermitan form $(\, \cdot\,,\,\cdot\,)_{\mathfrak{S},\phi}$ first on $\mathcal{E}_p[[x]]$ by \begin{equation}\label{scalarprodKdef} \left( \boldsymbol{u}, \boldsymbol{w} \right)_{\mathfrak{S},\phi} := \hat{\mathcal{I}} \bigl( \tau_p(\gamma)\!\left[ \boldsymbol{u}, \boldsymbol{w}\right]\bigr)\, ,\qquad \boldsymbol{u}, \boldsymbol{w} \in \mathcal{E}_p[[x]]\, . \end{equation} By commutativity of the diagram \eqref{diagram} and \eqref{CommutativityScalarProduct}, it satisfies \begin{equation}\label{scalarprodmitI} \left( \tau_p(u), \tau_p(w) \right)_{\mathfrak{S},\phi} = \mathcal{I}(\gamma[u, w]) \end{equation} and naturally extends to a hermitian form \begin{equation}\label{scalarprodKdef2} ( \,\cdot\,, \,\cdot\, )_{\mathfrak{S},\phi}: \mathfrak{S} \times \mathfrak{S} \longrightarrow \mathbb{C}(\!(\hbar^{1/2})\!)\, . \end{equation} Moreover, by the sesquilinearity of $\gamma$, $\mathcal{I}(\gamma[\cdot, \cdot])$ extends to a hermitian form \begin{equation}\label{extendI} \mathcal{I}(\gamma[\cdot, \cdot])\ : \mathcal{F}\times \mathcal{F} \rightarrow \mathbb{C} (\!(\hbar^{1/2})\!) \end{equation} on the space $\mathcal{F}:=\Gamma_c^\infty (U, \mathcal{E}) ((\hbar^{1/2}))$ of Laurent series in $\hbar^{1/2}$ with coefficients in $ \Gamma_c^\infty (U, \mathcal{E})$. \begin{lemma}\label{skpnondegLem} The hermitian form $( \,\cdot\,, \,\cdot\, )_{\mathfrak{S},\phi}$ defined in \eqref{scalarprodKdef2} is non-degenerate. \end{lemma} \begin{proof} We have to show that $(\boldsymbol{u}, \boldsymbol{w})_{\mathfrak{S},\phi} = 0$ implies $\boldsymbol{u}=0$ or $\boldsymbol{v} = 0$ for all $\boldsymbol{u}, \boldsymbol{w} \in \mathfrak{S}$. It suffices to show that $( \,\cdot\,, \,\cdot\, )_{\mathfrak{S},\phi}$ is non-degenerate on $\mathcal{E}_p[[x]]$, so we may assume $\boldsymbol{u}, \boldsymbol{w} \in \mathcal{E}_p[[x]]$. Since the map $\tau_p: \Gamma_c^\infty(U, \mathcal{E}) \longrightarrow \mathcal{E}_p[[x]]$ is surjective, we may assume that $\boldsymbol{u} = \tau_p(u)$ and $\boldsymbol{w} = \tau_p(w)$ for some $u, v \in \Gamma_c^\infty(U, \mathcal{E})$. From the definition it is clear that $(\boldsymbol{u}, \boldsymbol{w})_{\mathfrak{S},\phi} = 0$ if and only if the function $\gamma[u, w]$ vanishes to infinite order at the point $p$. In this case, however, we necessarily have that either $u$ or $w$ vanish to infinite order at $p$. 
Therefore $\tau_p(u) = \boldsymbol{u}= 0$ or $\tau_p(w) = \boldsymbol{w} = 0$. \end{proof} The reason to define $( \,\cdot\,, \,\cdot\, )_{\mathfrak{S},\phi}$ as we did is that it has the following property. \begin{proposition}[Symmetry] \label{SymmetrieOperator} Let $P$ be a differential operator, acting on sections of $\mathcal{E}$, which is symmetric on $\Gamma_c^\infty(U, \mathcal{E})$ with respect to the inner product $\bigl(\,.\,,\,.\,\bigr)_{\gamma}$ defined in \eqref{standard_skalar}. For our fixed solution $\phi$ of \eqref{EikonalEquation}, we define the conjugated operator $P_\phi$ on $\Gamma^\infty (U, \mathcal{E})$ with respect to $e^{-\phi/\hbar}$ by \begin{equation} \label{ConjugationByPhi} P_\phi u := e^{\phi/\hbar} P \bigl[e^{-\phi/\hbar}u\bigr]\, , \qquad u\in \Gamma^\infty (U, \mathcal{E})\; . \end{equation} Then $P_\phi$ is symmetric on $\Gamma_c^\infty(U, \mathcal{E})$ with respect to $(\,.\,,\,.\,)_{\gamma, \phi}$ (defined above) and its Taylor series $\tau_p(P_\phi)$ (see \eqref{tau_p_von_P}) acts on $\mathfrak{S}$ and is symmetric with respect to the sesquilinear form $( \,\cdot\,, \,\cdot\, )_{\mathfrak{S}, \phi}$ defined in \eqref{scalarprodKdef2}. \end{proposition} \begin{proof} It is clear that $\tau_p(P_\phi)\in \mathscr{D}(\mathcal{E}_p)[[x]]$ extends to a differential operator on $\mathfrak{S}$. By sesquilinearity, it suffices to prove the statement on the symmetry for elements of $\mathfrak{S}$ which are constant in $\hbar^{1/2}$. Since by Corollary \ref{BorelTheorem} all elements of $\mathcal{E}_p[[x]]$ can be written as Taylor series $\tau_p(u)$ for some $u \in \Gamma_c^\infty(U, \mathcal{E})$, we need to prove the statement only for elements of this type. Now by \eqref{MultiplicationProperty} and \eqref{scalarprodmitI} \begin{align} \bigl( \tau_p(P)\tau_p(u), \tau_p(w) \bigr)_{\mathfrak{S},\phi} &=\bigl( \tau_p(Pu), \tau_p(w) \bigr)_{\mathfrak{S},\phi}\nonumber\\ &= \mathcal{I}\bigl(\gamma[Pu, w]\bigr) = \text{asymptotic expansion of}~ I_\phi\bigl(\gamma[Pu, w], \hbar\bigr)\label{lemma4.5.1}\; . \end{align} By \eqref{Iphi}, \eqref{Scalar_phi}, \eqref{standard_skalar} and the symmetry of $P$ with respect to $\bigl(\,.\,,\,.\,\bigr)_{\gamma}$ we get for $u,w\in\Gamma_c^\infty(U, \mathcal{E})$ \begin{align*} I_\phi\bigl(\gamma[Pu, w], \hbar\bigr) &= \hbar^{-n/2} \bigl(P_\phi u, w\bigr)_{\gamma,\phi} = \hbar^{-n/2} \bigl(P e^{-\phi/\hbar}u, e^{-\phi/\hbar}w\bigr)_{\gamma} \\ &= \hbar^{-n/2} \bigl(e^{-\phi/\hbar}u, P e^{-\phi/\hbar}w\bigr)_{\gamma} = \hbar^{-n/2} \bigl(u, P_\phi w\bigr)_{\gamma,\phi} = I_\phi\bigl(\gamma[u, Pw], \hbar\bigr) \end{align*} and using \eqref{lemma4.5.1} again gives the stated result. \end{proof} \section{Rescaling} \label{Section4} In order to analyze degenerate eigenvalues, we now consider the rescaled variable $y=\hbar^{-1/2}x$ instead of $x$, where $x$ are the coordinates from Def.\ \eqref{setup3}. This is motivated by the basic scaling property of $H_{p,\hbar}$, giving the scaling in $\mathrm{spec} (H_{p,\hbar})$ and of the associated eigenfunctions. The important observation here is the following: Given a formal power series in $\hbar$ with coefficients in $\Gamma^\infty(U, \mathcal{E})$ as in Thm.\ \ref{Theorem1}, its Taylor series will be an element of $\mathfrak{S}$, i.e., a formal power series in $\hbar^{1/2}$ whose coefficients are formal power series in the variable $x$. 
After rescaling, however, it becomes an element of a space $\mathfrak{S}_0$ (defined in \eqref{DefinitionKa0} below), which is the space of formal power series in $\hbar$ whose coefficients are {\em polynomials} in the rescaled variable $y$. In this section, we explain the details. Again, throughout this section, we work in Setup \ref{setup1} and fix an admissible pair $(U, \phi)$. \begin{definition}[Rescaling Operator]\label{rescaleDef} We define the {\em rescaling operator} $R: \mathfrak{S} \longrightarrow \mathcal{E}_p[y] (\!(\hbar^{1/2})\!)$ by \begin{equation*} \boldsymbol{u} = \sum_{\natop{k \in \mathbb{Z}/2}{k\geq -M}} \hbar^k \sum_{\alpha \in \mathbb{N}_0^n} u_{\alpha, k} x^\alpha\quad \longmapsto \quad R\boldsymbol{u} = \sum_{\natop{k \in \mathbb{Z}/2}{k\geq -M}}\sum_{\alpha\in \mathbb{N}_0^n}\hbar^{k + |\alpha|/2} u_{\alpha,k} y^\alpha \; . \end{equation*} \end{definition} \begin{proposition}[Image of $R$]\label{rescaleProp} The rescaling operator $R$ given in Definition \ref{rescaleDef} is well-defined, injective and its image is the space \begin{equation} \label{DefinitionKa0} \mathfrak{S}_0 := \Bigl\{ \hbar^{-K}\!\!\!\sum_{j \in \mathbb{N}_0/2} P_j(y) \, \hbar^j \in \mathcal{E}_p[y] (\!(\hbar^{1/2})\!) \mid K \in \mathbb{N}_0/2 ,~ \mathrm{deg} \,P_j(y) \leq 2j \Bigr\} \end{equation} \end{proposition} \begin{proof} An element $\boldsymbol{u} \in \mathfrak{S}$ can be written as \begin{equation} \label{formofu} \boldsymbol{u} = \hbar^{-K}\!\!\!\sum_{k \in \mathbb{N}_0/2} \hbar^k \sum_{\alpha \in \mathbb{N}_0^n} u_{\alpha, k} x^\alpha \end{equation} for some $K\in\mathbb{N}_0$. Hence \begin{equation}\label{Rhatu} R \boldsymbol{u} = \hbar^{-K}\!\!\!\sum_{k \in \mathbb{N}_0/2} \hbar^k \sum_{\alpha \in \mathbb{N}_0^n} u_{\alpha, k} \,y^\alpha \,\hbar^{|\alpha|/2} = \hbar^{-K}\!\!\!\sum_{k \in \mathbb{N}_0/2} \hbar^k \sum_{j=0}^{2k} \sum_{|\alpha|=j} u_{\alpha, k-j/2} \,y^\alpha, \end{equation} so $R \boldsymbol{u}\in\mathcal{E}_p[y](\!(\hbar^{1/2})\!)$ and if $R\boldsymbol{u} = 0$, then each $u_{\alpha, k}$ has to be zero, i.e.\ $\boldsymbol{u} = 0$. This shows that $R$ is well-defined and injective. Furthermore, it is clear from \eqref{Rhatu} that $R\mathfrak{S}\subset \mathfrak{S}_0$. On the other hand, given an element \begin{equation*} \boldsymbol{w} = \hbar^{-K}\!\!\!\sum_{j \in \mathbb{N}_0/2} \hbar^k \sum_{|\alpha|\leq 2k} w_{\alpha, k} \,x^{\alpha}\in \mathfrak{S}_0\, , \end{equation*} its preimage $\boldsymbol{u}\in \mathfrak{S}$ under $R$ in the form \eqref{formofu} has the coefficients $u_{\alpha, k} := w_{\alpha, k + |\alpha|/2}$. \end{proof} \begin{definition}\label{DefProdK_0} By Prop.\ \ref{rescaleProp}, we can define the hermitian form \begin{equation}\label{binlinK0} ( \,\cdot\,, \,\cdot\,)_{\mathfrak{S}_0,\phi}:= (R^{-1})^*\bigl[( \,\cdot\,, \,\cdot\,)_{\mathfrak{S},\phi}\bigr] : \mathfrak{S}_0\times \mathfrak{S}_0\rightarrow \mathbb{C}(\!(\hbar^{1/2})\!) \end{equation} as the pullback of the inner product $( \,\cdot\,, \,\cdot\,)_{\mathfrak{S},\phi}$ on $\mathfrak{S}$ defined in \eqref{scalarprodKdef} under the inverse rescaling operator $R^{-1}: \mathfrak{S}_0 \rightarrow \mathfrak{S}$. \end{definition} Analogously to Def.\ \ref{rescaleDef}, we define (with $x={\hbar}^{1/2}y$) rescaling operators from $(\mathcal{E}^*_p\otimes \mathcal{E}^*_p)[[x]](\!(\hbar^{1/2})\!)$ to $(\mathcal{E}^*_p\otimes \mathcal{E}^*_p)[y](\!(\hbar^{1/2})\!)$ and from $\mathbb{C}[[x]](\!(\hbar^{1/2})\!)$ to $\mathbb{C}[y](\!(\hbar^{1/2})\!)$ which we also denote by $R$. 
Then, for $\tau_p(\gamma)\in (\mathcal{E}^*_p\otimes \mathcal{E}^*_p)[[x]]$ given in \eqref{taupgamma}, \eqref{taupgammadef}, we get \begin{equation}\label{gammak} R\circ \tau_p(\gamma) = \sum_{\delta\in \mathbb{N}_0^n} \hbar^{|\delta|/2} \gamma_\delta y^\delta =: \sum_{k\in\mathbb{N}_0/2} \hbar^k \gamma_k \quad\text{with}\quad \gamma_k := \sum_{|\delta|= 2k} \gamma_\delta y^\delta\in (\mathcal{E}^*_p\otimes \mathcal{E}^*_p)[y] \, . \end{equation} Remember that $\tau_p$ always refers to the coordinates $x$ as defined in Def.\ \ref{setup3}. We also introduce the positive definite hermitian form $\boldsymbol{\gamma}: \mathfrak{S}_0\times \mathfrak{S}_0 \rightarrow \mathbb{C}[y](\!(\hbar^{1/2})\!)$, by setting for $\boldsymbol{u}, \boldsymbol{v}\in\mathfrak{S}_0$ \begin{align}\label{tildegammaDef1} \boldsymbol{\gamma}[\boldsymbol{u}, \boldsymbol{v}] &:= \hbar^{-K_1-K_2}\sum_{j\in \mathbb{N}_0/2} \hbar^j \sum_{k+\ell+ r = j} \gamma_r[u_k, v_\ell] \quad \text{where}\\ \gamma_r[u_k, v_\ell] &:= \sum_{\natop{\alpha, \beta, \delta \in\mathbb{N}_0^n}{|\alpha|\leq 2k, |\beta|\leq 2\ell, |\delta| = 2r}} \gamma_\delta [u_{k,\alpha}, v_{\ell, \beta}] y^{\alpha + \beta + \delta}\in \mathbb{C}[y]\quad \text{for } \gamma_\delta \text{ as in } \eqref{gammak}\, ,\label{tildegammaDef2}\\ \boldsymbol{u}&=\hbar^{-K_1}\sum_{k\in\mathbb{N}_0/2}\hbar^k u_k\quad \text{with}\quad u_k = \sum_{|\alpha|\leq 2k} u_{k, \alpha} y^\alpha \quad \text{and}\label{tildegammaDef3}\\ \boldsymbol{v}&=\hbar^{-K_2}\sum_{\ell\in\mathbb{N}_0/2}\hbar^\ell v_\ell\quad \text{with}\quad v_\ell = \sum_{|\beta|\leq 2\ell} v_{\ell, \beta} y^\beta\; . \label{tildegammaDef4} \end{align} Then for all $\boldsymbol{u}, \boldsymbol{v}\in\mathfrak{S}$, we have \begin{equation}\label{Rundgammatilde} \boldsymbol{\gamma}[R\boldsymbol{u}, R\boldsymbol{v}] = R\bigl(\tau_p(\gamma)[\boldsymbol{u}, \boldsymbol{v}]\bigr)\quad\text{and}\quad \mathrm{deg}\,\gamma_r[u_k, v_\ell]\leq 2(r+k+\ell). \end{equation} From \eqref{tildegammaDef2} it follows that if $u_k$ has the parity\footnote{We say that $u\in \mathcal{E}_p[y]$ (or $\mathbb{C}[y]$) has parity $\pm 1$, if and only if $u(y) = \pm u(-y)$, i.e.\ $u$ only contains monomials of even (for $+$) or of odd degree (for $-$).} $(-1)^{2k}$ and $v_\ell$ has the parity $(-1)^{2\ell}$, then $\gamma_r[u_k, v_\ell]$ has the parity $(-1)^{2k + 2\ell + 2r}$. \begin{proposition} \label{rescaleProp2} The hermitian form $( \,\cdot\,, \,\cdot\,)_{\mathfrak{S}_0,\phi}$ defined in \eqref{binlinK0} is non-degenerate. 
An operator $P$ on $\mathfrak{S}$ is symmetric with respect to $( \,\cdot\,, \,\cdot\,)_{\mathfrak{S}, \phi}$ if and only if the associated operator $R\circ P\circ R^{-1}$ on $\mathfrak{S}_0$ is symmetric with respect to $( \,\cdot\,, \,\cdot\,)_{\mathfrak{S}_0, \phi}$.\\ Moreover, there exist polynomials $\omega_k\in\mathcal{E}_p [y],\, k\in \mathbb{N}_0/2$ of order $2k$ and parity $(-1)^{2k}$ such that for $\boldsymbol{u}, \boldsymbol{v}\in \mathfrak{S}_0$ and $\gamma_r[u_k, v_\ell]\in \mathbb{C}[y]$ as given in \eqref{tildegammaDef2}, \eqref{tildegammaDef3} and \eqref{tildegammaDef4} \begin{equation}\label{skpk} \bigl(\boldsymbol{u}, \boldsymbol{v} \bigr)_{\mathfrak{S}_0, \phi} = \hbar^{-K_1-K_2}\sum_{k \in \mathbb{N}_0/2} \hbar^k \sum_{\natop{j,\ell, r, m\in\mathbb{N}_0/2}{j + \ell + r + m = k}} \int_{\mathbb{R}^n} \gamma_r [u_j, v_\ell] (y) \omega_m(y) e^{-\langle y, \Lambda y\rangle} \, dy \end{equation} where $\Lambda = \diag (\lambda_1, \ldots, \lambda_n)\in \mathrm{Mat}(n\times n, \mathbb{R})$ for $\lambda_\nu>0$ as given in \eqref{taupphi2}. \end{proposition} \begin{proof} By Prop.\ \ref{rescaleProp} and Lemma \ref{skpnondegLem}, the hermitian form $\bigl( \,\cdot\,, \,\cdot\,\bigr)_{\mathfrak{S}_0, \phi}$ is well-defined and non-degenerate. By its definition in \eqref{binlinK0}, we have $\bigl(R\circ P\circ R^{-1} \boldsymbol{u}, \boldsymbol{v} \bigr)_{\mathfrak{S}_0, \phi} = \bigl(P u, v \bigr)_{\mathfrak{S}, \phi}$ for $\boldsymbol{u} = R \boldsymbol{u}, \boldsymbol{v}= R\boldsymbol{v} \in \mathfrak{S}_0$, proving the stated equivalence of symmetry. Using \eqref{binlinK0}, \eqref{scalarprodmitI} and \eqref{mathcalI} we get for $u, v\in \Gamma_c^\infty (U, \mathcal{E})$ \begin{equation}\label{rescalePropBew1} \bigl( R\circ \tau_p (u), R\circ \tau_p (v))_{\mathfrak{S}_0,\phi} = \mathcal{I}(\gamma[u, v]) = \;\text{asymptotic expansion of }\, I_\phi(\gamma[u,v], \hbar)\, . \end{equation} Now $\boldsymbol{u} = R\circ \tau_p(u)$ and $\boldsymbol{v} = R\circ \tau_p(v)$ for some $u, v\in \Gamma_c^\infty (U, \mathcal{E})$ by Prop.\ \ref{rescaleProp} and Corollary \ref{BorelTheorem}. Identifying $\phi$ and $\gamma[u,v]$ with their pullback under the inverse chart $x^{-1}$, we have \[ I_\phi (\gamma[u,v], \hbar) = \hbar^{-n/2} \int_{\mathbb{R}^n} \gamma[u,v](x) e^{-\phi(x)/\hbar} G(x)\, dx\; , \] where $G(x):= (\det g_{ij}(x))^{1/2}$, using the compact support of $\gamma[u,v]$ in $U$.\\ Changing variables by $x=\hbar^{1/2}y$ gives \begin{equation}\label{Iphimity} I_\phi (\gamma[u,v], \hbar) = \int_{\mathbb{R}^n} \gamma[u,v](\hbar^{1/2} y) e^{-2\phi(\hbar^{1/2}y) /\hbar} G(\hbar^{1/2}y)\, dy\, . \end{equation} It is now straightforward (though tedious) to check that the asymptotic expansion on the right hand side of \eqref{Iphimity} is given by the asymptotic expansion of all three factors in the integrand of \eqref{Iphimity} and integrating term by term (see e.g.\ \cite[Lemma 3.7]{klein-rosen1} for a similar discussion). Remember that we chose coordinates such that the Hessian of $\phi$ is diagonal (see Def.\ \ref{setup3}). It is straightforward to check from the definitions that \begin{equation}\label{asym_gamma} \text{asymptotic expansion of }\gamma[u, v] (\hbar^{1/2}y) = \boldsymbol{\gamma} [\boldsymbol{u}, \boldsymbol{v}] \in \mathbb{C}[y](\!(\hbar^{1/2})\!)\; . 
\end{equation} Furthermore, by \eqref{taupphi2} the asymptotic expansion of the phase-function $\phi\in C^\infty(U, \mathbb{R})$ is given by \begin{equation}\label{sigmapphi} \text{asymptotic expansion of }\frac{1}{\hbar} \phi (\hbar^{1/2}y) = \frac{1}{2}\<y, \Lambda y\> + \sum_{k\in\mathbb{N}} \hbar^{k/2} \phi_k \end{equation} where $\Lambda = \diag (\lambda_1, \ldots \lambda_n)$ is strictly positive and $\phi_k\in \mathbb{C}[y]$ are homogeneous of degree $k+2$. Equation \eqref{sigmapphi} together with \eqref{gij} allows us to define real polynomials $\omega_k\in\mathbb{R}[y]$ by \begin{equation}\label{eminus2varphi} \text{asymptotic expansion of } e^{-2\phi(\hbar^{1/2}y)/\hbar} G(\hbar^{1/2}y) =: e^{-\langle y, \Lambda y\rangle} \sum_{k\in\mathbb{N}_0/2}\hbar^k \omega_k \in \mathbb{R}[y](\!(\hbar^{1/2})\!) \end{equation} where $\omega_0:= 1$ and the parity of $\omega_k$ is $(-1)^{2k}$ (this point follows as in the similar discussion in \cite[Rem.\ 1]{klein-schwarz} and the Cauchy-product). Inserting the expansions \eqref{asym_gamma}, \eqref{eminus2varphi} and \eqref{tildegammaDef1} into \eqref{Iphimity} and ordering by powers of $\hbar$ gives \begin{equation} \text{asymptotic expansion of } I_\phi (\gamma[u,v]; \hbar) = \text{right hand side of }~\eqref{skpk}\; . \end{equation} Using \eqref{rescalePropBew1} and the uniqueness of asymptotic expansion proves \eqref{skpk}. \end{proof} \begin{remark} Obviously \eqref{skpk} could be used to define the inner product $( \,\cdot\,, \,\cdot\,)_{\mathfrak{S}_0, \phi}$ on $\mathfrak{S}_0$ directly. This approach avoids the stationary phase expansion \eqref{StationaryPhaseExpansion} used in the definition \eqref{scalarprodKdef} of the inner product on $\mathfrak{S}$. But then it is slightly more complicated to show symmetry of $Q:= \hbar^{-1}\bigl(R \circ \tau_p(H_\phi) \circ R^{-1}\bigr)$ given in \eqref{RescaledSeriesOfH} with respect to $( \,\cdot\,, \,\cdot\,)_{\mathfrak{S}_0, \phi}$ (see \cite{klein-schwarz}). \end{remark} \section{The formal Eigenvalue Problem}\label{Kapitel3} Again we assume Setup \ref{setup1} and fix an admissible pair $(U, \phi)$. The operator $\tau_p(H_{\phi,\hbar})$ is symmetric on $\mathfrak{S}$ by Thm.\ \ref{SymmetrieOperator}. In this section, we will consider the formal eigenvalue problem of this operator over the field $\mathbb{C}(\!(\hbar^{1/2})\!)$, i.e.\ we will solve the eigenvalue equation \begin{equation} \label{SpectralDecomposition} \tau_p(H_{\phi,\hbar}) \, \boldsymbol{a} = \boldsymbol{E} \boldsymbol{a}, ~~~~ \text{for} ~~\boldsymbol{a} \in \mathfrak{S} ~~\text{and}~~ \boldsymbol{E} \in \mathbb{C}(\!(\hbar^{1/2})\!). \end{equation} We start giving an explicit calculation of the rescaled Taylor series of $H_{\phi,\hbar}$, as defined in \eqref{ConjugationByPhi}. \begin{lemma}\label{Hphi} On $U$, the conjugation of $H_\hbar$ with respect to $e^{-\phi/\hbar}$ is given by \begin{equation} \label{LocalFormHPhi} H_{\phi,\hbar} = \hbar^2 L + \hbar \bigl( \nabla^\mathcal{E}_{2\operatorname{grad}\phi} + W + \Delta \phi \bigr)\, . \end{equation} where $\nabla^\mathcal{E}$ is the unique metric connection determined by $L$ as described in Remark \ref{RemarkLaplaceType} and $\Delta$ denotes the Laplace-Beltrami operator acting on functions. \end{lemma} \begin{proof} For any $f \in C^\infty(M)$ and $u \in \Gamma^\infty(M, \mathcal{E})$, we have the product rule \begin{equation*} L(fu) = (\Delta f) u - 2 \nabla^\mathcal{E}_{\operatorname{grad} f} u + f Lu, \end{equation*} compare \cite[Prop.\ 2.5]{bgv}. 
Straightforward calculation gives \begin{equation} \label{ConjugationFormula} H_{\phi,\hbar} = \hbar^2 L + 2 \hbar \nabla^\mathcal{E}_{\operatorname{grad} \phi} + \hbar W + \bigl(\hbar \Delta \phi +V - |\mathrm{d} \phi|^2\bigr)\mathrm{id}_\mathcal{E}\, . \end{equation} Since $\phi\in C^\infty (U, \mathbb{R})$ solves the eikonal equation on $U$, this simplifies to \eqref{LocalFormHPhi}. \end{proof} \begin{remark} \label{RmkTransportEquations} When making the ansatz \begin{equation*} e^{-\phi/\hbar}\sum_{k\in \mathbb{N}_0/2}\hbar^k a_k ~~~~ \text{and} ~~~~ \hbar\sum_{k\in \mathbb{N}_0/2}\hbar^k E_k \end{equation*} with $a_k \in \Gamma^\infty(U, \mathcal{E})$ for an eigenfunction and eigenvalue of $H_{\phi, \hbar}$, respectively, straightforward calculation gives that the equation \begin{equation} \Bigl(H_{\phi, \hbar} - \hbar\sum\nolimits_{k \in \mathbb{N}_0/2} \hbar^k E_k \Bigr) \sum\nolimits_{k\in \mathbb{N}_0/2}\hbar^k a_k = 0 \end{equation} in the sense of an asymptotic expansion in $\hbar$ is equivalent to the the statement that for each $k \in \mathbb{N}_0/2$, the coefficients $a_k$ solve the recursive transport equations \begin{equation} \label{TransportEquation2} (\nabla_{2\operatorname{grad}\phi}^\mathcal{E} + W + \Delta \phi - E_0){a}_{k} = - L {a}_{k-1} + \sum_{i=1/2}^k E_{ i}\,a_{k-i}, \end{equation} which were discussed in Remark \eqref{RemarkTransport0}. \end{remark} With the chart of Def.\ \ref{setup3}, we have \begin{equation}\label{taupphi} \tau_p (\phi) = \frac{1}{2}\langle x, \Lambda x \rangle + \sum_{k\in\mathbb{N}} \hbar^{k/2} \phi_k ~~~~ \text{and hence} ~~~~ \tau_p (V) = \< \Lambda x, \Lambda x \> + \sum_{k\in\mathbb{N}}V_k \end{equation} by \eqref{EikonalEquation} where $\Lambda=\diag (\lambda_1, \ldots, \lambda_n)$ is positive definite and $V_k, \phi_k$ are homogeneous polynomials of degree $k+2$. In the basis $\partial_{x^1}, \dots, \partial_{x^n}$ of $T_pM$, the Hessian $\nabla^2 V|_p$ is given by the matrix $\Lambda^2$, hence the local harmonic oscillator at $p$ associated to $H_\hbar$ (see Def. \ref{DefLocalHarmonicOscillator}) is given by \begin{equation} \label{LocalHarmonicOscillator} H_{p, \hbar} = - \hbar^2 \sum_{\nu=1}^n \frac{\partial^2}{\partial X_\nu^2} + \hbar\, W(p) + \< \Lambda X, \Lambda X \> \end{equation} at the point $X \in T_pM$. We use the standard fact of Riemannian geometry that the geodesic Taylor series of the metric is given by (see e.g.\ \cite[Prop. 1.28]{bgv}) \begin{equation}\label{gij} \tau_p(g_{ij})=\delta_{ij} - \sum_{k,l}\frac{1}{3}R_{ikjl}x^kx^l + \sum_{|\alpha|\geq 3}\frac{\partial g_{ij}}{\partial x^\alpha} \frac{x^\alpha}{\alpha !}\, , \end{equation} where $R_{ikjl}$ denotes the Riemannian curvature tensor. From \eqref{taupphi} and \eqref{gij} it follows that \begin{equation}\label{taup_nablagradphi} \tau_p(\nabla^\mathcal{E}_{2\operatorname{grad} \phi}) = \sum_{\nu=1}^n 2 \lambda_\nu x^\nu \frac{\partial}{\partial x^\nu}\, \mathrm{id}_\mathcal{E} + \text{terms of higher degree} \end{equation} with respect to the degree from Def.\ \ref{Def_pgrading}. Again by \eqref{gij} we have \begin{equation}\label{Deltaphi} \tau_p(\Delta \phi) = \mathrm{tr}\,\Lambda\, \mathrm{id}_\mathcal{E} + \text{terms of higher degree}. 
\end{equation} Hence, by \eqref{LocalFormHPhi}, \eqref{taup_nablagradphi}, \eqref{Deltaphi} and Def.\ \ref{rescaleDef} the rescaled Taylor series of $H_{\phi, \hbar}$ is given by \begin{equation} \label{RescaledSeriesOfH} Q := \hbar^{-1}\bigl(R \circ \tau_p(H_{\phi, \hbar}) \circ R^{-1}\bigr) = \sum_{j \in \mathbb{N}_0/2} \hbar^j Q_j \end{equation} for differential operators $Q_j\in \mathscr{D}(\mathcal{E}_p)[y]$ independent of $\hbar$. Here, $\mathscr{D}(\mathcal{E}_p)[y]$ denotes the space of differential operators on $\mathcal{E}_p[y]$ with coefficients in $\mathrm{End} (\mathcal{E}_p)[y]$ which extend to operators on $\mathfrak{S}_0$. We have in particular \begin{equation} \label{defQ0} Q_0 = -\sum_{\nu=1}^n \Bigl( \frac{\partial^2}{\partial y_\nu^2} + 2 \lambda_\nu y^\nu \frac{\partial}{\partial y_\nu} + \lambda_\nu \Bigr)\, \mathrm{id}_\mathcal{E} + W(p)\; . \end{equation} It is clear by \eqref{RescaledSeriesOfH} that $Q$ is a well-defined operator acting on the space $\mathfrak{S}_0$. Moreover, by Thm.\ \ref{SymmetrieOperator} and Prop.\ \ref{rescaleProp2}, $Q$ is symmetric with respect to $(\,\cdot\, ,\,\cdot\, )_{\mathfrak{S}_0, \phi}$. \begin{remark} \label{RemarkOnH0AndQ0} $Q_0$ can be restricted to $\mathcal{E}_p[y]$. Using the isomorphism $\Phi_x: \mathcal{E}_p[x] \rightarrow \mathcal{E}_p [T_pM]$ extending $\Phi_x$ defined in Remark \ref{CTpM}, we set $\tilde{R}:= R\circ \Phi_x^{-1} : \mathcal{E}_p [T_pM] \rightarrow \mathcal{E}_p[y]$. Then, on $\mathcal{E}_p[y]$, $Q_0$ is the rescaling of the conjugation of the local harmonic oscillator $H_{p,\hbar}$ given in \eqref{LocalHarmonicOscillator} using the function $\phi_0: X \mapsto \frac{1}{2} \< X, \Lambda X \>$ on $T_pM$, more precisely \begin{equation*} \hbar \tilde{R}^{-1}Q_0 \tilde{R} = e^{\phi_0/\hbar} H_{p,\hbar} e^{-\phi_0/\hbar}. \end{equation*} In particular, if a polynomial $q\in \mathcal{E}_p[y]$ is an eigenfunction of $Q_0$ with eigenvalue $E_0$, then \begin{equation*} e^{-\phi_0/\hbar} \tilde{R}^{-1}q (\,\cdot\,) = e^{-\phi_0/\hbar} q(\hbar^{-1/2} \,\cdot\,) \in C^\infty(T_pM, \mathcal{E}_p) \end{equation*} is an eigenfunction of $H_{p, \hbar}$ with eigenvalue $\hbar E_0$. \end{remark} \begin{lemma}\label{Qj} The operators $Q_j\in \mathscr{D}(\mathcal{E}_p)[y],\, j\in \mathbb{N}_0/2,$ in \eqref{RescaledSeriesOfH} are sums $Q_j = L_{2j-2} + A_{2j}$ where $L_k$ and $A_k$ are homogeneous with $\mathrm{deg}_\mathscr{D} L_k= k$ and $\mathrm{deg}_\mathscr{D} A_k = k$. \end{lemma} \begin{proof} If $\tilde{P}\in \mathscr{D}(\mathcal{E}_p)[[x]]$ is homogeneous of degree $\deg_\mathscr{D} \tilde{P} = k\in \mathbb{Z}$ and does not depend on $\hbar$, there are $P_{\alpha\beta}\in\mathrm{End} (\mathcal{E}_p)$ such that \begin{equation}\label{degreeP} R\circ \tilde{P}\circ R^{-1} = \sum_{|\alpha| - |\beta| =k} P_{\alpha\beta}\hbar^{|\alpha|/2}y^\alpha \hbar^{-|\beta|/2}\partial_y^{\beta} =: \hbar^{k/2} P \end{equation} where $P\in \mathscr{D}(\mathcal{E}_p)[y]$ is homogeneous with $\deg_\mathscr{D} P = k$. In our case, $\tau_p(L) = \sum_{k\in \mathbb{Z}, k\geq -2} \tilde{L}_k$ where $\tilde{L}_k\in \mathscr{D}(\mathcal{E}_p)[[x]]$ is homogeneous with $\deg_\mathscr{D} \tilde{L}_k = k$. Moreover, by \eqref{taup_nablagradphi} and \eqref{Deltaphi}, $\tau_p\bigl(\nabla_{2\operatorname{grad} \phi} + \Delta \phi + W(p)\bigr) = \sum_{k\in\mathbb{N}_0} \tilde{A}_{k}$ where $\tilde{A}_k\in \mathscr{D}(\mathcal{E}_p)[[x]]$ is homogeneous with $\deg_\mathscr{D} \tilde{A}_k = k$. 
Therefore, by \eqref{degreeP} \begin{align*} \hbar^{-1} R \circ \hbar^2\tau_p(L) \circ R^{-1} &= \sum_{k\in \mathbb{Z}, k\geq -2} \hbar^{(k+2)/2} L_k = \sum_{j \in \mathbb{N}_0/2}\hbar^j L_{2j-2} \\ \hbar^{-1} R \circ \hbar \tau_p\bigl(\nabla_{2\operatorname{grad} \phi} + \Delta \phi + W\bigr) \circ R^{-1} &= \sum_{k\in \mathbb{N}_0} \hbar^{k/2} A_k = \sum_{j \in \mathbb{N}_0/2} \hbar^j A_{2j} \end{align*} where $\deg_\mathscr{D} L_k = k$ and $\deg_\mathscr{D} A_k = k$. Thus $Q_j = L_{2j-2} + A_{2j}$ and the lemma follows. \end{proof} Let $(e_1, \dots, e_{\mathrm{rk}\,\mathcal{E}})$ be an orthonormal basis of $\mathcal{E}_p$ consisting of eigenvectors of $W(p)$ and let $\mu_1, \dots, \mu_{\mathrm{rk}\,\mathcal{E}}$ denote the corresponding eigenvalues. Then the eigenvalue problem \begin{equation} \label{EigenValueEquationQ0} Q_0 h(y) = E h(y), ~~~~~~~~ h(y) \in \mathcal{E}_p[y], ~~ E \in \mathbb{R}, \end{equation} is solved by the eigenvalues \begin{equation} \label{EigenvaluesHarmonicOscillator} E_{\alpha, k} = \sum_{j=1}^n(2 \alpha_j + 1) \lambda_j + \mu_k, ~~~~~~ \alpha \in \mathbb{N}_0^n,~~ 1 \leq k \leq \mathrm{rk}\,\mathcal{E}, \end{equation} with the corresponding eigenfunctions \begin{equation} \label{EigenvectorsHarmonicOscillators} h_{\alpha, k}(y) = \prod_{j=1}^n \sqrt[4]{\lambda_j} ~ h_{\alpha_j}\bigl(\sqrt{\lambda_j}y\bigr) \cdot e_k, ~~~~~~ \alpha \in \mathbb{N}_0^n,~~ 1 \leq k \leq \mathrm{rk}\,\mathcal{E}, \end{equation} where \begin{equation} h_j(z) = \frac{(-1)^j}{\sqrt{2^j j!}\sqrt[4]{\pi}} e^{z^2}\frac{\mathrm{d}^j}{\mathrm{d} z^j} e^{-z^2}, ~~~~~~~ j \in \mathbb{N}_0, \end{equation} are the Hermite polynomials. \begin{remark}\label{Remhalphak} The eigenfunctions $h_{\alpha,k}$ of $Q_0$ are orthonormal with respect to $(\,\cdot\, ,\,\cdot\, )_{\mathfrak{S}_0,\phi}$. Moreover, since $h_k(-x) = (-1)^k h_k(x)$ and $\deg h_k = k$ it follows that $h_{\alpha,k}$ is even or odd if $|\alpha|$ is even or odd respectively and $\deg h_{\alpha,k} = |\alpha|$. \end{remark} \begin{remark}\label{SpektrumQ0} $Q_0$ can be considered as an unbounded operator on the Hilbert space \begin{equation*} \mathcal{H} := L^2(\mathbb{R}^n, e^{-2\phi_0/\hbar}\, dy)\otimes \mathcal{E}_p \end{equation*} that is essentially self-adjoint on polynomials with coefficients in $\mathcal{E}_p$. By diagonalizing $W(p)$, it is easy to see that the spectrum of $Q_0$ on $\mathcal{H}$ is given by \begin{equation} \mathrm{spec}(Q_0) := \{ E_{\alpha, k} \mid \alpha \in \mathbb{N}_0^n, ~~ 1 \leq k \leq \mathrm{rk}\,\mathcal{E} \}\; , \end{equation} with $E_{\alpha,k}$ given in \eqref{EigenvaluesHarmonicOscillator}. Moreover, the eigenfunctions $h_{\alpha,k}$ given in \eqref{EigenvectorsHarmonicOscillators} are an orthonormal basis of $\mathcal{H}$. In particular, this shows that $Q_0-\mathbf{z}$ is bijective on $\mathcal{E}_p[y]$ for $\mathbf{z}\in \mathbb{C}\setminus \mathrm{spec} (Q_0)$. 
\end{remark} \begin{lemma}[The resolvent] For $\mathbf{z} \in \mathbb{C}\setminus \mathrm{spec}(Q_0)$, the operator $Q - \mathbf{z}: \mathfrak{S}_0 \longrightarrow \mathfrak{S}_0$ is invertible, and the inverse $R(\mathbf{z})$ is given by the formal Neumann series \begin{equation} \label{DefinitionOfResolvent} R(\mathbf{z}) = \sum_{k=0}^\infty \Bigl( - R_0(\mathbf{z}) \sum_{j \in \mathbb{N}/2} \hbar^j Q_j \Bigr)^k R_0(\mathbf{z}) = - \sum_{j \in \mathbb{N}_0/2} \hbar^j R_j(\mathbf{z}) \end{equation} where $Q_j,\, j\in \mathbb{N}_0/2,$ are the differential operators given in \eqref{RescaledSeriesOfH} and \begin{align}\label{R0} R_0(\mathbf{z}) &:= (Q_0 - \mathbf{z})^{-1} \\ R_j (\mathbf{z})&:= \sum_{k=1}^{2j} (-1)^k \sum_{\natop{j= j_1 +\ldots + j_k}{j_1, \ldots j_k \in\frac{\mathbb{N}}{2}}} \left(\prod_{m=1}^k -R_0(\mathbf{z}) Q_{j_m}\right) R_0(\mathbf{z})\; .\label{Rj} \end{align} Furthermore, for all $j \in\mathbb{N}_0/2$ \begin{equation} \label{SymmetryOfRj} \bigl( \boldsymbol{u}, R(\mathbf{z}) \boldsymbol{w} \bigr)_{\mathfrak{S}_0,\phi} = \bigl( R(\overline{\mathbf{z}}) \boldsymbol{u} , \boldsymbol{w} \bigr)_{\mathfrak{S}_0,\phi} \quad\text{and}\quad \bigl( \boldsymbol{u}, R_j(\mathbf{z}) \boldsymbol{w} \bigr)_{\mathfrak{S}_0,\phi} = \bigl( R_j(\overline{\mathbf{z}}) \boldsymbol{u} , \boldsymbol{w} \bigr)_{\mathfrak{S}_0,\phi}\; . \end{equation} \end{lemma} \begin{proof} To prove that the inverse $R(\mathbf{z})$ of $(Q- \mathbf{z})$ is given by the series \eqref{DefinitionOfResolvent}, we write $Q = Q_0 + \tilde{Q}$ and $R_0 = R_0(\mathbf{z})$ to simplify the notation. Then \begin{equation}\label{Q-zR} (Q- \mathbf{z})R(\mathbf{z}) = (Q_0 - \mathbf{z} + \tilde{Q}) \sum_{k=0}^\infty (-R_0 \tilde{Q})^k R_0 = \sum_{k=0}^\infty (Q_0 - \mathbf{z}) (-R_0 \tilde{Q})^k R_0 + \sum_{k=0}^\infty \tilde{Q} (-R_0 \tilde{Q})^k R_0 \end{equation} and, using $(R_0\tilde{Q})^k R_0 = R_0 (\tilde{Q}R_0)^k$, we get \begin{equation*} \text{right hand side of}~\eqref{Q-zR} = \sum_{k=0}^\infty (-1)^k (\tilde{Q} R_0)^k + \sum_{k=0}^\infty (-1)^k (\tilde{Q} R_0)^{k+1} = 1. \end{equation*} Similar computations show that $R(\mathbf{z}) (Q-\mathbf{z}) = 1$. To prove \eqref{SymmetryOfRj}, we use that the symmetry of $Q$ implies that for all $\boldsymbol{u}, \boldsymbol{v}\in \mathfrak{S}_0$ \begin{equation}\label{symm1} \bigl( (Q-\bar{\mathbf{z}}) \boldsymbol{u}, R(\mathbf{z}) \boldsymbol{v} \bigr)_{\mathfrak{S}_0, \phi} = \bigl( \boldsymbol{u}, \boldsymbol{v} \bigr)_{\mathfrak{S}_0, \phi} = \bigl(R(\bar{\mathbf{z}}) (Q-\bar{\mathbf{z}}) \boldsymbol{u}, \boldsymbol{v} \bigr)_{\mathfrak{S}_0, \phi}\; . \end{equation} Equation \eqref{symm1} in the space of formal power series yields inductively that for all $j\in \mathbb{N}_0/2$ and $\boldsymbol{u},\boldsymbol{v}\in \mathcal{E}_p[y]$ \[ \bigl( (Q_0 - \bar{\mathbf{z}}) \boldsymbol{u}, R_j (\mathbf{z}) \boldsymbol{v} \bigr)_{\mathfrak{S}_0, \phi} = \bigl(R_j(\bar{\mathbf{z}}) (Q_0 -\bar{\mathbf{z}}) \boldsymbol{u}, \boldsymbol{v} \bigr)_{\mathfrak{S}_0, \phi}\; .\] Since $Q_0 - \bar{\mathbf{z}}: \mathcal{E}_p[y] \rightarrow \mathcal{E}_p[y]$ is bijective (see Remark \ref{SpektrumQ0}), this proves \eqref{SymmetryOfRj}. \end{proof} From now on, fix $E_0 \in \mathrm{spec}(Q_0)$. We will define a spectral projection $\Pi_{E_0}$ for $Q$ on $\mathfrak{S}_0$, associated to $E_0$, using the operator $R(\mathbf{z})$. 
By Remark \ref{SpektrumQ0}, the eigenfunctions $h_{\alpha,k}, \, \alpha \in \mathbb{N}_0^n,\, 1\leq k\leq \mathrm{rk}\mathcal{E}$ of $Q_0$ (see \eqref{EigenvectorsHarmonicOscillators}) form a basis in $\mathcal{E}_p[y]$. By \eqref{DefinitionOfResolvent}, the action of the operators $R_j(\mathbf{z})$ on $h_{\alpha,k}$ therefore determines the action of $R(\mathbf{z})$ on $\mathcal{E}_p[y]$. By Lemma \ref{Qj}, each $Q_j$ raises the degree of a polynomial by $2j$, thus by \eqref{Rj} and \eqref{EigenValueEquationQ0} there exist rational functions $d^j_{\alpha,k,\beta, l}(\mathbf{z})$ on $\mathbb{C}$ with poles at most at $\mathrm{spec} (Q_0)$ such that \begin{equation}\label{Rjhalpha} R_j(\mathbf{z}) h_{\alpha, k} = \sum_{|\beta|\leq |\alpha| + 2j} d^j_{\alpha,k,\beta, l}(\mathbf{z}) h_{\beta, l}\; . \end{equation} We choose a counterclockwise oriented closed contour $\gamma\in \mathbb{C}\setminus \mathrm{spec} (Q_0)$ around $E_0$, separating $E_0$ from the rest of $\mathrm{spec}(Q_0)$, and define \begin{equation}\label{DefPi} \Pi_{E_0} q := \frac{1}{2\pi i} \sum_{j \in \mathbb{N}_0/2} \hbar^j \oint_\gamma R_j(\mathbf{z}) q \, \mathrm{d} \mathbf{z} \in \mathfrak{S}_0\, ,\quad q\in\mathcal{E}_p[y]\, . \end{equation} This definition makes sense, as $R_j(\mathbf{z})q$ is holomorphic on $\mathbb{C} \setminus \mathrm{spec}(Q_0)$ in the following sense: From \eqref{Rjhalpha} it is clear that for fixed $q\in\mathcal{E}_p[y]$ the degree of $R_j(\mathbf{z})q$ is smaller than some $N$ for all $\mathbf{z} \in \mathbb{C} \setminus \mathrm{spec}(Q_0)$. Thus the range of the map $\mathbf{z} \mapsto R_j(\mathbf{z})q$ is a finite-dimensional complex vector space and as such it is holomorphic. We extend $\Pi_{E_0}$ to $\mathfrak{S}_0$ by linearity. \begin{proposition}[The Eigenprojection]\label{propPi} Let $m_0$ denote the multiplicity of $E_0$. Then $\Pi_{E_0}$ defined in \eqref{DefPi} is a projection in $\mathfrak{S}_0$ of rank $m_0$, symmetric with respect to the inner product $( \,\cdot\,,\,\cdot\,)_{\mathfrak{S}_0,\phi}$ and commutes with $Q$. \end{proposition} \begin{proof}We write $\Pi = \Pi_{E_0}$. {\sl Symmetry:} The symmetry of $\Pi$ is a consequence of \eqref{SymmetryOfRj}. More explicitly, it follows from \[ \Bigl( u, \oint_{\gamma}R_j(\mathbf{z})w \, \mathrm{d}\mathbf{z} \Bigr)_{\mathfrak{S}_0,\phi} = -\Bigl( \oint_{\gamma}R_j(\mathbf{z})u\, \mathrm{d}\mathbf{z}, w\Bigr)_{\mathfrak{S}_0,\phi}\, , \qquad u,v\in \mathfrak{S}_0\, ,\] since the orientation of $\gamma$ is reversed under complex conjugation. $\Pi^2 = \Pi$: Using \eqref{DefinitionOfResolvent}, \eqref{DefPi} and the resolvent equation, this follows from standard arguments (see \cite{klein-schwarz} or \cite{helffer-sjostrand-1} for the computation in the setting of formal power series). rk $\Pi = m_0$: To determine the rank of $\Pi$, write $I = \{ (\alpha, k) \in \mathbb{N}_0^n \times \{1, \dots, \mathrm{rk}\,\mathcal{E}\} \mid E_{\alpha, k} = E_0 \}$. 
Since \begin{equation*} \frac{1}{2\pi i} \int_\gamma R_0(\mathbf{z}) h_{\alpha, k}(y) \mathrm{d} \mathbf{z} = \begin{cases} h_{\alpha, k} (y)\, , & (\alpha, k) \in I \\ 0 \, ,& (\alpha, k) \notin I \end{cases}, \end{equation*} we have \begin{equation}\label{Pihalphak} \Pi h_{\alpha, k} (y) = \begin{cases} h_{\alpha, k}(y) + \sum_{j \in \mathbb{N}/2} \hbar^j l_{j, \alpha, k}(y)\, , & (\alpha, k) \in I \\ \sum_{j \in \mathbb{N}/2} \hbar^j l_{j, \alpha, k}(y)\, , & (\alpha, k) \notin I \end{cases} \end{equation} for some polynomials $l_{j, \alpha, k}\in \mathcal{E}_p[y]$ of degree less than or equal to $|\alpha|+2j$ (this follows from \eqref{Rjhalpha}). Since the eigenfunctions $h_{\alpha,k}$ of $Q_0$ (see \eqref{EigenvectorsHarmonicOscillators}) form a basis, \eqref{Pihalphak} implies that the functions $\Pi h_{\alpha,k},\, (\alpha, k) \in I$, are linearly independent over $\mathbb{C}(\!(\hbar^{1/2})\!)$. Thus their span has dimension $m_0$. We now claim that the elements $\Pi h_{\alpha, k}(y)$ with $(\alpha, k) \in I$ span the range of $\Pi$. To prove this, it suffices to verify that for all $(\beta, l)\in \mathbb{N}_0^n\times \{1, \ldots , \mathrm{rk}\, \mathcal{E}\}$ \begin{equation*} \Pi h_{\beta, l} (y) = \sum_{(\alpha, k) \in I} A^{\beta, l}_{\alpha, k} \Pi h_{\alpha, k}(y) \end{equation*} for some coefficients $A^{\beta, l}_{\alpha, k} \in \mathbb{C}(\!(\hbar^{1/2})\!)$. These coefficients $A^{\beta, l}_{\alpha, k}$, however, can be determined by an easy induction argument using \eqref{Pihalphak} (see e.g.\ \cite{klein-schwarz}). $\Pi Q = Q\Pi$: Since $Q$ commutes with $R(z)$, this follows from the definition of $\Pi$. \end{proof} \begin{proposition}\label{orthonormprop} For $E_{\alpha, k}$ given in \eqref{EigenvaluesHarmonicOscillator} and $E_0\in \mathrm{spec} (Q_0)$, let $I_{E_0}:= \{ (\alpha, k) \in \mathbb{N}_0^n \times \{1, \dots, \mathrm{rk}\,\mathcal{E}\} \mid E_{\alpha, k} = E_0 \}$ and let $(\alpha^j, k^j)$, $j=1, \dots m_0,$ be an enumeration of the elements of $I_{E_0}$. Then there exists an orthonormal basis $(\boldsymbol{b}^1, \dots, \boldsymbol{b}^{m_0})$ of $V := \Pi_{E_0} \mathfrak{S}_0$ such that \begin{equation} \boldsymbol{b}^j = h_{\alpha^j,k^j}+ \sum_{\ell \in \mathbb{N}/2} \hbar^{\ell} p^j_{\ell} \end{equation} for some polynomials $p^j_{\ell}\in\mathcal{E}_p[y]$ of degree less or equal to $|\alpha^j| + 2 \ell$. With respect to such a basis, $Q|_V$ is represented by a hermitian $m_0 \times m_0$ matrix $M$ of the form \begin{equation} M = E_0 \cdot \mathbf{1} + \sum_{j \in \mathbb{N}/2} \hbar^{j} M_j \end{equation} where the entries of the matrices $M_j$ are in $\mathbb{C}(\!(\hbar^{1/2})\!)$ and $\mathbf{1}$ is the identity matrix. \end{proposition} \begin{proof} Setting $\boldsymbol{f}^j:= \Pi_{E_0} h_{\alpha^j,k^j}$, it follows from \eqref{Pihalphak}, the definition of $(\,\cdot\, ,\,\cdot\, )_{\mathfrak{S}_0,\phi}$ in \eqref{scalarprodKdef} and the fact that the eigenfunctions $h_{\alpha^j,k^j}$ are orthonormal with respect to $(\,\cdot\, ,\,\cdot\, )_{\mathfrak{S}_0,\phi}$ that \begin{equation}\label{fkfl} \bigl( \boldsymbol{f}^k, \boldsymbol{f}^\ell \bigr)_{\mathfrak{S}_0,\phi} = \delta_{k\ell} + \sum_{j \in \mathbb{N}/2} \hbar^j \beta_{k,\ell, j}, ~~~~~ \beta_{k,\ell, j} \in \mathbb{C}, \, \; 1 \leq k, \ell \leq m_0\; . 
\end{equation} We obtain a hermitian $m_0 \times m_0$-matrix with entries in $\mathbb{C}(\!(\hbar^{1/2})\!)$ \begin{equation}\label{DefA} A := \Bigl( \bigl( \boldsymbol{f}^k, \boldsymbol{f}^\ell \bigr)_{\mathfrak{S}_0,\phi} \Bigr)_{1 \leq k, \ell \leq m_0} = \mathbf{1} + \sum_{j \in \mathbb{N}/2} \hbar^j A_j. \end{equation} As the series of $A$ starts with the identity matrix, the matrix \begin{equation}\label{DefB} B:= A^{-1/2} = \mathbf{1} + \sum_{j \in \mathbb{N}/2} \hbar^j B_j \end{equation} is well-defined by the appropriate Taylor series and its elements are in $\mathbb{C}(\!(\hbar^{1/2})\!)$ as well. If the elements of $A$ are in $\mathbb{C}(\!(\hbar)\!)$, then the same is true for the elements of $B$. Setting \begin{equation*} (\boldsymbol{b}^1, \dots, \boldsymbol{b}^{m_0}) := (\boldsymbol{f}^1, \dots \boldsymbol{f}^{m_0}) B\, , \end{equation*} it is straightforward to verify that $(\boldsymbol{b}^1, \dots, \boldsymbol{b}^{m_0})$ is an orthonormal basis of $V$. Furthermore, with respect to this basis, $Q$ is represented by the hermitian matrix $M = B C B$ where $C$ is the matrix with entries in $\mathbb{C}(\!(\hbar^{1/2})\!)$ defined by \begin{equation}\label{CMatrix} C := \Bigl(\bigl(\boldsymbol{f}^k, Q \boldsymbol{f}^\ell \bigr)_{\mathfrak{S}_0,\phi}\Bigr)_{1\leq k,\ell\leq m_0} = E_0 \cdot \mathbf{1} + \sum_{j \in \mathbb{N}/2} \hbar^j C_{j}\, . \end{equation} The last equality in \eqref{CMatrix} follows from \eqref{RescaledSeriesOfH}, Lemma \ref{Qj}, Proposition \ref{propPi}, \eqref{fkfl} and the fact that $h_{\alpha^j,k^j},\, j=1, \ldots m_0$ are the eigenfunctions of $Q_0$ for the eigenvalue $E_0$. The symmetry of $C$ and hence $M$ follows from the symmetry of $Q$. The statement on the degree of the polynomials $p^j_\ell$ follows from \eqref{Pihalphak}. \end{proof} \begin{corollary}[Eigendecomposition of $Q_0$] \label{EssentialCorollary} For each $E_0 \in \mathrm{spec}(Q_0)$ of multiplicity $m_0$ with eigenfunctions $h_{\alpha^j,k^j}$, with $(\alpha^j,k^j) \in I_{E_0}$ as defined in Prop.\ \ref{orthonormprop}, the operator $Q: \mathfrak{S}_0 \longrightarrow \mathfrak{S}_0$ possesses $m_0$ (not necessarily distinct) eigenvalues $E_1, \dots, E_{m_0}$ of the form \begin{equation}\label{Ej} \boldsymbol{E}_j= E_0 + \sum_{\ell \in \mathbb{N}/2} \hbar^\ell E_{j,\ell}\, , \qquad E_{j,\ell}\in\mathbb{R}\, , \end{equation} with associated orthonormal eigenfunctions with respect to $(\,\cdot\, , \,\cdot\, )_{\mathfrak{S}_0,\phi}$ \begin{equation}\label{psij} \boldsymbol{\psi}_j = \sum_{\ell \in \mathbb{N}_0/2} \hbar^\ell \psi_{j,\ell} \in \mathfrak{S}_0 \end{equation} where $\psi_{j,\ell}\in\mathcal{E}_p[y]$ with $\deg \psi_{j,\ell} \leq d_{j,\ell}:= 2\ell + \max_{(\alpha,k)\in I_{E_0}} |\alpha|$. \end{corollary} \begin{proof} This follows at once from Prop.\ \ref{orthonormprop} together with \cite[Thm.\ A2.3]{klein-schwarz}\footnote{We recall that this result essentially is a generalization of Rellich's Theorem on the analyticity of eigenvalues and eigenfunctions for certain matrices with analytic coefficients (see \cite{rellich}). For a more abstract algebraic result (covering the present case of formal power series) see also \cite{grater-klein}.}, which states that for any $k \in \mathbb{N}$, a hermitian $m\times m$-matrix $M$ with elements $M_{ij}\in \mathbb{C}(\!(\hbar^{1/k})\!)$ has a complete eigendecomposition in $\mathbb{R}(\!(\hbar^{1/k})\!)$ and the associated eigenvectors can be chosen to be orthonormal. 
\end{proof} Using Lemma \ref{Qj}, we can prove the following proposition about the absence of half integer terms in the expansion \eqref{Ej}. \begin{proposition}[Parity]\label{ThmHalfIntegers} For $I_{E_0}$ as in Prop.\ \ref{orthonormprop}, we assume that all $(\alpha, k) \in I_{E_0}$ have the same parity (i.e.\ $|\alpha|$ is either even for all $(\alpha, k) \in I_{E_0}$ or odd for all $(\alpha, k)\in I_{E_0}$). Let $M$ denote the matrix specified in Prop.\ \ref{orthonormprop} and $E_j$ its eigenvalues given in Cor.\ \ref{EssentialCorollary}. Then $M_{ij}\in \mathbb{C}(\!(\hbar)\!)$ and $E^j\in \mathbb{R}(\!(\hbar)\!)$ for $1\leq i,j\leq m_0$. \end{proposition} \begin{proof} Again by \cite[Thm.\ A2.3]{klein-schwarz} we know that if $M_{ij}\in\mathbb{C}(\!(\hbar)\!)$, the same is true for its eigenvalues $E_j$. Thus, it suffices to prove the statement on $M_{ij}$. We will change the notation during this proof to $\boldsymbol{f}_{\alpha,k}:=\Pi_{E_0} h_{\alpha,k}$ and $C_{\alpha,k,\beta, \ell}$ for $(\alpha,k), (\beta, \ell)\in I_{E_0}$ and $C$ given in \eqref{CMatrix}. \\ We start proving that $\bigl(\boldsymbol{f}_{\alpha,k}, \boldsymbol{f}_{\beta, \ell}\bigr)_{\mathfrak{S}_0, \phi}\in \mathbb{C}(\!(\hbar)\!)$. By the definition \eqref{DefPi} of $\Pi_{E_0}$, we have \begin{equation}\label{falphaj} \boldsymbol{f}_{\alpha,k} = \sum_{j\in \mathbb{N}_0/2} \hbar^j f_{\alpha,k,j} \quad \text{with} \quad \boldsymbol{f}_{\alpha,k,j} = \frac{1}{2\pi i}\oint_{\Gamma} R_j(\mathbf{z})h_{\alpha,k}\, d\mathbf{z}\, . \end{equation} By \eqref{Rj}, $R_j(\mathbf{z})$ is determined by $Q_{j_\ell}$ with $\sum j_\ell = j$ and $R_0(\mathbf{z})$. Since by Lemma \ref{Qj} the operator $Q_{j_\ell}$ changes the parity of a polynomial in $\mathcal{E}[y]$ by the factor $(-1)^{2j_\ell},\, j_\ell \in\mathbb{N}_0/2,$ (and raises its degree by $2j_\ell$), we can conclude that $R_j(\mathbf{z})$ changes the parity by $(-1)^{2j}$. Using that, for each $1 \leq k \leq \mathrm{rk}\mathcal{E}$, the parity of $h_{\alpha,k}$ is given by $(-1)^{|\alpha|}$, it follows that the parity of $f_{\alpha,k, j}$ is given by $(-1)^{|\alpha|+2j}$. By \eqref{skpk} we have \begin{equation}\label{Anull} \bigl(\boldsymbol{f}_{\alpha,k}, \boldsymbol{f}_{\beta,\ell}\bigr)_{\mathfrak{S}_0, \phi} =\sum_{n\in\mathbb{N}_0/2} \hbar^n \sum_{\natop{j, m, s, r\in\mathbb{N}_0/2}{j+m+s+r=n}}\int_{\mathbb{R}^n} \gamma_r[{f}_{\alpha,k, j}, {f}_{\beta, \ell, m}](y) \omega_s(y) e^{-\langle y, \Lambda y\rangle}\, dy \, . \end{equation} We shall show that for $2n$ odd (and thus for $n$ half-integer) each summand vanishes. For fixed $j,m,s,r$, the integral will vanish if the entire integrand is odd. According to Prop.\ \ref{rescaleProp2}, the parity of $\omega_s$ is $(-1)^{2s}$. Moreover, as described below \eqref{Rundgammatilde}, the parity of the term $\gamma_r[f_{\alpha,k, j}, f_{\beta, \ell, m}]$ is given by $(-1)^{|\alpha|+2j + |\beta| + 2m + 2r}$. Since the exponential term is even, the integral on the right hand side of \eqref{Anull} therefore vanishes if $(|\alpha| + 2j + |\beta| + 2m + 2r + 2s)$ is odd. Since by assumption $\alpha$ and $\beta$ have the same parity, $|\alpha| + |\beta|$ is even. Thus the integral vanishes if $2(j+m + s + r)= 2n$ is odd which occurs if $n$ is half-integer. This shows that $A_{\alpha,k, \beta, \ell} = \bigl(\boldsymbol{f}_{\alpha,k}, \boldsymbol{f}_{\beta,\ell}\bigr)_{\mathfrak{S}_0, \phi}\in \mathbb{C} (\!(\hbar)\!)$ (see \eqref{DefA}) and thus the same is true for $B_{\alpha,k,\beta, \ell}$ given in \eqref{DefB}. 
\\ Since $M=BCB$ for $C$ given in \eqref{CMatrix} as described in the proof of Prop.\ \ref{orthonormprop}, it remains to show that the elements of $C$ are in $\mathbb{C}(\!(\hbar)\!)$ where \begin{multline}\label{Calphabeta} C_{\alpha,k,\beta, \ell} = \bigl(\boldsymbol{f}_{\alpha,k}, Q \boldsymbol{f}_{\beta,\ell}\bigr)_{\mathfrak{S}_0, \phi} \\ = \sum_{n\in\mathbb{N}_0/2}\hbar^n \sum_{\natop{j, m, s, r, q\in\mathbb{N}_0/2}{j+m+s+r+q=n}}\int_{\mathbb{R}^n} \gamma_r[f_{\alpha,k, j}, Q_q f_{\beta, \ell, m}](y) \omega_s(y) e^{-\langle y, \Lambda y\rangle}\, dy \, . \end{multline} By Lemma \ref{Qj}, the operator $Q_q$ changes the parity by $(-1)^{2q}$. Therefore, it follows from the discussion above \eqref{Anull} that the integral on the right hand side of \eqref{Calphabeta} vanishes if $j + m + s + r + q = n$ is half integer. This proves the proposition. \end{proof} Now we come back to the Taylor expansion of $H_{\phi,\hbar}$ with respect to our original not rescaled chart $x$. \begin{theorem}[Eigendecomposition of $\tau_p(H_{\phi, \hbar})$]\label{PropEigenmitx} Let $\hbar E_0$ be an eigenvalue of multiplicity $m_0$ of the local harmonic oscillator $H_{p, \hbar}$ from Def.\ \ref{DefLocalHarmonicOscillator}. Then the operator $\tau_p \bigl(H_{\phi, \hbar}\bigr)$ on $\mathfrak{S}$ as in \eqref{LocalFormHPhi} has a system of $m_0$ eigenfunctions $\boldsymbol{\hat{a}}_j\in \mathfrak{S}$, $j=1. \ldots m_0,$ that are orthonormal with respect to $(\,\cdot\,, \,\cdot\,)_{\mathfrak{S}, \phi}$ and that are of the form \begin{equation}\label{eigenfctonK} \boldsymbol{\hat{a}}_j = \hbar^{-K}\sum_{k\in \mathbb{N}_0/2} \hbar^k \boldsymbol{a}_{j,k} \quad\text{with}\quad \boldsymbol{a}_{j,k} = \sum_{\beta\in\mathbb{N}^n} {a}_{j,k,\beta} x^\beta \in \mathcal{E}_p[[x]] \end{equation} where $K = \max_{\alpha, k \in I_{E_0}}|\alpha|/2$ for $I_{E_0}$ as given in Prop.\ \ref{orthonormprop} and the lowest order monomial in $\boldsymbol{a}_{j,k}$ is of degree $\max\{2(K-k), 0\}$. The associated eigenvalues are \begin{equation}\label{eigenvalueonK} \hbar \boldsymbol{E}_j = \hbar \Bigl( E_0 + \sum_{k \in \mathbb{N}/2} \hbar^k E_{j,k} \Bigr) \, , \qquad E_{j,k}\in\mathbb{R}\, . \end{equation} If $|\alpha|$ is even (or odd resp.) for all $(\alpha, \ell)\in I_{E_0}$, then all half integer terms (or integer terms respectively) in the expansion \eqref{eigenfctonK} vanish, and in both cases, all half integer terms in the expansion \eqref{eigenvalueonK} vanish. \end{theorem} \begin{proof} As discussed in Remark \ref{RemarkOnH0AndQ0}, $E_0$ is an eigenvalue of multiplicity $m_0$ of $Q_0$. Thus we derive the eigenfunctions $\boldsymbol{\hat{a}}_j\in \mathfrak{S}$ of $\tau_p\bigl(H_{\phi, \hbar}\bigr)$ by rescaling the eigenfunctions $\boldsymbol{\psi}_j\in \mathfrak{S}_0$ of $Q$ given in \eqref{psij}, explicitly \[ \boldsymbol{\hat{a}}_j = R^{-1} \boldsymbol{\psi}_j = R^{-1} \sum_{\ell\in \mathbb{N}_0/2}\hbar^\ell \sum_{\natop{\beta\in \mathbb{N}_0^n}{|\beta| \leq d_{j\ell}}} \psi_{j,\ell,\beta} y^\beta = \sum_{\ell\in \mathbb{N}_0/2} \sum_{\natop{\beta\in \mathbb{N}_0^n}{|\beta| \leq d_{j\ell}}}\hbar^{\ell-\frac{|\beta|}{2}} \psi_{j,\ell,\beta} x^\beta \] for $d_{j,\ell} =2 \ell + \max_{(\alpha,k)\in I_{E_0}} |\alpha| $ (see Corollary \ref{EssentialCorollary}). Thus $\ell-\frac{|\beta|}{2} \geq - \max_{(\alpha,k)\in I_{E_0}}{ |\alpha|}/{2} =: - K$. 
Moreover, setting $k=\ell - {|\beta|}/{2} + K$, we get $|\beta|/2\geq K-k$ and \[ \boldsymbol{\hat{a}}_j = \hbar^{-K}\sum_{k\in \mathbb{N}_0/2} \hbar^k \sum_{\natop{\beta\in\mathbb{N}_0^n}{|\beta|\geq 2(K-k)}} \psi_{j,|\beta|/2-K+k, \beta} x^\beta =: \hbar^{-K}\sum_{k\in \mathbb{N}_0/2} \hbar^k \boldsymbol{a}_{j,k} \, . \] Thus ${a}_{j,k,\beta} = \psi_{j,|\beta|/2-K+k, \beta}$ and the lowest degree of $\boldsymbol{a}_{j,k}$ is given by $\max \{ 2(K-k), 0\}$. The orthonormality of the eigenfunctions $\boldsymbol{\hat{a}}_j = R^{-1} \boldsymbol{\psi}_j$ follows at once from Corollary \ref{EssentialCorollary} together with Definition \ref{DefProdK_0}. We now consider the case that all $|\alpha|$ are even (or odd respectively) for $(\alpha, \ell)\in I_{E_0}$. By Corollary \ref{EssentialCorollary}, Prop.\ \ref{ThmHalfIntegers} and \cite[Thm.\ A2.3]{klein-schwarz}, we can write any eigenfunction $\boldsymbol{\psi}\in \Pi_{E_0}\mathfrak{S}_0$ of $Q$ as linear combination of $\Pi_{E_0} h_{\alpha,\ell}\, , \; (\alpha, \ell)\in I_{E_0},$ with coefficients $\lambda_{\alpha,\ell}\in \mathbb{C}(\!(\hbar)\!)$. Thus, using the notation in the proof on Prop.\ \ref{ThmHalfIntegers}, by \eqref{falphaj} we explicitly get \begin{equation}\label{f1} \boldsymbol{\psi} = \sum_{(\alpha,\ell)\in I_{E_0}} \sum_{k\in \mathbb{N}_0} \hbar^k \lambda_{\alpha,\ell,k} \sum_{s\in\mathbb{N}_0/2}\hbar^s f_{\alpha,\ell,s}\; . \end{equation} As discussed below \eqref{falphaj}, the polynomials $f_{\alpha,\ell,s}$ are of degree $|\alpha| + 2s$ in $y$ and have the parity $(-1)^{|\alpha| + 2s}$. Thus rescaling explicitly gives \begin{equation}\label{f2} R^{-1} f_{\alpha,\ell,s} = R^{-1} \!\!\sum_{\natop{r\in\mathbb{N}_0}{r\leq |\alpha|/2 + s}} \sum_{\natop{\beta\in \mathbb{N}_0^n}{|\beta| = |\alpha| + 2s + 2r}} \!\!f_{\alpha,\ell,s,\beta} y^\beta = \sum_{\natop{r\in\mathbb{N}_0}{r\leq |\alpha|/2 + s}} \!\hbar^{|\alpha|/2 + s - r }\! \sum_{\natop{\beta\in \mathbb{N}_0^n}{|\beta| = |\alpha| + 2s - 2r}}\!\! f_{\alpha,\ell,s,\beta}x^\beta \, . \end{equation} Inserting \eqref{f2} into \eqref{f1} and setting $m=k +2s - r\in\mathbb{Z}$ gives the expansion \[ R^{-1} \boldsymbol{\psi} = \sum_{(\alpha,\ell)\in I_{E_0}} \sum_{m\in\mathbb{Z}, m\geq -|\alpha|/2} \hbar^{|\alpha|/2 + m } p_{\alpha,\ell, m} \in \mathfrak{S} \; . \] Thus if $|\alpha|$ is odd (or even respectively), then $|\alpha|/2 + m$ is half-integer (or integer resp.). So if one of these assumptions is true for all $(\alpha,\ell)\in I_{E_0}$, there remain no integer terms (or half-integer terms respectively) in the expansion of $R^{-1} \boldsymbol{\psi}$. Since the transition to $\boldsymbol{\hat{a}}_j$ is just a reordering, this is also true for $\boldsymbol{\hat{a}}_j$. The statement on the eigenvalues follows at once from Corollary \ref{EssentialCorollary}, the definition of $Q$ in \eqref{RescaledSeriesOfH} and Prop.\ \ref{ThmHalfIntegers}. \end{proof} \section{Proof of Theorem \ref{Theorem1}}\label{Kapitel4} Given Setup \ref{setup1}, we fix an admissible pair $(U, \phi)$ and an eigenvalue $\hbar E_0$ of $H_{p,\hbar}$ of multiplicity $m_0$. For $j=1, \ldots, m_0$, let $\boldsymbol{\hat{a}}_j$ and $\boldsymbol{E}_j$ be the associated orthonormal eigenfunctions and eigenvalues of $\tau_p\bigl(H_{\phi, \hbar}\bigr)$ as given in Thm.\ \ref{PropEigenmitx}. By Corollary \ref{BorelTheorem}, for each $k\in \mathbb{N}_0/2$ there exist sections $\tilde{a}_{j,k} \in \Gamma^\infty(U, \mathcal{E})$ such that $\tau_p\bigl( \tilde{a}_{j,k}\bigr) = \boldsymbol{a}_{j,k}$. 
Then by Theorem \ref{PropEigenmitx} we have \begin{equation} \label{RestEquation} \Bigl(H_{\phi, \hbar} - \hbar\bigl( E_0 + \sum_{k \in \mathbb{N}/2} \hbar^k E_{j,k} \bigr)\Bigr) \sum_{k\in \mathbb{N}_0/2} \hbar^k \tilde{a}_{j,k} = \hbar\sum_{k\in \mathbb{N}_0/2} \hbar^k r_{j,k} \end{equation} in the sense of formal series in $\hbar^{1/2}$ with coefficients in $\Gamma^\infty(U, \mathcal{E})$, where $r_{j,k}\in \ker \tau_p \cap \Gamma^\infty_c (U, \mathcal{E})$, i.e., $r_{j,k}$ vanishes to infinite order at $p\in M$ for all $k\in \mathbb{N}_0/2, j=1, \ldots m_0$. We now want to modify the sections $\tilde{a}_{j,k}$ such that \eqref{RestEquation} is solved with zero on the right hand side. By Remark \ref{RmkTransportEquations}, this is the case for a series $\sum \hbar^k a_{j, k}$ if and only if the coefficients $a_{j,k}$ solve the transport equations \eqref{TransportEquations}. Our terms $\tilde{a}_{j,k}$ solve the transport equations {\em almost}, up to the error terms $r_{j,k}$ which vanishes to infinite order at $0$. To get rid of these terms, we use the following theorem. \begin{theorem}[Flat Solutions] \label{TheoremFlatSolutions} Let $X \in \Gamma^\infty(U, TM)$ be a vector field vanishing at $p \in U$ such that the eigenvalues of its linearization $\nabla X|_p$ at $p$ all have positive real part and let $A$ be an endomorphism field of the vector bundle $\mathcal{E}$. Assume that $U$ is star-shaped around $p$ with respect to $X$. Then for each section $r \in \ker \tau_p\cap \Gamma^\infty (U, \mathcal{E})$, there exists a unique section $\eta \in \ker \tau_p\cap \Gamma^\infty (U, \mathcal{E})$ solving the differential equation \begin{equation}\label{allform} (\nabla_X^\mathcal{E} + A)\, \eta = r. \end{equation} \end{theorem} A proof of the scalar case can be found in \cite{helffer-sjostrand-1}. For a proof of this version, we refer to \cite[Thm.\ 4.1]{matthias} or \cite{giacomo}. For $j=1, \dots, m_0$, we now look for a solution $\boldsymbol{\eta}_j := \sum_{k\in \mathbb{N}_0/2} \hbar^k \eta_{j,k}$ of \eqref{RestEquation} with $\eta_{j,k}\in\ker \tau_p\cap \Gamma^\infty (U, \mathcal{E})$. The first equation is \begin{equation} \label{aj0} \bigl(\nabla_X^\mathcal{E} + A\bigr)\eta_{j,0} = r_{j,0}~~~~~~\text{with}\quad X = 2 \operatorname{grad} \phi~~~~\text{ and }~~~~A = W + \Delta \phi - E_0\; . \end{equation} It follows from Setup \ref{setup1} that $X$ and $A$ fulfill the assumptions given in Thm \ref{TheoremFlatSolutions} and since $(U, \phi)$ is admissible, $U$ is star-shaped around $p$ with respect to $X$ by Definition \ref{DefAdmissible}. Thus \eqref{aj0} has a unique solution $\eta_{j,0}\in \ker \tau_p\cap \Gamma^\infty (U, \mathcal{E})$ and $a_{j,0}:= \tilde{a}_{j,0} - \eta_{j,0}$ solves $\bigl(\nabla_X^\mathcal{E} + A\bigr)a_{j,0} = 0$. We now proceed inductively. Assume that the equations of order $1\leq \ell \leq k+1/2$ are solved by sections $\eta_{j, \ell-1}\in \ker \tau_p\cap \Gamma^\infty (U, \mathcal{E})$. Then for $X$ and $A$ as given in \eqref{aj0}, the equation of order $k+1 \in \mathbb{N}/2$ is given by \begin{equation}\label{etajk} (\nabla_X^\mathcal{E} + A)\eta_{j, k} = r_{j, k} - L \eta_{j,k-1} + \sum\nolimits_{i=1/2}^k E_{j,i}\, \eta_{j, k-i}\, . \end{equation} Now as the right hand side of \eqref{etajk} is known and flat at $p$, it can be considered as inhomogeneity $r$ and by Thm.\ \ref{TheoremFlatSolutions}, there exists a unique solution $\eta_{j,k}\in \ker \tau_p\cap \Gamma^\infty (U, \mathcal{E})$ of \eqref{etajk}. 
Setting $a_{j,k} := \tilde{a}_{j,k} - \eta_{j,k}\in \Gamma^\infty (U, \mathcal{E})$ for $j=1, \dots, m_0$ and $k\in \mathbb{N}_0/2$, it follows that \[ \boldsymbol{a}_j := \hbar^{-K}\sum_{k\in\mathbb{N}_0/2} \hbar^k a_{j,k} \] solves \begin{equation*} \Bigl(H_{\phi, \hbar} - \hbar \boldsymbol{E}_j \Bigr) \boldsymbol{a}_j = 0 \end{equation*} in the sense of asymptotic series in $\hbar^{1/2}$ with coefficients in $\Gamma^\infty(U, \mathcal{E})$, thus Property 1 of Thm \ref{Theorem1} is shown. To prove Property 2 of Thm \ref{Theorem1}, we first remark that for any cut-off function $\chi\in \Gamma^\infty_c (U, [0,1])$ with $\chi\equiv 1$ in a neighborhood of $p$, we have $\chi a_{j,k}\in \Gamma^\infty_c (U, \mathcal{E})$ for $j=1, \ldots m_0$ and $k\in \mathbb{N}_0/2$. Moreover, since $\chi$ and $\eta_{j,k}$ are in $\ker\tau_p$, we have by construction $\tau_p (\chi a_{j,k}) = \tau_p(\tilde{a}_{j,k}) = \boldsymbol{a}_{j,k}$, $j=1, \ldots m_0, k\in \mathbb{N}_0/2$. Thus by Thm.\ \ref{PropEigenmitx}, \eqref{scalarprodmitI} and \eqref{extendI} we have in $\mathbb{C}(\!(\hbar^{1/2})\!)$ for all $i,j=1, \ldots m_0$ \begin{align*} \delta_{ji} &= \bigl(\boldsymbol{\hat{a}}_j, \boldsymbol{\hat{a}}_i\bigr)_{\mathfrak{S}, \phi} = \Bigl(\hbar^{-K}\!\sum_{k\in\mathbb{N}_0/2} \hbar^k \boldsymbol{a}_{j,k}, \hbar^{-K}\!\sum_{\ell\in\mathbb{N}_0/2} \hbar^\ell \boldsymbol{a}_{i,\ell}\Bigr)_{\mathfrak{S}, \phi}\\ &= \Bigl(\hbar^{-K}\!\sum_{k\in\mathbb{N}_0/2} \hbar^k \tau_p \bigl(\chi a_{j,k}\bigr), \hbar^{-K}\!\sum_{\ell\in\mathbb{N}_0/2} \hbar^\ell \tau_p\bigl(\chi a_{i,\ell}\bigr)\Bigr)_{\mathfrak{S}, \phi} \\ &= \mathcal{I}\Bigl( \gamma \bigl[\hbar^{-K}\!\sum_{k\in\mathbb{N}_0/2} \hbar^k\chi a_{j,k}, \hbar^{-K}\!\sum_{\ell\in\mathbb{N}_0/2} \hbar^\ell \chi a_{i,\ell}\bigr] \Bigr) \\ &= \mathcal{I}\Bigl( \gamma \bigl[\chi \boldsymbol{a}_j, \chi \boldsymbol{a}_i\bigr]\Bigr)\; . \end{align*} The statement about the vanishing of terms of half-integer or integer order, depending on the parity of $|\alpha|$, follows from the analogous result in Thm.\ \ref{PropEigenmitx}. \begin{remark} By similar arguments as for the method of stationary phase, we get the following analytic statement from \eqref{RestEquation}: For each open set $U^\prime \subset \subset U$, for each $N \in \mathbb{N}/2$ and for each $\hbar_0>0$, there exists a constant $C>0$ such that \begin{equation} \Bigl|\Bigl(H_{\hbar} - \hbar\bigl(E_0 - \sum\nolimits_{k=1/2}^N \hbar^k E_{j,k}\bigr)\Bigr) e^{-\phi/\hbar} \sum\nolimits_{k = 0}^N \hbar^k \tilde{a}_{j,k} \Bigr| \leq C \hbar^{N+K+3/2}. \end{equation} uniformly on $U^\prime$. The extra factor $\hbar^K$ is present on the right hand side because the terms $\tilde{a}_{j, k}$ vanish to order at least $2K - k$ by Thm.\ \ref{PropEigenmitx}. However, to get the stronger result \begin{equation}\label{Equation77} \Bigl| \Bigl( H_{\phi, \hbar} - \hbar\bigl(E_0 - \sum\nolimits_{k=1/2}^N \hbar^k E_{j,k}\bigr) \Bigr) \sum\nolimits_{k=0}^N \hbar^k a_{j, k} \Bigr| \leq C \hbar^{N+K +3/2} \end{equation} uniformly on $U^\prime$ (which is equivalent to property 1 of Thm.\ \ref{Theorem1}), we need Thm. \ref{TheoremFlatSolutions}. \end{remark}
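To indicate how Theorem \ref{TheoremFlatSolutions} produces flat solutions, we add a simple model computation (a sketch only; the choices of a one-dimensional base, a trivial line bundle, $X=2x\,\partial_x$, corresponding to $\phi(x)=x^2/2$ in \eqref{aj0}, and a constant endomorphism $A=a\in\mathbb{C}$ are simplifying assumptions and do not reflect the general geometric setting). Equation \eqref{allform} then reads $2x\eta'(x)+a\,\eta(x)=r(x)$ with $r$ flat at $0$, and for $x>0$ a solution is given by
\begin{equation*}
\eta(x) \;=\; \frac{1}{2}\, x^{-a/2}\int_0^x t^{a/2-1}\, r(t)\, dt\, .
\end{equation*}
Since $r(t)=O(t^N)$ for every $N$, the integral is $O\bigl(x^{N+\operatorname{Re}a/2}\bigr)$ for all sufficiently large $N$, so $\eta(x)=O(x^N)$ for every $N$, i.e.\ $\eta$ is again flat at $0$; the homogeneous solutions $c\,x^{-a/2}$, $c\neq 0$, are not flat, which is consistent with the uniqueness statement in Theorem \ref{TheoremFlatSolutions}.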
\section{Summary} \label{SEC:Conclusion} In this paper we presented a database of real ham radio recordings with parallel clean and distorted speech data, offering a novel challenge for speech activity detection, enhancement, and reconstruction. The database covers diverse noise conditions with a \gls{SDR} ranging from \SI{-20}{dB} to \SI{5}{dB}. Additionally, two baseline systems for \gls{SAD} are presented. In future work, we will evaluate speech enhancement and reconstruction algorithms on this new database and work on improved targets for supervised \gls{ASR} training. \section{Acknowledgements}\label{sec:acknowledgements} Computational resources were provided by the Paderborn Center for Parallel Computing. \newpage \small \balance \bibliographystyle{ieeetr} \section{Experiments} \label{SEC:Experiments} \begin{table}[b] \caption{Statistics of the different data sets.} \renewcommand*{\arraystretch}{1.1} \centering \small \begin{tabular}{ l c c c } \toprule & {Training} & {Development} & {Evaluation}\\ \toprule {Duration / h} & 121 & 18 & 37 \\ {Speech / $\%$} & 13.90 & 12.58 & 13.16 \\ {$\#~$Speakers} & 705 & 80 & 175 \\ Average SDR / dB & -9.09 & -10.4 & -8.92 \\ Average STOI & 0.54 & 0.52 & 0.53 \\ \toprule \end{tabular} \label{tbl:datasets} \end{table} The recorded database is split into the following three data sets: training, development and evaluation. Each set includes data recorded by about $22$ different KiwiSDR stations. The speakers are strictly disjoint among the three sets, no speaker is active in more than one example, and the speakers are equally distributed between female and male over the whole database. To classify the degree of distortion of each example we measured the \gls{SDR} \cite{Vincent2006Performance} and the short time objective intelligibility (STOI) score~\cite{Taal2011STOI}. Both metrics require the availability of clean and distorted data as provided by this database. In Fig.~\ref{fig:hist} the histograms of SDR and STOI values in the evaluation set are presented, illustrating the diversity of channel conditions represented by the database. Table \ref{tbl:datasets} summarizes the duration, the speech activity, the number of speakers per data set, as well as the average of both metrics, SDR and STOI. For all data sets the percentage of speech activity is close to $\SI{13}{\percent}$. Therefore, the database includes a large amount of ham radio noise which could be used for data augmentation. \begin{figure}[t] \centering \input{images/density} \caption{Histogram of STOI and SDR values on evaluation data.} \label{fig:hist} \end{figure} \begin{figure}[t] \centering \input{images/comparison_nn_sigpro} \caption{ROC curves of the \gls{SAD} systems for the development data set. At the intersection with the straight gray line the equal error rate can be read off.} \label{fig:comparison} \end{figure} To evaluate the \gls{SAD} systems, we used the OpenSAD challenge scoring \cite{nist16} with a collar of \SI{0.5}{s} around both the beginning and end of each segment, i.e., speech on-/offset errors below that value were not counted as an error. For the experiments the following \gls{STFT} parameters are used: \begin{itemize} \item DNN SAD: FFT size 256, \SI{10}{ms} shift, \SI{25}{ms} window length \item Stat.
SAD: FFT size 1024, \SI{32}{ms} shift, \SI{64}{ms} window length \end{itemize} \vspace*{-2em} Furthermore, the number of hidden units is set to $514$ for the \gls{GRU} and to $10$ for the \gls{FF} classifier layer. The results for the \gls{DNN} and the statistical \gls{SAD} on the development data set are displayed in terms of a \gls{ROC} curve in Figure \ref{fig:comparison}. From the \gls{ROC} curve the threshold corresponding to the equal error rate is determined for each system on the development data set and then applied to the evaluation data set. The results are shown in Table \ref{tbl:comparison}. The \gls{DNN} outperforms the statistical \gls{SAD} on both the development and evaluation data set. However, the statistical \gls{SAD} using a C implementation has a significantly lower \gls{RT} factor than the \gls{DNN} implemented in PyTorch \cite{pytorch19}. The real-time (RT) factor is calculated on an Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} CPU E3-1240 v6 @ 3.70GHz with 8GB RAM. For both systems, all detection metrics are lower than their counterparts in \cite{Heitkaemper20Fearless}, which emphasizes the difficulty of the presented database. \begin{table}[h] \caption{Results of different \gls{SAD} systems on the evaluation data set, where the SAD threshold has been determined on the development set.} \label{tbl:comparison} \renewcommand*{\arraystretch}{1.1} \centering \small \begin{tabular}{ l S[table-auto-round, table-format=-1.2] S[table-auto-round, table-format=-1.2] S[table-auto-round, table-format=-1.2] S[table-auto-round, table-format=-1.4] } \toprule System & {Recall} & {Precision} & {F1-score} & {RT-factor}\\ \toprule Statistical & 85.90045356 & 93.23647535 & 89.41825194 & 0.00472 \\ DNN based & 95.58814162 & 95.65484254 & 95.62148045 & 0.0119\\ \toprule \end{tabular} \end{table} \subsection{DNN based SAD} \label{SEC:DNN} Since the Fearless Steps challenge database \cite{fearless20} also offers \gls{HF} transmission data, the best-performing \gls{SAD} system \cite{Heitkaemper20Fearless} from the 2020 challenge is adapted to the new database. Fig.~\ref{fig:model} shows the block diagram of the architecture of the \gls{DNN} for \gls{SAD}. \begin{figure}[ht] \centering \input{images/model} \caption{Block diagram of the \gls{DNN} architecture for speech activity detection, where $R$ represents the output size of the FF layer.} \label{fig:model} \end{figure} The \gls{STFT} input data is first normalized using an $L^2$-norm over the time dimension to compensate for possible variations in the signal due to different recording devices, distance to the transmitter or speaker. The normalized signal is processed by several \gls{CNN} blocks with subsequent temporal smoothing as shown in the figure. The \gls{CNN} output is then processed by two uni-directional \gls{GRU} layers \cite{Kyunghyun14GRU} followed by a \gls{FF} classification layer and max pooling over the feature dimension. All \gls{CNN} blocks consist of two 2D-\gls{CNN} layers with batch normalization and max pooling over the feature dimension. Note that no pooling is applied to the time dimension to allow a frame-wise activity estimation. The purpose of the \gls{GRU} is to gather temporal information from a larger context than the \gls{CNN} layers. During training, each utterance is split into segments of \SI{4}{s} length with a random offset to ensure that the network cannot overfit to the particular lengths of speech and noise regions of the training set.
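To make the preceding description concrete, a minimal PyTorch sketch of such a CNN-GRU classifier is given below. It is not the implementation used for the reported results (that code is available in the repository linked in Sec.~\ref{SEC:Download}); in particular, the number of \gls{CNN} blocks, the channel counts, the kernel sizes and the omission of the temporal smoothing stage are simplifying assumptions, while the $514$ \gls{GRU} units and the $10$-unit \gls{FF} layer with subsequent max pooling follow the description above.
\begin{verbatim}
import torch
import torch.nn as nn

class CnnBlock(nn.Module):
    """Two 2D convolutions with batch norm; pooling only over the feature axis."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),  # the time dimension is not pooled
        )

    def forward(self, x):
        return self.net(x)

class SadNet(nn.Module):
    """Sketch of the CNN-GRU speech activity detector described in the text."""
    def __init__(self, num_freq_bins=129, channels=(16, 32, 64)):  # assumptions
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in channels:
            blocks.append(CnnBlock(in_ch, out_ch))
            in_ch = out_ch
        self.cnn = nn.Sequential(*blocks)
        freq = num_freq_bins
        for _ in channels:  # each block halves the feature dimension
            freq //= 2
        self.gru = nn.GRU(freq * in_ch, 514, num_layers=2, batch_first=True)
        self.ff = nn.Linear(514, 10)  # 514 GRU units, 10 FF units as in the text

    def forward(self, stft_mag):
        # stft_mag: (batch, frames, freq); L2 normalization over the time axis
        x = stft_mag / (stft_mag.norm(dim=1, keepdim=True) + 1e-6)
        x = x.unsqueeze(1)                      # -> (batch, 1, frames, freq)
        x = self.cnn(x)                         # -> (batch, ch, frames, freq')
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.gru(x)                      # two uni-directional GRU layers
        x = torch.sigmoid(self.ff(x))           # frame-wise scores of the FF layer
        return x.max(dim=-1).values             # max pooling over the feature axis

model = SadNet()
scores = model(torch.rand(8, 400, 129))  # 4 s at 10 ms shift -> 400 frames
# scores: (8, 400) frame-wise speech probabilities, trained with a BCE loss
\end{verbatim}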
Training is performed using binary cross entropy, a batch size of 32 and the Adam optimizer with a learning rate of $0.001$. During evaluation the network output is smoothed by applying a simple median filter with a window length of \SI{250}{ms} and \SI{50}{\percent} overlap. \section{Introduction} During the last decade \glspl{DNN} have achieved impressive results on a variety of speech tasks, such as \gls{SAD}, speech enhancement, source separation, and \gls{ASR} \cite{Watanabe18book,Haeb19Overview}. Their success, however, rests crucially on the availability of labeled training data, as the systems are trained in a supervised manner \cite{Heymann18MaskBF}. Manual annotation of training data can be a very tedious task, and for some problems, such as speech enhancement, it is nearly impossible for large data sets. This issue can be circumvented with parallel data, which refers to the existence of distorted and clean versions of the same utterance. While the former is applied to the network input, the latter can be used to automatically derive the training targets. For example, targets for time-frequency masks can be generated from clean speech and then used to train a mask estimator from noisy speech \cite{Erdogan15Masks,Kolbaek17UPit}. Such parallel data is usually generated by artificially distorting clean speech signals. Among the many examples of such databases with artificially generated parallel data are WSJ-2-Mix \cite{Wang18McDc} and SMS-WSJ \cite{Drude18SMSWSJ}, two data sets for source separation research. For real recordings of degraded speech, parallel data is usually not available. For example, the CHiME-5 \cite{chime5} and AMI \cite{ami} databases offer real recordings of a meeting scenario, but parallel clean speech is not available. This leaves researchers with a dilemma: while artificial corruption of clean data offers the opportunity to provide parallel data useful for network training, real data are ``real'', and artificial degradation can never perfectly mimic true recordings of degraded speech \cite{Heymann18MaskBF}. In this paper we present one of the rare cases of a database which offers both parallel data and real recordings of degraded speech. It consists of \gls{HF} speech transmissions recorded by amateur radio receivers. Additionally, the original transmitted clean audio, taken from the LibriSpeech database \cite{libri20}, is available and temporally synchronized to the recordings. This distinguishes this \gls{HF} database from the two most recently released \gls{HF} databases: neither the audio data from the entire Apollo-11 mission, which was released by the University of Texas during the ``Fearless Steps challenge'' \cite{fearless20}, nor the \gls{HF} data released as part of the DARPA RATS program \cite{rats12} offer parallel data. Amateur radio (abbrev.\ ``ham radio'') is the non-commercial use of the radio frequency spectrum by hobby radio operators. Regulated by the \gls{ITU} \cite{itu2020}, the service is intended for research purposes, private conversations, and even to provide a means for wireless communication in emergency or disaster scenarios. The database presented in this paper can be used to develop speech activity detection, speech enhancement or signal reconstruction algorithms for speech transmitted over HF communication links. This includes not only the aforementioned amateur radio transmissions, but also several commercial applications of the HF radio spectrum, such as aircraft, police and marine radio.
The database has been generated as follows. Our ham radio station located in Paderborn, Germany, transmitted utterances of the Librispeech corpus, which were then received by so-called Kiwi \gls{SDR} stations across Europe \cite{kiwi20}. A WebSocket connection to the Kiwi stations was established to transmit the received signals back to Paderborn via the internet. The received and transmitted data was then synchronized to obtain the desired parallel data sets. The developed database is published under the CC BY 4.0 license\footnote{\url{http://creativecommons.org/licenses/by/4.0/}} and can be used for research on \gls{SAD}, speech enhancement, and \gls{ASR} of speech transmitted over HF radio channels. In this paper we present two baseline systems for \gls{SAD}: one is based on statistical methods, i.e., Wiener filtering using minimum statistics for noise estimation, and the other utilizes \glspl{DNN} to solve the task. The paper is organized as follows: In Sec.~\ref{SEC:dataset} the speech data transmission and recording are described, followed by Sec.~\ref{SEC:SAD} explaining the two baseline systems, where Sec.~\ref{SEC:SigPro} is dedicated to the statistical approach and Sec.~\ref{SEC:DNN} to the \gls{DNN} system. After discussing the experimental results in Sec.~\ref{SEC:Experiments}, the paper concludes with Sec.~\ref{SEC:Conclusion}. \section{Ham Radio Database}\label{SEC:dataset} To create the database we transmitted speech signals from our amateur radio station in Paderborn and collected the data which has been received in parallel by several Kiwi-\gls{SDR} stations throughout neighboring countries. Each Kiwi-\gls{SDR} is separately connected to our servers to transmit the recorded signals back to Paderborn via a WebSocket connection. For the automated transmission the beacon callsign DB0UPB (assigned by the German Telecommunications body ``Bundesnetzagentur'') was used. The transmission and recording scheme is depicted in Fig.~\ref{FIG:gesamtsystem}. \begin{figure}[htb] \input{images/gesamtsystem.tex} \caption{System for distributed recording of radio signals.} \label{FIG:gesamtsystem} \end{figure} Predicted signal quality for the chosen HF carrier frequencies was checked using the web site \cite{VOACAP20}, and appropriate receiver stations from regions with good predicted signal quality were selected. The dataset includes recordings from stations in Germany, Austria, Switzerland, Belgium, the Netherlands and the United Kingdom. The signals are \gls{SSB} modulated using the lower sideband (LSB) with a bandwidth of $\SI{2.7}{kHz}$ at carrier frequencies of $\SI{7.05}{MHz}-\SI{7.053}{MHz}$ and $\SI{3.6}{MHz} -\SI{3.62}{MHz}$. Although the original audio data has a sampling rate of \SI{16}{kHz} and the Kiwi-\gls{SDR} samples the data at \SI{12.001}{kHz}, the emitted signal is band-limited to $\SI{2.7}{kHz}$, adhering to the transmission regulations of the \gls{ITU}. In a first processing step the recorded signals are band-limited to $\SI{4}{kHz}$ via a linear-phase Parks-McClellan filter and downsampled to a rate of $\SI{8}{kHz}$. \subsection{Data preparation} The data preparation targets three key issues: First, the recorded data should allow for automatic annotation of speech activity. Second, the data streams of all recording stations have to be synchronized with the emitted audio data to create parallel data. Aligning the clean (transmitted) data and noisy (received) data is important to let the speech activity labels computed on the clean data carry over to the noisy data.
Finally, it should be possible to automatically decide whether a transmission has been received at each individual station, even in very low \gls{SNR} conditions. \subsubsection{Annotation process} Our annotation process starts with a carefully conducted data selection and concatenation. The speech samples are taken from the clean training subset of the Librispeech corpus \cite{libri20}, a public database of read speech of size close to \SI{500}{hrs} containing over $1000$ speakers. For this database random segments of $\SI{1}{s}$ to $\SI{8}{s}$ length are selected from utterances of the Librispeech corpus. As shown in Fig.~\ref{fig:clean_data}, we concatenate $5$ segments to a single \textit{audio signal sequence}. Each segment is headed and followed by zeros which guarantee silence periods between utterances. The number of zeros here corresponds to a randomly drawn length between $\SI{8}{s}$ and $\SI{30}{s}$. \begin{figure}[b] \centering \input{images/example_data} \caption{Example of a speech activity sequence over time. Labels ``sp'' and ``sil'' represent speech and silence, respectively.} \label{fig:clean_data} \end{figure} Subsequently, the speech activity labels for the audio signal sequences are generated as follows: A \gls{GMM} \gls{HMM} acoustic model is trained on the clean training data set of the Librispeech corpus using the KALDI toolkit~\cite{Povey11Kaldi}. This acoustic model is used to calculate forced alignments, and all regions with non-silent labels are declared as containing speech. Additionally, the transcription for each speech segment is generated by extracting the part of the original librispeech transcription corresponding to the chosen $\SI{1}{s}$ to $\SI{8}{s}$ segment using alignment information. \subsubsection{Signal preparation for time synchronization} The Kiwi-\gls{SDR} receivers record a predefined, fixed frequency band and stream the recorded audio data with an unknown time offset via a WebSocket connection to our servers. This unknown offset has to be determined to synchronize the streams from all Kiwi-\glspl{SDR} and align them with the clean transmitted signals. To ease time synchronization, we added markers before and after each transmitted sequence. Each marker has a length of \SI{4}{s} and consists of $26$ chirp symbols which differ in starting frequency and orientation (ramp-up or ramp-down). The individual chirp encoding is derived from a gold code to ensure the orthogonality of the markers. In the spectrogram of Fig.~\ref{FIG:Chirp} a chirp sequence can be seen from frame index $0$ to $250$. \begin{figure}[b] \input{images/marker_signal} \caption{Example of a transmitted chirp sequence and the following audio signal} \label{FIG:Chirp} \end{figure} Our transmission scheme embeds multiple audio signal sequences into one single transmission by surrounding the sequences with markers and concatenating $N$ of them. Additional silence is included after the markers (\SI{5}{s}) and after the audio signal sequences (\SI{1}{s}) to increase the temporal distance between markers and speech. This is an important aspect to mitigate the effect of the Kiwi-\gls{SDR}'s \gls{AGC}. The \gls{AGC} correctly reacts on the marker and raises the gain, however, in a real audio transmission no preliminary warning is given to the receiver that audio is coming next. Therefore, the time between the marker and the signal has to be long enough so that the \gls{AGC} readjusts the gain to the original level to get realistic recordings. 
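The marker construction described above can be illustrated by the following short NumPy/SciPy sketch, which concatenates up- and down-chirp symbols according to a binary code (the symbol duration, the frequency range and the example code are assumptions made for illustration; the actual markers additionally vary the starting frequencies of the symbols and derive the encoding from a Gold code).
\begin{verbatim}
import numpy as np
from scipy.signal import chirp

def make_marker(code, fs=8000, marker_len=4.0, f_lo=300.0, f_hi=2500.0):
    """Concatenate len(code) chirp symbols: a 1 sweeps upwards, a 0 downwards."""
    sym_len = marker_len / len(code)
    t = np.arange(int(round(sym_len * fs))) / fs
    up = chirp(t, f0=f_lo, t1=sym_len, f1=f_hi)    # rising chirp symbol
    down = chirp(t, f0=f_hi, t1=sym_len, f1=f_lo)  # falling chirp symbol
    return np.concatenate([up if bit else down for bit in code])

# 26 symbols as in the text, here with an arbitrary example code:
marker = make_marker(np.random.randint(0, 2, size=26))
\end{verbatim}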
The beginning of one transmission scheme is shown in Fig.~\ref{FIG:Chirp}. \begin{figure*}[t] \centering \input{images/tikz_spect}% \caption{Received signals of three stations recording the same transmitted signal.} \label{fig:ex_spects} \end{figure*} \subsubsection{Marker-based time synchronization} The recorded Kiwi-\gls{SDR} audio streams are examined in the \gls{STFT} domain. Here, the marker's \gls{STFT} acts as a binary mask which is shifted along the audio stream to find the temporal shift with maximum correlation. Beneficially, the markers are orthogonal to each other and the length of $\SI{4}{s}$ enables a reliable detection even in low \gls{SNR} conditions. The marker detection process delivers a set of hypotheses for marker positions in the stream. A stream is segmented according to the marker positions and the data is considered valid if the following sanity checks are fulfilled: \begin{compactitem} \item All markers of a transmission are detected. \item All markers are detected in the expected order. \item All time differences between markers exactly match the expected timing. \end{compactitem} The third condition ensures that audio streams with dropped samples are discarded and that the synchronization error cannot exceed \SI{16}{ms}. Fig.~\ref{fig:ex_spects} shows synchronized recordings of the same transmitted audio sample received by three different Kiwi-\gls{SDR} stations. While some stations deliver audio signals with only a few distortions, e.g., Hagenow in Fig.~\ref{fig:ex_spects}, others suffer from bad propagation conditions resulting in low \gls{STOI} measures \cite{Taal2011STOI}, e.g., Newmarket. \subsection{Concurrent speakers}\label{sec:concurrent} Some of the recordings are accidentally corrupted by co-/adjacent channel interferences caused by transmissions from other ham radio users. Some users did not understand the English content explaining the transmission purpose and asked our automatic beacon to leave the frequency. Others did not check the frequency carefully enough and selected transmission frequencies too close to ours. Their signals are transmitted at neighboring frequencies, and, depending on the proximity, are hardly understandable. Their speech poses a unique challenge to speech enhancement and reconstruction. In the data these regions are, obviously, not marked as containing speech, as they have not been transmitted by us. As a consequence a few percent of the labels may not reflect true absence of speech. \subsection{Download} \label{SEC:Download} The database can be downloaded from \url{https://zenodo.org/record/4247491} and code examples on how to train and evaluate a neural network for \gls{SAD} on the presented data can be found in \url{https://github.com/fgnt/ham_radio}. The database is published under the CC BY 4.0 license and is thereby free to use for both commercial and non-commercial purposes. \section{Speech Activity Detection}\label{SEC:SAD} Both the \gls{DNN}-based \gls{SAD} and the statistical \gls{SAD} are similar to the ones we proposed in \cite{Heitkaemper20Fearless}. The following sections present a short overview of the systems. \subsection{Statistical SAD}\label{SEC:SigPro} The statistical \gls{SAD} processes the data in two phases. In the first phase the background noise is reduced by Wiener filtering, whose noise \gls{PSD} is estimated by minimum statistics. The gain function of the Wiener filter is given by \begin{align} W(t,f) = \max\left(1-\gamma\frac{\overline{|V(t,f)|^{2}}}{|X(t,f)|^{2}}, G_{\mathrm{min}}\right),
\end{align} with $G_{\mathrm{min}}$ denoting a lower bound on the Wiener filter gain. Here, the \gls{STFT}-coefficients of the observed signal $X(t,f)$ with $t$ as the frame index and $f$ as the frequency bin index are used to determine the noise \gls{PSD} $\overline{|V(t,f)|^{2}}$ and the \gls{PSD} of the current analysis window $|X(t,f)|^{2}$. Since the Kiwi-\gls{SDR}'s \gls{AGC} adapts to the receiver channel conditions, the noise level of the recordings is changing rapidly over time. Hence, the observation window of the minimum statistics for estimating $\overline{|V(t,f)|^{2}}$ has to be kept small, and the oversubtraction factor $\gamma$ has to be chosen quite large ($\gamma>20$). Noise \gls{PSD} estimation and Wiener filtering is repeated several times to maximize the \gls{SNR} gain. Following the multi-stage Wiener filter, a linear highpass is applied to remove low frequency noise. Afterwards, a simple $1^{\text{st}}$-order \gls{LPC} filter suppresses all parts of the signals that are not as well predictable as the highly correlated speech signals. In the second phase temporally smoothed sub-band energies are calculated, including adaptive thresholds to decide on speech activity. To this end, the \gls{STFT} of the denoised signal is computed and the frequency bins are smoothed with a mel filter-bank. Subsequently, sub-bands with $\SI{1}{kHz}$ bandwidth are formed, and the energy per sub-band is determined. Then, all sub-band energies are summed with a weighing factor of $1/s$ for the $s$-th sub-band. The resulting value, called \gls{CSBE}$(t)$, is tracked with minimum statistics to estimate the noise floor level (F-CSBE). A frame is declared to contain speech if the \gls{CSBE}$(t)$ value exceeds the F-CSBE value by a certain factor. Fluctuations between speech and noise decisions are suppressed by a subsequent median filter. Figure \ref{fig:sigpro} depicts a block diagram of the described statistical \gls{SAD} system. \begin{figure}[t] \centering \input{images/sigpro.tex} \caption{Overview on statistical \gls{SAD} components and signal processing queue.} \label{fig:sigpro} \vspace{-0.5cm} \end{figure} The strength of the statistical \gls{SAD} originates from the large \gls{SNR} gain of the multi-stage Wiener filter, which suppresses the noise without regard to speech quality. Thus, it focuses on parts of speech that have significant signal strength and misses parts with noise-like characteristics. However, the experimental results will show that these protected speech parts are sufficient for a noise robust \gls{SAD}.
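For concreteness, the core gain computation of the first phase can be sketched as follows (a simplified NumPy sketch: the sliding-minimum noise tracker is only a crude stand-in for a full minimum statistics estimator, and the window length, the oversubtraction factor and the value of $G_{\mathrm{min}}$ are assumptions; the highpass, \gls{LPC} and \gls{CSBE} stages described above are not included).
\begin{verbatim}
import numpy as np

def wiener_gain(stft, gamma=25.0, g_min=0.1, win=40):
    """Wiener gain per time-frequency bin; noise PSD from a sliding minimum."""
    psd = np.abs(stft) ** 2                    # |X(t,f)|^2
    noise = np.empty_like(psd)
    for t in range(psd.shape[0]):              # short observation window
        noise[t] = psd[max(0, t - win):t + 1].min(axis=0)
    return np.maximum(1.0 - gamma * noise / np.maximum(psd, 1e-12), g_min)

# Multi-stage filtering as described in the text (number of stages assumed):
X = np.random.randn(200, 513) + 1j * np.random.randn(200, 513)
for _ in range(3):
    X = wiener_gain(X) * X
\end{verbatim}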
\section{Introduction and statement of a main result}\label{2sec1} Let us denote the family of all meromorphic functions $f$ with no poles in the unit disk $\mathbb{D}:=\{z \in \mathbb{C}:|z|<1\}$ of the form \begin{equation}\label{deq1} f(z)=z+a_2z^2+a_3z^3+\cdots \end{equation} by $\mathcal{A}$. Clearly, functions in $\mathcal{A}$ are analytic in $\mathbb{D}$ and the set of all univalent functions $f \in \mathcal{A}$ is denoted by $\mathcal{S}$. Functions in $\mathcal{S}$ are of interest because they appear in the Riemann mapping theorem and several other situation in many different contexts. For background knowledge on these settings we refer to the standard books \cite{AvWir-09, Dur83, Gol46, Goo83, Mil77}. One of the popular necessary conditions for a function $f$ of the form \eqref{deq1} to be in $\mathcal{S}$ is the sharp inequality $|a_n|\leq n$ for $n\geq 2$, which was first conjectured by Bieberbach in 1916 and proved by de Branges in 1985 (\cite{DeB1}). On the other hand, the problem of estimating sharp bound for successive coefficients, namely, $\big | |a_{n+1}|-|a_n| \big | $, is also an interesting necessary condition for a function to be in $\mathcal{S}$. This problem was first studied by Goluzin \cite{Gol46} with an idea to solve the Bieberbach conjecture. Several results are known in this direction. For example, Hamilton \cite{Ham80} proved that $\displaystyle \overline{\lim}_{n\rightarrow \infty}\big | |a_{n+1}|-|a_n| \big | \leq 1$. Prior to this paper, Hayman \cite{Hay63} proved in 1963 that \begin{equation}\label{deq5} \big | |a_{n+1}|-|a_n| \big | \leq A, \quad n=1,2,3,\dots, \end{equation} where $A\geq 1$ is an absolute constant, for functions $f$ in $\mathcal{S}$ of the form \eqref{deq1}. Milin \cite{Mil68,Mil77} found a simpler approach, which led to the bound $A\leq 9$ and Ilina \cite{Ili68} improved this to $A\leq 4.26$. It is still an open problem to find the minimal value of $A$ which works for all $f \in \mathcal{S}$, however, the best known bound as of now is $3.61$ which is due to Grinspan \cite{Gri76} (see also \cite{Mil77}). The fact that $A$ in \eqref{deq5} cannot be replaced by $1$ may be seen from the work of \cite{SS43}. On the other hand, sharp bound is known only for $n=2$ (see \cite[Theorem~3.11]{Dur83}), namely, $$ -1\leq|a_3|-|a_2|\leq 1.029\ldots. $$ Since Schaeffer and Spencer \cite{SS43} showed that for each $n \geq 2$ there corresponds an odd function $h(z)=z+a_3z^3+\cdots$ in $\mathcal{S}$ with all of its coefficients real such that $|a_{2n+1}(h)|>1$, it is also clear that the constant $A$ in \eqref{deq5} must be greater than $1$ for odd functions in the class $\mathcal{S}$. Note that for the Koebe function $k(z)=z/(1-z)^2$ and its rotation $e^{-i\theta}k(e^{i\theta}z)$, we have $\big | |a_{n+1}|-|a_n| \big | =1$ for $n\geq 1$. Denote by ${\mathcal S}^*$, the class $\mathcal S$ of functions $f$ such that $f(\mathbb{D})$ is starlike with respect to the origin. Concerning the class ${\mathcal S}^*$, Leung \cite{Leu78} (see also \cite{LS17}) in 1978 has proved that $A=1$ for starlike functions that was first conjectured by Pommerenke in \cite{POM71}. More precisely, we have \begin{Thm}\label{ThA} {\rm \cite{Leu78}} For every $f\in \mathcal{S}^*$ given by \eqref{deq1}, we have $$\big | |a_{n+1}|-|a_n| \big | \leq 1, \quad n=1,2,3,\ldots. $$ Equality occurs for fixed $n$ only for the function $$ \frac{z}{(1-\gamma z)(1-\zeta z)} $$ for some $\gamma$ and $\zeta$ with $|\gamma|=|\zeta|=1$. 
\end{Thm} We remark that, as an application of the triangle inequality, Theorem \Ref{ThA} leads to $|a_n|\leq n$ for $n\geq 2$, which is the well-known coefficient inequality for starlike functions. This is one of the reasons for studying the successive coefficients problem in univalent function theory. From the above discussion, we understand the importance of finding the minimal value of $A$ for functions to be in $\mathcal{S}$. Later, the problem of finding the minimal value of $A$ was considered for certain other subfamilies of univalent functions such as convex, close-to-convex, and spirallike functions. Among other things, Hamilton in \cite{Ham80} has shown some bounds for successive coefficients for spirallike functions and for the class of starlike functions of non-positive order. For convex functions, recently Li and Sugawa \cite{LS17} obtained the sharp upper bound which is $|a_{n+1}|-|a_n|\leq1/(n+1)$ for $n \geq 2$, and for $n=2,3$ sharp lower bounds are $1/2$ and $1/3$, respectively. For $n\geq 4$, it is still an open problem to find the best lower bound for convex functions. This information clearly shows the level of difficulty in determining bounds for the successive coefficients problem. Our objective in this paper is to obtain results related to successive coefficients for starlike functions of order $\alpha$, convex functions of order $\alpha$, spirallike functions and functions in the close-to-convex family. To state our first result we need to introduce the following definitions: The family $\mathcal{S}_\gamma (\alpha )$ of $\gamma$-spirallike functions of order $\alpha$ is defined by $$ \mathcal{S}_\gamma (\alpha) = \left \{f\in {\mathcal A}: \, {\rm Re} \left ( e^{-i\gamma}\frac{zf'(z)}{f(z)}\right )>\alpha \cos \gamma\, \mbox{ for } z\in \mathbb{D}\right\}, $$ where $ \alpha \in [0,1)$ and $\gamma\in (-\pi/2, \pi/2)$. Each function in $\mathcal{S}_\gamma(\alpha )$ is univalent in $\mathbb{D}$ (see \cite{Lib67}). Clearly, $\mathcal{S}_\gamma (\alpha )\subset \mathcal{S}_\gamma (0)\subset \mathcal{S}$ whenever $0\leq \alpha <1$. Functions in $\mathcal{S}_\gamma(0)$ are called \textit{$\gamma$-spirallike}, but they do not necessarily belong to the starlike family $\mathcal{S}^*$. The class $\mathcal{S}_\gamma (0)$ was introduced by ${\rm \check{S}}$pa${\rm\check{c}}$ek \cite{Spacek-33} (see also \cite{Dur83}). Moreover, $\mathcal{S}_0 (\alpha)=:\mathcal{S}^*{(\alpha)}$ is the usual class of starlike functions of order $\alpha$, and $\mathcal{S}^*(0)=\mathcal{S}^*$. The class $\mathcal{S}^*{(\alpha)}$ is meaningful even if $\alpha <0$, although univalence may be lost in this situation. A function $f \in \mathcal{A}$ is called convex of order $\alpha$, $\alpha \in [0,1)$, denoted by $f\in\mathcal{C}(\alpha)$, if and only if $zf'(z)$ belongs to $\mathcal{S}^*{(\alpha)}$; i.e. \begin{equation}\label{deq4} {\rm Re}\left (1+\frac{zf''(z)}{f'(z)}\right)>\alpha \quad \mbox{ for $z\in{\mathbb D}$}. \end{equation} If $\alpha=0,$ the inequality \eqref{deq4} is equivalent to the definition of a convex function, i.e. $f$ maps $\mathbb{D}$ onto a convex domain. We set $\mathcal{C}(0)=\mathcal{C}$. It is well-known that $\mathcal C$ is a proper subset of $\mathcal{S}^*{(1/2)}$. We now state our first result, which shows that Theorem \Ref{ThA} continues to hold for $\gamma$-spirallike functions. As a generalization and extension of Leung's result, we prove the following theorem, whose proof will be presented in Section \ref{2sec4}.
\begin{theorem}\label{2thm1} For every $f \in \mathcal{S}_\gamma (\alpha )$ of the form \eqref{deq1}, $$ \big | |a_{n+1}|-|a_n| \big | \leq \exp(-M\alpha \cos \gamma ) $$ for some absolute constant $M>0$ and \mbox{for $n \ge 2$}. \end{theorem} Note that for $\alpha =0$, the above theorem extends the result of Leung \cite{Leu78} from starlike to $\gamma$-spirallike functions and hence Theorem \ref{2thm1} contains the result of Hamilton \cite{Ham80}. For ready reference, we recall it here. However, in this paper, we get his result as a consequence of a general result with an alternate proof. \begin{corollary}\label{2thm2} Let $f \in \mathcal{S}_\gamma (0)$ for some $|\gamma|<\pi/2$ be of the form \eqref{deq1}. Then $$\big | |a_{n+1}|-|a_n| \big | \leq 1 ~\mbox{ for $n \ge 2$}. $$ \end{corollary} \begin{remark} In Theorem \ref{2thm4}, we see that Theorem~\Ref{ThA} and Corollary~\ref{2thm2} continue to hold for functions that are not necessarily starlike but are close-to-convex. At this point it is worth pointing out that there are functions that are $\gamma$-spirallike but not close-to-convex. It is equally true that there exist close-to-convex functions that are not $\gamma$-spirallike. Theorem \ref{2thm4} is supplementary to this reasoning. \end{remark} The paper is organized as follows. Section \ref{2sec2} deals with definitions of classes of functions and statements of main results. In Section \ref{2sec3}, we state and prove a lemma which will be used in the proof of our main results in Section \ref{2sec4}. \section{Definitions and further results}\label{2sec2} We consider another family of functions that includes the class of convex functions as a proper subfamily. For $-\pi/2<\gamma<\pi/2$, we say that $f\in {\mathcal C}_\gamma (\alpha) $ provided $f\in {\mathcal A}$ is locally univalent in $\mathbb{D}$ and $zf'(z)$ belongs to $\mathcal{S}_\gamma (\alpha)$, i.e. \begin{equation}\label{pv-hire2-eq1} {\rm Re } \left \{ e^{-i\gamma }\left ( 1+\frac{zf''(z)}{f'(z)}\right )\right \}>\alpha \cos \gamma, \quad z\in\mathbb{D}. \end{equation} We may set ${\mathcal C}_\gamma (0) =:{\mathcal C}_\gamma$ and observe that the class ${\mathcal C}_0(\alpha)=:{\mathcal C}(\alpha)$ consists of the normalized convex functions of order $\alpha$. For general values of $\gamma $ $(|\gamma|<\pi/2)$, a function in ${\mathcal C}_\gamma (0)$ need not be univalent in $\mathbb{D}$. For example, the function $f(z)=i(1-z)^i-i$ is known to belong to ${\mathcal C}_{\pi/4}\backslash {\mathcal S}$. Robertson \cite{Robertson-69} showed that $f\in\mathcal{C}_{\gamma}$ is univalent if $0<\cos \gamma\leq 0.2315\cdots$. Finally, Pfaltzgraff \cite{Pfaltzgraff} has shown that $f\in\mathcal{C}_{\gamma}$ is univalent whenever $0<\cos \gamma\leq 1/2$. This settles the improvement of the range of $\gamma$ for which $f\in\mathcal{C}_{\gamma}$ is univalent. On the other hand, in \cite{SinghChic-77} it was also shown that functions in ${\mathcal C}_\gamma$ which satisfy $f''(0)=0$ are univalent for all real values of $\gamma$ with $|\gamma|<\pi /2$. For a general reference about these special classes we refer to \cite{Goo83}.
\begin{Thm} \label{ThB} {\rm \cite{LS17}} For every $f \in {\mathcal C}:={\mathcal C}(0)$ of the form \eqref{deq1}, the following inequality holds $$ |a_{n+1}|-|a_n| \leq \frac{1}{n+1} $$ for $n\geq 2$, and the extremal function is given by $$ L_{\phi}(z)=\cfrac{1}{e^{i\phi}-e^{-i\phi}} ~\log\left (\frac{1-e^{-i\phi}z}{1-e^{i\phi}z} \right ) $$ for $\phi=\pi/n$, where a principal branch of logarithm is chosen. \end{Thm} A straightforward application of Theorem \ref{2thm1} yields the following generalization of Theorem \Ref{ThB} for convex functions of order $\alpha$ and also for locally univalent functions that are not necessarily univalent in the unit disk $\mathbb D$. \begin{corollary}\label{2thm1:coro1} Suppose that $f\in {\mathcal C}_\gamma (\alpha)$ for some $\alpha \in [0,1)$ and $-\pi/2<\gamma<\pi/2$. Then we have $$|a_{n+1}|-|a_n|\leq\cfrac{\exp(-M\alpha \cos \gamma)}{n+1} $$ for some absolute constant $M>0$. In particular, we have \begin{enumerate} \item [(1)] For $f\in {\mathcal C}_\gamma (0)$, $$|a_{n+1}|-|a_n|\leq \frac{1}{n+1}. $$ \item [(2)] For $f\in {\mathcal C}(\alpha)$ we have $$ |a_{n+1}|-|a_n|\leq\cfrac{\exp(-M\alpha)}{n+1} $$ for some absolute constant $M>0$. \end{enumerate} \end{corollary} \begin{proof} By the classical Alexander theorem, $f(z)=z+\sum_{n=2}^{\infty} a_n z^n$ belongs to ${\mathcal C}_\gamma (\alpha)$ if and only if $zf'(z)=z+\sum_{n=2}^{\infty} b_n z^n$ is ${\mathcal S}_\gamma (\alpha)$ and clearly, $b_n=n a_n.$ Thus, by Theorem \ref{2thm1}, we have $$ (n+1)|a_{n+1}|-n|a_n|=|b_{n+1}|-|b_n|\leq \exp(-M\alpha \cos \gamma). $$ This gives, $$ |a_{n+1}|-|a_n| \leq |a_{n+1}|-\cfrac{n}{n+1}|a_n| \leq \cfrac{\exp(-M\alpha \cos \gamma)}{n+1}\,. $$ The proof of the corollary is complete. \end{proof} We would like to remark that Hamilton generalized Leung's result to the case of starlike functions of non-positive order and proved the following: \begin{Thm} \label{ThC} {\rm \cite{Ham80}} For a function $f(z) \in {\mathcal S}^*(\alpha )$ for some $\alpha \leq 0,$ $$ \big | |a_{n+1}|-|a_n| \big | \leq \cfrac{\Gamma(1-2\alpha +n)}{\Gamma(1-2\alpha )\Gamma(n+1)}\,. $$ Equality holds for the function $f(z)=z(1-z)^{2(\alpha -1)}$. \end{Thm} Let $f\in\mathcal{A}$ be locally univalent. Then, according to Kaplan's theorem, it follows that $f$ is {\em close-to-convex} if and only if for each $r~(0<r<1)$ and for each pair of real numbers $\theta_1$ and $\theta_2$ with $\theta_1<\theta_2$, $$ \int_{\theta_1}^{\theta_2} {\rm Re}\left(1+\frac{zf''(z)}{f'(z)}\right)\,d\theta>-\pi,\quad z=re^{i\theta}. $$ If a locally univalent analytic function $f$ defined in $\mathbb{D}$ satisfies $$ {\rm Re}\left(1+\frac{zf''(z)}{f'(z)}\right)>-\frac{1}{2} ~\mbox{ for $z \in \mathbb{D}$}, $$ then by the Kaplan characterization it follows easily that $f$ is close-to-convex in $\mathbb{D}$, and hence $f$ is univalent in $\mathbb{D}$. This generates the following subclass of the class of close-to-convex (univalent) functions: $$ \mathcal{C} (-1/2):=\left\{f\in \mathcal{A}:\,{\rm Re}\left (1+ \frac{zf''(z)}{f'(z)}\right )>-\frac{1}{2} ~\mbox{ for $z \in \mathbb{D}$}\right\}. $$ This class of functions is also studied recently by the authors in \cite{AS17}, and others in different contexts; for instance see \cite{LP17,MYLP14,PSY14} and references therein. Functions in $\mathcal{C} (-1/2)$ are not necessarily starlike but is convex in some direction as the function \begin{equation}\label{2condirection} f(z)=\cfrac{z-(z^2/2)}{(1-z)^2} \end{equation} shows. 
Note that $$ {\rm Re}\left (1+ \frac{zf''(z)}{f'(z)}\right ) ={\rm Re} \left(\cfrac{1+2z}{1-z}\right)>-\frac{1}{2} ~\mbox{ for $z \in \mathbb{D}$} $$ and thus $f\in \mathcal{C} (-1/2)$, but not starlike in $\mathbb{D}$. \begin{theorem}\label{2thm4} Let $f \in \mathcal{C} (-1/2)$. Then $$ |a_{n+1}|-|a_n|\leq1. $$ \end{theorem} The following result is an immediate consequence of Theorem \ref{2thm4} which solves the Robertson conjecture problem for the class $\mathcal{C}(-1/2)$. It is worth pointing out that in 1966 Robertson \cite{Rob66} conjectured that the Bieberbach Conjecture could be strengthened to $$\big | n|a_n|-m|a_m| \big | \le \big|n^2-m^2\big| \quad \mbox{ for all $m,n\ge 2$}, $$ however, two years latter Jenkins \cite{Jen68} showed that this inequality fails in the class $\mathcal{S}$. \begin{theorem}\label{2thm5} Let $f \in \mathcal{C}(-1/2)$. Then for $n>m$ we have $$ \big | n|a_n|-m|a_m| \big | \leq \frac{(n^2-m^2)+(n-m)}{2}=\frac{(n-m)(n+m+1)}{2}. $$ Equality holds for $f(z)=(z-(z^2/2))/(1-z)^2$. \end{theorem} \section{Preliminary result}\label{2sec3} The following lemma plays a crucial role in the proof of our main results. \begin{lemma}\label{2lemma1} Let $\varphi(z)=1+\sum_{n=1}^{\infty} c_nz^n$ be analytic in $\mathbb{D}$ such that ${\rm Re}\,\varphi(z)>\alpha$ in $\mathbb{D}$ for some $\alpha<1$. Suppose that $\psi(z)=e^{i\gamma}\sum_{n=1}^\infty \lambda_n c_n z^n$ is analytic in $\mathbb{D}$, where $\lambda_n \geq 0$ and ${\rm Re}\,\psi(z) \le M$ for some $M>0$. Then we have the inequality $$ \cos\gamma\sum_{n=1}^\infty \lambda_n |c_n|^2 \leq 2M(1-\alpha). $$ \end{lemma} \begin{proof} Let us first prove the result for $\alpha=0$. Consider the identity $$4({\rm Re}\,\varphi)({\rm Re}\,\psi)=(\varphi+\overline{\varphi})(\psi+\overline{\psi}) =(\varphi\psi+\varphi\overline{\psi})+\overline{(\varphi\psi+\varphi\overline{\psi})} $$ so that \begin{equation}\label{lem1-eq1} 4\int_{|z|=r}({\rm Re}\,\varphi)({\rm Re}\,\psi)\,d\theta = 2 {\rm Re}\left (\int_{|z|=r}\varphi(z)\overline{\psi(z)}\,d\theta \right ), \end{equation} since (with $z=re^{i\theta}$) \begin{equation}\label{lem1-eq2} \int_0^{2\pi}\varphi(z)\psi(z)\,d\theta =\int_{|z|=r}\varphi(z)\psi(z)\,\frac{dz}{iz}=0, \end{equation} by the Cauchy integral formula and the fact that $\psi(0)=0$. Using the power series representation of $\varphi(z)$ and $\psi(z)$, it follows that (since $\overline{z}=r^2/z$ on $|z|=r$) \begin{eqnarray}\label{lem1-eq3} \int_{|z|=r}\varphi(z)\overline{\psi(z)}\,d\theta & = &e^{-i\gamma}\int_{|z|=r} \left [1+\sum_{n=1}^\infty c_nz^n\right]\left[\sum_{n=1}^\infty\overline{c_n}\lambda_n\frac{r^{2n}}{z^n}\right]\frac{dz}{iz} \nonumber\\ &=&2\pi e^{-i\gamma}\sum_{n=1}^\infty \lambda_n |c_n|^2 r^{2n}. \end{eqnarray} By \eqref{lem1-eq2}, \eqref{lem1-eq3} and the assumption that ${\rm Re}\,\psi(z) \le M$ for some $M>0$, the identity \eqref{lem1-eq1} reduces to $$ 4\pi \cos \gamma\sum_{n=1}^\infty \lambda_n |c_n|^2 r^{2n}=4\int_0^{2\pi} ({\rm Re}\,\varphi(z))({\rm Re}\,\psi(z))\,d\theta\le 4M \int_0^{2\pi} {\rm Re}\,\varphi(z)\,d\theta=8M\pi, $$ where we have used the fact that \begin{align*} \frac{1}{2\pi}\int_0^{2\pi}{\rm Re}\,\varphi(z)\,d\theta & = \frac{1}{2\pi}\int_0^{2\pi} \frac{\varphi(z)+\overline{\varphi(z)}}{2}\,d\theta\\ & = \frac{1}{4\pi}\left[\int_{|z|=r}\varphi(z)\,\frac{dz}{iz}\,+\,\overline{\int_{|z|=r}\varphi(z)\,\frac{dz}{iz}}\,\right]\\ & = \frac{1}{4\pi}(2\pi+2\pi)=1. \end{align*} The desired result for the case $\alpha=0$ follows by letting $r\to 1^{-}$ in the last inequality. 
Finally, for the general case, we first observe that ${\rm Re}\,\Phi(z)>0$, where $$ \Phi(z)=\frac{\varphi(z)-\alpha}{1-\alpha}=1+\sum_{n=1}^\infty d_nz^n, \quad d_n=\frac{c_n}{1-\alpha}. $$ Also, the given condition on $\psi$ gives ${\rm Re}\,\Psi(z)\le \frac{M}{1-\alpha},$ where $$ \Psi(z)=e^{i\gamma}\sum_{n=1}^\infty\lambda_n d_n z^n=\frac{1}{1-\alpha}\left ( e^{i\gamma}\sum_{n=1}^\infty \lambda_nc_nz^n\right ) = \frac{1}{1-\alpha}\psi (z). $$ Applying the previous arguments for the pair $(\Phi(z),\Psi(z))$, one obtains that $$ \cos \gamma \sum_{n=1}^\infty \lambda_n |d_n|^2=\frac{\cos \gamma }{(1-\alpha)^2}\sum_{n=1}^\infty \lambda_n|c_n|^2\le \frac{2M}{1-\alpha} $$ so that $\cos \gamma \sum_{n=1}^\infty \lambda_n|c_n|^2\le 2M(1-\alpha)$, as desired. \end{proof} \begin{remark} We remark that Lemma \ref{2lemma1} for $\gamma =0$ is obtained by MacGregor\cite{MacGre69} (see also \cite{Leu78} and \cite[p.178, Lemma]{Dur83}). \end{remark} \section{Proof of the main results}\label{2sec4} We begin with the proof of Theorem \ref{2thm1} \subsection{Proof of Theorem \ref{2thm1}} Let $f\in\mathcal{S}_\gamma (\alpha)$. Then by the definition, we may consider $\varphi$ by $$ \frac{1}{\cos \gamma }\left [e^{-i\gamma} \cfrac{zf'(z)}{f(z)} +i\sin \gamma \right ] = \varphi (z) $$ so that $$e^{-i\gamma} \left (\cfrac{zf'(z)}{f(z)} -1 \right )= \cos\gamma\, (\varphi (z) -1), $$ where ${\rm Re}\,\{\varphi(z)\}>\alpha $ and $\varphi(z)=1+\sum_{n=1}^\infty c_nz^n$ is analytic in $\mathbb D$. We may rewrite the last equation as \begin{equation}\label{2thm1:eq0} \cfrac{f'(z)}{f(z)}-\cfrac{1}{z}= e^{i\gamma} \cos\gamma\, \sum_{n=1}^\infty c_nz^{n-1} \end{equation} which by simple integration gives \begin{equation}\label{2thm1:eq1} \log\left (\cfrac{f(z)}{z}\right )= e^{i\gamma} \cos\gamma\, \sum_{n=1}^\infty \cfrac{c_nz^n}{n}\,, \end{equation} where we use the principal value of the logarithm such that $\log 1$=0. By the Taylor series expansion of $\log (1-\xi z)$ and \eqref{2thm1:eq1}, we get \begin{align}\label{2thm1:eq2} \log {(1-\xi z)\cfrac{f(z)}{z}} & =\sum_{n=1}^\infty \cfrac{C_n-\xi^n}{n}z^n =\sum_{n=1}^\infty \alpha_n z^n, \end{align} where $C_n=e^{i\gamma} \cos\gamma\,c_n$ and $$ \alpha_n=\frac{C_n-\xi^n}{n} =\frac{e^{i\gamma} \cos\gamma\,c_n-\xi^n}{n}. $$ Also, for $|\xi|=1$, we have \begin{align}\label{2thm1:eq3} (1-\xi z)\cfrac{f(z)}{z} &=\sum_{n=0}^\infty \beta_n z^n, \quad \beta_n=a_{n+1}-\xi a_n. \end{align} From \eqref{2thm1:eq2} and \eqref{2thm1:eq3}, it follows that $$ \exp \Big(\sum_{n=1}^\infty \alpha_n z^n \Big)=\sum_{n=0}^\infty \beta_n z^n, ~\beta_0=1. $$ Then, by the third Lebedev-Milin inequality (see \cite[p. 143]{Dur83}), we have $$|\beta _n|^2\leq \exp\Bigg\{\sum_{k=1}^n\Bigg( k|\alpha _k|^2-\cfrac{1}{k}\Bigg) \Bigg\}, $$ or equivalently \begin{equation}\label{2thm1:eq4} |a_{n+1}-\xi a_n|^2\leq \exp\Bigg\{\sum_{k=1}^n\Bigg(\cfrac{|C_k -\xi^k |^2}{k}-\cfrac{1}{k}\Bigg) \Bigg\}\,. \end{equation} Now we consider $$ \psi(z)=e^{i\gamma}\sum_{k=1}^n\cfrac{ c_k {z}^k }{k}, $$ and let $M$ be the maximum of ${\rm Re}\{\psi(z)\}$ on $|z|=1.$ Applying Lemma \ref{2lemma1} with $\lambda_k=1/k$ for $1 \leq k \leq n$ and $\lambda_k=0$ for $k>n$, we obtain \begin{align*} \sum_{k=1}^n\Bigg(\cfrac{|C_k -\xi^k |^2}{k}-\cfrac{1}{k}\Bigg)&=\cos^2\gamma\sum_{k=1}^n \cfrac{|c_k |^2}{k}-2\cos\gamma\, \sum_{k=1}^n\cfrac{{\rm Re}( e^{i\gamma} c_k \overline{\xi}^k) }{k}\\ & \leq 2 M(1-\alpha)\cos\gamma -2\cos\gamma \, {\rm Re}\{\psi(\overline{\xi})\}. 
\end{align*} Choosing $\xi$ (say $\xi_0$) so that ${\rm Re}\{\psi(\overline{\xi_0})\}=M,$ we see that \begin{align*} \sum_{k=1}^n\Bigg(\cfrac{|C_k -\xi_0^k |^2}{k}-\cfrac{1}{k}\Bigg)&\leq 2 M(1-\alpha)\cos\gamma -2M\cos\gamma =-2M\alpha \cos\gamma. \end{align*} Hence from \eqref{2thm1:eq4}, $|a_{n+1}-\xi_0 a_n|\leq \exp(-M\alpha \cos\gamma)$ for some $\xi_0$ with $|\xi_0|=1.$ Since $$ \big | |a_{n+1}|-|a_n| \big | \leq |a_{n+1}-\xi_0 a_n|\leq \exp(-M\alpha \cos\gamma), $$ the proof of our theorem is complete. \hfill{$\Box$} \vspace{6pt} Here we provide one example that associates to Theorem \ref{2thm1}. \begin{example} Consider the function $f(z):=f_{\gamma, \alpha}(z)=z/(1-z)^{\beta}$, where $\beta =2(1-\alpha)\cos \gamma$. It is easy to check that $f\in\mathcal{S}_\gamma (\alpha)$, $$ f(z)=z+\sum_{n=2}^\infty \frac{\Gamma(n+\beta)}{\Gamma(n+1)\Gamma(\beta )}z^n ~\mbox{ and }~ e^{-i\gamma}\cfrac{zf'(z)}{f(z)}=e^{-i\gamma}+ 2(1-\alpha)\cos\gamma \cfrac{z}{1-z}. $$ Again consider the function $$ \varphi(z)=e^{-i\gamma}\cfrac{zf'(z)}{f(z)}= e^{-i\gamma}+2(1-\alpha)\cos\gamma \,\sum_{n=1}^\infty z^n. $$ It is clear that ${\rm Re}\,(\varphi(z))>\alpha \cos\gamma$. Now, if we adopt the proof of Lemma \ref{2lemma1} and Theorem \ref{2thm1} by assuming $\psi(z)= 2(1-\alpha) \sum_{n=1}^\infty z^n$ and $\gamma =0$, then for $f\in\mathcal{S}^* (\alpha)$ we obtain $$\big | |a_{n+1}|-|a_n| \big | \leq \exp(-\alpha M),\quad M=2(1-\alpha)(\log n+1). $$ \end{example} \subsection{Proof of Theorem \ref{2thm4}} Let $f\in\mathcal{C} (-1/2)$. Then the function $g(z)=\sum_{n=1}^{\infty} b_nz^n=zf'(z)$, where $b_n=na_n$, belongs to $\mathcal{S}^*(-1/2)$. From Theorem \Ref{ThC}, we obtain that \begin{equation}\label{2thm-eq4} \big | |b_{n+1}|-|b_n| \big | =\big | (n+1)|a_{n+1}|-n|a_n| \big | =(n+1)\left ||a_{n+1}|-\cfrac{n}{n+1}|a_n|\right | \leq n+1 \end{equation} which implies that $$|a_{n+1}|-|a_n|\leq\left | |a_{n+1}|-\cfrac{n}{n+1}|a_n| \right | \leq 1, $$ and the proof is complete. \hfill{$\Box$} \begin{example} Consider the function $f$ defined by \eqref{2condirection}, namely, $$ f(z)=\cfrac{z-z^2/2}{(1-z)^2} =\sum_{n=1}^{\infty} \frac{n+1}{2}z^n. $$ It is easy to check that $f$ satisfies the hypothesis of Theorem \ref{2thm4}. For this function, we have $$ |a_{n+1}|-|a_n|=\cfrac{n+2}{2}-\cfrac{n+1}{2}=\frac{1}{2}<1. $$ \end{example} \begin{example} Consider the function $f$ defined by $$ f(z)=\cfrac{z}{\sqrt{1-z^2}} =\sum_{n=1}^{\infty} \cfrac{\Gamma(n+1/2)}{\pi \Gamma(n+1)}\,z^{2n+1}. $$ A simple computation shows that $f \in \mathcal{C} (-1/2)$ and for this function, we see that $$ |a_{n+1}|-|a_n|=\cfrac{\Gamma(n+1/2)}{\pi \Gamma(n+1)} <1, $$ so the result is compatible with Theorem \ref{2thm4}. \end{example} \subsection{Proof of Theorem \ref{2thm5}} Let $f\in\mathcal{C} (-1/2)$. Then we have $$ \big | (k+1)|a_{k+1}|-k|a_k| \big | \leq k+1 \mbox{ for $k\geq 1$}, $$ by \eqref{2thm-eq4}. Here $a_1=1.$ Using the triangle inequality, we deduce that for $n \geq m$ \begin{align*} \big | |n|a_n|-m|a_m| \big | &= \left |\sum_{k=m}^{n-1}(k+1)|a_{k+1}|-k|a_k| \right |\\ &\leq \sum_{k=m}^{n-1}\big | (k+1)|a_{k+1}|-k|a_k| \big | \\ &\le \sum_{k=m}^{n-1}(k+1)=\cfrac{(n^2-m^2)+(n-m)}{2}. \end{align*} Clearly the equality holds for $f \in \mathcal{C} (-1/2)$ defined by \eqref{2condirection} in which the coefficient of $z^n$ is $(n+1)/2.$ \hfill{$\Box$} \begin{remark} It would be interesting to see an improved version of our results in which the upper bounds are depending upon sharp absolute constant $M$. 
\end{remark} \section*{Acknowledgments} The authors thank the referee for many useful comments. The work of the second author is supported by Mathematical Research Impact Centric Support (MATRICS) of DST, India (MTR/2017/000367).
\section{Calculating $\fisherInfMark_{\chParMark}$}\label{app:chFIM} To calculate the elements of $\fisherInfMark_{\chParMark}$, we derive the derivatives $\partial [\bm{M}]_{n,t}/\partial \parameterMark_{\chParMark}$ below. For the delays, we obtain \begin{align} \frac{\partial \bm{M}}{\partial \tau_{\mathrm{b}}} &= g_{\mathrm{b}} \bm{F} \bm{E}(v_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left[\dot{\bm{D}}(\tau_{\mathrm{b}}) \odot \ccbig_{\wideband} (v_{\mathrm{b}})\right]\\ \frac{\partial \bm{M}}{\partial \tau_{\mathrm{r}}} &= g_{\mathrm{r}} \bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\dot{\bm{D}}(\tau_{\mathrm{r}}) \odot \aabig_{\wideband} ( \bm{\phi}) \odot \ccbig_{\wideband} (v_{\mathrm{r}}) \right] \end{align} where \begin{align} \dot{\bm{D}}(\tau) &= \jmath 2\pi\Delta f [0, e^{\jmath 2\pi\Delta f \tau }, \dots,\nonumber\\ & \qquad\qquad (N-1) e^{\jmath 2 (N-1) \pi\Delta f \tau }]^\top \bm{1}_{L}^{\top}. \end{align} The derivatives with respect to the \ac{aod} are \begin{align} \frac{\partial \bm{M}}{\partial [\bm{\phi}]_{\mathrm{az}}} &= g_{\mathrm{r}} \bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{r}}) \odot \aabig_{\wideband} ^{\mathrm{az}}( \bm{\phi}) \odot \ccbig_{\wideband} (v_{\mathrm{r}}) \right]\\ \frac{\partial \bm{M}}{\partial [\bm{\phi}]_{\mathrm{el}}} &= g_{\mathrm{r}} \bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{r}}) \odot \aabig_{\wideband} ^{\mathrm{el}}( \bm{\phi}) \odot \ccbig_{\wideband} (v_{\mathrm{r}}) \right] \end{align} where for $\ell \in \{0,\dots,L-1\}$ and $n \in \{0,\dots,N-1\}$ \begin{align} [ \aabig_{\wideband} ^{\mathrm{az}}(\bm{\psi})]_{n,\ell} &= \bm{a}_{n}(\theta)^{\top} \mathrm{diag}(\bm{\gamma}_{\ell}) \bm{a}_{n}^{\mathrm{az}}(\phi)\\ [ \aabig_{\wideband} ^{\mathrm{el}}(\bm{\psi})]_{n,\ell} &= \bm{a}_{n}(\theta)^{\top} \mathrm{diag}(\bm{\gamma}_{\ell}) \bm{a}_{n}^{\mathrm{el}}(\phi) \end{align} and \begin{align} \bm{a}_{n}^{\mathrm{az}}(\bm{\psi}) &= \bm{a}_{n}(\bm{\psi}) \odot \left(\jmath(\frac{\partial\bm{k}_n(\bm{\psi})}{\partial [\bm{\psi}]_{\mathrm{az}}})^\top \bm{Q}\right)\\ \bm{a}_{n}^{\mathrm{el}}(\bm{\psi}) &= \bm{a}_{n}(\bm{\psi}) \odot \left(\jmath(\frac{\partial\bm{k}_n(\bm{\psi})}{\partial [\bm{\psi}]_{\mathrm{el}}})^\top \bm{Q}\right)\\ \frac{\partial\bm{k}_n(\bm{\psi})}{\partial [\bm{\psi}]_{\mathrm{az}}} &= \frac{2\pi}{\lambda_n}[-\sin([\bm{\psi}]_{\mathrm{el}})\sin([\bm{\psi}]_{\mathrm{az}}),\nonumber\\ &\qquad \qquad \qquad \sin([\bm{\psi}]_{\mathrm{el}})\cos([\bm{\psi}]_{\mathrm{az}}),0]^\top\\ \frac{\partial\bm{k}_n(\bm{\psi})}{\partial [\bm{\psi}]_{\mathrm{el}}} &= \frac{2\pi}{\lambda_n}[\cos([\bm{\psi}]_{\mathrm{el}})\cos([\bm{\psi}]_{\mathrm{az}}),\nonumber\\ &\qquad \quad \quad \cos([\bm{\psi}]_{\mathrm{el}})\sin([\bm{\psi}]_{\mathrm{az}}),-\sin([\bm{\psi}]_{\mathrm{el}})]^\top \end{align} For the radial velocities, we obtain \begin{align} \frac{\partial \bm{M}}{\partial v_{\mathrm{b}}} &= g_{\mathrm{b}} \bm{F} \dot{\bm{E}}(v_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{b}}) \odot \ccbig_{\wideband} (v_{\mathrm{b}}) \right]\nonumber\\ &+g_{\mathrm{b}} \bm{F} \bm{E}(v_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{b}}) \odot \dot{\ccbig}_{\wideband} (v_{\mathrm{b}}) \right]\\ \frac{\partial \bm{M}}{\partial v_{\mathrm{r}}} &= g_{\mathrm{r}} \bm{F} \dot{\bm{E}}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{r}}) \odot \aabig_{\wideband} ( \bm{\phi}) \odot \ccbig_{\wideband} (v_{\mathrm{r}}) \right]\nonumber\\
&+g_{\mathrm{r}} \bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{r}}) \odot \aabig_{\wideband} ( \bm{\phi}) \odot \dot{\ccbig}_{\wideband} (v_{\mathrm{r}}) \right] \end{align} where \begin{align} \label{eq_steer_doppler_derivative} [ \dot{\ccbig}_{\wideband} (v)]_{n,\ell} & = \frac{\jmath 2\ell \pi T_{\rm{sym}} }{\lambda_n} e^{\jmath 2\ell \pi T_{\rm{sym}} v/\lambda_n } \\ [\dot{\bm{E}}(v)]_{n,n} &= \frac{\jmath 2 \pi T_{\mathrm{o}} n }{N\lambda} \exp\left( \frac{\jmath 2 \pi T_{\mathrm{o}} n v}{N\lambda} \right) \end{align} Finally, the derivatives regarding the channel gains can be calculated as \begin{align} \frac{\partial \bm{M}}{\partial \Re(g_{\mathrm{b}})} &= \bm{F} \bm{E}(v_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left(\bm{D}(\tau_{\mathrm{b}}) \odot \ccbig_{\wideband} (v_{\mathrm{b}})\right)\\ \frac{\partial \bm{M}}{\partial \Im(g_{\mathrm{b}})} &= \jmath\bm{F} \bm{E}(v_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left(\bm{D}(\tau_{\mathrm{b}}) \odot \ccbig_{\wideband} (v_{\mathrm{b}})\right)\\ \frac{\partial \bm{M}}{\partial \Re(g_{\mathrm{r}})} &= \bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{r}}) \odot \aabig_{\wideband} ( \bm{\phi}) \odot \ccbig_{\wideband} (v_{\mathrm{r}}) \right]\\ \frac{\partial \bm{M}}{\partial \Im(g_{\mathrm{r})}} &= \jmath\bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_{\mathrm{r}}) \odot \aabig_{\wideband} ( \bm{\phi}) \odot \ccbig_{\wideband} (v_{\mathrm{r}}) \right]. \end{align} \section{Calculating $\matInd{J}$}\label{app:jacob} In this section we calculate the derivatives to calculate the elements of the Jacobian matrix. We have \begin{align} \frac{\partial \tau_{\mathrm{b}}}{\partial \bm{p}} &= \frac{\bm{p}-\bm{p}_{\mathrm{b}}}{c\Vert\bm{p}-\bm{p}_{\mathrm{b}}\Vert}\\ \frac{\partial \tau_{\mathrm{r}}}{\partial \bm{p}} &= \frac{\bm{p}-\bm{p}_{\mathrm{r}}}{c\Vert\bm{p}-\bm{p}_{\mathrm{r}}\Vert}\\ [\frac{\partial \tau_{\mathrm{b}}}{\partial \Delta t}, \frac{\partial \tau_{\mathrm{r}}}{\partial \Delta t}] &=[1 , 1] \end{align} With regard to the \ac{aod}, we obtain \begin{align} \frac{\partial[\bm{\phi}]_{\mathrm{az}}}{\partial \bm{p}} &= \frac{-[\bm{s}_{\mathrm{r}}]_{2}\matInd{R}_{1,1:3}+[\bm{s}_{\mathrm{r}}]_{1}\matInd{R}_{2,1:3}}{ ([\bm{s}_{\mathrm{r}}]_{1})^2+([\bm{s}_{\mathrm{r}}]_{2})^2} \\ \frac{\partial[\bm{\phi}]_{\mathrm{el}}}{\partial \bm{p}} &= \frac{-\vecNorm{\bm{s}_{\mathrm{r}}}^2[\matInd{R}]_{3,1:3}+(\bm{p}-\vind{p}_{\rbuFont{r}})[\bm{s}_{\mathrm{r}}]_{3}}{\vecNorm{\bm{s}_{\mathrm{r}}}^2\sqrt{([\bm{s}_{\mathrm{r}}]_{1})^2+([\bm{s}_{\mathrm{r}}]_{2})^2}}. \end{align} Furthermore, the derivatives of the the channel gains ($\Re(g_{\mathrm{b}}), \Im(g_{\mathrm{b}}), \Re(g_{\mathrm{r}}), \Im(g_{\mathrm{r}})$) and radial velocities ($v_{\mathrm{b}}, v_{\mathrm{r}}$) are one with respect to themselves and zero otherwise. \end{document} \section{Introduction} Estimation of user location has become increasingly crucial in today's networking technology with applications in autonomous driving, navigation, data transmission, augmented reality, etc. \cite{bourdoux20206g}. Satellite localization systems such as the \ac{gps} have the downside that they do not function properly in indoor scenarios, urban canyons, or tunnels. As a complementary approach, cellular localization can be used, where the user state is estimated based on the radio signals interchanged between the \ac{bs} and the user. 
Provisioning of cellular localization was stirred by the governmental authorities demanding that the operators should provide the location of the \ac{ue} upon receiving emergency calls. In 4G wireless systems, the \ac{ue} location and clock bias are estimated by calculating \ac{tdoa} between the \ac{ue} and four synchronized \acp{bs} \cite{3gpp.36.855}. In 5G, the multi-antenna structure of BSs and \acp{ue} allowed networks to also use the angles of arrival and departure for localization, enabling positioning with one \ac{bs} under rich multipath conditions \cite{Shahmansoori18TWC}. In this work, we show that the next generation, 6G, can benefit from the new technological enablers, such as \acp{ris}, to estimate the \ac{ue} position, clock bias, and velocity, even for \ac{siso} wireless links. \acp{ris} are thin surfaces made of sub-wavelength unit cells, whose response to the impinging electromagnetic wave can be controlled \cite{chunhua_mag21}. Recently, a great deal of attention has been drawn to \acp{ris} as one of the foremost technological enablers of the next generation of wireless systems (see \cite{marco_smart} for an excellent literature review). \acp{ris} introduce a new paradigm in wireless systems since they enable the optimization of the channel to maximize the \ac{qos} \cite{2019Basar,joint_BS_RIS_BF_TCOM_2021_Poor,RIS_WCM_2021,RIS_EE_TWC_2019}. In a communication system, where the \ac{ris} response can be optimized to improve the \ac{snr} and the spectral efficiency at the \ac{ue} site, the main challenges pertain to \rev{path loss modeling \cite{RIS_PL_Model_TWC_2021}}, estimation of the propagation channels to/from the RIS elements \cite{swindlehurst2021channel,araujo_jstsp21,CE_IRS_TWC_2020,RIS_WCM_2021,cascadedCE_RIS_WCL_2020}, as well as the use of this estimate to employ optimized configuration of the RIS elements \rev{\cite{IRS_OFDM_TCOM_2020,JointActivePassiveRIS_TWC_2019}}. In radio localization, \acp{ris} can provide a strong and controllable \ac{nlos} signal path, as well as an extra location reference. Many works have studied the benefits of RISs in radio localization through deriving \ac{crb} and/or by designing estimation algorithms that use the reflected signal from the RIS to improve or enable \ac{ue} localization \cite{elzanaty2020reconfigurable,dardari_spawk,rahal2021ris,keykhosravi2021semi,zhang2020towards,habo_rss,sha_18,Yiming_ICC21,cramer_juan,haobo_2020,abu2020near,nearFieldRIS_LOSBlock_2022,LOS_NLOS_NearField_2021,RIS_loc_2021_TWC}. In \cite{elzanaty2020reconfigurable}, the \ac{crb} on the location and orientation of the \ac{ue} have been derived for a \ac{mimo} system equipped with an \ac{ris}, where considerable improvements in estimation accuracy have been observed because of the \ac{ris}. It has been shown that 3D localization is possible in an \ac{ris}-equipped SISO system \cite{keykhosravi2020siso,rahal2021ris}. Furthermore, in \cite{dardari_spawk,LOS_NLOS_NearField_2021}, SISO localization is performed with the help of a stripe-like \ac{ris} with blocked \ac{los} path even when the path from \ac{ris} to \ac{ue} is obstructed severely. Localization in the near-field of the \ac{ris} through analyzing the \ac{crb} has been studied in \cite{sha_18} for infinite phase resolution and in \cite{cramer_juan} for limited one. \rev{Moreover, in \cite{nearFieldRIS_LOSBlock_2022}, an uplink near-field localization algorithm is proposed for RIS-aided scenarios with \ac{los} blockage. 
To estimate and counteract such blockages, a joint beam training and positioning method is developed in \cite{RIS_loc_2021_TWC} in multi-RIS assisted mmWave communications.} Most of the aforementioned works consider quasi-static channels, where the movement of the \ac{ue} during pilot transmission is negligible. While the effect of \ac{ue} mobility \rev{has been unexplored} in RIS-based localization literature, a number of works consider \ac{ue} mobility for RIS-aided communication systems \cite{Matthiesen_continuous,basar2019reconfigurable,refractHighMob_TWC_2021,sun_wcl_doppler,RIS_MC_Doppler_TVT_2022,HST_WCL_2022,Doppler_RIS_Entropy_2022,PredictableDoppler_VTC_2021,DopplerMitigation_RIS_WCL_2022}. A continuous-time model for RIS-aided satellite communication has been derived in \cite{Matthiesen_continuous}, where the movement of the satellite has been taken into consideration in optimization of the RIS phase shifts. \rev{Similarly, \cite{PredictableDoppler_VTC_2021} investigates RIS phase shift design to simultaneously minimize the delay and Doppler spread and maximize the SNR in RIS-aided high-mobility vehicular communications under predictable \ac{ue} mobility.} In \cite{basar2019reconfigurable}, it has been shown that the multipath fading effect caused by \ac{ue} movement can be mitigated in an RIS-aided scenario. In \cite{refractHighMob_TWC_2021}, the authors presented a transmission protocol for channel estimation in a high-mobility scenario, where \rev{an intelligent \textit{refracting} surface} is mounted on the car. \rev{Following a similar approach, the study in \cite{DopplerMitigation_RIS_WCL_2022} proposes a two-stage transmission protocol and channel/Doppler estimation method in high-mobility RIS-aided scenarios, complemented by the design of RIS phase shifts to mitigate the RIS-induced Doppler effect.} Two channel estimation schemes for an RIS-aided communication system have been proposed in \cite{sun_wcl_doppler}, considering Doppler effects. \rev{Moreover, the study in \cite{RIS_MC_Doppler_TVT_2022} models doubly-selective high-mobility Rician channels in RIS-aided unmanned aerial vehicle (UAV) communications by including the Doppler effect, and deals with minimum mean squared error (MMSE) channel estimation and RIS phase shift optimization. Furthermore, a deep reinforcement learning-based method is proposed in \cite{HST_WCL_2022} to jointly design \ac{bs} beamforming and RIS phase shifts for RIS-assisted mmWave high-speed railway networks.} The spatial-wideband (WB) effect refers to the change of an array's response (spatial steering vector) due to the change in frequency within the signal bandwidth \cite{wang2018spatial},\rev{\cite{RIS_overview_2022}}. This can cause the beam-squint effect in far-field \cite{cai_squint16,hybrid_sac} and the misfocus effect in near-field \cite{infocus}. The spatial-WB effect has been studied for the case of massive \ac{mimo} (see e.g. \cite{wang2018spatial,cai_squint16,hybrid_sac}) and also recently for RISs \cite{dovelosintelligent,face_squint,chen2021beam}. In \cite{wang2018spatial}, the authors develop a spatial-WB channel model, and tailored a channel estimation algorithm based on it. In \cite{cai_squint16}, the effects of beam-squint have been analyzed and compensated for in designing analog codebooks. A channel estimation algorithm for a spatial-WB RIS-aided communication system has been proposed in \cite{face_squint}. 
Several RIS phase shift designs have been proposed in \cite{chen2021beam} to maximize information rate in the presence of the beam-squint effect. To the best of our knowledge, the combined contribution of \ac{ue} mobility and spatial-WB effect have not yet been studied in the context of RIS-localization. This paper extends our conference contribution in \cite{keykhosravi2020siso}, where it was shown that in a \ac{siso} system equipped with a single RIS, 3D \ac{ue} localization and synchronization is possible. In this paper, we define and study the problem of RIS-aided SISO localization under spatial-WB effects and user mobility. The \rev{main} contributions of this paper \rev{can be summarized} as follows. \begin{itemize} \item \rev{For the first time in the literature, we investigate the problem of single-snapshot RIS-aided SISO 3D localization and synchronization under \ac{ue} mobility and spatial-WB effects.} \item We develop a geometric channel model for \ac{ofdm} signal propagation under the far-field assumption, by \rev{explicitly} taking into account \ac{ue} mobility and spatial-WB effects. \rev{Unlike the studies on RIS-aided communications with \ac{ue} mobility \cite{Matthiesen_continuous,basar2019reconfigurable,refractHighMob_TWC_2021,sun_wcl_doppler,RIS_MC_Doppler_TVT_2022,HST_WCL_2022,Doppler_RIS_Entropy_2022,PredictableDoppler_VTC_2021,DopplerMitigation_RIS_WCL_2022}, the developed model formulates the \ac{los} (i.e., BS-to-UE) and \ac{nlos} (i.e., BS-to-\ac{ris}-to-UE) channels as a function of individual geometric parameters consisting of delays, Doppler shifts, and \acp{aod} in azimuth and elevation. In addition, unlike the existing literature on RIS-aided localization \cite{elzanaty2020reconfigurable,dardari_spawk,rahal2021ris,keykhosravi2021semi,zhang2020towards,habo_rss,sha_18,Yiming_ICC21,cramer_juan,haobo_2020,abu2020near,nearFieldRIS_LOSBlock_2022,LOS_NLOS_NearField_2021,RIS_loc_2021_TWC}, we incorporate Doppler shift into our model.} \item We design a low-complexity algorithm \rev{for joint localization and synchronization of \ac{ue}, accompanied by time-orthogonal RIS phase profile design to combat interpath interference. First, we estimate the channel gain, delay and Doppler of the \ac{los} path, and subtract its effect from the received signal. Based on the resulting \ac{los}-interference-eliminated signal, we then estimate the parameters of the \ac{nlos} path, involving the delay, Doppler and \ac{aod} from the RIS to \ac{ue}. In the final stage, 3D position and clock bias of the \ac{ue} are computed using the estimated geometric channel parameters. The proposed algorithm attains} the theoretical bounds at high SNRs when the spatial-WB effects are negligible. \item We study the influence of \ac{ue} mobility, spatial-WB effects, and the presence of scatterers on the estimation error through extensive simulation of the estimator and evaluation of the \ac{crb}, considering directional and random \ac{ris} phase profiles. \end{itemize} Our results suggest that in terms of fundamental bounds, neither \ac{ue} mobility nor spatial-WB effects influence the estimation accuracy. However, in terms of the accuracy of the estimator (designed based on the spatial-narrowband (NB) model), the spatial-WB effects reduce the position accuracy for large sizes of the RIS and large bandwidths when the angle between the direction of arrival or departure and the \ac{ris} normal is large. The performance of our estimator is not affected by the \ac{ue} speed. 
\vspace{.5cm} \paragraph*{Organization} The remainder of the paper is organized as follows. In Section\,\ref{sec:systemModChannelModel}, we present the system setup and derive the channel model in Section\,\ref{sec:extended-channel-models}. The \ac{ris} phase profile design is presented in Section\,\ref{sec:RisPhaseDesign}. The estimator is described in Section\,\ref{sec_estimator} through a number of separate algorithms. In Section\,\ref{sec:simulationResults}, we calculate the estimation errors through simulation and compare them with the \ac{crb} for an example of system parameters. Finally, Section\,\ref{sec:conclusion} concludes the paper. \vspace{.5cm} \paragraph*{Notation} We represent vectors by bold-face lowercase letters (e.g., $\bm{x}$) and matrices by bold-face uppercase ones (e.g., $\bm{X}$). The $n$th element of the vector $\bm{x}$ is shown by $[\bm{x}]_n$ and with $[\bm{X}]_{m,n}$ we indicate the element on the $m$th row and the $n$th column of matrix $\bm{X}$. Furthermore, $[\bm{X}]_{:,n}$ ($[\bm{X}]_{n,:}$) denote the $n$th column (row) of matrix $\bm{X}$. The subindex $m:n$ indicates all the elements between (and including) $m$ and $n$. The Kronecker product is shown by $\otimes$ and the Hadamard product by $\odot$. The real and imaginary parts of the complex number $x$ are shown by $\Re(x)$ and $\Im(x)$, respectively. The matrix vectorization operator is indicated by $\mathrm{vec}(\cdot)$. The vector $\bm{1}_L$ indicates the vector of length $L$, all of whose elements are one. \section{System and channel model}\label{sec:systemModChannelModel} \subsection{System setup}\label{sec:systemSetup} \begin{figure} \centering \begin{tikzpicture} \node (image) [anchor=south west]{\includegraphics[width=5cm]{Fig1a.png}}; \gettikzxy{(image.north east)}{\ix}{\iy}; \node at (1.1/5*\ix,1/3.5*\iy){\footnotesize BS}; \node at (3.8/5*\ix,3/3.5*\iy){\footnotesize RIS}; \node at (4.5/5*\ix,1.2/3.5*\iy){\footnotesize UE}; \newcommand\w{3} \node (image2) at (\ix,0) [anchor=south west]{\includegraphics[width=\w cm]{Fig1b.pdf}}; \gettikzxy{(image2.north east)}{\ixt}{\iyt}; \node at (\ix+.3/3*\w cm,.5/3*\iyt){\footnotesize$x$}; \node at (\ix+3/3*\w cm,.9/3*\iyt){\footnotesize$y$}; \node at (\ix+1.5/3*\w cm,3/3*\iyt){\footnotesize$z$}; \node at (\ix+1.6/3*\w cm,.7/3*\iyt){\footnotesize$\psi_{\mr{az}}$}; \node at (\ix+1.8/3*\w cm,1.5/3*\iyt){\footnotesize$\psi_{\mr{el}}$}; \node at (\ix+.5*\w cm,0/3.5*\iy){\footnotesize (b)}; \end{tikzpicture} \caption{(a): System setup, (b): Elevation and azimuth angles of a generic vector. } \label{fig:setup} \end{figure} We consider a wireless system with a single-antenna transmitter, one \ac{ris}, and a single-antenna \ac{ue} as shown in Fig.\,\ref{fig:setup}(a). We indicate the position of the \ac{bs} and the \ac{ris} center by $\bm{p}_{\mathrm{b}}\in \mathrm{R}^3$ and $\bm{p}_{\mathrm{r}}\in \mathrm{R}^3$ according to some general coordinate system. The values of $\bm{p}_{\mathrm{b}}$ and $\bm{p}_{\mathrm{r}}$ as well as the orientation of the \ac{ris} are assumed to be known. \rev{Additionally, we assume that the \ac{ue} is not time-synchronized to the \ac{bs}, leading to an unknown clock bias $\Delta_t\in \mathrm{R}$ at the \ac{ue} with respect to the \ac{bs}.} In addition to the \ac{ue}'s position ($\bm{p}\in \mathrm{R}^3$) \rev{and} clock bias $\Delta_t$, \rev{its} velocity ($\bm{v}\in \mathrm{R}^3$) \rev{is} unknown and to be estimated. The \ac{ris} is a \ac{upa} with $M = M_1\times M_2$ elements. 
The element in the $r$th row ($r \in \{0, \dots, M_{1}-1\}$) and $s$th column ($s \in \{0, \dots, M_{2}-1\}$) has the position $\bm{q}_{r,s} = [dr-d(M_{1}-1)/2,\,0,\,ds-d(M_{2}-1)/2 ]$ in the local coordinate system of the \ac{ris}, with $d$ being the spacing between the elements. The phase profile matrix of the \ac{ris} at time $\ell$ is shown by $\bm{\Gamma}_{\ell}\in\mathbb{C}^{M_1\times M_2}$, where $[\bm{\Gamma}_\ell]_{r,s}$ indicates the phase shift applied to the impinging signal via the \ac{ris} element in the $r$th row and $s$th column. \subsection{Geometric relations} We introduce $v_{\mathrm{b}}$ and $v_{\mathrm{r}}$ as the \ac{ue}'s radial velocities (Doppler) along the UE-BS and UE-RIS directions, respectively, which are given by \begin{align} v_{\mathrm{b}} &= \bm{v}^{\top} (\bm{p}_{\mathrm{b}}-\bm{p})/\Vert\bm{p}_{\mathrm{b}}-\bm{p}\Vert \label{eq:vub}\\ v_{\mathrm{r}} &= \bm{v}^{\top} (\bm{p}_{\mathrm{r}}-\bm{p})/\Vert\bm{p}_{\mathrm{r}}-\bm{p}\Vert.\label{eq:vur} \end{align} In addition, $\tau_{\mathrm{b}}$ and $\tau_{\mathrm{r}}$ represent, respectively, the delays of the direct and the reflected paths \begin{align} \tau_{\mathrm{b}} &= \frac{\Vert\bm{p}_{\mathrm{b}}-\bm{p}\Vert}{c}+\Delta_t\label{eq:taub}\\ \tau_{\mathrm{r}}&=\frac{\Vert\bm{p}_{\mathrm{b}}-\bm{p}_{\mathrm{r}}\Vert+\Vert\bm{p}_{\mathrm{r}}-\bm{p}\Vert}{c}+\Delta_t,\label{eq:taur} \end{align} where $\Delta_t$ is the clock bias and $c$ is the speed of light. The \ac{aod} from the \ac{ris} to the \ac{ue} is indicated by $\bm{\phi}$, which corresponds to the direction of the vector $\bm{s}$ from the \ac{ris} to the \ac{ue} in the local coordinate system of the \ac{ris}, i.e., $\bm{s} = \bm{R} (\bm{p}-\bm{p}_{\mathrm{r}})$, where $\bm{R}$ is a rotation matrix that maps the global frame of reference to the \ac{ris} local coordinate system. More specifically, we have \begin{align} [\bm{\phi}]_{\mathrm{az}} &=\atant\left( [\bm{s}]_{2}, [\bm{s}]_{1} \right)\label{eq:phiAz}\\ [\bm{\phi}]_{\mathrm{el}} &=\arccos \left(\frac{[\bm{s}]_{3}}{\vecNorm{\bm{p}-\bm{p}_{\mathrm{r}}}}\right).\label{eq:phiEl} \end{align} \subsection{Signal and baseline channel model} \label{sec:signalTransmissionStatic} We consider the transmission of $L$ OFDM symbols with $N$ subcarriers. Under the assumption of perfect frequency synchronization between the UE and the BS, the received signal after the \ac{fft} operation at the UE in the frequency/slow-time domain can be represented by the matrix $\bm{Y} \in \mathbb{C}^{N\times L}$ as \begin{align}\label{eq:channelModel:WB} \bm{Y} &= \bm{Y}_{\mathrm{b}}+ \bm{Y}_{\mathrm{r}}+\bm{N}, \end{align} where the noise matrix is represented by $\bm{N}$, whose elements are drawn independently from a circularly symmetric Gaussian distribution with variance $N_0$. The matrices $\bm{Y}_{\mathrm{b}}$ and $\bm{Y}_{\mathrm{r}}$ describe the signal received through the direct and reflected path, respectively. As a baseline, we consider a channel model that ignores any spatial-WB effect and assumes a sufficiently short observation time such that approximately $\bm{v}=\bm{0}$. For simplicity, we assume that all the transmitted symbols are equal to one.
Hence following \cite{keykhosravi2020siso} \begin{align} \bm{Y}_{\mathrm{b}} &= g_{\mathrm{b}} \bm{D}(\tau_{\mathrm{b}})\label{eq:YbStatic} \\ \bm{Y}_{\mathrm{r}} &= g_{\mathrm{r}} \bm{D}(\tau_\mathrm{r}) \odot \bm{A}( \bm{\phi}),\label{eq:YrStatic} \end{align} where the complex channel gain for the direct path is indicated by $g_{\mathrm{b}}$ and for the reflected one by $g_{\mathrm{r}}$. The matrix $\bm{D} \in \complexset{N}{L}$ is the delay steering vector repeated across time and is defined as \begin{align}\label{eq:matrixD} \bm{D}(\tau) = [1, e^{-\jmath 2\pi\Delta_f \tau }, \dots, e^{-\jmath 2 \pi (N-1) \Delta_f \tau }]^\top \bm{1}_{L}^{\top}, \end{align} where $\Delta_f$ is the subcarrier spacing. Let $ \bm{\theta} $ denote the known \ac{aoa} from the \ac{bs} to the \ac{ris}. In \eqref{eq:YrStatic}, $\bm{A}(\bm{\phi})\in \complexset{N}{L}$ captures the effects of \ac{ris} phase modulation, given by \begin{align} [\bm{A}(\bm{\phi})]_{n,\ell} &= \bm{a}( \bm{\theta} )^{\top} \mathrm{diag}(\bm{\gamma}_{\ell}) \bm{a}( \bm{\phi} ),\label{eq:aphi} \end{align} where all the rows of $\bm{A}(\bm{\phi})$ are identical. The vector $\bm{\gamma}_{\ell} \in \mathrm{C}^{M}$ is defined as \begin{align} \bm{\gamma}_{\ell} = \mathrm{vec}(\bm{\Gamma}_{\ell})\label{eq:def_gamma} \end{align} and it represents the RIS phase profile vector at time $\ell$. The vector $\bm{a}(\cdot) \in \mathbb{C}^{M}$ is the narrowband \ac{ris} response steering vector and is defined as \begin{align} [\bm{a}(\bm{\psi})]_{m} = \exp\left(\jmath\bm{k}(\bm{\psi})^\top [\bm{Q}]_{:,m}\right),\label{eq:aVector} \end{align} where the relative RIS element positions are contained in \begin{align}\label{eq:Q} \bm{Q}&=[\bm{q}_{0,0},\bm{q}_{1,0}, \dots, \bm{q}_{M_1-1,M_2-1}]. \end{align} The wavenumber vector is defined as \begin{align}\label{eq:WaveNumVect} \bm{k}(\bm{\psi}) &= \frac{2\pi}{\lambda}[\sin([\bm{\psi}]_{\mathrm{el}})\cos([\bm{\psi}]_{\mathrm{az}}),\nonumber\\ &\qquad\qquad \quad \sin([\bm{\psi}]_{\mathrm{el}})\sin([\bm{\psi}]_{\mathrm{az}}),\cos([\bm{\psi}]_{\mathrm{el}})]^\top, \end{align} where $[\bm{\psi}]_{\mathrm{az}}$ and $[\bm{\psi}]_{\mathrm{el}}$ represent the azimuth and elevation of the generic direction described by angle $\bm{\psi}$ (see Fig.\,\ref{fig:setup}(b)), and $\lambda=c/f_c$ is the wavelength at the carrier frequency. \section{Extended channel models for spatial-WB and UE mobility} \label{sec:extended-channel-models} While the channel model from Section \ref{sec:signalTransmissionStatic} is common in the RIS literature, it is limited in two ways. First of all, when the RIS and the signal bandwidth are both large, the model fails to capture the variation of the RIS steering vector with the frequency, which is a consequence of the definition of the structure of $\bm{A}(\bm{\phi})$ with identical rows. Secondly, the assumption of negligible velocity severely limits the duration of the coherent processing interval $L/\Delta_f$. We now present two channel models that extend the model from Section \ref{sec:signalTransmissionStatic} in non-trivial ways: the first model captures both the spatial-WB effects \rev{\cite{dovelosintelligent,face_squint,chen2021beam}} and \ac{ue} mobility \rev{\cite{sun_wcl_doppler}}, and it is used for developing the \ac{crb} and simulating the channel; the second model neglects the spatial-WB effects and is employed in the estimator design. 
The original model \eqref{eq:YbStatic}--\eqref{eq:YrStatic}, which neglects both \ac{ue} mobility and spatial-WB effects, will be assumed for designing the \ac{ris} phase profiles. \subsection{Signal transmission: Dynamic spatial-wideband model} \label{sec:signalTransmissionWB} \rev{In the dynamic spatial-WB model, two fundamental changes occur with respect to the static spatial-NB model in \eqref{eq:YbStatic}--\eqref{eq:YrStatic}. First, the RIS response matrix $\bm{A}(\bm{\phi})$ in \eqref{eq:aphi} becomes frequency-dependent, leading to non-identical rows. Second, we incorporate new steering matrices that capture fast-time (sample-level) and slow-time (symbol-level) Doppler-induced phase progressions. Accordingly, as} shown in Appendix\,\ref{app:Specially_wideband_Ch_Model}, \eqref{eq:YbStatic}--\eqref{eq:YrStatic} should be extended to\footnote{We assume that the angular displacement caused by UE mobility is negligible due to the far-field assumption.} \begin{align} \bm{Y}_{\mathrm{b}} &= g_{\mathrm{b}} \bm{F}\bm{E}(v_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left(\bm{D}(\tau_{\mathrm{b}}) \odot \ccbig_{\wideband} (v_{\mathrm{b}})\right), \label{eq:Yb}\\ \bm{Y}_{\mathrm{r}} &= g_{\mathrm{r}} \bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_\mathrm{r}) \odot \aabig_{\wideband} ( \bm{\phi}) \odot \ccbig_{\wideband} (v_{\mathrm{r}}) \right].\label{eq:Yr} \end{align} Here, the matrix $\bm{F} \in \complexset{N}{N}$ is the unitary DFT matrix with elements \begin{align} \left[ \bm{F} \right]_{n,\ell} = \frac{1}{\sqrt{N}} e^{- \jmath 2 \pi \frac{n \ell}{N}}\label{eq:dft} \end{align} for $n,\ell\in\{0,\dots,N-1\}$. In addition, $ \aabig_{\wideband} ( \bm{\phi} )$ represents the spatial-wideband version of $ \bm{A} ( \bm{\phi} )$ in \eqref{eq:aphi}; namely, \begin{align} [ \aabig_{\wideband} (\bm{\phi})]_{n,\ell} &= \bm{a}_n( \bm{\theta} )^{\top} \mathrm{diag}(\bm{\gamma}_{\ell}) \bm{a}_n( \bm{\phi} ),\label{eq:aphi_wb} \end{align} where the RIS steering vector now depends on the subcarrier index $n$: \begin{align} [\bm{a}_n(\bm{\psi})]_{m} = \exp\left(\jmath\bm{k}_n(\bm{\psi})^\top [\bm{Q}]_{:,m}\right),\label{eq:aVector_wb} \end{align} with $\bm{k}_n(\bm{\psi})$ being defined as in \eqref{eq:WaveNumVect} by replacing $\lambda$ with \begin{align} \lambda_n = \frac{c}{f_c+n\Delta_f}.\label{eq:lambda_n} \end{align} \rev{Moreover}, the effects of \ac{ue} mobility on the received signal is captured by the \ac{ici} phase rotation matrix $\bm{E}(v) \in \complexset{N}{N}$, \rev{which models Doppler-induced \textit{fast-time} phase rotations within an OFDM symbol \cite{Visa_CFO_TSP_2006,multiCFO_TSP_2019,multiCFO_TCOM_2019}}, and the temporal steering matrix $ \ccbig_{\wideband} (v)\in \complexset{N}{L}$, \rev{which quantifies Doppler-induced \textit{slow-time} phase progressions across consecutive OFDM symbols \cite{OFDM_ICI_TVT_2017,ICI_Friend_Foe_JSTSP}}: \begin{align} [ \ccbig_{\wideband} (v)]_{n,\ell} & \triangleq e^{\jmath 2\pi \ell T_{\rm{sym}} v/\lambda_n } \label{eq:Cmatrix} \\ \bm{E}(v) &\triangleq \diagg{1, e^{\jmath 2 \pi \frac{ T_{\mathrm{o}} }{N} v/\lambda }, \ldots, e^{\jmath 2 \pi \frac{ T_{\mathrm{o}} (N-1)}{N} v/\lambda} } \label{eq:Ematrix} \end{align} for $n\in\{0,\dots,N-1\}$ and $\ell\in\{0,\dots,L-1\}$. Here, $ T_{\mathrm{o}} =1/\Delta_f$ is the elementary symbol duration and $ T_{\rm{sym}} = T_{\rm{cp}} + T_{\mathrm{o}} $ is the total signal duration, with $ T_{\rm{cp}} $ denoting the cyclic prefix (CP) duration. 
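To make the structure of \eqref{eq:Yb}--\eqref{eq:Yr} concrete, the following Python/NumPy sketch assembles the subcarrier-dependent RIS response $ \aabig_{\wideband} (\bm{\phi})$, the slow-time Doppler matrix $ \ccbig_{\wideband} (v)$, and the \ac{ici} matrix $\bm{E}(v)$ for the reflected path. All numerical values, function names, and the element ordering used for $\bm{Q}$ are illustrative choices of ours and are not taken from the paper.
\begin{verbatim}
import numpy as np

c, fc, df = 3e8, 28e9, 120e3          # speed of light, carrier, subcarrier spacing
N, L, M1, M2 = 64, 16, 32, 32         # subcarriers, symbols, RIS dimensions
lam0 = c / fc
d = lam0 / 2                          # RIS element spacing
T_o = 1 / df                          # elementary symbol duration
T_sym = T_o * (1 + 1/16)              # total symbol duration (illustrative CP)

# RIS element positions Q (3 x M), one possible vectorization order
rr, ss = np.meshgrid(np.arange(M1), np.arange(M2), indexing="ij")
Q = np.stack([d*rr.ravel() - d*(M1-1)/2,
              np.zeros(M1*M2),
              d*ss.ravel() - d*(M2-1)/2])

def k_vec(az, el, lam):               # wavenumber vector at wavelength lam
    return 2*np.pi/lam * np.array([np.sin(el)*np.cos(az),
                                   np.sin(el)*np.sin(az),
                                   np.cos(el)])

def A_wb(theta, phi, gammas):         # [A_wb]_{n,l} = a_n(theta)^T diag(gamma_l) a_n(phi)
    out = np.zeros((N, L), dtype=complex)
    for n in range(N):
        lam_n = c / (fc + n*df)       # wavelength of the n-th subcarrier
        a_t = np.exp(1j * k_vec(*theta, lam_n) @ Q)
        a_p = np.exp(1j * k_vec(*phi, lam_n) @ Q)
        out[n, :] = (a_t * gammas.T) @ a_p
    return out

def C_wb(v):                          # slow-time Doppler steering matrix
    n, l = np.arange(N)[:, None], np.arange(L)[None, :]
    return np.exp(2j*np.pi * l * T_sym * v / (c / (fc + n*df)))

def E(v):                             # fast-time (ICI) phase-rotation matrix
    n = np.arange(N)
    return np.diag(np.exp(2j*np.pi * T_o * n * v / (N*lam0)))

# reflected path for illustrative values: delay 200 ns, 3 m/s, random RIS phases
F = np.exp(-2j*np.pi*np.outer(np.arange(N), np.arange(N))/N) / np.sqrt(N)
gammas = np.exp(2j*np.pi*np.random.default_rng(0).random((M1*M2, L)))
D = np.exp(-2j*np.pi*df*200e-9*np.arange(N))[:, None] * np.ones((1, L))
Y_r = 1e-4 * (F @ E(3.0) @ F.conj().T @ (D * A_wb((0.4, 1.2), (-0.3, 1.4), gammas) * C_wb(3.0)))
print(Y_r.shape)                      # (N, L)
\end{verbatim}
The direct-path term $\bm{Y}_{\mathrm{b}}$ in \eqref{eq:Yb} is obtained analogously by dropping the RIS response factor.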
\subsection{Signal transmission: Dynamic spatial-narrowband model} \label{sec:signalTransmissionNB} In order to reduce the complexity of our estimator, we design it based on a simpler channel than \eqref{eq:Yb}--\eqref{eq:Yr} \rev{by assuming a spatial-narrowband model}. \rev{In this case,} the channel \rev{in \eqref{eq:Yb}--\eqref{eq:Yr}} is constructed by reverting $\lambda_n$ in \eqref{eq:lambda_n} back to $\lambda=c/f_c$. This will simplify the structure of matrices $ \ccbig_{\wideband} $ and $ \aabig_{\wideband} $ by making their elements independent of \rev{the subcarrier index} $n$, i.e., all of their rows become identical. Specifically, \rev{under the spatial-narrowband model,} the received signal \rev{in \eqref{eq:Yb}--\eqref{eq:Yr} specializes to} \begin{align} \bm{Y}_{\mathrm{b}} &= g_{\mathrm{b}} \bm{F}\bm{E}(v_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left(\bm{D}(\tau_{\mathrm{b}}) \odot \bm{C} (v_{\mathrm{b}})\right) \label{eq:Ybn}\\ \bm{Y}_{\mathrm{r}} &= g_{\mathrm{r}} \bm{F} \bm{E}(v_{\mathrm{r}}) \bm{F}^{{\textrm{H}}} \left[\bm{D}(\tau_\mathrm{r}) \odot \bm{A} ( \bm{\phi}) \odot \bm{C} (v_{\mathrm{r}}) \right] ~,\label{eq:Yrn} \end{align} \rev{where the subcarrier-dependent matrices $ \ccbig_{\wideband} (v)$ and $ \aabig_{\wideband} (\bm{\phi})$ in \eqref{eq:Yb}--\eqref{eq:Yr} revert to their narrowband (subcarrier-independent) counterparts $ \bm{C} (v)$ and $ \bm{A} (\bm{\phi})$. Here,} $ \bm{A} (\bm{\phi})$ is defined in \eqref{eq:aphi} and\rev{\footnote{\rev{Note that the dynamic spatial-narrowband model \eqref{eq:Ybn}--\eqref{eq:Yrn} reverts to the static spatial-narrowband model \eqref{eq:YbStatic}--\eqref{eq:YrStatic} when $v = 0$. }}} \begin{align} [ \bm{C} (v)]_{n,\ell} & \triangleq e^{\jmath 2 \pi \ell T_{\rm{sym}} v/\lambda } \label{eq:CNmatrix}. \end{align} \rev{For the spatial-narrowband approximation in \eqref{eq:Ybn} and \eqref{eq:Yrn} to be valid, the following conditions must be satisfied (see Appendix~\ref{app_nb_valid} for details):} \begin{align} \max\{v_{\mathrm{r}},v_{\mathrm{b}}\} L T_{\rm{sym}} B \approx \max\{v_{\mathrm{r}},v_{\mathrm{b}}\}LN&\ll c \label{eq:condVelWB} \\ \max(M_1,M_2)d \sin(\alpha) B &\ll c ~, \label{eq:condSpWB} \end{align} \rev{which ensure the validity of the approximations $ \ccbig_{\wideband} (v) \approx \bm{C} (v)$ and $ \aabig_{\wideband} (\bm{\phi}) \approx \bm{A} (\bm{\phi})$, respectively.} Here, $\alpha=\max\{\alpha_{\mathrm{\phi}},\alpha_{\mathrm{\theta}}\}$, where $\alpha_{\mathrm{\phi}}$ and $\alpha_{\mathrm{\theta}}$ are the angles between the RIS normal ($[0,1,0]^\top$) and the two vectors $\bm{k}(\bm{\phi})$ and $\bm{k}(\bm{\theta})$, respectively, which are defined in \eqref{eq:WaveNumVect}. While the condition in \eqref{eq:condVelWB} almost always holds (corresponding to the assumption of small time-bandwidth product \cite{OFDM_ICI_TVT_2017}), the condition in \eqref{eq:condSpWB} does not hold for RISs with large dimension combined with signals of large bandwidth \cite{wang2018spatial}. We will study the effects of this assumption in Section~\ref{sec:simulationResults}. \section{RIS phase profile design}\label{sec:RisPhaseDesign} In this section, we consider the design of the \ac{ris} phase profile $\bm{\Gamma}_{\ell}$ for $\ell = 0, \dots L-1$. In order to mitigate the interference between the direct path and the reflected one, we use the method described in \cite{keykhosravi2021multi}. The method deploys temporal orthogonal \ac{ris} phase profiles and a post processing at the receiver. 
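As a compact illustration of this construction, the sketch below (Python/NumPy; function names and dimensions are ours and purely illustrative) builds the paired profiles $\bm{\gamma}_{2k}=\bm{b}_{k}$, $\bm{\gamma}_{2k+1}=-\bm{b}_{k}$ and the receiver-side combining that is formalized in Algorithm\,\ref{alg:match} and \eqref{eq:Zbs}--\eqref{eq:Zrs} below.
\begin{verbatim}
import numpy as np

def orthogonal_profiles(beams):
    """Given L/2 beams b_k as columns of an M x L/2 matrix, return the L
    RIS profiles with gamma_{2k} = b_k and gamma_{2k+1} = -b_k."""
    M, K = beams.shape
    gammas = np.empty((M, 2*K), dtype=complex)
    gammas[:, 0::2] = beams
    gammas[:, 1::2] = -beams
    return gammas

def match(Y, w):
    """Weighted combination of consecutive symbol pairs (cf. Algorithm 'match')."""
    return w[0]*Y[:, 0::2] + w[1]*Y[:, 1::2]

# toy usage with random unit-modulus beams and a dummy received signal
rng = np.random.default_rng(0)
M, L, N = 16, 8, 32
beams = np.exp(2j*np.pi*rng.random((M, L//2)))
gammas = orthogonal_profiles(beams)
Y = rng.standard_normal((N, L)) + 1j*rng.standard_normal((N, L))
Z_b = match(Y, [1, 1])     # combiner that isolates the direct path
Z_r = match(Y, [1, -1])    # combiner that isolates the reflected path
print(gammas.shape, Z_b.shape, Z_r.shape)
\end{verbatim}
The formal derivation, the noise terms, and a toy example are given in the remainder of this section.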
\rev{This process resembles code-division multiplexing, a well-known method in wireless communications (see, e.g., \cite{hwacdma}).} It can remove the interpath interference completely in the static scenario ($\bm{v}=\bm{0}$). Next, we use the static channel model in Section\,\ref{sec:signalTransmissionStatic} to describe the \ac{ris} phase profile design. \subsection{Orthogonal RIS phase profiles} We set $L$ to be an even number and for each $ k = 0, 1, \dots, L/2-1$ we select beams $\bm{B}_{k}\in\mathbb{C}^{M_1\times M_2}$ either randomly or according to a directional codebook (we elaborate on this in Section\,\ref{sec:beamforming}). Also, similarly to \eqref{eq:def_gamma}, we define $\bm{b}_k = \mathrm{vec}(\bm{B}_k)$. Then we set $\bm{\gamma}_{2k} = \bm{b}_{k}$ and $\bm{\gamma}_{2k+1} = - \bm{b}_{k}$. By doing so, from \eqref{eq:aphi} we have that $[\bm{A}(\bm{\phi})]_{:,2k+1} = -[\bm{A}(\bm{\phi})]_{:,2k}$. Therefore, from \eqref{eq:YbStatic} and \eqref{eq:YrStatic}, we have \begin{align} [\bm{Y}_{\mathrm{b}}]_{:,2k+1} & =g_{\mathrm{b}} \left[\bm{D}(\tau_{\mathrm{b}})\right]_{:,2k+1}\\ &= [\bm{Y}_{\mathrm{b}}]_{:,2k} \label{eq:Yb_2l}\\ [\bm{Y}_{\mathrm{r}}]_{:,2k+1} & = g_{\mathrm{r}} \left[\bm{D}(\tau_\mathrm{r})\right]_{:,2k+1} \odot \left[\bm{A}( \bm{\phi})\right]_{:,2k+1} \\ & = - g_{\mathrm{r}} \left[\bm{D}(\tau_\mathrm{r})\right]_{:,2k} \odot \left[\bm{A}( \bm{\phi})\right]_{:,2k}\label{eq:Yr2l0}\\ &=-[\bm{Y}_{\mathrm{r}}]_{:,2k}.\label{eq:Yr2l} \end{align} The post-processing step at the receiver involves calculating matrices $\bm{Z}_{\mathrm{b}}\in\complexset{N}{L/2}$ and $\bm{Z}_{\mathrm{r}}\in\complexset{N}{L/2}$ as \begin{align} \left[\bm{Z}_{\mathrm{b}}\right]_{:,k} &= \left[\bm{Y}\right]_{:,2k} + \left[\bm{Y}\right]_{:,2k+1}\\ & = 2g_{\mathrm{b}} \left[\bm{D}(\tau_{\mathrm{b}})\right]_{:,2k}+\left[\bm{N}\right]_{:,2k}+\left[\bm{N}\right]_{:,2k+1}\\ &= 2[\bm{Y}_{\mathrm{b}}]_{:,2k}+\left[\bm{N}\right]_{:,2k}+\left[\bm{N}\right]_{:,2k+1}\label{eq:Zbs}\\ \left[\bm{Z}_{\mathrm{r}}\right]_{:,k} &= \left[\bm{Y}\right]_{:,2k} - \left[\bm{Y}\right]_{:,2k+1}\\ & = 2g_{\mathrm{r}} \left[\bm{D}(\tau_\mathrm{r})\right]_{:,2k} \odot \left[\bm{A}( \bm{\phi})\right]_{:,2k}+\left[\bm{N}\right]_{:,2k}-\left[\bm{N}\right]_{:,2k+1}\nonumber\\ &= 2[\bm{Y}_{\mathrm{r}}]_{:,2k}+\left[\bm{N}\right]_{:,2k}-\left[\bm{N}\right]_{:,2k+1}.\label{eq:Zrs} \end{align} It can be seen from \eqref{eq:Zbs} and \eqref{eq:Zrs} that the matrix $\bm{Z}_{\mathrm{b}}$ ($\bm{Z}_{\mathrm{r}}$) depends only on the parameters of the direct (reflected) channel. Therefore, with the aforementioned \ac{ris} phase profile design and post-processing, we can remove the interference between the two paths, which facilitates the estimation of the channel parameters. \rev{Furthermore, from \eqref{eq:Zbs} and \eqref{eq:Zrs} it can be seen that the signals $\bm{Z}_{\mathrm{b}}$ and $\bm{Z}_{\mathrm{r}}$ have higher SNRs compared to the signals $\bm{Y}_{\mathrm{b}}$ and $\bm{Y}_{\mathrm{r}}$, respectively. This indicates that the presented orthogonal coding does not result in a waste of resources by repeating the beams.} For clarification, we consider a toy example with $L=4$ and $M=1$ and $[\bm{b}_0, \bm{b}_1]=[e^{\jmath \theta_0},e^{\jmath \theta_1}]$ for some $\theta_0,\theta_1 \in [0,2\pi)$. Then the set of RIS phase profiles would be $[\bm{\gamma}_0, \bm{\gamma}_1, \bm{\gamma}_2, \bm{\gamma}_3] = [e^{\jmath \theta_0}, - e^{\jmath \theta_0}, e^{\jmath \theta_1}, -e^{\jmath \theta_1}]$.
Also if the noise is neglected, we have $[\bm{Z}_{\mathrm{b}}]_{:,k} = 2 g_{\mathrm{b}} \bm{d}(\tau_{\mathrm{b}})$ and $[\bm{Z}_{\mathrm{r}}]_{:,k} = 2 g_{\mathrm{r}} e^{\jmath \theta_k}\bm{d}(\tau_{\mathrm{r}})$ for $k=0, 1$. For future use, we refer to the post processing step in \eqref{eq:Zbs} and \eqref{eq:Zrs} as matching the signal $\bm{Y}$ with vectors $\bm{w}_{\mathrm{b}}=[1,1]^{\top}$ and $\bm{w}_{\mathrm{r}}=[1,-1]^{\top}$, respectively. We explain this step in Algorithm\,\ref{alg:match} as follows. \begin{algorithm}[h] \caption{\textit{match($\bm{Y}$,$\bm{w}$)} }\label{alg:match} \textbf{Inputs:} Received signal ($\bm{Y}\in \complexset{N}{L}$) and vector $\bm{w}\in \mathbb{C}^2$. \\ \textbf{Output:} $\bm{Z}\in \complexset{N}{L/2}$. \begin{algorithmic}[1] \For {$k\in \{0, \dots, L/2-1\}$} \State $\left[\bm{Z}\right]_{:,k} = [\bm{w}]_1\left[\bm{Y}\right]_{:,2k} + [\bm{w}]_2 \left[\bm{Y}\right]_{:,2k+1}$ \EndFor \Return $\bm{Z}$ \end{algorithmic} \end{algorithm} \subsection{Loss of orthogonality due to UE mobility} For the dynamic case ($\bm{v}\neq \bm{0}$), one can write \rev{\eqref{eq:Zbs}--\eqref{eq:Zrs}} as \begin{align} \left[\bm{Z}_{\mathrm{b}}\right]_{:,k} &=(2-\epsilon(v_{\mathrm{b}}))[\bm{Y}_{\mathrm{b}}]_{:,2k}+\epsilon(v_{\mathrm{r}})[\bm{Y}_{\mathrm{r}}]_{:,2k}\nonumber\\ & \ \ \ \ +\left[\bm{N}\right]_{:,2k}+\left[\bm{N}\right]_{:,2k+1}\label{eq:Zbs_dynamic}\\ \left[\bm{Z}_{\mathrm{r}}\right]_{:,k} &= (2-\epsilon(v_{\mathrm{r}}))[\bm{Y}_{\mathrm{r}}]_{:,2k}+\epsilon(v_{\mathrm{b}})[\bm{Y}_{\mathrm{b}}]_{:,2k}\nonumber\\ & \ \ \ \ +\left[\bm{N}\right]_{:,2k}-\left[\bm{N}\right]_{:,2k+1}\label{eq:Zrs_dynamic}, \end{align} where $\epsilon(v) = 1-\exp(\jmath 2\pi T_{\mathrm{sym}} v/\lambda)$. By comparing \eqref{eq:Zbs_dynamic}--\eqref{eq:Zrs_dynamic} with \eqref{eq:Zbs}--\eqref{eq:Zrs}, one can see that \ac{ue} mobility introduces two impairments to the proposed method: \begin{itemize} \item Energy loss: some of the signal energy of the desired path is lost since $\vert2-\epsilon(v)\vert<2$ \item Residual interference: the second term in \eqref{eq:Zbs_dynamic}--\eqref{eq:Zrs_dynamic} exhibits the interference from the undesired path. \end{itemize} We design our estimator based on the approximation $\epsilon(v) = 0$. To counter the aforementioned impairments, we apply multiple iterations and deploy successive cancellation. \subsection{\rev{RIS phase profile design}}\label{sec:beamforming} In this section, we discuss the selection of $\bm{B}_{k}$ or equivalently $\bm{b}_{k}$. We consider two methods, namely random and directional profiles. The latter can be used when a prior information about the \ac{ue} location is available and the former when such information is lacking. \subsubsection{Random profile}\label{sec:randCodebook} With the random codebook, for $m = 0,\dots,M-1$ and $k = 0,\dots,L/2-1$ we let \begin{align} [\bm{b}_{k}]_m = e^{\jmath \theta_{k,m}}, \end{align} where $\theta_{k,m}$ are \ac{iid} realizations of the uniform distribution over the interval $[0,2\pi)$. \subsubsection{Directional profile} \label{sec:dirCodebook} Here, we assume that we have a prior knowledge of the \ac{ue} position, $\bm{\xi}$, which is distributed uniformly throughout the sphere \begin{align} \vert \bm{p}-\bm{\xi}\vert<\sigma.\label{eq:sphere} \end{align} We call $\sigma$ the uncertainty radius. Given the prior position knowledge $\bm{\xi}$, the \ac{ris} phase profile is designed as follows. 
We first select $L/2$ points $\bm{\xi}_{0},\dots,\bm{\xi}_{L/2-1}$ randomly (with uniform distribution) from the sphere centered at $\bm{\xi}$ with radius $\sigma$. Second, we set $\bm{b}_{k} = \bm{f}(\bm{\xi}_{k})$ where for $m=0,\dots, M-1$: \begin{align} [\bm{f}(\bm{x})]_{m} = \exp\left(-\jmath \left(\bm{k}(\bm{\theta})^{\top} +\frac{2\pi(\bm{x}^{\top}-\bm{p}_{\mathrm{r}}^{\top})}{\lambda\Vert \bm{x}-\bm{p}_{\mathrm{r}} \Vert}\right) [\bm{Q}]_m\right). \label{eq:GammaFunction} \end{align} One can see that with the phase profile in \eqref{eq:GammaFunction} the reflected signal energy from the \ac{ris} is concentrated towards the point $\bm{x}$. \section{Estimation algorithm}\label{sec_estimator} In this section, we propose an estimator to estimate first the channel parameters and then the \ac{ue} position and clock bias based on them. The overall process is described in the flowchart of Fig.\,\ref{fig:flowchart} using multiple separate procedures described in Algorithm~\ref{alg:coarse_v}--\ref{alg:pos} as building blocks. \rev{ To estimate the parameters, we first obtain a coarse estimation and then use it as an initial point in a refinement process, which is a standard approach in localization literature (see e.g., \cite{Shahmansoori18TWC}). Next we describe the Algorithms using a bottom-up approach.} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig2.pdf} \caption{A flowchart of Algorithm~\ref{alg:estimator}, comprising three stages: estimation of the parameters of the direct path (green), estimation of the parameters of the reflected path (blue), and position estimation (orange).} \label{fig:flowchart} \end{figure} \subsection{Estimation of $v_{\mathrm{b}}$} \label{sec:estimation_v} For estimating the UE velocity, we first obtain a coarse estimation using standard methods based on the \ac{dft} matrix and then provide a refined estimation by using the coarse estimate as the initial point for our optimization. \rev{Algorithm\,\ref{alg:coarse_v} provides a coarse estimation of the velocity.} The input signal is an estimate of $\bm{Z}_{\mathrm{b}} = \textit{match}(\bm{Y},\bm{w}_{\mathrm{b}})$ described in \eqref{eq:Zbs_dynamic}. One can see that for every $n$ we have that \begin{align} [\bm{Z}_{\mathrm{b}}]_{n,:} \approx \xi_n [ 1, e^{\jmath 2h_{\mathrm{v}}v_{\mathrm{b}}}, e^{\jmath 4h_{\mathrm{v}}v_{\mathrm{b}}}, \dots, e^{\jmath (L-2) h_{\mathrm{v}}v_{\mathrm{b}}} ]\label{eq:Zb_n} \end{align} for some scalar $\xi_n\in \mathbb{C}$, \rev{ where $h_{\mathrm{v}} = 2\pi T_{\mathrm{sym}}/\lambda$.} Then it can be seen that the maximum of \rev{ \begin{align}\label{eq:fv_defenition} f(v) = \Vert \hat{\bm{Z}}_{\mathrm{b}}[ 1, e^{\jmath 2h_{\mathrm{v}}v}, \dots, e^{\jmath (L-2) h_{\mathrm{v}}v} ]^{{\textrm{H}}} \Vert^2 \end{align}} provides an estimate of $v_{\mathrm{b}}$. \rev{To find the maximum of $f(v)$, we note that} the structure shown in \eqref{eq:Zb_n} is similar to the rows of the \ac{dft} matrix $\bm{F}$ in Line\,\ref{CoarseV:Line1} of Algorithm\,\ref{alg:coarse_v}, that is \begin{align} [\bm{F}]_{n,:}=\frac{\sqrt{2}}{\sqrt{L}}[1, e^{-\jmath \omega n}, e^{-\jmath 2\omega n}, \dots, e^{-\jmath (L/2-1)\omega n}]\label{eq:FRowsForV} \end{align} for all $n= 0, \dots, N_{\mathrm{v}}-1$. Here, $\omega=2\pi/N_{\mathrm{v}}$ and $N_{\mathrm{v}}$ is a design parameter that determines the dimension of the \ac{dft} matrix and accuracy of our coarse estimation. 
By comparing \eqref{eq:FRowsForV} to \eqref{eq:Zb_n}, one can \rev{approximate $\arg\max_v f(v)$} via the maximization in Line\,\ref{CoarseV:Line3} and the assignment in Line\,\ref{CoarseV:Line6}. Finally, the condition in Line\,\ref{CoarseV:Line4} compensates for the wrap-around effect in the complex-exponential function when $v_{\mathrm{b}}<0$. \rev{\emph{Refinement}: Let the output of Algorithm\,\ref{alg:coarse_v} be $v_{0}$. To refine this estimation, we compute $\hat{v}_{\mathrm{b}}=\arg\max_v f(v)$ via a \emph{quasi-Newton} algorithm initialized at $v_{0}$.} \begin{algorithm}[h] \caption{\textit{Coarse\_Velocity\_Est($\hat{\bm{Z}}_{\mathrm{b}}$)} }\label{alg:coarse_v} \textbf{Inputs:} Signal ($\hat{\bm{Z}}_{\mathrm{b}}\in \complexset{N}{L/2}$) \\ \textbf{Parameters:} DFT dimension ($N_{\mathrm{v}}$) \\ \textbf{Output:} $\hat{v}_{0}$ \begin{algorithmic}[1] \State $ \bm{F} \gets N_{\mathrm{v}} \times L/2$ DFT matrix\label{CoarseV:Line1} \State $\bm{Z}_{\mathrm{v}} \gets \bm{F} \hat{\bm{Z}}_{\mathrm{b}}^{\top}$\label{CoarseV:Line2} \State $i_{\mathrm{m}} \gets \mathrm{argmax}_i \Vert [\bm{Z}_{\mathrm{v}}]_{i,:}\Vert$ \label{CoarseV:Line3} \If{$i_{\mathrm{m}}> N_{\mathrm{v}}/2$}\label{CoarseV:Line4} \State $i_{\mathrm{m}} \gets i_{\mathrm{m}} - N_{\mathrm{v}}+1$\label{CoarseV:Line5} \EndIf \State $\hat{v}_{0} \gets i_{\mathrm{m}}\lambda/(2T_{\mathrm{sym}}N_{\mathrm{v}})$\label{CoarseV:Line6}\\ \Return $\hat{v}_{0}$ \end{algorithmic} \end{algorithm} \subsection{Estimation of ToA}\label{sec:Estimation_tau} Similarly to Section\,\ref{sec:estimation_v}, the estimation of the \ac{toa} comprises coarse and fine estimation steps. \rev{Algorithm\,\ref{alg:coarse_tau} describes the coarse} estimation of the \ac{toa} given the input signal $\bm{Z}_{\mathrm{\tau}}$\footnote{\rev{We use Algorithm\,\ref{alg:coarse_tau} within Algorithm\,\ref{alg:estimator_direct}, where the input ($\bm{Z}_{\mathrm{\tau}}$) is an estimate of $\sum_t[\bm{Z}_{\mathrm{b}}]_{:,t}$ with dimension $N\times 1$, and also in Algorithm\,\ref{alg:estimator_reflected}, where the input is an estimate of $\bm{Z}_{\mathrm{r}}$ with dimension $N\times L/2$.}}. We assume that the columns of the input signal have the structure \begin{align} [\bm{Z}_{\mathrm{\tau}}]_{:,t}\approx \xi_{t}[1, e^{\jmath h_{\mathrm{\tau}}\tau_{\mathrm{x}}}, \dots, e^{\jmath (N-1) h_{\mathrm{\tau}}\tau_{\mathrm{x}}}]^{\textrm{H}} \label{eq:Y_tau_t} \end{align} for some $\xi_t\in\mathbb{C}$, where $\tau_{\mathrm{x}}$ represents either $\tau_{\mathrm{b}}$ or $\tau_{\mathrm{r}}$. Algorithm\,\ref{alg:coarse_tau} can be explained similarly as in Section\,\ref{sec:estimation_v} using \eqref{eq:Y_tau_t}. \rev{\emph{Refinement:}} Based on \eqref{eq:Y_tau_t}, a fine estimate of $\tau_{\mathrm{b}}$ or $\tau_{\mathrm{r}}$ can be found by calculating \begin{align}\label{eq:fineTau} \hat{\tau}=\arg\max_{\tau} \Vert [ 1, e^{\jmath h_{\mathrm{\tau}}\tau}, \dots, e^{\jmath (N-1) h_{\mathrm{\tau}}\tau} ] \bm{Z}_{\mathrm{\tau}} \Vert^2, \end{align} where $h_{\mathrm{\tau}} = 2\pi \Delta_f$; a numerical sketch of this coarse-plus-fine delay search is given below.
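In the sketch (Python/NumPy/SciPy, with illustrative parameter values and a synthetic input having the column structure of \eqref{eq:Y_tau_t}), an off-the-shelf quasi-Newton routine stands in for the refinement discussed next; the names and values below are ours, not the paper's.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
df = 120e3                                   # subcarrier spacing (illustrative)
N, T = 64, 8
h_tau = 2*np.pi*df
tau_true = 180e-9

# synthetic input with the column structure of eq:Y_tau_t
col = np.exp(-1j*h_tau*tau_true*np.arange(N))
Z = col[:, None] * (rng.standard_normal(T) + 1j*rng.standard_normal(T))[None, :]
Z += 0.05*(rng.standard_normal((N, T)) + 1j*rng.standard_normal((N, T)))

# coarse stage: zero-padded IDFT over the subcarrier dimension
N_tau = 16*N                                 # IDFT size sets the coarse grid resolution
W = np.fft.ifft(Z, n=N_tau, axis=0)
i_m = np.argmax(np.linalg.norm(W, axis=1))
tau_coarse = i_m / (df*N_tau)

# refinement: maximize the objective of eq:fineTau, initialized at tau_coarse
def neg_obj(tau_ns):                         # parametrized in ns for numerical scaling
    a = np.exp(1j*h_tau*tau_ns*1e-9*np.arange(N))
    return -np.linalg.norm(a @ Z)**2

tau_hat = 1e-9*minimize(neg_obj, x0=tau_coarse*1e9, method="BFGS").x[0]
print(tau_coarse*1e9, tau_hat*1e9)           # both close to 180 ns
\end{verbatim}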
\rev{The optimization \eqref{eq:fineTau} can be solved via a quasi-Newton algorithm that uses the coarse estimation as the initial point of search.} \begin{algorithm}[h] \caption{\textit{Coarse\_delay\_Est($\bm{Z}_{\mathrm{\tau}}$)} }\label{alg:coarse_tau} \textbf{Inputs:} Signal ($\bm{Z}_{\mathrm{\tau}}\in \complexset{N}{T}$, \rev{where $T\in \{1, L/2\}$})\\ \textbf{Parameters:} IDFT dimension ($N_{\tau}$) \\ \textbf{Output:} $\hat{\tau}$ \begin{algorithmic}[1] \State $ \bm{F} \gets N_{\tau} \times N$ DFT matrix \State $\bm{W}_{\tau} \gets \bm{F}^{{\textrm{H}}} \bm{Z}_{\mathrm{\tau}}$ \State $i_{\mathrm{m}} \gets \mathrm{argmax}_i \Vert [\bm{W}_{\tau}]_{i,:}\Vert$ \State $\hat{\tau}\gets i_{\mathrm{m}}/(\Delta_f N_{\tau})$ \\ \Return $\hat{\tau}$ \end{algorithmic} \end{algorithm} \subsection{Joint estimation of velocity and AoD for the reflected path}\label{sec:Estimation_va} In this section, we describe coarse and fine steps for joint estimation of the angle and velocity. Algorithm\,\ref{alg:coarse_vA_phi} describes the \rev{coarse} estimation process of \ac{aod} and velocity. We assume that the input signal $\bm{z}_{\mathrm{\phi}}$ is proportional to the rows of the matrix $\bm{C}(v)\odot\bm{A}(\bm{\phi})$ and therefore has the structure \begin{align} [\bm{z}_{\mathrm{\phi}}]_k & = \xi e^{\jmath 2kh_{\mathrm{v}} v} \bm{a}(\bm{\theta})^\top \mathrm{diag}(\bm{b}_{k})\bm{a}(\bm{\phi})\label{eq:zPhi2} \end{align} for some constant $\xi\in\mathbb{C}$ and velocity $v$, which is to be estimated. Also, the constant $h_{\mathrm{v}}$ is defined as $h_{\mathrm{v}} = 2\pi T_{\mathrm{sym}}/\lambda$. To obtain a coarse estimation of $v$ and $\bm{\phi}$ based on the input signal $\bm{z}_{\mathrm{\phi}}$ described in \eqref{eq:zPhi2}, Algorithm\,\ref{alg:coarse_vA_phi} uses a set of candidate \acp{aod}. For the $s$th candidate, we calculate $\bm{z}_{s}$ in Line\,\ref{CoarseVA_Line4} and then normalize it in Line\,\ref{CoarseVA_Line5} to obtain $\bm{w}_{s}$. Assume that for some $s_{\mathrm{m}}$ we have $\bm{\phi}_{s_\mathrm{m}} = \bm{\phi}$, then we have that \begin{align} \bm{z}_{\bm{\phi}} \propto [1, e^{\jmath 2 h_{\mathrm{v}} v}, \dots, e^{\jmath (L-2) h_{\mathrm{v}} v}]^\top \odot \bm{w}_{s_{\mathrm{m}}}. \label{eq:Zs} \end{align} Motivated by the structure in \eqref{eq:Zs}, we compute the correlation of $\bm{z}_{\bm{\phi}}$ with all $\bm{w}_{s}$ and all of the rows of the \ac{dft} matrix in Lines\,\ref{CoarseVA_Line6}--\ref{CoarseVA_Line7}. Then, we search over different values of $s$ and $i$ (which indicates the rows of the \ac{dft} matrix) to find the one with the highest correlation in Line\,\ref{CoarseVA_Line8}. We estimate $v_{\mathrm{r}}$ through Lines\,\ref{CoarseVA_Line10}--\ref{CoarseVA_Line12}, which are the same steps as in Lines\,\ref{CoarseV:Line4}--\ref{CoarseV:Line6} of Algorithm\,\ref{alg:coarse_v}. We explain in Appendix\,\ref{app:fft} how to choose the candidate AoDs. \rev{\emph{Refinement:}} According to the \rev{\ac{rhs}} of \eqref{eq:zPhi2}, for $k\in\{0, \dots, L/2\}$, we define \begin{align} [\bm{g}(v,\bm{\phi})]_{k} = e^{\jmath 2 k h_{\mathrm{v}} v} \bm{a}(\bm{\theta})^{\top} \mathrm{diag}(\bm{b}_{k}) \bm{a}(\bm{\phi}), \end{align} \rev{which is a function of $v$ and $\bm{\phi}$}. Then one can estimate the constant $\xi$ as \begin{align} \hat\xi = {\bm{g}(v,\bm{\phi})^{{\textrm{H}}}\bm{z}_{\mathrm{\phi}}}/{\bm{g}(v,\bm{\phi})^{{\textrm{H}}}\bm{g}(v,\bm{\phi})}. 
\end{align} Next, we can define the objective function \begin{align} f(v,\bm{\phi}) = \Vert\bm{z}_{\mathrm{\phi}} - \left({\bm{g}(v,\bm{\phi})^{{\textrm{H}}}\bm{z}_{\mathrm{\phi}}}/{\bm{g}(v,\bm{\phi})^{{\textrm{H}}}\bm{g}(v,\bm{\phi})}\right)\bm{g}(v,\bm{\phi})\Vert. \end{align} \rev{To refine the estimation of $v$ and $\bm{\phi}$, we conduct two consecutive minimization of $f(v,\bm{\phi})$ via a quasi-Newton algorithm initiating at the coarse estimations.} \begin{algorithm}[h] \caption{\textit{Coarse\_Velocity\_Angle\_Est}($\bm{z}_{\mathrm{\phi}},\{\bm{b}_k\}$)}\label{alg:coarse_vA_phi} \textbf{Inputs:} Signal ($\bm{z}_{\mathrm{\phi}}\in \mathbb{C}^{L/2}$)\\ \textbf{Parameters:} DFT dimensions ($N_{\mathrm{\nu}}$), set of candidate \acp{aod} $\{\bm{\phi}_s\}_{s=0}^{N_{\mathrm{\phi}}-1}$ \\ \textbf{Output:} $\hat{\bm{\phi}}$ and $\hat{v}_{\mathrm{r}}$ \begin{algorithmic}[1] \State $ \bm{F} \gets N_{\mathrm{\nu}} \times L/2$ DFT matrix \label{CoarseVA_Line1} \For{$s\in\{0,\dots,N_{\mathrm{\phi}}-1\}$ }\label{CoarseVA_Line2} \For{$k\in\{0,\dots,L/2\}$ }\label{CoarseVA_Line3} \State $[\bm{z}_s]_{k} = \bm{a}(\theta)^{\top} \mathrm{diag}(\bm{b}_{k}) \bm{a}(\bm{\phi}_s)$\label{CoarseVA_Line4} \EndFor \State $\bm{w}_{s} = \bm{z}_{s}/\Vert \bm{z}_s \Vert$\label{CoarseVA_Line5} \State $\bm{g}_s = \bm{w}_{s}^{*} \odot \bm{z}_{\mathrm{\phi}}$\label{CoarseVA_Line6} \State $\bm{h}_s = \bm{F} \bm{g}_s$\label{CoarseVA_Line7} \EndFor \State $[i_{\mathrm{m}},s_{\mathrm{m}}] \gets \max_{i,s} \vert[\bm{h}_s]_i\vert$\label{CoarseVA_Line8} \State $\hat{\bm{\phi}} \gets \bm{\phi}_{s_{\mathrm{m}}}$\label{CoarseVA_Line9} \If{$i_{\mathrm{m}}> N_{\mathrm{\nu}}/2$}\label{CoarseVA_Line10} \State $i_{\mathrm{m}} \gets i_{\mathrm{m}} - N_{\mathrm{\nu}}+1$\label{CoarseVA_Line11} \EndIf \State $\hat{v}_{\mathrm{r}} \gets i_{\mathrm{m}}\lambda/(2T_{\mathrm{sym}}N_{\mathrm{\nu}})$\label{CoarseVA_Line12}\\ \Return $\hat{\bm{\phi}}$ and $\hat{v}_{\mathrm{r}}$ \end{algorithmic} \end{algorithm} \subsection{Estimation of channel parameters for the direct path} Algorithm\,\ref{alg:estimator_direct} presents the estimation of channel parameters for the direct path based on some of the previous algorithms. The input is the received \ac{ofdm} signal $\bm{Y}$. First $\bm{w}_{\mathrm{b}}$ is used to extract the direct signal. Next, we estimate the value of $v_{\mathrm{b}}$, which requires solving a non-convex optimization. We solve this problem by first obtaining a coarse estimation in Line\,\ref{line3d:Est}. \rev{In Line\,\ref{line4d:Est}, we use the \emph{refinement} step described in Sec.\,\ref{sec:estimation_v} using $\hat{v}_{\mathrm{b}}$ as the initial value}. Then, the effects of \ac{ue} mobility on the direct signal are compensated for and once again the direct signal is extracted via the vector $\bm{w}_{\mathrm{b}}$ in Line\,\ref{line6d:Est}. By compensating for the effects of \ac{ue} mobility, we reduce the residual interference and energy loss in the \emph{matching} process (see Section\,\ref{sec:RisPhaseDesign}). One can see that the matrix $\hat{\bm{Z}}_{\mathrm{b}}$ in Line\,\ref{line6d:Est} is an estimate of ${\bm{Z}}_{\mathrm{b}}$ in \eqref{eq:Zbs}. The matrix $\hat{\bm{Z}}_{\mathrm{b}}$ is then summed across time to establish $\bm{z}_{\tau}\in \mathbb{C}^N$. One can see that $\bm{z}_{\tau}$ has the structure $Lg_{\mathrm{b}} \left[\bm{D}(\tau_{\mathrm{b}})\right]_{:,1}$. 
Therefore, we use $\bm{z}_{\tau}$ to estimate $\tau_{\mathrm{b}}$ and then we use $\hat{\tau}_{\mathrm{b}}$ and $\bm{z}_{\tau}$ to estimate $g_{\mathrm{b}}$ in Line\,\ref{line10d:Est}. \begin{algorithm}[t] \caption{\textit{Direct\_Par\_Est}($\bm{Y}$) }\label{alg:estimator_direct} \textbf{Inputs:} Signal ($\bm{Y}\in \complexset{N}{L}$)\\ \textbf{Output:} Estimation of parameters for the direct path: gain $\hat{g}_{\mathrm{b}}$, radial velocity $\hat{v}_{\mathrm{b}}$, and delay $\hat{\tau}_{\mathrm{b}}$ \begin{algorithmic}[1] \State $ \bm{w}_{\mathrm{b}} \gets [1,1]^\top$\label{line1d:Est} \State $ \bm{Z}_{\mathrm{b}} \gets \textit{match}(\bm{Y},\bm{w}_{\mathrm{b}})$\label{line2d:Est} \State $ \hat{v}_b \gets \textit{Coarse\_Velocity\_Est}(\bm{Z}_{\mathrm{b}})$\label{line3d:Est} \State $ \hat{v}_b \gets \textit{Fine\_Velocity\_Est}(\bm{Z}_{\mathrm{b}},\hat{v}_b)$\label{line4d:Est} \State $\bm{T}_{\mathrm{b}} \gets \left(\bm{F}\bm{E}(\hat{v}_{\mathrm{b}})^{-1}\bm{F}^{{\textrm{H}}}\bm{Y}\right)\odot \bm{C}(\hat{v}_{\mathrm{b}})^*$\label{line5d:Est} \State $\hat{\bm{Z}}_{\mathrm{b}} \gets \textit{match}(\bm{T}_{\mathrm{b}} ,\bm{w}_{\mathrm{b}})$\label{line6d:Est} \State $\bm{z}_{\tau} \gets \sum_t [\hat{\bm{Z}}_{\mathrm{b}}]_{:,t}$\label{line7d:Est} \State $\hat{\tau}_{\mathrm{b}}\gets\textit{Coarse\_delay\_Est}(\bm{z}_{\tau})$\label{line8:Est} \State $\hat{\tau}_{\mathrm{b}}\gets\textit{Fine\_delay\_Est}(\bm{z}_{\tau},\hat{\tau}_{\mathrm{b}})$\label{line9d:Est} \State $\hat{g}_{\mathrm{b}}\gets \left[\bm{D}(\hat{\tau}_{\mathrm{b}})\right]_{:,1}^{{\textrm{H}}}\bm{z}_{\tau}/(NL)$\label{line10d:Est}\\ \Return $\hat{g}_{\mathrm{b}}, \hat{v}_{\mathrm{b}},$ and $ \hat{\tau}_{\mathrm{b}}$ \label{line11d:Est} \end{algorithmic} \end{algorithm} \subsection{Estimation of channel parameters for the reflected path}\label{sec:Coars_Est_reflected_ch_par} In this section, we use some of the previous algorithms to estimate the channel parameters for the reflected path. The process is described in Algorithm\,\ref{alg:estimator_reflected}. The input matrix $\hat{\bm{Y}}_{\mathrm{r}}$ is an estimate of ${\bm{Y}}_{\mathrm{r}}$ in \eqref{eq:Yr}. First, we match the signal with the vector $\bm{w}_{\mathrm{r}}$ to reduce the temporal dimension of the input signal from $L$ to $L/2$, without loss of information and also to remove any interference from the direct path (or possible scatterers). Then, we estimate $\tau_{\mathrm{r}}$ in Line\,\ref{line3r:Est} and then compensate for its effects in Line\,\ref{line4r:Est}. Next, we neglect the spatial-WB effects and assume that the effects of $v_{\mathrm{r}}$, and $\vectt{\phi}_{\mathrm{r}}$ are constant across the subcarriers. Therefore, to estimate these parameters we perform a summation across all subcarriers to obtain $\bm{z}_{\bm{\phi}}\in \mathbb{C}^{L/2}$. Assuming the spatial-NB model (see Section~\ref{sec:signalTransmissionNB}), one can see that $\bm{z}_{\bm{\phi}}$ has the structure at the \ac{rhs} of \eqref{eq:zPhi2}. Therefore, we use $\bm{z}_{\bm{\phi}}$ to estimate $v_{\mathrm{r}}$ and $\vectt{\phi}_{\mathrm{r}}$ jointly in Line\,\ref{line6r:Est}. Next, we compensate for the effects of \ac{ue} mobility via $\hat{v}_{\mathrm{r}}$ in Line\,\ref{line8r:Est}. This reduces the interpath interference and the energy loss due to the \ac{ue} mobility. 
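For concreteness, this Doppler pre-compensation, i.e., the operation $(\bm{F}\bm{E}(\hat{v})^{-1}\bm{F}^{{\textrm{H}}}\bm{Y})\odot \bm{C}(\hat{v})^*$ used in Line\,\ref{line5d:Est} of Algorithm\,\ref{alg:estimator_direct} and Line\,\ref{line8r:Est} of Algorithm\,\ref{alg:estimator_reflected}, can be sketched as follows; NumPy is assumed and the default carrier, subcarrier spacing, and CP fraction are illustrative values of ours.
\begin{verbatim}
import numpy as np

def doppler_compensate(Y, v_hat, fc=28e9, df=120e3, cp_frac=1/16):
    """Undo the fast-time (ICI) and slow-time Doppler rotations for a given
    velocity estimate v_hat: (F E(v)^{-1} F^H Y) .* conj(C(v))."""
    c = 3e8
    lam = c / fc
    N, L = Y.shape
    T_o = 1 / df
    T_sym = T_o * (1 + cp_frac)
    F = np.exp(-2j*np.pi*np.outer(np.arange(N), np.arange(N))/N) / np.sqrt(N)
    E_inv = np.diag(np.exp(-2j*np.pi*T_o*np.arange(N)*v_hat/(N*lam)))
    C = np.exp(2j*np.pi*np.arange(L)[None, :]*T_sym*v_hat/lam)
    return (F @ E_inv @ F.conj().T @ Y) * np.conj(C)

# usage: Y_rs = doppler_compensate(Y_r_hat, v_r_hat)
print(doppler_compensate(np.ones((64, 16), dtype=complex), 3.0).shape)
\end{verbatim}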
The steps from Line\,\ref{line9r:Est} to Line\,\ref{line13r:Est} refine the estimations obtained in Lines\,\ref{line2r:Est}--\ref{line6r:Est} \rev{(using the corresponding \emph{refinement} steps in Sec.\ref{sec:estimation_v}--\ref{sec:Estimation_va})\footnote{\rev{The extra inputs serve as initial values for the quasi-Newton algorithm. Specifically $\textit{Fine\_Velocity\_Angle\_Est}(\bm{z}_{\mathrm{\phi}}, 0, \hat{\bm{\phi}}, \{\bm{b}_k\})$ uses the initial values $0$ and $\hat{\bm{\phi}}$ for searching along the velocity and \ac{aod} dimensions, respectively.}}}. However, since the effects of velocity have been already compensated for in Line~\ref{line8r:Est}, we estimate the residual velocity ${\Delta}\hat{v}$ (with zero as the initial estimate) in Line\,\ref{line13r:Est}, which is then added to the coarse estimation. This process is repeated $N_{\mathrm{iter}}$ times to obtain an accurate estimation, where $N_{\mathrm{iter}}$ is a design parameter. \rev{Alternatively, one can also stop the iterations after the difference between the estimated velocities becomes less than a certain threshold. } \begin{algorithm}[t] \caption{\textit{Reflected\_Par\_Est}($\hat{\bm{Y}}_{\mathrm{r}}, \{\bm{b}_0,\dots \bm{b}_{L/2-1}$\}) }\label{alg:estimator_reflected} \textbf{Inputs:} Signal ($\hat{\bm{Y}}_{\mathrm{r}} \in \mathbb{C}^{N\times L}$), beams $\{\bm{b}_k\}$\\ \textbf{Parameters:} Number of iterations $N_{\mathrm{iter}}$ \\ \textbf{Output:} Estimation of parameters for the reflected path: AoD $\hat{\phi}$, radial velocity $\hat{v}_{\mathrm{r}}$, delay $\hat{\tau}_{\mathrm{r}}$ \begin{algorithmic}[1] \State $ \bm{w}_{\mathrm{r}} \gets [1,-1]^\top$\label{line1r:Est} \State $ \hat{\bm{Z}}_{\mathrm{r}} \gets \textit{match}(\hat{\bm{Y}}_{\mathrm{r}},\bm{w}_{\mathrm{r}})$\label{line2r:Est} \State $\hat{\tau}_{\mathrm{r}}\gets\textit{Coarse\_delay\_Est}(\hat{\bm{Z}}_{\mathrm{r}})$\label{line3r:Est} \State $ \bm{T}_{\mathrm{r}} \gets \hat{\bm{Z}}_{\mathrm{r}}\odot [D(\hat{\tau}_{\mathrm{r}})^*]_{:,0:L/2-1}$\label{line4r:Est} \State $\bm{z}_{\mathrm{\phi}} \gets \sum_n [\bm{T}_{\mathrm{r}}]_{n,:}^{\top}$\label{line5r:Est} \State $[\hat{\phi}, \hat{v}_{\mathrm{r}}]\gets\textit{Coarse\_Velocity\_Angle\_Est}(\bm{z}_{\mathrm{\phi}},\{\bm{b}_k\})$\label{line6r:Est} \For{$i\in\{0,\dots, N_{\mathrm{iter}}\}$}\label{line7r:Est} \State $\hat{\bm{Y}}_{\mathrm{rs}} \gets \left(\bm{F}\bm{E}(\hat{v}_{\mathrm{r}})^{-1}\bm{F}^{{\textrm{H}}}\hat{\bm{Y}}_{\mathrm{r}}\right)\odot \bm{C}(\hat{v}_{\mathrm{r}})^*$\label{line8r:Est} \State $ \hat{\bm{Z}}_{\mathrm{rs}}\gets \textit{match}(\hat{\bm{Y}}_{\mathrm{rs}},\bm{w}_{\mathrm{r}})$\label{line9r:Est} \State $\hat{\tau}_{\mathrm{r}}\gets\textit{Fine\_delay\_Est}(\hat{\bm{Z}}_{\mathrm{rs}},\hat{\tau}_{\mathrm{r}})$\label{line10r:Est} \State $ \bm{T}_{\mathrm{rs}} \gets \hat{\bm{Z}}_{\mathrm{rs}}\odot [D(\hat{\tau}_{\mathrm{r}})^*]_{:,0:L/2-1}$\label{line11r:Est} \State $\bm{z}_{\mathrm{\phi}} \gets \sum_n [\bm{T}_{\mathrm{rs}}]_{n,:}$\label{line12r:Est} \State $[\Delta\hat{v}_{\mathrm{r}}$, $\hat{\bm{\phi}}]\gets\textit{Fine\_Velocity\_Angle\_Est}(\bm{z}_{\mathrm{\phi}}, 0, \hat{\bm{\phi}}, \{\bm{b}_k\})$\label{line13r:Est} \State $\hat{v}_{\mathrm{r}} = \hat{v}_{\mathrm{r}}+\Delta\hat{v}_{\mathrm{r}}$\label{line14r:Est} \EndFor\\ \Return $\hat{\phi}, \hat{v}_{\mathrm{r}},$ and $ \hat{\tau}_{\mathrm{r}}$ \label{line15r:Est} \end{algorithmic} \end{algorithm} \subsection{Estimation of UE position} Algorithm\,\ref{alg:pos} explains how to estimate the position of the \ac{ue} 
via the geometrical channel parameters. First, we calculate the direction of the \ac{ue} based on $\hat{\bm{\phi}}$ in Line\,\ref{line2:pos}. Next, based on \eqref{eq:taub} and \eqref{eq:taur}, we can estimate the distance between the \ac{ue} and the RIS by minimizing the function $f(d)$, defined in Line\,\ref{line3:pos}. \begin{algorithm}[h] \caption{$\textit{Position\_Est}(\hat{\tau}_{\mathrm{b}},\hat{\tau}_{\mathrm{r}},\hat{\bm{\phi}})$ }\label{alg:pos} \textbf{Inputs:} Estimation of the \acp{toa} ($\hat{\tau}_{\mathrm{b}}$,$\hat{\tau}_{\mathrm{r}}$) and \ac{aod} ($\hat{\bm{\phi}}$) \\ \textbf{Output:} $\hat{\bm{p}}$ \begin{algorithmic}[1] \State $\Delta r\gets c\vert\hat{\tau}_{\mathrm{r}}-\hat{\tau}_{\mathrm{b}}\vert$\label{line1:pos} \State $\bm{k}\gets \begin{bmatrix} \sin([\bm{\hat{\phi}}]_{\mathrm{el}})\cos([\bm{\hat{\phi}}]_{\mathrm{az}})\\ \sin([\bm{\hat{\phi}}]_{\mathrm{el}})\sin([\bm{\hat{\phi}}]_{\mathrm{az}})\\ \cos([\bm{\hat{\phi}}]_{\mathrm{el}}) \end{bmatrix}$\label{line2:pos} \State $f(d)\gets \left(d+\Vert\bm{p}_{\mathrm{b}}-\bm{p}_{\mathrm{r}}\Vert-\Vert\bm{p}_{\mathrm{b}}-\bm{p}_{\mathrm{r}}-d\bm{k}\Vert-\Delta r\right)^2$\label{line3:pos} \State $d_{\mathrm{m}}\gets \arg\min_d f(d)$ \label{line4:pos} \State $\hat{\bm{p}}\gets d_{\mathrm{m}}\bm{k}$\label{line5:pos}\\ \Return $\hat{\bm{p}}$\label{line6:pos} \end{algorithmic} \end{algorithm} \subsection{Overall process} The overall estimation process is described in Algorithm~\ref{alg:estimator}. First, the direct channel parameters are estimated in Line\,\ref{line1:Est}. Next, we reconstruct the direct signal and remove it from the received signal to obtain an estimate of the reflected one, from which the reflected channel parameters are estimated. Next, we use the estimate of the geometrical channel parameters to find the \ac{ue} position. Finally, in Line\,\ref{line5:Est}, we use \eqref{eq:taub} to estimate the \ac{ue} clock bias. \vspace{.5cm} \paragraph*{Complexity} Algorithm\,\ref{alg:estimator} has a low complexity compared to a search over all the possible values of the channel parameters to maximize the likelihood function, which requires a 6-dimensional search (the optimal values of the gains can be calculated in closed-form). Note that our estimator performs at most a \rev{3}-dimensional search at each step. \rev{Specifically, Algorithms\,\ref{alg:coarse_v} and \ref{alg:coarse_tau} (and their corresponding refinement step) each apply only one line search to find an estimate of the radial velocity and the delay, respectively. Algorithm\,\ref{alg:coarse_vA_phi} searches over the possible radial velocities and also the AoDs (elevations and azimuths); consequently, the search is performed over a 3D space, while its corresponding refinement step applies a 1D and a 2D search. Furthermore, we significantly reduce the complexity of our 3D search by using an FFT to search over velocities and a 2D FFT to search over the possible AoDs (see Appendix\,\ref{app:fft}). Therefore, the proposed algorithm} requires much less computational power compared to the maximum-likelihood estimator. 
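In particular, the geometric step in Algorithm\,\ref{alg:pos} amounts to a single bounded line search. The following sketch (our own illustration, not part of the proposed implementation; the BS/RIS placement and the generic bounded scalar minimizer are assumptions) shows this step in a few lines of Python:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

c   = 3e8
p_b = np.array([5.0, 5.0, 0.0])   # assumed BS position
p_r = np.array([0.0, 0.0, 0.0])   # assumed RIS position (origin)

def position_est(tau_b_hat, tau_r_hat, az, el):
    """Sketch of Position_Est: delay difference + AoD -> UE position."""
    delta_r = c * abs(tau_r_hat - tau_b_hat)          # Line 1
    k = np.array([np.sin(el) * np.cos(az),            # Line 2: direction
                  np.sin(el) * np.sin(az),            #   from the RIS
                  np.cos(el)])                        #   towards the UE
    d_br = np.linalg.norm(p_b - p_r)
    f = lambda d: (d + d_br
                   - np.linalg.norm(p_b - p_r - d * k)
                   - delta_r) ** 2                    # Line 3
    d_m = minimize_scalar(f, bounds=(0.1, 1e3),
                          method='bounded').x         # Line 4: 1D search
    return p_r + d_m * k                              # Line 5 (RIS at origin)
\end{verbatim}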
\begin{algorithm}[t] \caption{\textit{Estimator}($\bm{Y}, \{\bm{b}_0,\dots \bm{b}_{L/2-1}$\}) }\label{alg:estimator} \textbf{Inputs:} Received signal ($\bm{Y}\in \complexset{N}{L}$), beams $\{\bm{b}_k\}$\\ \textbf{Output:} Estimation of UE position ($\hat{\bm{p}}$), UE clock bias $\hat{\Delta}_t$, and radial velocities $\hat{v}_{\mathrm{b}}, \hat{v}_{\mathrm{r}}$ \begin{algorithmic}[1] \State $[\hat{g}_{\mathrm{b}}, \hat{v}_{\mathrm{b}}, \hat{\tau}_{\mathrm{b}}]\gets$\textit{Direct\_Par\_Est}($\bm{Y}$)\label{line1:Est} \State $\hat{\bm{Y}}_{\mathrm{r}}\gets \bm{Y} - \hat{g}_{\mathrm{b}} \bm{F}\bm{E}(\hat{v}_{\mathrm{b}}) \bm{F}^{{\textrm{H}}} \left(\bm{D}(\hat{\tau}_{\mathrm{b}}) \odot \bm{C}(\hat{v}_{\mathrm{b}})\right)$\label{line2:Est} \State $[\hat{\phi}, \hat{v}_{\mathrm{r}}, \hat{\tau}_{\mathrm{r}}]\gets$\textit{Reflected\_Par\_Est}($\hat{\bm{Y}}_{\mathrm{r}}, \{\bm{b}_0,\dots \bm{b}_{L/2-1}$\})\label{line3:Est} \State $\hat{\bm{p}}\gets \textit{Position\_Est}(\hat{\tau}_{\mathrm{b}},\hat{\tau}_{\mathrm{r}},\hat{\bm{\phi}})$\label{line4:Est} \State $\hat{\Delta}_t \gets \hat{\tau}_{\mathrm{b}}-\Vert\hat{\bm{p}}-\bm{p}_{\mathrm{b}}\Vert/c$\label{line5:Est}\\ \Return $\hat{\bm{p}}$, $\hat{\Delta}_t$, $\hat{v}_{\mathrm{b}}$, and $\hat{v}_{\mathrm{r}}$ \label{line6:Est} \end{algorithmic} \end{algorithm} \section{Simulation results}\label{sec:simulationResults} \begin{table}[!t] \vspace{.1cm} \caption{Parameters used in the simulation.} \label{table:par} \centering \begin{tabular}{l l l } \hline \hline Parameter&Symbol& Value\\ \hline RIS dimensions & $M_1\times M_2$ &$64\times 64$\\ Wavelength & $\lambda$ & $1 \ {\mathrm{cm}}$\\ RIS element distance & $d$ &$0.5 \ {\mathrm{cm}}$\\ Light speed & $c$ & $3\times 10^8 \ \mathrm{m/s}$\\ Number of subcarriers & $N$ & $3\, 000$\\ Subcarrier bandwidth & $\Delta_f$ & $120 \ \mr{kHz}$\\ Symbol duration & $T$ & $8.33 \ \mr{us}$\\ CP duration & $ T_{\rm{cp}} $ & $0.58 \ \mr{us}$\\ Number of transmissions & $L$ & $256$\\ Transmission Power &$N \Delta_f E_{\mathrm{s}}$ & $20 \ \mathrm{dBm}$\\ Noise PSD & $N_0$ & $-174 \ \mathrm{dBm/Hz}$\\ UE's Noise figure& $n_f$ & $8 \ \mathrm{dB}$\\ Noise variance& $\sigma^2=n_f N_0$ & $-166 \ \mathrm{dBm/Hz}$\\ BS position & $\bm{p}_{\mathrm{b}}$ & $[5,5,0]$\\ RIS position & $\bm{p}_{\mathrm{r}}$ & $[0,0,0]$\\ Uncertainty radius & $ \sigma$ & $1$m\\ \hline \hline \end{tabular} \end{table} In this section, we assess the accuracy of our estimation method and compare it to the \ac{crb} for a system example with default parameters listed in Table\,\ref{table:par}. The algorithm parameters are set to $N_{\mathrm{\tau}} = 4096$ \rev{(the IDFT dimension for delay estimation in Algorithm\,\ref{alg:coarse_tau})} and $N_{\mathrm{v}} = N_{\mathrm{\nu}} = 256 $ \rev{(the DFT dimension for velocity estimation in Algorithms\,\ref{alg:coarse_v} and \ref{alg:coarse_vA_phi}, respectively)}. \rev{The number of candidate \acp{aod} in Algorithm\,\ref{alg:coarse_vA_phi} is set to} $N_{\mathrm{\phi}} = 256$ when using the directional profiles or $N_{\mathrm{\phi}} = 256^2$ for the random profiles, \rev{and the selection of candidate \acp{aod} is done according to Appendix\,\ref{app:fft}.} \rev{Also, the number of iterations in Algorithm\,\ref{alg:estimator_reflected} is set to}\footnote{\rev{Based on our simulation results (not provided in this paper), for the considered parameters, the position error saturates after two iterations.}} $N_{\mathrm{iter}} = 3$. 
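To give a rough feeling for what these dimensions imply, the following back-of-the-envelope sketch (our own arithmetic; the delay-grid step of Algorithm\,\ref{alg:coarse_tau} is assumed to be $1/(N_{\mathrm{\tau}}\Delta_f)$) evaluates the coarse grid spacings, before refinement, for the parameters of Table\,\ref{table:par}:
\begin{verbatim}
# Coarse grid spacings implied by the chosen algorithm parameters
# (our own arithmetic; the delay-bin width is an assumption).
lam, Delta_f = 1e-2, 120e3            # wavelength, subcarrier spacing
T_sym = 8.33e-6 + 0.58e-6             # OFDM symbol plus CP duration
N_tau, N_nu = 4096, 256               # IDFT/DFT dimensions used above
c = 3e8

v_step = lam / (2 * T_sym * N_nu)     # velocity bin, cf. the last line of
                                      # Coarse_Velocity_Angle_Est
tau_step = 1 / (N_tau * Delta_f)      # assumed delay bin
print(v_step, c * tau_step)           # ~2.2 m/s and ~0.61 m
\end{verbatim}
The refinement steps then improve on these coarse values.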
The \ac{ris} is located at the origin such that the local coordinates of the RIS match the global coordinate system ($\bm{R}$ is the identity matrix). \rev{Following the widely used assumption of a quasi-static channel over a coherence interval\rev{\footnote{\rev{The UE mobility affects the time-varying phase of the received signal through Doppler-induced phase progressions in fast-time and slow-time domains, modeled by \eqref{eq:Ematrix} and \eqref{eq:Cmatrix}, respectively.}}} \cite{IRS_OFDM_TCOM_2020,Zhang_RIS_CE_2019,adaptiveOFDM_RIS_JSAC_2020,CE_OFDM_WCL_2021,CE_practical_2021,OFDMA_IRS_WCL_2020} (consisting of $L$ OFDM symbols)}, the channel gains are assigned random phases \rev{(fixed during $L$ symbols)} and the amplitudes are calculated as \cite[Eq. (21)--(23)]{ellingson2019path} \begin{align} \vert g_{\mathrm{b}}\vert &= \frac{\lambda\sqrt{E_{\mathrm{s}}}}{4\pi\Vert\bm{p}_{\mathrm{b}}-\bm{p}\Vert}\\ \vert g_{\mathrm{r}}\vert &= \frac{\lambda^2 \cos^q(\alpha_{\mathrm{\theta}})\cos^q(\alpha_{\mathrm{\phi}})\sqrt{E_{\mathrm{s}}}}{16\pi\Vert\bm{p}_{\mathrm{b}}-\bm{p}_{\mathrm{r}}\Vert\Vert\bm{p}_{\mathrm{r}}-\bm{p}\Vert} \end{align} with $q=0.285$ \rev{(see \cite{ellingson2019path})}, where $E_{\mathrm{s}}$ indicates the pilot energy, and $\alpha_{\mathrm{\phi}}$ and $\alpha_{\mathrm{\theta}}$ are defined below \eqref{eq:condSpWB}. Before presenting the results, in Section\,\ref{sec:fimAnalysis} we provide some preliminary information about the calculation of the \ac{crb} based on \ac{fim} analysis, which will be used as a benchmark in this section. Then, we study the spatial-WB effects in Section\,\ref{sec:Results_wbEffects} for different RIS sizes and signal bandwidths. Next, in Section\,\ref{sec:Results_mobilityEffects} the mobility effects are considered, and the influence of the uncertainty radius as well as of scatterers is shown in Section\,\ref{sec:radius}. \subsection{FIM analysis}\label{sec:fimAnalysis} \ac{fim} analysis can be used to develop theoretical lower bounds on the estimation error of any unbiased estimator. We do so by calculating the \ac{fim} first for the channel parameters and then for the positional parameters. We define the set of channel parameters as \begin{align} \bm{\zeta}_{\mathrm{ch}} = [& \tau_{\mathrm{b}} , \tau_{\mathrm{r}}, [\bm{\phi}]_{\mathrm{az}}, [\bm{\phi}]_{\mathrm{el}}, v_{\mathrm{b}}, v_{\mathrm{r}}, \Re(g_{\mathrm{b}}), \Im(g_{\mathrm{b}}), \Re(g_{\mathrm{r}}), \Im(g_{\mathrm{r}}) ]^\top. \end{align} The \ac{fim} can be calculated as follows \cite{kayEstimation} \begin{align} \fisherInfMark_{\chParMark} = \frac{2}{\sigma^2}\sum\limits_{t=0}^{L-1}\sum\limits_{n=0}^{N-1}\operatorname{Re}\left\{\frac{\partial [\bm{M}]_{n,t}}{\partial \parameterMark_{\chParMark}} \left(\frac{\partial [\bm{M}]_{n,t}}{\partial \parameterMark_{\chParMark}}\right)^{\textrm{H}}\right\},\label{eq:fimch} \end{align} where $\bm{M}$ is the noiseless part of the received signal. \rev{In this paper, we use the dynamic spatial-wideband model in \eqref{eq:Yb}--\eqref{eq:Yr} to compute \eqref{eq:fimch} unless stated otherwise.} Next, we calculate the \ac{fim} for the positional parameters, that is \begin{align} \bm{\zeta}_{\mathrm{po}} = [\bm{p}^\top, \Delta_t, v_{\mathrm{b}}, v_{\mathrm{r}}, \Re(g_{\mathrm{b}}), \Im(g_{\mathrm{b}}), \Re(g_{\mathrm{r}}), \Im(g_{\mathrm{r}})]^\top. 
\end{align} We do so by calculating $\fisherInfMark_{\positionOrientationMark} = \matInd{J}^\top \fisherInfMark_{\chParMark} \matInd{J}$, where the Jacobian matrix $\matInd{J}\in\mathbb{R}^{10\times 10}$ is defined as \begin{align} \matInd{J}_{\ell,s} = \frac{\partial [\parameterMark_{\chParMark}]_\ell}{\partial [\parameterMark_{\positionOrientationMark}]_s}.\label{eq:jacobElements} \end{align} Given $\fisherInfMark_{\chParMark}$, the estimation error of the $m$th channel parameter is lower bounded as \begin{align} \mathrm{E}(\vert [\bm{\zeta}_{\mathrm{ch}}]_m-\widehat{[\bm{\zeta}_{\mathrm{ch}}]}_m\vert^2)\geq [\fisherInfMark_{\chParMark}^{-1}]_{m,m}, \end{align} where $\widehat{[\bm{\zeta}_{\mathrm{ch}}]}_m$ indicates the estimate of the parameter $[\bm{\zeta}_{\mathrm{ch}}]_m$. Similarly, the estimation error of the positional parameters can be bounded using $\fisherInfMark_{\positionOrientationMark}$. Furthermore, we use the \ac{peb} as a lower bound on the position estimation error, that is \begin{align} \sqrt{\big[\mathrm{E}(\Vert \bm{p}-\hat{\bm{p}}\Vert^2)\big]}\geq \sqrt{\mathrm{trace}([\fisherInfMark_{\positionOrientationMark}^{-1}]_{1:3,1:3})}. \end{align} The derivatives required for calculating $\fisherInfMark_{\chParMark}$ and $\matInd{J}$ can be calculated based on the relations described in Section\,\ref{sec:systemModChannelModel}. \rev{We note that for the directional codebook, the prior information of the UE position affects the FIM only through the beamforming, and we do not take into account the effects of the fusion of the estimated position and the prior information. Since the proposed estimator also does not perform information fusion, the presented PEB correctly lower-bounds the position error of our estimator.} \begin{figure} \centering \includegraphics[width =5cm]{Fig3.pdf} \caption{\rev{Placement of the RIS, BS, and UE in the 3D space.}} \label{fig:struct} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \input{Fig4a.tex} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \input{Fig4b.tex} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \input{Fig4c.tex} \end{subfigure} % \caption{Estimation error and the CRB bounds for \ac{ue} position along the path $[-r/\sqrt{2},r/\sqrt{2},-10]$, where $r$ varies from $2$m to $100$m, considering NB and WB models, and directional and random \ac{ris} phase profiles. Results are presented for different combinations of the number of \ac{ris} elements ($M$) and subcarriers ($N$): a) $M=64^2$, $N=3000$, b) $M=32^2$, $N=3000$, c) $M=64^2$, $N=1500$. } \label{fig:MN} \end{figure*} \subsection{Wideband effects}\label{sec:Results_wbEffects} In this section, we study the accuracy of our estimator in the presence of spatial-WB effects using numerical results. To do so, we calculate the PEB and evaluate the UE position estimation error considering the random and directional RIS profiles described in Section\,\ref{sec:beamforming}. We place the \ac{ue} at $[-r/\sqrt{2} , r/\sqrt{2}, -10]$ for $r\in[2 , 100]$ (in meters). \rev{ Figure\,\ref{fig:struct} shows the placement of the BS, RIS, and UE in the 3D space.} Furthermore, in Fig.\,\ref{fig:MN} we consider the data transmission through the spatial-WB channel in \eqref{eq:Yb}--\eqref{eq:Yr} and also the spatial-NB one in \eqref{eq:Ybn}--\eqref{eq:Yrn}. For each point, we average the results over 20 sets of RIS phase profiles, for each of which we consider 20 noise realizations. 
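Before turning to the individual figures, we note that the PEB benchmark shown in all of the following plots is obtained from the channel-parameter FIM via the transformation described in Section\,\ref{sec:fimAnalysis}; the minimal sketch below (our own illustration, with placeholder inputs, since the actual FIM entries follow from \eqref{eq:fimch} and the Jacobian from \eqref{eq:jacobElements}) summarizes this computation:
\begin{verbatim}
import numpy as np

def peb_from_fim(I_ch, J):
    """PEB from the channel-parameter FIM and the Jacobian."""
    I_po = J.T @ I_ch @ J                     # FIM of positional parameters
    crb = np.linalg.inv(I_po)                 # CRB matrix
    return np.sqrt(np.trace(crb[:3, :3]))     # bound on the RMS position error

# Toy usage with placeholder (well-conditioned) inputs only:
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
print(peb_from_fim(A @ A.T + 10 * np.eye(10), np.eye(10)))
\end{verbatim}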
Fig.\,\ref{fig:MN}\,(a) presents the results for \rev{$M = 64^2$} and $N = 3000$. It can be seen that with the NB channel, the estimator attains the PEB at every point. With the WB channel, the estimator has a noticeably larger error compared to the PEB for low values of $r$. The reason is that for low values of $r$ the angle $\alpha$ in \eqref{eq:condSpWB} becomes large and the assumption \eqref{eq:condSpWB} \rev{does} not hold. Therefore, the mismatch between the WB and the NB channels becomes considerable, and the accuracy of the estimator (which is designed based on the NB channel) deteriorates. Furthermore, one can observe that the PEBs for the WB and NB channel models are almost equal, which shows that the performance degradation can be compensated for by adopting a better (and more complex) estimator. \rev{Future research can aim to prove mathematically (via FIM analysis) that the changes in the PEB due to user mobility and spatial WB effects are indeed negligible.} \begin{figure} \centering \input{Fig5} \caption{\rev{Position error for the UE position $[-5/\sqrt{2}, -5/\sqrt{2}, -10]$ for directional and random RIS phase profiles vs the received SNR (of the direct path). }} \label{fig:snr} \end{figure} \begin{figure}[t] \centering \input{Fig6.tex} \caption{Estimation error and CRB bounds for the \ac{ue} position at $[-5/\sqrt{2}, 5/\sqrt{2}, -10]$ as a function of signal bandwidth ($B$). Directional and random phase profiles were considered. } \label{fig:freqs} \end{figure} In Fig.\,\ref{fig:MN}\,(a), it can be seen that \rev{the estimation error due to spatial-WB effects} is more pronounced for directional beamforming than for the random one, mainly because the higher SNR in the directional case makes the distortion caused by the spatial-WB effects more visible. Fig.\,\ref{fig:MN}\,(b) and Fig.\,\ref{fig:MN}\,(c) present the same results for a system with half of the RIS size and half of the bandwidth of the system considered in Fig.\,\ref{fig:MN}\,(a). Apart from a natural degradation in localization accuracy, it can be seen that the WB effects diminish. This can be justified by \eqref{eq:condSpWB}. Also, we note that for large values of $r$ and random beamforming the estimation error in Fig.\,\ref{fig:MN}\,(b) cannot follow the PEB due to the low SNR. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \input{Fig7a} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \input{Fig7b} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \input{Fig7c} \end{subfigure} % \caption{ CDF of estimation error and CRB bounds for $100$ realizations of directional and random \ac{ris} phase profiles for a) UE position, b) UE clock bias, and c) UE velocity $\bm{v}$. The \ac{ue} has the position $[-10, 10, -10]$ and velocity $[-v, v, 0]$, where $v\in\{0,30\}$m/s. } \label{fig:mobilityCDFs} \end{figure*} \rev{In Fig.\,\ref{fig:snr}, we show the position error for the UE location at $[-5/\sqrt{2}, -5/\sqrt{2}, -10]$ for a large range of the received SNR of the direct path. It can be seen that for SNRs lower than $0$\,dB, the estimator fails to estimate the UE location for random RIS profiles. Also, it can be seen that at high SNRs the estimation error saturates for both directional and random phase profiles due to the spatial WB effects. 
Based on our simulation results (not included in this paper), similar behavior can also be observed for the estimation error of the AOD.} To study the WB effects more closely, in Fig.\,\ref{fig:freqs} we present the PEB and the estimation errors at $r=5$ for a large range of the signal bandwidth ($B=N\Delta_f$). As can be seen, the PEBs decrease with $B$, which shows that a better localization performance can be attained with higher bandwidths. However, our estimator, which is designed based on the NB model, does not show such behavior. Specifically, with directional beamforming, beyond $B = 140$ MHz the distortion caused by the WB effects leads to a higher positioning error. \subsection{Mobility effects}\label{sec:Results_mobilityEffects} Fig.\,\ref{fig:mobilityCDFs} presents the \acp{cdf} of the estimation error and the \ac{crb} for $100$ different realizations of random and directional RIS phase profiles. For each RIS phase profile, we generated $1000$ noise realizations to accurately calculate the estimation error. We consider two \ac{ue} velocities: one where the \ac{ue} is static and one where the \ac{ue} velocity vector is set to \rev{$\bm{v}=[-30, 30, 0]\, \mathrm{m/s}$}. We consider the estimation of the UE position in Fig.\,\ref{fig:mobilityCDFs}\,(a), UE clock bias in Fig.\,\ref{fig:mobilityCDFs}\,(b), and the UE velocity vector $\bm{v}$ in Fig.\,\ref{fig:mobilityCDFs}\,(c). The velocity vector is estimated based on the radial velocities and the relations \eqref{eq:vub} and \eqref{eq:vur}, and by assuming that the estimator has prior knowledge that $[\bm{v}]_3=0$. It can be seen from Fig.\,\ref{fig:mobilityCDFs} that in addition to the UE position, the \ac{ue} velocity vector and also the UE clock bias can be estimated. Therefore, the \ac{ue} can be synchronized to the BS. There is a small reduction in the accuracy of velocity estimation for the high-mobility user compared to the static one. This is due to the error in the position estimation, which propagates into the estimation of $\bm{v}$ from the estimated radial velocities, $\hat{v}_{\mathrm{b}}$ and $\hat{v}_{\mathrm{r}}$. Apart from this, it is apparent that the \ac{ue} velocity does not affect the estimation accuracy, both in terms of the analytical bounds and the actual estimation error. This can be explained based on the fact that the \ac{ue} radial velocities can be estimated \rev{with an accuracy of up to $0.1\,\mathrm{m/s}$} and then their effects can be removed from the received signal. \begin{figure}[t] \centering \input{Fig8} \caption{CDF of estimation error (dashed lines) and CRB bounds (solid lines) for $100$ realizations of directional \ac{ris} phase profiles constructed based on different uncertainty radii ($\sigma$). The estimation error for $\sigma=1$m in the presence of $10$ scatterers is also shown (dotted line). The \ac{ue} has the position $[-10, 10, -10]$. } \label{fig:sigma} \end{figure} \subsection{Uncertainty radius and scatterers}\label{sec:radius} Fig.\,\ref{fig:sigma} shows the \ac{cdf} of the position error for $100$ realizations of the directional RIS beams for different values of $\sigma$. It can be seen that the optimal performance among the considered values of $\sigma$ is obtained with $\sigma=0.5$\,m. For very small values of $\sigma$ (like $\sigma = 0.1$\,m), all the transmitted beams become almost identical, and therefore accurate \ac{aod} estimation cannot be performed due to the lack of beam diversity. 
Furthermore, with larger values of $\sigma$ there is a probability that none of the transmitted beams hits the \ac{ue}, which results in a low SNR and a high estimation error. This is the reason why the \ac{cdf} of the estimation error saturates around $0.95$ for $\sigma=3$m. Furthermore, we examine the performance of our estimator in the presence of $10$ scatterers, whose radar cross sections are equal to $0.1$ $\text{m}^2$ and which are distributed randomly on a disc in the $z=-11$ plane, centered at $[0,0,-11]$ with a radius of $5$ meters. The channel gains for the scatterers are calculated based on the radar range equation (see, e.g., \cite[Eq.\,(23)]{ellingson2019path}). It can be seen that although the presence of the scatterers can degrade the localization accuracy, it is still possible to perform cm-level positioning. \section{Conclusion}\label{sec:conclusion} We analyzed the influence of UE mobility and spatial-WB effects on the localization accuracy of an RIS-enabled SISO system by deriving the \ac{crb} and also devising an estimator. Based on our numerical results, it was shown that the UE mobility does not have any notable effects on the estimation accuracy of the \ac{ue} state. This was shown in Fig.\,\ref{fig:mobilityCDFs}, where both the bounds and the estimation errors are virtually equal for a static \ac{ue} and a \ac{ue} with a very high speed. Our proposed estimator dealt with the \ac{ue} mobility by successively estimating the radial velocities and compensating for their effects. Our results suggest that the studies that assume static users can be potentially extended to account for the \ac{ue} mobility without any significant performance degradation. With regard to the spatial-WB effects, it was shown that these effects do not change the analytical bounds and therefore the performance of the optimal estimator. However, for a low-complexity estimator that ignores the spatial-WB effects (such as the one presented in this work), they can degrade the localization accuracy, especially for large signal bandwidths and RIS sizes. Specifically, it was shown in Fig.\,\ref{fig:freqs} that for some typical system values, increasing the number of subcarriers can decrease the estimation accuracy, indicating the existence of an optimal signal bandwidth. This result shows the importance of devising a low-complexity estimator that can cope with the spatial-WB effects in dynamic systems, which is an interesting topic for future research. \begin{appendices} \section{Spatial-wideband model under UE mobility} \label{app:Specially_wideband_Ch_Model} In this appendix, we derive the received signal coming through the reflected path ($\bm{Y}_{\mathrm{r}}$) by taking into account spatial-WB effects \rev{\cite{wang2018spatial,dovelosintelligent,face_squint,chen2021beam}} and UE mobility \rev{\cite{Matthiesen_continuous,basar2019reconfigurable,huang_icc21,sun_wcl_doppler}}. The received signal from the direct path ($\bm{Y}_{\mathrm{b}}$) in \eqref{eq:Yb} can be derived in the same fashion. We use the same notation as in Section~\ref{sec:systemModChannelModel}. 
\rev{In addition, for the derivations, we adopt an approach similar to those in \cite{face_squint,bjornson2021reconfigurable,wang2018spatial}, where we compute the total path delay between the BS and the UE, including the BS-to-RIS delay, the (adjustable) delay at the RIS and the RIS-to-UE delay, along with the Doppler effects due to UE mobility.} \subsection{Transmit signal model} The transmitted OFDM baseband signal can be expressed as \begin{align}\label{eq_ofdm_baseband_all} s(t) = \sum_{\ell=0}^{L-1} s_{\ell}(t), \end{align} where \begin{equation}\label{eq_ofdm_baseband} s_{\ell}(t) = \frac{1}{\sqrt{N}} \sum_{n = 0}^{N-1} x_{n,\ell} \, e^{\jmath 2 \pi n \Delta_f t} \rect{\frac{t - \ell T_{\rm{sym}} }{ T_{\rm{sym}} }} \end{equation} denotes the OFDM signal for the $\thn{\ell}$ symbol, $ x_{n,\ell} $ is the complex pilot symbol on the $\thn{n}$ subcarrier for the $\thn{\ell}$ symbol, and $\rect{t}$ is a rectangular function that takes the value $1$ for $t \in [0, 1 )$ and $0$ otherwise. Then, the upconverted transmit signal can be written as \begin{equation}\label{eq_passband_st} s_{\mathrm{u}} (t) = \Re \left\{ s(t) e^{j 2 \pi f_c t } \right\}. \end{equation} \subsection{Receive signal model} Based on the transmit signal model in \eqref{eq_passband_st}, the passband received signal at the UE due to the reflected path through the RIS is given by \rev{\cite{face_squint}} \begin{align}\label{eq_rec_passband} y_{\mathrm{ur}} (t) = \Re \left\{ \widetilde{g}_{\mathrm{r}} \sum_{m=0}^{M-1} s(t- [ \bm{\tau} (t)]_m) e^{\jmath 2 \pi f_c (t-[ \bm{\tau} (t)]_m) } \right\}, \end{align} where $\widetilde{g}_{\mathrm{r}}$ is the complex path gain and the vector $ \bm{\tau} (t)\in \mathbb{R}^{M}$ contains the delays between the BS and the UE through the different elements of the RIS. It can be computed as \begin{align}\label{eq_taumt} \bm{\tau} (t) = \bm{\tau}_{\rm{br}} + \bm{\tau}_{\rm{r}} (t) + \bm{\tau}_{\rm{ru}} (t) + \Delta_t , \end{align} where the vector $ \bm{\tau}_{\rm{br}} $ contains the delays between the BS and the elements of the RIS \begin{align} [ \bm{\tau}_{\rm{br}} ]_m = \frac{ \norm{ \bm{p}_{\mathrm{b}} - \bm{p}_{\mathrm{r},m} } }{c} \end{align} with $ \bm{p}_{\mathrm{r},m} $ denoting the location of the $\thn{m}$ RIS element, $[ \bm{\tau}_{\rm{r}} (t)]_m$ denotes the delay incurred by the $\thn{m}$ element of the RIS at time $t$ \cite{bjornson2021reconfigurable}, $[ \bm{\tau}_{\rm{ru}} (t)]_m = [ \bm{\tau}_{\rm{ru}} ]_m - \nu_{\mathrm{r}} t$ represents the time-varying delay \rev{\cite{basar2019reconfigurable}} from the $\thn{m}$ element to the UE with $\nu_{\mathrm{r}} = v_{\mathrm{r}}/c$ and $v_{\mathrm{r}}$ denoting the radial velocity along the RIS-UE direction in \eqref{eq:vur} and \begin{align} [ \bm{\tau}_{\rm{ru}} ]_m = \frac{ \norm{ \bm{p}_{\mathrm{r},m} - \bm{p} } }{c} \end{align} is the initial delay (at $t=0$). The complex baseband received signal after downconversion of \eqref{eq_rec_passband} can be written as \rev{\cite{face_squint}} \begin{align}\label{eq_rec_baseband} y_{\mathrm{r}}(t) = \widetilde{g}_{\mathrm{r}} \sum_{m=0}^{M-1} s(t-[ \bm{\tau} (t)]_m) e^{-\jmath 2 \pi f_c [ \bm{\tau} (t)]_m }. 
\end{align} Plugging \eqref{eq_ofdm_baseband_all} and \eqref{eq_ofdm_baseband} into \eqref{eq_rec_baseband}, we have \begin{align}\label{eq_rec_baseband2} y_{\mathrm{r}}(t) &= \widetilde{g}_{\mathrm{r}} \sum_{m=0}^{M-1} \sum_{\ell=0}^{L-1} \frac{1}{\sqrt{N}} \sum_{n = 0}^{N-1} x_{n,\ell} \, e^{\jmath 2 \pi n \Delta_f (t - [ \bm{\tau} (t)]_m )} \\ \nonumber &~~~~~~ \times e^{-\jmath 2 \pi f_c [ \bm{\tau} (t)]_m } \rect{\frac{t - [ \bm{\tau} (t)]_m - \ell T_{\rm{sym}} }{ T_{\rm{sym}} }}. \end{align} For the $\thn{\ell}$ symbol, we sample $y_{\mathrm{r}}(t)$ in \eqref{eq_rec_baseband2} at $t = \ell T_{\rm{sym}} + T_{\rm{cp}} + \tau_{\rm{min}} + k T_{\mathrm{o}} /N$ for $k = 0, \ldots, N-1$ (i.e., we remove the CP and sample the interval corresponding to the elementary OFDM signal), where \begin{align} \tau_{\rm{min}} = \min_{m} [\bm{\tau}(0)]_m \end{align} is the arrival time of the reflected path with respect to the receiver's clock (which can be detected\footnote{Since the variation of the delays $ \bm{\tau} (t)$ across the RIS elements could be much smaller than the delay resolution, the UE can possibly identify a single correlation peak contributed by all the RIS elements, in which case $ \tau_{\rm{min}} $ is set as the location of that peak.}, e.g., via downlink synchronization signals \cite{TS_38211}). Substituting \eqref{eq_taumt} into \eqref{eq_rec_baseband2}, the discrete-time signal for the \rev{$\thn{k}$ sample of the} $\thn{\ell}$ symbol at the receiver becomes \begin{align} [\widetilde{\bm{Y}}_{\mathrm{r}}]_{k,\ell} &= \widetilde{g}_{\mathrm{r}} \sum_{m=0}^{M-1} \frac{1}{\sqrt{N}} \sum_{n = 0}^{N-1} \Big[ x_{n,\ell} \, e^{\jmath 2 \pi n \Delta_f (\ell T_{\rm{sym}} + T_{\rm{cp}} + \tau_{\rm{min}} + k T_{\mathrm{o}} /N )} \nonumber\\ &~~~~ \times e^{-\jmath 2 \pi n \Delta_f ( [\bm{\tau}_{\rm{br}}]_m + [\bm{\tau}_{\rm{r},\ell}]_m + [\bm{\tau}_{\rm{ru}}]_m + \Delta_t )} \nonumber \\ \label{eq_ylk_long} &~~~~ \times e^{\jmath 2 \pi n \Delta_f \nu_{\mathrm{r}} (\ell T_{\rm{sym}} + T_{\rm{cp}} + \tau_{\rm{min}} + k T_{\mathrm{o}} /N)} \\ \nonumber &~~~~ \times e^{-\jmath 2 \pi f_c ( [\bm{\tau}_{\rm{br}}]_m + [\bm{\tau}_{\rm{r},\ell}]_m + [\bm{\tau}_{\rm{ru}}]_m + \Delta_t ) } \\ \nonumber &~~~~ \times e^{\jmath 2 \pi f_c \nu_{\mathrm{r}} (\ell T_{\rm{sym}} + T_{\rm{cp}} + \tau_{\rm{min}} + k T_{\mathrm{o}} /N) } \Big] \end{align} under the assumption that $[\bm{\tau}(0)]_m - \tau_{\rm{min}} \leq T_{\rm{cp}} $, which holds in practice since the UE is in the far-field of RIS and the RIS delays $ \bm{\tau}_{\rm{r}} $ are very small compared to propagation delays $ [\bm{\tau}_{\rm{br}}]_m $ and $[\bm{\tau}_{\rm{ru}}]_m$ \cite{bjornson2021reconfigurable}. In \eqref{eq_ylk_long}, it is assumed that the RIS profile can change across OFDM symbols and $ [\bm{\tau}_{\rm{r},\ell}]_m = \left[ \bm{\tau}_{\rm{r}} (\ell T_{\rm{sym}} + T_{\rm{cp}} ) \right]_m$ represents the delay of the $\thn{m}$ element corresponding to the RIS configuration applied for the $\thn{\ell}$ symbol. Since the receiver's clock reference can be set to an arbitrary known epoch, we can set $ \tau_{\rm{min}} = 0$. 
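As a side remark (our own illustrative check, not needed for the derivation), for a fixed RIS element the chosen sampling instants turn the sum over subcarriers into an (unnormalized) inverse DFT of phase-rotated pilot symbols, which is why the DFT matrix $\bm{F}$ appears in the expressions below. A minimal numerical sketch with assumed toy parameters:
\begin{verbatim}
import numpy as np

N, Delta_f, T_cp = 64, 120e3, 0.58e-6        # toy subcarrier count
T_o = 1.0 / Delta_f
T_sym = T_o + T_cp
ell = 3                                      # arbitrary OFDM symbol index
rng = np.random.default_rng(1)
x = np.exp(2j * np.pi * rng.random(N))       # unit-modulus pilots x_{n,ell}

def s_ell(t):                                # direct evaluation of the
    n = np.arange(N)                         # baseband OFDM symbol
    return (x * np.exp(2j * np.pi * n * Delta_f * t)).sum() / np.sqrt(N)

k = np.arange(N)                             # CP removed, elementary part
direct = np.array([s_ell(ell * T_sym + T_cp + ki * T_o / N) for ki in k])

rot = np.exp(2j * np.pi * np.arange(N) * Delta_f * (ell * T_sym + T_cp))
via_idft = np.sqrt(N) * np.fft.ifft(x * rot) # same samples via an IDFT

print(np.max(np.abs(direct - via_idft)))     # agrees up to round-off
\end{verbatim}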
The received signal in \eqref{eq_ylk_long} can be written as \begin{align} [\widetilde{\bm{Y}}_{\mathrm{r}}]_{k,\ell} &= \frac{\widetilde{g}_{\mathrm{r}} }{\sqrt{N}} e^{\jmath 2\pi f_c \nu_{\mathrm{r}}( T_{\rm{cp}} + k T_{\mathrm{o}} /N)} \sum_{n = 0}^{N-1} \widetilde{x}_{n,\ell} e^{\jmath 2 \pi n k /N } \nonumber\\ &~~~~ \times e^{\jmath 2 \pi ( f_c +n \Delta_f) \nu_{\mathrm{r}}\ell T_{\rm{sym}} } e^{\jmath 2\pi n \Delta_f \nu_{\mathrm{r}} ( T_{\rm{cp}} + k T_{\mathrm{o}} /N)}\label{eq_ylk_long2}\\ &~~~~ \times \sum_{m=0}^{M-1}e^{-\jmath 2 \pi ( f_c + n \Delta_f) ( [\bm{\tau}_{\rm{br}}]_m + [\bm{\tau}_{\rm{r},\ell}]_m + [\bm{\tau}_{\rm{ru}}]_m + \Delta_t )},\nonumber \end{align} where \begin{align}\label{eq_xnl_tilde} \widetilde{x}_{n,\ell} = x_{n,\ell} e^{\jmath 2\pi n \Delta_f (\ell T_{\rm{sym}} + T_{\rm{cp}} )}. \end{align} We define the phase shift induced by the delay $ [\bm{\tau}_{\rm{r},\ell}]_m $ at the center frequency as $\psi_{\ell,m} = 2\pi f_c [\bm{\tau}_{\rm{r},\ell}]_m $, which we assume to be less than $2\pi$ (note that the choice of the RIS configuration $ [\bm{\tau}_{\rm{r},\ell}]_m $ is under the designer's control \rev{\cite{bjornson2021reconfigurable}} and $ [\bm{\tau}_{\rm{r},\ell}]_m \in [0, 1/ f_c )$ will cover all possible phase shifts). To \rev{make} \eqref{eq_ylk_long2} \rev{more compact}, \rev{we will now rely on the following approximations/simplifications:} \begin{enumerate} \item \rev{\textit{frequency-narrowband approximation:}} \begin{align}\label{eq:phaseApprox} \frac{ f_c +n\Delta_f}{ f_c } \psi_{\ell,m} \approx \psi_{\ell,m}, \end{align} which holds as long as $B/ f_c \ll 1$ \rev{(which is satisfied in our simulations according to Table~\ref{table:par} with $B = 360 \, \rm{MHz}$ and $ f_c = 30 \, \rm{GHz}$)}. \item \textit{far-field approximation\rev{\footnote{\rev{The far-field approximation in \eqref{eq:ff_taubr} (and, similarly, the one in \eqref{eq:ff_tauru}) can be readily derived by observing that, in the far-field regime, the difference between the BS-to-RIS center distance and the BS-to-$m$-th RIS element distance can be written as a function of $ \bm{\theta} $, the AoA from the BS to RIS, and $[\bm{Q}]_{:,m}$, the position of the $m$-th RIS element relative to the RIS center.}}}:} \begin{align} 2 \pi ( f_c +n \Delta_f) (\bm{\tau}_{\mathrm{br}}-\tau_{\mathrm{br}})& \approx - \bm{k}^\top(\bm{\theta})\bm{Q} \label{eq:ff_taubr}\\ 2 \pi ( f_c +n \Delta_f) (\bm{\tau}_{\mathrm{ru}}-\tau_{\mathrm{ru}})& \approx - \bm{k}^\top(\bm{\phi})\bm{Q}, \label{eq:ff_tauru} \end{align} where $\tau_{\mathrm{br}} = \Vert\bm{p}_{\mathrm{b}}-\bm{p}_{\mathrm{r}}\Vert/c$, $\tau_{\mathrm{ru}} = \Vert\bm{p}_{\mathrm{r}}-\bm{p}\Vert/c$, and $\bm{k}$ and $\bm{Q}$ are defined in \eqref{eq:WaveNumVect} and \eqref{eq:Q}, respectively. \item \rev{\textit{negligible phase term under practical velocity values\footnote{The phase of the \ac{lhs} of \eqref{eq:vApprox} can be upper bounded with $2\pi B T_{\rm{sym}} v_{\mathrm{r}}/c $, which for the values in Table~\ref{table:par} and $v_{\mathrm{r}}=30\,\mathrm{m}/\mathrm{s}$ is about $2\cdot 10^{-3}$.}:}} \begin{align}\label{eq:vApprox} e^{\jmath 2\pi n \Delta_f \nu_{\mathrm{r}} ( T_{\rm{cp}} + k T_{\mathrm{o}} /N)}\approx 1. \end{align} \end{enumerate} \rev{In addition}, we define $[\bm{\gamma}_{\ell}]_m = e^{\rev{-}\jmath \psi_{\ell,m}}$ to indicate the RIS phase profile, and a constant phase reference $\psi_{r} \rev{\triangleq} 2\pi f_{\mathrm{c}} \rev{\tau_{\mathrm{r}}}$. 
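Before applying these approximations, a quick numerical check (our own arithmetic, with $v_{\mathrm{r}}=30\,$m/s as in the mobility experiments) confirms that the neglected phase terms are small for the parameters of Table~\ref{table:par}:
\begin{verbatim}
import numpy as np

f_c, B = 30e9, 360e6            # carrier frequency and total bandwidth
T_sym = 8.33e-6 + 0.58e-6       # OFDM symbol plus CP duration
v_r, c = 30.0, 3e8

# 1) frequency-narrowband: since psi_{l,m} < 2*pi and n*Delta_f <= B,
#    the neglected phase is below 2*pi*B/f_c
err_freq_nb = 2 * np.pi * B / f_c
# 3) Doppler-induced phase within one symbol, cf. the footnote
err_doppler = 2 * np.pi * B * T_sym * v_r / c
print(err_freq_nb, err_doppler)  # ~7.5e-2 rad and ~2e-3 rad
\end{verbatim}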
Using \rev{\eqref{eq:phaseApprox}--\eqref{eq:ff_tauru}}, the last summation in \eqref{eq_ylk_long2} can be written as \begin{align} \label{eq_approx_start} &\sum_{m=0}^{M-1} e^{-\jmath 2 \pi ( f_c + n \Delta_f) ( [\bm{\tau}_{\rm{br}}]_m + [\bm{\tau}_{\rm{r},\ell}]_m + [\bm{\tau}_{\rm{ru}}]_m + \Delta_t )} \\ \nonumber &= \rev{\sum_{m=0}^{M-1} e^{-\jmath 2 \pi ( f_c + n \Delta_f) ( [\bm{\tau}_{\rm{br}}]_m - \tau_{\mathrm{br}})} e^{-\jmath 2 \pi ( f_c + n \Delta_f) ([\bm{\tau}_{\rm{ru}}]_m - \tau_{\mathrm{ru}})}} \\ \nonumber &~~\rev{\times e^{-\jmath 2 \pi ( f_c + n \Delta_f) (\tau_{\mathrm{br}} + \tau_{\mathrm{ru}} + \Delta_t ) } e^{-\jmath 2 \pi ( f_c + n \Delta_f) [\bm{\tau}_{\rm{r},\ell}]_m }} \\ \nonumber & \approx \rev{e^{-\jmath 2\pi n\Delta_f \tau_{\mathrm{r}}} e^{-j \psi_{r}} \sum_{m=0}^{M-1} e^{j\bm{k}(\bm{\theta})^\top [\bm{Q}]_{:,m}} e^{j\bm{k}(\bm{\phi})^\top [\bm{Q}]_{:,m}} [\bm{\gamma}_{\ell}]_m} \\ \nonumber &= \rev{e^{-\jmath 2\pi n\Delta_f \tau_{\mathrm{r}}} e^{-j \psi_{r}} \sum_{m=0}^{M-1} [\bm{a}(\bm{\theta})]_{m} [\bm{\gamma}_{\ell}]_m [\bm{a}(\bm{\phi})]_{m} } \\ \label{eq_approx_main} &= \rev{e^{-\jmath 2\pi n\Delta_f \tau_{\mathrm{r}}} e^{-j \psi_{r}} [\bm{A}(\bm{\phi})]_{n,\ell}} ~, \end{align} where the matrix $\bm{A}(\bm{\phi})$ is defined in \eqref{eq:aphi} and $\tau_{\mathrm{r}}$ in \eqref{eq:taur}. By substituting \eqref{eq_approx_main} and \eqref{eq:vApprox} into \eqref{eq_ylk_long2}, we obtain \begin{align} [\widetilde{\bm{Y}}_{\mathrm{r}}]_{k,\ell} &= \frac{g_{\mathrm{r}}}{\sqrt{N}} e^{\jmath 2\pi v_{\mathrm{r}} k T_{\mathrm{o}} /(\lambda N)} \sum_{n = 0}^{N-1} \widetilde{x}_{n,\ell} e^{\jmath 2 \pi n k /N } \nonumber\\ &~~~~ \times e^{\jmath 2 \pi v_{\mathrm{r}} \ell T_{\rm{sym}} /\lambda_n }\label{eq_ylk_long5} e^{-\jmath 2\pi n\Delta_f \tau_{\mathrm{r}}} [\bm{A}(\bm{\phi})]_{n,\ell}. \end{align} Here, we used $( f_c +n \Delta_f)\nu_{\mathrm{r}} = v_{\mathrm{r}}( f_c +n \Delta_f)/c =v_{\mathrm{r}}/\lambda_n$, where $\lambda_n$ is defined in \eqref{eq:lambda_n}. Also, we have \begin{align} g_{\mathrm{r}} = \widetilde{g}_{\mathrm{r}} e^{\jmath 2\pi f_c \nu_{\mathrm{r}} T_{\rm{cp}} }e^{\rev{-}\jmath \psi_{\mathrm{r}}}. \end{align} Assuming $ \widetilde{x}_{n,\ell} = 1$ for all\footnote{According to \eqref{eq_xnl_tilde}, the pilot symbols $ x_{n,\ell} $ can be chosen such that $ \widetilde{x}_{n,\ell} = 1$ for the sake of simplicity of analysis. The signal model can be straightforwardly extended to the case of arbitrary pilot symbols. In addition, the effects of transmit power can be modeled by adjusting the noise variance in \eqref{eq:channelModel:WB}.} $n$ and $\ell$, the summation in \eqref{eq_ylk_long5} can be written via the DFT matrix $\bm{F}$ in \eqref{eq:dft} as \begin{align} \widetilde{\bm{Y}}_{\mathrm{r}} = g_{\mathrm{r}} \bm{E}(v_{\mathrm{r}}) \bm{F}^{\mathrm{H}}\left(\bm{D}(\tau_{\mathrm{r}})\odot \bm{A}(\bm{\phi})\odot \rev{ \ccbig_{\wideband} (v_{\mathrm{r}})} \right), \end{align} \rev{where $\bm{D}(\tau)$, $ \ccbig_{\wideband} (v)$ and $\bm{E}(v)$ are defined, respectively, in \eqref{eq:matrixD}, \eqref{eq:Cmatrix} and \eqref{eq:Ematrix}.} Finally, we define \begin{align} \bm{Y}_{\mathrm{r}} = \bm{F} \widetilde{\bm{Y}}_{\mathrm{r}} \end{align} to obtain \eqref{eq:Yr}. \section{\rev{Conditions of Validity for Spatial-Narrowband Approximation in \eqref{eq:Ybn}--\eqref{eq:Yrn}}}\label{app_nb_valid} \rev{In this part, we derive the conditions under which the spatial-narrowband approximation in \eqref{eq:Ybn}--\eqref{eq:Yrn} is valid. 
To this end, we explore when $ \ccbig_{\wideband} (v)$ and $ \aabig_{\wideband} (\bm{\phi})$ in the spatial-wideband model \eqref{eq:Yb}--\eqref{eq:Yr} can be approximated as $ \bm{C} (v)$ and $ \bm{A} (\bm{\phi})$ in the spatial-narrowband model \eqref{eq:Ybn}--\eqref{eq:Yrn}, respectively.} \rev{\subsection{Condition of Validity for Approximation of \eqref{eq:Cmatrix}}} \rev{For the transition from $[ \ccbig_{\wideband} (v)]_{n,\ell}$ in \eqref{eq:Cmatrix} to $[ \bm{C} (v)]_{n,\ell}$ in \eqref{eq:CNmatrix} to be valid, the following approximation must hold $\forall \ell, n$: \begin{align} e^{\jmath 2\pi \ell T_{\rm{sym}} v/\lambda_n} &\approx e^{\jmath 2\pi \ell T_{\rm{sym}} v/\lambda } ~, \end{align} which requires \begin{subequations} \begin{align} e^{\jmath 2\pi \ell T_{\rm{sym}} v ( f_c + n \Delta_f) / c} &\approx e^{\jmath 2\pi \ell T_{\rm{sym}} v f_c / c } \\ \label{eq_cmatrix_approx_c1} e^{\jmath 2\pi \ell T_{\rm{sym}} v n \Delta_f /c } &\approx 1 \\ \label{eq_cmatrix_approx_c2} L T_{\rm{sym}} B v & \ll c \\ \label{eq_approx_1_final} L N v & \ll c \\ \label{eq_approx_1_final2} L N \max\{v_{\mathrm{r}},v_{\mathrm{b}}\} &\ll c ~, \end{align} \end{subequations} where \eqref{eq_cmatrix_approx_c2} is obtained by plugging the worst-case conditions $\ell = L-1$ and $n = N-1$ (in terms of approximation quality) into \eqref{eq_cmatrix_approx_c1} and recalling that $B = N \Delta_f$, \eqref{eq_approx_1_final} results from $B T_{\rm{sym}} \approx B T = B / \Delta_f = N$ (assuming $ T_{\rm{cp}} / T$ is small), and \eqref{eq_approx_1_final2} follows by considering the maximum of direct and reflected path velocities.} \rev{\subsection{Condition of Validity for Approximation of \eqref{eq:aphi_wb}}} \rev{Similarly, for the transition from $[ \aabig_{\wideband} (\bm{\phi})]_{n,\ell}$ in \eqref{eq:aphi_wb} to $[ \bm{A} (\bm{\phi})]_{n,\ell}$ in \eqref{eq:aphi} to be valid, we need \begin{align} e^{\jmath\bm{k}_n(\bm{\psi})^\top [\bm{Q}]_{:,m}} \approx e^{ \jmath\bm{k}(\bm{\psi})^\top [\bm{Q}]_{:,m} } \end{align} for any $n$ and the angles $\bm{\psi} \in \{ \bm{\theta} , \bm{\phi} \}$, which represent, respectively, the AoA and AoD for the RIS in \eqref{eq:aphi_wb}. From \eqref{eq:WaveNumVect} and the definition of $\bm{q}_{r,s}$ in Sec.~\ref{sec:systemSetup}, this requires \begin{subequations} \begin{align} &e^{\jmath \max(M_1,M_2) d \sin(\alpha) 2 \pi / \lambda_n } \approx e^{\jmath \max(M_1,M_2) d \sin(\alpha) 2 \pi / \lambda } \\ &e^{\jmath \max(M_1,M_2) d \sin(\alpha) 2 \pi ( f_c + n \Delta_f) / c } \nonumber\\ &\qquad\qquad\qquad\qquad\approx e^{\jmath \max(M_1,M_2) d \sin(\alpha) 2 \pi f_c / c } \\ &e^{\jmath \max(M_1,M_2) d \sin(\alpha) 2 \pi n \Delta_f / c } \approx 1 \\ \label{eq_approx_2_final} &\max(M_1,M_2) d \sin(\alpha) B \ll c ~, \end{align} \end{subequations} where $\alpha$ denotes the angle between the RIS normal ($[0,1,0]^\top$) and the vector $\bm{k}(\bm{\psi})$\footnote{Note that $[\bm{Q}]_{:,m}$ is orthogonal to the RIS normal; therefore, only the component of $\bm{k}(\bm{\psi})$ that is orthogonal to the RIS normal contributes to the value of $\bm{k}(\bm{\psi})^\top [\bm{Q}]_{:,m}$. This component has the norm $\sin(\alpha)$. }, and \eqref{eq_approx_2_final} follows by considering the worst-case scenario (in terms of approximation quality) $n = N-1$.} \section{Choosing the candidate AoDs}\label{app:fft} In this section, we explain how we select the \acp{aod} $\bm{\phi}$. 
For the case with existing prior location information $\bm{\xi}$ (see Section~\ref{sec:dirCodebook}), we choose $N_{\mathrm{\phi}}$ points within the sphere centered at $\bm{\xi}$ with radius $\sigma$ (similarly to Section~\ref{sec:dirCodebook}). Then the set $\{\bm{\phi}_s\}_{s=0}^{N_{\mathrm{\phi}}-1}$ is calculated as the angles from the \ac{ris} towards these points. Furthermore, with the directional beams in \eqref{eq:GammaFunction}, the calculation of $\bm{z}_s$ in Line\,\ref{CoarseVA_Line4} of Algorithm\,\ref{alg:coarse_vA_phi} can be performed in closed form (the \ac{rhs} of Line\,\ref{CoarseVA_Line4} reduces to a geometric sum), which reduces the complexity of Algorithm\,\ref{alg:coarse_vA_phi}. In the absence of any prior information about the user, the values of $\bm{z}_s$ can be calculated offline since the beams can be set prior to the localization procedure. Furthermore, to reduce the complexity of calculating $\bm{z}_s$, we use a 2D \ac{ifft}, which is explained as follows. We re-write the vector $\bm{a}(\bm{\psi})$ in \eqref{eq:aVector} as \begin{align} \bm{a}(\bm{\psi}) = \bm{a}_1(\bm{\psi}) \otimes \bm{a}_2(\bm{\psi}),\label{eq:kronProd} \end{align} where \begin{align} \bm{a}_1(\bm{\psi}) &= e^{\jmath \beta_1}[1,e^{\jmath [\bm{k}(\bm{\psi})]_1d},\dots,e^{\jmath [\bm{k}(\bm{\psi})]_1 (M_1-1)d}]\\ \bm{a}_2(\bm{\psi}) &= e^{\jmath \beta_2}[1,e^{\jmath [\bm{k}(\bm{\psi})]_3d},\dots,e^{\jmath [\bm{k}(\bm{\psi})]_3 (M_2-1)d}], \end{align} where $\beta_1 = [\bm{k}(\bm{\psi})]_1(M_1-1)d/2$ and $\beta_2 = [\bm{k}(\bm{\psi})]_3(M_2-1)d/2$. Next, from Line\,\ref{CoarseVA_Line4} we have that \begin{align} [\bm{z}_s]_{k} &= \bm{a}(\theta)^{\top} \mathrm{diag}(\bm{b}_{k}) \bm{a}(\bm{\phi}_s)\\ &= \bm{a}(\bm{\phi}_s)^{\top}\left( \bm{a}(\theta)\odot \bm{b}_{k} \right)\\ &= e^{\jmath(\beta_1+\beta_2)}\bm{a}_1(\bm{\phi}_s)^\top \bm{C}_k \bm{a}_2(\bm{\phi}_s),\label{eq:fftMotviation} \end{align} where \begin{align} \bm{C}_k = \left(\bm{a}_1(\theta)\bm{a}_2(\theta)^{\top} \right)\odot \bm{B}_k \end{align} and \eqref{eq:fftMotviation} follows from \eqref{eq:kronProd} and the properties of the Kronecker product (see \cite[Eq.~(520)]{MatCookBook}). Motivated by \eqref{eq:kronProd}, we set $\bm{z}_s$ to be the $s$th row of the matrix $\bm{Z}_{\mathrm{f}}= [\bm{z}_{\mathrm{f},0},\dots, \bm{z}_{\mathrm{f},L/2-1}]$, where \begin{align} \bm{z}_{\mathrm{f},k} &= \mathrm{vec}\left(\bm{F}_{\mathrm{\phi},1}^\top \bm{C}_k \bm{F}_{\mathrm{\phi},2}\right).\label{eq:fftReal} \end{align} Here, $\bm{F}_{\mathrm{\phi},1}\in \mathbb{C}^{M_1\times N_{\mathrm{\phi},1}}$ and $\bm{F}_{\mathrm{\phi},2}\in \mathbb{C}^{M_2\times N_{\mathrm{\phi},2}}$ are IDFT matrices, where $N_{\mathrm{\phi},1}$ and $N_{\mathrm{\phi},2}$ are design parameters. Furthermore, the \ac{rhs} of \eqref{eq:fftReal} can be calculated using a 2D IFFT. The set $\{\bm{\phi_s}\}$ can be calculated as $\{\bm{\phi}_{0,0}, \bm{\phi}_{1,0}, \dots, \bm{\phi}_{N_{\mathrm{\phi},1}-1,N_{\mathrm{\phi},2}-1} \}$, where \begin{align} [\bm{\phi}_{n_1,n_2}]_{\mathrm{az}} &=\mathrm{atan2}\left(k_2(n_1,n_2),k_1(n_1,n_2)\right)\\ [\bm{\phi}_{n_1,n_2}]_{\mathrm{el}} &=\mathrm{acos}\left(k_3(n_1,n_2)\right). \end{align} Here, \begin{align} k_1(n_1,n_2)&= f_{\mathrm{r}}\!\!\left(\frac{\lambda n_1}{d N_{\mathrm{\phi},1}}\right)\\ k_3(n_1,n_2)&=f_{\mathrm{r}}\!\!\left(\frac{\lambda n_2}{d N_{\mathrm{\phi},2}}\right)\\ k_2(n_1,n_2)&=\sqrt{1-k_1^2-k_3^2}, \end{align} where the function $f_{\mathrm{r}}(x) = x-2\lfloor x/2 \rfloor$ compensates for the wrap-around effects. 
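The following minimal sketch (our own illustration; the IDFT convention $[\bm{F}_{\mathrm{\phi},i}]_{m,n}=e^{\jmath 2\pi mn/N_{\mathrm{\phi},i}}$ is an assumption, since \eqref{eq:fftReal} does not fix the scaling) verifies numerically that $\bm{F}_{\mathrm{\phi},1}^\top \bm{C}_k \bm{F}_{\mathrm{\phi},2}$ can indeed be evaluated with a zero-padded 2D IFFT:
\begin{verbatim}
import numpy as np

# Check: F1^T C_k F2 equals a zero-padded 2D IFFT (up to the assumed
# normalization [F_i]_{m,n} = exp(2j*pi*m*n/N_i), i.e. no 1/N factor).
M1, M2, N1, N2 = 8, 8, 32, 32
rng = np.random.default_rng(0)
C_k = rng.standard_normal((M1, M2)) + 1j * rng.standard_normal((M1, M2))

F1 = np.exp(2j * np.pi * np.outer(np.arange(M1), np.arange(N1)) / N1)
F2 = np.exp(2j * np.pi * np.outer(np.arange(M2), np.arange(N2)) / N2)

direct  = F1.T @ C_k @ F2                           # matrix products
via_fft = N1 * N2 * np.fft.ifft2(C_k, s=(N1, N2))   # zero-padded 2D IFFT

print(np.max(np.abs(direct - via_fft)))             # agrees up to round-off
\end{verbatim}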
Furthermore, for the values of $n_1$ and $n_2$ for which $k_2(n_1,n_2)$ becomes imaginary, $\bm{\phi}_{n_1,n_2}$ is undefined, and the estimator removes the corresponding entries from the sets $\{\bm{z}_s\}$ and $\{\bm{\phi}_s\}$. \end{appendices} \balance \bibliographystyle{IEEEtran}
\section{#1}\setcounter{equation}{0}} \renewcommand{\baselinestretch}{1.2} \begin{document} \bibliographystyle{physics} \begin{titlepage} \begin{flushright} {\sf\large TUM-T31-21/92}\\ {\sf\large July 1992} \end{flushright} \vspace{10mm} \begin{center} {\huge Current correlators to all orders in\\ \vspace*{4mm} the quark masses} {\Large \footnote{Supported by the German Bundesministerium f\"ur Forschung und Technologie, under the contract 06 TM 761.} } \vspace{15mm}\\ {\normalsize Matthias JAMIN${}^{2}$, Manfred M\"UNZ} \\ \ \\ {\small\sl Physik Department, Technische Universit\"at M\"unchen, D-8046 Garching, FRG.}\\ \vspace{-2mm} {\small\sl Email: \phantom{m}jamin @ feynman.t30.physik.tu-muenchen.de}\\ \vspace{-2mm} {\small\sl \hspace{10mm} mmuenz @ feynman.t30.physik.tu-muenchen.de}\\ {\small\sl ${}^{2}$ Address after October, 1st: Division TH, CERN, CH-1211 Geneva 23.} \vspace{15mm}\\ {\bf Abstract} \end{center} \noindent The contributions to the coefficient functions of the quark and the mixed quark-gluon condensate to mesonic correlators are calculated for the first time to all orders in the quark masses, and to lowest order in the strong coupling constant. Existing results on the coefficient functions of the unit operator and the gluon condensate are reviewed. The proper factorization of short- and long-distance contributions in the operator product expansion is discussed in detail. It is found that to accomplish this task rigorously the operator product expansion has to be performed in terms of {\em non}-normal-ordered condensates. The resulting coefficient functions are improved with the help of the renormalization group. The scale invariant combination of dimension 5 operators, including mixing with the mass operator, which is needed for the renormalization group improvement, is calculated in the leading order. \end{titlepage} \newpage \setcounter{page}{1} \newsection{Introduction} Since the pioneering papers by Shifman, Vainshtein, and Zakharov (SVZ) \cite{svz:79}, QCD sum rules have played a major role in extracting parameters describing the structure of the QCD vacuum, the so called QCD condensates, as well as for calculating hadronic parameters, e.g. masses and decay constants of mesons and baryons from the fundamental QCD Lagrangian. This procedure assumes the ``duality'' of a hadronic versus a partonic description, the partons here being quarks and gluons. The calculation of the partonic part is performed in the framework of the operator product expansion (OPE) \cite{wil:69}, most commonly including operators up to dimension 6, whose contributions signal the breakdown of perturbation theory for low momenta and parameterize non-perturbative effects. A large amount of results on the coefficient functions in the OPE exists in the literature, usually invoking some additional approximation, as for example small or large quark masses, or equal masses in the case of mesonic correlation functions. For further information the reader should consult refs. \cite{svz:79} and \cite{bec:81}--\nocite{rry:85,bag:86,nar:89,gen:90a,gen:90b}\cite{bag:92}. An important aspect which has to be mentioned in this context is the proper factorization of short- and long-distance contributions in the OPE. This means that the coefficient functions describing the short-distance part of a given correlation function should be free from dependences on the infrared structure of the theory, e.g. 
mass logarithms, and all such dependences should be absorbed into the corresponding matrix elements of operators, the condensates, which shall contain the long-distance contributions. This ``cancellation of mass log's'' has been already discussed in the literature \cite{gen:84,bro:84,bro:85}. We shall deal further with this point below. In this work we present new results on the coefficient functions for the quark and the mixed quark-gluon condensate in the unequal mass case, to all orders in the quark masses, and to lowest order in perturbation theory, for scalar, pseudoscalar, vector, and axialvector mesonic correlators. These results are obtained without any of the approximations mentioned above. In addition, for completeness we review the corresponding results for the unit operator and the gluon condensate. Extensive comparison of our results with results existing in the literature is made in various limiting cases. We explicitly show that all mass logarithms, which for small masses correspond to long-distance contributions, can be absorbed into the QCD condensates only, if the OPE is performed in terms of {\em non}-normal-ordered operators. This observation was already made in \cite{che:85,spi:88}. For an application see also ref.~\cite{bra:92}. The resulting coefficient functions are improved with the help of the renormalization group. In this context, the scale invariant combination of dimension 5, including mixing with the mass operator, is calculated. An application of our results to an improved determination of the current strange quark mass, as well as a discussion of higher order $\alpha_{s}$ corrections, will be presented in a forthcoming publication \cite{jam:92}. In sect.~2, we summarize some known facts on the OPE. The new results for the coefficient functions of the quark and mixed condensate are presented in sects.~4 and 6, and the corresponding results for the unit operator and the gluon condensate are given in sects.~3 and 5. The proper factorization in the OPE and the cancellation of mass log's is discussed in sect.~7, and in sect.~8 we show how to improve the coefficient functions with the help of the renormalization group (RG). Finally, sect.~9 summarizes our results. \newsection{The Coefficient Functions} Let us first define the relevant two-point and coefficient functions which we are going to examine, and then discuss some of their general properties in the framework of the OPE \cite{wil:69,zim:73}. The scalar, pseudoscalar, vector, and axialvector two-point functions are defined to be the matrix elements, between the physical vacuum $\vert\,\Omega\big>$, of the time-ordered product of the corresponding currents, \begin{equation} \Pi^{\Gamma}(q) \; \equiv \; i \, \big<\Omega\,\vert \, TN_{a}\{\, \widetilde j_{\Gamma}(q) \, j_{\Gamma}^{\dagger}(0)\}\vert\,\Omega\big> \, , \label{eq:2.1} \end{equation} where $\Gamma$ stands for one of the Dirac-matrices $\Gamma\in\{1,i\gamma_{5}, \gamma_{\mu},\gamma_{5}\gamma_{\mu}\}$, specifying the quantum numbers of the current (S, P, V, A respectively), the tilde sign will always denote quantities Fourier-transformed to momentum space, and $N_{a}$ is a suitable renormalization procedure for the operator product \cite{zim:73,col:84,sho:91}. Throughout this work we will use dimensional regularization in $D=4-2\,\varepsilon$ dimensions \cite{tho:72,lei:75} and the modified minimal subtraction ($\overline{MS}$) scheme \cite{bar:78}. 
The current $j_{\Gamma}$ shall take the form \begin{equation} j_{\Gamma}(x) \; \equiv \; :\!\bar Q(x)\Gamma q(x)\!: \, , \label{eq:2.2} \end{equation} where $Q$ and $q$ are quarks of possibly different flavour with masses $M$ and $m$ respectively. If $M$ and $m$ differ, we shall always assume $M>m$. To the order we are working, the normal-ordering is sufficient to renormalize the current. However, in higher orders, additional subtractions are required. Using the Ward-identity relating the divergence of the vector (axialvector) two-point function to the scalar (pseudoscalar) two-point function \cite{bec:81,bro:81}, we can express $\Pi^{\Gamma}(q)$ in terms of 4 Lorentz-scalar two-point functions $\Pi^{I}(q^{2})$, with $I\in\{S,P,V,A\}$. Then the vector (axialvector) two-point function can be written as \begin{eqnarray} \Pi_{\mu\nu}^{V,A}(q) & = & \Big[\, \frac{q_{\mu}q_{\nu}}{q^{2}} - g_{\mu\nu} \,\Big] \; \Pi^{V,A}(q^{2}) + g_{\mu\nu} \, \frac{ \big(M\mp m\big)^{2}}{q^{2}} \, \Pi^{S,P}(q^{2}) \nonumber \\ \vbox{\vskip 8mm} & \phantom{=} & + \,g_{\mu\nu} \, \frac{\big(M\mp m\big)}{q^{2}} \, \Big[\, \big<\bar QQ\big>^{(1)} \mp \big<\bar qq\big>^{(1)} \,\Big] \, , \label{eq:2.3} \end{eqnarray} with $\Pi^{S,P}(q^{2})$ being the scalar (pseudoscalar) two-point function. In the following, $\big<\ldots\big>^{(1)}$ will always denote matrix elements between the physical vacuum $\vert\,\Omega\big>$ (condensates), the superscript indicating renormalization to the order considered (1-loop in this case). Please note that the quark condensates in eq. (\ref{eq:2.3}) are {\em not} normal-ordered. The reason for this will be thoroughly discussed below. On the other hand, because of this fact, the third term in eq. (\ref{eq:2.3}) depends on the renormalization scale --- $m\big<\bar qq\big>$ is no longer renormalization group invariant \cite{spi:88} --- but this dependence is cancelled by the second term. In the framework of the OPE, $\Pi^{I}(q^{2})$ can be expanded in terms of local operators $O_{i}$ times Wilson-coefficient functions $C_{i}$, $C_{1}$ denoting the coefficient function of the unit operator, \begin{equation} \Pi^{I}(q^{2}) \; = \; \overline C_{1}^{\,I}\big(q^{2},M,m,\mu^{2},\alpha_{s}\big) + \sum_{i}\,\frac{\overline C_{i}^{\,I}\big(q^{2},M,m,\mu^{2},\alpha_{s} \big)}{(q^{2})^{[(n_{i}-1)/2]}} \, \big<\!\!:\! O_{i}(0) \!:\!\!\big>^{(0)} \, , \label{eq:2.4} \end{equation} thereby separating the short-distance dynamics, described by the Wilson-coefficient functions, from the long-distance behaviour, included in the operators $O_{i}$. The bar indicates coefficient functions corresponding to tree-level matrix elements, $\mu$ is a renormalization scale in the $\overline{MS}$-scheme, and the $n_{i}$ are the canonical dimensions of the $O_{i}$. With $[(n_{i}-1)/2]$ denoting the integer part of $(n_{i}-1)/2$, it follows that the $\overline C_{i}^{\,I}$ have dimension~0 or 1, depending on the dimension of $O_{i}$ being even or odd. Proper factorization of short- and long-distance contributions in the OPE requires the calculation of the matrix elements in eq. (\ref{eq:2.4}) at the same order as the coefficient functions. This procedure is well known from deep inelastic scattering \cite{bar:78,flo:78} and weak decays \cite{bur:90}. In this way, taking into account mixing of the operators under renormalization, all dependences on the infrared structure of the theory are absorbed into the matrix elements. 
Calculating the $\overline C_{i}^{\,I}$ straightforwardly using Wick's theorem naturally leads to normal-ordered operators. However, as has been proven in \cite{che:82,tka:83,lle:88}, in minimal subtraction schemes the coefficient functions are analytic in the masses if the OPE is performed in terms of {\em non}-normal-ordered operators. This feature is in general not present in non-minimal schemes. Since normal-ordering is a non-minimal scheme, terms of the form $m^{a}\ln^{b}m^{2}/\mu^{2}$ appear in the coefficient function $\overline C_{1}$ when normal-ordered operators are used. These terms would make it impossible to calculate non-leading contributions, because, while summing up the usual perturbative logarithms by setting $\mu^{2}=-q^{2}$, we would acquire possibly large log's of the form $\ln\,(m^{2}/\!-\!q^{2})$. For small masses they are remnants of the long-distance structure of the theory and should be absorbed into the corresponding condensates. Therefore, to arrive at physically sensible coefficient functions, we have to express the tree-level matrix elements $\big<\!\!:\! O_{i} \!:\!\!\big>^{(0)}$ in terms of the renormalized $\big<O_{i}\big>^{(1)}$, and, when working with {\em non}-normal-ordered operators, mixing with the unit operator under renormalization also has to be taken into account. This will be done below. In our analysis we shall include operators up to dimension~5, namely \begin{equation} O_{i} \; \in \; \Big\{\, \big(\bar QQ\big),\, \big(\bar qq\big),\, \big(\frac{\alpha_{s}}{\pi}FF\big),\, \big(g_{s}\bar Q\sigma FQ\big),\, \big(g_{s}\bar q\sigma Fq\big) \,\Big\} \, , \label{eq:2.5} \end{equation} related to the quark condensates, the gluon condensate, and the mixed condensates, having dimension 3, 4, and 5, respectively. Implicitly, summation over colour, spinor, and Lorentz indices is assumed, and the $\sigma$-matrix is defined as $\sigma_{\mu\nu}=i/2\,[\gamma_{\mu},\gamma_{\nu}]$. Coefficient functions in various approximations up to dimension~8 have been calculated, and can be found for example in refs. \cite{svz:79}, \cite{bec:81}--\nocite{rry:85,bag:86,nar:89,gen:90a}\cite{gen:90b}, and \cite{bro:85,nik:83,bag:85}. Calculating the diagrams of fig.~1, at leading order one obtains the following relations \cite{gen:84,spi:88} \begin{eqnarray} \big<\bar QQ(\mu)\big>^{(1)} & = & \big<\!\!:\!\bar QQ\!:\!\!\big>^{(0)} - \frac{N} {4\pi^{2}}\,M^{3} \Big[ \ln \frac{M^2}{\mu^{2}}-1 \Big] - \frac{1}{12 M} \, \big<\frac{\alpha_{s}}{\pi}\!:\! FF\!:\!\!\big>^{(0)} \; , \nonumber \\ \vbox{\vskip 10mm} \big<\frac{\alpha_{s}}{\pi}FF\big>^{(1)} & = & \big<\frac{\alpha_{s}}{\pi}\!:\! FF\!:\!\!\big>^{(0)} \, , \label{eq:2.6} \\ \vbox{\vskip 10mm} \big<g_{s}\bar Q\sigma FQ(\mu)\big>^{(1)} & = & \big<\!\!:\! g_{s}\bar Q\sigma FQ\!:\!\!\big>^{(0)} + \frac{M}{2} \ln \frac{M^2}{\mu^{2}}\,\big<\frac{\alpha_{s}}{\pi}\!:\! FF\!:\!\!\big>^{(0)}\, ,\nonumber \end{eqnarray} where $N$ is the number of colours. The corresponding relations for the $q$-quark condensates are obtained through the replacements $Q\rightarrow q$ and $M\rightarrow m$. Since the gluon condensate is already ${\cal O}(\alpha_{s})$, it does not get renormalized to the order we are working. Inserting these relations into eq. 
(\ref{eq:2.4}), we find the infrared stable coefficient functions $C_{i}^{I}$, \begin{eqnarray} C_{1}^{I} & = & \overline C_{1}^{\,I} + \frac{N}{4\pi^{2}} \biggl\{\, \frac{M^{3}}{q^{2}}\Big[\ln\frac{M^2}{\mu^{2}}-1\Big]\,\overline C_{\bar QQ}^{\,I} + \frac{m^{3}}{q^{2}}\Big[\ln\frac{m^2}{\mu^{2}}-1\Big]\,\overline C_{\bar qq}^{\,I} \,\biggr\} \; , \label{eq:2.7} \\ \vbox{\vskip 8mm} C_{\bar QQ}^{I} & = & \overline C_{\bar QQ}^{\,I} \phantom{_{F}} \qquad\qquad \hbox{and} \qquad\qquad \phantom{_{F}} C_{\bar qq}^{I} \; = \; \overline C_{\bar qq}^{\,I} \; , \label{eq:2.8} \\ \vbox{\vskip 10mm} C_{FF}^{I} & = & \overline C_{FF}^{\,I} + \frac{1}{12M}\,\overline C_{\bar QQ}^{\,I} + \frac{1}{12\,m}\,\overline C_{\bar qq}^{\,I} - \frac{M}{2q^{2}}\ln\frac{M^2}{\mu^{2}}\,\overline C_{\bar QFQ}^{\,I} - \frac{m}{2q^{2}}\ln\frac{m^2}{\mu^{2}}\,\overline C_{\bar qFq}^{\,I} \, , \nonumber \\ & & \label{eq:2.9} \\ C_{\bar QFQ}^{I} & = & \overline C_{\bar QFQ}^{\,I} \qquad\qquad \hbox{and} \qquad\qquad C_{\bar qFq}^{I} \; = \; \overline C_{\bar qFq}^{\,I} \, . \label{eq:2.10} \end{eqnarray} In terms of these coefficient functions, the OPE takes the form \begin{equation} \Pi^{I}(q^{2}) \; = \; C_{1}^{I}\big(q^{2},M,m,\mu^{2},\alpha_{s}\big) + \sum_{i}\,\frac{C_{i}^{I}\big(q^{2},M,m,\mu^{2},\alpha_{s} \big)}{(q^{2})^{[(n_{i}-1)/2]}} \, \big< O_{i}(\mu) \big>^{(1)} \, . \label{eq:2.11} \end{equation} In the following sections we calculate the coefficient functions of eqs. (\ref{eq:2.7})--(\ref{eq:2.10}) for the unit operator and the quark, gluon, and mixed condensate to all orders in the quark masses, and show explicitly that up to operators of dimension~7, they are analytic functions of the masses. Of course, as we shall also demonstrate, the coefficient functions may still contain non-analytic pieces of higher dimension, which are only cancelled through mixing with higher dimensional operators. In our example this is the case for $C_{1}^{I}$ and $C_{FF}^{I}$. This point will be discussed further in section~7. \newsection{The Perturbative Coefficient} The perturbative coefficient function for two different quark masses has been already calculated in ref. \cite{gen:90b}, in the leading order, as well as to ${\cal O}(\alpha_{s})$. For completeness, and for further reference, we give here our results for the coefficient functions $\overline C_{1}^{\,I}$, which are in agreement with \cite{gen:90b}, however using a slightly more compact notation. \begin{eqnarray} \overline C_{1}^{\,S,P}(q^{2}) & = & -\, \frac{N}{8\pi^{2}} \, \biggl\{\, q_{\pm}^{2}\, I_S\big(q^{2},M,m,\mu^{2}\big)-M^{2}\, l_{M}-m^{2}\, l_{m} \,\biggr\} \,, \label{3.1} \\ \vbox{\vskip 10mm} \overline C_{1}^{\,V,A}(q^{2}) & = & -\, \frac{N}{12\pi^{2}} \, \biggl\{\, \biggl[\, q^{2}+M^{2}+m^{2}-2\,\frac{(M^{2}-m^{2})^{2}}{q^{2}} \,\biggr]\, I_S\big(q^{2},M,m,\mu^{2}\big) \nonumber \\ \vbox{\vskip 8mm} -M^{2}\, l_{M}&&\hspace{-1cm}-\,m^{2}\, l_{m}+2\,\frac{(M^{2}-m^{2})} {q^{2}}\,\Big[\, M^{2}\, l_{M}-m^{2}\, l_{m}\,\Big]+\frac{q^{2}}{3}-M^{2}-m^{2} \,\biggr\} \,. 
\label{3.2} \end{eqnarray} Here $l_{M}\equiv\ln(M^{2}/\mu^{2})-1$, and $I_S(q^{2},M,m,\mu^{2})$ is the scalar one-loop integral, \begin{eqnarray} I_S\big(q^{2},M,m,\mu^{2}\big) & \equiv & \int_{0}^{1} dx\,\ln\frac{xM^{2}+ (1-x)\,m^{2}-x(1-x)\,q^{2}}{\mu^{2}} \nonumber \\ \vbox{\vskip 10mm} & = & \frac{u\,q_{-}^{2}}{q^{2}}\,\ln\frac{u+1}{u-1} + \frac{(M^{2}-m^{2})} {q^{2}}\,\ln\frac{M}{m}+\ln\frac{Mm}{\mu^{2}}-2 \,, \quad\label{3.3} \end{eqnarray} with \begin{equation} q_{\pm}^{2} \; \equiv \; q^{2} - \big(M\pm m\big)^{2} \quad\qquad \hbox{and} \quad\qquad u \; \equiv \; \sqrt{1-\frac{4Mm}{q_{-}^{2}}} \, . \label{3.4} \end{equation} To show explicitly the appearance of mass logarithms in the coefficient functions, we also present their expansions in the quark masses up to order $m^{4}$. \begin{eqnarray} \overline C_{1}^{\,S,P}(q^{2}) & = & \frac{N}{8\pi^{2}} \, \biggl\{\, 2\,q_{\pm}^{2}-\big(q_{\pm}^{2}-M^{2}-m^{2}\big)\,\ln\frac{-q^{2}}{\mu^{2}} \nonumber \\ \vbox{\vskip 8mm} & & -\,\Big[\,\frac{3}{2}\,M^{4}\pm2M^{3}m+2M^{2}m^{2}\pm2Mm^{3} +\frac{3}{2}\,m^{4}\,\Big]\,\frac{1}{q^{2}} \nonumber \\ \vbox{\vskip 8mm} & & +\,\Big[\,(M\pm2m)\,M^{3}\ln\frac{M^{2}}{-q^{2}}+(m\pm2M)\, m^{3}\ln\frac{m^{2}}{-q^{2}}\,\Big]\,\frac{1}{q^{2}} \,\biggr\} \,, \label{3.5} \quad \\ \vbox{\vskip 10mm} \overline C_{1}^{\,V,A}(q^{2}) & = & \frac{N}{12\pi^{2}} \, \biggl\{\, \frac{5}{3}\,q^{2}+3\,\big(M^{2}+m^{2}\big)-q^{2}\,\ln\frac{-q^{2}}{\mu^{2}} \nonumber \\ \vbox{\vskip 8mm} & & \hspace{-1cm}-\,3\,\Big[\,\frac{1}{2}\,M^{4}-2M^{2}m^{2}+ \frac{1}{2}\,m^{4}+M^{4}\,\ln\frac{M^{2}}{-q^{2}}+m^{4}\,\ln\frac{m^{2}} {-q^{2}}\,\Big]\,\frac{1}{q^{2}} \,\biggr\} \,. \label{3.6} \quad \end{eqnarray} As has been discussed in the previous section, these mass logarithms are cancelled after inclusion of the mixing with the quark condensate. This will be performed in section~7. Expressions for the $\overline C_{1}^{\,I}$, only expanding in $m$, are provided in appendix~A. \newsection{The Quark Condensate} The calculation of the coefficient functions for the quark condensate follows closely the method used in ref. \cite{ynd:89}. The contribution of the quark condensate to the two-point function $\Pi^{\Gamma}(q)$, shown in fig.~2, is given by \begin{equation} \Pi^{\Gamma}_{\bar qq+\bar QQ}(q) \; = \; - \int\!\frac{d^{D}\!p}{(2\pi)^D} \, \big<\!\!:\! \bar q(0)\Gamma S_{Q}(p-q)\Gamma\tilde q(p)\!:\!\!\big>^{(0)} + \; (q\leftrightarrow Q) \, . \label{eq:4.1} \end{equation} Here $S_{Q}(p-q)=(\not\! p\,-\!\not\! q-M)^{-1}$ is the free quark propagator. A necessary ingredient for calculating the coefficient functions to all orders in the quark masses is a closed expression for the non-local quark condensate. In $x$-space this expression reads \cite{bag:86,ynd:89,eli:88} \begin{equation} \big<\!\!:\!\bar q_{\alpha}(0)q_{\beta}(x)\!:\!\!\big>^{(0)}_{\bar qq} \; = \; \frac{1}{4m}\, \lnp\bar qq\rnp^{(0)}\,\Gamma\left(\frac{D}{2}\right)(i\!\not\!\partial+m)_{\beta\alpha} \sum_{n=0}^{\infty} \frac{(-m^{2}x^{2}/4)^{n}}{n!\,\Gamma(n+D/2)} \, , \label{eq:4.2} \end{equation} where the index $\bar qq$ denotes the projection onto the local quark condensate. One should remark that the non-local quark condensate has, of course, additional contributions from higher dimensional operators (see section~6). 
The sum in (\ref{eq:4.2}) can be expressed in terms of Bessel functions, \begin{equation} \sum_{n=0}^{\infty} \frac{(-m^{2}x^{2}/4)^{n}}{n!\,\Gamma(n+D/2)} \; = \; \left(\frac{2}{\sqrt{m^{2}x^{2}}}\right)^{D/2-1} J_{D/2-1}\big( \sqrt{m^{2}x^{2}}\big) \, , \label{eq:4.3} \end{equation} but like in \cite{ynd:89}, we prefer to work explicitly with the expanded form. Since we shall derive similar relations for the mixed condensate, we skip the derivation of eq. (\ref{eq:4.2}), and refer the reader to appendix~A of ref. \cite{eli:88}. The corresponding $p$-space expression for the non-local quark condensate is given by \cite{ynd:89} \begin{equation} \big<\!\!:\!\bar q_{\alpha}(0)\tilde q_{\beta}(p)\!:\!\!\big>^{(0)}_{\bar qq} \, = \, \frac{(2\pi)^{D}}{4m}\, \lnp\bar qq\rnp^{(0)} \,\Gamma\left(\frac{D}{2}\right) (\not\! p+m)_{\beta\alpha} \sum_{n=0}^{\infty} \frac{(m^{2}/4\, \partial_{p}^{\,2})^{n}}{n!\,\Gamma(n+D/2)}\, \delta^{D}(p) \, . \label{eq:4.4} \end{equation} It is easy to verify that the quark condensate (\ref{eq:4.4}) satisfies a free equation of motion, \begin{equation} \big(\!\not\! p-m\big) \, \big<\!\!:\!\bar q(0)\tilde q(p)\!:\!\!\big>^{(0)}_{\bar qq} \; = \; 0 \, . \label{eq:4.5} \end{equation} This is very useful, as it allows to replace an arbitrary non-singular function $f(p^{2},p,\ldots)$ by $f(m^{2},p,\ldots)$ in integrals of the type \begin{equation} \int d^{D}\!p \, f(p^{2},p,\ldots) \, \big<\!\!:\!\bar q(0)\tilde q(p) \!:\!\!\big>^{(0)}_{\bar qq} \, , \label{eq:4.6} \end{equation} thereby greatly simplifying the calculation. After performing the momentum integration and using the relations \begin{equation} \left[\frac{m^{2}}{4}\,\partial_{p}^{\,2}\right]^{n} \frac{1} {[\,q^{2}-2q\cdot p-M^{2}+m^{2}\,]} \; = \; \frac{(2n)!\,(m^{2}q^{2})^{n}} {[\,q^{2}-2q\cdot p-M^{2}+m^{2}\,]^{2n+1}} \, , \label{eq:4.7} \end{equation} \begin{eqnarray} f(z) & \equiv & \Gamma(D/2) \sum_{n=0}^{\infty} \frac{(2n)!}{n!\,\Gamma(n+D/2)} \, z^{n} \nonumber \\ \vbox{\vskip 8mm} & = & _{2}F_{1}(1,1/2;D/2;4z) \; \stackrel{D=4}{=} \; \frac{1}{2z} \, \Big[\, 1 - \sqrt{1-4z} \,\Big] \, , \label{eq:4.8} \end{eqnarray} where $_{2}F_{1}$ is the Hypergeometric function \cite{abr,gra}, we arrive at the following final results for the two-point functions in $D=4$ dimensions: \begin{eqnarray} \Pi_{\bar qq}^{S,P}(q^{2}) & = & - \, \frac{\lnp\bar qq\rnp^{(0)}}{2\,m} \, \left\{\, 1-\frac{q_{\pm}^{2}}{[\,q^{2}-M^{2}+m^{2}\,]} \, f(z_{m})\, \right\} \, , \label{eq:4.9} \\ \vbox{\vskip 10mm} \Pi_{\bar qq}^{V,A}(q^{2}) & = & - \, \frac{\lnp\bar qq\rnp^{(0)}}{3\,m} \, \biggl\{\, 1+2\,\frac{(M^{2}-m^{2})}{q^{2}} \nonumber \\ \vbox{\vskip 8mm} & \phantom{=} & - \,\frac{[\,q^{2}+M^{2}+m^{2}-2\,(M^{2}-m^{2})^{2} /q^{2}\,]}{[\,q^{2}-M^{2}+m^{2}\,]} \, f(z_{m})\, \biggr\} \, , \qquad \label{eq:4.10} \end{eqnarray} with \begin{equation} z_{m} \; \equiv \; \frac{m^{2}q^{2}}{[\,q^{2}-M^{2}+m^{2}\,]^{2}} \, . \label{eq:4.11} \end{equation} The corresponding functions for the heavy quark $Q$, $\Pi_{\bar QQ}^{I}(q^{2})$, can be obtained through the replacements $q\rightarrow Q$ and $m\leftrightarrow M$. In the equal mass case, $M=m$, our results agree with the various results given by the authors of refs. \cite{bag:86,ynd:89,ste:89}, except for the axial current in \cite{bag:86}. To examine the structure of the coefficient functions more explicitly, and to be able to compare with other previous results, we expand the two-point functions in the quark masses up to operators of dimension~6. 
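This mass expansion is straightforward to automate. As a cross-check (ours, not part of the original calculation), the following Python/sympy sketch expands the scalar-channel result (\ref{eq:4.9}), divided by the condensate, in the quark masses and reproduces the dimension-6 truncation quoted in eq.~(\ref{eq:4.12}) below; the variable names and the bookkeeping parameter are our own choices.
\begin{verbatim}
import sympy as sp

M, m, q2, eps = sp.symbols('M m q2 epsilon', positive=True)

def f(z):
    # closed form of eq. (4.8) in D = 4 dimensions
    return (1 - sp.sqrt(1 - 4*z))/(2*z)

A   = q2 - M**2 + m**2
zm  = m**2*q2/A**2                              # eq. (4.11)
qp2 = q2 - (M + m)**2                           # q_+^2, scalar channel
Pi  = -sp.Rational(1, 2)/m*(1 - qp2/A*f(zm))    # eq. (4.9) without <qbar q>^(0)

# expand in the quark masses (M -> eps*M, m -> eps*m) and keep terms up to
# mass dimension 3 in the coefficient, i.e. operators of dimension <= 6
Cqq = (q2*Pi).subs({M: eps*M, m: eps*m})
Cqq = sp.series(Cqq, eps, 0, 4).removeO().subs(eps, 1)

print(sp.simplify(Cqq - (-M - m/2 - M**3/q2)))  # expected: 0, cf. eq. (4.12)
\end{verbatim}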
For the coefficient functions of the quark condensate this leads to \begin{eqnarray} C_{\bar qq}^{S,P}(q^{2}) & = & \overline C_{\bar qq}^{\,S,P}(q^{2}) \; = \; \phantom{m\,} \biggl[\, \mp M-\frac{m}{2}\mp\frac{M^{3}}{q^{2}} \;\biggr] \, , \label{eq:4.12} \\ \vbox{\vskip 10mm} C_{\bar qq}^{V,A}(q^{2}) & = & \overline C_{\bar qq}^{\,V,A}(q^{2}) \; = \; m \,\biggl[\, 1+2\,\Big(M^{2}-\frac{m^{2}}{3}\Big)\frac{1}{q^{2}} \,\biggr]\,, \label{eq:4.13} \end{eqnarray} and related expressions for the $Q$-quark. Again, in appendix~A we provide results for solely expanding in $m$ up to order $m^{3}$. These results also agree with the results given by Generalis \cite{gen:90a}. \newsection{The Gluon Condensate} The contribution of the gluon condensate to all orders in the quark masses has already been presented in refs. \cite{rry:85,gen:90a}. We agree with these results and, for completeness, and further reference, cite here the corresponding expressions. \begin{eqnarray} \Pi_{FF}^{S,P}(q^{2}) & = & \frac{-1}{48}\,\frac{q^{2}}{q_{\mp}^{4}}\, \big<\frac{\alpha_{s}}{\pi}\!:\! FF\!:\!\!\big>^{(0)} \biggl\{\, \frac{3(3+u^{2})(1-u^{2})}{2u^{3}} \log \frac{u+1}{u-1} - \frac{3u^{4}+4u^{2}+9}{u^{2}(1-u^{2})} \,\biggr\} \nonumber \\ \vbox{\vskip 8mm} & & \mp \; \frac{1}{12Mm}\,\big<\frac{\alpha_{s}}{\pi}\!:\! FF\!:\!\!\big>^{(0)} \, , \label{eq:5.1} \\ \vbox{\vskip 10mm} \Pi_{FF}^{V,A}(q^{2}) & = & \frac{1}{48}\,\frac{q^{2}}{q_{\mp}^{4}}\, \big<\frac{\alpha_{s}}{\pi}\!:\! FF\!:\!\!\big>^{(0)} \biggl\{\, \frac{3(1+u^{2})(1-u^{2})^{2}}{2u^{5}} \log \frac{u+1}{u-1} - \frac{3u^{4}-2u^{2}+3}{u^{4}} \,\biggr\} \nonumber \\ & & \label{eq:5.2} \end{eqnarray} For $P$ and $A$ an additional change $M\rightarrow-M$ in $u$, being equivalent to $u\rightarrow1/u$, has to be performed. In the equal mass case our results for $\Pi_{FF}^{S,P}$ agree with those of Bag{\'a}n et.al. \cite{bag:86}. We would like to point out that the non-transverse part for the two-point function $\Pi_{FF,\,\mu\nu}^{V,A}$ of ref. \cite{rry:85} (eq. 3.53), and our scalar (pseudoscalar) two-point function $\Pi_{FF}^{S,P}$, differ by the $q$-independent piece in eq. (\ref{eq:5.1}). Writing eq.~(\ref{eq:2.3}) in terms of tree-level condensates would cancel this additional term, but then the Ward-identity would no longer be valid. Upon expansion in the quark masses up to operators of dimension~6, we find for the coefficient functions $\overline C_{FF}^{\,I}$ \begin{eqnarray} \overline C_{FF}^{\,S,P} & = & - \,\frac{1}{24} \pm \frac{1}{12}\Big( \frac{M}{m} + \frac{m}{M} \Big) - \frac{1}{12q^{2}}\big(M^{2}+m^{2}\big)\nonumber \\ \vbox{\vskip 8mm} & & \pm \,\frac{1}{12q^{2}}\Big(\frac{M^{3}}{m}+\frac{m^{3}}{M}\Big) \pm \frac{Mm}{4q^{2}}\Big[\, 3+2\log \frac{Mm}{-q^{2}} \,\Big] \, , \label{eq:5.4} \\ \vbox{\vskip 10mm} \overline C_{FF}^{\,V,A} & = & - \,\frac{1}{12} - \frac{1}{6q^{2}} \big(M^{2}+m^{2}\big) \, . \label{eq:5.5} \end{eqnarray} Like in the case of the quark condensate, in appendix~A we provide results for the $\overline C_{FF}^{\,I}$, expanded only in $m$. A few remarks are in order here: as is evident, at this intermediate stage there appear $1/m$ as well as $m^{2}\log m$ terms. Nevertheless, they are remnants of the long-distance structure of the vacuum condensates and will cancel in the final result for the coefficient functions $C_{FF}^{I}$, once the additional contributions through mixing (\ref{eq:2.9}) have been included. This cancellation of mass singularities has already been extensively discussed in \cite{gen:84,bro:84,bro:85}. 
We present the coefficients $C_{FF}^{I}$ in section~7. \newsection{The Mixed Condensate} The calculation of the contribution to the mixed condensate to all orders in the masses is somewhat more complicated than in the case of the quark condensate. We shall therefore discuss its evaluation in slightly more detail. The two diagrams contributing to the mixed condensate are shown in fig.~3. Let us first consider diagram 3a. Working in the coordinate gauge, this graph also arises from eq. (\ref{eq:4.1}), since in the interacting case $q(x)$ has to be expanded in terms of covariant derivatives (see for example \cite{pas:84}), and hence yields contributions which involve gluon fields. The procedure for the calculation of the mixed condensate contribution is similar to the one presented in the appendices~A and C of ref. \cite{eli:88}. For the explicit steps, the reader is referred to this publication. Our result for the non-local quark condensate contribution to the mixed condensate is \begin{eqnarray} \big<\!\!:\!\bar q_{\alpha}(0)q_{\beta}(x)\!:\!\!\big>^{(0)}_{\bar qFq} & = & -\,\frac{1}{8m^{3}}\,\lnp g_{s}\bar q\sigma Fq\rnp^{(0)}\,\Gamma\left(\frac{D}{2}\right) \,\cdot \nonumber \\ \vbox{\vskip 8mm} & & \cdot\,\sum_{n=0}^{\infty} \Big[\, (n-1)i\!\not\!\partial+n\,m\,\Big] _{\beta\alpha}\,\frac{(-m^{2}x^{2}/4)^{n}}{n!\,\Gamma(n+D/2)} \, . \label{eq:6.1} \end{eqnarray} Up to terms ${\cal O}(m^{3})$, the order calculated in ref. \cite{eli:88}, our result agrees with \cite{eli:88}. Similar to the quark condensate, the mixed condensate contribution satisfies an equation of motion. In $p$-space, this equation of motion reads \begin{equation} \big(p^{2}-m^{2}\big) \, \big<\!\!:\!\bar q(0)\tilde q(p)\!:\!\!\big>^{(0)}_{\bar qFq} \; = \;-\,\frac{\lnp g_{s}\bar q\sigma Fq\rnp^{(0)}}{2\,\lnp\bar qq\rnp^{(0)}}\,\big<\!\!:\!\bar q(0)\tilde q(p)\!:\!\!\big>^{(0)}_{\bar qq} \,, \label{eq:6.2} \end{equation} which also allows for simplification in momentum integrals. The replacement in momentum integrals which can be made by means of eq. (\ref{eq:6.2}) has the form \begin{eqnarray} \int d^{D}\!p \, f(p^{2},p,\ldots)\,\big<\!\!:\!\bar q(0)\tilde q(p)\!:\!\!\big>^{(0)} _{\bar qFq} & \longrightarrow & \int d^{D}\!p \,\biggl\{\,f(m^{2},p,\ldots) \,\big<\!\!:\!\bar q(0)\tilde q(p)\!:\!\!\big>^{(0)}_{\bar qFq} \nonumber \\ \vbox{\vskip 8mm} & & \hspace{-4.4cm} - \,\frac{\lnp g_{s}\bar q\sigma Fq\rnp^{(0)}}{2\,\lnp\bar qq\rnp^{(0)}}\,\Big[\,\partial_{m^{2}}f(m^{2},p,\ldots)\,\Big] \,\big<\!\!:\!\bar q(0)\tilde q(p)\!:\!\!\big>^{(0)}_{\bar qq} \,\biggr\} \,. \label{eq:6.2a} \end{eqnarray} The contribution from diagram 3b can be evaluated by considering the insertion of one gluon field strength into the non-local quark condensate of eq. (\ref{eq:4.2}) \cite{pas:84}. The corresponding expression for the non-local mixed condensate is found to be \begin{eqnarray} \big<\!\!:\! g_{s}\bar q_{\alpha}(0)F_{\mu\nu}(0)\,q_{\beta}(x)\!:\!\!\big>^{(0)}_{\bar qFq} & = & \frac{1}{4(D-1)(D-2)\,m^{2}}\,\lnp g_{s}\bar q\sigma Fq\rnp^{(0)}\,\Gamma\left(\frac{D}{2}\right) \,\cdot \nonumber \\ \vbox{\vskip 8mm} & & \hspace{-3.75cm} \cdot\, \Big[\,\big(\gamma_{\mu}\partial_{\nu}-\gamma_{\nu}\partial_{\mu} \big)+m\,\sigma_{\mu\nu} \,\Big] \big(i\!\not\!\partial+m\big)_{\beta\alpha} \sum_{n=0}^{\infty} \frac{(-m^{2}x^{2}/4)^{n}}{n!\,\Gamma(n+D/2)} \,. \quad \label{eq:6.3} \end{eqnarray} Here $F_{\mu\nu}=t^{a}F_{\mu\nu}^{a}$, where $t^{a}$ are the generators of the colour group $SU(N)$. The expansion of eq. 
(\ref{eq:6.3}) has been calculated by the authors of ref. \cite{eli:88} up to order $m^{3}$, and we agree with their result, except for the term ${\cal O}(m^{3})$. The derivation of eq.~(\ref{eq:6.3}) is presented in appendix~B. It is obvious from the structure of eq. (\ref{eq:6.3}), that the non-local mixed condensate also satisfies the free equation of motion \begin{equation} \big(\!\not\! p-m\big) \, \big<\!\!:\! g_{s}\bar q_{\alpha}(0)F_{\mu\nu}(0)\, q_{\beta}(x)\!:\!\!\big>^{(0)}_{\bar qFq} \; = \; 0 \, . \label{eq:6.4} \end{equation} Using eqs. (\ref{eq:6.1}) and (\ref{eq:6.3}), as well as the corresponding equations of motion, we obtain the following final results for the contribution of the mixed condensate to the two-point function of eq. (\ref{eq:2.1}): \begin{eqnarray} \Pi_{\bar qFq}^{S,P}(q^{2}) & = & - \, \frac{\lnp g_{s}\bar q\sigma Fq\rnp^{(0)}}{2\,m^{3}q_{\mp}^{2}}\, \biggl\{\, q^{2}-M^{2}\pm Mm \nonumber \\ \vbox{\vskip 8mm} & & - \,\frac{[\,(q^{2}-M^{2})^{2}\pm Mm(q^{2}-M^{2}+m^{2})-M^{2}m^{2}\,]} {[\,q^{2}-M^{2}+m^{2}\,]} \, f(z_{m})\, \biggr\} \, , \quad\label{eq:6.5} \\ \vbox{\vskip 10mm} \Pi_{\bar qFq}^{V,A}(q^{2}) & = & - \, \frac{\lnp g_{s}\bar q\sigma Fq\rnp^{(0)}}{3\,m^{3}q^{2}q_{\pm}^{2} q_{\mp}^{2}}\, \biggl\{\, [q^{2}+2M(M\mp m)](q^{2}-M^{2})^{2}-m^{2} q^{2} (q^{2}+M^{2}) \nonumber \\ \vbox{\vskip 8mm} & & - \,2Mm^{2}(M\mp m)(2M^{2}-m^{2})-\frac{P(q^{2},M,m)}{[\,q^{2}-M^{2}+ m^{2}\,]} \, f(z_{m})\, \biggr\} \, , \quad\label{eq:6.6} \end{eqnarray} with \begin{eqnarray} P(q^{2},M,m) & = & [q^{2}+2M(M\mp m)](q^{2}-M^{2})^{3} - M^{2}m^{2}q^{2} (4M^{2}\mp6Mm+m^{2}) \nonumber \\ \vbox{\vskip 8mm} & - & \hspace{-2mm} m^{2}q^{4}(q^{2}+M^{2}) + 2Mm^{2}(M\mp m)(3M^{4}-3M^{2}m^{2}+m^{4}) \,. \label{eq:6.7} \end{eqnarray} As in the case of the quark condensate, the corresponding functions for the heavy quark $Q$, $\Pi_{\bar QFQ}^{I}(q^{2})$, can be obtained through the replacements $q\rightarrow Q$ and $m\leftrightarrow M$. In the equal mass case, $M=m$, our results agree with the result given in \cite{bag:86}. The expansion in one light mass $m$ up to operators of dimension 6 for the vector current agrees with the result obtained in \cite{gen:90a}. Expanding the coefficient functions for the mixed condensate up to operators of dimension 6 yields the well known expressions \begin{equation} C_{\bar qFq}^{S,P}(q^{2}) \; = \; \overline C_{\bar qFq}^{\,S,P}(q^{2}) \; = \; \pm\,\frac{M}{2} \, , \quad \hbox{and} \quad C_{\bar qFq}^{V,A}(q^{2}) \; = \; \overline C_{\bar qFq}^{\,V,A}(q^{2}) \; = \; 0 \,, \label{eq:6.8} \end{equation} and related expressions for the $Q$-quark. Again, in appendix~A, we give results for the expansion in one small mass $m$. \newsection{The Cancellation of Mass Log's} We are now in a position to calculate the infrared stable coefficient functions $C_{1}^{I}$ and $C_{FF}^{I}$ of eqs. (\ref{eq:2.7}) and (\ref{eq:2.9}). 
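Before quoting the results, the cancellation can be made explicit in a small symbolic cross-check (ours, not part of the original text): inserting the tree-level coefficients (\ref{eq:4.12}), (\ref{eq:5.4}), and (\ref{eq:6.8}) into the mixing relation (\ref{eq:2.9}) for the scalar channel, all $1/m$, $1/M$, and quark-mass logarithms drop out, and one is left with the infrared stable expression quoted as eq.~(\ref{eq:7.3}) below.
\begin{verbatim}
import sympy as sp

M, m, Q2, mu2 = sp.symbols('M m Q2 mu2', positive=True)   # Q2 = -q^2 > 0
q2, L = -Q2, sp.log

# tree-level coefficient functions, scalar channel (upper signs)
C_FF_bar  = (-sp.Rational(1, 24) + (M/m + m/M)/12 - (M**2 + m**2)/(12*q2)
             + (M**3/m + m**3/M)/(12*q2)
             + M*m/(4*q2)*(3 + 2*L(M*m/Q2)))      # eq. (5.4)
C_qq_bar  = -M - m/2 - M**3/q2                    # eq. (4.12)
C_QQ_bar  = -m - M/2 - m**3/q2                    # eq. (4.12), q <-> Q
C_qFq_bar = M/2                                   # eq. (6.8)
C_QFQ_bar = m/2                                   # eq. (6.8), q <-> Q

# mixing relation, eq. (2.9)
C_FF = (C_FF_bar + C_QQ_bar/(12*M) + C_qq_bar/(12*m)
        - M/(2*q2)*L(M**2/mu2)*C_QFQ_bar
        - m/(2*q2)*L(m**2/mu2)*C_qFq_bar)

# infrared stable result, eq. (7.3)
C_FF_ir = (-sp.Rational(1, 8) - (M**2 + m**2)/(12*q2)
           + M*m/(4*q2)*(3 - 2*L(Q2/mu2)))

print(sp.simplify(sp.expand_log(C_FF - C_FF_ir)))  # expected: 0
\end{verbatim}
The same bookkeeping, with the corresponding coefficients, applies to the other channels.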
The result up to dimension 4 for the unit operator is found to be \begin{eqnarray} C_{1}^{S,P}(q^{2}) & = & \frac{N}{16\pi^{2}}\, \biggl\{\, 4\,q_{\pm}^{2}- \big(M^{4}+4M^{2}m^{2}+m^{4}\big)\frac{1}{q^{2}} \nonumber \\ \vbox{\vskip 8mm} & & \hspace{-2cm} - \,2\Big[\, q_{\pm}^{2}-M^{2}-m^{2}+\big(M^{4}\pm2M^{3}m\pm2Mm^{3}+ m^{4}\big)\frac{1}{q^{2}} \,\Big]\ln\frac{-q^{2}}{\mu^{2}} \,\biggr\} \,, \label{eq:7.1} \\ \vbox{\vskip 10mm} C_{1}^{V,A}(q^{2}) & = & \frac{N}{72\pi^{2}}\, \biggl\{\, 10\,q^{2}+ 18\,\big(M^{2}+m^{2}\big)-9\,\big(\,3M^{4}-4M^{2}m^{2}+3m^{4}\big) \frac{1}{q^{2}} \nonumber \\ \vbox{\vskip 8mm} & & - \,6\Big[\,q^{2}-3\big(M^{4}+m^{4}\big)\frac{1}{q^{2}}\,\Big] \ln\frac{-q^{2}}{\mu^{2}} \,\biggr\} \,, \label{eq:7.2} \\ \vbox{\vskip 8mm} & & \hspace{-3.0cm}\hbox{and up to dimension 6 for the gluon condensate} \nonumber\\ \vbox{\vskip 8mm} C_{FF}^{S,P}(q^{2}) & = & - \,\frac{1}{8}-\frac{(M^{2}+m^{2})}{12\,q^{2}} \pm\frac{Mm}{4\,q^{2}}\,\Big[\,3-2\ln\frac{-q^{2}}{\mu^{2}}\,\Big] \,, \label{eq:7.3} \\ \vbox{\vskip 10mm} C_{FF}^{V,A}(q^{2}) & = & \frac{1}{12}-\frac{(M^{2}+m^{2})}{18\,q^{2}} \,. \label{eq:7.4} \end{eqnarray} It is evident that all mass logarithms have cancelled, and the resulting coefficient functions are only polynomial in the quark masses. Only this fact allows for a consistent summation of logarithmic corrections through the choice $\mu^{2}=-q^{2}$. This task is accomplished in the next section. However, the cancellation of mass log's only takes place up to the dimension of operators for which all contributions have been included consistently. For example, the coefficient functions $C_{FF}^{I}(q^{2})$ still contain mass log's of the form $Mm^{3}\ln m$, which would only be cancelled if mixing with operators of dimension 8 were included \cite{bro:85}. The appropriate treatment of small masses in the OPE is therefore to expand in the mass up to operators of the dimension in question. Analogously, heavy masses have to be expanded in $1/M$. The case of heavy quark masses will be discussed in a subsequent publication \cite{jam:93}. \newsection{Renormalization Group Improved Coefficients} The coefficient functions for the unit operator of eqs. (\ref{eq:7.1}) and (\ref{eq:7.2}) satisfy an inhomogeneous renormalization group equation, \begin{equation} \mu\,\frac{d}{d\mu}\,C_{1}^{I}\big(Q^{2}/\mu^{2}\big) \; = \; h_{0}^{I} \,, \label{eq:8.1} \end{equation} where we have set $Q^{2}=-\,q^{2}$, and \begin{eqnarray} h_{0}^{S,P} & = & \frac{N}{4\pi^{2}}\,\Big[\, q_{\pm}^{2}-M^{2}- m^{2}+\big(M^{4}\pm2M^{3}m\pm2Mm^{3}+m^{4}\big)\frac{1}{q^{2}} \,\Big] \,, \label{eq:8.2} \\ h_{0}^{V,A} & = & \frac{N}{6\pi^{2}}\,\Big[\,q^{2}-3\big(M^{4}+m^{4}\big) \frac{1}{q^{2}}\,\Big] \,. \label{eq:8.3} \end{eqnarray} The presence of the inhomogeneity originates from the divergence of the current product of eq.~(\ref{eq:2.1}). The solution to this equation is given by \begin{equation} C_{1}^{I}\big(Q^{2}/\mu^{2}\big) \; = \; C_{1}^{I}\big(1\big) + \frac{\pi h_{0}^{I}}{\beta_{0}}\left[\,\frac{1}{\alpha_{s}(Q^{2})}- \frac{1}{\alpha_{s}(\mu^{2})}\,\right] \,. \label{eq:8.4} \end{equation} This expression is the improved form for the coefficient function $C_{1}^{I}$, where the leading $\ln(Q^{2}/\mu^{2})$ have been summed up. 
$\beta_{0}$ is the leading coefficient in the expansion of the $\beta$-function, \begin{equation} \mu\,\frac{d\alpha_{s}}{d\mu} \; = \; \alpha_{s}\beta(\alpha_{s}) \, , \qquad \beta(\alpha_{s}) \; = \; \beta_{0}\,\frac{\alpha_{s}}{\pi}+\ldots \, , \qquad \beta_{0} \; = \;-\,\frac{1}{6}(11N-2f) \,, \label{eq:8.5} \end{equation} where $f$ denotes the number of quark flavours. Here and in the following we have adopted the notation of Pascual and Tarrach \cite{pas:84} for the renormalization group functions. To improve on the contribution from higher dimensional operators, we note that the sum in eq.~(\ref{eq:2.11}) has to be independent of $\mu$. Therefore, the choice $\mu^{2}=Q^{2}$ sums up the leading log's, leaving us with condensates $\big<O_{i}(Q^{2})\big>$, evaluated at the scale $Q^{2}$. These can now be expressed in terms of condensates $\big<O_{i}(\mu_{0}^{2})\big>$, evaluated at some fixed scale $\mu_{0}$, at which the numerical values of the condensates are known, e.g. $\mu_{0}=1\,\mathrm{GeV}$. For simplicity, in the following we consider only one quark flavour $q$, while keeping the $f$ dependence in the renormalization group functions. The generalization to an arbitrary number of flavours should be obvious. In order to write the relation between $\big<O_{i}(Q^{2})\big>$ and $\big<O_{i}(\mu_{0}^{2})\big>$, it is convenient to assemble the operators $O_{i}$ into a vector $\vec O$, \begin{equation} \vec O^{T} \; = \; \Big(\,\big(g_{s}\bar q\sigma Fq\big),\, \big(\frac{\alpha_{s}}{\pi}FF\big),\, \big(\bar qq\big),\, m\,\Big) \,, \label{eq:8.6} \end{equation} in which we have included the mass as an operator. This allows us to write the scale invariant combinations of the operators $O_{i}$ in the form \begin{equation} \vec\phi^{\,T} \; = \; \big(\, \phi_{3},\, \phi_{2},\, \phi_{1},\, \phi_{0} \,\big)^{T} \; \equiv \; \hat R(\mu)\,\big<\vec O(\mu)\big> \,. \label{eq:8.7} \end{equation} $\phi_{0}$ is just the invariant quark mass; the invariants $\phi_{1}$ and $\phi_{2}$ have been calculated in ref.~\cite{spi:88}, where also next-to-leading order corrections can be found; and $\phi_{3}$ was obtained in ref.~\cite{nar:83}, without taking into account mixing with the mass operator. 
The matrix $\hat R(\mu)$ is given by \begin{equation} \hat R(\mu) \; = \; \left( \begin{array}{cccc} \alpha_{s}^{d_{\sigma}^{\,0}}(\mu) & x\,\alpha_{s}^{d_{\sigma}^{\,0}-1}(\mu)\,m(\mu) & y\,\alpha_{s}^{d_{\sigma}^{\,0}}(\mu)\,m^{2}(\mu) & z\,\alpha_{s}^{d_{\sigma}^{\,0}-1} (\mu)\,m^{4}(\mu) \\ 0& \pi\beta_{0}& 4\gamma_{m}^{0}\alpha_{s}(\mu)\,m(\mu) & \gamma_{0}^{0}\,m^{3}(\mu)\\ 0 & 0 & m(\mu) & w\,\alpha_{s}^{-1}(\mu)\,m^{3}(\mu) \\ 0 & 0 & 0 & \alpha_{s}^{d_{m}^{\,0}}(\mu) \\ \end{array} \right) \,, \label{eq:8.8} \end{equation} where $\gamma_{m}^{0}$, $\gamma_{0}^{0}$, and $\gamma_{\sigma}^{0}$ are the leading order anomalous dimensions of the mass operator, the vacuum energy, and the mixed condensate, respectively, \begin{equation} \gamma_{m}^{0} \; = \; \frac{3(N^{2}-1)}{4N}\,, \qquad \gamma_{0}^{0} \; = \; \frac{N}{2\pi}\,, \quad {\rm and} \quad \gamma_{\sigma}^{0} \; = \; \frac{(N^{2}-5)}{4N}\,, \qquad \label{eq:8.9} \end{equation} $d_{m}^{\,0}$ and $d_{\sigma}^{\,0}$ are defined to be the ratios $d_{m}^{\,0}\equiv\gamma_{m}^{0}/\beta_{0}$ and $d_{\sigma}^{\,0}\equiv\gamma_{\sigma}^{0}/\beta_{0}$, and finally, $w$, $x$, $y$, and $z$ are given by \pagebreak \begin{eqnarray} w & = & \frac{\gamma_{0}^{0}}{(\beta_{0}+4\gamma_{m}^{0})} \; = \; \frac{3N^{2}}{\pi(7N^{2}+2Nf-18)} \,, \nonumber \\ x & = & \frac{-\pi}{(\beta_{0}+\gamma_{m}^{0}-\gamma_{\sigma}^{0})} \; = \; \frac{6\pi N}{(8N^{2}-2Nf-3)} \,, \label{eq:8.10} \\ y & = & \frac{4\gamma_{m}^{0}(\beta_{0}+\gamma_{m}^{0}-\gamma_{\sigma}^{0} -1)}{(\gamma_{\sigma}^{0}-\gamma_{m}^{0})(\beta_{0}+\gamma_{m}^{0}- \gamma_{\sigma}^{0})} \; = \; \frac{-6(N^{2}-1)[\,8N^{2}+2N(3-f)-3\,]} {(N^{2}+1)(8N^{2}-2Nf-3)} \,, \nonumber \\ z & = & \frac{y\,\gamma_{0}^{0}}{(\beta_{0}+5\gamma_{m}^{0}- \gamma_{\sigma}^{0})} \; = \; \frac{-18N^{2}(N^{2}-1)[\,8N^{2}+2N(3-f)-3\,]} {\pi(N^{2}+1)(8N^{2}-2Nf-3)(10N^{2}+2Nf-15)} \,. \nonumber \end{eqnarray} The element (1,4) of the matrix $\hat R$ which describes the mixing between $(g_{s}\bar q\sigma Fq)$ and $m^{5}$ is new. We should remark that strictly speaking it is not fully consistent to take the complete leading order matrix $\hat R$ at the order we have given the coefficient functions, because some entries of $\hat R$ appear first at ${\cal O}(\alpha_{s})$. Nevertheless we found it convenient to have at hand the full leading order renormalization group invariant combinations of the operators $O_{i}$. Use of the leading as well as next-to-leading order corrections will be made in \cite{jam:92}. The relation between condensates evaluated at two different scales is then given by \begin{equation} \big<\vec O(Q^{2})\big> \; = \; \hat U(Q^{2},\,\mu_{0}^{2})\, \big<\vec O(\mu_{0}^{2})\big> \,, \quad {\rm with} \quad \hat U(Q^{2},\,\mu_{0}^{2}) \; = \; \hat R^{-1}(Q^{2})\,\hat R(\mu_{0}^{2})\,. \label{eq:8.11} \end{equation} The dependence on $m(Q^{2})$ appearing in the evolution matrix $\hat U(Q^{2},\,\mu_{0}^{2})$ can of course be expressed in terms of $m(\mu_{0}^{2})$, using the relation \begin{equation} m(Q^{2}) \; = \; \left(\frac{\alpha_{s}(\mu_{0}^{2})}{\alpha_{s}(Q^{2})}\right)^{d_{m}^{\,0}} m(\mu_{0}^{2}) \,, \label{eq:8.12} \end{equation} such that the $Q^{2}$ dependence of the evolution matrix enters only through $\alpha_{s}(Q^{2})$. 
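As an illustration of how these formulae fit together, the following numerical sketch (ours; all input numbers are placeholders rather than fitted values) builds the leading-order matrix $\hat R$ of eq.~(\ref{eq:8.8}) for $N=3$ and $f=3$, forms the evolution matrix $\hat U(Q^{2},\mu_{0}^{2})$ of eq.~(\ref{eq:8.11}), and evolves a sample vector $\big<\vec O\big>$ from $\mu_{0}^{2}=1\,\mathrm{GeV}^{2}$ to $Q^{2}=4\,\mathrm{GeV}^{2}$, using leading-order running for $\alpha_{s}$ consistent with eq.~(\ref{eq:8.5}) and eq.~(\ref{eq:8.12}) for the mass.
\begin{verbatim}
import numpy as np

N, f = 3, 3
beta0   = -(11*N - 2*f)/6                    # eq. (8.5)
gamma_m = 3*(N**2 - 1)/(4*N)                 # eq. (8.9)
gamma_0 = N/(2*np.pi)
gamma_s = (N**2 - 5)/(4*N)
d_m, d_s = gamma_m/beta0, gamma_s/beta0

w = gamma_0/(beta0 + 4*gamma_m)              # eq. (8.10)
x = -np.pi/(beta0 + gamma_m - gamma_s)
y = (4*gamma_m*(beta0 + gamma_m - gamma_s - 1)
     / ((gamma_s - gamma_m)*(beta0 + gamma_m - gamma_s)))
z = y*gamma_0/(beta0 + 5*gamma_m - gamma_s)

def alpha_s(mu2, a_ref=0.5, mu2_ref=1.0):
    # leading-order running coupling; a_ref at 1 GeV^2 is illustrative only
    return a_ref/(1 - beta0/(2*np.pi)*a_ref*np.log(mu2/mu2_ref))

def m_run(mu2, m_ref=0.15, mu2_ref=1.0):
    # eq. (8.12); m_ref (in GeV) is illustrative only
    return (alpha_s(mu2_ref)/alpha_s(mu2))**d_m * m_ref

def R(mu2):
    # eq. (8.8)
    a, mm = alpha_s(mu2), m_run(mu2)
    return np.array([
        [a**d_s, x*a**(d_s - 1)*mm, y*a**d_s*mm**2, z*a**(d_s - 1)*mm**4],
        [0.0,    np.pi*beta0,       4*gamma_m*a*mm, gamma_0*mm**3       ],
        [0.0,    0.0,               mm,             w*mm**3/a           ],
        [0.0,    0.0,               0.0,            a**d_m              ]])

mu02, Q2 = 1.0, 4.0                              # GeV^2
U = np.linalg.inv(R(Q2)) @ R(mu02)               # eq. (8.11)

# placeholder values for (<g qbar sigma F q>, <alpha_s/pi FF>, <qbar q>, m)
O_mu0 = np.array([-0.013, 0.012, -0.016, 0.15])
print(U @ O_mu0)                                 # approximate <O(Q^2)>
\end{verbatim}
By construction, $\hat R(Q^{2})\,\hat U(Q^{2},\mu_{0}^{2})\,\big<\vec O(\mu_{0}^{2})\big> = \hat R(\mu_{0}^{2})\,\big<\vec O(\mu_{0}^{2})\big>$, so the invariants $\vec\phi$ of eq.~(\ref{eq:8.7}) come out the same at both scales, which provides a simple consistency check of such an implementation.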
Putting everything together, our final result for the renormalization group improved two-point function of eq.~(\ref{eq:2.1}) takes the form \begin{eqnarray} \Pi^{I}(Q^{2}) & = & C_{1}^{I}\big(\mu^{2}=Q^{2}\big) + \frac{\pi h_{0}^{I}} {\beta_{0}}\left[\,\frac{1}{\alpha_{s}(Q^{2})}-\frac{1}{\alpha_{s}(\mu^{2})}\,\right] \nonumber\\ \vbox{\vskip 8mm} & + & \vec C^{I^{T}}\!\big(\mu^{2}=Q^{2}\big)\,\hat U(Q^{2},\,\mu_{0}^{2})\, \big<\vec O(\mu_{0}^{2})\big> \,, \label{eq:8.13} \end{eqnarray} where the appropriate powers of $1/q^{2}$ have been absorbed into $\vec C^{I}$. \pagebreak \newsection{Summary} We have calculated the coefficient functions of the quark and the mixed quark-gluon condensate for scalar, pseudoscalar, vector, and axialvector current correlators to all orders in the quark masses, in the framework of the operator product expansion. For completeness the coefficient functions for the unit operator as well as the gluon condensate are reviewed. The proper factorization of short- and long-distance contributions has been performed, which requires renormalization of the condensates to the order considered. It is found that it is only possible to absorb all long-distance contributions into the condensates if the OPE is expressed in terms of {\em non}-normal-ordered operators. This fact necessitates the inclusion of mixing with the unit operator under renormalization. The resulting coefficient functions were improved with the help of the renormalization group equation. This is trivial in the case of the unit operator, but is somewhat more complicated for higher dimensional operators, since mixing under renormalization has to be taken into account. The scale invariant combinations up to operators of dimension 5, which are needed for the renormalization group improvement of the condensates, are given. The inclusion of mixing with the unit operator in the scale invariant of dimension 5 is new. An application of our results to an improved determination of the current strange quark mass, as well as a discussion of higher order $\alpha_{s}$ corrections, will be presented in a forthcoming publication \cite{jam:92}. \vskip 1cm \noindent {\Large\bf Acknowledgement} \noindent We would like to thank A. J. Buras, K. G. Chetyrkin, and P. H. Weisz for helpful discussions. The Feynman diagrams were drawn with the aid of the program {\em feynd}, written by S. Herrlich. \newpage \noindent
\section{Introduction} Adding to the landscape of embedded systems, cloud computing, networking, and telecommunications -- \textit{edge computing} is proposed to provide solutions for various cyber-physical systems (CPS) and applications, such as augmented reality, predictive functions e.g. for anomaly detection, and collaborative CPS to name a few. These applications often share requirements on high availability, real-time behavior, and domain-specific sensitive data, while increasingly involving huge amounts of data and corresponding processing demands. A key advantage of edge computing is localized and enhanced computational performance, which reduces costs on the device/embedded systems side, since fewer computing and storage resources are needed there, and overcomes the latency, bandwidth, and privacy shortcomings of centralized cloud-based solutions. By adding a new third tier of computing to address these requirements and limitations, edge computing is projected to have profound implications in the coming decades~\cite{sat17, Ahmed2017, 8100873, Khan2019, toerngren16}. As a consequence, edge computing is today driven by strong market forces stemming from IT/cloud, telecom, and networking -- with multiple corresponding interpretations of "edge computing", including in terms of where the edge lies, e.g. device edge, network edge, distributed cloud, etc. Such interpretations include \begin {itemize} \item \textit{multi-access edge computing} (MEC) -- a term coined by the European Telecommunications Standards Institute\footnote{\url{https://www.etsi.org/technologies/multi-access-edge-computing} (accessed Dec. 21, 2020)}, previously referred to as \textit{mobile} edge computing. This edge computing concept is closely associated with telecom and 5G networks, for example exploiting base stations as compute facilities~\cite{Abbas.2018}, \item \textit{fog computing} -- as an extension of cloud computing that, beyond the cloud, leverages additional localized resources such as routers and gateways~\cite{Bonomi.2012}, \item \textit{cloudlets} -- as clusters of trusted computers with a strong connection to the Internet that are utilized by nearby mobile devices~\cite{Satyanarayanan.2009}. \end {itemize} The focus of this study lies in the intersection of the various {edge computing} paradigms and {cyber-physical systems} and their applications. CPS represent the “Integration of computation, networking, and physical processes”, ranging from minuscule (e.g. pacemakers) to large-scale (e.g. national power-grid) and typically involving feedback~\cite{raj10, damm16, toerngren16, Platforms4CPS2018}. While CPS have been around at least since the late 1970s (depending on how you interpret the term), they are today provided with entirely new capabilities due to improvements in various technologies, ranging from sensors, communication, computation, artificial intelligence (AI) and machine learning (ML) algorithms, to new materials, batteries, and additive manufacturing. Corresponding trends for CPS include operation in more complex environments, higher levels of automation, electrification, and CPS-cloud and development/operation integration. This paves the way for unprecedented market opportunities, leading to CPS deployment in more open environments in all kinds of application domains such as transportation, manufacturing, healthcare, and smart cities. 
This can be seen as letting the “robots out of their cages”, as exemplified by automated driving and co-bots (robots collaborating with humans~\cite{BOZHINOSKI2019150, damm16, toerngren16}). \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{"Figures/ITS_RA"} \caption{OpenFog architecture in ITS scenario (adapted from OpenFog Report \cite{OpenFog.2017})} \label{fig:ITS_tasks} \end{figure} CPS are often associated with critical applications where failures may jeopardize lives or where the lack of availability -- for example of infrastructure and manufacturing -- may have severe cost and/or safety implications. As an example of an integration between CPS and edge computing, Figure~\ref{fig:ITS_tasks} depicts a scenario with an intelligent transportation system (ITS) following the OpenFog reference architecture \cite{OpenFog.2017}. This ITS scenario provides the opportunity to examine the interactions among fog domains and cloud domains such as element management systems (EMS), service provider (SP), metro traffic services, and system manufacturer clouds. By leveraging the fog architecture, the strict requirements of such ITS applications can be accommodated. For instance, fog computing can be utilized to compute tasks obtained from a traffic control system or an autonomous vehicle; such tasks can then be performed in real time to ensure the optimal and safe operation of the ITS. \subsection*{Motivation} The strong potential of future CPS comes along with new needs for computation, aligning with the strong drivers for edge computing. Future CPS are therefore likely to integrate edge computing in various forms, from device edge to the network edge (likely adapted to the needs and constraints of the respective domains), in essence adding a new tier to existing embedded systems and cloud computing on the cyber side. This cyber-enhancement will enable the deployment of new, enhanced, and integrated cyber-physical capabilities. Our interests lie in the applications of edge computing as part of Cyber-Physical Systems (CPS), where the introduction of edge-based CPS for critical systems requires an emphasis on trustworthiness and dependability. We note that both trustworthiness and dependability represent multifaceted properties, strongly related to CPS, security, and human perception of trust \cite{Dep}, \cite{nist16}, \cite{Hanckock-Security}, \cite{NistGlossary}. While the concept of dependability in its current form has been around for some 40 years, the concept of trustworthiness is now emerging as an umbrella term associated with how we, as humans, perceive the operation of increasingly advanced and complex CPS, see e.g. \cite{nist16, Platforms4CPS2018}. This use of the term is further underpinned by the relatively recent adoption of the term in the context of trustworthy AI, visible for example through the EU efforts towards trustworthy AI (see e.g. \cite{EU.2021.web} for an overview of work by the "High-level expert group on artificial intelligence" initiated by the European Commission). The technological shift also implies that current methodologies and standards are partly inadequate to address challenges of future CPS (see e.g. \cite{Torngren2018}). This is clearly seen in the multitude of ongoing standardization efforts related to automated driving, cyber-security, and the introduction of advanced perception and AI in the context of safety-critical systems, see e.g. 
\cite{ISO-SAE-21434, SOTIF, UL4600, P2846, ISO-AWI-TS-5083} -- reflecting different aspects of designing and assuring safety and security for highly automated CPS\footnote{For example, the so-called SOTIF standard (ISO 21448) addresses safety aspects of machine learning and advanced perception systems for automated driving. As such, it is representative of similar efforts in other CPS domains (where AI/ML and advanced perception systems are also being introduced, e.g. with similar ongoing work to extend \cite{IEC}). ISO 5083 addresses safety for high levels of driving automation, including cybersecurity considerations.} as well as in updated editions of traditional safety standards, see e.g. \cite{ISO26262}. In this paper, we have chosen to emphasize the following attributes (or sub-properties) of trustworthiness: Safety, Security, and Predictability. The rationale for this choice of attributes stems from industrial needs as derived in the TECoSA research center \cite{TECoSA}; industry sees these three attributes as vital for introducing new edge-based CPS. Each of the attributes (safety, security and predictability) is facing new challenges as CPS expand to become edge-based, collaborative, autonomous, and "filled with" AI. Moreover, the mutual dependencies and trade-offs between these attributes also need to be explicitly considered. The selection of these attributes, and at this granularity, also provides a delimitation of the scope of our survey. We detail and elaborate on how we use Trustworthiness and its attributes as part of the survey in Section~\ref{sec:Classification}. \subsection*{Contribution} This paper, therefore, investigates the directions and concerns for the use of edge computing in CPS that need to be trustworthy. Considering the strong drivers and the relative novelty of the field, it becomes important to understand the specific requirements and characteristics of edge-based CPS, and to ensure that research is guided adequately to address specific gaps. We present the results of a systematic mapping study \cite{Kitchenham07}, a kind of systematic literature survey, investigating the use of edge computing for CPS with a special emphasis on trustworthiness. The main contributions of this study are: \begin {itemize} \item \textit{A detailed description of the current research efforts in edge-based CPS} -- relating to CPS domains, types of applications and system aspects, and the type of edge computing considered (MEC, fog computing and cloudlets). \item \textit{An analysis of how those research efforts address trustworthiness in terms of safety, security and predictability} -- including combinations of these properties, and their relations to various edge-computing concepts and applications. \item \textit{An analysis of the research gaps found during this study} -- including recommendations for future work directions. \end {itemize} We first review related surveys of edge computing, CPS, and overlapping studies in Section~\ref{sec:RelatedWork}. We use a mapping study (this type of systematic literature survey is described in Section~\ref{sec:method}) and a classification to structure and characterize the research literature in the intersection between edge-computing and CPS, as described in Section~\ref{sec:Classification}. We present the results in Section~\ref{sec:Results}, where a link to the data can also be found. Next, we discuss the findings, identify research gaps and treat validity in Section~\ref{sec:Disc}. 
Finally, we elaborate on future work and recommendations for research in Section~\ref{sec:FutureWork} and provide concluding remarks in Section~\ref{sec:Conclusions}. \section{Related work} \label{sec:RelatedWork} To the best of our knowledge, no previous paper has provided a broader systematic literature survey on the connection between edge-computing and CPS research. However, several studies have addressed fog-computing for specific CPS domains such as smart cities and Industry 4.0, and many literature studies were carried out in related areas of edge-computing and CPS. In the following, we briefly describe surveys with some relation to our survey and the specific perspectives they provide. As elaborated in the following, the surveys indicate needs to further address trustworthiness related properties of relevance for the use of edge computing in CPS \cite{Gonzalez2016, tocze2018, sat17, Ahmed2017, Khan2019}. \subsection*{Surveys on Edge computing in CPS} In \cite{10.1145/3057266} a survey of fog computing for sustainable smart cities is provided, revealing that (i) cloud/fog collaboration ("cloud companion support"), (ii) data analytics, (iii) multi-protocol support at communication level, (iv) mobility, and (v) security and privacy represent commonly addressed research topics in fog computing applications. The paper draws a conclusion that both IoT and fog computing are comparatively immature fields, motivating a need for a focus on platforms for testing, experimentation, and evaluation. The importance to support multiple communication and application-level protocols, privacy and security (including authenticity, confidentiality, and integrity), and distributed intelligence is highlighted. The topic of fog computing in the context of Industrial Internet of Things (IIoT) and Industry 4.0 has received a lot of attention in the research literature. For instance, \cite{Basir2019FogCE} reviews fog infrastructure and protocols in IIoT applications. Several communication and networking challenges are treated including (i) energy efficiency (e.g. balancing quality of service with energy consumption), (ii) network throughput and storage capacity (dependent on decisions of where to use and store data), (iii) resource allocation and spectrum use (as a challenge for network performance with impact on many quality of service parameters), (iv) latency, dealing with real-time connectivity requirements and the end-to-end chain of networking and processing (with several issues affecting latency such as resource allocation, network architecture, and node storage and energy capabilities), and (v) cache enabled edge devices (to reduce the load on backhaul links, and with schemes for efficiently accessing data). In \cite{CAIZA2020e03706}, research papers on fog computing in the context of Industry 4.0 are surveyed. Industrial IoT protocols and applications are examined in terms of their architecture, latency, security, and energy consumption, and the authors highlight several challenges with industrial fog computing. In a more recent survey, \cite{Cao2021SurveyEdgeEdgeCloudComputingAssisted} considers edge computing-assisted CPS from a similar perspective of quality-of-service optimization. They define a series of critical challenges including latency, energy consumption, security and privacy, and system reliability. In addition to classifying studies into these categories, they also summarize mechanisms for addressing them. 
In \cite{Sitton.2019}, the authors conduct a survey on four edge computing reference architectures proposed by Intel-SAP \cite{intel}, FAR-Edge Project \cite{FAR}, Edge Computing Consortium \cite{ECC}, and the Industrial Consortium for Industry 4.0 \cite{IIC}. The aforementioned reference architectures are all based on a three-layer model for edge computing, which integrates all layers to process the service. We note that these reference architectures focus on edge computing for industrial environments, increasing the importance of ensuring e.g. system reliability and security. Although these four reference architectures have contributed partially to trustworthiness attributes with an emphasis on security, few of them address other trustworthiness attributes such as safety and availability for edge-based CPS. \subsection*{CPS surveys} There are several directions of CPS literature surveys, focusing e.g. on applications of CPS, see e.g. \cite{Chen-2017}, specific CPS domains (such as manufacturing/industry 4.0 or electrical grid), e.g. \cite{7740849, Lu-2017}, or on specific properties, such as security~\cite{10.1145/3313150.3313228}. Literature surveys of CPS highlight connectivity, IoT, big data, and cloud interactions. Specific mention of fog or edge computing appears to be relatively rare, and the surveyed literature generally focuses on technical or methodological aspects applicable to CPS in distributed computer system settings such as interoperability and performance. Challenges highlighted by \cite{Lu-2017, Chen-2017} include complexity of CPS, interoperability, cybersecurity, safety, dependability, and energy consumption. CPS are often associated with critical applications; this is well recognized in CPS roadmaps and research challenge formulations~\cite{SRA-ECSEL-19, Platforms4CPS2018, BOZHINOSKI2019150}. Similarly, the increasing complexity of CPS, with connectivity, collaboration, and more advanced algorithms including artificial intelligence and deep learning, poses both new opportunities and challenges~\cite{SRA-ECSEL-19, Platforms4CPS2018}. In the context of critical applications, this is especially true for properties such as security and safety, which face new challenges with new attack surfaces and faults/failure modes with complex behaviors and interactions in more open environments, requiring new approaches for system development, operation, and maintenance, see e.g. \cite{raj10, damm16, SRA-ECSEL-19, Platforms4CPS2018}. Another aspect is the increasing level of automation of CPS. NASA in \cite{NASA-AssuringSafety} provides a comprehensive survey on safety assurance of increasingly autonomous systems. They identify open challenges regarding (i) methodologies for safety assurance (e.g. how do we go about designing and reasoning about the safety of autonomous CPS, and in providing automated reasoning to assist developers?), (ii) architecting autonomous systems to support assurance with an emphasis on pervasive monitoring, (iii) dealing with human-autonomous CPS interactions, and (iv) considering ethics for autonomous systems. These findings are also supported by comparisons of related agendas and roadmaps, see e.g. \cite{Platforms4CPS2018}, and by the NIST CPS architecture framework \cite{NIST-CPS}. The NIST framework was developed based on consultations with experts. It resulted in the identification of key life-cycle phases and aspects of CPS, with the aspects representing groupings of cross-cutting concerns of relevance for one or more system stakeholders. 
Examples of key aspects identified include human-CPS interaction, trustworthiness, timing, data, and composability. The Platforms4CPS survey of agendas and roadmaps provided recommendations that address research, innovation, societal, legal, and business challenges related to CPS. Particular emphasis was placed on trust-related concerns and CPS edge computing was highlighted as a specific research challenge. \subsection*{Edge computing surveys} Surveys of the various flavours of edge computing include those focusing on characteristics and requirements, e.g. \cite{sat17, Gonzalez2016, Khan2019}, resource management, \cite{Mao2017, tocze2018}, reference architectures \cite{Sitton.2019}, or specific technological instances such as multi-access or mobile edge computing, \cite{Mao2017, Taleb-7931566} and fog computing, \cite{7867731, Gonzalez2016, 8100873}. In \cite{Khan2019}, a comprehensive survey of literature on edge computing paradigms is presented, providing characteristics of edge computing systems including fog computing, cloudlets, and mobile edge computing. Based on the survey, requirements for enabling edge computing systems are summarized, including availability, reliability, and security. Low-cost fault-tolerance and security are put forward as open challenges. Additional application and challenge perspectives are provided by \cite{sat17}, considering requirements for IoT applications including wearable cognitive assistance and favorable properties of edge-based realizations including availability, privacy, and latency. Challenges ahead, including complexity, security, and viable business models, are discussed. A previous survey on mobile edge computing of the same authors \cite{Ahmed2017} discusses requirements and challenges. Requirements mentioned for edge computing include reliability, scalability, resource management, security, interoperability, and business models. Open challenges put forward include seamless edge execution handover, eco-systems and business models enabling collaboration, lightweight security and privacy, and real-time data processing at scale. \cite{Gonzalez2016} investigates fog computing and specifically highlights challenges related to performance, security, and governance. In \cite{8100873}, a survey on fog computing, focusing on algorithms and architectures, is presented. The paper describes expectations and the suitability of fog computing for future Tactile Internet applications involving physical tactile experiences and remote real-time control, with example applications in telesurgery, and vehicle platooning. Requirements of connectivity and latency are elaborated for such applications, including expected end-to-end latencies of \SI{1}{\milli\second} or less and a maximum of \SI{1}{\second} outage per year. Challenges discussed include the design of higher layer APIs and protocols on top of lower layer protocols (e.g. provided by 5G), algorithms for tactile applications as well as novel resource management algorithms. A complementary perspective is taken by \cite{tocze2018}, by focusing on resource management independent of the type of edge computing system. The findings indicate a relatively low coverage of non-functional\footnote{Sometimes such properties are also referred to as "extra-functional", meaning that they specify e.g. how well or how to scale one or more functions.} properties in the literature; those covered in the paper include response time, energy, availability, and resource efficiency (in terms of resource utilization). 
Another study \cite{Mao2017} also focuses on resource management but in the specific context of mobile edge computing, emphasizing joint radio-and-computational resource management. Privacy and energy-related issues are included. In \cite{bakhshi_dependable_2019}, a systematic literature review was conducted on dependability and fog computing. This study provides an overview of the current state of the research, analyzing dependability attributes, sources of threats, and threat management techniques. The authors identified reliability and availability as the most studied dependability attributes. Node failure and link or path failure were the main sources of failure reported in the literature. The study also focused on the means applied to ensure dependability, identifying redundancy techniques as the most common methods. The relation between safety and security in the solutions proposed for fog computing was also considered, finding very few studies that address both topics. Finally, it identified certain research gaps, e.g. reintegration after fault recovery in distributed systems. The survey \cite{renSurveyEndEdgeCloudOrchestrated2019} presents an overview of the emerging edge computing paradigms (fog, MEC, cloudlet), from the perspective of orchestrating the storage and computing resources of end-devices, edge servers and the cloud, which the authors call \textit{end-edge-cloud orchestration}, and the paradigms are compared and evaluated in terms of offloading, caching, security and privacy. In the study, the authors also argue that transparent computing\footnote{An extension to the classical von Neumann architecture, where the lowest layers of a computer system are extended over a network. By leveraging block-streaming and just-in-time compilation, data and instructions can be fetched and executed over a network instead of the local bus.} shares this commonality, and thus include it in the survey. However, this study appears to be unique in this regard, and we chose not to include transparent computing in our final categorization. Furthermore, the ambitiously titled survey \cite{Yousefpour2019AllOneNeedsKnowFog}, promising "all one needs to know" about edge computing paradigms, includes taxonomies of fog, MEC and cloudlet architectures, and evaluations of their quality-of-service, security, RAS (reliability, availability, survivability) and management, to name a few objectives. The paper concludes by identifying challenges and research directions, several of which relate to trustworthiness aspects including resilient fog system design (considering reliability and availability, and trade-offs w.r.t. latency, throughput and security), fog system service level agreements, and various further security aspects such as trust and authentication in heterogeneous fog systems. \section{Method}\label{sec:method} A systematic mapping study is a well-established methodology from the Software Engineering research community that provides a structured classification of papers (a map of the field), where the classification relates to the corresponding research questions \cite{PETERSEN2008}. Systematic mapping studies are conducted by researchers following established guidelines and well-defined steps. Our systematic mapping study follows the guidelines presented in \cite{PETERSEN20151, Kitchenham07}. The process is adapted from \cite{abbaspour_asadollah_10_2017}, and consists of the following main steps as presented in Figure \ref{fig:workflow}. 
The details of each step are presented in the next subsections. \begin{figure}[ht] \centering \includegraphics [scale=.8]{Figures/ResearchProcess.pdf} \caption{Workflow of the research method process} \label{fig:workflow} \end{figure} \subsection*{Definition of research questions (Step 1)} \label{sec:RQs} The main goal of this study is to investigate the use of edge computing for Cyber-Physical Systems (CPS) and to find research gaps. This goal is refined into the following research questions (RQs): \begin{description} \item[RQ1:] How are edge computing solutions used for, or considered together with, CPS in research? \textit{Objective}: to identify the areas where edge computing is being investigated, which technologies are used, and how trustworthiness is treated in this context. \item[RQ1.1:] Which CPS domains are in the focus of edge computing? \item[RQ1.2:] Which edge computing solutions are used for CPS? \item[RQ1.3:] Which attributes (or aspects) of trustworthiness are addressed within edge computing for CPS? \item[RQ2:] What types of applications within CPS are being treated with edge computing? \textit{Objective}: to identify which application types are using edge techniques in the field of CPS, and to identify research gaps among them. \item[RQ3:] What type of research is being conducted within edge computing for CPS? \textit{Objective}: to characterize what the individual studies emphasize, in terms of research contribution. \item[RQ4:] What other factors are influencing the development of edge computing for CPS? \textit{Objective}: to analyze other trends in the development of edge computing technology within CPS. \item[RQ4.1:] What classes of Artificial Intelligence (AI) are being used in the edge-based CPS context? \item[RQ4.2:] What type of edge computing solutions for CPS consider energy efficiency? \end{description} \subsection*{Identification of search string and source selection (Step 2)} The main focus of this step is to identify a search string and to select the database sources to which the search is applied, in order to achieve both good coverage of existing research on the topic and a manageable number of studies \cite{Kitchenham07}. \subsubsection*{Search string} A relevant search string should be able to return research works from the databases that address the study's RQs. For our research, we are interested in the intersection of two \textit{domains}, namely those of edge computing solutions and cyber-physical systems. To characterize each domain, synonyms of the main keywords and terms related to the respective domains are combined using the logical \textit{OR} operator. The following list includes the terms used to define each of the domains. \begin{multicols}{2} \paragraph{Domain A} \begin{itemize} \item Edge computing \item Fog computing \item Cloudlet \end{itemize} \paragraph{Domain B} \begin{itemize} \item Cyber-physical systems \item CPS \item Industry 4.0 \end{itemize} \end{multicols} The wildcard character "$*$" is used to provide results with and without hyphenation. To compose a search string for such an intersection, the logical operator \textit{AND} is used, so that only studies belonging to both sets are returned. The final search string is shown in \autoref{tab:string}.
\begin{table}[ht] \centering \caption{The final search string} \begin{tabularx}{0.8\columnwidth}{| >{\arraybackslash}X |} \hline (``edge computing'' OR ``fog computing'' OR cloudlet) AND \\ (``cyber*physical'' OR CPS OR ``industry 4.0'') \\ \hline \end{tabularx} \label{tab:string} \end{table} \subsubsection*{Source selection} In order to find the existing relevant studies on this topic, two scientific online digital libraries were chosen: \textit{IEEE Xplore Digital Library}\footnote{IEEE Xplore Digital Library [Online]. Available: \url{https://ieeexplore.ieee.org/Xplore/home.jsp}} and \textit{ACM Digital Library}\footnote{ACM Digital Library [Online]. Available: \url{https://dl.acm.org/}}. The presented search string is used to query the studies from the sources, with the necessary adaptations made in the syntax. The query resulted in a total of 667 candidate studies. The total number of retrieved studies from each database is shown in \autoref{tab:tab3}. \begin{table}[ht] \centering \caption{Number of studies retrieved from each library catalog} \begin{tabular}{l c} \toprule \textbf{Digital Library} & \textbf{Search Results} \\ \midrule ACM Digital Library & 338 \\ IEEE Xplore Digital Library & 329 \\ \midrule \textbf{Total} & \textbf{667} \\ \bottomrule \end{tabular} \label{tab:tab3} \end{table} \subsection*{Study selection criteria (Step 3)} This step shortlists the relevant studies identified in the previous step based on a set of inclusion/exclusion criteria. For a study to be classified as relevant, it should meet all the inclusion criteria at once, and none of the exclusion ones. The inclusion criteria cover all the studies referring to edge computing within the domain of CPS or Industry 4.0. The exclusion criteria determine which studies are to be excluded: we excluded studies that are duplicates of other studies; studies that are not peer-reviewed, tutorial papers, and poster papers; survey studies are also removed since they lie outside the scope of the mapping. Instead, the relevant survey studies are covered in Section~\ref{sec:RelatedWork}. The selection process includes several steps and is detailed in Figure~\ref{fig:selectionprocess}. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/StudySelectionProcess.pdf} \caption{Overview of the study selection process} \label{fig:selectionprocess} \end{figure} After obtaining the total number of studies from the automatic search, the first step in selecting the relevant studies is the removal of duplicates. We used Zotero\footnote{https://www.zotero.org/}, a widely used open-source tool, to identify and remove these duplicates. In the next step, all survey studies were removed, leaving 649 studies. In the following step, "Title \& abstract exclusion", the studies were divided amongst the participating researchers/authors for a review of the paper titles and abstracts, resulting in papers being flagged as "Relevant" (R), "Not clear" (NC), or "Not relevant". A study was marked as relevant if it met all the inclusion criteria and none of the exclusion criteria, as not relevant if it lacked one of the inclusion criteria or met at least one of the exclusion criteria, or as not clear if there were uncertainties arising from the title and abstract review. All studies flagged as NC were examined more closely by means of full-text skimming. Any studies that remained NC after this were brought to discussion.
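As an aside, the following minimal Python sketch makes the combination of the two term sets concrete: a title/abstract record qualifies only if it contains at least one Domain A term and at least one Domain B term. This is only an illustration of the boolean structure of the search string (the same logic later used to filter the snowballed references); the actual queries were issued through the digital libraries' own search interfaces, and the function and variable names below are ours, not part of the study's tooling.
\begin{verbatim}
import re

# Term sets mirroring Domain A and Domain B of the search string (illustrative).
DOMAIN_A = ["edge computing", "fog computing", "cloudlet"]
# "cyber*physical" is rendered as a pattern covering "cyber-physical",
# "cyber physical" and "cyberphysical"; \bcps\b matches the acronym CPS.
DOMAIN_B = [r"cyber[\s-]?physical", r"\bcps\b", r"industry 4\.0"]

def matches_any(text, terms):
    """Return True if the text contains at least one of the given terms."""
    return any(re.search(term, text, flags=re.IGNORECASE) for term in terms)

def is_candidate(title, abstract):
    """A record qualifies if it matches Domain A AND Domain B."""
    text = "{} {}".format(title, abstract)
    return matches_any(text, DOMAIN_A) and matches_any(text, DOMAIN_B)

# Hypothetical record, for illustration only.
print(is_candidate("Fog computing for Industry 4.0", "We study latency ..."))  # True
\end{verbatim}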
\subsubsection*{Snowballing} After identifying the primary studies, the snowballing step aimed at identifying additional papers by using the references of the already identified relevant papers. This was accomplished automatically using the free and open-source reference management software Zotero together with the AI-backed search engine Semantic Scholar (S2), by integrating their web APIs\footnote{Git repository [Online]. Available: \url{https://gits-15.sys.kth.se/nilsjor/zotero-s2-api}}. The snowballing resulted in the retrieval of 4705 references. After removing duplicates, 3792 studies remained. These were then compared with the list from the original search string, the 667 papers shown in Figure \ref{fig:selectionprocess}, reducing the number to 3709. The resulting list was finally filtered using the same search string described above; the studies had to include a term related to CPS and a term related to edge computing. This process reduced the final results to 24 references. Finally, the content of the studies was analyzed, adding a total of 17 new sources from the snowballing (see the bottom left part of Figure~\ref{fig:selectionprocess}). At the end of the study selection process, there were a total of 224 studies flagged as relevant, and the data mapping step could begin. \subsection*{Data mapping (Step 4)} The next step in the systematic mapping study is to establish how the relevant studies are to be classified, in order to answer the underlying research questions of the study. This section provides the definitions used in the classification scheme. Beyond the general publication data like title, name of the author(s), and publication year, seven additional facets are considered. The first is the \textit{research type} facet presented in \cite{Wieringa}, which is adopted unaltered in our study, as in \cite{abbaspour_asadollah_10_2017}. The others are \textit{CPS domain}, \textit{edge implementation}, \textit{application class}, \textit{trustworthiness}, \textit{artificial intelligence}, and \textit{energy efficiency}. The initial categorization is based on the knowledge acquired from discussions with experts in the respective fields. Next, weekly meetings were held to refine the categories further. We defined a well-structured form based on the classification scheme and the research questions in order to extract the data from the selected relevant studies. A Microsoft Excel spreadsheet was used to organize and store the extracted information from each relevant study for subsequent analysis. \subsection*{Analysis of results and discussion of insight (Step 5)} The last step of the systematic mapping study process is results analysis, where the map of the field is produced from the relevant studies and a comprehensive analysis of the studies is then performed to address the research questions. We used multiple methods to produce the maps, such as bubble charts, pie charts, and line graphs. The produced maps and the derived analysis are presented in Section \ref{sec:Results}, and a discussion of the insights is presented in Section \ref{sec:Disc}. \section{Classification Schemes Definition} \label{sec:Classification} This section explains all the classifications in detail. \subsection*{Research type, adopted from \cite{Wieringa}} \begin{itemize} \item \textbf{Validation research} concentrates on investigating a proposed solution, which is novel and has not yet been implemented in practice.
Investigations are carried out systematically, i.e., by means of prototyping, simulation, experiments, systematic mathematical analysis, and mathematical proofs of properties. \item \textbf{Evaluation research} focuses on evaluating a problem or an implemented solution in practice, i.e., through case studies, field studies, and field experiments. \item \textbf{Solution proposal} provides a novel solution for a problem or a new significant extension to an existing technique. \item \textbf{Philosophical paper} describes a new way of looking at things by structuring the field in the form of a conceptual framework or taxonomy. \item \textbf{Opinion paper} expresses the author's opinion on whether a certain technique is good or bad. \item \textbf{Experience paper} draws on the personal experience of the author, i.e., what and how something has been done in practice. \item \textbf{Survey paper} represents research where data and results are taken from other, already existing publications, and conclusions are drawn regarding trends, challenges, areas of interest, future work, etc. \end{itemize} \subsection*{CPS Domain} \begin{itemize} \item \textbf{Telecommunication (Telecom)} includes communication infrastructures, wireless communication, applications of 5G mobile networks, etc. \item \textbf{Healthcare} includes the technologies to monitor and give medical care to patients in hospitals as well as to customers outside the hospital. \item \textbf{Manufacturing} includes sensing, actuation, big data analysis, communication, control, and optimization of manufacturing systems. Typical CPS-related concepts include cloud manufacturing, Industry 4.0, and digital twins. \item \textbf{Infrastructure} includes smart buildings/homes and smart cities. The CPS technologies enable remote monitoring and control and hence have the potential to improve the safety, security, and energy efficiency of, for example, smart buildings. \item \textbf{Energy} encompasses energy-related considerations in, for example, smart grids, power plants, and household electricity generation with renewable energy. Energy has been identified as one key aspect related to sustainability. \item \textbf{Transportation} refers to different modes for transporting people and goods (cars, trucks, buses, trains, etc.). Major applications include, for example, autonomous vehicles, vehicle-to-X communication, and intelligent transportation systems. \item \textbf{Other} refers to any other CPS domain mentioned. \item \textbf{Not specified} -- relevant if the work is independent of any explicitly mentioned CPS domain. \end{itemize} \subsection*{Edge implementation/concepts} Among the many interpretations of edge computing, we find that ``the edge'' is given different meanings, as already mentioned in the introduction. We focus on the following implementations, or concepts, of edge computing which we understand as mainstream, see e.g. \cite{8016213, ELAZHARY2019105}. \begin{itemize} \item \textbf{Fog computing} can be seen as an extension of cloud computing introduced by Cisco Systems in 2012 \cite{Bonomi.2012}. It enables computing, storage, networking, and data management from the core of the network to its edges. Therefore, network performance can be enhanced given that the processes are not only executed in centralized cloud servers but also along the path to them.
\item \textbf{Multi-Access Edge Computing} (MEC) is a platform that provides IT and cloud-computing capabilities within the radio access network (RAN) in 4G and 5G, in close proximity to mobile subscribers \cite{Abbas.2018, Taleb-7931566}. In particular, it is located at the network edge and provides computation capabilities and storage resources to nearby low-energy, low-resource mobile devices. \item \textbf{Cloudlet} is another direction in distributed mobile computing that shares many traits with MEC. Specifically, a cloudlet refers to a cluster of trusted computers with a strong connection to the Internet that is utilized by nearby mobile devices. Moreover, cloudlets are located in the middle tier of a 3-tier continuum, i.e., mobile device-cloudlet-cloud, and typically one hop away from mobile devices. The idea is to offload computation from mobile devices to virtual machine (VM) based cloudlets located on the network edge. Therefore, cloudlets need infrastructure with VM capability \cite{Satyanarayanan.2009}. \item \textbf{Other} definitions of edge computing are used in the surveyed literature. Some research articles that we reviewed proposed their solution in terms of other related edge computing concepts (such as mist computing, vehicular edge computing, etc.). Thus, we classify those articles as "other". \item \textbf{Not specified} has been assigned to studies without any explicit reference to any type of edge computing implementation. \end{itemize} \subsection*{Application class} With application class we refer to the application or system aspect in focus for the research. \begin{itemize} \item \textbf{Resource management} considers edge-based methods for handling system resources, such as scheduling, orchestration, migration, and distribution of computation, storage, etc. \item \textbf{Collaborative CPS} deals with systems where information is exchanged between several CPS, typically with edge computing infrastructure foreseen to support computation, for the purposes of collaboration. \item \textbf{Real-time application analytics} concerns applications where edge computing can be leveraged to bring demanding real-time computation closer to the edge devices. \item \textbf{Human-machine interaction} covers applications where edge computing can be used to provide low-latency feedback to human operators, such as augmented reality and cognitive assistance. \item \textbf{Networked control systems} include CPS with closed-loop feedback control over the edge, and/or dynamical systems analyzed using control theory. \item \textbf{Autonomous systems} describes edge-device systems with a high degree of autonomy, even in the absence of other devices. \item \textbf{System-internal monitoring} denotes methods for measuring or otherwise detecting system characteristics, such as energy consumption, latency, or faults/failures. \item \textbf{Software architecture} refers to structural/behavioral arrangements and configurations of software and hardware components, e.g. related to concepts such as software-defined networking and blockchains. \end{itemize} \subsection*{Trustworthiness} As mentioned in the introduction, we use the term trustworthiness as an umbrella property, focusing on the attributes of safety, security and predictability. This choice of attributes implies that the way we use the term comes relatively close to the concept of dependability, defined as "the ability to deliver service that can justifiably be trusted".
Dependability encompasses the attributes of availability, reliability, safety, integrity, maintainability, and, more recently, security, and in addition considers means to deal with these attributes (such as fault removal and tolerance) and "threats" to dependability (faults, errors, and failures) \cite{Dep}. Trustworthiness is increasingly adopted in the context of CPS, see e.g. \cite{nist16, Platforms4CPS2018, EU.2021.web}. Trustworthiness as a concept reflects an emphasis on the end properties of a system, where the resulting trust will stem from the integration of cyber- and physical parts, and their interactions with humans and other systems. This concept thus extends well beyond pure computing systems, and is suitable for CPS. Considering this adoption and usage of the term, trustworthiness has been our choice. Given our emphasis on three trustworthiness attributes, the corresponding classification is as follows. \begin{itemize} \item \textbf{Safety} commonly concerns either an absolute or a risk-related property; we exemplify here with the latter interpretation, viewing safety as the "absence of unacceptable risk" from conditions that can lead to harm to people, property, or the environment, see e.g. \cite{IEC}. Safety considerations typically result in requirements on how a system is used and interacts with its environment, and on availability- and reliability-related properties of subsystems/components and their interactions. According to Firesmith, safety can be seen as the degree to which accidental harm is prevented, detected, and reacted to \cite{Firesmith}. However, newer safety standards are beginning to highlight that harm may also arise from malicious intent and usage of a system; thus, safety will increasingly rely on protection from attacks (security). For example, the ISO 26262 edition from 2018 has the following statement: \textit{”5.4.2.3 The organization shall institute and maintain effective communication channels between functional safety, cybersecurity, and other disciplines that are related to the achievement of functional safety. EXAMPLE 1 Communication channels between functional safety and cybersecurity in order to exchange relevant information (e.g. in the case it is identified that a cybersecurity issue might violate a safety goal or a safety requirement, or in the case a cybersecurity requirement might compete with a safety requirement).”} \cite{ISO26262}. While it is important that such a point and reference to cyber-security is made, it is also evident that methodological guidance on how to accomplish this is urgently required, representing a research topic that is drawing (and requiring much more) attention, as seen for example from the publications in recent Safecomp conferences. \item \textbf{Security}, as opposed to safety, can be seen as the degree to which malicious harm is prevented, detected, and reacted to \cite{Firesmith}. Security is in itself multi-attribute, taken for example to encompass authentication, authorization, integrity, confidentiality, and availability (see Chapter 4 in \cite{SecHandbook}). The increasing connectivity and the introduction of edge-based CPS promise better ways to deal with security attacks (e.g. by local monitoring and response), but also expose more attack surfaces, where attackers may leverage the cyber, physical, and human dimensions (and their combinations) for attacks.
\item \textbf{Predictability} is a term traditionally associated with real-time computing systems, referring to the ability to satisfy the timing requirements of critical tasks with some level of guarantee (depending on the static or dynamic nature of the systems) \cite{Stankovic}. Edge-based CPS will be dynamic in nature, with varying loads, partial failures or losses (e.g. loss of message packets), potential migration of computations, etc. To deal with real-time critical applications, a number of timing requirements may be relevant, such as precise timing, age of data, and the corresponding detection of timing overruns \cite{TorngrenJRTS}. This relates closely to the availability and resource management of end-to-end computation chains in an edge-based CPS. With predictability, we refer to both hard and soft real-time capabilities, including approaches that in some way address availability and resource management. \item \textbf{Combinations} of several trustworthiness properties and their trade-offs will normally have to be considered in edge-based CPS. We therefore also specifically searched for papers that considered combinations of these properties. \item \textbf{None} has been assigned to studies without any specific reference to a trustworthiness property. \end{itemize} \subsection*{Artificial intelligence} In this paper, the primary interest is to understand which classes of artificial intelligence methods have been used in the context of trustworthy edge computing. Note that AI methods can either be applied to enhance the capabilities of the edge computing infrastructure or used within applications on top of edge computing. For the sake of the present study, we have divided the AI technologies into the following classes: \begin{itemize} \item \textbf{Machine reasoning} refers to symbolic ontology-based methods working with declarative knowledge, including logical reasoning. \item \textbf{Machine learning} includes numeric and symbolic learning methods, including supervised, unsupervised, and reinforcement learning, and combinations of those. \item \textbf{Model-based methods} include methods used for procedural knowledge processing, including state space exploration and AI planning. \item \textbf{Other} refers to methods that are not included in the categories above, such as evolutionary methods and game theory. \item \textbf{None} has been assigned to the studies without any specific reference to artificial intelligence. \end{itemize} \subsection*{Energy efficiency} This is a binary category where we have identified whether a paper considers energy efficiency at the application level and/or in the computing infrastructure. \section{Results} \label{sec:Results} In this section we present the findings from the survey, with a subsection for each of the four research questions introduced in Section~\ref{sec:method}. The outcomes of the research questions are illustrated in the form of charts and/or graphs. The complete list of the relevant studies and their classifications can be found online\footnote{Available in CSV-format: \url{https://zenodo.org/record/5112378}}. \subsection*{RQ1: How are edge computing solutions used for, or considered together with CPS in research?} Figure~\ref{fig:PieChartCPS} shows the distribution of the CPS domains among the literature studied. The results show that the biggest group of studies addresses CPS in general, without specifying the domain. Among the ones that are related to a specific domain, manufacturing has the largest representation.
\begin{figure}[ht] \centering \includegraphics[scale=1.0]{Figures/PieChartCPS-eps-converted-to.pdf} \caption{Distribution of the CPS domains considered in all relevant studies} \label{fig:PieChartCPS} \end{figure} The distribution of the different edge implementations is shown in Figure~\ref{fig:PieChartEdge}. Fog computing is overwhelmingly the largest category, covering nearly half of the total number of publications. \begin{figure}[ht] \centering \includegraphics[scale=1.0]{Figures/PieChartEdge-eps-converted-to.pdf} \caption{Distribution of edge implementations considered in all relevant studies} \label{fig:PieChartEdge} \end{figure} Finally, the distribution of the trustworthiness attributes is shown in Figure~\ref{fig:PieChartTrust}. The pie chart on the left shows that \SI{48}{\percent} of the studied papers do not consider any of the trustworthiness attributes, and only \SI{3}{\percent} of the studies consider all the trustworthiness attributes. The remaining \SI{49}{\percent} consider one or two attributes, and the chart on the right shows the break-down of publications in this category. Safety, security, and predictability are each represented by a primary color, and their intersections represent publications that mention two of them. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/PieChartTrust-eps-converted-to.pdf} \caption{Trustworthiness concepts. The left chart shows how many distinct aspects of trustworthiness are considered, and the right chart shows a detailed breakdown of the "One or more" (brown) category.} \label{fig:PieChartTrust} \end{figure} An interesting aspect to consider is the evolution of the trustworthiness attributes in edge-based CPS over time, as seen in Figure~\ref{fig:YearTrust}. It is possible to observe how, despite the increase in the number of publications, the ratio of those that consider trustworthiness attributes remains relatively constant. The low number of publications in 2020 is due to the fact that the study began at the beginning of that year. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/YearTrust-eps-converted-to.pdf} \caption{Trustworthiness categories over time. Note that data from the year 2020 is incomplete, owing to the timing of the study.} \label{fig:YearTrust} \end{figure} Finally, the relations between CPS domain, edge implementation, and trustworthiness are presented in Figure~\ref{fig:CPSEdgeTrust}. The $x$-axis shows the CPS domains, and the $y$-axis shows the edge computing implementations. In each intersection, the total number of publications is shown, as well as the relative coverage of the three trustworthiness attributes. In order to reduce the complexity of the pie charts, the intersections between the trustworthiness attributes are not shown. Instead, studies that address more than one attribute are counted once for each contribution. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/CPSEdgeTrust-eps-converted-to.pdf} \caption{Relation between CPS domain, edge implementation and trustworthiness} \label{fig:CPSEdgeTrust} \end{figure} Figure~\ref{fig:CPSEdgeTrust} shows that manufacturing using fog computing as the edge implementation has received the largest attention. Some gaps are also noticeable, such as healthcare or energy applications using cloudlet-based edge computing.
\subsection*{RQ2: What types of applications within CPS are being treated with edge computing?} Figure~\ref{fig:PieChartApp} shows the distribution of the application types, revealing that resource management and real-time application analytics are the most studied application types for edge-based CPS. \begin{figure}[ht] \centering \includegraphics[scale=1.0]{Figures/PieChartApp-eps-converted-to.pdf} \caption{Distribution of application types considered in all relevant studies} \label{fig:PieChartApp} \end{figure} Figure~\ref{fig:CPSApp} shows how the application types are distributed among the CPS domains. The $x$-axis represents the number of publications and the $y$-axis shows the different domains. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/CPSApp-eps-converted-to.pdf} \caption{Applications distributed amongst the domains in all relevant studies} \label{fig:CPSApp} \end{figure} Finally, the relation between application type, edge implementation, and trustworthiness is presented in Figure~\ref{fig:AppEdgeTrust}. It closely resembles Figure~\ref{fig:CPSEdgeTrust}, but with the application class on the $x$-axis rather than the CPS domain. Real-time application analytics and resource management using fog computing represent the largest groups. There are some research gaps, where none or very few publications have been found, e.g. human-machine interaction using MEC and system-internal monitoring using cloudlets. It can also be noticed that human-machine interaction received the least attention in the surveyed research. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/AppEdgeTrust-eps-converted-to.pdf} \caption{Relation between application type, edge implementation and trustworthiness} \label{fig:AppEdgeTrust} \end{figure} \subsection*{RQ3: What type of research is being conducted within edge computing for CPS?} The distribution of the research types is shown in Figure~\ref{fig:PieChartType}. More than half of the studies have been classified as solution proposals, while evaluation and validation only represent \SI{21}{\percent} and \SI{9}{\percent}, respectively. The remaining studies are shared among opinion, philosophical, and experience papers. \begin{figure}[ht] \centering \includegraphics[scale=1.0]{Figures/PieChartType-eps-converted-to.pdf} \caption{Distribution of research types in all relevant studies} \label{fig:PieChartType} \end{figure} Figure~\ref{fig:AppEdgeType} shows the relation between application type, edge implementation, and research type. Regarding the research type, it is evident that solution proposal is the predominant category in almost every group. Evaluation and validation tend to occupy the second and third positions, but there are quite a few groups where these categories are not present. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/AppEdgeType-eps-converted-to.pdf} \caption{Relation between application type, edge implementation and research type} \label{fig:AppEdgeType} \end{figure} \subsection*{RQ4: What other factors are influencing the development of edge computing for CPS?} The distribution of the AI methods used in the studies is shown in Figure~\ref{fig:PieChartAI}, illustrating that two-thirds of the publications do not mention any kind of AI. Among the studies that use AI, learning methods are the most common ones.
\begin{figure}[ht] \centering \includegraphics[scale=1.0]{Figures/PieChartAI-eps-converted-to.pdf} \caption{Distribution of AI methods considered in all relevant studies} \label{fig:PieChartAI} \end{figure} However, the evolution of those categories over time gives a slightly different picture, as shown in Figure~\ref{fig:YearAI}. It can be seen that the interest in learning methods within edge computing for CPS increased substantially in 2019. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/YearAI-eps-converted-to.pdf} \caption{Distribution of AI over time. Note that data from the year 2020 is incomplete, owing to the timing of the study.} \label{fig:YearAI} \end{figure} Finally, with energy efficiency chosen to represent sustainability, only \SI{9}{\percent} of the studies consider some aspect of energy efficiency. Figure~\ref{fig:CPSEnergy} shows the distribution of studies considering energy efficiency per CPS domain. \begin{figure}[ht] \centering \includegraphics[scale=0.67]{Figures/CPSEnergy-eps-converted-to.pdf} \caption{Energy efficiency as considered within the different domains in the relevant studies} \label{fig:CPSEnergy} \end{figure} \section{Discussion} \label{sec:Disc} Our systematic mapping study, as well as the related surveys, clearly paints a picture of edge-based CPS as an emerging field that addresses multiple types of applications. It is clear that the initial drive towards edge computing has been focused on non-critical applications, but the momentum and opportunities are likely to lead to increased adoption of edge computing in CPS, and therefore to a need to increasingly deal with multiple attributes of trustworthiness. In the following, we first discuss the findings from our systematic mapping study and then contrast them with the findings of the related surveys. Finally, we discuss the validity of our mapping study. \subsection*{Discussing the findings} Among the edge computing solutions, fog computing is the most frequently considered implementation in the analyzed studies, as seen in Figure \ref{fig:PieChartEdge}. When looking at the distribution within the CPS domains, as in Figure \ref{fig:CPSEdgeTrust}, it can be seen that only in telecom is MEC more frequent than fog computing. This finding is natural given the connection between the telecom industry and MEC. A possible explanation for the limited number of papers covering MEC among the CPS domains could be that the telecommunication companies have a strong tradition of patenting (rather than writing papers); note that patents are not covered in this mapping study. By contrast, manufacturing, which is much more focused on fog computing according to the surveyed publications, is also the CPS domain with the highest number of studies about edge-based CPS. This is likely because manufacturing technology already has a high degree of automation, sensors, and network capabilities, and so taking the next step to the edge is a relatively small one. Additionally, the search string included "Industry 4.0", which may also favor results within the manufacturing industry. Lastly, the cloudlet architecture is the category with the lowest number of publications for all of the CPS domains, which could be explained by the fact that cloudlets have (so far) mainly been intended for non-CPS applications. The consideration of trustworthiness when using edge computing in the context of critical CPS was one of the main motivations for embarking on this study.
The results in Figure~\ref{fig:PieChartTrust} make it clear that many of the research efforts in edge-based CPS consider trustworthiness of the systems only to a very limited extent. In recent years, as shown in Figure \ref{fig:YearTrust}, the number of publications related to edge-based CPS has experienced huge growth. Nevertheless, the proportion of those that consider trustworthiness has remained roughly constant, at around 50\%. Among the ones that consider it, most cover only one or two of the attributes of trustworthiness that we have analyzed. Regarding those aspects, predictability and security received the most attention. For predictability we note a strong interest in various aspects of resource management, which we have classified under "predictability". In hindsight we can say that there are nuances of predictability, essentially referring to "best-effort" approaches (average-case performance) vs. efforts providing some level of guarantees. Thus, not all efforts on predictability are relevant for critical CPS. Further, safety is the least considered trustworthiness aspect. This is especially noteworthy since many papers indeed refer to various types of critical applications as relevant for edge computing, for example in manufacturing and transportation (e.g. vehicle platooning). The development of safety-critical systems requires adherence to safety standards. The current set of standards, and thus best practices, are not fit for the next generation of AI-equipped edge-based CPS, see e.g. \cite{NASA-AssuringSafety, SRA-ECSEL-19, Platforms4CPS2018}. The corresponding challenges are perhaps most prominently seen for automated driving (AD), with an increasing number of efforts trying to promote AD safety, see e.g. \cite{SAFAD}. Regarding the application type, resource management and real-time application analytics have been the most extensively studied areas, especially using fog computing as the edge implementation, as shown in Figure \ref{fig:PieChartApp}. Power consumption in energy-aware systems and task completion latency are the two elements of greatest concern in these applications. Extensive studies of edge-based CPS, especially regarding computation task offloading \cite{wang2016mobile}, energy-efficient scheduling \cite{yu2018energy}, and resource allocation management \cite{hu2018mobility}, have investigated the natural trade-off between these two factors. The minimization of execution delay and energy consumption in a cooperative edge-based system requires the joint optimization of communication and computation resources between local devices and edge servers. This trade-off is usually captured by a weighted-sum function of delay and energy, adjusted with different weightings to satisfy the requirements of various use cases. Nevertheless, other factors, such as the operational cost \cite{xu2017online}, network utility \cite{tan2015utility}, quality-of-experience \cite{ning2019deep}, and robustness of the transmission network \cite{anwar2017minimax}, are also considered as objectives to be optimized in the application of edge computing systems for CPS. When analyzing the type of research in Figure \ref{fig:PieChartType}, more than half of the studies have been classified as solution proposals. When grouping the studies by application type and edge implementation, solution proposal is still the predominant category in almost every group.
Evaluation and validation tend to occupy the second and third positions, but there are some research gaps where these two categories are not present, e.g. resource management using cloudlets. As we are embarking on a phase of novel, edge-based CPS in many applications, more effort will be needed on evaluation and validation studies, not least concerning trustworthiness properties. Regarding other factors that are influencing the development of edge computing for CPS, this study has analyzed the AI methods used and the inclusion of energy efficiency. Regarding the AI methods, only a third of the studies explicitly state that they use artificial intelligence (Figure~\ref{fig:PieChartAI}). On the other hand, when considering the temporal evolution (Figure~\ref{fig:YearAI}), one can see an increase in the use of machine learning methods from 2019. Regarding energy efficiency, it is only considered by a rather low fraction of the total number of studies. This fraction is particularly low in the manufacturing domain and zero in the healthcare domain. As CPS are used to integrate new technologies and are deployed in settings that span embedded, edge and cloud computing, there are corresponding needs to bridge gaps between the involved research communities. As treated in this paper, this becomes particularly important concerning the edge vs. CPS disciplines. A high-level summary of these findings is illustrated in \autoref{tab:tab4}. Edge computing communities have not had the same exposure to critical applications. Since several trustworthiness attributes have been identified as "research challenges", this would provide a useful starting point for discussions with the CPS and dependability fields, where these topics have a long tradition. For example, our survey indicates that edge computing research has primarily focused on soft real-time (SRT) systems (where meeting timing requirements is generally not seen as critical), whereas CPS communities have long studied both hard real-time (HRT) systems (where missing timing requirements may be critical) and SRT systems, as well as their combination. In any case, as we embark on more open and increasingly complex systems, all trustworthiness attributes face challenges on their own (e.g. more attack surfaces for security, and learning systems deployed in more open-world settings for safety), but also need consideration in conjunction. Further, as cyber-physical systems become connected and start to collaborate, this will lead to cyber-physical systems of systems (CPSoS). We believe that many such CPSoS will tend to include edge computing and AI to support many of the coordination and collaboration challenges. Systems of systems are characterized by the operational and managerial independence of their constituent systems, and by emergent behavior \cite{SoS-Maier}. Take for instance city traffic as a ``system'', exemplifying a situation with a multitude of stakeholders, independent evolution (of streets, vehicles, other infrastructure), not always clear responsibilities, and where a change or the introduction of entirely new systems (such as automated vehicles) may cause hard-to-predict behaviors (emergence). Finally, although not an aspect we emphasized in our survey, we note that business models are identified as research challenges by the edge computing communities. To our understanding, the CPS field has not considered this topic to the same extent and will need to do so, as CPS are likely to be increasingly provided as services as part of CPSoS. \subsection*{Findings vs. related edge-computing surveys}
It is interesting to reflect on our mapping study findings versus the other state-of-the-art surveys that we summarized in Section~\ref{sec:RelatedWork}, in particular those covering some flavour of edge computing. We note that these survey papers identify a rather broad range of topics as relevant research challenges. Commonly identified challenges include various aspects of resource management, latency, security, privacy, and energy efficiency. Topics identified by a few papers include interoperability-related challenges, governance, business models, architecture, mobility, and application algorithms including data analytics. The resource-constrained nature of edge-based systems is highlighted by several papers, where a few call for cost-efficient approaches to security and fault-tolerance. The complexity of edge-based systems is touched upon by a few papers, directly or indirectly. We note that security in itself is a multifaceted topic that would deserve a more in-depth survey. While research has addressed or highlighted selected reliability and availability challenges, security is still mainly identified as a research challenge with an emphasis on privacy and confidentiality -- thus with less coverage of security implications on availability and safety \cite{bakhshi_dependable_2019}. In addition, the need to deal with conflicting objectives and multi-objective design is also highlighted by a few surveys (e.g. considering quality-of-service, energy, cost and bandwidth). Specifically, several of the surveys, including \cite{tocze2018, bakhshi_dependable_2019, Yousefpour2019AllOneNeedsKnowFog, Cao2021SurveyEdgeEdgeCloudComputingAssisted}, highlight gaps regarding non-functional properties in terms of trustworthiness/dependability related attributes. As an overall remark, we conclude that the related surveys found similar gaps when it comes to addressing trustworthiness attributes, while safety and its relation to other trustworthiness attributes are rarely considered explicitly. A combined view, drawing upon our findings and the related surveys, is illustrated in \autoref{tab:tab4}. \begin{table}[] \centering \caption{High-level overview of findings w.r.t. key CPS properties} \small \begin{tabular}{|c|c|c|c|c|}\hline \textit{Field vs. Properties} & \textbf{Safety} & \textbf{Security} & \textbf{Predictability} & \textbf{Energy} \\\hline \textbf{CPS} & Yes & Yes & Yes (HRT/SRT) & Partly \\\hline \textbf{Edge computing} & No & Research challenge & Yes (SRT) & Research challenge \\\hline \end{tabular} \label{tab:tab4} \end{table} \subsection*{Validity of the results} Several issues need to be taken into account when conducting a systematic mapping study, which, if unaddressed, can potentially limit the validity of the obtained results \cite{Kitchenham07}. One such limitation is that this study only considered published papers written in English. For this reason, some relevant contributions in other languages may have been omitted. However, it should be mentioned that this is a limitation of most systematic mapping studies, and the impact is assumed to be small \cite{BOZHINOSKI2019150, abbaspour_asadollah_10_2017}. Additionally, since the snowballing process was automated, it is possible that the occasional publication was parsed incorrectly and thus considered "unknown" by the automation tool. Such publications would not have been properly processed by the management software and would thus have been omitted from the results.
Another potential threat to validity is the subjectivity of the individual researcher during the classification stage. Since only one option is chosen for each category, it can sometimes be hard to assess the core subject matter of the study under review. To mitigate this, a validation process was performed in which researchers reviewed a randomly sampled subset of the studies classified by the rest of the team. No significant discrepancies were identified during this step. Moreover, weekly meetings were held in which all the reviewers participated, to harmonize the concepts and classifications. This process led to several clarifications and in some cases to re-reviews of papers to make sure the same approach was applied to all papers. Finally, the findings may not be representative or relevant if the search string/terms were not appropriate to the corresponding research questions. As discussed in this paper, this topic is non-trivial since many concepts and synonyms are used to refer to edge computing as well as to CPS. We believe that the validity of our mapping study is strengthened by the comparison with the related surveys, since they encompass a broader (sometimes slightly different) scope compared to our mapping study. For example, the state-of-the-art surveys also include "CPS-" and "edge-" only surveys. The performed snowballing also helped to reduce the risk of missing relevant publications. \section{Recommendations and Future Work} \label{sec:FutureWork} As covered by our mapping and the related surveys (Section~\ref{sec:RelatedWork}), a multitude of topics is already being researched concerning edge computing systems. It is clear that much more research and industrial effort (including standardization) will be needed in the direction of future edge-based CPS. We summarize here our analysis of the findings (from the Discussion, Section~\ref{sec:Disc}) in terms of recommendations for further research and other efforts that would complement current activities: \begin{itemize} \item \textit{Further addressing security, safety, and predictability challenges}. Each trustworthiness property needs further research on its own, but also needs to take the others into account; moreover, multiple simultaneous functional and extra-functional requirements have to be considered during design and dealt with at run-time. Research directions include how to deal with security (new vulnerabilities and attack surfaces) and predictability given the dynamics of edge-based CPS (e.g. mobility and partial failures) and the desire to reason about and tailor latency (e.g. with respect to different quality-of-service levels) over complex end-to-end computational chains. Edge computing and communication provide new or enhanced capabilities that "augment CPS", for example by enhancing performance and safety. At the same time, these new capabilities, based on hardware, software and data (with environment dependencies), increase the system complexity and invariably lead to new faults and failure modes, as well as potential unintended effects (emergence) and unintended usage. These effects are likely to introduce new hazards and risks that will require new research to better understand how to systematically deal with risk mitigation and the challenging task of safety assurance/certification in the context of future edge-based CPS \cite{NASA-AssuringSafety}. \item \textit{Addressing the relationships between trustworthiness properties}.
This requires an understanding of how these properties relate to each other and can be traded against each other, ensuring a proper balance between trustworthiness properties in partly open, upgradable systems, and of how edge-based CPS can be realized in cost-efficient ways. Important directions here include methodologies for complexity management, run-time reconfiguration, architecture frameworks, and reference architectures. As a common pattern, shared between the trustworthiness attributes considered in this paper, there is a need to investigate how to manage and orchestrate such compute/communication chains to obtain (optimize and trade) the desired properties (e.g. w.r.t. latency, robustness, availability and so on). Key ingredients here include monitoring, error/anomaly detection, error handling, and ways to deal with system reconfiguration and degraded modes. \item \textit{Architecting, platforms and programmability}. Edge-based CPS will involve the (often) dynamic integration of heterogeneous subsystems, with tight internal and external (environment) interactions. These systems often have long life-spans -- over which they are also likely to evolve -- and must thus be maintainable, upgradeable, debuggable and scalable. Research is needed into platforms and programming models that can enable such properties, along with interoperability, reconfigurability and energy management, while explicitly supporting trustworthiness properties at various levels. We believe that the trustworthiness properties need to be treated as first-class citizens, all the way from reference architectures, over APIs, to the programming models. Resilience needs to be provided bottom-up, with sufficient tailorability to suit different application needs. Research needs to address new abstractions and architectures in order to find a balance between the increasing complexity (of new mechanisms) and the overall system properties. \item \textit{Business models and operational models (contracts) for edge-based CPS}. Edge computing will not only introduce new technology into CPS but in many cases also new stakeholders, such as edge computing and communication platform providers and operators. Our findings support a need to further investigate suitable business models and "contracts" that would promote collaborative edge-based CPS and clarify responsibilities and liability. \item \textit{Considering the characteristics and domain-specific requirements of edge-based CPS}. Research and other efforts need to consider the specific characteristics of edge computing systems in terms of their distributed nature, heterogeneity, dynamics (e.g. potential mobility), resource constraints, and trustworthiness-related requirements. The latter requirements will vary among application domains, in accordance with the risks of the domains concerned. \item \textit{Incorporating energy and environmental sustainability considerations into research}. Edge-based CPS form part of an increasingly digitalized society with computing "everywhere". To make this cost-efficient and to minimize environmental impact, circular economy concepts (reuse, repair, re-purpose, etc.) and energy considerations need further research and need to be integrated into the overall architecting of future edge-based CPS. As stated in \cite{Hamm.2020}, sustainable development generally receives only little attention within the framework of edge computing. Hence, sustainability should be incorporated into the development of edge computing.
\item \textit{Emphasis on testbeds and experimental evaluation/validation}. This recommendation follows from the relative novelty of edge-based CPS as a field and the apparently limited emphasis on experimental work. While the limited amount of such work could be an indication of early stages, testbeds are important for experimentation and learning. This might be even more important for edge-based CPS, as they integrate technologies from telecom, IT/cloud, embedded systems and communications. \item \textit{Forums for networking and collaboration regarding edge-based CPS}. The integration mentioned in the previous bullets requires establishing new forums for interactions between the CPS and edge computing communities. We also believe that reference architectures and architectural frameworks (first bullet) can help to address the needed cross-domain understanding. \end{itemize} As a follow-up to our systematic mapping study, it could be of interest to increase the scope of the study by incorporating more attributes related to trustworthiness, such as transparency and accountability \cite{EU.2019}, and also potentially to increase the level of detail by including more related attributes (or sub-attributes), such as resilience, availability, integrity and confidentiality. A further potential direction would be to increase the reach of the study through the inclusion of other search terms, potentially providing further insights. Such directions include incorporating related concepts or characteristic properties with respect to CPS and Industry 4.0, such as IIoT or dependability. It would also be of interest to provide a more in-depth analysis regarding sustainability and related concepts such as the "circular economy". A more fine-grained analysis of the AI methods used for edge-based CPS would also be beneficial. The study could furthermore beneficially be extended to incorporate patents, providing broader coverage of industrial developments. Since the whole field of research is growing rapidly, an update to include the newest papers would also be beneficial in the next few years. We also believe that nuances of predictability and security could be explored in more detail. \section{Conclusions} \label{sec:Conclusions} The introduction of edge computing for CPS comes as a natural solution given the opportunities at hand and the current limitations of embedded systems and cloud computing. However, the heterogeneity of "things at the edge", as well as the integration with other fields of computing, has brought proposals for multiple possible solutions. This study provides an overview of the current research efforts in the usage of edge computing solutions for critical CPS. Through the analysis and classification of 224 papers, this study provides an overview of and insight into the current connections between the two fields and the corresponding research gaps. The analysis motivates a greater emphasis on research to address trustworthiness-related properties, an aspect that is particularly relevant and necessary for the introduction of critical edge-based CPS. \begin{acks} This research has been carried out as part of the TECoSA Vinnova Competence Center for Trustworthy Edge Computing Systems and Applications at KTH Royal Institute of Technology and has in addition been partly supported through the InSecTT project. InSecTT (www.insectt.eu) has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 876038.
The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Austria, Sweden, Spain, Italy, France, Portugal, Ireland, Finland, Slovenia, Poland, the Netherlands, and Turkey. The document reflects only the authors’ view and the Commission is not responsible for any use that may be made of the information it contains. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section*{Introduction} Let $A$ be a unital associative algebra over a field $k$. M.E. Sweedler's result \cite[Theorem 7.0.4]{Sw}, which states that the functor ${\rm Hom} (- , \, A) : {\rm CoAlg}_k \to {\rm Alg}_k ^{\rm op}$ from the category of coalgebras over $k$ to the opposite of the category of $k$-algebras has a right adjoint denoted by ${\rm M} (-, \, A)$, proved itself remarkable through its applications. Furthermore, ${\rm M} (A, \, A)$ turns out to be a bialgebra and the final object in the category of all bialgebras that act on $A$ through a module algebra structure. The dual version was considered by Tambara \cite{Tambara} and, in a special (graded) case, by Manin \cite{Manin}. To be more precise, \cite[Theorem 1.1]{Tambara} proves that if $A$ is a finite dimensional algebra, then the tensor functor $A \otimes - : {\rm Alg}_k \to {\rm Alg}_k$ has a left adjoint denoted by $a(A, \, -)$. In the same spirit, $a(A,\, A)$ is proved to be a bialgebra as well and the initial object in the category of all bialgebras that coact on $A$ through a comodule algebra structure. Both objects are very important: as explained in \cite{Manin}, the Hopf envelope of $a(A, A)$ plays the role of a symmetry group in non-commutative geometry. For further details we refer to \cite{AA, ana2019, anagorj}. A more general construction, which contains all the above as special cases, was recently considered in \cite{anagorj2} in the context of $\Omega$-algebras. The starting point of this paper was an attempt to prove the counterpart of Tambara's result at the level of Leibniz algebras. Introduced by Bloh \cite{Bl1} and rediscovered by Loday \cite{Lod2}, Leibniz algebras are non-commutative generalizations of Lie algebras. This new concept generated a lot of interest mainly due to its interaction with (co)homology theory, vertex operator algebras, the Godbillon-Vey invariants for foliations or differential geometry. Another important concept for our approach is that of a \emph{current Lie algebra}. First introduced in physics \cite{Gel}, current Lie algebras are Lie algebras of the form $\mathfrak{g} \otimes A$ with the bracket given by $\left[ x\otimes a, \, y\otimes b \right] := \left[x, \, y\right] \otimes ab$, for all $x$, $y \in \mathfrak{g}$ and $a$, $b\in A$, where $\mathfrak{g}$ is a Lie algebra and $A$ is a commutative algebra. They are interesting objects that arise in various branches of mathematics and physics, such as the theory of affine Kac-Moody algebras or the structure of modular semisimple Lie algebras (see \cite{Zus, Zus2}). Current Leibniz algebras are the immediate generalizations, i.e. Leibniz algebras of the form $\mathfrak{h}\otimes A$ whose bracket is defined as in the case of Lie algebras, where this time $\mathfrak{h}$ is a Leibniz algebra and $A$ a commutative algebra. By fixing a Leibniz algebra $\mathfrak{h}$, we obtain a functor $\mathfrak{h}\otimes - : {\rm ComAlg}_k \to {\rm Lbz}_k$ from the category of commutative algebras to the category of Leibniz algebras, called the current Leibniz algebra functor. \thref{adjunctie} proves that the functor $\mathfrak{h} \otimes - \, : {\rm ComAlg}_k \to {\rm Lbz}_k$ has a left adjoint, denoted by ${\mathcal A} (\mathfrak{h}, \, -)$, if and only if $\mathfrak{h}$ is finite dimensional.
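As a brief side remark on the current construction recalled above (a standard computation, recorded here only for the reader's convenience), the commutativity of $A$ is exactly what makes the current bracket satisfy the Leibniz identity: for all $x$, $y$, $z \in \mathfrak{h}$ and $a$, $b$, $c \in A$ we have $$ \bigl[ x\otimes a, \, \left[ y\otimes b, \, z\otimes c \right] \bigr] = \left[ x, \, \left[ y, \, z \right] \right] \otimes abc, \qquad \bigl[ \left[ x\otimes a, \, y\otimes b \right], \, z\otimes c \bigr] - \bigl[ \left[ x\otimes a, \, z\otimes c \right], \, y\otimes b \bigr] = \left[ \left[ x, \, y \right], \, z \right] \otimes abc - \left[ \left[ x, \, z \right], \, y \right] \otimes acb, $$ so the Leibniz identity in $\mathfrak{h} \otimes A$ reduces to the Leibniz identity in $\mathfrak{h}$ precisely because $acb = abc$ in the commutative algebra $A$.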
For an $n$-dimensional Leibniz algebra $\mathfrak{h}$ and an arbitrary Leibniz algebra $\mathfrak{g}$ with $|I| = {\rm dim}_k (\mathfrak{g})$, ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$ is a quotient of the usual polynomial algebra $k [X_{si} \, | s = 1, \cdots, n, \, i\in I]$. The commutative algebra ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$ is a very powerful tool for studying Leibniz/Lie algebras as it captures most of the essential information on the two Leibniz/Lie algebras. Note for instance that the characters of this algebra parameterize the set of all Leibniz algebra homomorphisms between $\mathfrak{g}$ and $\mathfrak{h}$ (\coref{morlbz}). \thref{adjunctie} obviously has a Lie algebra counterpart. In this case, if $\mathfrak{g}$ is a Lie algebra and $m$ a positive integer then the characters of the commutative algebra ${\mathcal A} (\mathfrak{gl} (m, k), \, \mathfrak{g})$ parameterize the space of all $m$-dimensional representations of $\mathfrak{g}$ (\coref{replie}). The commutative algebra ${\mathcal A} (\mathfrak{h}) := {\mathcal A} (\mathfrak{h}, \, \mathfrak{h})$ is called the \emph{universal algebra} of $\mathfrak{h}$: it is a quotient of the polynomial algebra $M(n) := k[X_{ij} \, | \, i, j = 1, \cdots, n]$ through an ideal generated by $n^3$ polynomials called the \emph{universal polynomials} of $\mathfrak{h}$. \prref{bialgebra} proves that ${\mathcal A} (\mathfrak{h})$ has a canonical bialgebra structure such that the projection $\pi: M(n) \to {\mathcal A} (\mathfrak{h})$ is a bialgebra homomorphism. The first main application of the universal (bi)algebra ${\mathcal A} (\mathfrak{h})$ of $\mathfrak{h}$ is given in \thref{automorf}, which provides an explicit description of a group isomorphism between the group of automorphisms of $\mathfrak{h}$ and the group of all invertible group-like elements of the finite dual ${\mathcal A} (\mathfrak{h})^{\rm o}$: $$ {\rm Aut}_{{\rm Lbz}} (\mathfrak{h}) \cong U \bigl(G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl) \bigl). $$ We mention that achieving a complete description of the automorphism group ${\rm Aut}_{\rm Lie} (\mathfrak{h})$ of a given Lie algebra $\mathfrak{h}$ is a classical \cite{borel, jacobson} and notoriously difficult problem intimately related to the structure of Lie algebras (for more details see the recent papers \cite{am-2018, Ar, fisher} and their references). The unit of the adjunction depicted in \thref{adjunctie}, denoted by $\eta_{\mathfrak{h}} : \mathfrak{h} \to \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h})$, endows $\mathfrak{h}$ with a right ${\mathcal A}(\mathfrak{h})$-comodule structure and the pair $({\mathcal A} (\mathfrak{h}), \, \eta_{\mathfrak{h}})$ is the initial object in the category of all commutative bialgebras that coact on the Leibniz algebra $\mathfrak{h}$ (\thref{univbialg}). This result has two important consequences: \prref{graduari} proves that for an abelian group $G$ there exists an explicitly described bijection between the set of all $G$-gradings on $\mathfrak{h}$ and the set of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \to k[G]$. 
Furthermore, all $G$-gradings on $\mathfrak{h}$ are classified in \thref{nouaclas}: the set $G$-${\rm \textbf{gradings}}(\mathfrak{h})$ of isomorphism classes of all $G$-gradings on $\mathfrak{h}$ is in bijection with the quotient set $ {\rm Hom}_{\rm BiAlg} \, \bigl( {\mathcal A} (\mathfrak{h}) , \, k[G] \bigl)/\approx$ of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \, \to k[G]$ by the equivalence relation given by the usual conjugation with an invertible group-like element. Secondly, if $G$ is a finite group, \prref{actiuni} shows that there exists a bijection between the set of all actions as automorphisms of $G$ on $\mathfrak{h}$ (i.e. morphisms of groups $G \to {\rm Aut}_{{\rm Lbz}} (\mathfrak{h})$) and the set of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \to k[G]^*$. Concerning the last two results, we mention that there exists a vast literature on the classification of all $G$-gradings on a given Lie algebra (see \cite{bahturin, eld, G, MZ} and their references). On the other hand, the study of actions as automorphisms of a group $G$ on a Lie algebra $\mathfrak{h}$ goes back to Hilbert's invariant theory, whose foundation was set at the level of Lie algebras in the classical papers \cite{borel, bra, thrall}; for further details see \cite{am-2018} and the references therein. Using once again \thref{univbialg} and the existence of a free commutative Hopf algebra on any commutative bialgebra \cite[Theorem 65, (2)]{T}, we prove in \thref{univhopf} that there exists a universal coacting Hopf algebra on any finite dimensional Leibniz algebra. We point out that, to the best of our knowledge, this is the only universal Hopf algebra associated to a Leibniz algebra appearing in the literature. \section{Preliminaries}\selabel{prel} All vector spaces, (bi)linear maps, Leibniz, Lie or associative algebras, bialgebras and so on are over an arbitrary field $k$ and $\otimes = \otimes_k$. A Leibniz algebra is a vector space $\mathfrak{h}$, together with a bilinear map $[- , \, -] : \mathfrak{h} \times \mathfrak{h} \to \mathfrak{h}$ satisfying the Leibniz identity for any $x$, $y$, $z \in \mathfrak{h}$: \begin{equation}\eqlabel{Lbz1} \left[ x,\, \left[y, \, z \right] \right] = \left[ \left[x, \, y\right], \, z \right] - \left[\left[x, \, z\right] , \, y\right] \end{equation} Any Lie algebra is a Leibniz algebra, and a Leibniz algebra $\mathfrak{h}$ satisfying $[x, \, x] = 0$, for all $x \in \mathfrak{h}$, is a Lie algebra. We shall denote by ${\rm Aut}_{\rm Lbz} (\mathfrak{h})$ (resp. ${\rm Aut}_{\rm Lie} (\mathfrak{h})$) the automorphism group of a Leibniz (resp. Lie) algebra $\mathfrak{h}$. Any vector space $V$ is a Leibniz algebra with trivial bracket $[x,\, y] := 0$, for all $x$, $y\in V$ -- such a Leibniz algebra is called \emph{abelian} and will be denoted by $V_0$. For two subspaces $A$ and $B$ of a Leibniz algebra $\mathfrak{h}$ we denote by $[A, \, B]$ the vector space generated by all brackets $[a, \, b]$, for any $a \in A$ and $b\in B$. In particular, $\mathfrak{h}' := [\mathfrak{h}, \, \mathfrak{h}]$ is called the derived subalgebra of $\mathfrak{h}$. We shall denote by ${\rm Lbz}_k$, ${\rm Lie}_k$ and ${\rm ComAlg}_k$ the categories of Leibniz algebras, Lie algebras and commutative associative algebras, respectively. Furthermore, the category of commutative bialgebras (resp. Hopf algebras) is denoted by ${\rm ComBiAlg}_k$ (resp. ${\rm ComHopf}_k$). 
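As a purely illustrative aside that is not part of the original text: for a finite dimensional algebra, the Leibniz identity \equref{Lbz1} only needs to be checked on basis elements once the bracket is specified by structure constants. The following Python sketch (all names are ours) performs this check; it is run here on the standard $2$-dimensional non-Lie Leibniz algebra with $[e_2, \, e_2] = e_1$ and all other brackets of basis elements equal to zero.
\begin{verbatim}
# Check the Leibniz identity (Lbz1) on basis elements for a bracket given by
# structure constants tau[i][j][s], i.e. [e_i, e_j] = sum_s tau[i][j][s] e_s
# (indices are 0-based here).
def bracket(u, v, tau):
    """Bracket of two coordinate vectors u, v with respect to tau."""
    n = len(tau)
    return [sum(u[i] * v[j] * tau[i][j][s] for i in range(n) for j in range(n))
            for s in range(n)]

def is_leibniz(tau):
    """Check [e_i,[e_j,e_k]] == [[e_i,e_j],e_k] - [[e_i,e_k],e_j] for all triples."""
    n = len(tau)
    e = [[1 if t == s else 0 for t in range(n)] for s in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                lhs = bracket(e[i], bracket(e[j], e[k], tau), tau)
                r1 = bracket(bracket(e[i], e[j], tau), e[k], tau)
                r2 = bracket(bracket(e[i], e[k], tau), e[j], tau)
                if lhs != [a - b for a, b in zip(r1, r2)]:
                    return False
    return True

# The 2-dimensional non-Lie Leibniz algebra with [e_2, e_2] = e_1:
tau = [[[0, 0], [0, 0]],
       [[0, 0], [1, 0]]]
print(is_leibniz(tau))  # prints True
\end{verbatim}
By bilinearity of the bracket, checking the identity on basis triples suffices.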
If $\mathfrak{h}$ is a Leibniz algebra and $A$ a commutative algebra then $\mathfrak{h}\otimes A $ is a Leibniz algebra with bracket defined for any $x$, $y \in \mathfrak{h}$ and $a$, $b\in A$ by: \begin{equation}\eqlabel{curant} \left[ x\otimes a, \, y\otimes b \right] := \left[x, \, y\right] \otimes ab \end{equation} called the \emph{current Leibniz} algebra. Indeed, as $A$ is a commutative and associative algebra, we have: \begin{eqnarray*} && \left[ \, \left[x \otimes a, \, y\otimes b \right], \, z\otimes c \right] - \left[ \, \left[x \otimes a, \, z\otimes c \right], \, y\otimes b \right] \\ && = \left[ \, \left[x, \, y\right], \, z \right] \otimes abc - \left[ \, \left[x, \, z\right] , \, y\right] \otimes acb \\ && = \bigl( \left[ \, \left[x, \, y\right], \, z \right] - \left[ \, \left[x, \, z\right] , \, y\right] \bigl) \otimes abc \\ && = \left[x,\, \left[y, \, z \right] \, \right] \otimes abc = \left[x\otimes a, \, \left[y\otimes b, \, z \otimes c\right] \, \right] \end{eqnarray*} for all $x$, $y$, $z \in \mathfrak{h}$ and $a$, $b$, $c\in A$, i.e. the Leibniz identity \equref{Lbz1} holds for $\mathfrak{h}\otimes A $. For a fixed Leibniz algebra $\mathfrak{h}$, assigning $A \mapsto \mathfrak{h} \otimes A$ defines a functor $\mathfrak{h} \otimes - \, : {\rm ComAlg}_k \to {\rm Lbz}_k$ from the category of commutative $k$-algebras to the category of Leibniz algebras called the current Leibniz algebra functor. If $f : A \to B$ is an algebra map then ${\rm Id}_{\mathfrak{h}} \, \otimes f : \mathfrak{h} \otimes A \to \mathfrak{h} \otimes B$ is a morphism of Leibniz algebras. For basic concepts on category theory we refer the reader to \cite{mlane} and for unexplained notions pertaining to Hopf algebras to \cite{radford, Sw}. \section{Universal constructions}\selabel{sect2} Our first result is the Leibniz algebra counterpart of \cite[Theorem 1.1]{Tambara}. \begin{theorem}\thlabel{adjunctie} Let $\mathfrak{h}$ be a Leibniz algebra. Then the current Leibniz algebra functor $\mathfrak{h} \otimes - \, : {\rm ComAlg}_k \to {\rm Lbz}_k$ has a left adjoint if and only if $\mathfrak{h}$ is finite dimensional. Moreover, if $\mathfrak{h} \neq 0$ the functor $\mathfrak{h} \otimes - \, $ does not admit a right adjoint. \end{theorem} \begin{proof} Assume first that $\mathfrak{h}$ is a finite dimensional Leibniz algebra and ${\rm dim}_k (\mathfrak{h}) = n$. Fix $\{e_1, \cdots, e_n\}$ a basis in $\mathfrak{h}$ and let $\{\tau_{i, j}^s \, | \, i, j, s = 1, \cdots, n \}$ be the structure constants of $\mathfrak{h}$, i.e. for any $i$, $j = 1, \cdots, n$ we have: \begin{equation}\eqlabel{const1} \left[e_i, \, e_j \right]_{\mathfrak{h}} = \sum_{s=1}^n \, \tau_{i, j}^s \, e_s. \end{equation} In what follows we shall explicitly construct a left adjoint of the current Leibniz algebra functor $\mathfrak{h} \otimes -$, denoted by ${\mathcal A} (\mathfrak{h}, \, - ) : {\rm Lbz}_k \to {\rm ComAlg}_k$. Let $\mathfrak{g}$ be a Leibniz algebra and let $\{f_i \, | \, i \in I\}$ be a basis of $\mathfrak{g}$. For any $i$, $j\in I$, let $B_{i,j} \subseteq I$ be a finite subset of $I$ such that we have: \begin{equation}\eqlabel{const2} \left[f_i, \, f_j \right]_{\mathfrak{g}} = \sum_{u \in B_{i, j}} \, \beta_{i, j}^u \, f_{u}. 
\end{equation} Let $k [X_{si} \, | s = 1, \cdots, n, \, i\in I]$ be the usual polynomial algebra and define \begin{equation}\eqlabel{alguniv} {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}) := k [X_{si} \, | s = 1, \cdots, n, \, i\in I] / J \end{equation} where $J$ is the ideal generated by all polynomials of the form \begin{equation}\eqlabel{poluniv} P_{(a, i, j)} ^{(\mathfrak{h}, \, \mathfrak{g})} := \sum_{u \in B_{i, j}} \, \beta_{i, j}^u \, X_{au} - \sum_{s, t = 1}^n \, \tau_{s, t}^a \, X_{si} X_{tj}, \quad {\rm for}\,\, {\rm all}\,\, a = 1, \cdots, n\,\, {\rm and}\,\, i,\, j\in I. \end{equation} We denote by $x_{si} := \widehat{X_{si}}$ the class of ${X_{si}}$ in the algebra ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$; thus the following relations hold in the commutative algebra ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$: \begin{equation}\eqlabel{relatii} \sum_{u \in B_{i, j}} \, \beta_{i, j}^u \, x_{au} = \sum_{s, t = 1}^n \, \tau_{s, t}^a \, x_{si} x_{tj}, \quad {\rm for}\,\, {\rm all}\,\, a = 1, \cdots, n,\,\, {\rm and} \,\, i,\, j\in I. \end{equation} Now we consider the map: \begin{equation}\eqlabel{unitadj} \eta_{\mathfrak{g}} : \mathfrak{g} \to \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}), \quad \eta_{\mathfrak{g}} (f_i) := \sum_{s=1}^n \, e_s \otimes x_{si}, \quad {\rm for\,\, all}\,\, i\in I. \end{equation} We shall prove first that $\eta_{\mathfrak{g}}$ is a Leibniz algebra homomorphism. Indeed, for any $i$, $j\in I$ we have: \begin{eqnarray*} && \left[\eta_{\mathfrak{g}} (f_i), \, \eta_{\mathfrak{g}} (f_j) \right]_{\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}, \, \mathfrak{g})} = \left[ \sum_{s=1}^n \, e_s \otimes x_{si}, \,\, \sum_{t=1}^n \, e_t \otimes x_{tj} \right]_{\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}, \, \mathfrak{g})}= \sum_{s, t =1}^n \, \left[e_s, \, e_t \right]_{\mathfrak{h}} \otimes x_{si} x_{tj}\\ && = \sum_{a =1}^n \, e_a \otimes \underline{\bigl(\sum_{s, \,t = 1}^n \, \tau_{s, t}^a \, x_{si} x_{tj} \bigl)} \,\, \stackrel{\equref{relatii}} = \,\, \sum_{a=1}^n \, e_a \otimes \bigl( \sum_{u \in B_{i, j}} \, \beta_{i, j}^u \, x_{au}\bigl) = \sum_{u \in B_{i, j}} \, \beta_{i, j}^u \, \eta_{\mathfrak{g}} (f_u) \\ && = \eta_{\mathfrak{g}} (\left[f_i, \, f_j \right]_{\mathfrak{g}}) \end{eqnarray*} Now we prove that for any Leibniz algebra $\mathfrak{g}$ and any commutative algebra $A$ the map defined below is bijective: \begin{equation}\eqlabel{adjp} \gamma_{\mathfrak{g}, \, A} \, : {\rm Hom}_{\rm Alg_k} \, ( {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}), \, A) \to {\rm Hom}_{\rm Lbz_k} \, (\mathfrak{g}, \, \mathfrak{h} \otimes A), \quad \gamma_{\mathfrak{g}, \, A} (\theta) := \bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \circ \eta_{\mathfrak{g}} \end{equation} To this end, let $f : \mathfrak{g} \to \mathfrak{h} \otimes A$ be a Leibniz algebra homomorphism. We have to prove that there exists a unique algebra homomorphism $\theta : {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}) \to A$ such that the following diagram is commutative: \begin{eqnarray} \eqlabel{diagrama10} \xymatrix {& \mathfrak{g} \ar[r]^-{\eta_{\mathfrak{g}} } \ar[dr]_{f} & { \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}, \, \mathfrak{g})} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes \theta }\\ & {} & {\mathfrak{h} \otimes A}} \qquad i.e. \,\,\, f = \bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \circ \, \eta_{\mathfrak{g}}. 
\end{eqnarray} Let $\{d_{si} \, | \, s = 1, \cdots, n, i\in I \}$ be a family of elements of $A$ such that for any $i\in I$ we have: \begin{equation}\eqlabel{constfmor} f( f_i) = \sum_{s=1}^n \, e_s \otimes d_{si} \end{equation} A straightforward computation shows that for all $i$, $j\in I$ we have: $$ f \bigl( \left[f_i, \, f_j \right]_{\mathfrak{g}} \bigl) = \sum_{a=1}^n \, e_a \otimes \bigl ( \sum_{u \in B_{ij}} \, \beta_{i, j}^u \, d_{au} \bigl) \,\, {\rm and} \,\,\left[ f(f_i), \, f(f_j) \right]_{\mathfrak{h} \otimes A} = \sum_{a=1}^n \, e_a \otimes \bigl ( \sum_{s, t = 1}^n \, \tau_{s, t}^a \, d_{si} d_{tj} \bigl) $$ Since $f: \mathfrak{g} \to \mathfrak{h} \otimes A$ is a Leibniz algebra homomorphism, it follows that the family of elements $\{d_{si} \, | \, s = 1, \cdots, n, i\in I \}$ needs to fulfill the following relations in $A$: \begin{equation}\eqlabel{deurile} \sum_{u \in B_{ij}} \, \beta_{i, j}^u \, d_{au} = \sum_{s, t = 1}^n \, \tau_{s, t}^a \, d_{si} d_{tj}, \quad {\rm for}\,\, {\rm all}\,\, i,\, j\in I\,\, {\rm and}\,\, a = 1, \cdots, n. \end{equation} The universal property of the polynomial algebra yields a unique algebra homomorphism $v : k [X_{si} \, | s = 1, \cdots, n, \, i\in I] \to A$ such that $v (X_{si}) = d_{si}$, for all $s = 1, \cdots, n$ and $i\in I$. It can be easily seen that ${\rm Ker} (v) \supseteq J$, where $J$ is the ideal generated by all polynomials listed in \equref{poluniv}. Indeed, for any $i$, $j\in I$ and $a = 1, \cdots, n$ we have \begin{eqnarray*} v \bigl( P_{(a, i, j)} ^{(\mathfrak{h}, \, \mathfrak{g})} \bigl) = v \bigl( \sum_{u \in B_{i, j}} \, \beta_{i, j}^u \, X_{au} - \sum_{s, t = 1}^n \, \tau_{s, t}^a \, X_{si} X_{tj} \bigl) = \sum_{u \in B_{i, j}} \, \beta_{i, j}^u \, d_{au} - \sum_{s, t = 1}^n \, \tau_{s, t}^a \, d_{si} d_{tj} \stackrel{\equref{deurile}} = 0. \end{eqnarray*} Thus, there exists a unique algebra homomorphism $\theta : {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}) \to A$ such that $\theta (x_{si}) = d_{si}$, for all $s = 1, \cdots, n$ and $i\in I$. Furthermore, for any $i\in I$ we have: \begin{eqnarray*} \bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \circ \, \eta_{\mathfrak{g}} (f_i) = \bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \bigl( \sum_{s=1}^n \, e_s \otimes x_{si} \bigl) = \sum_{s=1}^n \, e_s \otimes d_{si} \stackrel{\equref{constfmor}} = f (f_i). \end{eqnarray*} Therefore, we have $\bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \circ \, \eta_{\mathfrak{g}} = f$ as desired. Next we show that $\theta$ is the unique morphism with this property. Let $\tilde{\theta} : {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}) \to A$ be another algebra homomorphism such that $\bigl( {\rm Id}_{\mathfrak{h}} \otimes \tilde{\theta} \bigl) \circ \, \eta_{\mathfrak{g}} (f_i) = f (f_i)$, for all $i\in I$. Then, $\sum_{s=1}^n \, e_s \otimes \tilde{\theta} (x_{si}) = \sum_{s=1}^n \, e_s \otimes d_{si}$, and hence $\tilde{\theta} (x_{si}) = d_{si} = \theta (x_{si})$, for all $s= 1, \cdots, n$ and $i\in I$. Since $\{x_{si} \, | s= 1, \cdots, n, i \in I \, \}$ is a system of generators for the algebra ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$ we obtain $\tilde{\theta} = \theta$. All in all, we have proved that the map $\gamma_{\mathfrak{g}, \, A}$ given by \equref{adjp} is bijective. Next we show that assigning to each Leibniz algebra $\mathfrak{g}$ the commutative algebra ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$ defines a functor ${\mathcal A} (\mathfrak{h}, \, - ) : {\rm Lbz}_k \to {\rm ComAlg}_k$. 
First, let $\alpha : \mathfrak{g}_1 \to \mathfrak{g}_2$ be a Leibniz algebra homomorphism. The bijectivity of the map defined by \equref{adjp}, applied to the Leibniz algebra homomorphism $f := \eta_{\mathfrak{g}_2} \circ \alpha$, yields a unique algebra homomorphism $\theta : {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}_1) \to {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}_2)$ such that the following diagram is commutative: \begin{eqnarray} \eqlabel{diagramapag4} \xymatrix {& \mathfrak{g}_1 \ar[rr]^-{\eta_{\mathfrak{g}_1} } \ar[d]_{\alpha} & {} & { \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}_1)} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes \theta }\\ & \mathfrak{g}_2 \ar[rr]^-{\eta_{\mathfrak{g}_2}} & {} & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}_2) } } \qquad i.e. \,\,\, \bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \circ \, \eta_{\mathfrak{g}_1} = \eta_{\mathfrak{g}_2} \circ \alpha \end{eqnarray} We denote this unique morphism $\theta$ by ${\mathcal A} (\mathfrak{h}, \, \alpha )$ and the functor ${\mathcal A} (\mathfrak{h}, \, - )$ is now fully defined. Furthermore, the commutativity of the diagram \equref{diagramapag4} shows the naturality of $\gamma_{\mathfrak{g}, \, A}$ in $\mathfrak{g}$. It can now be easily checked that ${\mathcal A} (\mathfrak{h}, \, - )$ is indeed a functor and that $\gamma_{\mathfrak{g}, \, A}$ is also natural in $A$. To conclude, the functor ${\mathcal A} (\mathfrak{h}, \, - )$ is a left adjoint of the current Leibniz algebra functor $ \mathfrak{h} \otimes -$. Conversely, assume that the functor $\mathfrak{h}\otimes - : {\rm ComAlg}_k \to {\rm Lbz}_k$ has a left adjoint. In particular, $\mathfrak{h}\otimes - $ preserves arbitrary products. Now recall that in both categories ${\rm ComAlg}_k$ and ${\rm Lbz}_k$ products are constructed as simply the products of the underlying vector spaces. Imposing the condition that $\mathfrak{h}\otimes - $ preserves the product of a countable number of copies of the base field $k$ will easily lead to the finite dimensionality of $\mathfrak{h}$. Assume now that the functor $\mathfrak{h}\otimes - : {\rm ComAlg}_k \to {\rm Lbz}_k$ has a right adjoint. This implies that $\mathfrak{h}\otimes - $ preserves coproducts. Now, since in the category ${\rm ComAlg}_k$ of commutative algebras the coproduct of two commutative algebras is given by their tensor product, it follows that for any commutative algebras $A$ and $B$ there exists an isomorphism of Leibniz algebras $\mathfrak{h} \otimes (A \otimes B) \cong (\mathfrak{h} \otimes A) \, \sqcup (\mathfrak{h} \otimes B)$, where we denote by $\sqcup$ the coproduct of two current Leibniz algebras. In particular, for $A = B := k$, we obtain that $\mathfrak{h}\cong \mathfrak{h} \sqcup \mathfrak{h}$ and the corresponding morphisms $\mathfrak{h} \to \mathfrak{h} \sqcup \mathfrak{h}$ are just the identity maps. Therefore, for every Leibniz algebra $\mathfrak{g}$ there exists at most one Leibniz algebra homomorphism $\mathfrak{h} \to \mathfrak{g}$. Now by taking $\mathfrak{g} = \mathfrak{h} \times \mathfrak{h}$ and the embeddings $\mathfrak{h} \hookrightarrow \mathfrak{h} \times \mathfrak{h}$ to different components we reach a contradiction if $\mathfrak{h} \neq 0$. 
\end{proof} \begin{remark} \relabel{remar1} \thref{adjunctie} remains valid in the special case of Lie algebras: if $\mathfrak{h}$ is a finite dimensional Lie algebra, the current Lie algebra functor $\mathfrak{h}\otimes - : {\rm ComAlg}_k \to {\rm Lie}_k$ has a left adjoint ${\mathcal A} (\mathfrak{h}, \, -)$ which is constructed as in the proof of \thref{adjunctie}. We point out, however, that the polynomials defined in \equref{poluniv} take a rather simplified form. The skew symmetry fulfilled by the bracket of a Lie algebra imposes the following restrictions on the structure constants: $$\tau_{i,i}^s = 0\,\, {\rm and}\,\, \tau_{i,j}^s = - \tau_{j,i}^s\,\, {\rm for}\,\, {\rm all}\,\, i, j, s = 1,\cdots, n.$$ \end{remark} The commutative algebra ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$ constructed in the proof of \thref{adjunctie} provides an important tool for studying Lie/Leibniz algebras as it captures most of the essential information on the two Lie/Leibniz algebras. Indeed, note for instance that the characters of this algebra (i.e. the algebra homomorphisms ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g}) \to k$) parameterize the set of all Leibniz algebra homomorphisms between the two algebras. This follows as an easy consequence of the bijection described in \equref{adjp} by taking $A := k$: \begin{corollary}\colabel{morlbz} Let $\mathfrak{g}$ and $\mathfrak{h}$ be two Leibniz algebras such that $\mathfrak{h}$ is finite dimensional. Then the following map is bijective: \begin{equation}\eqlabel{adjpk} \gamma \, : {\rm Hom}_{\rm Alg_k} \, ( {\mathcal A} (\mathfrak{h}, \, \mathfrak{g}), \, k) \to {\rm Hom}_{\rm Lbz_k} \, (\mathfrak{g}, \, \mathfrak{h}), \quad \gamma (\theta) := \bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \circ \eta_{\mathfrak{g}}. \end{equation} \end{corollary} In particular, by applying \coref{morlbz} for $\mathfrak{h} : = \mathfrak{gl} (m, k)$ and an arbitrary Lie algebra $\mathfrak{g}$ we obtain: \begin{corollary}\colabel{replie} Let $\mathfrak{g}$ be a Lie algebra and $m$ a positive integer. Then there exists a bijective correspondence between the space of all $m$-dimensional representations of $\mathfrak{g}$ and the space of all algebra homomorphisms ${\mathcal A} (\mathfrak{gl} (m, k), \, \mathfrak{g}) \to k$. \end{corollary} \begin{examples} \exlabel{exgen} 1. If $\mathfrak{h}$ and $\mathfrak{g}$ are abelian Leibniz algebras then ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g}) \cong k [X_{si} \, | s = 1, \cdots, n, \, i\in I]$, where $n = {\rm dim}_k (\mathfrak{h})$ and $|I| = {\rm dim}_k (\mathfrak{g})$. 2. Let $\mathfrak{h}$ be an $n$-dimensional Leibniz algebra with structure constants $\{\tau_{i, j}^s \, | \, i, j, s = 1, \cdots, n \}$. Then ${\mathcal A} (\mathfrak{h}, \, k) \cong k[X_1, \cdots, X_n]/J$, where $J$ is the ideal generated by the polynomials $\sum_{s, t = 1}^n \, \tau_{s, t}^a \, X_{s} X_{t}$, for all $a = 1, \cdots, n$. 3. Let $\mathfrak{g}$ be a Leibniz algebra. Then ${\mathcal A} (k, \, \mathfrak{g}) \cong S (\mathfrak{g}/\mathfrak{g}')$, the symmetric algebra of $\mathfrak{g}/\mathfrak{g}'$, where $\mathfrak{g}'$ is the derived subalgebra of $\mathfrak{g}$. In particular, if $\mathfrak{g}$ is perfect (that is $\mathfrak{g}' = \mathfrak{g}$), then ${\mathcal A} (k, \, \mathfrak{g}) \cong k$. 
Indeed, the functor ${\mathcal A} (k, \, -)$ is a left adjoint for the tensor functor $k \otimes - : {\rm ComAlg}_k \to {\rm Lbz}_k$; since the tensor product is also taken over $k$ this functor is isomorphic to the functor $(-)_0 : {\rm ComAlg}_k \to {\rm Lbz}_k$, which sends any commutative algebra $A$ to the abelian Leibniz algebra $A_0 := A$. We shall prove that the functor $\mathfrak{g} \mapsto S (\mathfrak{g}/\mathfrak{g}')$ is a left adjoint of $(-)_0$. The uniqueness of adjoint functors \cite{mlane} will then lead to the desired algebra isomorphism ${\mathcal A} (k, \, \mathfrak{g}) \cong S (\mathfrak{g}/\mathfrak{g}')$. Let $\mathfrak{g}$ be a Leibniz algebra and define $\overline{\eta_{\mathfrak{g}}} : \mathfrak{g} \to S (\mathfrak{g}/\mathfrak{g}')$ as the composition $\overline{\eta_{\mathfrak{g}}} := i \circ \pi$, where $\pi: \mathfrak{g} \to \mathfrak{g}/\mathfrak{g}'$ is the usual projection and $i : \mathfrak{g}/\mathfrak{g}' \hookrightarrow S (\mathfrak{g}/\mathfrak{g}')$ is the canonical inclusion of the vector space $\mathfrak{g}/\mathfrak{g}'$ in its symmetric algebra. We shall prove now that the following map is bijective for any commutative algebra $A$ and any Leibniz algebra $\mathfrak{g}$: \begin{equation}\eqlabel{adjdoi} \overline{\gamma_{\mathfrak{g}, \, A}} \, : {\rm Hom}_{\rm Alg_k} \, ( S (\mathfrak{g}/\mathfrak{g}'), \, A) \to {\rm Hom}_{\rm Lbz_k} \, (\mathfrak{g}, \, A_0), \quad \overline{\gamma_{\mathfrak{g}, \, A}} \, (\theta) := \theta \circ \overline{\eta_{\mathfrak{g}}} \end{equation} This shows that the functor $\mathfrak{g} \mapsto S (\mathfrak{g}/\mathfrak{g}')$ is a left adjoint of $(-)_0$. Indeed, let $f: \mathfrak{g} \to A_0$ be a Leibniz algebra homomorphism, i.e. $f$ is a $k$-linear map such that $f (\left[x, \, y \right]) = 0$, for all $x$, $y\in \mathfrak{g}$. That is ${\rm Ker(f)}$ contains $\mathfrak{g}'$, the derived algebra of $\mathfrak{g}$. Thus, there exists a unique $k$-linear map $\overline{f} : \mathfrak{g}/\mathfrak{g}' \to A$ such that $ \overline{f} \circ \pi = f$. Now, using the universal property of the symmetric algebra we obtain that there exists a unique algebra homomorphism $\theta : S (\mathfrak{g}/\mathfrak{g}') \to A$ such that $\theta \circ i = \overline{f}$, and hence $\overline{\gamma_{\mathfrak{g}, \, A}} \, (\theta) = f$. Therefore, the map $\overline{\gamma_{\mathfrak{g}, \, A}}$ is bijective and the proof is now finished. \end{examples} \begin{definition} \delabel{alguniv} Let $\mathfrak{g}$ and $\mathfrak{h}$ be Leibniz algebras with $\mathfrak{h}$ finite dimensional. Then the commutative algebra ${\mathcal A} (\mathfrak{h}, \, \mathfrak{g})$ is called the \emph{universal algebra} of $\mathfrak{h}$ and $\mathfrak{g}$. When $\mathfrak{h} = \mathfrak{g}$ we denote the universal algebra of $\mathfrak{h}$ simply by ${\mathcal A} (\mathfrak{h})$. \end{definition} If $\{\tau_{i, j}^s \, | \, i, j, s = 1, \cdots, n \}$ are the structure constants of $\mathfrak{h}$, where $n$ is the dimension of $\mathfrak{h}$, then the polynomials defined for any $a$, $i$, $j = 1, \cdots, n$ by: \begin{equation}\eqlabel{poluniv2} P_{(a, i, j)} ^{(\mathfrak{h})} := \sum_{u = 1}^n \, \tau_{i, j}^u \, X_{au} - \sum_{s, t = 1}^n \, \tau_{s, t}^a \, X_{si} X_{tj} \, \in k[X_{ij} \, | \, i, j = 1, \cdots, n] \end{equation} are called the \emph{universal polynomials} of $\mathfrak{h}$. 
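As an illustration that is not part of the paper, the universal polynomials \equref{poluniv2} can be generated mechanically from the structure constants. A minimal Python/SymPy sketch is given below (the function name and the encoding of the variable $X_{si}$ as the symbol \texttt{Xsi}, with the two indices concatenated as digits, are ours). Running it on the structure constants of $\mathfrak{sl}(2,k)$ in the basis used in the examples below reproduces the nine universal polynomials listed there, together with their negatives.
\begin{verbatim}
# Generate the universal polynomials P_{(a,i,j)} of a finite dimensional
# Leibniz algebra from its structure constants tau[i][j][s] (0-based),
# following the formula (poluniv2):
#   sum_u tau_{i,j}^u X_{au}  -  sum_{s,t} tau_{s,t}^a X_{si} X_{tj}.
import sympy as sp

def universal_polynomials(tau):
    n = len(tau)
    X = [[sp.Symbol(f"X{a+1}{u+1}") for u in range(n)] for a in range(n)]
    polys = []
    for a in range(n):
        for i in range(n):
            for j in range(n):
                linear = sum(tau[i][j][u] * X[a][u] for u in range(n))
                quadratic = sum(tau[s][t][a] * X[s][i] * X[t][j]
                                for s in range(n) for t in range(n))
                polys.append(sp.expand(linear - quadratic))
    return polys

# Structure constants of sl(2,k) with [e1,e2]=e3, [e3,e2]=-2e2, [e3,e1]=2e1:
tau_sl2 = [[[0, 0, 0], [0, 0, 1], [-2, 0, 0]],
           [[0, 0, -1], [0, 0, 0], [0, 2, 0]],
           [[2, 0, 0], [0, -2, 0], [0, 0, 0]]]

for p in universal_polynomials(tau_sl2):
    if p != 0:
        print(p)
\end{verbatim}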
It follows from the proof of \thref{adjunctie} that ${\mathcal A} (\mathfrak{h}) = k[X_{ij} \, | \, i, j = 1, \cdots, n]/J$, where $J$ is the ideal generated by the universal polynomials $P_{(a, i, j)} ^{(\mathfrak{h})}$, for all $a$, $i$, $j = 1, \cdots, n$. Moreover, if $\{e_1, \cdots, e_n\}$ is a basis in $\mathfrak{h}$ then the canonical map \begin{equation}\eqlabel{unitadj2} \eta_{\mathfrak{h}} : \mathfrak{h} \to \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}), \quad \eta_{\mathfrak{h}} (e_i) := \sum_{s=1}^n \, e_s \otimes x_{si} \end{equation} for all $i = 1, \cdots, n$ is a Leibniz algebra homomorphism. The commutative algebra ${\mathcal A} (\mathfrak{h})$ and the family of polynomials $P_{(a, i, j)} ^{(\mathfrak{h})}$ are purely algebraic objects that capture the entire information of the Leibniz algebra $\mathfrak{h}$. Moreover, the universal algebra ${\mathcal A} (\mathfrak{h})$ satisfies the following universal property: \begin{corollary}\colabel{initialobj} Let $\mathfrak{h}$ be a finite dimensional Leibniz algebra. Then for any commutative algebra $A$ and any Leibniz algebra homomorphism $f : \mathfrak{h} \to \mathfrak{h} \otimes A$, there exists a unique algebra homomorphism $\theta: {\mathcal A} (\mathfrak{h}) \to A$ such that $f = ({\rm Id}_{\mathfrak{h}} \otimes \theta) \circ \eta_{\mathfrak{h}}$, i.e. the following diagram is commutative: \begin{eqnarray} \eqlabel{univerah} \xymatrix {& \mathfrak{h} \ar[rr]^-{\eta_{\mathfrak{h}} } \ar[rrd]_{ f } & {} & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h} )} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes \theta }\\ & {} & {} & {\mathfrak{h} \otimes A} } \end{eqnarray} \end{corollary} \begin{proof} This follows straightforwardly from the bijection given in \equref{adjp} for $\mathfrak{g}:= \mathfrak{h}$. \end{proof} \begin{remark} \relabel{remar2} If $\mathfrak{h}$ is a Lie algebra of dimension $n$, then the structure constants are subject to the following relations $\tau_{i,i}^s = 0$ and $\tau_{i,j}^s = - \tau_{j,i}^s$, for all $i$, $j$, $s = 1,\cdots, n$. Consequently, we can easily see that the universal polynomials of $\mathfrak{h}$ fulfill the following conditions: \begin{equation}\eqlabel{liepol} P_{(a, i, i)} ^{(\mathfrak{h})} = 0 \quad \quad {\rm and }\quad \quad P_{(a, i, j)} ^{(\mathfrak{h})} = - P_{(a, j, i)} ^{(\mathfrak{h})} \end{equation} for all $a$, $i$, $j = 1, \cdots, n$, $i \neq j$. Thus, in the case of Lie algebras the universal algebra ${\mathcal A} (\mathfrak{h})$ takes a simplified form. We provide further examples in the sequel. \end{remark} \begin{examples} \exlabel{unvlie} 1. Let $\mathfrak{h} := {\rm aff} (2, k)$ be the affine $2$-dimensional Lie algebra with basis $\{e_1, e_2\}$ and bracket given by $\left[e_1, \, e_2 \right] = e_1$. Then, we have: \begin{eqnarray*} {\mathcal A} ({\rm aff} (2, k)) &\cong& \, k [X_{11}, X_{12}, X_{21}, X_{22}]/(X_{21}, \, X_{11} - X_{11}X_{22} + X_{12}X_{21}) \\ & \cong& \, k[X, Y, Z]/(X - XZ) \end{eqnarray*} Indeed, the non-zero structure constants of $\mathfrak{h}$ are $\tau_{1,2}^1 = 1 = - \tau_{2,1}^1$. Using \equref{liepol} from the previous remark the only non-zero universal polynomials of the Lie algebra ${\rm aff} (2, k)$ are $P_{(1, 1, 2)} = X_{11} - X_{11}X_{22} + X_{12}X_{21}$, $P_{(2, 1, 2)} = X_{21}$, $-P_{(1, 1, 2)}$ and $-P_{(2, 1, 2)}$. The conclusion now follows. 2. 
Let $\mathfrak{h} := \mathfrak{sl}(2, k)$ be the Lie algebra with basis $\{e_1, e_2, e_3\}$ and bracket $\left[e_1, \, e_2 \right] = e_3$, $\left[e_3, \, e_2 \right] = -2 e_2$, $\left[e_3, \, e_1 \right] = 2e_1$. A routine computation shows that ${\mathcal A} (\mathfrak{sl}(2, k)) \cong k[X_{ij} \, | \, i, j = 1, 2, 3]/J$, where $J$ is the ideal generated by the following nine universal polynomials of $\mathfrak{sl}(2, k)$: \begin{eqnarray*} && \hspace*{-10mm} X_{13} - 2 X_{12}X_{31} + 2X_{11}X_{32}, \,\,\, 2X_{11} - 2X_{11}X_{33} + 2 X_{13}X_{31},\,\,\, 2X_{12} - 2X_{13}X_{32} + 2X_{12}X_{33}\\ && \hspace*{-10mm} X_{23} - 2 X_{21}X_{32} + 2X_{22}X_{31}, \,\,\, 2X_{21} - 2X_{23}X_{31} + 2 X_{21}X_{33},\,\,\, 2X_{22} - 2X_{22}X_{33} + 2X_{23}X_{32}\\ && \hspace*{-10mm} X_{33} - X_{11}X_{22} + X_{12}X_{21}, \,\,\, 2X_{31} - X_{21}X_{13} + X_{11}X_{23}, \,\,\, 2X_{32} - X_{12}X_{23} + X_{13}X_{22}. \end{eqnarray*} \end{examples} We recall that the polynomial algebra $M(n) = k[X_{ij} \, | \, i, j = 1, \cdots, n]$ is a bialgebra with comultiplication and counit given by $\Delta (X_{ij}) = \sum_{s=1}^n \, X_{is} \otimes X_{sj}$ and $\varepsilon (X_{ij}) = \delta_{i, j}$, for any $i$, $j=1, \cdots, n$. We will prove now that the universal algebra ${\mathcal A} (\mathfrak{h})$ is also a bialgebra. \begin{proposition} \prlabel{bialgebra} Let $\mathfrak{h}$ be a Leibniz algebra of dimension $n$. Then there exists a unique bialgebra structure on ${\mathcal A} (\mathfrak{h})$ such that the Leibniz algebra homomorphism $\eta_{\mathfrak{h}} : \mathfrak{h} \to \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h})$ becomes a right ${\mathcal A} (\mathfrak{h})$-comodule structure on $\mathfrak{h}$. More precisely, the comultiplication and the counit on ${\mathcal A} (\mathfrak{h})$ are given for any $i$, $j=1, \cdots, n$ by \begin{equation} \eqlabel{deltaeps} \Delta (x_{ij}) = \sum_{s=1}^n \, x_{is} \otimes x_{sj} \quad {\rm and} \quad \varepsilon (x_{ij}) = \delta_{i, j} \end{equation} Furthermore, the usual projection $\pi \colon M(n) \to {\mathcal A} (\mathfrak{h})$ becomes a bialgebra homomorphism. \end{proposition} \begin{proof} Consider the Leibniz algebra homomorphism $f : \mathfrak{h} \to \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h}) \otimes {\mathcal A} (\mathfrak{h})$ defined by $f := (\eta_{\mathfrak{h}} \otimes {\rm Id}_{{\mathcal A} (\mathfrak{h})} ) \, \circ \, \eta_{\mathfrak{h}}$. 
It follows from \coref{initialobj} that there exists a unique algebra homomorphism $\Delta : {\mathcal A} (\mathfrak{h}) \to {\mathcal A} (\mathfrak{h}) \otimes {\mathcal A} (\mathfrak{h})$ such that $({\rm Id}_{\mathfrak{h}} \otimes \Delta) \circ \eta_{\mathfrak{h}} = f$; that is, the following diagram is commutative: \begin{eqnarray} \eqlabel{delta} \xymatrix {& \mathfrak{h} \ar[rr]^-{\eta_{\mathfrak{h}} } \ar[d]_{ \eta_{\mathfrak{h}} } & {} & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h} )} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes \Delta }\\ & \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h} ) \ar[rr]_-{\eta_{\mathfrak{h}} \otimes {\rm Id}_{{\mathcal A} (\mathfrak{h})}} & {} & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h} ) \otimes {\mathcal A} (\mathfrak{h} )} } \end{eqnarray} Now, if we evaluate the diagram \equref{delta} at each $e_i$, for $i = 1, \cdots, n$ we obtain, taking into account \equref{unitadj2}, the following: \begin{eqnarray*} && \sum_{t=1}^n \, e_t \otimes \Delta (x_{ti}) = (\eta_{\mathfrak{h}} \otimes {\rm Id}) (\sum_{s=1}^n \, e_s \otimes x_{si}) = \sum_{s=1}^n ( \sum_{t=1}^n \, e_t \otimes x_{ts}) \otimes x_{si}\\ && = \sum_{t=1}^n \, e_t \otimes (\sum_{s=1}^n x_{ts} \otimes x_{si} ) \end{eqnarray*} and hence $\Delta (x_{ti}) = \sum_{s=1}^n \, x_{ts} \otimes x_{si}$, for all $t$, $i=1, \cdots, n$. Obviously, $\Delta$ given by this formula on generators is coassociative. In a similar fashion, applying once again \coref{initialobj}, we obtain that there exists a unique algebra homomorphism $\varepsilon: {\mathcal A} (\mathfrak{h}) \to k$ such that the following diagram is commutative: \begin{eqnarray} \eqlabel{epsilo} \xymatrix {& \mathfrak{h} \ar[rr]^-{\eta_{\mathfrak{h}} } \ar[drr]_{ {\rm can} } & {} & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h} )} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes \varepsilon }\\ & {} & {} & {\mathfrak{h} \otimes k} } \end{eqnarray} where ${\rm can} : \mathfrak{h} \to \mathfrak{h} \otimes k$ is the canonical isomorphism, ${\rm can} (x) = x \otimes 1$, for all $x\in \mathfrak{h}$. If we evaluate this diagram at each $e_t$, for $t = 1, \cdots, n$, we obtain $\varepsilon (x_{ij}) = \delta_{i, j}$, for all $i$, $j=1, \cdots, n$. It can be easily checked that $\varepsilon$ is a counit for $\Delta$, thus ${\mathcal A} (\mathfrak{h})$ is a bialgebra. Furthermore, the commutativity of the above two diagrams implies that the canonical map $\eta_{\mathfrak{h}} : \mathfrak{h} \to \mathfrak{h} \otimes {\mathcal A} (\mathfrak{h})$ defines a right ${\mathcal A} (\mathfrak{h})$-comodule structure on $\mathfrak{h}$. \end{proof} We call the pair $({\mathcal A}(\mathfrak{h}), \, \eta_{\mathfrak{h}} )$, with the coalgebra structure defined in \prref{bialgebra}, the {\it universal coacting bialgebra of the Leibniz algebra $\mathfrak{h}$}. It fulfills the following universal property, which extends \coref{initialobj}: \begin{theorem}\thlabel{univbialg} Let $\mathfrak{h}$ be a Leibniz algebra of dimension $n$. 
Then, for any commutative bialgebra $B$ and any Leibniz algebra homomorphism $f \colon \mathfrak{h} \to \mathfrak{h} \otimes B$ which makes $\mathfrak{h}$ into a right $B$-comodule there exists a unique bialgebra homomorphism $\theta \colon {\mathcal A} (\mathfrak{h}) \to B$ such that the following diagram is commutative: \begin{eqnarray} \eqlabel{univbialg} \xymatrix {& \mathfrak{h} \ar[r]^-{\eta_{\mathfrak{h}}} \ar[dr]_-{f } & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h} )} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes \theta }\\ & {} & {\mathfrak{h} \otimes B} } \end{eqnarray} \end{theorem} \begin{proof} As ${\mathcal A} (\mathfrak{h})$ is the universal algebra of $\mathfrak{h}$, there exists a unique algebra homomorphism $\theta \colon {\mathcal A} (\mathfrak{h}) \to B$ such that diagram~\equref{univbialg} commutes. The proof will be finished once we show that $\theta$ is a coalgebra homomorphism as well. This follows by using again the universal property of ${\mathcal A} (\mathfrak{h})$. Indeed, we obtain a unique algebra homomorphism $\psi \colon {\mathcal A} (\mathfrak{h}) \to B \otimes B$ such that the following diagram is commutative: \begin{equation}\eqlabel{101} \xymatrix{ \mathfrak{h}\ar[r]^-{\eta_{\mathfrak{h}} }\ar[rdd]_{\bigl({\rm Id}_{\mathfrak{h}}\otimes\, \Delta_{B} \circ \theta\bigl)\circ \eta_{\mathfrak{h}} } & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h})}\ar[dd]^{{\rm Id}_{\mathfrak{h}} \otimes \psi} \\ {} & {} \\ {} & {\mathfrak{h} \otimes B \otimes B} } \end{equation} The proof will be finished once we show that $(\theta \otimes \theta) \circ \Delta$ makes diagram~\equref{101} commutative. Indeed, as $f \colon \mathfrak{h} \to \mathfrak{h} \otimes B$ is a right $B$-comodule structure, we have: \begin{eqnarray*} \bigl({\rm Id}_{\mathfrak{h}} \otimes\, (\theta \otimes \theta) \circ \Delta\bigl)\circ \,\eta_{\mathfrak{h}} &=& \bigl({\rm Id}_{\mathfrak{h}} \otimes \theta \otimes \theta \bigl)\circ \underline{\bigl({\rm Id}_{\mathfrak{h}} \otimes \Delta\bigl)\circ \, \eta_{\mathfrak{h}}}\\ &\stackrel{\equref{delta}} {=}& \bigl({\rm Id}_{\mathfrak{h}} \otimes \theta \otimes \theta \bigl)\circ (\eta_{\mathfrak{h}} \otimes {\rm Id}_{{\mathcal A} (\mathfrak{h})})\circ \eta_{\mathfrak{h}}\\ &=& \bigl(\underline{({\rm Id}_{\mathfrak{h}} \otimes \theta) \circ \eta_{\mathfrak{h}}}\ \otimes \theta\bigl)\circ \, \eta_{\mathfrak{h}}\\\ &\stackrel{\equref{univbialg}} {=}& \bigl(f \otimes \theta\bigl)\circ \, \eta_{\mathfrak{h}}\\ &=& (f \otimes {\rm Id}_{B})\circ \underline{({\rm Id}_{\mathfrak{h}} \otimes \theta) \circ \, \eta_{\mathfrak{h}}}\\ &\stackrel{\equref{univbialg}} {=}& \underline{(f \otimes {\rm Id}_{B})\circ f}\\ &=& ({\rm Id}_{\mathfrak{h}} \otimes \Delta_{B}) \circ \underline{f}\\ &\stackrel{\equref{univbialg}} {=}& ({\rm Id}_{\mathfrak{h}} \otimes \Delta_{B}) \circ ({\rm Id}_{\mathfrak{h}} \otimes \theta) \circ \eta_{\mathfrak{h}}\\ &=& ({\rm Id}_{\mathfrak{h}} \otimes \Delta_{B} \circ \theta) \circ \eta_{\mathfrak{h}} \end{eqnarray*} as desired. Similarly, one can show that $\varepsilon_B \, \circ \, \theta = \varepsilon$ and the proof is now finished. \end{proof} In what follows we construct for any finite dimensional Leibniz algebra $\mathfrak{h}$ a universal commutative Hopf algebra ${\mathcal H} (\mathfrak{h})$ together with a Leibniz algebra homomorphism $\lambda_{\mathfrak{h}} \colon \mathfrak{h} \to \mathfrak{h} \otimes {\mathcal H} (\mathfrak{h})$ which makes $\mathfrak{h}$ into a right ${\mathcal H} (\mathfrak{h})$-comodule. 
This is achieved by using the free commutative Hopf algebra generated by a commutative bialgebra introduced in \cite[Chapter IV]{T}. Recall that assigning to a commutative bialgebra the free commutative Hopf algebra defines a functor $L \colon {\rm ComBiAlg}_k \to {\rm ComHopf}_k$ which is a left adjoint to the forgetful functor ${\rm ComHopf}_k \to {\rm ComBiAlg}_k$ (\cite[Theorem 65, (2)]{T}). Throughout, we denote by $\mu \colon 1_{{\rm ComBiAlg}_k} \to UL$ the unit of the adjunction $L \dashv U$. \begin{definition} Let $\mathfrak{h}$ be a finite dimensional Leibniz algebra. The pair $\bigl({\mathcal H} (\mathfrak{h}) := L({\mathcal A} (\mathfrak{h})), \, \lambda_{\mathfrak{h}} := ({\rm Id}_{\mathfrak{h}} \otimes \mu_{{\mathcal A} (\mathfrak{h})}) \, \circ \, \eta_{\mathfrak{h}}\bigl)$ is called the {\it universal coacting Hopf algebra of $\mathfrak{h}$}. \end{definition} The pair $\bigl( {\mathcal H} (\mathfrak{h}), \, \lambda_{\mathfrak{h}} \bigl)$ fulfills the following universal property, which shows that it is the initial object in the category of all commutative Hopf algebras that coact on $\mathfrak{h}$. \begin{theorem} \thlabel{univhopf} Let $\mathfrak{h}$ be a finite dimensional Leibniz algebra. Then, for any commutative Hopf algebra $H$ and any Leibniz algebra homomorphism $f \colon \mathfrak{h} \to \mathfrak{h} \otimes H$ which makes $\mathfrak{h}$ into a right $H$-comodule, there exists a unique Hopf algebra homomorphism $g \colon {\mathcal H} (\mathfrak{h}) \to H$ for which the following diagram is commutative: \begin{eqnarray} \eqlabel{univHopfalg} \xymatrix {& \mathfrak{h} \ar[rr]^-{\lambda_{\mathfrak{h}} } \ar[drr]_{ f } & {} & {\mathfrak{h} \otimes {\mathcal H} (\mathfrak{h} )} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes g }\\ & {} & {} & {\mathfrak{h} \otimes H} } \end{eqnarray} \end{theorem} \begin{proof} Let $H$ be a commutative Hopf algebra together with a Leibniz algebra homomorphism $f \colon \mathfrak{h} \to \mathfrak{h} \otimes H$ which makes $\mathfrak{h}$ into a right $H$-comodule. Using \thref{univbialg} we obtain a unique bialgebra homomorphism $\theta: {\mathcal A} (\mathfrak{h}) \to H$ which makes the following diagram commutative: \begin{eqnarray}\label{final1} \xymatrix {& \mathfrak{h} \ar[rr]^-{\eta_{\mathfrak{h}} } \ar[drr]_{ f } & {} & {\mathfrak{h} \otimes {\mathcal A} (\mathfrak{h} )} \ar[d]^{ {\rm Id}_{\mathfrak{h}} \otimes \theta }\\ & {} & {} & {\mathfrak{h} \otimes H} }\qquad i.e.\,\,\,({\rm Id}_{\mathfrak{h}} \otimes \theta ) \circ \eta_{\mathfrak{h}} = f. \end{eqnarray} Now the adjunction $L \dashv U$ yields a unique Hopf algebra homomorphism $g \colon L({\mathcal A} (\mathfrak{h})) \to H$ such that the following diagram commutes: \begin{eqnarray}\label{final2} \xymatrix{ {\mathcal A} (\mathfrak{h})\ar[rr]^-{\mu_{{\mathcal A} (\mathfrak{h})}}\ar[rrd]_{\theta} & {} & {L({\mathcal A} (\mathfrak{h}))}\ar[d]^{ g} \\ {} & {} & {H} } \qquad {\rm i.e.}\,\,\, g \circ \mu_{{\mathcal A} (\mathfrak{h})} = \theta. \end{eqnarray} We are now ready to show that $g \colon {\mathcal H} (\mathfrak{h}) = L({\mathcal A} (\mathfrak{h})) \to H$ is the unique Hopf algebra homomorphism which makes diagram \equref{univHopfalg} commutative. 
Indeed, putting all the above together yields: \begin{eqnarray*} ({\rm Id}_{\mathfrak{h}} \otimes g) \circ ({\rm Id}_{\mathfrak{h}} \otimes \mu_{{\mathcal A} (\mathfrak{h})}) \circ \eta_{\mathfrak{h}} &=& ({\rm Id}_{\mathfrak{h}} \otimes \underline{g \circ \mu_{{\mathcal A} (\mathfrak{h})}}) \circ \eta_{\mathfrak{h}} \\ &\stackrel{(\ref{final2})} {=}& \underline{({\rm Id}_{\mathfrak{h}} \otimes \theta) \circ \eta_{\mathfrak{h}}} \\ &\stackrel{(\ref{final1})} {=}& f. \end{eqnarray*} Since $g$ is obviously the unique Hopf algebra homomorphism which makes the above diagram commutative, the proof is finished. \end{proof} \section{Applications: the automorphism group and the classification of gradings on Leibniz algebras}\selabel{sect3} In this section we discuss three applications of our previous results which highlight the importance of the newly introduced universal coacting bialgebra (Hopf algebra) of a Leibniz algebra. The first one concerns the description of the automorphism group ${\rm Aut}_{{\rm Lbz}} (\mathfrak{h})$ of a given Leibniz/Lie algebra $\mathfrak{h}$, which is a classical and notoriously difficult problem arising from Hilbert's invariant theory. We start by recalling a few basic facts from the theory of Hopf algebras \cite{Sw, radford} which will be useful in the sequel. For any bialgebra $H$ the set of group-like elements, denoted by $G(H) := \{g\in H \, | \, \Delta (g) = g \otimes g \,\, {\rm and } \,\, \varepsilon(g) = 1 \}$, is a monoid with respect to the multiplication of $H$. We denote by $H^{\rm o}$ the finite dual bialgebra of $H$, i.e.: $$ H^{\rm o} := \{ f \in H^* \,| \, f(I) = 0, \, {\rm for \, some \, ideal} \,\, I \lhd H \,\, {\rm with} \,\, {\rm dim}_k (H/I) < \infty \} $$ It is well known (see for instance \cite[pag. 62]{radford}) that $G(H^{\rm o}) = {\rm Hom}_{\rm Alg_k} (H, \, k)$, the set of all algebra homomorphisms $H\to k$. Now, we shall give the first application of the universal bialgebra of a Leibniz algebra. \begin{theorem} \thlabel{automorf} Let $\mathfrak{h}$ be a finite dimensional Leibniz algebra with basis $\{e_1, \cdots, e_n\}$ and consider $U\bigl (G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)\bigl)$ to be the group of all invertible group-like elements of the finite dual ${\mathcal A} (\mathfrak{h})^{\rm o}$. Then the map defined for any $\theta \in U\bigl(G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)\bigl)$ and $i = 1, \cdots, n$ by: \begin{equation} \eqlabel{izomono} \overline{\gamma} : U \bigl(G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl) \bigl) \to {\rm Aut}_{{\rm Lbz}} (\mathfrak{h}), \qquad \overline{\gamma} (\theta) (e_i) := \sum_{s=1}^n \, \theta(x_{si}) \, e_s \end{equation} is an isomorphism of groups. \end{theorem} \begin{proof} By applying \coref{morlbz} for $\mathfrak{g}:= \mathfrak{h}$ it follows that the map $$ \gamma : {\rm Hom}_{\rm Alg_k} ({\mathcal A} (\mathfrak{h}) , \, k) \to {\rm End}_{{\rm Lbz}} (\mathfrak{h}), \quad \gamma (\theta) = \bigl( {\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \circ \eta_{\mathfrak{h}} $$ is bijective. Based on formula \equref{unitadj2}, it can be easily seen that $\gamma$ takes the form given in \equref{izomono}. As mentioned above we have $ {\rm Hom}_{\rm Alg_k} ({\mathcal A} (\mathfrak{h}) , k) = G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)$. Therefore, since $\overline{\gamma}$ is the restriction of $\gamma$ to the invertible elements of the two monoids, the proof will be finished once we show that $\gamma$ is an isomorphism of monoids. 
To this end, recall that the monoid structure on ${\rm End}_{{\rm Lbz}} (\mathfrak{h})$ is given by the usual composition of endomorphisms of the Leibniz algebra $\mathfrak{h}$, while $G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)$ is a monoid with respect to the convolution product, that is: \begin{equation}\eqlabel{convolut} (\theta_1 \star \theta_2) (x_{sj}) = \sum_{t=1}^n \, \theta_1(x_{st}) \theta_2(x_{tj}) \end{equation} for all $\theta_1$, $\theta_2 \in G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)$ and $j$, $s = 1, \cdots, n$. Now, for any $\theta_1$, $\theta_2 \in G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)$ and $j = 1, \cdots, n$ we have: \begin{eqnarray*} && \bigl(\gamma(\theta_1) \circ \gamma(\theta_2) \bigl) (e_j) = \gamma(\theta_1) \bigl( \sum_{t=1}^n \, \theta_2 (x_{tj}) e_t \bigl) = \sum_{s, t = 1}^n \, \theta_1(x_{st}) \theta_2 (x_{tj})\, e_s \\ && = \sum_{s=1}^n \, \bigl( \sum_{t=1}^n \, \theta_1(x_{st}) \theta_2 (x_{tj}) \bigl) \, e_s = \sum_{s=1}^n \, (\theta_1 \star \theta_2) (x_{sj}) \, e_s = \gamma (\theta_1 \star \theta_2) (e_j) \end{eqnarray*} thus, $\gamma (\theta_1 \star \theta_2) = \gamma(\theta_1) \circ \gamma(\theta_2)$, and therefore $\gamma$ respects the multiplication. We are left to show that $\gamma$ also preserves the unit. Note that the unit $1$ of the monoid $G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)$ is the counit $\varepsilon_{{\mathcal A} (\mathfrak{h})}$ of the bialgebra ${\mathcal A} (\mathfrak{h})$ and we obtain: $$ \gamma(1) (e_i) = \gamma (\varepsilon_{{\mathcal A} (\mathfrak{h})}) (e_i) = \sum_{s=1}^n \, \varepsilon_{{\mathcal A} (\mathfrak{h})} (x_{si}) \, e_s = \sum_{s=1}^n \, \delta_{si} \, e_s = e_i = {\rm Id}_{\mathfrak{h}} (e_i) $$ Thus we have proved that $\gamma$ is an isomorphism of monoids and the proof is finished. \end{proof} \begin{remark} We point out that the construction of ${\mathcal A} (\mathfrak{h})$, as well as ${\mathcal A} (\mathfrak{h}, \mathfrak{g})$, and the description of the automorphism group of $\mathfrak{h}$ can be achieved for an arbitrary finite dimensional algebra $\mathfrak{h}$, not necessarily Lie or Leibniz. This avenue of investigation is considered in a forthcoming paper of the authors. \end{remark} The second application we consider is related to the classical problem of classifying all $G$-gradings on a given Leibniz/Lie algebra. Let $G$ be an abelian group and $\mathfrak{h}$ a Leibniz algebra. Recall that a \emph{$G$-grading} on $\mathfrak{h}$ is a vector space decomposition $\mathfrak{h} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma}$ such that $\left [\mathfrak{h}_{\sigma}, \, \mathfrak{h}_{\tau} \right] \subseteq \mathfrak{h}_{\sigma \tau}$ for all $\sigma$, $\tau \in G$. For more details on the problem of classifying $G$-gradings on Lie algebras see \cite{eld} and the references therein. In what follows $k[G]$ denotes the usual group algebra of a group $G$. \begin{proposition}\prlabel{graduari} Let $G$ be an abelian group and $\mathfrak{h}$ a finite dimensional Leibniz algebra. Then there exists a bijection between the set of all $G$-gradings on $\mathfrak{h}$ and the set of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \to k[G]$. 
The bijection is given such that the $G$-grading on $\mathfrak{h} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma}^{(\theta)} $ associated to a bialgebra map $\theta: {\mathcal A} (\mathfrak{h}) \to k[G]$ is given by: \begin{equation}\eqlabel{gradass} \mathfrak{h}_{\sigma}^{(\theta)} := \{ x \in \mathfrak{h} \, | \, \bigl({\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \, \circ \, \eta_{\mathfrak{h}} (x) = x \otimes \sigma \} \end{equation} for all $\sigma \in G$. \end{proposition} \begin{proof} Applying \thref{univbialg} to the commutative bialgebra $B := k[G]$ yields a bijection between the set of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \to k[G]$ and the set of all Leibniz algebra homomorphisms $f \colon \mathfrak{h} \to \mathfrak{h} \otimes k[G]$ which make $\mathfrak{h}$ into a right $k[G]$-comodule. The proof is finished if we show that the latter set is in bijective correspondence with the set of all $G$-gradings on $\mathfrak{h}$. Indeed, it is a well known fact in Hopf algebra theory \cite[Exercise 3.2.21]{radford} that there exists a bijection between the set of all right $k[G]$-comodule structures $f: \mathfrak{h} \to \mathfrak{h} \otimes k[G]$ on the vector space $\mathfrak{h}$ and the set of all vector space decompositions $\mathfrak{h} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma}$. The bijection is given such that $x_{\sigma} \in \mathfrak{h}_{\sigma}$ if and only if $f (x_{\sigma}) = x_{\sigma} \otimes \sigma$, for all $\sigma \in G$. The only thing left to prove is that under this bijection a right coaction $f: \mathfrak{h} \to \mathfrak{h} \otimes k[G]$ is a Leibniz algebra homomorphism if and only if $\left [\mathfrak{h}_{\sigma}, \, \mathfrak{h}_{\tau} \right] \subseteq \mathfrak{h}_{\sigma \tau}$, for all $\sigma$, $\tau \in G$. Indeed, let $\sigma$, $\tau \in G$ and $x_{\sigma} \in \mathfrak{h}_{\sigma}$, $x_{\tau} \in \mathfrak{h}_{\tau}$; then $\left [f(x_{\sigma}), \, f(x_{\tau}) \right] = \left [x_{\sigma}\otimes \sigma, \, x_{\tau} \otimes \tau \right] = \left [x_{\sigma}, \, x_{\tau} \right] \otimes \sigma \tau$. Thus, we obtain that $ f (\left[ x_{\sigma}, \, x_{\tau} \right]) = \left [f(x_{\sigma}), \, f(x_{\tau}) \right]$ if and only if $ \left[x_{\sigma}, \, x_{\tau} \right] \in \mathfrak{h}_{\sigma \tau}$. Hence, $f: \mathfrak{h} \to \mathfrak{h} \otimes k[G]$ is a Leibniz algebra homomorphism if and only if $\left [\mathfrak{h}_{\sigma}, \, \mathfrak{h}_{\tau} \right] \subseteq \mathfrak{h}_{\sigma \tau}$, for all $\sigma$, $\tau \in G$ and the proof is now finished. \end{proof} Our next result classifies all $G$-gradings on a given Leibniz algebra $\mathfrak{h}$, where $G$ is an abelian group. Recall that two $G$-gradings $\mathfrak{h} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma} ^{'}$ on $\mathfrak{h}$ are called \emph{isomorphic} if there exists an automorphism $w \in {\rm Aut}_{{\rm Lbz}} (\mathfrak{h})$ of $\mathfrak{h}$ such that $w (\mathfrak{h}_{\sigma}) \subseteq \mathfrak{h}_{\sigma} ^{'}$, for all $\sigma \in G$. Since $w$ is bijective and $\mathfrak{h}$ is $G$-graded we can prove that the last condition is equivalent to $w (\mathfrak{h}_{\sigma}) = \mathfrak{h}_{\sigma} ^{'}$, for all $\sigma \in G$, which is the condition that usually appears in the literature in the classification of $G$-gradings (\cite{eld}). 
Indeed, let $x_{\sigma} ^{'} \in \mathfrak{h}_{\sigma} ^{'}$; since $w$ is surjective there exists $y = \sum_{i=1}^t \, y_{\tau_i} \in \mathfrak{h}$ such that $x_{\sigma} ^{'} = w(y) = \sum_{i=1}^t \, w(y_{\tau_i})$, where $y_{\tau_i} \in \mathfrak{h}_{\tau_i}$, for all $i = 1, \cdots, t$ are the homogeneous components of $y$. Since $w(y_{\tau_i}) \in \mathfrak{h}_{\tau_i}^{'}$ and $\mathfrak{h} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma} ^{'}$ we obtain that $w(y_{\tau_i}) = 0$, for all $\tau_i \neq \sigma$. As $w$ is injective, it follows that $y_{\tau_i} = 0$, for all $\tau_i \neq \sigma$; hence $y = y_{\sigma}$ and $x_{\sigma} ^{'} = w (y_{\sigma}) \in w (\mathfrak{h}_{\sigma})$, as needed. We recall one more elementary fact from Hopf algebra theory: if $H$ and $L$ are two bialgebras over a field $k$ then the abelian group ${\rm Hom} (H, \, L)$ of all $k$-linear maps is a unital associative algebra under the convolution product (\cite{Sw}): $(\theta_1 \star \theta_2) (h) := \sum \, \theta_1 (h_{(1)}) \theta_2 (h_{(2)})$, for all $\theta_1$, $\theta_2 \in {\rm Hom} (H, \, L)$ and $h\in H$. \begin{definition}\delabel{conjug} Let $G$ be an abelian group and $\mathfrak{h}$ a finite dimensional Leibniz algebra. Two homomorphisms of bialgebras $\theta_1, \theta_2: {\mathcal A} (\mathfrak{h}) \to k[G]$ are called \emph{conjugate} if there exists an invertible group-like element $g \in U\bigl (G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)\bigl)$ of the finite dual ${\mathcal A} (\mathfrak{h})^{\rm o}$ such that $\theta_2 = g \star \theta_1 \star g^{-1}$, in the convolution algebra ${\rm Hom} \bigl( {\mathcal A} (\mathfrak{h}) , \, k[G] \bigl)$. We use the notation $\theta_1 \approx \theta_2$ to designate two conjugate homomorphisms. \end{definition} We denote by ${\rm Hom}_{\rm BiAlg} \, \bigl( {\mathcal A} (\mathfrak{h}) , \, k[G] \bigl)/\approx $ the quotient of the set of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \to k[G]$ by the above equivalence relation and let $\hat{\theta}$ denote the equivalence class of $\theta \in {\rm Hom}_{\rm BiAlg} \, \bigl( {\mathcal A} (\mathfrak{h}) , \, k[G] \bigl)$. The next theorem classifies all $G$-gradings on $\mathfrak{h}$. \begin{theorem} \thlabel{nouaclas} Let $G$ be an abelian group, $\mathfrak{h}$ a finite dimensional Leibniz algebra and consider $G$-${\rm \textbf{gradings}}(\mathfrak{h})$ to be the set of isomorphism classes of all $G$-gradings on $\mathfrak{h}$. Then the map $$ {\rm Hom}_{\rm BiAlg} \, \bigl( {\mathcal A} (\mathfrak{h}) , \, k[G] \bigl)/\approx \,\,\, \to \,\, G{\rm-\textbf{gradings}} (\mathfrak{h}), \qquad \hat{\theta} \mapsto \mathfrak{h}^{(\theta)} := \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma}^{(\theta)} $$ where $\mathfrak{h}_{\sigma}^{(\theta)} = \{ x \in \mathfrak{h} \, | \, \bigl({\rm Id}_{\mathfrak{h}} \otimes \theta \bigl) \, \circ \, \eta_{\mathfrak{h}} (x) = x \otimes \sigma \}$, for all $\sigma \in G$, is bijective. \end{theorem} \begin{proof} Let $\{e_1, \cdots, e_n\}$ be a basis in $\mathfrak{h}$. By \prref{graduari} for any $G$-grading $\mathfrak{h} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma}$ on $\mathfrak{h}$ there exists a unique bialgebra homomorphism $\theta: {\mathcal A} (\mathfrak{h}) \to k[G]$ such that $\mathfrak{h}_{\sigma} = \mathfrak{h}_{\sigma}^{(\theta)}$, for all $\sigma \in G$. It remains to investigate when two such $G$-gradings are isomorphic. 
Let $\theta_1$, $\theta_2 : {\mathcal A} (\mathfrak{h}) \to k[G]$ be two bialgebra homomorphisms and let $\mathfrak{h} = \mathfrak{h}^{(\theta_1)} := \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma}^{(\theta_1)} = \oplus_{\sigma \in G} \, \mathfrak{h}_{\sigma}^{(\theta_2)} =: \mathfrak{h}^{(\theta_2)}$ be the associated $G$-gradings. It follows from the proof of \prref{graduari} that defining a $G$-grading on $\mathfrak{h}$ is equivalent (and the correspondence is bijective) to defining a right $k[G]$-comodule structure $\rho :\mathfrak{h} \to \mathfrak{h} \otimes k[G]$ on $\mathfrak{h}$ such that the right coaction $\rho$ is a Leibniz algebra homomorphism. Moreover, the right coactions $\rho^{(\theta_1)}$ and $\rho^{(\theta_2)} : \mathfrak{h} \to \mathfrak{h} \otimes k[G]$ are implemented from $\theta_1$ and $\theta_2$ using \thref{univbialg}, that is, they are given for any $j = 1, 2$ by \begin{equation} \eqlabel{3000} \rho^{(\theta_j)} : \mathfrak{h} \to \mathfrak{h} \otimes k[G], \qquad \rho^{(\theta_j)} (e_i) = \sum_{s=1}^n \, e_s \otimes \theta_j (x_{si}) \end{equation} for all $i = 1, \cdots, n$. Now a well known result in Hopf algebra theory states that the two $G$-gradings $\mathfrak{h}^{(\theta_1)}$ and $\mathfrak{h}^{(\theta_2)}$ are isomorphic if and only if $(\mathfrak{h}, \, \rho^{(\theta_1)})$ and $(\mathfrak{h}, \, \rho^{(\theta_2)})$ are isomorphic as Leibniz algebras and right $k[G]$-comodules, that is there exists $w : \mathfrak{h} \to \mathfrak{h}$ an automorphism of $\mathfrak{h}$ such that $\rho^{(\theta_2)} \, \circ w = (w \otimes {\rm Id}_{k[G]}) \, \circ \rho^{(\theta_1)}$. We apply now \thref{automorf}: for any Leibniz algebra automorphism $w : \mathfrak{h} \to \mathfrak{h}$ there exists a unique invertible group-like element of the finite dual $g \in U\bigl (G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)\bigl)$ such that $w = w_g$ is given for any $i = 1, \cdots, n$ by \begin{equation} \eqlabel{3001} w_g (e_i) = \sum_{s=1}^n \, g(x_{si}) \, e_s \end{equation} Using \equref{3000} and \equref{3001} we can easily compute that: $$ \bigl (\rho^{(\theta_2)} \, \circ w_g \bigl) (e_i) = \sum_{a=1}^n \, e_a \otimes \bigl( \sum_{s=1}^n \, \theta_2 (x_{as}) g(x_{si}) \bigl) $$ and $$ \bigl( (w_g \otimes {\rm Id}_{k[G]}) \, \circ \rho^{(\theta_1)} \bigl) (e_i) = \sum_{a=1}^n \, e_a \otimes \bigl(\sum_{s=1}^n \, g(x_{as}) \theta_1 (x_{si}) \bigl) $$ for all $i = 1, \cdots, n$. Thus, the Leibniz algebra automorphism $w_g : \mathfrak{h} \to \mathfrak{h}$ is also a right $k[G]$-comodule map if and only if \begin{equation} \eqlabel{3002} \sum_{s=1}^n \, g(x_{as}) \theta_1 (x_{si}) = \sum_{s=1}^n \, \theta_2 (x_{as}) g(x_{si}) \end{equation} for all $a$, $i = 1, \cdots, n$. Taking into account the formula of the comultiplication on the universal algebra ${\mathcal A} (\mathfrak{h})$, the equation \equref{3002} can be easily rephrased as $(g \star \theta_1) ( x_{ai}) = (\theta_2 \star g ) ( x_{ai})$, for all $a$, $i = 1, \cdots, n$ in the convolution algebra ${\rm Hom} \bigl( {\mathcal A} (\mathfrak{h}) , \, k[G] \bigl)$, or (since $\{x_{ai}\}_{a, i = 1, \cdots, n}$ is a system of generators of ${\mathcal A} (\mathfrak{h})$) just as $g \star \theta_1 = \theta_2 \star g$. We also note that $g: {\mathcal A} (\mathfrak{h}) \to k$ is an invertible element in the above convolution algebra. 
In conclusion, we have proved that two $G$-gradings $ \mathfrak{h}^{(\theta_1)}$ and $\mathfrak{h}^{(\theta_2)}$ on $\mathfrak{h}$ associated to two bialgebra homomorphisms $\theta_1$, $\theta_2 : {\mathcal A} (\mathfrak{h}) \to k[G]$ are isomorphic if and only if there exists $g \in U\bigl (G\bigl( {\mathcal A} (\mathfrak{h})^{\rm o} \bigl)\bigl)$ such that $\theta_2 = g \star \theta_1 \star g^{-1}$, in the convolution algebra ${\rm Hom} \bigl( {\mathcal A} (\mathfrak{h}) , \, k[G] \bigl)$, that is $\theta_1 \approx \theta_2$ and the proof is now finished. \end{proof} Recall that an \emph{action as automorphisms of a group $G$ on a Leibniz algebra $\mathfrak{h}$} is a group homomorphism $\varphi: G \to {\rm Aut}_{{\rm Lbz}} (\mathfrak{h})$. We give now the last application of the universal bialgebra ${\mathcal A} (\mathfrak{h})$. \begin{proposition} \prlabel{actiuni} Let $G$ be a finite group and $\mathfrak{h}$ a finite dimensional Leibniz algebra with basis $\{e_1, \cdots, e_n\}$. Then there exists a bijection between the set of all actions as automorphisms of $G$ on $\mathfrak{h}$ and the set of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \to k[G]^*$. The bijection is given such that the group homomorphism $\varphi_{\theta} : G \to {\rm Aut}_{{\rm Lbz}} (\mathfrak{h})$ associated to a bialgebra homomorphism $\theta: {\mathcal A} (\mathfrak{h}) \to k[G]^*$ is defined as follows: \begin{equation}\eqlabel{actiuniexp} \varphi_{\theta} ( g) (e_i) = \sum_{s=1}^n \, <\theta(x_{si}), \, g> \, e_s \end{equation} for all $g\in G$ and $i = 1, \cdots, n$. \end{proposition} \begin{proof} Applying \thref{univbialg} for the commutative bialgebra $B := k[G]^*$ gives a bijection between the set of all bialgebra homomorphisms ${\mathcal A} (\mathfrak{h}) \to k[G]^*$ and the set of all Leibniz algebra homomorphisms $f \colon \mathfrak{h} \to \mathfrak{h} \otimes k[G]^*$ which make $\mathfrak{h}$ into a right $k[G]^*$-comodule. The proof is finished if we show that the latter set is in bijective correspondence with the set of all group homomorphisms $G \to {\rm Aut}_{{\rm Lbz}} (\mathfrak{h})$. This follows by a standard argument in Hopf algebra theory, similar to the one used in \cite[Lemma 1]{rad2}. We indicate very briefly how the argument goes, leaving the details to the reader. Indeed, the category of right $k[G]^*$-comodules is isomorphic to the category of left $k[G]$-modules. The left action $\bullet : k[G] \otimes \mathfrak{h} \to \mathfrak{h}$ of the group algebra $k[G]$ on $\mathfrak{h}$ associated to a right coaction $f \colon \mathfrak{h} \to \mathfrak{h} \otimes k[G]^*$ is given by $g \bullet x := \, < x_{<1>} , \, g> \, x_{<0>}$, where we used the $\sum$-notation for comodules, $f(x) = x_{<0>} \otimes x_{<1>} \in \mathfrak{h} \otimes k[G]^*$ (summation understood). We associate to the action $\bullet$ the map $\varphi_{\bullet} : G \to {\rm Aut}_k (\mathfrak{h})$, $\varphi_{\bullet} (g) (x) := g \bullet x$, for all $g \in G$ and $x \in \mathfrak{h}$. Now, it can be easily checked that $f \colon \mathfrak{h} \to \mathfrak{h} \otimes k[G]^*$ being a Leibniz algebra homomorphism is equivalent to $\varphi_{\bullet} (g)$ being an automorphism of the Leibniz algebra $\mathfrak{h}$, for all $g \in G$ and the proof is finished. \end{proof}
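As a minimal illustration of formula \equref{actiuniexp} (a sketch in the notation of \prref{actiuni}, with the routine verifications left to the reader), take $G = \{1, \, g\}$ to be the cyclic group of order two and let $\{p_1, \, p_g\}$ be the basis of $k[G]^*$ dual to the group elements. An action of $G$ as automorphisms on $\mathfrak{h}$ is simply an automorphism $w \in {\rm Aut}_{{\rm Lbz}} (\mathfrak{h})$ with $w^2 = {\rm Id}_{\mathfrak{h}}$, say $w(e_i) = \sum_{s=1}^n \, w_{si} \, e_s$, for all $i = 1, \cdots, n$. The bialgebra homomorphism $\theta: {\mathcal A} (\mathfrak{h}) \to k[G]^*$ corresponding to this action is then determined on generators by $\theta (x_{si}) = \delta_{si} \, p_1 + w_{si} \, p_g$, where $\delta_{si}$ is the Kronecker symbol: indeed, $<\theta(x_{si}), \, 1> = \delta_{si}$ and $<\theta(x_{si}), \, g> = w_{si}$, so that \equref{actiuniexp} gives $\varphi_{\theta} (1) = {\rm Id}_{\mathfrak{h}}$ and $\varphi_{\theta} (g) = w$, as expected.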
\section{INTRODUCTION \label{sec:intro}} The desire to manipulate visible light has existed for well over 2000 years \cite{Smith2015}. Research on this topic has borne several technologies key to modern life, from spectacles to fibre optic cables. For at least the last two decades, technology trends have pushed for ever greater miniaturisation, performance and efficiency. One proposed solution to these challenges is to build computing elements from optical devices. Recent demonstrations of this principle include optical differentiation for edge detection \cite{Zhou2020}, optical integration \cite{Ferrera2010}, systems that solve differential equations \cite{Tan2013} and optical neural networks \cite{Shastri2021}. At a fundamental level, all of these technologies require the ability to manipulate light at the nanoscale in designer ways. One way to achieve this is to take inspiration from radio frequency applications, where waves of different frequency and polarisation are regularly re--directed or re--shaped by antenna systems. In the same way, optical light can be controlled by resonant metallic structures \cite{Hulst2017, Giannini2011}, `plasmonic antennas'. Plasmonic antennas, typically built from a small number ($\sim 2-10$) of resonant metallic elements, suffer from a couple of significant drawbacks. Firstly, plasmonic structures have large absorption at optical wavelengths, limiting efficiency. An alternative approach is to use dielectric resonators rather than plasmonic ones \cite{Staude2017}. Dielectric resonators exhibit shape--dependent Mie resonances \cite{Mie1908} that have lower loss at optical wavelengths than plasmonic resonances. Secondly, due to the small number of elements, the number of degrees of freedom is limited. This can make it difficult to design plasmonic antennas that have arbitrary effects upon light. To achieve more general control of light, one can assemble structures made from several ($\gg 10$) plasmonic or dielectric elements \cite{Meinzer2014}, giving many more design degrees of freedom. Built from several discrete sub--wavelength elements, this kind of structure is a metamaterial. As metamaterials have many geometric parameters that can be tuned to change the response of the material to electromagnetic waves, finding a set of parameters that give a particular response can be very challenging. Many methods to solve this `inverse design' problem have emerged recently \cite{Molesky2018}. To design metasurface lenses \cite{Khorasaninejad2016} or holograms \cite{Ni2013}, where the function of the metamaterial is to impart a known phase offset onto the incident field, the Gerchberg--Saxton algorithm \cite{GS1972} is commonly used. This algorithm is simple and efficient; however, it assumes that elements of the metasurface do not strongly couple to each other and typically requires many full--wave simulations to build up a library of the many phase--changing elements from which the metasurface is built. The design of aperiodic metamaterials built from discrete resonating elements, with the aim of coupling to emitters, can be facilitated with genetic algorithms \cite{Wiecha2018a, Wiecha2018b}. Genetic algorithms are extremely effective at exploring large and complex search--spaces with many local minima; however, the resulting structures can be difficult to understand intuitively \cite{Yeung2020}. 
Topology optimisation \cite{TObook} is used extensively to design graded index structures for a wide range of functionality including wavelength splitters \cite{Piggott2015}, lenses that overcome the diffraction limit \cite{Otomori2017} and mode sorters \cite{Frellsen2016}. The optimisation is usually performed using gradient descent, which can be slow, particularly for large structures or fine discretisations. One way to accelerate this is to make use of reciprocity \cite{LL8} to convert the slow calculation of a gradient into two field calculations. This `adjoint' method \cite{Miller2012, Keraly2013} can be very efficient; however, it still requires many full--wave simulations over the course of the iterative optimisation. In this work, we derive a method for designing metamaterials made from discrete scatterers. By assuming that the scatterers support only electric and magnetic dipole resonances, valid for small scatterers at optical wavelengths \cite{Kuznetsov2016}, Maxwell's equations can be solved exactly, eliminating the need for full--wave simulations. These solutions are developed in Section \ref{sec:dda}. To achieve the required scattering properties, the desired figure of merit can be expanded under small changes in the position of a scatterer. This gives an analytic expression that can be used to iteratively update the scatterer locations to maximise or minimise the figure of merit. We derive and apply this procedure to several relevant problems in Section \ref{sec:designing}, including manipulating the coupling between two nearby emitters, focusing a plane wave to a point and designing a structure with a particular radiation pattern. \section{THE DISCRETE DIPOLE APPROXIMATION} \label{sec:dda} To address the problem of designing metasurfaces, we begin by considering a metasurface composed of sub--wavelength discrete elements that support electric and magnetic dipole resonances. Maxwell's equations for a fixed frequency $\omega = ck$, where $k$ is the wave--number, can then be written as \begin{equation} \begin{pmatrix} \nabla \times \nabla \times & 0 \\ 0 & \nabla \times \nabla \times \end{pmatrix} \begin{pmatrix} \boldsymbol{E} \\ \boldsymbol{H} \end{pmatrix} - k^2 \begin{pmatrix} \boldsymbol{E} \\ \boldsymbol{H} \end{pmatrix} = \begin{pmatrix} \boldsymbol{E}_s \\ \boldsymbol{H}_s \end{pmatrix} + \begin{pmatrix} \omega^2 \mu_0 & i \omega \mu_0 \nabla \times \\ -i\omega \nabla \times & k^2 \end{pmatrix} \begin{pmatrix} \boldsymbol{P} \\ \boldsymbol{M} \end{pmatrix} . \label{eq:maxwell} \end{equation} In this expression, $\boldsymbol{E}_s$ and $\boldsymbol{H}_s$ represent the source fields, for example the field due to an emitter or a background plane wave. The properties of the metasurface are encoded in the polarisation density $\boldsymbol{P}$ and the magnetisation density $\boldsymbol{M}$. This is generally a difficult equation to solve; however, the assumption that the scatterers are sub--wavelength, $r k \leq 1$, means that the elements of the metasurface can be modelled as point--like. 
In general, the polarisation and magnetisation densities contain all multipole moments \cite{Raab2005, Evlyukhin2011, Evlyukhin2013}, however if we assume that only the dipole terms are present, then we can write the polarisation and magnetisation densities as \begin{align} \boldsymbol{P} &= \sum_n \boldsymbol{\alpha}_E \boldsymbol{E} (\boldsymbol{r}_n) \delta (\boldsymbol{r}-\boldsymbol{r}_n), & \boldsymbol{M} &= \sum_n \boldsymbol{\alpha}_H \boldsymbol{H} (\boldsymbol{r}_n) \delta (\boldsymbol{r}-\boldsymbol{r}_n) . \label{eq:PM} \end{align} This reduces the source terms in Maxwell's equations (\ref{eq:maxwell}) to a summation of delta functions. Equations of this form can be solved with the dyadic Greens function and its curl \cite{Schwinger1950, Tai1993} \begin{align} \boldsymbol{G} (\boldsymbol{r}, \boldsymbol{r'}) &= \left[ \boldsymbol{1} + \frac{1}{k^2} \nabla \otimes \nabla \right] \frac{e^{ik|\boldsymbol{r}-\boldsymbol{r'}|}}{4 \pi |\boldsymbol{r}-\boldsymbol{r'}|}, & \boldsymbol{G}_{EH} (\boldsymbol{r}, \boldsymbol{r'}) &= \nabla \times \boldsymbol{G} (\boldsymbol{r}, \boldsymbol{r'}) . \end{align} The solution to Maxwell's equations (\ref{eq:maxwell}) with source terms given by (\ref{eq:PM}) can then be written as \begin{equation} \begin{pmatrix} \boldsymbol{E} (\boldsymbol{r}) \\ \boldsymbol{H} (\boldsymbol{r}) \end{pmatrix} = \begin{pmatrix} \boldsymbol{E}_s (\boldsymbol{r}) \\ \boldsymbol{H}_s (\boldsymbol{r}) \end{pmatrix} + \sum_{n=1}^{n=N} \begin{pmatrix} \xi^2 \boldsymbol{G} (\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_E & i \xi \boldsymbol{G}_{EH} (\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_H \\ -i \xi \boldsymbol{G}_{EH} (\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_E & \xi^2 \boldsymbol{G} (\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_H \end{pmatrix} \begin{pmatrix} \boldsymbol{E}(\boldsymbol{r}_n) \\ \boldsymbol{H}(\boldsymbol{r}_n) \end{pmatrix} \label{eq:fields} \end{equation} where we have chosen units such that the impedance of free space is $\eta_0 = 1$ and work in terms of a dimensionless wavenumber $\xi$. This solution is not yet closed, since the fields applied to each scatterer $(\boldsymbol{E} (\boldsymbol{r}_n), \boldsymbol{H} (\boldsymbol{r}_n))$ must be determined. Imposing self--consistency yields the following matrix equations connecting the source and total fields at each scatterer \begin{equation} \boldsymbol{R}_{nm} \begin{pmatrix} \boldsymbol{E}(\boldsymbol{r}_m) \\ \boldsymbol{H}(\boldsymbol{r}_m) \end{pmatrix} = \begin{pmatrix} \boldsymbol{E}_s (\boldsymbol{r}_n) \\ \boldsymbol{H}_s (\boldsymbol{r}_n) \end{pmatrix} , \label{eq:self-const} \end{equation} where \begin{equation} \boldsymbol{R}_{nm} = \begin{pmatrix} \boldsymbol{1}\delta_{nm} - \xi^2 \boldsymbol{G} (\boldsymbol{r}_n, \boldsymbol{r}_m) \boldsymbol{\alpha}_E & - i \xi \boldsymbol{G}_{EH} (\boldsymbol{r}_n, \boldsymbol{r}_m) \boldsymbol{\alpha}_H \\ i \xi \boldsymbol{G}_{EH} (\boldsymbol{r}_n, \boldsymbol{r}_m) \boldsymbol{\alpha}_E & \boldsymbol{1}\delta_{nm} - \xi^2 \boldsymbol{G} (\boldsymbol{r}_n, \boldsymbol{r}_m) \boldsymbol{\alpha}_H \end{pmatrix} . \end{equation} The self--consistency condition (\ref{eq:self-const}) can be solved with standard matrix methods \cite{NumericalRecipes} for the fields applied to each scatterer, which includes the source field as well as contributions from all of the other scatterers. Once these fields are found, the solution to Maxwell's equations given by (\ref{eq:fields}) is fully specified. 
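As an illustrative aid (not part of the original paper), the following is a minimal Python/NumPy sketch of how the self--consistency condition (\ref{eq:self-const}) could be assembled and solved numerically for the fields acting on each scatterer. The explicit scalar form used for the dyadic Green's function and its curl, the use of a single wavenumber \texttt{k} in place of the dimensionless $\xi$, and all function and variable names are our own assumptions; prefactors may need adjusting to match a particular choice of units.
\begin{verbatim}
import numpy as np

def green_dyadic(r, rp, k):
    # Dyadic Green's function G(r, r') = [1 + grad grad / k^2] e^{ikR}/(4 pi R),
    # written in terms of R = |r - r'| and the unit vector n = (r - r')/R.
    Rv = r - rp
    R = np.linalg.norm(Rv)
    n = Rv / R
    g = np.exp(1j * k * R) / (4 * np.pi * R)
    A = 1 + 1j / (k * R) - 1 / (k * R)**2
    B = -1 - 3j / (k * R) + 3 / (k * R)**2
    return g * (A * np.eye(3) + B * np.outer(n, n))

def green_curl(r, rp, k):
    # Curl of the dyadic Green's function, G_EH(r, r') = curl G, which acts on a
    # vector as the cross product with grad[e^{ikR}/(4 pi R)].
    Rv = r - rp
    R = np.linalg.norm(Rv)
    n = Rv / R
    g = np.exp(1j * k * R) / (4 * np.pi * R)
    cross = np.array([[0.0, -n[2], n[1]],
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])
    return g * (1j * k - 1 / R) * cross

def solve_coupled_dipoles(pos, alpha_E, alpha_H, E_src, H_src, k):
    # Assemble the 6N x 6N self-consistency matrix and solve for the total
    # fields (E, H) acting on each of the N scatterers.
    N = len(pos)
    R = np.eye(6 * N, dtype=complex)                 # identity: the delta_nm blocks
    for n in range(N):
        for m in range(N):
            if n == m:
                continue
            G = green_dyadic(pos[n], pos[m], k)
            C = green_curl(pos[n], pos[m], k)
            R[6*n:6*n+3, 6*m:6*m+3]     = -k**2 * G @ alpha_E
            R[6*n:6*n+3, 6*m+3:6*m+6]   = -1j * k * C @ alpha_H
            R[6*n+3:6*n+6, 6*m:6*m+3]   =  1j * k * C @ alpha_E
            R[6*n+3:6*n+6, 6*m+3:6*m+6] = -k**2 * G @ alpha_H
    src = np.concatenate([np.r_[E_src[i], H_src[i]] for i in range(N)])
    return np.linalg.solve(R, src).reshape(N, 6)     # rows: (E(r_n), H(r_n))
\end{verbatim}
Once the fields at the scatterer positions are known, the total field at any observation point follows by evaluating the sum in (\ref{eq:fields}) directly.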
The particular physical system we consider in numerical examples is an arrangement of silicon spheres of radius 65 nm at a wavelength of 550 nm, giving $k r \approx 0.75$. For this simple choice of metasurface element, the electric and magnetic polarisability tensors can be constructed from the Mie coefficients \cite{Mie1908, Bohren1983} $a_1$ and $b_1$ as \begin{align} \boldsymbol{\alpha}_E &= \boldsymbol{1} i \frac{6 \pi}{k^3} a_1 & \boldsymbol{\alpha}_H &= \boldsymbol{1} i \frac{6 \pi}{k^3} b_1 \end{align} where $\boldsymbol{1} = {\rm diag} (1,1,1)$ is the unit tensor. Polarisability tensors for more complicated scatterers can be extracted from numerical modelling \cite{Arango2013, Liu2016}, making this method applicable to a very wide range of systems. The key benefit is that an expensive full--wave simulation is required for only a single scatterer, not the entire metasurface. \section{DESIGNING METASURFACES} \label{sec:designing} In the previous section, we derived expressions that give the effect of a metasurface, defined by a collection of scatterers at locations $\{ \boldsymbol{r}_n \}$ and with polarisabilities $\boldsymbol{\alpha}_E$ and $\boldsymbol{\alpha}_H$. The aim now is to find a way to choose the distribution of the scatterers $\{ \boldsymbol{r}_n \}$ to achieve a desired wave--scattering effect. We consider how moving one of the scatterers by a small amount changes the fields. Taylor expanding the Dirac--deltas in (\ref{eq:PM}) as \begin{align} \delta (\boldsymbol{r} - \boldsymbol{r}_n - \delta \boldsymbol{r}_n) = \delta (\boldsymbol{r} - \boldsymbol{r}_n) + (\delta \boldsymbol{r}_n \cdot \nabla) \delta (\boldsymbol{r} - \boldsymbol{r}_n) + \frac{1}{2} ( \delta \boldsymbol{r}_n \cdot \nabla )^2 \delta (\boldsymbol{r} - \boldsymbol{r}_n) + \ldots \end{align} and keeping only the first order terms gives the variation in the field due to a small change in the position of the scatterers \begin{equation} \begin{pmatrix} \delta \boldsymbol{E} (\boldsymbol{r}) \\ \delta \boldsymbol{H} (\boldsymbol{r}) \end{pmatrix} = - \begin{pmatrix} \xi^2 \boldsymbol{G}(\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_E & i \xi \boldsymbol{G}_{EH}(\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_H \\ - i \xi \boldsymbol{G}_{EH}(\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_E & \xi^2 \boldsymbol{G}(\boldsymbol{r}, \boldsymbol{r}_n) \boldsymbol{\alpha}_H \end{pmatrix} \begin{pmatrix} \nabla \boldsymbol{E} (\boldsymbol{r}_n) \\ \nabla \boldsymbol{H} (\boldsymbol{r}_n) \end{pmatrix} \delta \boldsymbol{r}_n , \label{eq:field_var} \end{equation} where the fields are given by (\ref{eq:fields}). This gives an expression for how moving the position of a single scatterer changes the fields at every point in space. These expressions can be used to find how changing the location of a scatterer affects a given figure of merit, which is a functional of the field configurations $\mathcal{F}[\boldsymbol{E}, \boldsymbol{H}]$. Moving one scatterer produces a small change in the fields at every point in space, which in turn changes the figure of merit by a small amount \begin{equation} \mathcal{F}[\boldsymbol{E}, \boldsymbol{H}] \rightarrow \mathcal{F}[\boldsymbol{E}, \boldsymbol{H}] + \delta \mathcal{F}[\boldsymbol{E}, \boldsymbol{H}, \delta \boldsymbol{E}, \delta \boldsymbol{H}] . 
\end{equation} The change in the figure of merit is linear in $\delta \boldsymbol{r}_n$, so once we have derived the analytic expression for $\delta \mathcal{F}$ it can be used to find an expression for a $\delta \boldsymbol{r}_n$ that leads to an increase in the figure of merit. In this way, we can begin from an initial distribution of scatterers and iteratively calculate the set of moves for each scatterer $\{\delta \boldsymbol{r}_n\}$ that increase the figure of merit. In the following examples, we demonstrate the versatility of this procedure by considering three different figures of merit. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{coupling_fig.pdf} \caption{Designing dielectric structures that manipulate the coupling between two nearby emitters. Beginning from an initial distribution of scatterers, shown in the centre panels, the update equation (\ref{eq:coupling_update}) is used to iteratively move the scatterers to a) increase and b) decrease the coupling between two nearby emitters. The change in coupling over the optimisation procedure is shown in the left--hand panels and the optimised structure is shown in the right--hand panels. The two emitters are shown as a magenta square and a green triangle. The polarisation of the emitters in a) is $\boldsymbol{p} = \hat{\boldsymbol{x}}$ for the green triangle and $\boldsymbol{p} = \boldsymbol{\hat{y}}$ for the magenta square, while in b) it is $\boldsymbol{p} = \boldsymbol{\hat{y}}$ for both emitters. In both cases, the $y$ component of the field is re--shaped by moving the scatterers to exhibit either a null or a peak at the location of the emitter shown by the magenta square.} \label{fig:coupling} \end{figure} Firstly, we consider the coupling between two emitters with different polarisations. For this problem, we have two sources located at $\boldsymbol{r}_{s,1}$ and $\boldsymbol{r}_{s,2}$ and with electric polarisations $\boldsymbol{p}_{1,2}$. This means that the source fields in Maxwell's equations (\ref{eq:maxwell}) can be written as \begin{align} \boldsymbol{E}_s (\boldsymbol{r}) &= \xi^2 \boldsymbol{G} (\boldsymbol{r}, \boldsymbol{r}_{s,1}) \cdot \boldsymbol{p}_1 + \xi^2 \boldsymbol{G} (\boldsymbol{r}, \boldsymbol{r}_{s,2}) \cdot \boldsymbol{p}_2, \\ \boldsymbol{H}_s (\boldsymbol{r}) &= - i \xi \boldsymbol{G}_{EH} (\boldsymbol{r}, \boldsymbol{r}_{s,1}) \cdot \boldsymbol{p}_1 - i \xi \boldsymbol{G}_{EH} (\boldsymbol{r}, \boldsymbol{r}_{s,2}) \cdot \boldsymbol{p}_2 , \end{align} assuming that the sources are small compared to the wavelength. The coupling between the two sources is then \begin{equation} \rho_{12} = {\rm Im} \left[ \boldsymbol{p}^*_{1} \cdot \boldsymbol{E}_2 (\boldsymbol{r}_1) \right] , \end{equation} where $\boldsymbol{E}_2 (\boldsymbol{r}_1)$ is the field generated by the second emitter, along with the scattering structure, at the first emitter. In this way, $\rho_{12}$ characterises the overlap of the fields generated by the emitters. To design a structure that manipulates the coupling between two emitters, we expand the figure of merit to first order under small changes in the fields at the second emitter \begin{align} \mathcal{F}_{\rm coupling} &= {\rm Im} \left[ \boldsymbol{p}^*_1 \cdot ( \boldsymbol{E}_2(\boldsymbol{r}_1) + \delta \boldsymbol{E}_2(\boldsymbol{r}_1)) \right] , \\ \delta \mathcal{F}_{\rm coupling} &= {\rm Im} \left[ \boldsymbol{p}^*_1 \cdot \delta \boldsymbol{E}_2(\boldsymbol{r}_1) \right] . 
\end{align} Inserting the expression for the variation of the fields (\ref{eq:field_var}), we find \begin{equation} \delta \mathcal{F}_{\rm coupling} = - \sum_n {\rm Im} \left[ \boldsymbol{p}^*_1 \cdot \left\{ \xi^2 \boldsymbol{G}(\boldsymbol{r}_1, \boldsymbol{r}_n) \boldsymbol{\alpha}_E \nabla \boldsymbol{E} (\boldsymbol{r}_n) + i \xi \boldsymbol{G}_{EH}(\boldsymbol{r}_1, \boldsymbol{r}_n) \boldsymbol{\alpha}_H \nabla \boldsymbol{H}(\boldsymbol{r}_n) \right\} \right] \delta \boldsymbol{r}_n . \end{equation} This gives a way of calculating a move of the $n^{\rm th}$ scatterer so that the figure of merit is guaranteed to either increase or decrease. Choosing \begin{equation} \delta \boldsymbol{r}_n \propto \mp {\rm Im} \left[ \boldsymbol{p}^*_1 \cdot \left\{ \xi^2 \boldsymbol{G}(\boldsymbol{r}_1, \boldsymbol{r}_n) \boldsymbol{\alpha}_E \nabla \boldsymbol{E} (\boldsymbol{r}_n) + i \xi \boldsymbol{G}_{EH}(\boldsymbol{r}_1, \boldsymbol{r}_n) \boldsymbol{\alpha}_H \nabla \boldsymbol{H}(\boldsymbol{r}_n) \right\} \right] \label{eq:coupling_update} \end{equation} leads to a positive $\delta \mathcal{F}_{\rm coupling}$ if the negative sign is taken, and a negative $\delta \mathcal{F}_{\rm coupling}$ if the positive sign is taken. Applied to every scatterer simultaneously, this provides an update rule that moves all of the scatterers at the same time in a way that changes the figure of merit in the desired direction. An example of applying this procedure to change the coupling between two emitters is shown in Figure \ref{fig:coupling}. In Figure \ref{fig:coupling}a, the coupling between an electric dipole with polarisation $\boldsymbol{p} = \boldsymbol{\hat{x}}$, shown as a green triangle, and an electric dipole with polarisation $\boldsymbol{p} = \boldsymbol{\hat{y}}$, shown as a magenta square, is enhanced. The scatterer positions are updated according to the upper sign of (\ref{eq:coupling_update}), leading to a redistribution of the scattered field. To increase the coupling, the $\boldsymbol{\hat{y}}$ component of the electric field at the location of the emitter with polarisation $\boldsymbol{\hat{y}}$ is greatly increased. Another case of interest might be to reduce the coupling between two similarly polarised nearby emitters. Taking the lower sign in the update equation (\ref{eq:coupling_update}) and decreasing the coupling between two emitters with the same polarisation, $\boldsymbol{\hat{y}}$, is demonstrated in Figure \ref{fig:coupling}b. The scatterers are now redistributed to place a null in the field at the location of the emitter shown by the magenta square. \begin{figure} \centering \includegraphics[width=\linewidth]{lensing.pdf} \caption{The design of a metamaterial that focuses the energy from a plane wave to a point, shown as the green star. The figure of merit for this optimisation is the modulus of the electric field at the target location (\ref{eq:fom_lens}); a) shows the increase of this quantity as the optimisation progresses and b) is the final design. A cut of the field along the blue line is given in c) showing the narrow focus.} \label{fig:lensing} \end{figure} Secondly, we consider the problem of focusing a plane--wave to a point. For this problem, the source fields in the solutions to Maxwell's equations (\ref{eq:fields}) are plane waves, with a particular polarisation and wave--vector. For the example in Figure \ref{fig:lensing}, we choose a TE polarised wave with wave--vector $\boldsymbol{k} = k (1,0,0)$. The figure of merit is the magnitude of the electric field at the target location $\boldsymbol{r}_\star$. 
\begin{equation} \mathcal{F}_{\rm lens} = |\boldsymbol{E}(\boldsymbol{r}_\star)| . \label{eq:fom_lens} \end{equation} This can be expanded to first order under small changes in the fields as \begin{align} |\boldsymbol{E}(\boldsymbol{r}_\star)| &= \sqrt{\boldsymbol{E}(\boldsymbol{r}_\star) \cdot \boldsymbol{E}^*(\boldsymbol{r}_\star)} , \\ &= \sqrt{(\boldsymbol{E}(\boldsymbol{r}_\star) + \delta \boldsymbol{E}(\boldsymbol{r}_\star))\cdot (\boldsymbol{E}^*(\boldsymbol{r}_\star) + \delta \boldsymbol{E}^*(\boldsymbol{r}_\star) )} , \\ &= \sqrt{|\boldsymbol{E}(\boldsymbol{r}_\star)|^2 + 2 {\rm Re} \left[ \delta \boldsymbol{E} (\boldsymbol{r}_\star) \cdot \boldsymbol{E}^* (\boldsymbol{r}_\star) \right]} , \\ &= |\boldsymbol{E}(\boldsymbol{r}_\star)| \sqrt{1 + \frac{2 {\rm Re} \left[ \delta \boldsymbol{E} (\boldsymbol{r}_\star) \cdot \boldsymbol{E}^* (\boldsymbol{r}_\star)\right]}{|\boldsymbol{E}(\boldsymbol{r}_\star)|^2}} , \\ &\approx |\boldsymbol{E}(\boldsymbol{r}_\star)| + \frac{{\rm Re} \left[ \delta \boldsymbol{E}(\boldsymbol{r}_\star) \cdot \boldsymbol{E}^*(\boldsymbol{r}_\star)\right]}{|\boldsymbol{E}(\boldsymbol{r}_\star)|} \label{eq:mod_expansion}. \end{align} Substituting the expression for the variation of the fields gives the following change in the figure of merit $\mathcal{F}_{\rm lens}$ \begin{equation} \delta \mathcal{F}_{\rm lens} = \frac{-1}{|\boldsymbol{E} (\boldsymbol{r}_\star)|} \sum_n {\rm Re} \left[ \left\{ \xi^2 \boldsymbol{G}(\boldsymbol{r}_\star, \boldsymbol{r}_n) \boldsymbol{\alpha}_E \nabla \boldsymbol{E} (\boldsymbol{r}_n) + i \xi \boldsymbol{G}_{EH}(\boldsymbol{r}_\star, \boldsymbol{r}_n) \boldsymbol{\alpha}_H \nabla \boldsymbol{H}(\boldsymbol{r}_n) \right\} \cdot \boldsymbol{E}^* (\boldsymbol{r}_\star) \right] \delta \boldsymbol{r}_n . \end{equation} As this is linear in $\delta \boldsymbol{r}_n$, it gives a way of choosing $\delta \boldsymbol{r}_n$ so that the figure of merit increases. The result of applying this procedure is shown in Figure \ref{fig:lensing}. A structure is designed that focuses the field to the desired location. Fitting a Gaussian of the form \begin{equation} \mathcal{G}(y) = A \exp \left( - \frac{(y-\mu)^2}{2 \sigma^2}\right) + B \end{equation} to the peak, we find that the width is $\sim \lambda /3$. \begin{figure} \centering \includegraphics[width=\linewidth]{ff_assembled.pdf} \caption{The design of a metamaterial with a chosen radiation pattern. In each case, the structure is driven by an emitter polarised along the $z$ axis at the origin. For each of the target radiation patterns (black dashed lines), the scatterers begin at the locations indicated by black circles and are iteratively moved to reduce the difference between the radiation pattern and the desired pattern (\ref{eq:fom_rss}). The optimised locations of the scatterers are shown as red dots and the final radiation patterns as red lines. } \label{fig:farfield} \end{figure} The final problem we consider is shaping the far--field Poynting vector of an emitter. Our aim is to design a scattering structure that produces a particular far--field distribution $|\boldsymbol{S}(\theta)|$, defined by a target angular distribution $\phi_T (\theta)$. 
One way this can be achieved is by minimising the residual sum of squares between the current angular distribution of the Poynting vector and the target distribution \begin{equation} \mathcal{F}_{\rm RSS} = \sum_i \left[ |\boldsymbol{S}(\theta_i)| - \phi_T(\theta_i) \right]^2 . \label{eq:fom_rss} \end{equation} In order to use this figure of merit, both $|\boldsymbol{S}(\theta_i)|$ and $\phi_T(\theta_i)$ must be normalised to range from 0 to 1. It should be noted that the choice of figure of merit is not unique: one could seek to maximise the overlap integral between the current angular distribution of the Poynting vector and the target distribution \cite{Capers2021}. To derive an expression that can be used to calculate how the scatterers should be moved to minimise this figure of merit, we first expand under small changes $\delta |\boldsymbol{S}(\theta_i)|$ \begin{align} \sum_i \left[ |\boldsymbol{S}(\theta_i)| - \phi_T(\theta_i) \right]^2 &= \sum_i \left( |\boldsymbol{S}(\theta_i)| + \delta |\boldsymbol{S}(\theta_i)| - \phi_T(\theta_i) \right) \left( |\boldsymbol{S}(\theta_i)| + \delta |\boldsymbol{S}(\theta_i)| - \phi_T(\theta_i) \right) , \\ &= \sum_i |\boldsymbol{S}(\theta_i)|^2 - 2 |\boldsymbol{S}(\theta_i)| \phi_T (\theta_i) + \phi_T^2 (\theta_i) + 2 \delta |\boldsymbol{S}(\theta_i)| (|\boldsymbol{S}(\theta_i)| - \phi_T (\theta_i)) , \end{align} and retaining only first order terms, we find that \begin{equation} \delta \mathcal{F}_{\rm RSS} = \sum_i \left[ 2 \delta |\boldsymbol{S}(\theta_i)| (|\boldsymbol{S}(\theta_i)| - \phi_T (\theta_i)) \right] . \label{eq:deltaFrss} \end{equation} It is then necessary to find $\delta |\boldsymbol{S}|$, the variation in the Poynting vector, in terms of the variations in the fields (\ref{eq:field_var}). Using the expression we obtained from expanding the modulus of the electric field (\ref{eq:mod_expansion}), we know that \begin{equation} \delta |\boldsymbol{S}| = \frac{{\rm Re} \left [\delta \boldsymbol{S} \cdot \boldsymbol{S}^* \right]}{|\boldsymbol{S}|} . \end{equation} Then, $\delta \boldsymbol{S}$ can be derived from the expression for the Poynting vector \begin{align} \boldsymbol{S} &= \frac{1}{2} \boldsymbol{E} \times \boldsymbol{H}^* , \\ \boldsymbol{S} + \delta \boldsymbol{S} &= \frac{1}{2} (\boldsymbol{E} + \delta \boldsymbol{E}) \times (\boldsymbol{H}^* + \delta \boldsymbol{H}^*), \\ \delta \boldsymbol{S} &= \frac{1}{2} \left[ \boldsymbol{E} \times \delta \boldsymbol{H}^* + \delta \boldsymbol{E} \times \boldsymbol{H}^* \right] . \end{align} Substituting this into (\ref{eq:deltaFrss}) gives the change of the figure of merit in terms of the changes in the fields, which are linear in $\delta \boldsymbol{r}_n$. As with the previous examples, the expressions for the field variations (\ref{eq:field_var}) can be substituted in to yield an expression for moving the scatterers to decrease this figure of merit. Figure \ref{fig:farfield} shows several examples of using this process to design scattering structures with arbitrary far--field radiation patterns. \section{SUMMARY \& CONCLUSIONS} We have derived a method of designing metamaterials built from several discrete scatterers that exhibit electric and magnetic dipole resonances. While gradient based, our method leverages the advantages of the adjoint method and, by utilising the discrete dipole approximation to avoid full--wave simulations, ensures numerical efficiency. We have applied our design methodology to three different problems relevant to nanophotonics. 
The coupling between two nearby emitters can be manipulated with an appropriate photonic structure to increase coupling by a factor of $\sim 250$ or massively reduce the coupling, removing cross--talk. A plane wave can be focused to a chosen point, with a focus width of $\sim \lambda / 3$. Finally, we demonstrate the design of a dielectric antenna with any desired radiation pattern. This framework might be extended beyond the dipole approximation to include higher--order multipoles, to achieve more diverse control of light. Developing the method to design structures that perform different functions for different exciting fields would be very useful for optical computing applications. The general idea of analytically expanding figures of merit under small perturbations in design parameters to find efficient ways to calculate gradients could be applied to many other optics problems, from fibre optics to imaging through disorder. \acknowledgments We acknowledge financial support from the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom, via the EPSRC Centre for Doctoral Training in Metamaterials (Grant No. EP/L015331/1). J.R.C. also wishes to acknowledge financial support from the Defence Science and Technology Laboratory (DSTL). S.A.R.H. acknowledges financial support from the Royal Society (RPG-2016-186). \copyright Copyright 2022 Society of Photo-Optical Instrumentation Engineers (SPIE). According to SPIE Article-Sharing Policies ``Authors may post draft manuscripts on preprint servers such as arXiv. If the full citation and Digital Object Identifier (DOI) are known, authors are encouraged to add this information to the preprint record.'' \url{https://www.spiedigitallibrary.org/article-sharing-policies}. This document represents a draft from J. R. Capers, S. J. Boyes, A. P. Hibbins and S. A. R. Horsley ``Designing Metasurfaces to Manipulate Antenna Radiation'', Proc. SPIE 12130, Metamaterials XIII, 121300H (24 May 2022); \url{https://doi.org/10.1117/12.2621160} Please check out the SPIE paper for a complete list of figures, tables, references and general content.
\section{Appendix A: Details on the Templates} \begin{table*}[!th] \centering \caption{This table presents details on the templates utilized in our paper. Here, we analyze 37 relations in ConceptNet \cite{speer2017conceptnet}. } \label{tab:template} \begin{tabular}{c|m{8.5cm}|c} \hline \textbf{Relation} & \multicolumn{1}{c|}{\textbf{Template}} & \# of samples \\ \hline\hline RelatedTo & [[SUBJ]] is related to [[OBJ]] . & 287,459 \\ HasContext & [[SUBJ]] is used in the context of [[OBJ]] . & 113,066 \\ IsA & [[SUBJ]] is a [[OBJ]] . & 74,316 \\ DerivedFrom & [[OBJ]] is derived from [[SUBJ]] . & 69,510\\ Synonym & [[SUBJ]] and [[OBJ]] are same . & 28,379 \\ FormOf & [[OBJ]] is the root word of [[SUBJ]] . & 27,208\\ EtymologicallyRelatedTo & [[SUBJ]] is etymologically related to [[OBJ]] . & 10,187\\ SimilarTo & [[SUBJ]] is similar to [[OBJ]] . & 8,384 \\ AtLocation & Something you find at [[OBJ]] is [[SUBJ]] . & 7,644 \\ MannerOf & [[SUBJ]] is a way to [[OBJ]] . & 6,230 \\ PartOf & [[SUBJ]] is part of [[OBJ]] . & 5,320 \\ Antonym & [[SUBJ]] and [[OBJ]] are opposite . & 3,932 \\ HasProperty & [[SUBJ]] can be [[OBJ]] . & 2,886 \\ UsedFor & [[SUBJ]] may be used for [[OBJ]] . & 2,145 \\ DistinctFrom & [[SUBJ]] is not [[OBJ]] . & 1,256 \\ HasPrerequisite & [[SUBJ]] requires [[OBJ]] . & 1,142 \\ HasSubevent & When [[SUBJ]] , [[OBJ]] . & 1,119 \\ Causes & [[SUBJ]] causes [[OBJ]] . & 999 \\ HasA & [[SUBJ]] contains [[OBJ]] . & 943 \\ InstanceOf & [[SUBJ]] is an instance of [[OBJ]] . & 902 \\ CapableOf & [[SUBJ]] can [[OBJ]] . & 697 \\ ReceivesAction & [[SUBJ]] can be [[OBJ]] . & 658 \\ MotivatedByGoal & You would [[SUBJ]] because [[OBJ]] . & 603 \\ CausesDesire & [[SUBJ]] would make you want to [[OBJ]] . & 556 \\ MadeOf & [[SUBJ]] can be made of [[OBJ]] . & 316 \\ HasLastSubevent & The last thing you do when you [[SUBJ]] is [[OBJ]] . & 302 \\ Entails & [[SUBJ]] entails [[OBJ]] . & 298 \\ HasFirstSubevent & The first thing you do when you [[SUBJ]] is [[OBJ]] . & 280 \\ Desires & [[SUBJ]] wants [[OBJ]] . & 200 \\ NotHasProperty & [[SUBJ]] is not [[OBJ]] . & 161 \\ CreatedBy & [[SUBJ]] is creatd by [[OBJ]] . & 118 \\ DefinedAs & [[SUBJ]] can be defined as [[OBJ]] . & 80 \\ NotDesires & [[SUBJ]] does not want [[OBJ]] . & 71 \\ NotCapableOf & [[SUBJ]] can not [[OBJ]] . & 43 \\ LocatedNear & [[SUBJ]] is typically near [[OBJ]] . & 36 \\ EtymologicallyDerivedFrom & [[SUBJ]] is etymologically derived from [[OBJ]] . & 27 \\ SymbolOf & [[SUBJ]] is an symbol of [[OBJ]] . & 4 \\ \hline \end{tabular} \end{table*} \newpage \section{Appendix B: Qualitative Analysis for Probabilistic Distributions} \begin{table*}[!ht] \centering \caption{Results of the $hits@K$ metric for each relation in ConceptNet. 
} \label{tab:results_on_each_relation} \begin{tabular}{c|cccc|cccc} \hline \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{Relations}}} & \multicolumn{8}{c}{$hits@K$} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{BERT$_{base}$} & \multicolumn{4}{c}{BERT$_{large}$} \\ \cline{2-9} \multicolumn{1}{c|}{} & 1 & 5 & 10 & \multicolumn{1}{c|}{100} & 1 & 5 & 10 & 100 \\ \hline\hline RelatedTo & 7.60 & 9.30 & 11.77 & 25.38 & 6.51 & 8.50 & 10.97 & 24.14 \\ HasContext & 6.79 & 16.17 & 22.38 & 48.90 & 6.91 & 15.84 & 22.13 & 47.57 \\ IsA & 0.46 & 1.56 & 2.27 & 15.57 & 0.41 & 1.19 & 1.89 & 11.67 \\ DerivedFrom & 0.14 & 5.77 & 10.70 & 31.47 & 0.11 & 3.41 & 6.90 & 23.42 \\ Synonym & 16.16 & 27.33 & 33.12 & 52.70 & 13.38 & 26.74 & 34.69 & 56.39 \\ FormOf & 0.57 & 20.10 & 28.08 & 42.41 & 2.84 & 32.39 & 38.68 & 48.76 \\ EtymologicallyRelatedTo & 5.39 & 8.35 & 10.71 & 22.45 & 3.69 & 6.59 & 9.22 & 21.70 \\ SimilarTo & 1.60 & 4.39 & 6.09 & 14.92 & 2.84 & 7.13 & 10.13 & 23.61 \\ AtLocation & 2.03 & 3.72 & 5.41 & 23.36 & 3.04 & 5.89 & 8.93 & 32.28 \\ MannerOf & 2.66 & 5.05 & 8.77 & 35.71 & 2.17 & 5.85 & 9.61 & 36.25 \\ PartOf & 21.05 & 34.37 & 40.91 & 59.43 & 24.38 & 37.18 & 43.30 & 58.97 \\ Antonym & 17.14 & 25.70 & 32.38 & 53.69 & 28.26 & 34.55 & 40.65 & 63.26 \\ HasProperty & 3.22 & 8.39 & 12.14 & 38.04 & 5.23 & 12.93 & 17.75 & 46.14 \\ UsedFor & 12.87 & 16.50 & 21.44 & 47.16 & 12.26 & 14.78 & 19.25 & 45.72 \\ DistinctFrom & 1.67 & 4.36 & 6.75 & 23.70 & 5.10 & 11.09 & 15.22 & 37.81 \\ HasPrerequisite & 11.30 & 10.56 & 14.73 & 37.29 & 13.75 & 13.35 & 17.93 & 40.54 \\ HasSubevent & 1.79 & 2.55 & 4.03 & 16.20 & 2.32 & 3.39 & 5.11 & 18.40 \\ Causes & 9.71 & 12.73 & 17.05 & 40.79 & 10.81 & 13.90 & 18.65 & 45.81 \\ HasA & 4.24 & 10.55 & 15.17 & 40.35 & 4.67 & 9.75 & 14.19 & 37.22 \\ InstanceOf & 0.00 & 5.93 & 10.29 & 22.43 & 0.11 & 4.92 & 11.12 & 31.92 \\ CapableOf & 10.04 & 17.20 & 24.27 & 53.13 & 12.34 & 22.90 & 28.19 & 52.54 \\ ReceivesAction & 12.01 & 28.12 & 36.51 & 71.44 & 14.89 & 30.52 & 38.85 & 72.45 \\ MotivatedByGoal & 0.00 & 1.07 & 2.37 & 17.90 & 0.00 & 0.17 & 0.76 & 17.74 \\ CausesDesire & 4.32 & 11.52 & 17.59 & 57.25 & 2.34 & 7.54 & 13.95 & 52.13 \\ MadeOf & 12.34 & 44.12 & 51.85 & 72.94 & 18.67 & 42.22 & 50.63 & 75.05 \\ HasLastSubevent & 8.61 & 16.30 & 22.85 & 58.73 & 10.60 & 18.04 & 25.09 & 62.30 \\ Entails & 2.01 & 4.53 & 7.38 & 22.20 & 2.35 & 4.53 & 6.88 & 24.27 \\ HasFirstSubevent & 12.86 & 23.96 & 29.38 & 63.99 & 17.50 & 29.79 & 37.56 & 71.55 \\ Desires & 4.00 & 7.52 & 7.57 & 50.90 & 7.50 & 9.47 & 11.12 & 50.17 \\ NotHasProperty & 4.35 & 14.29 & 18.32 & 42.24 & 6.83 & 23.29 & 27.64 & 60.87 \\ CreatedBy & 2.54 & 9.75 & 15.25 & 35.88 & 0.85 & 5.08 & 10.17 & 29.52 \\ DefinedAs & 0.00 & 2.50 & 3.75 & 17.92 & 2.50 & 4.17 & 10.42 & 33.75 \\ NotDesires & 1.41 & 0.28 & 2.25 & 8.74 & 1.41 & 1.69 & 3.66 & 12.94 \\ NotCapableOf & 16.28 & 32.56 & 41.86 & 73.84 & 18.60 & 27.91 & 40.12 & 76.74 \\ LocatedNear & 2.78 & 8.33 & 13.89 & 36.11 & 5.56 & 8.33 & 8.33 & 25.00 \\ EtymologicallyDerivedFrom & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 3.70 \\ \multicolumn{1}{c|}{SymbolOf} & \multicolumn{1}{c}{0.00} & \multicolumn{1}{c}{50.00} & \multicolumn{1}{c}{50.00} & \multicolumn{1}{c|}{50.00} & \multicolumn{1}{c}{25.00} & \multicolumn{1}{c}{50.00} & \multicolumn{1}{c}{50.00} & \multicolumn{1}{c}{50.00}\\\hline \end{tabular} \end{table*} \newpage \section{Appendix C: Details on the Reading Comprehension Question Types} \begin{table*}[!ht] \centering \caption{Examples and descriptions for the question 
type of the \textit{has answer} questions. The main evidences for the categorization of the questions are colored. } \label{tab:results_on_each_relation} \begin{tabular}{c|m{0.30\columnwidth}|m{0.40\columnwidth}}\hline Question Types & \multicolumn{1}{c|}{Description} & \multicolumn{1}{c}{Example} \\\hline\hline Synonymy & There is a clear correspondence between question and context.& \begin{tabular}{@{}p{7cm}@{}}\textbf{Question}: Which entity is the \textbf{\textcolor{red}{secondary}} legislative body?\\\textbf{Context}: ... The \textbf{\textcolor{blue}{second main}} legislative body is the Council, which is composed of different ministers of the member states. ...\end{tabular} \\\hline \begin{tabular}[c]{@{}c@{}}Common sense\\ knowledge\end{tabular} & Common sense knowledge is required to solve the question. & \begin{tabular}{@{}p{7cm}@{}}\textbf{Question}: Where is the \textcolor{red}{\textbf{Asian}} influence strongest in Victoria?\\\textbf{Context}: ... Many \textcolor{blue}{\textbf{Chinese}} miners worked in Victoria, and their legacy is particularly strong in Bendigo and its environs. ...\end{tabular}\\\hline No semantic variation & There is no semantic variation such as synonymy or common sense knowledge. & \begin{tabular}{@{}p{7cm}@{}}\textbf{Question}: Who are the \textbf{\textcolor{red}{un-elected subordinates of member state governments}}?\\\textbf{Context}: ... This means Commissioners are, through the appointment process, the \textbf{\textcolor{blue}{unelected subordinates of member state governments}}. ...\end{tabular} \\\hline Multi-sentence reasoning & Hints for solving questions are shattered in multiple sentences. & \begin{tabular}{@{}p{7cm}@{}}\textbf{Question}: Why did \textcolor{red}{\textbf{France}} choose to give up continental lands?\\\textbf{Context}: ... \textcolor{blue}{\textbf{France}} chose to cede the former, ... \textcolor{blue}{\textbf{They}} viewed the economic value of the Caribbean islands' sugar cane ...\end{tabular} \\\hline Others & The labeled answer is incorrect. & \begin{tabular}{@{}p{7cm}@{}}\textbf{Question}: Who \textcolor{red}{\textbf{won the battle}} of Lake George?\\\textbf{Context}: ... The \textcolor{blue}{\textbf{battle ended inconclusively}}, with both sides withdrawing from the field. ...\end{tabular} \\\hline Typo & There exist typing errors in the question or context. & \begin{tabular}{@{}p{7cm}@{}}\textbf{Question}: What kind of measurements define \textbf{\textcolor{red}{accelerlations}}?\\\textbf{Context}... \textbf{\textcolor{blue}{Accelerations}} can be defined through kinematic measurements. ...\end{tabular}\\\hline \end{tabular} \end{table*} \bibliographystyle{aaai} \section{Introduction} One of the long-standing problems in natural language processing (NLP) is to teach machines to effectively understand language and infer knowledge \cite{winograd1972understanding}. In NLP, reading comprehension (RC) is a task to predict the correct answer in the associated context for a given question. RC is widely regarded as an evaluation benchmark for a machine's ability of natural language understanding and reasoning \cite{richardson2013mctest}. Neural language models (NLMs) that consist of neural networks to predict a word sequence distribution have widely been utilized in natural language understanding tasks \cite{radford2018improving}. 
In particular, masked neural language models (MNLMs) including \textit{Bidirectional Encoder Representations from Transformers} (BERT), which are trained to restore the randomly masked sequence of words, have recently led to a breakthrough in various RC tasks \cite{devlin2019bert}. However, the \textit{black box} nature of the neural networks prohibits analyzing which type of knowledge leads to performance enhancement and which type of knowledge remains untrained. Recently, there have been active efforts to understand which information is trained in the pretrained NLMs \cite{shi2016does,adi2017fine,perone2018evaluation, conneau2018you,Sahin2019LINSPECTORMP,hewitt2019structural, liu2019linguistic, hahn2019tabula, tenney2019you,tenney2019bert,manning2020emergent, kim2020pre}. Existing studies mainly focus on exploring whether a trained model embodies linguistic features for semantic analysis such as tense analysis \cite{shi2016does, conneau2018you} and named entity recognition (NER) \cite{Sahin2019LINSPECTORMP,liu2019linguistic, tenney2019bert}, and for syntactic analysis such as part-of-speech tagging \cite{shi2016does,Sahin2019LINSPECTORMP, liu2019linguistic, tenney2019bert}, chunking \cite{hahn2019tabula} and parsing \cite{Sahin2019LINSPECTORMP,liu2019linguistic, tenney2019you, tenney2019bert, hewitt2019structural, manning2020emergent, kim2020pre}. One common approach for linguistic probing is to verify the existence of simple linguistic features by training simple classifiers upon the MNLMs for each task \cite{conneau2018you}. Commonsense knowledge, defined as `information that people are supposed to know in common' \cite{nilsson1998artificial}, is known to be another essential factor for natural language understanding and reasoning in the RC task \cite{mihaylov2018knowledgeable}. A recent study shows how to extract commonsense knowledge from pretrained MNLMs without additional training procedures \cite{davison2019commonsense}. However, to the best of our knowledge, a detailed analysis of which types of knowledge are trained and which remain untrained in MNLMs has not yet been thoroughly conducted. The focus of our paper is to verify to what extent MNLM-based RC models answer and process complicated RC tasks by understanding semantic relations among words. To address this problem, we raise the following questions regarding the semantic understanding of MNLMs: \vspace{-0.3em} \begin{enumerate}[itemsep=-0.1em] \item Do MNLMs understand various types of commonsense knowledge, especially relations of entities? (Section~\ref{sec:knowledge_probing_test}) \item Do MNLMs distinguish some semantically related relations well? (Section~\ref{sec:synonym and antonym}) \item \revised{What are the challenging RC-task questions for the MNLM-based RC models? (Section~\ref{sec:difficulty})} \end{enumerate} \vspace{-0.3em} \revised{To answer Questions 1 and 2, we introduce a \textit{knowledge probing test} designed to analyze whether an MNLM understands structured commonsense knowledge such as semantic triples in an external repository, specifically ConceptNet \cite{speer2017conceptnet}. Experimental results on the knowledge probing test reveal that MNLMs understand some types of semantic knowledge. However, unexpectedly, we also observe that MNLMs have a lot of missing or untrained knowledge, and thus cannot precisely distinguish simple semantic concepts such as opposite relations. 
In addition, we notice that when finetuning MNLMs on a commonsense knowledge base, not only does the probing performance improve, but the models also become better at distinguishing opposite relations.} For Question 3, we first explore a possible factor for determining the difficulty level of RC questions. Herein, we postulate that the lexical variation between a question and a context is the factor. Indeed, we observe that the lexical variation correlates with the difficulty level of RC questions. On top of that, we show that a difficult question may require additional inference procedures to solve it. Inspired by these observations, we categorize RC questions into six question types based on the information required to solve them. Then, we clarify that questions that require commonsense knowledge are still challenging for the existing MNLM-based RC models. Finally, by analyzing the result of the knowledge probing test and the observed frequency of subject and object entity pairs, we find that MNLMs' way of learning knowledge is substantially affected by the conditional probability of the entity pairs. Based on this finding, we explain why an external knowledge repository is needed to overcome the limitations of MNLMs. In addition, we conduct controlled experiments to show that an external knowledge repository can help overcome the limitations of MNLM-based RC models. In the experiments, we enrich the incorrectly predicted questions with required commonsense knowledge from the external knowledge repository. The results show that the incorrectly predicted questions are properly answered by MNLM-based RC models without any change to the models. The main contributions of this paper are as follows: \vspace{-0.3em} \begin{itemize}[itemsep=-0.1em] \item From the experimental results of the knowledge probing test on the commonsense knowledge of ConceptNet, we decisively observe that MNLMs have a lot of missing or untrained knowledge. \item By analyzing the results of the MNLM-based RC models, we find that the existing MNLMs have critical limitations when solving questions requiring commonsense knowledge. \item To the best of our knowledge, this is the first approach to empirically explain the fundamental reasons why a large portion of commonsense knowledge is not learned by the existing MNLMs and to discuss why external commonsense knowledge repositories are still required. Moreover, we show that MNLMs can be complemented by integrating external commonsense knowledge in the actual RC task. \end{itemize} \vspace{-0.3em} The paper is organized as follows. Section~\ref{sec:background} briefly describes the notions required to readily understand our paper. Section~\ref{sec:knowledge_probing} introduces our knowledge probing test and demonstrates the results of the test. Then, we present the performance of the MNLM models on RC problems of different difficulty levels in Section~\ref{sec:difficulty}. Section~\ref{sec:discussion} discusses the reasons why external commonsense repositories are needed and suggests a possible direction to overcome the limitations of the existing MNLM-based RC models. Finally, the conclusion is stated in Section~\ref{sec:conclusion}. \section{Background} \label{sec:background} \subsection{Masked Neural Language Models} We consider an MNLM that calculates the probability distribution over the sequence of words with a neural network. 
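As a concrete illustration of this masked-word prediction (a minimal sketch of our own, not part of the original experimental code), the snippet below queries a pretrained BERT checkpoint in Cloze style through the HuggingFace \texttt{transformers} library and prints the top-scoring fillers for a masked position. The specific checkpoint name and the example sentence, built from the `Antonym' template in Appendix~\ref{apx:details_on_the_templates}, are our own choices.
\begin{verbatim}
from transformers import pipeline

# Query a pretrained MNLM in Cloze style: the model returns a distribution over
# its vocabulary for the masked position, from which top-K fillers can be read.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Masked sentence built from the 'Antonym' template "[[SUBJ]] and [[OBJ]] are opposite ."
for prediction in fill_mask("children and [MASK] are opposite .", top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
\end{verbatim}
In the knowledge probing test of Section~\ref{sec:knowledge_probing}, the top-$K$ words of such a distribution are compared against the answers provided by ConceptNet.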
We mainly discuss three types of MNLMs, namely BERT and its two variations: 1) BERT, 2) \textit{A Robustly Optimized BERT Pretraining Approach} (RoBERTa) \cite{liu2019RoBERTa} and 3) \textit{A Light BERT} (ALBERT) \cite{lan2019albert}. Detailed structural information on the experimental models used in this paper is described in Table~\ref{tab:model_structures}. BERT is built on the transformer architecture \cite{vaswani2017attention}. The model has $L$ transformer layers. Each layer comprises $S$ self-attention heads and $H$ hidden dimensions. In addition, the input of the model is a concatenation of two sentences $A_1,...,A_N$ and $B_1,...,B_M$, where each token is split into WordPiece tokens \cite{schuster2012japanese} with a vocabulary of 30,000 tokens. Special delimiter tokens `[CLS]' and `[SEP]', which indicate the `classification token' and `sentence separator token' respectively, are adopted to integrate the two sentences into the following input: \begin{gather*} [CLS],A_1,...,A_N,[SEP],B_1,...,B_M,[SEP] \end{gather*} By adding the delimiter tokens, the final number of tokens in the input sequence becomes $N+M+3$. Two objectives are used to pretrain the BERT model: 1) the masked language model (MLM) loss and 2) the next sentence prediction (NSP) loss. Different from traditional language models that optimize the likelihood of next-word prediction, BERT is optimized with the MLM loss. With the MLM loss, tokens in the text are randomly masked with a special token `[MASK]' at a designated proportion, and BERT is optimized with the cross-entropy loss to predict the correct tokens for the masked input. On the other hand, the NSP loss is a binary classification loss that determines whether sentences $A$ and $B$ are naturally observed in the data sequence. In a positive example, $A$ and $B$ are consecutive sentences. In contrast, $B$ is randomly selected from another document in a negative example. We adopt two BERT models (BERT$_{base}$ and BERT$_{large}$) to investigate the results. The pretraining data of these models integrate two different corpora (English Wikipedia and the Book Corpus \cite{zhu2015aligning}), which amount to approximately 16GB. RoBERTa has the same transformer structure as BERT. However, there are several changes to refine the original BERT. First, different from BERT, which fixes the masked tokens during the entire training procedure, RoBERTa changes the masked tokens over the course of training. Next, the NSP loss of BERT is no longer used in RoBERTa. Instead, RoBERTa is trained on a single document sequence consisting of up to 512 tokens. In addition, during the training, RoBERTa uses a much larger batch size compared with BERT to reduce training time. Furthermore, RoBERTa's vocabulary uses byte pair encoding (BPE) \cite{sennrich2016neural} instead of the WordPiece tokens used in BERT. Finally, RoBERTa is pretrained with approximately 160GB of data including the BERT pretraining corpora as well as three additional corpora (the CommonCrawl News dataset \cite{ccnews}, an open-source recreation of the WebText corpus \cite{Openwebtext} and the STORIES corpus \cite{trinh2018simple}). We utilize two RoBERTa models (RoBERTa$_{base}$ and RoBERTa$_{large}$). ALBERT is also based on the transformer architecture. Nevertheless, some changes are adopted to amend the original BERT. First, in order to reduce the number of parameters, ALBERT decreases the token embedding size and shares the parameters of attention and feed-forward networks across all transformer layers. 
Moreover, instead of the NSP loss, ALBERT is trained with a sentence order prediction (SOP) loss that predicts whether two sentences $A$ and $B$ from the same document appear in their natural order. To compare the effects of data size, we utilize three ALBERT version 1 models (ALBERT1$_{base}$, ALBERT1$_{large}$, and ALBERT1$_{xlarge}$) pretrained with the BERT pretraining data and three ALBERT version 2 models (ALBERT2$_{base}$, ALBERT2$_{large}$, and ALBERT2$_{xlarge}$) pretrained with the RoBERTa pretraining data. \subsection{Generative Pre-Training Models} \revised{We also discuss Generative Pre-Training (GPT) models, which are the state-of-the-art generative NLMs. Herein, we utilize the largest model of each GPT version: GPT1 \cite{radford2018improving}, GPT2 \cite{radford2019language} and GPT3 \cite{brown2020language}. } \revised{GPT models are built on the transformer architecture. Unlike the aforementioned BERT families, GPT models are trained to predict the next token for a given sequence of words, as traditional language models do. For example, if we have a sentence $A_{1...N}$ and the token sequence $A_{1...N-1}$ is given, then the GPT models are trained to maximize the probability $P(A_N|A_{1...N-1})$ that the token $A_N$ will be observed next.} \revised{The most significant differences among the GPT versions are the number of parameters and the pretraining data, as summarized in Table~\ref{tab:model_structures}. The number of parameters increases from GPT1 to GPT3 (GPT1: 119M, GPT2: 1.5B and GPT3: 175B), as does the size of the pretraining data (GPT1: 4GB, GPT2: 40GB and GPT3: 570GB). Among the GPT models, GPT3 has been reported to have impressive zero-shot and few-shot inference performances in previous studies \cite{brown2020language, da2021understanding, sainz2021ask2transformers}. In this paper, we use \textit{Davinci}, which is the largest model among the GPT3 models provided by the OpenAI API \cite{openai}. Note that the GPT3 API is provided for a fee and the pretrained parameters are not openly available.} \subsection{Commonsense Knowledge Repositories} It is important to determine an external resource from which we can extract commonsense knowledge. ConceptNet, a part of the \textit{Open Mind Common Sense} (OMCS) project \cite{singh2002open}, is a knowledge base designed to help computers understand commonsense knowledge shared by people. It has been widely exploited as a commonsense knowledge repository in previous studies \cite{wang2018yuanfudao,guan2019story,talmor2019commonsenseqa,petroni2019language,kassnerS20negated, jiang2020can,shin2020eliciting,bouraoui2020inducing}. Commonsense knowledge in ConceptNet is represented as semantic triples, each consisting of a subject entity, an object entity, and a relation between them. ConceptNet includes commonsense knowledge that originates from several sources: crowdsourcing, expert creation, and games with a purpose. We utilize ConceptNet version 5.6.0 \cite{conceptnet5.6.0} for our experiments. In this paper, we conduct the knowledge probing test on 32 relations. Detailed information on the relations that we use can be found in Appendix~\ref{apx:details_on_the_templates}. \section{Probing Commonsense Knowledge in MNLMs} \label{sec:knowledge_probing} This section investigates which types of commonsense knowledge are well trained and contained in the pretrained MNLMs. Clarifying the knowledge included in the MNLMs is difficult for the following reasons. 
First, there is a disparity between the input format of MNLMs, which is natural language, and the structure of ConceptNet knowledge, which is made up of semantic triples. The Cloze test \cite{chapelle1990cloze}, known to be a reliable assessment of the language ability of a participant, is a task wherein one fills in the correct answer for the blank in the text. In the following example, ``children and \_ are opposite.'', the answer word would be `adults' rather than `kids'. To infer the correct answer, we must know not only the meaning of each word but also the semantic relation between the words. \revised{Several recent studies suggest methodologies to gauge relational knowledge from pretrained MNLMs with Cloze test approaches \cite{davison2019commonsense, petroni2019language,kassnerS20negated, jiang2020can, bouraoui2020inducing, shin2020eliciting, zhong2021factual}. The LAMA (LAnguage Model Analysis) probe \cite{petroni2019language} is an early study of the Cloze-style probing approach. Herein, probing is conducted with a Cloze test using manually designed templates. The results show that some factual knowledge can be recalled from the MNLMs without finetuning procedures. However, in this case, the results depend highly on the designed set of templates. To ameliorate this, the LM Prompt And Query Archive (LPAQA) suggests creating a set of candidate prompts for each relation by using text-mining and paraphrasing approaches \cite{jiang2020can}. Among the multiple candidate prompts, one may select the top-K prompts. The authors claim that they can achieve higher performance when the probing results of the retrieved text and the manually designed text are ensembled. Recently, gradient-based methods have been proposed to find the optimal relational prompts for each model \cite{shin2020eliciting,zhong2021factual}. AutoPrompt is a method to automatically generate prompts for a diverse set of tasks, based on a gradient-guided search \cite{Wallace2019Triggers}. The authors suggest generating a prompt for each relation by adding ``trigger'' tokens that can be any token in the vocabulary. Herein, all trigger tokens are initially set to the `[MASK]' token and then recurrently updated to optimize the probability of the answer labels of the training data. OptiPrompt is another gradient-based prompt engineering approach \cite{zhong2021factual}. Different from AutoPrompt, which optimizes over discrete tokens, OptiPrompt directly optimizes in the input vector space. As a result, it can find real-valued inputs for extracting factual knowledge. However, existing methods have some drawbacks when adopted in our experimental setting. First of all, gradient-based prompt searching can generate semantically and grammatically implausible strings such as ``''(),ex-,Liverpool'', which are far from natural sentences \cite{Wallace2019Triggers}. On top of that, as the GPT3 API is provided for a fee, paraphrasing prompts suitable for each model incurs a tremendous cost, which is also the case for the ensemble-based method. Finally, the gradient-based approaches require `white-box' access to the NLMs to calculate gradients. Thus, they are hard to apply to extremely large NLMs such as GPT3, whose parameters are not publicly accessible. 
Therefore, we design our knowledge probing, named the \textit{knowledge probing test}, by taking into account 1) semantic and grammatical plausibility, 2) not relying on a single prompt for each relation, 3) minimizing the experimental cost, and 4) a black-box setting with respect to the parameter accessibility of very large NLMs. } In the knowledge probing test, we first transform a semantic triple $(s,r,o)$ into a sentence that can be used as an input to a designated MNLM. Herein, a sentence is created via predefined predicate templates collected from frequently used patterns representing particular relations in the OMCS dataset \cite{OMSC}. To be specific, for a given triple we generate a masked sentence for the knowledge probing test by the following procedure. \revised{First of all, we find the most grammatically plausible candidate sentence for each template. To this end, from an original template, grammatically diversified sentences are generated by grammar transformation rules \cite{davison2019commonsense}. The sentence with the lowest perplexity \cite{radford2018improving} on the pretrained LM among the generated sentences is selected as the candidate sentence of a template. Then, the most semantically probable one is picked among the candidate sentences originating from the original templates. Specifically, the sentence with the lowest perplexity is selected as the masked sentence of a triple. Through this process, we can create the most grammatically and semantically natural masked sentence for each triple. Herein, GPT1 is used as the pretrained LM for perplexity calculation. The detailed procedure of the grammatical transformation is described in Appendix~\ref{apx:details_on_the_grammar_transformation}. In addition, we provide several examples of the results of the knowledge probing test in Appendix~\ref{apx:qualitative_results}. The details on the original templates are presented in Appendix~\ref{apx:details_on_the_templates}.} Our paper reports the following fundamental limitations of the existing MNLMs, veiled behind the empirical successes of NLMs, which have not been extensively explored yet: 1) even if MNLMs predict correct answers for knowledge triples, this does not guarantee that MNLMs accurately understand the attributes of the subject entities; 2) MNLMs have a hard time discerning semantically related relations such as opposite relation pairs. \subsection{Probing on Various Types of Relations} \label{sec:knowledge_probing_test} To report the results of the knowledge probing test, we use the hits@K metric \cite{bordes2013translating}, which measures the ratio of correctly predicted answers, within the top K predictions, out of all true answers from the ConceptNet repository. Table~\ref{tab:hits@K} presents the macro-average, an equally weighted average of the results over relations, and the micro-average, a weighted average of the results of the relations according to their frequencies. We use the macro-average as the main yardstick because there is a large variation in the number of examples in each relation. For example, the experimental results are greatly influenced by relations with a high proportion of examples such as `RelatedTo' and `HasContext'. Individual results for each relation are listed in Appendix~\ref{apx:quantitative_analysis_for_probabilistic_distributions}. \revised{Note that our knowledge probing test is in line with recent research in that a Cloze test is used.}
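For illustration, the following is a minimal sketch (in Python, using the HuggingFace \texttt{transformers} library) of how a masked sentence could be probed and scored with hits@K. The checkpoint name, the example sentence, and the helper function are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
from transformers import pipeline

# Illustrative only: any masked language model checkpoint could be used.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

def hits_at_k(masked_sentence, answer_objects, k=100):
    # Top-k single-token predictions for the [MASK] position.
    predictions = unmasker(masked_sentence, top_k=k)
    predicted = {p["token_str"].strip().lower() for p in predictions}
    hits = sum(1 for obj in answer_objects if obj.lower() in predicted)
    # Ratio of true answers recovered within the top-k predictions.
    return hits / len(answer_objects)

# Hypothetical masked sentence for the triple (butter, MadeOf, milk).
print(hits_at_k("Butter is made of [MASK].", ["milk", "cream"], k=100))
\end{verbatim}
The macro-average then averages this ratio over relations, whereas the micro-average weights each relation by its number of triples.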
\revised{Some recent studies mainly focus on the positive results showing that NLMs are able to infer factual knowledge \cite{davison2019commonsense,petroni2019language}. On the other hand, a recent study demonstrates that existing NLMs predict almost the same results for a given sentence and its negated sentence \cite{kassnerS20negated}.} First, we analyze the effect of model size for each type of MNLM through the performance on the knowledge probing test. From the experimental results in Table~\ref{tab:hits@K}, for the same type of MNLM, the larger models generally perform better. \revised{In particular, the largest models of each type of MNLM, BERT$_{large}$, RoBERTa$_{large}$, and ALBERT2$_{xlarge}$, show performances around 40 at hits@100. Moreover, the performance of GPT3 clearly surpasses that of GPT1 and GPT2, and its hits@100 shows that its ability to extract factual knowledge is substantially higher than that of the other models.} Meanwhile, the results of the ALBERT1 models differ somewhat from those of the other models. First, their overall performance is lower than that of the other types of models. In particular, the hits@100 performance of ALBERT1 is less than 30, which is about 25\% lower than that of the other models. Next, ALBERT1$_{xlarge}$ performs worse than ALBERT1$_{base}$ and ALBERT1$_{large}$, which consist of fewer parameters and layers than ALBERT1$_{xlarge}$. However, we can also find that this issue is ameliorated in the results of the ALBERT2 models, which are trained on a larger corpus than the ALBERT1 models. Therefore, from these observations, we can draw the following inferences. First, commonsense knowledge can be extracted from MNLMs without further fine-tuning. Next, a deeper and larger model typically shows better performance than smaller models. However, for very deep and large models such as the ALBERT$_{xlarge}$ models, it is difficult to learn commonsense knowledge from a relatively small dataset. Finally, MNLMs can learn more commonsense knowledge if they are trained on a larger pretraining dataset. Although average hits@100 performances above 50\% may seem high, considering that the average number of answers provided by ConceptNet (see Appendix~\ref{apx:details_on_the_templates}) is less than 5, the listed models cannot predict even these few confident answers correctly within the top 100 predicted words. In addition, large fluctuations can be found in the quantitative results for each relation. Some relations (`Entails', `AtLocation', ...) show below 20\% in hits@100, while others (`NotCapableOf', `MadeOf', `ReceivesAction') show at least 60\%. \begin{figure*}[ht!] \begin{center} \includegraphics[width=\linewidth]{images/madeof_hitmap_tab20b_r_rebuttal.pdf} \end{center} \caption{\textbf{Color-coded results of the BERT$_{base}$ model's predictions on 100 samples in the `MadeOf' relation}. This figure shows whether each sample (x-axis) contains certain object words (y-axis) in the top 10 predictions. Each color represents one of the 10 most frequently observed words in the predictions on the `MadeOf' relation.} \label{fig:made_of_color_coded} \end{figure*} \begin{figure*}[ht!] \begin{center} \includegraphics[width=\linewidth]{images/rebuttal_top10_common.pdf} \end{center} \caption{\textbf{Frequencies of the 5 most frequently occurring words in the top 10 predictions}. X-axis indicates the 5 most commonly observed words in the top 10 predictions of each model. Y-axis is the frequency ratio of the commonly observed words.
} \label{fig:topk_word_image} \end{figure*} Furthermore, we conjecture that the semantic understanding of MNLMs about relations is not as accurate as expected despite the high hit ratios. An illustrative example is the `MadeOf' relation, which consistently shows the best performance for almost all MNLMs. Indeed, the `MadeOf' relation has the highest hits@10 performance for all MNLMs among the relations with more than 50 examples. However, when we take a closer look at the predictions of the MNLMs, we commonly observe that some specific words are repeated across different subjects. In particular, the BERT$_{base}$ model, which achieves the highest hits@10 on the `MadeOf' relation, presents a noticeable result. Figure~\ref{fig:made_of_color_coded} shows the appearance of the 10 most frequent words in the top 10 predictions of the BERT$_{base}$ model for 100 samples of the `MadeOf' relation. For more than 70\% of the sampled subjects, `wood', `metal', and `glass' appear as high-rank predictions. Therefore, our observations suggest that the predictions tend to follow the marginal distribution of the `MadeOf' relation instead of reflecting the conditional distribution given a subject. This can be problematic when those frequent words are definitely incorrect answers. For example, `wood' is predicted as the most probable answer for the question ``What is butter made of?'', where a human can easily notice that `wood' is an inadequate answer. \revised{As Figure~\ref{fig:topk_word_image} shows, such repetition of the predicted words is commonly observed among the MNLMs. Note that even if the size of the model increases ($base$ < $large$ < $xlarge$; GPT1 < GPT2 < GPT3) or the model is trained on a larger dataset (ALBERT1 < ALBERT2; GPT1 < GPT2 < GPT3), the marginal prediction issue of the `MadeOf' relation is not fundamentally solved. We provide detailed information about the frequently observed object words for each model in Appendix~\ref{apx:MadeOf}.} \subsection{Probing the Relationship Between Two Opposite Relations}\label{sec:synonym and antonym} \begin{figure}[!t] \adjustbox{minipage=2em,raise=-\height}{\subcaption{} \label{fig:opposite_test_move}}% \raisebox{-\height}{\includegraphics[width=.45\linewidth]{images/cheap_diff_color_rebuttal.pdf}} \adjustbox{minipage=2em,raise=-\height}{\subcaption{} \label{fig:opposite_test_trust}}% \raisebox{-\height}{\includegraphics[width=.45\linewidth]{images/obedience_diff_color.pdf}} \caption{\textbf{Results of the BERT$_{base}$ model on the top 10 words for the opposite relations on the subject words}. \textbf{(a)} `cheap' and \textbf{(b)} `obedience'. Words commonly observed in both results are painted in the same color, and the other words are in light gray.} \label{fig:antonym_synonym_overlapping} \end{figure} \revised{So far, we have discussed the behavior of MNLMs for each relation. Here, we address the following question: ``Do MNLMs precisely understand the semantic differences between relations?'' To answer this question, we compare the results of the knowledge probing test for the following four pairs of opposite relations on the same subject: `Synonym / Antonym', `HasProperty / NotHasProperty', `Desire / NotDesire', and `CapableOf / NotCapableOf.' Different from the previous study \cite{kassnerS20negated}, our experimental environment is more restrictively controlled in that there must exist answers for both relations of an opposite relation pair for the same subject.
The results for each relation of an opposite relation pair should be clearly distinguishable if the MNLMs understand the subtle semantic differences between the opposite relations. } Figure~\ref{fig:antonym_synonym_overlapping} shows illustrative examples for both relations of an opposite pair on the same subject words. Unexpectedly, there are words simultaneously predicted for both opposite relations. The quantitative results in Table~\ref{tab:overlapping_results} show that words with high probabilities for the two opposite relations are common in many cases. This finding suggests that the MNLMs may not clearly distinguish the subtle meanings of the opposite relations. \revised{This phenomenon is not fundamentally solved either as the size of the model grows or as the training data increases, as seen in the results of the ALBERT and GPT models.} Furthermore, in order to demonstrate that such an overlapping issue is undesirable and even problematic, we measure the ratio of incorrect answers by grading the predictions against the answers of the opposite relation, which can be regarded as wrong answers. Table~\ref{tab:intergrade_results} summarizes the results. Among the opposite relation pairs, the answer objects of the `Synonym / Antonym' pair are incompatible, while the other opposite pairs can have identical answers. For example, `pregnant' is an identical answer for the subject `person' under both relations of the `Desire / NotDesire' pair. For this reason, we conduct this experiment only on the `Synonym / Antonym' pair. Hits@K, in this case, can be interpreted as Miss@K, the ratio of predicted words that are definitely incorrect. The experimental results demonstrate that MNLMs have a relatively high incorrect ratio, considering that such errors are avoidable. In addition, as the size of the model and the training data increase, the performance of the models improves. However, at the same time, the incorrect rate also generally increases. Thus, these results indicate that it is difficult for MNLMs to discern the precise difference between semantically related relations, specifically opposite relations. Additional examples of the overlapped predictions between the opposite relations can be found in Appendix~\ref{apx:additional_examples_on_overlapping}. \subsection{The Impact of the Masked Sentence Selection Procedure} \revised{Herein, we analyze the impact of the selected templates on the reported performance of the LMs. To this end, we analyze the impact of the masked sentence selection in the knowledge probing test. More specifically, for the BERT models, we compare the probing performance of the selected masked sentences with the average probing performance of the candidate sentences derived from all original templates. The results in Table~\ref{tab:impact_of_template_selection} demonstrate that the selected masked sentences show substantially higher performance than the candidate sentences. From this, we can see that the selected sentences are not only the most natural among the candidate sentences but also effective for extracting factual knowledge in general.} \subsection{The Impact of Fine-tuning on ConceptNet} \revised{The previous experiments are performed in a zero-shot environment without additional fine-tuning of the pretrained MNLMs for the knowledge probing test. However, the distribution of the pretraining corpora of MNLMs has a large discrepancy from that of ConceptNet.
Therefore, we need to verify whether the limitations we point out above arise from the discrepancy between the training corpora of MNLMs and ConceptNet. To this end, we fine-tune the BERT models on ConceptNet triples and analyze the results.} \revised{First of all, we randomly divide all of the triples included in the knowledge probing test into 3 folds. After that, we fine-tune and evaluate the MNLMs by designating each fold as a train, validation, or test set. To be specific, to evaluate each fold, we specify the train, validation, and test sets as follows: 1) Train: Fold 0, Validation: Fold 1, Test: Fold 2; 2) Train: Fold 1, Validation: Fold 2, Test: Fold 0; 3) Train: Fold 2, Validation: Fold 0, Test: Fold 1.} \revised{During fine-tuning, models are trained to predict the answer objects of a masked sentence. Note that we set the hyper-parameters for fine-tuning to the default values of OptiPrompt \cite{zhong2021factual}. Finally, the performance of a fine-tuned model is calculated by averaging the test performance over the folds.} \revised{Table~\ref{tab:hits@K_finetune} shows the experimental results on the knowledge probing test for the pretrained and fine-tuned BERTs. After fine-tuning, the performance is significantly enhanced ($p < 0.05$) for both the BERT$_{base}$ and BERT$_{large}$ models. In particular, the hits@100 performance exceeds 70\% for both models.} \revised{Tables~\ref{tab:overlap@K_finetune} and \ref{tab:miss@K_finetune} show the results of the experiments on overlapping between opposite relations for the fine-tuned models. Here, we can see that miss@1 of the fine-tuned models is significantly reduced compared to before fine-tuning. We assume this is because the prediction probability of the correct answer object increases as the prediction accuracy for the Synonym and Antonym relations increases. Nevertheless, in the case of overlap@K, the problem is not fundamentally solved even after fine-tuning, and for the `Synonym / Antonym' pair the overlap ratio even increases after fine-tuning.} \revised{Overall, we can see that when the model is fine-tuned on ConceptNet, not only is the knowledge probing performance improved, but the miss ratio between some opposite relations can also be reduced. These results imply the importance of a tailored dataset whose distribution is similar to that of the target task. However, even after fine-tuning, the overlapping between the predictions of opposite relations is still observed in the model outputs and is not fundamentally solved. Detailed experimental results for each relation in the folds of the models can be found in Appendix~\ref{apx:finetuning_on_KB}.} \section{Which Types of Questions Are Still Challenging for MNLMs?}\label{sec:difficulty} \revised{In recent years, MNLMs have led to breakthroughs in RC tasks, even beating human-level performance \cite{radford2018improving,devlin2019bert,lan2019albert}. It is widely known that not only syntactic but also comprehensive semantic knowledge, including commonsense knowledge, is required to solve these tasks accurately. Do these results then mean that the linguistic understanding of the models exceeds that of humans, who can infer the correct answer based on background knowledge including commonsense? In this section, we figure out which types of questions are still challenging for the existing MNLMs.
In particular, we scrutinize the questions that require factual background knowledge, such as a ConceptNet triple, about the keywords in a question-context pair.} \revised{The experiments are conducted in the following environment. First of all, we use two widely known MRC benchmarks, the Stanford Question Answering Dataset (SQuAD) 2.0 \cite{rajpurkar2018know} and the Reading comprehension with Commonsense Reasoning Dataset (ReCoRD) \cite{zhang2018record}. SQuAD comprises two types of questions: \textit{has answer} and \textit{no answer}. A \textit{has answer} question contains an answer in the corresponding context. A \textit{no answer} question does not have a contextual answer. On the other hand, ReCoRD is a task of finding the answer among the entities of the context for a given question, and it is known that more commonsense reasoning is needed to solve it compared to SQuAD. ReCoRD is a multiple-choice MRC task: it contains a cloze-test style question, and the correct answer to a question is one of the entities in the given context \cite{wang2019superglue}. In our experiments, to use the same performance evaluation criteria as SQuAD, we conduct the ReCoRD experiments with an answer span prediction approach. We analyze sampled question-context pairs from the development set of each dataset since we cannot access the test sets.} \revised{In terms of models, we only concentrate on the largest MNLMs (BERT$_{large}$, ALBERT1$_{xlarge}$, ALBERT2$_{xlarge}$, and RoBERTa$_{large}$). This is because not only do MNLMs show substantially higher performance than the GPT1 and GPT2 models on natural language understanding tasks \cite{devlin2019bert, van2019does, clark2020electra}, but there is also no way to fine-tune a GPT3 model. A detailed analysis of the lexical overlap ratio between the question and context of each dataset and the data sampling method are described in Appendix~\ref{apx:difficulty_word_overlap}.} We first classify the questions into the following six classes by referring to the question types of SQuAD \cite{rajpurkar2016squad}. \revised{Detailed explanations of the question types are given with examples in Appendix~\ref{apx:details_on_RC_question_types}}: \begin{itemize} \item The \textit{synonymy} class indicates the existence of a synonym relation between an answer sentence and a question. \item The \textit{commonsense knowledge} class indicates that commonsense is required to solve a question. \item The \textit{no semantic variation} class denotes that a question belongs to neither the \textit{synonymy} nor the \textit{commonsense knowledge} type. \item The \textit{multiple sentence reasoning} class indicates that there is anaphora or that clues are scattered across multiple sentences. \item The \textit{typo} class denotes a typographical error in a question or an answer sentence. \item The \textit{others} class indicates that the presented answers are incorrectly tagged. \end{itemize} Then, we manually label the question types to see which types are more challenging than others for the MNLM-based RC models. \revised{We report the analysis results on the proportions of the question types for SQuAD in Table~\ref{tab:question_types_squad} and for ReCoRD in Table~\ref{tab:question_types_record}.
We investigate the proportions of question types by comparing the questions incorrectly predicted by the models to the questions correctly predicted by the models.} The results show that there are obvious differences in the proportions of the question types between the correctly predicted questions and the incorrectly predicted questions. Across all models, the proportion of \textit{no semantic variation} questions accounts for more than 50\% of the correctly predicted questions, while the proportion decreases to about 30\% of the incorrectly predicted questions. In other words, the easier questions have a higher proportion of \textit{no semantic variation} type questions. Likewise, the proportion of \textit{commonsense knowledge} questions comprises approximately 22\% to 25\% of the correctly predicted questions, whereas the proportion increases to about 50\% of the incorrectly predicted questions. \revised{On the other hand, we see that commonsense type questions account for the majority in ReCoRD. Moreover, there is an extremely large portion of semantic variation type questions. In the case of the no semantic variation type questions, we notice that all the models answer the same questions incorrectly. In contrast, in the case of the commonsense knowledge type questions, BERT, ALBERT1, ALBERT2, and RoBERTa incorrectly predict 42, 40, 34, and 29 questions, respectively. This result indicates that the models pretrained on larger corpora have higher performance on commonsense type questions.} From these results, we can infer that commonsense type questions can be substantially affected by the size of the pretrained corpus, that is, the amount of knowledge contained in the corpus. Therefore, it is still challenging for the MNLM-based RC models to deal with questions with semantic variations or questions that require commonsense knowledge to solve. This is an important discovery because it suggests that MNLMs should have the capacity to address commonsense knowledge in order to overcome their limitations. \section{Main Results and a Possible Solution} \label{sec:discussion} Section~\ref{sec:knowledge_probing} reveals that there are still limitations of the existing MNLMs even though they have the potential to infer commonsense knowledge. \revised{Furthermore, the results in Section~\ref{sec:difficulty} indicate that the \textit{commonsense knowledge} type is the most dominant among the questions that MNLM-based RC models have a hard time answering correctly.} We conjecture that the existing MNLMs are heavily trained to learn the information observed in the corpus and the co-occurrences of words rather than the precise meanings of relations. Here, we first analyze the observed joint frequencies of subject and object entities in the BERT pretrain data (English Wikipedia and Book Corpus). Subsequently, we discuss the impact of word frequency on the knowledge probing results. Finally, we suggest a potential solution and a direction for handling the \textit{semantic variation} type questions by utilizing an external commonsense repository. \subsection{Why Do MNLMs Still Need an External Commonsense Repository?} \label{sec:frequency} \begin{figure*}[ht!] \begin{center} \includegraphics[width=\linewidth/2]{images/entity_pair_statistics.pdf} \end{center} \caption{\textbf{Statistics of the triples in the BERT pretrain dataset.} X-axis indicates the frequency with which subject and object entities are observed together. Y-axis is the number of triples for each section of the joint frequency of subject and object.
The joint frequency and the proportion of triples for each frequency section have a negative correlation.} \label{fig:entity_pair_statistics} \end{figure*} \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\linewidth]{images/BERT_large_entity_pair_answer_ratio_templates.pdf} \caption{BERT$_{large}$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\linewidth]{images/ALBERT_xlarge_entity_pair_answer_ratio_templates.pdf} \caption{ALBERT1$_{xlarge}$} \end{subfigure} \caption{\textbf{The proportion of knowledge triples whose objects are in the top 100 predictions of each model:} \textbf{(a)} BERT$_{large}$, \textbf{(b)} ALBERT1$_{xlarge}$. X-axis indicates the frequency with which subject and object entities are observed together. Y-axis is the proportion of triples whose answer object can be found in the model's top 100 predictions for each frequency section. Blue bars indicate that the answer is in the top 100 predictions, while orange bars mean that the answer is not in the top 100 predictions. The results show that the joint frequency of subject and object affects the knowledge probing performance.} \label{fig:bar_plot_of_joint_freq_and_preformance} \end{figure*} \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width=\linewidth]{images/BERT_large_top100_conditional_heatmap_templates.pdf} \caption{BERT$_{large}$ in top 100} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width=\linewidth]{images/BERT_large_dynamic_conditional_heatmap_templates.pdf} \caption{BERT$_{large}$ in top k} \end{subfigure} \\ \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width=\linewidth]{images/ALBERT_xlarge_top100_conditional_heatmap_templates.pdf} \caption{ALBERT1$_{xlarge}$ in top 100} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width=\linewidth]{images/ALBERT_xlarge_dynamic_conditional_heatmap_templates.pdf} \caption{ALBERT1$_{xlarge}$ in top k} \end{subfigure} \caption{\revised{\textbf{Correlations of the joint frequency of subject and object and the frequency of the subject:} \textbf{(a)} BERT$_{large}$ in top 100, \textbf{(b)} BERT$_{large}$ in top k, \textbf{(c)} ALBERT1$_{xlarge}$ in top 100, \textbf{(d)} ALBERT1$_{xlarge}$ in top k. X-axis indicates the frequency with which subject and object entities are observed together. Y-axis is the frequency of the subject. Values of the heatmap indicate the proportion of knowledge triples whose objects can be found in the model's top 100 or top k predictions for each frequency grid. Herein, `k' indicates the number of ground truth objects for each subject and relation pair. The results show that, within the same joint frequency section of subjects and objects, the performance increases as the subject observation frequency decreases.} } \label{fig:frquency_performance_heat_map} \end{figure*} Herein, we analyze the conditions for the MNLMs to learn the commonsense knowledge triples. Since there are no exclusive textual patterns for each relation, it is difficult to calculate the exact observation frequencies of the triples directly. Therefore, we adopt an assumption from the distant supervision paradigm of relation extraction tasks \cite{mintz2009distant}. The assumption is that if sentences include a pair of entities that have a specific relation, then some of these sentences probably represent the relation of the entity pair. The more sentences contain the entity pair, the more sentences are likely to represent the relation.
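The following is a minimal sketch of how such joint frequencies could be counted under this distant supervision assumption; the corpus iterator and the simple whitespace matching are illustrative assumptions rather than our exact preprocessing.
\begin{verbatim}
from collections import Counter

def count_joint_frequencies(sentences, triples):
    # triples: iterable of (subject, relation, object) strings from ConceptNet.
    # A sentence is assumed to support a pair if both surface forms occur in it.
    pairs = {(s.lower(), o.lower()) for s, _, o in triples}
    joint_freq = Counter()
    for sentence in sentences:
        tokens = set(sentence.lower().split())
        for subj, obj in pairs:
            if subj in tokens and obj in tokens:
                joint_freq[(subj, obj)] += 1
    return joint_freq
\end{verbatim}
In practice, multi-word entities and lemmatization also need to be handled, and the resulting counts can be bucketed into the frequency sections shown in Figure~\ref{fig:entity_pair_statistics}.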
\revised{For this reason, we calculate the joint frequency of ConceptNet entity pairs in the BERT pretrain dataset. In this section, top 100 indicates that the answer object can be found in the top 100 predictions of a model. Furthermore, top k denotes that the answer object can be found in the top k predictions, where k is the number of ground truth objects for the given subject and relation pair.} Figure~\ref{fig:entity_pair_statistics} presents the statistics for the number of triples in each interval of the joint frequency of subject and object. Herein, many triples have a low joint frequency. Then, we analyze the probing results of the BERT and ALBERT1 models, which are trained on the BERT pretrain dataset. Finally, we check whether there is an answer object in the model's top 100 or top k predictions, based on the knowledge probing results of the triples whose subject and object entity pair is jointly observed at least once. In this section, we only show the results of the BERT$_{large}$ and ALBERT1$_{xlarge}$ models since the experimental results are consistent across the other models. Experimental results of the other models are given in Appendix~\ref{apx:frequency_performance}. \revised{Figure~\ref{fig:bar_plot_of_joint_freq_and_preformance}, the main observation of our study, shows that the joint frequency of subjects and objects clearly affects the performance of knowledge prediction. Nevertheless, there are some entity pairs with very low frequency (1 to 9) for which the answer is successfully predicted in the top 100. We analyze why this happens and report the results in Figure~\ref{fig:frquency_performance_heat_map}. The results show that, within the same joint frequency section, as the subject observation frequency increases, the performance tends to decrease. Therefore, we can see that the performance is positively affected by the joint frequency of the subject and object, while it is negatively affected by the subject frequency. This demonstrates that the commonsense knowledge trained into MNLMs can be influenced by the conditional probability of the object given the subject.} From these findings, training MNLMs on a larger dataset should be useful for enhancing the learning of commonsense knowledge, as the joint frequency of entity pairs is expected to increase in a larger dataset. However, it can be inefficient to simply increase the size of the dataset to sufficiently train relatively rare entity pairs. This is because most entity pairs in a larger dataset can be redundant, since the number of co-occurrences of word pairs follows a power-law distribution \cite{pennington2014glove}, similar to the well-known Zipf's law \cite{powers1998applications}. Moreover, there is no guarantee that the conditional probability of entity pairs in a larger dataset will increase. Therefore, learning commonsense knowledge apart from the distribution of the training corpora will be an important factor in overcoming the limitations of the existing MNLMs. Taking all of this into consideration, it is still hard for the existing MNLMs to learn a significant portion of commonsense knowledge, and they may need help from external knowledge repositories or other tailored special-purpose datasets. \subsection{Can an External Commonsense Repository Help MNLMs Solve Questions Including Semantic Variation?} \label{sec:complement_NLMs_with_knowledge} As discussed, it is clear that the existing MNLM-based RC models have limitations in solving semantic variation type questions.
However, it remains unclear whether explicitly supplementing the necessary knowledge can be a solution to the limitations of the MNLM-based RC models. Therefore, we conduct a controlled experiment to verify whether external commonsense knowledge can help ameliorate these limitations. Specifically, we integrate the required knowledge by enriching the text of a question or a context without additional training or changing the model. \revised{Examples in Table~\ref{tab:example_of_manually_integrating_in_squad} and Table~\ref{tab:example_of_manually_integrating_in_record} illustrate how the required knowledge is integrated into the text of each dataset. First, we find the subject word of each required knowledge triple in the question and the context. Then, the word is enriched with the relation and the object of the triple. Specifically, the relation is converted into natural language using the templates. A detailed algorithm of the knowledge integration is provided in Appendix~\ref{apx:details_on_the_integrating_external_commonsense_repository_test}.} \revised{We analyze the \textit{semantic variation} type questions to figure out whether clues for each question can be found among the knowledge triples in ConceptNet. As a result, we find that 244 out of the 684 semantic variation type questions of SQuAD have clues in ConceptNet. We also observe that 104 out of the 200 semantic variation type questions of ReCoRD have clues in ConceptNet. For the evaluation, the questions are divided into two groups according to their exact match (EM) \cite{rajpurkar2016squad} scores. The first group is the `incorrect' questions, whose predicted answers do not fully match the ground truth answers. The other group is the `correct' questions, whose predicted answers fully match the ground truth answers.} Then, we evaluate our baseline models on the selected data using EM and the F1 score of Equation~(\ref{F1}), which is the harmonic mean of recall (Equation~(\ref{RECALL})) and precision (Equation~(\ref{PRECISION})). We use BERT$_{large}$, ALBERT1$_{xlarge}$, ALBERT2$_{xlarge}$, and RoBERTa$_{large}$ as our baseline models. In addition, we conduct a one-tailed t-test for significance testing. \begin{equation} \label{PRECISION} \hspace*{2.1cm} \text{Precision}=\frac{\text{\#\,of\,words\,in\,predicted\,answer\,matched\,with\,words\,in\,the\,ground\,truth}}{\text{\#\,of\,words\,in\,predicted\,answer}} \end{equation} \begin{equation} \label{RECALL} \hspace*{2.35cm} \text{Recall}=\frac{\text{\#\,of\,words\,in\,predicted\,answer\,matched\,with\,words\,in\,the\,ground\,truth}}{\text{\#\,of\,words\,in\,the\,ground\,truth}} \end{equation} \begin{equation} \label{F1} \hspace*{5.8cm} \text{F1}=\frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision}+\text{Recall}} \end{equation} \revised{Table~\ref{tab:result_of_manual_integration} shows the results of the knowledge integration. The experimental results show that the performance on incorrect questions is significantly improved for all experimental models ($p$ < 0.05) by adopting the knowledge-integrated text, without any modification of the model architecture, on both SQuAD and ReCoRD. On the other hand, as a result of the knowledge integration, the performance on correct questions decreases slightly. Nevertheless, from the experimental results, we can see that the improvement in the performance on incorrect questions is substantially larger than the decrease on the correct questions.}
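As a concrete illustration of the evaluation in Equations~(\ref{PRECISION})--(\ref{F1}), the following is a minimal sketch of the token-level EM and F1 computation; the plain whitespace tokenization is an illustrative assumption and omits the official SQuAD answer normalization.
\begin{verbatim}
from collections import Counter

def em_and_f1(prediction, ground_truth):
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    em = float(pred_tokens == gold_tokens)
    # Number of predicted words matched with words in the ground truth.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_matched = sum(common.values())
    if num_matched == 0:
        return em, 0.0
    precision = num_matched / len(pred_tokens)
    recall = num_matched / len(gold_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return em, f1
\end{verbatim}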
\revised{What should be noted are the results of the ALBERT1 and ALBERT2 models. In particular, the performance improvement from knowledge integration on ALBERT1 is consistently higher than that on ALBERT2. In the previous sections, we showed that ALBERT1 was trained on a smaller amount of text than ALBERT2 with the same topology, which can result in 1) poorer probing performance and 2) greater difficulty in solving the semantic variation type questions. Considering all these points, we can infer that ALBERT1, which has learned less knowledge, is more sensitive to the knowledge provided in the context.} \section{Conclusion and Future Work} \label{sec:conclusion} We investigate which types of commonsense knowledge are trained into the pretrained MNLMs by using the knowledge probing test. We find that MNLMs understand some commonsense knowledge, while the trained knowledge is not precise enough to distinguish opposite relations. We also find that questions requiring commonsense knowledge are still challenging for existing MNLM-based RC systems. Finally, we empirically demonstrate that the limitations of MNLMs can be complemented by integrating a commonsense knowledge repository. To the best of our knowledge, our study is the first to report the fundamental reason why existing MNLMs do not yet include a large portion of commonsense knowledge. \revised{Despite the aforementioned observations, some questions still remain. First of all, in Section~\ref{sec:frequency}, we analyze the impact of commonsense explicitly expressed in the corpus, although in many cases commonsense appears only implicitly in the corpus. However, due to the nature of implicit commonsense, it is extremely difficult to directly measure its influence in the corpus, and many previous studies have tried to evaluate it with indirect approaches such as probing. To ameliorate this, we plan to intensively analyze pairs with high hit scores among very low frequency subject-object pairs. In particular, we can analyze their correlation with relatively frequently observed entity pairs by using an adjacency graph structure over the co-occurrence frequencies between entities. In addition, it is also necessary to verify the direct causality between the ability to recall knowledge and the performance on downstream tasks.} It also remains to be investigated how MNLMs can learn to discern subtle semantic differences between opposite relations from unlabeled text. Further, an automatic solution for the knowledge integration remains to be developed as future work.
\section*{Acknowledgements} \noindent We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open-source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany); EPLANET, Marie Sk\l{}odowska-Curie Actions and ERC (European Union); A*MIDEX, ANR, Labex P2IO and OCEVU, and R\'{e}gion Auvergne-Rh\^{o}ne-Alpes (France); Key Research Program of Frontier Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China); RFBR, RSF and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society and the Leverhulme Trust (United Kingdom).
\section{Introduction}\vspace{-0mm} \label{sec:intro} The fundamental challenge of integrated planning and learning is to design an autonomous agent that can plan its actions to maximize its expected total rewards while interacting with an unknown task environment. Recent research efforts tackling this challenge have progressed from the use of simple Markov models assuming discrete-valued, independent observations (e.g., in \emph{Bayesian reinforcement learning} (BRL) \cite{Poupart2006}) to that of a rich class of Bayesian nonparametric \emph{Gaussian process} (GP) models characterizing continuous-valued, correlated observations in order to represent the latent structure of more complex, possibly noisy task environments with higher fidelity. Such a challenge is posed by the following important problems in machine learning, among others: \noindent {\bf Active learning/sensing (AL).} In the context of environmental sensing (e.g., adaptive sampling in oceanography \cite{Leonard07}, traffic sensing \cite{LowUAI12,LowRSS13,LowTASE15}), its objective is to select the most informative (possibly noisy) observations for predicting a spatially varying environmental field (i.e., task environment) modeled by a GP subject to some sampling budget constraints (e.g., number of sensors, energy consumption). The rewards of an AL agent are defined based on some formal measure of predictive uncertainty such as the entropy or mutual information criterion. To resolve the issue of sub-optimality (i.e., local maxima) faced by greedy algorithms \cite{Guestrin08,LowAAMAS12,LowAAMAS14,YehongAAAI16}, recent developments have made nonmyopic AL computationally tractable with provable performance guarantees \cite{LowAAMAS13,NghiaICML14,LowICAPS09,LowAAMAS08,LowAAMAS11}, some of which have further investigated the performance advantage of adaptivity by proposing nonmyopic adaptive observation selection policies that depend on past observations. \noindent {\bf Bayesian optimization (BO).} Its objective is to select and gather the most informative (possibly noisy) observations for finding the global maximum of an unknown, highly complex (e.g., non-convex, no closed-form expression nor derivative) objective function (i.e., task environment) modeled by a GP given a sampling budget (e.g., number of costly function evaluations). The rewards of a BO agent are defined using an improvement-based \cite{Brochu10} (e.g., \emph{probability of improvement} (PI) or \emph{expected improvement} (EI) over currently found maximum), entropy-based \cite{Hennig12,Ghahramani14}, or \emph{upper confidence bound} (UCB) acquisition function \cite{Srinivas10}. A limitation of most BO algorithms is that they are myopic. To overcome this limitation, approximation algorithms for nonmyopic adaptive BO \cite{RamosUAI14,Osborne09} have been proposed, but their performances are not theoretically guaranteed. \noindent {\bf General tasks/problems.} In practice, other types of rewards (e.g., logarithmic, unit step functions) need to be specified for an agent to plan and operate effectively in a given real-world task environment (e.g., natural phenomenon like wind or temperature) modeled by a GP, as detailed in Section~\ref{gppfram}. 
As shall be elucidated later, similarities in the structure of the above problems motivate us to consider whether it is possible to tackle the overall challenge by devising a nonmyopic adaptive GP planning framework with a general class of reward functions unifying some AL and BO criteria and affording practitioners some flexibility to specify their desired choices for defining new tasks/problems. Such an integrated planning and learning framework has to address the exploration-exploitation trade-off common to the above problems: The agent faces a dilemma between gathering observations to maximize its expected total rewards given its current, possibly imprecise belief of the task environment (exploitation) vs. that to improve its belief to learn more about the environment (exploration). This paper presents a novel nonmyopic adaptive \emph{Gaussian process planning} (GPP) framework endowed with a general class of Lipschitz continuous reward functions that can unify some AL and BO criteria (e.g., UCB) discussed earlier and offer practitioners some flexibility to specify their desired choices for defining new tasks/problems (Section~\ref{gppfram}). In particular, it utilizes a principled Bayesian sequential decision problem framework for jointly and naturally optimizing the exploration-exploitation trade-off, consequently allowing planning and learning to be integrated seamlessly and performed simultaneously instead of separately \cite{Deisenroth15}. In general, the resulting induced GPP policy cannot be derived exactly due to an uncountable set of candidate observations. A key contribution of our work here thus lies in exploiting the Lipschitz continuity of the reward functions to solve for a nonmyopic adaptive \emph{$\epsilon$-optimal GPP} ($\epsilon$-GPP) policy given an arbitrarily user-specified loss bound $\epsilon$ (Section~\ref{eogpp}). To plan in real time, we further propose an asymptotically optimal, branch-and-bound anytime variant of $\epsilon$-GPP with performance guarantee. Finally, we empirically evaluate the performances of our $\epsilon$-GPP policy and its anytime variant in BO and an energy harvesting task on simulated and real-world environmental fields (Section~\ref{expt}). To ease exposition, the rest of this paper will be described by assuming the task environment to be an environmental field and the agent to be a mobile robot, which coincide with the setup of our experiments.\vspace{-2.3mm} \section{Gaussian Process Planning (GPP)}\vspace{-0.2mm} \label{gppfram} {\bf Notations and Preliminaries.} Let $\mathcal{S}$ be the domain of an environmental field corresponding to a set of sampling locations. At time step $t > 0$, a robot can deterministically move from its previous location $s_{t-1}$ to visit location $s_t\in\mathcal{A}(s_{t-1})$ and observes it by taking a corresponding realized (random) field measurement $z_t$ ($Z_t$) where $\mathcal{A}(s_{t-1}) \subseteq \mathcal{S}$ denotes a finite set of sampling locations reachable from its previous location $s_{t-1}$ in a single time step. The state of the robot at its initial starting location $s_0$ is represented by prior observations/data $d_0\triangleq\langle \mathbf{s}_0 , \mathbf{z}_0\rangle$ available before planning where $\mathbf{s}_0$ and $\mathbf{z}_0$ denote, respectively, vectors comprising locations visited/observed and corresponding field measurements taken by the robot prior to planning and $s_0$ is the last component of $\mathbf{s}_0$. 
Similarly, at time step $t>0$, the state of the robot at its current location $s_t$ is represented by observations/data $d_t \triangleq \langle \Shist{t}, \Zhist{t} \rangle$ where $\Shist{t}\triangleq\mathbf{s}_0 \oplus (s_1 , \cdots s_t)$ and $\Zhist{t}\triangleq\mathbf{z}_0 \oplus (z_1 , \cdots z_t)$ denote, respectively, vectors comprising locations visited/observed and corresponding field measurements taken by the robot up until time step $t$ and `$\oplus$' denotes vector concatenation. At time step $t > 0$, the robot also receives a reward $\rfn(z_t, \Shist{t})$ to be defined later. \vspace{1mm} \noindent {\bf Modeling Environmental Fields with Gaussian Processes (GPs).} \label{GPSSSS} The GP can be used to model a spatially varying environmental field as follows: The field is assumed to be a realization of a GP. Each location $s \in \mathcal{S}$ is associated with a latent field measurement $Y_s$. Let $Y_{\mathcal{S}} \triangleq \{Y_{s}\}_{s \in \mathcal{S}}$ denote a GP, that is, every finite subset of $Y_{\mathcal{S}}$ has a multivariate Gaussian distribution \cite{Rasmussen06}. Then, the GP is fully specified by its \emph{prior} mean $\mu_s \triangleq \mathbb{E}[Y_s]$ and covariance $k_{ss'} \triangleq \text{cov}[Y_s,Y_{s'}]$ for all $s, s'\in \mathcal{S}$, the latter of which characterizes the spatial correlation structure of the environment field and can be defined using a covariance function. A common choice is the squared exponential covariance function $ k_{ss'} \triangleq \sigma_{y}^2\exp\{-0.5(s-s')^{\top}M^{-2}(s-s')\}$ where $\sigma_{y}^2$ is the signal variance controlling the intensity of measurements and $M$ is a diagonal matrix with length-scale components $l_1$ and $l_2$ governing the degree of spatial correlation or ``similarity'' between measurements in the respective horizontal and vertical directions of the $2$D fields in our experiments. The field measurements taken by the robot are assumed to be corrupted by Gaussian white noise, i.e., $Z_t \triangleq Y_{s_t}+\varepsilon$ where $\varepsilon\sim\mathcal{N}(0, \sigma_{n}^{2})$ and $\sigma_{n}^{2}$ is the noise variance. 
Supposing the robot has gathered observations $d_t = \langle \Shist{t}, \Zhist{t} \rangle$ from time steps $0$ to $t$, the GP model can perform probabilistic regression by using $d_t$ to predict the noisy measurement at any unobserved location $s_{t+1}\in\mathcal{A}(s_{t})$ as well as provide its predictive uncertainty using a Gaussian predictive distribution $p(z_{t+1}|d_t, s_{t+1})=\mathcal{N}(\mu_{s_{t+1}| d_t}, \sigma^2_{s_{t+1}|\Shist{t}})$ with the following \emph{posterior} mean and variance, respectively:\vspace{-1mm} $$ \begin{array}{rl} \mu_{s_{t+1}| d_t}\hspace{-2.8mm} &\triangleq\displaystyle\mu_{s_{t+1}}+\Sigma_{s_{t+1}\Shist{t}} \Gamma_{\Shist{t}\Shist{t}}^{-1} (\Zhist{t}-\mu_{\Shist{t}})^{\top}\vspace{0.5mm}\\ \sigma^2_{s_{t+1}|\Shist{t}}\hspace{-2.8mm} &\triangleq \displaystyle k_{s_{t+1}s_{t+1}}+\sigma_{n}^2 - \Sigma_{s_{t+1}\Shist{t}} \Gamma_{\Shist{t}\Shist{t}}^{-1} \Sigma_{\Shist{t}s_{t+1}}\vspace{-1mm} \end{array} $$ where $\mu_{\Shist{t}}$ is a row vector with mean components $\mu_s$ for every location $s$ of $\Shist{t}$, $\Sigma_{s_{t+1}\Shist{t}}$ is a row vector with covariance components $k_{s_{t+1}s}$ for every location $s$ of $\Shist{t}$, $\Sigma_{\Shist{t}s_{t+1}}$ is the transpose of $\Sigma_{s_{t+1}\Shist{t}}$, and $\Gamma_{\Shist{t}\Shist{t}}\triangleq\Sigma_{\Shist{t}\Shist{t}}+\sigma_{n}^2I$ such that $\Sigma_{\Shist{t}\Shist{t}}$ is a covariance matrix with components $k_{ss'}$ for every pair of locations $s, s'$ of $\Shist{t}$. An important property of the GP model is that, unlike $\mu_{s_{t+1}| d_t}$, $\sigma^2_{s_{t+1}|\Shist{t}}$ is independent of $\Zhist{t}$.\vspace{0.5mm} \noindent {\bf Problem Formulation.} \label{proble} To frame nonmyopic adaptive \emph{Gaussian process planning} (GPP) as a Bayesian sequential decision problem, let an adaptive policy $\pi$ be defined to sequentially decide the next location $\pi(d_t)\in\mathcal{A}(s_t)$ to be observed at each time step $t$ using observations $d_t$ over a finite planning horizon of $H$ time steps/stages (i.e., sampling budget of $H$ locations). The value $V_0^\pi(d_0)$ under an adaptive policy $\pi$ is defined to be the expected total rewards achieved by its selected observations when starting with some prior observations $d_0$ and following $\pi$ thereafter and can be computed using the following $H$-stage Bellman equations:\vspace{-1mm} $$ \begin{array}{rl} V_t^\pi(d_t) \hspace{-2.8mm} &\triangleq \displaystyle Q_t^\pi(d_t, \pi(d_{t}))\vspace{0.5mm}\\ Q_t^\pi(d_t, s_{t+1}) \hspace{-2.8mm} &\triangleq \displaystyle \mathbb{E}[ \rfn(Z_{t+1}, \Shist{t+1})\ + \\ &\quad V_{t+1}^\pi(\langle \Shist{t+1}, \Zhist{t}\oplus Z_{t+1}\rangle)|d_t,s_{t+1} ]\vspace{-1mm} \end{array} $$ for stages $t = 0,\ldots,H-1$ where $V_H^\pi(d_H)\triangleq 0$. To solve the GPP problem, the notion of Bayes-optimality\if\myproof1\footnote{Bayes-optimality has been studied in discrete BRL \cite{Poupart2006} whose assumptions (Section~\ref{sec:intro}) do not hold in GPP. Continuous BRLs \cite{Ross09,Ross08} assume a known parametric form of observation function, the reward function to be independent of measurements and past states, and/or, when exploiting GP, maximum likelihood observations during planning with no provable performance guarantee.} \fi is exploited for selecting observations to achieve the largest possible expected total rewards with respect to all possible induced sequences of future Gaussian posterior beliefs $p(z_{t+1}|d_t, s_{t+1})$ for $t=0,\ldots,H-1$ to be discussed next. 
Formally, this involves choosing an adaptive policy $\pi$ to maximize $V^{\pi}_0(d_0)$, which we call the GPP policy $\pi^\ast$. That is, $V^{\ast}_0(d_0)\triangleq V^{\pi^\ast}_0(d_0) = \max_\pi V^{\pi}_0(d_0)$. By plugging $\pi^\ast$ into $V_t^\pi(d_t)$ and $Q_t^\pi(d_t, s_{t+1})$ above,\vspace{-0.7mm} \begin{equation} \hspace{-1.2mm} \begin{array}{rl} V_t^*(d_t) \triangleq\hspace{-2.4mm} & \max_{s_{t+1} \in \mathcal{A}(s_{t})} Q_t^*(d_t, s_{t+1})\vspace{0.5mm}\\ Q_t^*(d_t, s_{t+1}) \triangleq\hspace{-2.4mm} & \displaystyle \mathbb{E}[ \rfn(Z_{t+1}, \Shist{t+1})|d_t, s_{t+1} ]\ + \\ & \mathbb{E}[V_{t+1}^*(\langle \Shist{t+1}, \Zhist{t}\oplus Z_{t+1}\rangle)|d_t, s_{t+1} ]\vspace{-0.6mm} \end{array} \label{eq:OptimalValFunDef} \end{equation} for stages $t = 0,\ldots,H-1$ where $V_H^*(d_H)\triangleq 0$. To see how the GPP policy $\pi^\ast$ jointly and naturally optimizes the exploration-exploitation trade-off, its selected location $\pi^\ast(d_t) = \arg\max_{s_{t+1} \in \mathcal{A}(s_{t})} Q_t^*(d_t, s_{t+1})$ at each time step $t$ affects both the immediate expected reward $\mathbb{E}[ \rfn(Z_{t+1}, \Shist{t}\oplus\pi^*(d_t))|d_t,\pi^*(d_t) ]$ given current belief $p(z_{t+1}|d_t, \pi^*(d_t))$ (i.e., exploitation) as well as the Gaussian posterior belief $p(z_{t+2}|\langle \Shist{t}\oplus\pi^*(d_t), \Zhist{t}\oplus z_{t+1}\rangle, \pi^*(\langle \Shist{t}\oplus\pi^*(d_t), \Zhist{t}\oplus z_{t+1}\rangle))$ at next time step $t+1$ (i.e., exploration), the latter of which influences expected future rewards $\mathbb{E}[V_{t+1}^*(\langle \Shist{t}\oplus\pi^*(d_t), \Zhist{t}\oplus Z_{t+1}\rangle)|d_t,\pi^*(d_t) ]$. In general, the GPP policy $\pi^\ast$ cannot be derived exactly because the expectation terms in \eqref{eq:OptimalValFunDef} usually cannot be evaluated in closed form due to an uncountable set of candidate measurements (Section~\ref{sec:intro}) except for degenerate cases like $\rfn(z_{t+1}, \Shist{t+1})$ being independent of $z_{t+1}$ and $H\leq 2$. To overcome this difficulty, we will show in Section~\ref{eogpp} later how the Lipschitz continuity of the reward functions can be exploited for theoretically guaranteeing the performance of our proposed nonmyopic adaptive $\epsilon$-optimal GPP policy, that is, the expected total rewards achieved by its selected observations closely approximate that of $\pi^{\ast}$ within an arbitrarily user-specified loss bound $\epsilon > 0$.\vspace{0.5mm} \noindent {\bf Lipschitz Continuous Reward Functions.} \label{ch:rewardfunctions} $\rfn(z_t, \Shist{t})\triangleq$ $\rfn_1(z_t) + \rfn_2(z_t) + \rfn_3(\Shist{t})$ where $\rfn_1$, $\rfn_2$, and $\rfn_3$ are user-defined reward functions that satisfy the conditions below:\vspace{-0mm} \squishlisttwo \item $\rfn_{1}(z_t)$ is Lipschitz continuous in $z_t$ with Lipschitz constant $\ell_{1}$. So, $h_{\sigma}(u)\triangleq (\rfn_{1} \ast \mathcal{N}(0, \sigma^2))(u)$ is Lipschitz continuous in $u$ with $\ell_{1}$ where `$\ast$' denotes convolution; \item $\rfn_{2}(z_t)$: Define $g_{\sigma}(u)\triangleq(\rfn_{2} \ast \mathcal{N}(0, \sigma^2))(u)$ such that (a) $g_{\sigma}(u)$ is well-defined for all $u \in \mathbb{R}$, (b) $g_{\sigma}(u)$ can be evaluated in closed form or computed up to an arbitrary precision in reasonable time for all $u \in \mathbb{R}$, and (c) $g_{\sigma}(u)$ is Lipschitz continuous\footnote{\label{const}Unlike $\rfn_{1}$, $\rfn_{2}$ does not need to be Lipschitz continuous (or continuous); it must only be Lipschitz continuous after convolution with any Gaussian kernel.
An example of $\rfn_{2}$ is the unit step function.} in $u$ with Lipschitz constant $\ell_{2}(\sigma)$; \item $\rfn_{3}(\Shist{t})$ only depends on locations $\Shist{t}$ visited/observed by the robot up until time step $t$ and is independent of the realized measurement $z_{t}$. It can be used to represent some sampling or motion costs or explicitly consider exploration by defining it as a function of $\sigma^2_{s_{t+1}|\Shist{t}}$.\vspace{-0mm} \squishend Using the above definition of $\rfn(z_t, \Shist{t})$, the immediate expected reward in \eqref{eq:OptimalValFunDef} evaluates to $ \mathbb{E}[ \rfn(Z_{t+1}, \Shist{t+1})|d_t, s_{t+1} ]$ $= (h_{\sigma_{s_{t+1}|\Shist{t}}} + g_{\sigma_{s_{t+1}|\Shist{t}}})\hspace{-1mm}\left(\mu_{s_{t+1}|d_{t}}\right) +\rfn_3(\Shist{t+1})$, which is Lipschitz continuous in the realized measurements $\Zhist{t}$: \begin{lemma} Let $\alpha(\Shist{t+1}) \triangleq \lVert\Sigma_{s_{t+1}\Shist{t}}\Gamma_{\Shist{t}\Shist{t}}^{-1}\rVert$ and $d_{t}'\triangleq\langle \Shist{t}, \Zhist{t}' \rangle$. \noindent Then, $ \displaystyle\left|\mathbb{E}[ \rfn(\hspace{-0.1mm}Z_{t+1},\hspace{-0.5mm} \Shist{t+1})|d_t,\hspace{-0.5mm} s_{t+1} ]\hspace{-0.8mm} - \hspace{-0.8mm}\mathbb{E}[ \rfn(\hspace{-0.1mm}Z_{t+1}, \hspace{-0.5mm}\Shist{t+1})|d'_t,\hspace{-0.5mm}s_{t+1} ] \right|$ $\leq \alpha(\Shist{t+1}) \left(\ell_1 + \ell_2(\sigma_{s_{t+1}|\Shist{t}})\right) \lVert\Zhist{t} - \Zhist{t}'\rVert \ .$ \label{rewlc} \end{lemma} Its proof is in\if\myproof1 Appendix~\ref{geeez2}. \fi\if\myproof0 \cite{AA16}. \fi Lemma~\ref{rewlc} will be used to prove the Lipschitz continuity of $V^*_t$ in \eqref{eq:OptimalValFunDef} later. Before doing this, let us consider how the Lipschitz continuous reward functions defined above can unify some AL and BO criteria discussed in Section~\ref{sec:intro} and be used for defining new tasks/problems.\vspace{0.5mm} \noindent \textbf{Active learning/sensing (AL).} Setting $\rfn(z_{t+1}, \Shist{t+1})=R_3(\Shist{t+1})=0.5\log(2 \pi e\sigma^2_{s_{t+1}|\Shist{t}})$ yields the well-known nonmyopic AL algorithm called \emph{maximum entropy sampling} (MES) \cite{Shewry87}, which plans/decides to observe locations with maximum entropy, thereby minimizing the posterior entropy remaining in the unobserved areas of the field. Since $\rfn(z_{t+1}, \Shist{t+1})$ is independent of $z_{t+1}$, the expectations in \eqref{eq:OptimalValFunDef} go away, thus making MES non-adaptive and hence a straightforward search algorithm not plagued by the issue of an uncountable set of candidate measurements. As such, we will not focus on such a degenerate case. This degeneracy vanishes when the environmental field is instead a realization of a log-Gaussian process. Then, MES becomes adaptive \cite{LowICAPS09} and its reward function can be represented by our Lipschitz continuous reward functions: By setting $R_1(z_{t+1})=0$, $R_2$ and $g_{\sigma_{s_{t+1}|\Shist{t}}}$ as identity functions with $\ell_2(\sigma_{s_{t+1}|\Shist{t}})=1$, and $R_3(\Shist{t+1})=0.5\log(2 \pi e\sigma^2_{s_{t+1}|\Shist{t}})$, $\mathbb{E}[ \rfn(Z_{t+1}, \Shist{t+1})|d_t, s_{t+1} ] =\mu_{s_{t+1}| d_t} + 0.5\log(2 \pi e\sigma^2_{s_{t+1}|\Shist{t}})$.
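For concreteness, the following is a minimal sketch (in Python, with hypothetical hyperparameter values for the squared exponential covariance function) of how the posterior quantities $\mu_{s_{t+1}|d_t}$ and $\sigma^2_{s_{t+1}|\Shist{t}}$ underlying such immediate expected rewards could be computed; it merely illustrates the formulas above and is not tied to our implementation.
\begin{verbatim}
import numpy as np

def sq_exp_kernel(A, B, sigma_y=1.0, lengthscale=0.5):
    # Squared exponential covariance between location sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_y ** 2 * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(S, z, s_new, mu=0.0, sigma_n=0.1):
    # Posterior mean and variance of the noisy measurement at s_new
    # given observations d_t = (S, z), mirroring the GPP formulas.
    Gamma = sq_exp_kernel(S, S) + sigma_n ** 2 * np.eye(len(S))
    k_star = sq_exp_kernel(s_new[None, :], S)          # Sigma_{s,S}
    w = np.linalg.solve(Gamma, z - mu)                 # Gamma^{-1}(z - mu)
    post_mean = mu + k_star @ w
    post_var = (sq_exp_kernel(s_new[None, :], s_new[None, :])
                + sigma_n ** 2
                - k_star @ np.linalg.solve(Gamma, k_star.T))
    return post_mean.item(), post_var.item()

# Immediate expected reward of the adaptive MES representation above:
# mu_{s|d} + 0.5 * log(2 * pi * e * sigma^2_{s|S}).
S = np.array([[0.0, 0.0], [1.0, 0.0]])
z = np.array([0.3, -0.1])
m, v = gp_posterior(S, z, np.array([0.5, 0.5]))
print(m + 0.5 * np.log(2 * np.pi * np.e * v))
\end{verbatim}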
\noindent \textbf{Bayesian optimization (BO).} The greedy BO algorithm of \citeauthor{Srinivas10}~\shortcite{Srinivas10} utilizes the UCB selection criterion $\mu_{s_{t+1}| d_t} + \beta\sigma_{s_{t+1}|\Shist{t}}$ ($\beta\geq 0$) to approximately optimize the global BO objective of total field measurements $\sum^{H}_{t=1} z_t$ taken by the robot or, equivalently, minimize its total regret. UCB can be represented by our Lipschitz continuous reward functions: By setting $R_1(z_{t+1})=0$, $R_2$ and $g_{\sigma_{s_{t+1}|\Shist{t}}}$ as identity functions with $\ell_2(\sigma_{s_{t+1}|\Shist{t}})=1$, and $R_3(\Shist{t+1})=\beta\sigma_{s_{t+1}|\Shist{t}}$, $\mathbb{E}[ \rfn(Z_{t+1}, \Shist{t+1})|d_t, s_{t+1} ] =\mu_{s_{t+1}| d_t} + \beta\sigma_{s_{t+1}|\Shist{t}}$. In particular, when $\beta=0$, it can be derived that our GPP policy $\pi^*$ maximizes the \emph{expected} total field measurements taken by the robot, hence optimizing the exact global BO objective of \citeauthor{Srinivas10}~\shortcite{Srinivas10} in the expected sense. So, unlike greedy UCB, our nonmyopic GPP framework does not have to explicitly consider an additional weighted exploration term (i.e., $\beta\sigma_{s_{t+1}|\Shist{t}}$) in its reward function because it can jointly and naturally optimize the exploration-exploitation trade-off, as explained earlier. Nevertheless, if a stronger exploration behavior is desired (e.g., in online planning), then $\beta$ has to be fine-tuned. Different from the nonmyopic BO algorithm of \citeauthor{RamosUAI14}~\shortcite{RamosUAI14} using UCB-based rewards, our proposed nonmyopic $\epsilon$-optimal GPP policy (Section~\ref{eogpp}) does not need to impose the extreme assumption of maximum likelihood observations during planning and, more importantly, provides a performance guarantee, including under the extreme assumption made by nonmyopic UCB. Our GPP framework differs from the nonmyopic BO algorithm of \citeauthor{Osborne09}~\shortcite{Osborne09} in that every selected observation contributes to the total field measurements taken by the robot instead of considering just the expected improvement for the last observation. So, it usually does not have to expend all the given sampling budget to find the global maximum.\vspace{0.5mm} \noindent \textbf{General tasks/problems.} In practice, the necessary reward function can be more complex than the ones specified above that are formed from an identity function of the field measurement. For example, consider the problem of placing wind turbines in optimal locations to maximize the total power production. Though the average wind speed in a region can be modeled by a GP, the power output is not a linear function of the steady-state wind speed. In fact, power production requires a certain minimum speed known as the cut-in speed. After this threshold is met, power output increases and eventually plateaus. Assuming the cut-in speed is $1$, this effect can be modeled with a logarithmic reward function\footnote{In reality, the speed-power relationship is not exactly logarithmic, but this approximation suffices for the purpose of modeling.}: $\rfn(z_{t+1}, \Shist{t+1}) = \rfn_1(z_{t+1})$ gives a value of $\log(z_{t+1})$ if $z_{t+1}>1$, and $0$ otherwise, where $\ell_1=1$. To the best of our knowledge, $h_{\sigma_{s_{t+1}|\Shist{t}}}(u)$ has no closed-form expression. 
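
Even without a closed form, $h_{\sigma}(u)$ can be computed up to arbitrary precision by numerical convolution. The Python sketch below uses Gauss--Hermite quadrature for this purpose; the quadrature scheme and the function names are our own illustrative choices and are not prescribed by the framework.

\begin{verbatim}
import numpy as np

def r1_wind_power(z):
    # Cut-in logarithmic reward from the text: log(z) if z > 1, else 0 (l_1 = 1).
    z = np.asarray(z, dtype=float)
    return np.where(z > 1.0, np.log(np.maximum(z, 1.0)), 0.0)

def h_sigma(u, sigma, n_nodes=64):
    # h_sigma(u) = (R1 * N(0, sigma^2))(u) = E[R1(Z)] with Z ~ N(u, sigma^2),
    # approximated by Gauss-Hermite quadrature (an assumed numerical scheme).
    x, w = np.polynomial.hermite.hermgauss(n_nodes)   # nodes/weights for exp(-x^2)
    z = u + np.sqrt(2.0) * sigma * x
    return float(np.dot(w, r1_wind_power(z)) / np.sqrt(np.pi))

# Expected immediate power-production reward at a candidate location with
# posterior mean 1.5 and posterior standard deviation 0.5 (R2 = R3 = 0 here).
print(h_sigma(1.5, 0.5))
\end{verbatim}

The same routine can be reused for any other $\rfn_1$ satisfying the conditions in Section~\ref{ch:rewardfunctions}.
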
In\if\myproof1 Appendix~\ref{aeg}, \fi\if\myproof0 \cite{AA16}, \fi we present other interesting reward functions like the unit step function\cref{const} and the Gaussian distribution that can be represented by $\rfn(z_{t+1}, \Shist{t+1})$ and used in real-world tasks. Theorem~\ref{th:LipschitzAll} below reveals that $V^*_t(d_t)$ \eqref{eq:OptimalValFunDef} with Lipschitz continuous reward functions is Lipschitz continuous in $\Zhist{t}$ with Lipschitz constant $L_{t}(\Shist{t})$ defined below:\vspace{-1mm} \begin{definition}\label{def:ValFunGenLip} Let $L_{H}(\Shist{H}) \triangleq 0$. For $t=0,\ldots,H-1$, define $ L_{t}(\Shist{t}) \triangleq \max_{s_{t+1} \in \mathcal{A}(s_t)} \alpha(\Shist{t+1}) \left(\ell_1 + \ell_2(\sigma_{s_{t+1}|\Shist{t}})\right) + L_{t+1}(\Shist{t+1})\sqrt{1+\alpha(\Shist{t+1})^2}\ . $\vspace{-1mm} \end{definition} \begin{theorem}[Lipschitz Continuity of $V^*_t$]\label{th:LipschitzAll} For $t=0,\ldots,H$, $|V^*_t(d_t) - V^*_t(d_{t}')| \le L_{t}(\Shist{t}) \lVert\Zhist{t}-\Zhist{t}'\rVert\ .$\vspace{-0.5mm} \end{theorem} Its proof uses Lemma~\ref{rewlc} and is in\if\myproof1 Appendix~\ref{akaka}. \fi\if\myproof0 \cite{AA16}. \fi The result below is a direct consequence of Theorem~\ref{th:LipschitzAll} and will be used to theoretically guarantee the performance of our proposed nonmyopic adaptive $\epsilon$-optimal GPP policy in Section~\ref{eogpp}:\vspace{-0.5mm} \begin{corollary} \label{th:LipschitzSingle} For $t=0,\ldots,H$, $|V^*_{t}(\langle \Shist{t}, \Zhist{t-1} \oplus z_{t} \rangle) - V^*_{t}(\langle \Shist{t}, \Zhist{t-1} \oplus z'_{t} \rangle) | \le L_{t}(\Shist{t})| z_{t} -z_{t}'|$. \vspace{-1.5mm} \end{corollary} \section{$\epsilon$-Optimal GPP ($\epsilon$-GPP)} \label{eogpp} \label{ch:SamplingStrategy} The key idea of constructing our proposed nonmyopic adaptive $\epsilon$-GPP policy is to approximate the expectation terms in \eqref{eq:OptimalValFunDef} at every stage using a form of deterministic sampling, as illustrated in the figure below. Specifically, the measurement space of $p(z_{t+1}|d_t, s_{t+1})$ is first partitioned into $n\geq 2$ intervals $\zeta_0, \ldots, \zeta_{n-1}$ such that intervals $\zeta_1, \ldots, \zeta_{n-2}$ are equally spaced within the bounded gray region $[\mu_{s_{t+1}|d_t} - \sdk\sigma_{s_{t+1}|\Shist{t}}, \mu_{s_{t+1}|d_t} + \sdk\sigma_{s_{t+1}|\Shist{t}}]$ specified by a user-defined width parameter $\tau\geq 0$, while intervals $\zeta_0$ and $\zeta_{n-1}$ span the two infinitely long red tails. Note that $\tau>0$ requires $n>2$ for the partition to be valid. The $n$ sample measurements $z^{0}, \ldots, z^{n-1}$ are then selected by setting $z^0$ as the upper limit of the red interval $\zeta_0$, $z^{n-1}$ as the lower limit of the red interval $\zeta_{n-1}$, and $z^1, \ldots, z^{n-2}$ as the centers of the respective gray intervals $\zeta_1, \ldots, \zeta_{n-2}$. Next, the weights $w^{0}, \ldots, w^{n-1}$ for the corresponding sample measurements $z^0, \ldots, z^{n-1}$ are defined as the areas under their respective intervals $\zeta_0, \ldots, \zeta_{n-1}$ of the Gaussian predictive distribution $p(z_{t+1}|d_t, s_{t+1})$. So, $\sum^{n-1}_{i=0}w^i = 1$. An example of such a partition is given in\if\myproof1 Appendix~\ref{dsampeg}. \fi\if\myproof0 \cite{AA16}. 
\fi The selected sample measurements and their corresponding weights can be exploited for approximating $V^*_t$ with Lipschitz continuous reward functions \eqref{eq:OptimalValFunDef} using the following $H$-stage Bellman equations:\vspace{-1.5mm} \begin{equation} \hspace{-1.8mm} \begin{array}{rl} V_t^{\epsilon}(d_t) \triangleq\hspace{-2.4mm} & \max_{s_{t+1} \in \mathcal{A}(s_{t})} Q_t^{\epsilon}(d_t, s_{t+1})\vspace{0.5mm}\\ Q_t^{\epsilon}(d_t, s_{t+1}) \triangleq\hspace{-2.4mm} & g_{\sigma_{s_{t+1}|\Shist{t}}}\hspace{-1mm}\left(\mu_{s_{t+1}|d_{t}}\right) +\rfn_3(\Shist{t+1})\ + \\ &\displaystyle\sum^{n-1}_{i=0} w^i \hspace{-1mm}\left(R_1(z^{i}) + V_{t+1}^{\epsilon}(\langle \Shist{t+1}, \Zhist{t}\oplus z^{i}\rangle)\right)\vspace{-3mm}\hspace{-5.3mm} \end{array} \label{eq:EpsilonValFunDef} \end{equation} for stages $t = 0,\ldots,H-1$ where $V_H^{\epsilon}(d_H)\triangleq 0$. The resulting induced $\epsilon$-GPP policy $\pi^\epsilon$ jointly and naturally optimizes the exploration-exploitation trade-off in a similar manner as that of the GPP policy $\pi^*$, as explained in Section~\ref{proble}. It is interesting to note that setting $\sdk=0$ yields $z^{0}=\ldots = z^{n-1}=\mu_{s_{t+1}| d_t}$, which is equivalent to selecting a single sample measurement of $\mu_{s_{t+1}| d_t}$ with corresponding weight of $1$. This is identical to the special case of maximum likelihood observations during planning which is the extreme assumption used by nonmyopic UCB \cite{RamosUAI14} for sampling to gain time efficiency.\vspace{0.5mm} \begin{figure} \resizebox{8.4cm}{!}{% \begin{tikzpicture} \draw[scale=1,domain=-5:5,smooth,variable=\x,black, line width=3.0pt] plot ({\x},{4*exp(-\x*\x*0.5*0.2)}) (4,1.7) node[above, black] {$p(z_{t+1}|d_t, s_{t+1})$}; \fill[fill=red] (4,0) -- plot [domain=4:5] ({\x},{4*exp(-\x*\x*0.5*0.2)}) -- (5,0) -- cycle; \fill[fill=red] (-5,0) -- plot [domain=-5:-4] ({\x},{4*exp(-\x*\x*0.5*0.2)}) -- (-3,0) -- cycle; \fill[fill=gray] (-4,0) -- plot [domain=-4:4] ({\x},{4*exp(-\x*\x*0.5*0.2)}) -- (4,0) -- cycle; \foreach \x [evaluate = \x as \xpp using 4*exp(-\x*\x*0.5*0.2)] in {-4,-2.9,-1.65,-0.55, 0.55, 1.65,2.9,4} \draw[dashed, black, line width=1pt] (\x, 0) -- (\x,\xpp ); \draw (3.15,2.54) node[black,above] {$\begin{array}{r} z^{0} \triangleq \mu_{s_{t+1}|d_t}\hspace{-1mm}-\hspace{-0.5mm}\tau\sigma_{s_{t+1}|\mathbf{s}_{t}};\vspace{0mm}\\ z^{n-1} \triangleq \mu_{s_{t+1}|d_t}\hspace{-1mm}+\hspace{-0.5mm}\tau\sigma_{s_{t+1}|\mathbf{s}_{t}};\vspace{0mm}\\ z^{i}\hspace{-1mm} \triangleq\hspace{-0.5mm} z^{0}\hspace{-1mm} + \hspace{-1mm}\frac{i-0.5}{n-2}(z^{n-1}\hspace{-1mm} -\hspace{-1mm} z^{0})\\ \text{for}\ i = 1,\ldots,n-2. \end{array}$}; \draw (-2.7,2.9) node[black,above] {$ \begin{array}{l} w^{i}\hspace{-1mm} \triangleq\hspace{-0.5mm} \Phi(\frac{2i\tau}{n-2}\hspace{-1mm}-\hspace{-0.5mm}\tau) \hspace{-1mm}-\hspace{-1mm} \Phi(\frac{2(i-1)\tau}{n-2}\hspace{-1mm}-\hspace{-0.5mm}\tau)\\ \text{for}\ i = 1,\ldots,n-2;\vspace{0.5mm}\\ w^0 = w^{n-1}\triangleq\Phi(-\tau). 
\end{array} $}; \draw (-4,0) -- ( -4,-0.1) node[below] {$z^{0}$} (-4.5,-0.03) node[above] {$w^0$}; \draw (-3.45,0) -- ( -3.45,-0.1) node[below] {$z^{1}$} ( -3.45,0.35) node[above] {$w^{1}$}; \draw (-2.275,-0.99) node[below] {$\ldots$} (-2.275,-0.56) node[gray, below] {$\ldots$} (-2.275,-0.35) node[below] {$\ldots$} (-2.275,1) node[above] {$\ldots$}; \draw (-1.1,0) -- ( -1.1,-0.1) node[below] {$z^{i\text{-}1}$} ( -1.1,1.6) node[above] {$w^{i\text{-}1}$}; \draw ( 0,0) -- ( 0,-0.1) node[below] {$z^{i}$} ( 0, 1.9) node[above] {$w^{i}$}; \draw ( 1.1,0) -- ( 1.1,-0.1) node[below] {$z^{i\text{+}1}$} ( 1.1,1.6) node[above] {$w^{i\text{+}1}$}; \draw (4,0) -- ( 4,-0.1) node[below] {$z^{n\text{-}1}$} (4.5,-0.03) node[above] {$w^{n\text{-}1}$}; \draw (3.45,0) -- ( 3.45,-0.1) node[below] {$z^{n\text{-}2}$} ( 3.45,0.35) node[above] {$w^{n\text{-}2}$}; \draw (2.275,-0.99) node[below] {$\ldots$} (2.275,-0.56) node[gray, below] {$\ldots$} (2.275,-0.35) node[below] {$\ldots$} (2.275,1) node[above] {$\ldots$}; \draw (4.85,0) node[black,below] {$z_{t+1}$}; \draw[<->, gray, line width =1.5] (-4, -0.7) -- node[black,below] {$\zeta_{1}$} ++ (1.1,0); \draw[<->, gray, line width =1.5] (-1.65, -0.7) -- node[black,below] {$\zeta_{i\text{-}1}$} ++ (1.1,0); \draw[<->, gray, line width =1.5] (-0.55, -0.7) -- node[black,below] {$\zeta_{i}$} ++ (1.1,0); \draw[<->, gray, line width =1.5] (0.55, -0.7) -- node[black,below] {$\zeta_{i\text{+}1}$} ++ (1.1,0); \draw[<->, gray, line width =1.5] (2.9, -0.7) -- node[black,below] {$\zeta_{n\text{-}2}$} ++ (1.1,0); \draw[->, red, line width =1.5] (-5, -0.7) -- node[black,below] {$\zeta_0$} ++ (1,0); \draw[<-, red, line width =1.5] (4.0, -0.7) -- node[black,below] {$\zeta_{n\text{-}1}$} ++ (1,0); \draw[->, black, line width =1] (-5,0) -- (5.13,0); \end{tikzpicture} }% \vspace{-6mm} \label{fig:dsamp} \end{figure} \noindent {\bf Performance Guarantee.} The difficulty in theoretically guaranteeing the performance of our $\epsilon$-GPP policy $\pi^{\epsilon}$ (i.e., relative to that of GPP policy $\pi^*$) lies in analyzing how the values of the width parameter $\tau$ and deterministic sampling size $n$ can be chosen to satisfy the user-specified loss bound $\epsilon$, as discussed below. The first step is to prove that $V_t^{\epsilon}$ in \eqref{eq:EpsilonValFunDef} approximates $V_t^{*}$ in \eqref{eq:OptimalValFunDef} closely for some chosen $\tau$ and $n$ values, which relies on the Lipschitz continuity of $V_t^{*}$ in Corollary~\ref{th:LipschitzSingle}. Define $\Lambda(n, \sdk)$ to be equal to the value of $\sqrt{2/\pi}$ if $n\geq2\wedge \sdk=0$, and value of $\kappa(\sdk)+\eta(n,\sdk)$ if $n>2\wedge \sdk> 0$ where $\kappa(\sdk)\triangleq\sqrt{{2/\pi}} \exp(-0.5{\sdk^2})- 2\sdk\Phi(-\sdk)$, $\eta(n,\sdk)\triangleq{2\sdk}(0.5-\Phi(-\sdk))/({n-2})$, and $\Phi$ is a standard normal CDF.\vspace{-0mm} \begin{theorem} \label{th:MultistageError} Suppose that $\lambda > 0$ is given. For all $d_t$ and $t = 0,\ldots, H$, if\vspace{-1mm} \begin{equation} \label{eq:PartitionIneq} \lambda \ge \Lambda(n, \sdk) \sigma_{s_{t+1}|\Shist{t}}(\ell_1 + L_{t+1}(\Shist{t+1}))\vspace{-1mm} \end{equation} for all $s_{t+1}\in\mathcal{A}(s_t)$, then $|V_t^{\epsilon}(d_t) - V_t^{*}(d_t)| \le \lambda (H-t)\ .$\vspace{-0mm} \end{theorem} Its proof uses Corollary~\ref{th:LipschitzSingle} and is given in\if\myproof1 Appendix~\ref{jordan}. \fi\if\myproof0 \cite{AA16}. \fi \noindent \emph{Remark} $1$. 
From Theorem~\ref{th:MultistageError}, a tighter bound on the error $|V_t^{\epsilon}(d_t) - V_t^{*}(d_t)|$ can be achieved by decreasing the sampling budget of $H$ locations\footnote{\label{change}This changes $\epsilon$-GPP by reducing its planning horizon though.} and increasing the deterministic sampling size $n$; increasing $n$ reduces $\eta(n,\sdk)$ and hence $\Lambda(n,\sdk)$, which allows $\lambda$ to be reduced as well. The width parameter $\tau$ has a mixed effect on this error bound: Note that $\kappa(\sdk)$ ($\eta(n,\sdk)$) is proportional to some upper bound on the error incurred by the extreme sample measurements $z^0$ and $z^{n-1}$ ($z^1,\ldots,z^{n-2}$), as shown in\if\myproof1 Appendix~\ref{jordan}. \fi\if\myproof0 \cite{AA16}. \fi Increasing $\tau$ reduces $\kappa(\sdk)$ but unfortunately raises $\eta(n,\sdk)$. So, in order to reduce $\Lambda(n,\sdk)$ further by increasing $\tau$, it has to be complemented by raising $n$ fast enough to keep $\eta(n,\sdk)$ from increasing. This allows $\lambda$ to be reduced further as well. \noindent \emph{Remark} $2$. A feasible choice of $\tau$ and $n$ satisfying \eqref{eq:PartitionIneq} can be expressed analytically in terms of the given $\lambda$ and hence computed prior to planning, as shown in\if\myproof1 Appendix~\ref{yayat}. \fi\if\myproof0 \cite{AA16}. \fi \noindent \emph{Remark} $3$. $\sigma_{s_{t+1}|\Shist{t}}$ and $L_{t+1}(\Shist{t+1})$ for all $\Shist{t+1}$ and $t=0,\ldots,H-1$ can be computed prior to planning as they depend on $\Shist{0}$ and all reachable locations from $s_0$ but not on their measurements. Using Theorem~\ref{th:MultistageError}, the next step is to bound the performance loss of our $\epsilon$-GPP policy $\pi^{\epsilon}$ relative to that of GPP policy $\pi^*$, that is, policy $\pi^{\epsilon}$ is $\epsilon$-optimal:\vspace{-0mm} \begin{theorem}\label{th:PolicyLoss} Given the user-specified loss bound $\epsilon>0$, $V_0^*(d_0)-V_0^{\pi^{\epsilon}}(d_0) \le \epsilon$ by substituting $\lambda=\epsilon/( H(H+1))$ into the choice of $\tau$ and $n$ stated in Remark $2$ above.\vspace{-0mm} \end{theorem} Its proof is in\if\myproof1 Appendix~\ref{hender}. \fi\if\myproof0 \cite{AA16}. \fi It can be observed from Theorem~\ref{th:PolicyLoss} that a tighter bound $\epsilon$ on the error $V_0^*(d_0)-V_0^{\pi^{\epsilon}}(d_0)$ can be achieved by decreasing the sampling budget of $H$ locations\cref{change} and increasing the deterministic sampling size $n$. The effect of width parameter $\tau$ on this error bound $\epsilon$ is the same as that on the error bound of $|V_t^{\epsilon}(d_t) - V_t^{*}(d_t)|$, as explained in Remark $1$ above. \vspace{1mm} \noindent {\bf Anytime $\epsilon$-GPP.} Unlike GPP policy $\pi^*$, our $\epsilon$-GPP policy $\pi^{\epsilon}$ can be derived exactly since its incurred time is independent of the size of the uncountable set of candidate measurements. However, expanding the entire search tree of $\epsilon$-GPP \eqref{eq:EpsilonValFunDef} incurs time containing a $\mathcal{O}(n^H)$ term and is not always necessary to achieve $\epsilon$-optimality in practice. To mitigate this computational difficulty\footnote{\label{sucks}The value of $n$ is a bigger computational issue than that of $H$ when $\epsilon$ is small and in online planning.}, we propose an anytime variant of $\epsilon$-GPP that can produce a good policy fast and improve its approximation quality over time, as briefly discussed here and detailed with the pseudocode in\if\myproof1 Appendix~\ref{aegpp}. \fi\if\myproof0 \cite{AA16}. 
\fi The key intuition is to expand the sub-trees rooted at ``promising'' nodes with the highest weighted uncertainty of their corresponding values $V^{*}_t(d_t)$ so as to improve their estimates. To represent such uncertainty at each encountered node, upper \& lower heuristic bounds (respectively, $\obar{V}_{t}^{*}(d_t)$ and $\ubar{V}_{t}^{*}(d_t)$) are maintained, like in \cite{Simmons06}. A partial construction of the entire tree is maintained and expanded incrementally in each iteration of anytime $\epsilon$-GPP that incurs linear time in $n$ and comprises $3$ steps: \noindent \textbf{Node selection.} Traverse down the partially constructed tree by repeatedly selecting nodes with largest difference between their upper and lower bounds (i.e., uncertainty) discounted by weight $w^{i^*}$ of its preceding sample measurement $z^{i^*}$ until an unexpanded node, denoted by $d_t$, is reached. \noindent \textbf{Expand tree.} Construct a ``minimal'' sub-tree rooted at node $d_t$ by sampling all possible next locations and only their median sample measurements $z^{\bar{i}}$ recursively up to full height $H\hspace{-0.7mm}$. \noindent \textbf{Backpropagation.} Backpropagate bounds from the leaves of the newly constructed sub-tree to node $d_t$, during which the refined bounds of expanded nodes are used to inform the bounds of unexpanded siblings by exploiting the Lipschitz continuity of $V^*_t$ (Corollary~\ref{th:LipschitzSingle}), as explained in\if\myproof1 Appendix~\ref{aegpp}. \fi\if\myproof0 \cite{AA16}. \fi Backpropagate bounds to the root of the partially constructed tree in a similar manner. By using the lower heuristic bound to produce our anytime $\epsilon$-GPP policy, its performance loss relative to that of GPP policy $\pi^*$ can be bounded, as proven in\if\myproof1 Appendix~\ref{aegpp}. \fi\if\myproof0 \cite{AA16}. \fi\vspace{-2.5mm} \section{Experiments and Discussion}\vspace{-0.5mm} \label{expt} This section empirically evaluates the online planning performance and time efficiency of our $\epsilon$-GPP policy $\pi^{\epsilon}$ and its anytime variant under limited sampling budget in an energy harvesting task on a simulated wind speed field and in BO on simulated plankton density (chl-a) field and real-world log-potassium (lg-K) concentration (mg~l$^{-1}$) field\if\myproof1 (Appendix~\ref{treesize2}) \fi\if\myproof0 \cite{AA16} \fi of Broom's Barn farm \cite{Webster01}. Each simulated (real-world lg-K) field is spatially distributed over a $0.95$~km by $0.95$~km ($520$~m by $440$~m) region discretized into a $20\times 20$ ($14\times 12$) grid of sampling locations. These fields are assumed to be realizations of GPs. The wind speed (chl-a) field is simulated using hyperparameters $\mu_s=0$,\footnote{Its actual prior mean is not zero; we have applied zero-mean GP to $Y_s-\mu_s$ for simplicity.} $l_1=l_2=0.2236$ ($0.2$)~km, $\sigma_{n}^2=10^{-5}$, and $\sigma_{y}^2=1$. The hyperparameters $\mu_s=3.26$, $l_1=42.8$~m, $l_2=103.6$~m, $\sigma^2_{n}=0.0222$, and $\sigma^2_{y}=0.057$ of lg-K field are learned using maximum likelihood estimation \cite{Rasmussen06}. The robot's initial starting location is near to the center of each simulated field and randomly selected for lg-K field. It can move to any of its $4$ adjacent grid locations at each time step and is tasked to maximize its total rewards over $20$ time steps (i.e., sampling budget of $20$ locations). 
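
To make the simulation setup above reproducible in spirit, the Python sketch below draws one such field realization on a $20\times 20$ grid with the wind-speed hyperparameters. The squared-exponential form of the covariance with per-axis length-scales is an assumption of ours, as are the function and variable names.

\begin{verbatim}
import numpy as np

def simulate_gp_field(n=20, extent_km=0.95, l1=0.2236, l2=0.2236,
                      sig_y2=1.0, sig_n2=1e-5, seed=0):
    # One realization of a zero-mean GP field on an n x n grid (kernel form assumed).
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, extent_km, n)
    X = np.array([(a, b) for a in xs for b in xs])      # grid sampling locations
    d1 = (X[:, 0:1] - X[:, 0:1].T) / l1
    d2 = (X[:, 1:2] - X[:, 1:2].T) / l2
    K = sig_y2 * np.exp(-0.5 * (d1 ** 2 + d2 ** 2))     # signal covariance
    cov = K + sig_n2 * np.eye(n * n)                    # plus noise variance
    z = rng.multivariate_normal(np.zeros(n * n), cov)   # measurement realization
    return X, z.reshape(n, n)

locations, field = simulate_gp_field()
print(field.shape, float(field.min()), float(field.max()))
\end{verbatim}

A planner can then be evaluated on such a realization by restricting moves to the $4$ adjacent grid cells and accumulating rewards over the $20$-step budget, mirroring the setting described above.
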
In BO, the performances of our $\epsilon$-GPP policy $\pi^{\epsilon}$ and its anytime variant are compared with that of state-of-the-art \emph{nonmyopic UCB} \cite{RamosUAI14} and \emph{greedy PI, EI, UCB} \cite{Brochu10,Srinivas10}. Three performance metrics are used: (a) Total rewards achieved over the evolved time steps (i.e., higher total rewards imply less total regret in BO (Section~\ref{gppfram})), (b) maximum reward achieved during experiment, and (c) search tree size in terms of no. of nodes (i.e., larger tree size implies higher incurred time). All experiments are run on a Linux machine with Intel Core i$5$ at $1.7$~GHz.\vspace{0.5mm} \noindent {\bf Energy Harvesting Task on Simulated Wind Speed Field.} A robotic rover equipped with a wind turbine is tasked to harvest energy/power from the wind while exploring a polar region \cite{Chen14}. It is driven by the logarithmic reward function described under `General tasks/problems' in Section~\ref{ch:rewardfunctions}. \begin{table} {\setlength{\tabcolsep}{0em} \begin{tabular}{p{\textwidth}} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={left,top},capbesidewidth=2.66cm}}]{figure}[\FBwidth] {\caption{Graphs of total rewards and tree size of $\epsilon$-GPP policies with (a-b) online planning horizon $H'=4$ and varying $\epsilon$ and (c-d) varying $H'=1,2,3,4$ (respectively, $\epsilon=0.002, 0.06, 0.8, 5$)\vspace{-1mm} }\label{fig:gpp_synthetic_log}} {\hspace{-9mm} \begin{tabular}{cc} \hspace{-0mm}\includegraphics[width=2.77cm]{log005CompareEpsilon.pdf} & \hspace{-0mm}\includegraphics[width=2.77cm]{log005CompareEpsilon_cost.pdf}\vspace{-2mm}\\ \hspace{2mm}{\scriptsize (a)} & \hspace{4mm}{\scriptsize (b)} \vspace{-0mm}\\ \hspace{-0mm}\includegraphics[width=2.77cm]{log005CompareHeights.pdf} & \hspace{-0mm}\includegraphics[width=2.77cm]{log005CompareHeights_cost.pdf}\vspace{-2mm}\\ \hspace{2mm}{\scriptsize (c)} & \hspace{3mm}{\scriptsize (d)}\vspace{-1mm} \end{tabular} }\vspace{-4mm}\\ {vs. no. of time steps for energy harvesting task. The plot of $\epsilon^*=5$ uses our anytime variant with a maximum tree size of $5 \times 10^4$ nodes while the plot of $\epsilon=250$ effectively assumes maximum likelihood observations during planning like that of nonmyopic UCB \cite{RamosUAI14}.}\vspace{-4.9mm} \end{tabular}} \end{table} Fig.~\ref{fig:gpp_synthetic_log} shows results of performances of our $\epsilon$-GPP policy and its anytime variant averaged over $30$ independent realizations of the wind speed field. It can be observed that the gradients of the achieved total rewards (i.e., power production) increase over time, which indicate a higher obtained reward with an increasing number of time steps as the robot can exploit the environment more effectively with the aid of exploration from previous time steps. The gradients eventually stop increasing when the robot enters a perceived high-reward region. Further exploration is deemed unnecessary as it is unlikely to find another preferable location within $H'$ time steps; so, the robot remains near-stationary for the remaining time steps. It can also be observed that the incurred time is much higher in the first few time steps. This is expected because the posterior variance $\sigma_{s_{t+1}|\Shist{t}}$ decreases with increasing time step $t$, thus requiring a decreasing deterministic sampling size $n$ to satisfy \eqref{eq:PartitionIneq}. Initially, all $\epsilon$-GPP policies achieve similar total rewards as the robots begin from the same starting location. 
After some time, $\epsilon$-GPP policies with lower user-specified loss bound $\epsilon$ and longer online planning horizon $H'$ achieve considerably higher total rewards at the cost of more incurred time. In particular, it can be observed that a robot assuming maximum likelihood observations during planning (i.e., $\epsilon=250$) like that of nonmyopic UCB or using a greedy policy (i.e., $H'=1$) performs poorly very quickly. In the former case (Fig.~\ref{fig:gpp_synthetic_log}a), the gradient of its total rewards stops increasing quite early (i.e., from time step $9$ onwards), which indicates that its perceived local maximum is reached prematurely. Interestingly, it can be observed from Fig.~\ref{fig:gpp_synthetic_log}d that the $\epsilon$-GPP policy with $H'=2$ and $\epsilon=0.06$ incurs more time than that with $H'=3$ and $\epsilon=0.8$ despite the latter achieving higher total rewards. This suggests trading off a tighter loss bound $\epsilon$ for a longer online planning horizon $H'$, especially when $\epsilon$ is so small that it requires a very large $n$ and consequently incurs significantly more time\cref{sucks}.\vspace{0.5mm} \noindent {\bf BO on Real-World Log-Potassium Concentration Field.} An agricultural robot is tasked to find the peak lg-K measurement (i.e., possibly in an over-fertilized area) while exploring the Broom's Barn farm \cite{Webster01}. It is driven by the UCB-based reward function described under `BO' in Section~\ref{ch:rewardfunctions}. Fig.~\ref{fig:gpp_realdata} shows the performance of our $\epsilon$-GPP policy and its anytime variant, nonmyopic UCB (i.e., $\epsilon=25$), and greedy PI, EI, and UCB (i.e., $H'=1$), averaged over $25$ randomly selected initial starting locations of the robot. It can be observed from Figs.~\ref{fig:gpp_realdata}a and~\ref{fig:gpp_realdata}b that the gradients of the achieved total normalized\footnote{To ease interpretation of the results, each reward is normalized by subtracting the prior mean from it.\label{crass}} rewards generally increase over time. In particular, from Fig.~\ref{fig:gpp_realdata}a, nonmyopic UCB assuming maximum likelihood observations during planning obtains much lower total rewards than the other $\epsilon$-GPP policies and the anytime variant after $20$ time steps, and finds a maximum lg-K measurement of $3.62$ that is at least $0.4\sigma_y$ worse. The performance of the anytime variant is comparable to that of our best-performing $\epsilon$-GPP policy with $\epsilon= 3$. From Fig.~\ref{fig:gpp_realdata}b, the greedy policy (i.e., $H'=1$) with $\beta=0$ performs much more poorly than its nonmyopic $\epsilon$-GPP counterparts and finds a maximum lg-K measurement of $3.56$ that is lower than those of greedy PI and EI due to its lack of exploration. By increasing $H'$ to $2$-$4$, our $\epsilon$-GPP policies with $\beta=0$ outperform greedy PI and EI as they can naturally and jointly optimize the exploration-exploitation trade-off. Interestingly, Fig.~\ref{fig:gpp_realdata}c shows that our $\epsilon$-GPP policy with $\beta = 2$ achieves the highest total rewards after $20$ time steps, which indicates the need for a slightly stronger exploration behavior than that with $\beta=0$. This may be explained by the small length-scale (i.e., spatial correlation) of the lg-K field, thus requiring some exploration to find the peak measurement. 
By increasing $H'$ beyond $4$ or with larger spatial correlation\if\myproof1 (Appendix~\ref{treesize1}), \fi\if\myproof0 \cite{AA16}, \fi we expect a diminishing role of the $\beta\sigma_{s_{t+1}|\Shist{t}}$ term. It can also be observed that aggressive exploration (i.e., $\beta\geq 10$) hurts the performance. Results of the tree size (i.e., incurred time) of our $\epsilon$-GPP policy and its anytime variant are in\if\myproof1 Appendix~\ref{treesize2}. \fi\if\myproof0 \cite{AA16}.\vspace{-2.6mm}\fi \begin{figure} \begin{tabular}{ccc} \hspace{-2.3mm} \includegraphics[width=2.7cm]{RealCompareEpsilon_val.pdf} & \hspace{-4mm}\includegraphics[width=2.7cm]{RealCompareHeight_val_EI_PI.pdf} & \hspace{-4mm}\includegraphics[width=2.7cm]{RealCompareBeta_val.pdf}\vspace{-2.5mm}\\ \hspace{-2.5mm}{\scriptsize (a)} & \hspace{-4mm}{\scriptsize (b)} & \hspace{-4mm}{\scriptsize (c)}\vspace{-4.5mm} \end{tabular} \caption{Graphs of total normalized\cref{crass} rewards of $\epsilon$-GPP policies using UCB-based rewards with (a) $H'=4$, $\beta=0$, and varying $\epsilon$, (b) varying $H'=1,2,3,4$ (respectively, $\epsilon=0.002, 0.003, 0.4, 2$) and $\beta=0$, and (c) $H'=4$, $\epsilon=1$, and varying $\beta$ vs. no. of time steps for BO on real-world lg-K field. The plot of $\epsilon^*=1$ uses our anytime variant with a maximum tree size of $3 \times 10^4$ nodes while the plot of $\epsilon=25$ effectively assumes maximum likelihood observations during planning like that of nonmyopic UCB.\vspace{-5mm}} \label{fig:gpp_realdata} \end{figure} \if\myproof1 Due to lack of space, we present additional experimental results for BO on simulated plankton density field in Appendix~\ref{treesize1} that yield similar observations to the above.\fi \section{Conclusion}\vspace{-1mm} This paper describes a novel nonmyopic adaptive $\epsilon$-GPP framework endowed with a general class of Lipschitz continuous reward functions that can unify some AL and BO criteria and be used for defining new tasks/problems. In particular, it can jointly and naturally optimize the exploration-exploitation trade-off. We theoretically guarantee the performances of our $\epsilon$-GPP policy and its anytime variant and empirically demonstrate their effectiveness in BO and an energy harvesting task. For our future work, we plan to scale up $\epsilon$-GPP and its anytime variant for big data using parallelization \cite{LowUAI13,LowAAAI15}, online learning \cite{Xu2014}, and stochastic variational inference \cite{NghiaICML15} and extend them to handle unknown hyperparameters \cite{NghiaICML14}.\vspace{1mm} \noindent {\bf Acknowledgments.} This work was supported by Singapore-MIT Alliance for Research and Technology Subaward Agreement No. $52$ R-$252$-$000$-$550$-$592$.\vspace{-2.6mm} \bibliographystyle{aaai}
\section{Introduction} \label{sec:introduction} Failure management is one of the fundamental instruments that allow network operators to provide communication services that are much more reliable than the individual network components (nodes and links). It allows reacting to failures of network components by reconfiguring the resource allocation so that the surviving network infrastructure can still provide services. Traditionally, failure resilience has been incorporated in distributed protocols at the transport (e.g. SDH) and/or network layer (e.g. MPLS), with the resource optimization pre-computed for a class of possible failures (e.g. single link or node failures) and implemented with signaling mechanisms used to notify failures and activate backup resources. With the introduction of the revolutionary and successful paradigm of Software-defined Networking (SDN), the traditional distributed networking approach is replaced with a centralized network controller able to orchestrate traffic management through the programming of low-level forwarding policies into network nodes (switches) according to simple abstractions of the switching function like that defined in OpenFlow with the match/action flow table \cite{mckeown08}. Even though SDN and OpenFlow provide huge flexibility and a powerful platform for programming any type of innovative network application without the strong constraints of distributed protocols, they can make the implementation of important traditional functions, like failure resilience, neither easy nor efficient, since reaction to events in the network must always involve the central controller (notification of an event and installation of new forwarding rules) with non-negligible delays and signaling overheads. New versions of OpenFlow \cite{of14} have recently introduced a mechanism, namely fast-failover, for allowing quick and local reaction to failures without the need to resort to the central controller. This is obtained by instantiating multiple action buckets for the same flow entry, and applying them according to the status of links (active or failed). However, fast-failover can be used only to define local detour mechanisms when alternative paths are available from the node that detects the failure. Depending on the network topology and the specific failure, local detour paths may not be available or they may be inefficient from the resource allocation point of view. A recent proposal (by some of the authors) \cite{bianchi14,bianchi14b}, named OpenState, has extended the data plane abstraction of OpenFlow to include the possibility for switches to apply different match-action rules depending on states and to make states evolve according to state machines where transitions are triggered by packet-level events. In this paper, we propose a new approach to failure management in SDN which exploits OpenState's ability to react to packet-level events in order to define a fast path restoration mechanism that allows reallocating flows affected by a failure by enabling detours at any convenient node along the primary path. No specific signaling procedure is adopted for triggering detours; rather, the same packets of the data traffic flows are tagged and forwarded back to notify nodes of the failure and to induce a state transition for the activation of pre-computed detours. 
We define optimization models for the computation of backup paths for all possible single node and link failures that consider multiple objectives including link congestion level, distance of the reroute point from the failure detection point, and level of sharing of backup paths by different flows. We show that the MILP (Mixed Integer Linear Programming) formulations proposed are flexible enough to incorporate the optimization of the OpenFlow fast-failover reroutes as a special case and that path computation for all possible failure scenarios can be performed within reasonable time for realistic-size networks with state-of-the-art solvers (CPLEX). The remainder of the paper is organized as follows. In Section~\ref{sec:openstate} we first present an overview of OpenState, and then we describe the proposed failure recovery scheme in Section~\ref{sec:approach}. Related work is reviewed in Section~\ref{sec:related}, and two modeling formulations are presented in Section~\ref{sec:model}. Computational results are discussed in Section~\ref{sec:results}. Conclusions are provided in Section~\ref{sec:conclusion}. \section{OpenState} \label{sec:openstate} The most prominent instance of SDN is OpenFlow, which, by design, focuses on an extreme centralization of the network intelligence at the controller governing switches, which in turn are considered dumb. In OpenFlow, adaptation and reconfiguration of forwarding policies can only be performed by remote controllers, with a clear consequence in terms of overhead and control latency. OpenState is an OpenFlow extension that enables mechanisms for controllers to offload some of their control logic to switches. In OpenState, the programmer is able to define forwarding rules that can autonomously adapt in a stateful fashion on the basis of packet-level events. The motivation behind OpenState is that control tasks that require only switch-local knowledge are unnecessarily handled at the controller, and thus can be offloaded to switches, while maintaining centralized control for those tasks that require global knowledge of the network. OpenState has been designed as an extension (superset) of OpenFlow. In OpenState the usual OpenFlow match/action flow table is preceded by a state table that contains the so-called ``flow-states''. First, packets are matched against the state table using only a portion of the packet header (a programmable lookup-key): a state lookup operation is performed and a state label (similar to OpenFlow's metadata) is appended to the packet headers. A \texttt{DEFAULT} state is returned if no row is matched in the state table. Packets are then sent to the flow-table, where the usual OpenFlow processing is performed, while a new \texttt{SET\_STATE} action is available to insert or rewrite rows of the state table with arbitrary values. Figure~\ref{fig:os-stateful-stage} illustrates the packet flow in the two tables. OpenState also allows matching packets using ``global-states'', so called because, in contrast to flow-states, these are globally valid for the whole switch (datapath) and not just for a given flow. By using flow-states and global-states a programmer can define flow entries that apply to different scenarios, and by using state transition primitives she can control how those scenarios should evolve. OpenState has been shown to bring tangible benefits in the implementation of fundamental network applications \cite{bianchi14b}. 
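
To make the two-table abstraction concrete, the following Python sketch is a much simplified software model of the lookup sequence described above (state lookup with a \texttt{DEFAULT} fallback, flow match on header plus state label, and the \texttt{SET\_STATE} action). It is purely illustrative: the data structures and method names are ours and do not reproduce the actual OpenState switch implementation or its API.

\begin{verbatim}
DEFAULT = "DEFAULT"

class StatefulStage:
    # Simplified model: a state table keyed by a programmable lookup-key,
    # followed by a flow table matched on (packet, flow-state).
    def __init__(self, flow_table, lookup_key):
        self.state_table = {}          # lookup-key value -> flow-state label
        self.flow_table = flow_table   # ordered list of (match_fn, actions)
        self.lookup_key = lookup_key   # function extracting the key from a packet

    def process(self, pkt):
        key = self.lookup_key(pkt)
        state = self.state_table.get(key, DEFAULT)   # state lookup
        for match, actions in self.flow_table:
            if match(pkt, state):                    # flow match, extended with the state
                out_port = None
                for kind, arg in actions:
                    if kind == "SET_STATE":          # insert/rewrite a state-table row
                        self.state_table[key] = arg
                    elif kind == "OUTPUT":
                        out_port = arg
                return out_port
        return None                                  # table miss
\end{verbatim}

In this toy model, a flow entry whose actions contain a \texttt{SET\_STATE} tuple changes how subsequent packets with the same lookup-key are matched, which is the kind of switch-local adaptation exploited by the failure recovery scheme of the next section.
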
An open-source implementation of an OpenState controller and switch can be found at \cite{openstatehomepage}, along with a modified version of Mininet and a few application examples. \begin{figure} \centering \includegraphics[width=\columnwidth]{os-stateful-stage} \caption{Simplified packet flow in OpenState.} \label{fig:os-stateful-stage} \vspace{-5mm} \end{figure} \section{Proposed Approach} \label{sec:approach} The approach we take is similar to that used in crankback signaling \cite{rfc4920}. In the context of end-to-end QoS in MPLS and GMPLS with RSVP-TE, when a connection or flow setup fails because of a blocked link or node, crankback is a signaling technique in which a notification of the failure is backtracked along the flow path, from the upstream node that faces the blockage up to the first node (called ``repair point'') that can determine an alternative path to the destination node. Our solution is based on the same idea, but with the major difference that, upon link or node failure, the same data packets, and not a notification, can be sent back on their original path. We distinguish two situations: (i) the node that detects the failure is able to reroute the demand, and (ii) the packet must be forwarded back on its primary path until a convenient reroute node is encountered. In the first case, solutions like OpenFlow's fast-failover already guarantee almost instantaneous protection switching without controller intervention, while in the second case it would be impracticable to signal the failure to other nodes without the intervention of the controller. The novelty of our approach is given by the fact that, in the second case, a crankback approach is performed using the same data packets, which are first tagged (e.g. with an MPLS label containing information on the failure event) and then sent back through the primary path. A reroute node that receives the tagged packet will be able to respond to the failure by rerouting the tagged packets and by enabling a detour for all subsequent packets. That said, only the first packets of the flow are sent back from the detect node. As soon as the first tagged packet is processed by the reroute node, a state transition is performed in the OpenState switch, and all subsequent packets coming from the source node will be forwarded on the detour. An example of the mechanism described so far is summarized in Figure~\ref{fig:os-ft-example}. \begin{figure} \centering \includegraphics[width=\columnwidth]{os-ft-example-1-hop} \caption{Example of failure recovery with OpenState: in (1) the upstream node detects the failure, tags the packet and forwards it back. In (2) the reroute node receives the tagged packet, executes a state transition and forwards the packet on the detour. In (3) all the packets received for the considered demand after the state transition will be tagged and forwarded on the detour. Finally in (4), at the end of the detour, the tag is popped and the packet is forwarded on the primary path, towards its destination node.} \label{fig:os-ft-example} \vspace{-5mm} \end{figure} With this approach, flow-states are used to distinguish the forwarding of each traffic demand at each switch. The \texttt{DEFAULT} state implies that the demand can be forwarded towards the next node on the primary path, while other arbitrary states are used to describe the specific failure that can affect the demand, so that the same reroute nodes can react differently according to the specific failure event. 
Global-states are instead used to describe the operational status of switch ports (up or down). In this case global-states are completely equivalent to ``port liveness'' states used by the OpenFlow fast-failover feature. Our proposal is currently independent of the way failures are detected, because the detection mechanism does not influence the modeling aspect of the solution. We assume it can be implemented either via Loss Of Signal (LOS) or Bidirectional Forwarding Detection (BFD) \cite{rfc5880} mechanisms. In both cases, as soon as the state of the failed port is updated, our solution guarantees instantaneous reaction with ideally zero packet-loss. \section{Related Work} \label{sec:related} Failure management in SDN is a topic that has already been explored by the research community. In \cite{staessens11} the authors analyze the case of restoration for OpenFlow networks, showing how hard it is to achieve fast ($<50$ms) recovery times in large networks. Restoration is also taken into consideration in \cite{sharma11}, where the controller is in charge of monitoring link status on the network and, in case of failure, computes a new path for the affected demand and replaces or deletes flow entries in switches accordingly. In \cite{kempf12} an end-to-end path protection scheme is proposed: OpenFlow 1.1 is extended by implementing in the switches a monitoring function that allows reducing the processing load on the controller. Such a function is used in conjunction with the OpenFlow fast-failover feature, thus allowing nodes to autonomously react to failures by switching to a precomputed end-to-end backup path. In \cite{sgambelluri13} a segment protection mechanism is proposed only for the case of link failure. Backup paths are pre-installed, and OpenFlow is extended to enable switches to locally react to connected failed links. Another way to reduce the load at the controller is presented in \cite{lee14}. The authors propose a centralized monitoring scheme and a model to reduce the number of monitoring iterations that the controller must perform in order to check all links. A completely different and creative approach is proposed in \cite{borokhovich14}, where classic graph search algorithms are used to implement a solution based on the OpenFlow fast-failover scheme: backup paths are not known in advance, but nodes implement an algorithm to randomly try new routes to reach the destination. \section{Problem Formulation} \label{sec:model} Let $G(N,A)$ be a symmetric directed graph, where $N$ represents the set of network switches and $A$ the set of links between switches. The demands are assumed to be known in advance. We also assume that each demand is routed using a primary path computed as a shortest path subject to link capacity constraints. Our main problem then focuses on the evaluation of backup paths for each demand, for every possible single failure scenario in the primary path. The meaning of a failure scenario will be clarified in the next subsection. For comparison purposes we also present, at the end of the section, a congestion avoidance version of the same backup path problem. \subsection{Backup Path Problem Formulation} \label{sec:bp-problem} In the forthcoming model, we refer to a ``failure detection event'' rather than simply a ``failure state'' to indicate that a failure has been perceived. Moreover, we do not make an a priori distinction between the case of a link failure and that of a node failure: a ``failure detection event'' $f = (n,m)$ may correspond to either. 
The notation simply indicates that node $n$ detects a failure while transmitting to a downstream node $m$. Therefore two distinct situations are considered: (i) a failure on link $(n,m)$ (e.g. a disconnected or truncated cable) and (ii) a scenario where the downstream node $m$ fails, implying the disconnection of all its adjacent links. When evaluating the backup path for a given demand, we always consider the worst-case scenario of a node failure, thus completely avoiding forwarding packets to $m$, except for the case where $m$ is also the destination node of the considered demand ($m = t_d$). In such a case, we try to deliver packets to $m$ while avoiding the failed link $(n,m)$. Let us now define the following parameters: \subsection*{Parameters} \begin{description}[\IEEEsetlabelwidth{$u_{ij}^{nm}$}] \item[$D$] set of demands to be routed; \item[$s_d$] source node of demand $d$; \item[$t_d$] destination node of demand $d$; \item[$\beta_{dij}$] is equal to 0 if link $(i,j)$ belongs to the primary path for demand $d$, otherwise 1; \item[$b_d$] requested bandwidth for demand $d$; \item[$c_{ij}$] total capacity of link $(i,j)$; \item[$w_{cap}$] percentage of the link capacity available; \item[$F$] set including all the possible failure detection events $(n,m)$ that can affect at least one primary path; \item[$D^{nm}$] subset of $D$ including all the demands affected by the failure detection event $(n,m)$; \item[$D_1^{nm}$] subset of $D^{nm}$ including all the demands $d$ affected by the failure detection event $(n,m)$, where $m$ is not the destination node of the considered demand and thus $m \neq t_d$; \item[$D_2^{nm}$] subset of $D^{nm}$ including all the demands $d$ affected by the failure detection event $(n,m)$, where $m$ is the destination node of the considered demand and thus $m=t_d$; \item[$L^{m}$] subset of $A$ including all the links incident to node $m$; \item[$u_{ij}^{nm}$] represents the used capacity of link $(i,j)$ when link $(n,m)$ fails. Note that in this parameter we consider only the link capacity allocated for those demands whose primary path includes neither $(m,n)$ nor $(n,m)$; \item[$v_{ij}^{m}$] is the used capacity of link $(i,j)$ in case of failure for node $m$. In this case we consider only the link capacity allocated for those demands that are not affected by a failure of node $m$, in other words, those demands whose primary path does not include any of the links incident to $m$; \item[$p_d^k$] represents the link $(i,j)$ in the $k$-th position of the primary path for demand $d$, where $k=1$ denotes the first link of the primary path starting from node $s_d$; \item[$\lambda^{nm}_d$] is the number of nodes that a packet of demand $d$ traverses on the primary path, before reaching node $n$ of failure detection event $(n,m)$. $\lambda^{nm}_d = 0$ means that the failure has been detected by the first node of the path. \end{description} \subsection*{Decision variables} \begin{description}[\IEEEsetlabelwidth{$y_{dij}^{nm}$}] \item[$y_{dij}^{nm}$] is equal to 1 if link $(i,j)$ belongs to the backup path of demand $d$ in case of failure detection event $(n,m)$, otherwise 0; \item[$h^{nm}_d$] non-negative integer that represents the number of backward hops that a tagged packet of demand $d$ must perform in case of failure detection event $(n,m)$, before reaching the reroute node that will enable the detour. 
When $h^{nm}_d = 0$ we mean that the node $n$ that detected the failure is also the reroute node; \item[$z_{dij}$] equal to 0 if $(i,j)$ is not used by any backup path (for every possible failure) for demand $d$, otherwise 1. \end{description} \subsection*{Objective Function} \vspace{-5mm} \begin{IEEEeqnarray}{lCl} \min & & \sum_{(n,m) \in F}\sum_{d \in D^{nm}} w_{h}h^{nm}_d \nonumber \\ & + & \sum_{(n,m) \in F} \sum_{d \in D^{nm}}\sum_{(i,j) \in A} w_{y}y^{nm}_{dij} \nonumber \\ & + & \sum_{d \in D} \sum_{(i,j) \in A}w_{z} \beta_{dij} z_{dij} \label{eq:bp-obj} \end{IEEEeqnarray} The objective function is composed of three weighted terms. The first minimizes the length of the reverse path that tagged data packets must travel in case of failure. The second minimizes the length of backup paths. The third term minimizes the number of links allocated to the backup paths for a given demand; in other words, we want more backup paths of the same demand to share the same links. By using the three weights $w_h$, $w_{y}$, and $w_{z}$ we are able to characterize the behavior of the objective function in different ways. \subsection*{Link availability constraints} \vspace{-5mm} \begin{IEEEeqnarray}{c} \sum_{(i,j) \in L^m} y^{nm}_{dij}\le 0 \ \ \ \ \forall (n,m) \in F, \forall d \in D_1^{nm} \label{eq:bp-la-cons-generic} \end{IEEEeqnarray} \begin{IEEEeqnarray}{c} y^{nm}_{dnm} + y^{nm}_{dmn}\le 0 \ \ \ \ \forall (n,m) \in F, \forall d \in D_2^{nm} \label{eq:bp-la-cons-terminal} \end{IEEEeqnarray} These constraints disable the use of certain links when evaluating the backup path for a given demand. \subsection*{Link capacity constraints} \vspace{-5mm} \begin{IEEEeqnarray}{c} u^{nm}_{ij} + \sum_{d \in D^{nm}} b_d y^{nm}_{dij} + \sum_{e \in D^{mn}} b_e y^{mn}_{eij} \le w_{cap} c_{ij} \nonumber \\ \forall (n,m) \in F, \forall (i,j) \in A \label{eq:bp-lc-cons-link} \end{IEEEeqnarray} \begin{IEEEeqnarray}{c} v^{m}_{ij} + \sum_{\substack{n \in N: \\ (n,m) \in F}} \sum_{d \in D^{nm}} b_d y^{nm}_{dij} \le w_{cap}c_{ij} \nonumber\\ \forall m \in N, \forall (i,j) \in A \label{eq:bp-lc-cons-node} \end{IEEEeqnarray} The above constraints ensure that for every possible failure, when allocating the backup paths, the link capacity is respected. The first set of constraints is specific for the case of link failure, while the second set is specific for the case of node failure. Because we do not know the exact nature of a failure detection event, we want our solution to be valid (in terms of resource allocation) in case of both link and node failure. \subsection*{Flow conservation constraints} \vspace{-5mm} \begin{IEEEeqnarray}{c} \sum_{\mathclap{\substack{ j \in N: \\ (i,j)\in A}}} y^{nm}_{dij} - \sum_{\mathclap{\substack{ j \in N: \\ (j,i) \in A}}} y^{nm}_{dji} = \left\{ \begin{array}{ll} 1, & \text{if} \ i = s_d; \\ -1, & \text{if} \ i = t_d ; \\ 0, & \text{otherwise.} \end{array} \right. \nonumber \\ \forall i \in N, \forall (n,m) \in F, \forall d \in D^{nm} \label{eq:bp-fc-cons} \end{IEEEeqnarray} These constraints ensure the continuity of backup paths. \subsection*{Cycle avoidance constraints} \vspace{-5mm} \begin{equation} \sum_{\mathclap{\substack{ j \in N: \\ (i,j)\in A}}} y^{nm}_{dij} \le 1 \ \ \ \ \forall i \in N, \forall (n,m) \in F, \forall d \in D^{nm} \label{eq:nocycle} \end{equation} These constraints avoid the creation of cycles in the backup paths. 
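
As a small illustration of how the core routing constraints above can be prototyped, the Python sketch below builds the flow conservation and cycle avoidance constraints for a single demand and a single failure detection event using the open-source PuLP modeler, with a set of banned links standing in for the link availability constraints. The toy instance, the variable names, and the restriction to one $(d,n,m)$ pair are simplifications of ours; the complete formulations in the paper are solved with a general-purpose MILP solver.

\begin{verbatim}
import pulp

def backup_path(nodes, arcs, s_d, t_d, banned_arcs):
    # Sketch: backup path for one demand and one failure detection event.
    prob = pulp.LpProblem("backup_path", pulp.LpMinimize)
    y = {a: pulp.LpVariable("y_%d_%d" % a, cat="Binary") for a in arcs}

    # Minimize backup path length (second term of the paper's objective).
    prob += pulp.lpSum(y.values())

    # Link availability: arcs affected by the failure cannot be used.
    for a in banned_arcs:
        prob += y[a] == 0

    # Flow conservation at every node.
    for i in nodes:
        out_i = pulp.lpSum(y[a] for a in arcs if a[0] == i)
        in_i = pulp.lpSum(y[a] for a in arcs if a[1] == i)
        prob += out_i - in_i == (1 if i == s_d else (-1 if i == t_d else 0))

    # Cycle avoidance: at most one outgoing backup arc per node.
    for i in nodes:
        prob += pulp.lpSum(y[a] for a in arcs if a[0] == i) <= 1

    prob.solve()  # bundled CBC solver by default
    return [a for a in arcs if y[a].value() == 1]

# Toy 4-node ring, demand 0 -> 2, link (0,1) unavailable in both directions.
nodes = [0, 1, 2, 3]
arcs = [(i, j) for i in nodes for j in nodes if abs(i - j) in (1, 3)]
print(backup_path(nodes, arcs, 0, 2, banned_arcs=[(0, 1), (1, 0)]))
\end{verbatim}
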
\subsection*{Reverse path constraints} \vspace{-5mm} \begin{IEEEeqnarray}{c} \sum_{\substack{k=1:\\ (i,j) = p_d^k}}^{\lambda^{nm}_d} (1-y_{dij}^{nm}) \le h^{nm}_d \nonumber \\ \forall (n,m) \in F, \forall d \in D^{nm} : \lambda^{nm}_d\neq 0 \label{eq:reverseBP} \end{IEEEeqnarray} These constraints are needed to evaluate the variable $h^{nm}_d$. \subsection*{Capacity use constraints} \vspace{-5mm} \begin{IEEEeqnarray}{cr} z_{dij} \ge y^{nm}_{dij} \ \ \ \ \forall (i,j) \in A, \forall (n,m) \in F, \forall d \in D^{nm} &\label{eq:capuseBP} \end{IEEEeqnarray} These constraints are needed to evaluate the variable $z_{dij}$. Having reviewed the main backup path formulation, we now present, in the next subsection, a congestion avoidance formulation to be used for comparison purposes. \subsection{Congestion Avoidance Formulation} \label{sec:bp-ca-problem} Let us first define the following additional variables: \begin{description}[\IEEEsetlabelwidth{$D^{nm}_g$}] \item[$\mu_{ij}$] represents the maximum capacity used on link $(i,j)$ w.r.t. all possible failure detection events; \item[$\phi_{ij}$] represents the cost of using link $(i,j)$ when the capacity used is $\mu_{ij}$. \end{description} The problem can then be formulated as follows. \subsection*{Objective function} \begin{equation} \min \sum_{(i,j) \in A} \phi_{ij} \label{eq:ca-obj-func} \end{equation} This new objective function is a classical non-linear congestion-related optimization function that aims at minimizing the load on each link. As we will later see, the function will be linearized in order to treat the integer problem. \subsection*{Link capacity constraints} Previous constraints (\ref{eq:bp-la-cons-generic}), (\ref{eq:bp-la-cons-terminal}) and (\ref{eq:bp-fc-cons}) are maintained, while link capacity constraints (\ref{eq:bp-lc-cons-link}) and (\ref{eq:bp-lc-cons-node}) are substituted by the following: \begin{IEEEeqnarray} {ccr} u^{nm}_{ij} + \sum_{d \in D^{nm}} b_d y^{nm}_{dij} + \sum_{e \in D^{mn}} b_e y^{mn}_{eij} \le \mu_{ij} \nonumber\\ \forall (n,m) \in F, \forall (i,j) \in A \label{eq:ca-lc-cons-link} \end{IEEEeqnarray} \begin{IEEEeqnarray}{cr} v^{m}_{ij} + \sum_{\substack{n \in N: \\ (n,m) \in F}} \sum_{d \in D^{nm}} b_d y^{nm}_{dij} \le \mu_{ij} \nonumber\\ \forall m \in N, \forall (i,j) \in A \label{eq:ca-lc-cons-node} \end{IEEEeqnarray} \begin{equation} \mu_{ij} \le w_{cap}c_{ij} \ \ \ \ \forall (i,j) \in A \label{eq:ca-lc-cons} \end{equation} (\ref{eq:ca-lc-cons-link}) and (\ref{eq:ca-lc-cons-node}) evaluate the maximum load on link $(i,j)$ for all considered failure detection events $(n,m)$, while (\ref{eq:ca-lc-cons}) stipulates that the link capacity must be respected even for this maximum value. 
\subsection*{Linearization constraints} Given that $\phi_{ij}$ in (\ref{eq:ca-obj-func}) is a non-linear performance function, it should be linearized by the following constraints: \begin{IEEEeqnarray} {clr} & \phi_{ij} \ge \frac{\mu_{ij}}{w_{cap}c_{ij}} & \ \ \ \ \forall (i,j) \in A \label{eq:cost1}\\ & \phi_{ij} \ge 3 \frac{\mu_{ij}}{w_{cap}c_{ij}}-\frac{2}{3} & \ \ \ \ \forall (i,j) \in A \label{eq:cost2} \\ & \phi_{ij} \ge 10 \frac{\mu_{ij}}{w_{cap}c_{ij}}-\frac{16}{3} & \ \ \ \ \forall (i,j) \in A \label{eq:cost3} \\ & \phi_{ij} \ge 70 \frac{\mu_{ij}}{w_{cap}c_{ij}}-\frac{178}{3} & \ \ \ \ \forall (i,j) \in A \label{eq:cost4} \\ & \phi_{ij} \ge 500 \frac{\mu_{ij}}{w_{cap}c_{ij}}-\frac{1468}{3} & \ \ \ \ \forall (i,j) \in A \label{eq:cost5} \end{IEEEeqnarray} This set of inequalities represents the linearized load cost function shown in Fig.~\ref{fig:load-cost-func}. \begin{figure} \centering \resizebox{0.8\columnwidth}{!}{ \begin{tikzpicture} \begin{axis}[ xmin = 0, xmax = 1, ymin = 0, ymax = 11, xlabel=$\mu_{ij} / (w_{cap}c_{ij})$, ylabel=$\phi_{ij}$ ] \addplot[black, thick] coordinates{ (0,0) (0.333, 0.3333) (0.667, 1.333) (0.9, 3.667) (1,10.667) }; \end{axis} \end{tikzpicture} } \caption{Load cost function $\phi_{ij}$} \label{fig:load-cost-func} \vspace{-5mm} \end{figure} \section{Computational Results} \label{sec:results} \begin{table} \caption{Topologies summary} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \bfseries Topology &$|N|$ &$|A|$ & $|N_{edge}|$ & $|N_{core}|$ &$|D|$ \\ \hline Polska & 12 & 36 & 9 & 3 & 72 \\ Norway & 27 & 102 & 16 & 11 & 240 \\ Fat tree & 20 & 64 & 8 & 12 & 56 \\ \hline \end{tabular} \label{table:topo-summary} \end{table} The model was tested on three different network topologies, portrayed in Figure~\ref{fig:topologies}: two real backbone topologies, namely Polska and Norway, taken from \cite{orlowski10}, and a fat tree, an example of a symmetric topology well known for its degree of fault-resiliency \cite{niranjan09} and widely used in data centers. For each topology, nodes are divided into two sets: edge nodes and core nodes. Edge nodes act as sources and destinations of traffic, while core nodes are only in charge of routing. \begin{figure*} \centering \subfloat[][]{\includegraphics[width=0.27\textwidth]{polska} \label{fig:polska-top}} ~ \subfloat[][]{\includegraphics[width=0.27\textwidth]{norway} \label{fig:norway-top}} ~ \subfloat[][]{\includegraphics[width=0.27\textwidth]{fat-tree} \label{fig:ft-top}} \caption{Network topologies used in test instances: (a) Polska, (b) Norway, and (c) Fat tree}% \label{fig:topologies} \vspace{-5mm} \end{figure*} As mentioned in Section~\ref{sec:model}, one of the inputs of the model is a set of primary paths evaluated as shortest paths for every traffic demand. Once such input was known, backup paths were found by varying the weights $w_h$, $w_{y}$, and $w_{z}$ of objective function (\ref{eq:bp-obj}). Three types of instances were evaluated for comparison purposes: those referring to the backup problem with a given set of weights, those referring to the congestion avoidance formulation, and those referring to a classic end-to-end path protection formulation. 
A summary of such instances is given below: \begin{description}[\IEEEsetlabelwidth{BP$_{111}$}] \item[BP$_{111}$] all three terms of the objective function are taken into account; \item[BP$_{100}$] only the first term is considered, thus the model is forced to find a solution that minimizes the length of the reverse path, converging to a solution where the failure detection node and the reroute node are the same; \item[BP$_{010}$] only the second term is considered, minimizing the length of backup paths from $s_d$ to $t_d$; \item[BP$_{001}$] only the third term is considered, thus trying to minimize the number of links allocated for all backup paths of each demand; \item[BP$_{\text{CA}}$] congestion avoidance formulation of the BP problem, minimizing the maximum load for each link; \item[E2E] classic end-to-end path protection problem formulation. \end{description} The instances were executed assuming two different link capacity settings $c_{ij}$: (i) capacity is set to the minimum value to obtain a feasible solution, and (ii) links are over-provisioned with very high capacity. For each test, the requested bandwidth for each demand is always set to $b_d = 1$, and the available link capacity parameter is fixed to $w_{cap} = 80\%$. The models were formalized and solved to optimality with AMPL-CPLEX, using PCs with an 8-core Intel Core i7 CPU and 8~GB of RAM. For all executions a solution was found in less than 30 seconds, except for the case of BP$_\text{CA}$ evaluated for the Norway topology, where the execution required about ten minutes. The solutions were compared by evaluating the trade-off with respect to the following parameters: \begin{itemize} \item \textbf{Backup path length:} this measure was assessed with respect to the primary path length. A value of 100\% means that the length of the backup path is twice the primary path length, whereas 0\% indicates that the two paths have the same length. \item \textbf{Link capacity occupation:} the percentage of the total link capacity allocated for all primary and backup paths that use the considered link. \item \textbf{Reverse path length:} the portion of the primary path that a tagged packet has to traverse before being rerouted. A value of 100\% indicates that the packet has to go back to the source node of the demand, while 0\% means that the packet is rerouted from the same node that detected the failure. \end{itemize} The complete set of results is shown in Table~\ref{table:comp-results} and in chart form in Figure~\ref{fig:result-charts}. In all instances BP$_{\text{111}}$ offers the best trade-off in terms of backup path length and reverse path length, with no major drawbacks. BP$_{\text{CA}}$ produces better solutions in terms of link capacity occupation, especially when considering instances with minimum capacity $c_{ij}$ (see Figures~\ref{fig:polska-uc}, \ref{fig:norway-uc} and \ref{fig:fattree-uc} for a clearer view). The drawback of using BP$_{\text{CA}}$ is represented by longer backup paths. In fact, for the Norway and Polska topologies, BP$_{\text{CA}}$ produces solutions with the longest backup paths, about twice the primary path length in both cases (Figures~\ref{fig:polska-pl} and \ref{fig:norway-pl}). However, note that in an on-line scenario BP$_{\text{CA}}$ would guarantee more residual capacity and thus a higher probability of accepting new traffic demands. Concerning the reverse path length, the best solution is obtained with configuration BP$_{100}$ (Figures~\ref{fig:polska-rp}, \ref{fig:norway-rp}, \ref{fig:fattree-rp}). 
The drawback in this case is longer backup paths, about double the primary path length. It is interesting to note that for the fat tree topology with $c_{ij}=100$ (Figure~\ref{fig:fattree-rp}) BP$_{100}$ returns a solution with reverse path length equal to 0\%. This is worth mentioning because such a solution could be implemented with OpenFlow fast-failover, where the detection node and the reroute node are always the same. Unfortunately such a solution is not always feasible, as it strongly depends on topology and capacity constraints. Indeed, for all the other cases, BP$_{100}$ is unable to provide a solution with 0\% reverse path length. This result shows how our solution based on OpenState, which is able to handle reverse paths, guarantees a higher degree of fault-resiliency than a solution based on OpenFlow fast-failover. It is also interesting to note that for the Norway topology the given set of primary paths admits no feasible solution for the E2E model. This is because, in the classic formulation of E2E path protection, primary and backup paths must be computed at the same time, precisely to avoid the situation where it is impossible to find a completely disjoint backup path for a given primary path. This case shows the flexibility of our approach, which always provides a feasible solution. Finally, it is interesting to note that for the fat tree topology the results obtained with BP$_{111}$ are the same as those of the E2E model: the backup path length is always equal to the primary path length, and the reverse path length is equal to 100\%. This means that in case of failure packets will always be rerouted from the source node of the demand. In this case a solution adopting OpenState would guarantee less disruption, because nodes would be able to automatically switch to the backup path, whereas OpenFlow would require forwarding packets to the controller, which enables the backup path at the source node by installing the respective forwarding rules.
\begin{table*}[t] \centering \caption{Computational results} \begin{tabular}{|c|c|ccc|ccc|ccc|} \hline \bfseries Instance & \bfseries Model & \multicolumn{3}{c|}{\bfseries Backup path length} & \multicolumn{3}{c|}{\bfseries Link capacity occupation} & \multicolumn{3}{c|}{\bfseries Reverse path length } \\ && \bfseries \textit{min} & \textit{max} & \textit{avg (var)} & \textit{min} & \textit{max} & \textit{avg (var)} & \bfseries\textit{min} & \textit{max} & \textit{avg (var)} \\ \hline\hline & BP$_{111}$ &0\% &300\%& 48\% (61\%)& 29\%&79\%&68\% (10\%)&0\%&100\%&36\% (41\%) \\ & BP$_{100}$ & 0\% & 900\% & 80\% (103\%)&43\%&79\%&69\% (9\%) & 0\% & 100\% &6\% (19\%)\\ \bfseries Polska& BP$_{010}$ & 0\% & 300\% & 47\% (61\%)&43\%&79\%&68\% (9\%) & 0\% & 100\% &50\% (45\%)\\ \bfseries$ c_{ij} = 14, \forall (i,j) \in A$& BP$_{001}$ & 0\% & 300\% & 52\% (60\%)&43\%&79\%&64\% (12\%) & 0\% & 100\% &92\% (24\%)\\ & BP$_{\text{CA}}$ & 0\% & 700\% & 103\% (123\%)&7\%&79\%&54\% (20\%) & 0\% & 100\% &75\% (43\%)\\ & E2E & 0\% & 300\% & 85\% (75\%)&29\%&79\%&64\% (13\%) & 100\% & 100\% &100\% (0\%)\\ \hline & BP$_{111}$ & 0\% &300\%& 48\% (61\%)& 4\%&12\%&9\% (2\%)&0\%&100\%&43\% (45\%) \\ & BP$_{100}$ & 0\% & 600\% & 105\% (118\%)&5\%&16\%&10\% (2\%) & 0\% & 100\% &4\% (16\%)\\ \bfseries Polska& BP$_{010}$ & 0\% & 300\% & 47\% (61\%)&6\%&12\%&9\% (1\%) & 0\% & 100\% &69\% (43\%)\\ \bfseries $ c_{ij} = 100, \forall (i,j) \in A$ & BP$_{001}$ & 0\% & 300\% & 50\% (61\%)&4\%&11\%&9\% (2\%) & 0\% & 100\% &97\% (16\%)\\ & BP$_{\text{CA}}$ & 0\% & 700\% & 103\% (136\%)&2\%&11\%&7\% (3\%) & 0\% & 100\% &81\% (39\%)\\ & E2E & 0\% & 300\% & 79\% (77\%)&3\%&12\%&9\% (2\%) & 100\% & 100\% &100\% (0\%)\\ \hline \hline & BP$_{111}$ & 0\% &500\%& 32\% (55\%)& 3\%&80\%&59\% (20\%)&0\%&100\%&42\% (43\%) \\ & BP$_{100}$ & 0\% & 900\% & 79\% (98\%)&17\%&80\%&61\% (18\%) & 0\% & 100\% &15\% (31\%)\\ \bfseries Norway & BP$_{010}$ & 0\% & 500\% & 29\% (53\%)&7\%&80\%&58\% (20\%) & 0\% & 100\% &57\% (42\%)\\ \bfseries $ c_{ij} = 30, \forall (i,j) \in A$& BP$_{001}$ & 0\% & 500\% & 40\% (54\%)&7\%&80\%&53\% (20\%) & 0\% & 100\% &91\% (25\%)\\ & BP$_{\text{CA}}$ & 0\% & 1600\% & 99\% (137\%)&0\%&80\%&45\% (25\%) & 0\% & 100\% &61\% (49\%)\\ &E2E & - & - & - & - & - & - & - & - & - \\ \hline & BP$_{111}$ & 0\% &500\%& 29\% (51\%)& 0\%&12\%&6\% (3\%)&0\%&100\%&31\% (39\%) \\ & BP$_{100}$ & 0\% & 1400\% & 94\% (131\%)&1\%&14\%&7\% (3\%) & 0\% & 100\% &4\% (17\%)\\ \bfseries Norway & BP$_{010}$ & 0\% & 500\% & 27\% (52\%)&0\%&12\%&6\% (3\%) & 0\% & 100\% &59\% (42\%)\\ \bfseries $ c_{ij} = 300, \forall (i,j) \in A$& BP$_{001}$ & 0\% & 500\% & 36\% (53\%)&0\%&12\%&5\% (3\%) & 0\% & 100\% &93\% (23\%)\\ & BP$_{\text{CA}}$ & 0\% & 1400\% & 107\% (138\%)&1\%&10\%&4\% (3\%) & 0\% & 100\% &61\% (49\%)\\ &E2E & - & - & - & - & - & - & - & - & - \\ \hline \hline & BP$_{111}$ & 0\% &0\%& 0\% (0\%)& 15\%&77\% &59\% (13\%)&100\%&100\%&100\% (0\%) \\ & BP$_{100}$ & 0\% & 500\% & 67\% (70\%)&31\%&77\%&57\% (13\%) & 0\% & 100\% &4\% (13\%)\\ \bfseries Fat tree& BP$_{010}$ & 0\% & 0\% & 0\% (0\%)&23\%&77\%&52\% (13\%) & 0\% & 100\% &97\% (18\%)\\ \bfseries $ c_{ij} = 13, \forall (i,j) \in A$ & BP$_{001}$ & 0\% & 0\% & 0\% (0\%)&15\%&77\%&50\% (14\%) & 0\% & 100\% &100\% (0\%)\\ & BP$_{\text{CA}}$ & 0\% & 150\% & 103\% (128\%)&0\%&77\%&50\% (15\%) & 0\% & 100\% &85\% (35\%)\\ & E2E & 0\% & 0\% & 0\% (0\%)&15\%&77\%&50\% (15\%) & 100\% & 100\% &100\% (0\%)\\ \hline & BP$_{111}$ & 0\% &0\%& 0\% (0\%)& 1\%&11\%&6\% (2\%)&100\%&100\%&100\% (0\%) \\ & 
BP$_{100}$ & 0\% & 400\% & 75\% (75\%)&3\%&12\%&8\% (2\%) & 0\% & 0\% &0\% (0\%)\\ \bfseries Fat tree & BP$_{010}$ & 0\% &0\% & 0\% (0\%)&2\%&12\%&7\% (2\%) & 0\% & 100\% &89\% (31\%)\\ \bfseries $ c_{ij} = 100, \forall (i,j) \in A$ & BP$_{001}$ & 0\% & 0\% & 0\% (0\%)&0\%&12\%&6\% (2\%) & 100\% & 100\% &100\% (0\%)\\ & BP$_{\text{CA}}$ & 0\% & 200\% & 20\% (35\%)&1\%&11\%&6\% (2\%) & 0\% & 100\% &84\% (36\%)\\ & E2E & 0\% & 0\% & 0\% (0\%)&0\%&12\%&6\% (3\%) & 100\% & 100\% &100\% (0\%)\\ \hline \end{tabular} \label{table:comp-results} \end{table*} \begin{figure*} \centering \textbf{Polska}\\ \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, legend style={at={(0.5,-0.20)}, anchor=north,legend columns=-1}, ylabel={\bfseries Backup path length}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot[fill=white] coordinates {(BP$_{111}$,48) (BP$_{100}$,80) (BP$_{010}$,47) (BP$_{001}$,52) (BP$_{CA}$,103) (E2E,85)}; \addplot[fill=gray] coordinates {(BP$_{111}$,48) (BP$_{100}$,105) (BP$_{010}$, 47) (BP$_{001}$,50) (BP$_{CA}$,103) (E2E,79)}; \legend{$c_{ij} = 14$,$c_{ij} = 100$} \end{axis} \end{tikzpicture} } \label{fig:polska-pl}% } \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, enlarge y limits=0, legend style={at={(0.5,-0.20)},anchor=north,legend columns=-1}, ylabel={\bfseries Reverse path length}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot [fill=white]coordinates {(BP$_{111}$,36) (BP$_{100}$,6) (BP$_{010}$,50) (BP$_{001}$,92) (BP$_{CA}$,75) (E2E,100)}; \addplot [fill=gray] coordinates {(BP$_{111}$,43) (BP$_{100}$,4) (BP$_{010}$, 69) (BP$_{001}$,97) (BP$_{CA}$,81) (E2E,100)}; \legend{$c_{ij} = 14$,$c_{ij} = 100$} \end{axis} \end{tikzpicture} } \label{fig:polska-rp}% } \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, enlarge y limits=0, legend style={at={(0.5,-0.20)}, anchor=north,legend columns=-1}, ylabel={\bfseries Link capacity occupation}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot[fill=white] coordinates {(BP$_{111}$,68) (BP$_{100}$,69) (BP$_{010}$,68) (BP$_{001}$,64) (BP$_{CA}$,54) (E2E,64)}; \addplot[fill=gray] coordinates {(BP$_{111}$,9) (BP$_{100}$,10) (BP$_{010}$, 9) (BP$_{001}$,9) (BP$_{CA}$,7) (E2E,9)}; \legend{$c_{ij} = 14$,$c_{ij} = 100$} \end{axis} \end{tikzpicture} } \label{fig:polska-uc}% } \\ \vspace{5mm} \textbf{Norway}\\ \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, legend style={at={(0.5,-0.20)}, anchor=north,legend columns=-1}, ylabel={\bfseries Backup path length}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot[fill=white] coordinates {(BP$_{111}$,32) (BP$_{100}$,79) (BP$_{010}$,29) (BP$_{001}$,40) (BP$_{CA}$,99)}; \addplot[fill=gray] coordinates 
{(BP$_{111}$,29) (BP$_{100}$,94) (BP$_{010}$, 27) (BP$_{001}$,36) (BP$_{CA}$,107)}; \legend{$c_{ij} = 30$,$c_{ij} = 300$} \end{axis} \end{tikzpicture} } \label{fig:norway-pl}% } \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, enlarge y limits=0, legend style={at={(0.5,-0.20)},anchor=north,legend columns=-1}, ylabel={\bfseries Reverse path length}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot [fill=white]coordinates {(BP$_{111}$,42) (BP$_{100}$,15) (BP$_{010}$,57) (BP$_{001}$,91) (BP$_{CA}$,61)}; \addplot[fill=gray] coordinates {(BP$_{111}$,31) (BP$_{100}$,4) (BP$_{010}$, 59) (BP$_{001}$,93) (BP$_{CA}$,61)}; \legend{$c_{ij} = 30$,$c_{ij} = 300$} \end{axis} \end{tikzpicture} } \label{fig:norway-rp}% } \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, enlarge y limits=0, legend style={at={(0.5,-0.20)}, anchor=north,legend columns=-1}, ylabel={\bfseries Link capacity occupation}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot[fill=white] coordinates {(BP$_{111}$,59) (BP$_{100}$,61) (BP$_{010}$,58) (BP$_{001}$,53) (BP$_{CA}$,45) }; \addplot[fill=gray] coordinates {(BP$_{111}$,6) (BP$_{100}$,7) (BP$_{010}$, 6) (BP$_{001}$,5) (BP$_{CA}$,4)}; \legend{$c_{ij} = 30$,$c_{ij} = 300$} \end{axis} \end{tikzpicture} } \label{fig:norway-uc}% } \\ \vspace{5mm} \textbf{Fat tree}\\ \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, legend style={at={(0.5,-0.20)}, anchor=north,legend columns=-1}, ylabel={\bfseries Backup path length}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot[fill=white] coordinates {(BP$_{111}$,0) (BP$_{100}$,67) (BP$_{010}$,0) (BP$_{001}$,0) (BP$_{CA}$,103) (E2E,0)}; \addplot[fill=gray] coordinates {(BP$_{111}$,0) (BP$_{100}$,75) (BP$_{010}$, 0) (BP$_{001}$,0) (BP$_{CA}$,20) (E2E,0)}; \legend{$c_{ij} = 13$,$c_{ij} = 100$} \end{axis} \end{tikzpicture} } \label{fig:fattree-pl}% } \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, enlarge y limits=0, legend style={at={(0.5,-0.20)},anchor=north,legend columns=-1}, ylabel={\bfseries Reverse path length}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot [fill=white]coordinates {(BP$_{111}$,100) (BP$_{100}$,4) (BP$_{010}$,97) (BP$_{001}$,100) (BP$_{CA}$,85) (E2E,100)}; \addplot[fill=gray] coordinates {(BP$_{111}$,100) (BP$_{100}$,0) (BP$_{010}$, 89) (BP$_{001}$,100) (BP$_{CA}$,84) (E2E,100)}; \legend{$c_{ij} = 13$,$c_{ij} = 100$} \end{axis} \end{tikzpicture} } \label{fig:fattree-rp}% } \subfloat[][]{ \resizebox{0.3\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, axis x line=bottom, axis y line=left, ymin = 0, enlarge x limits=0.15, enlarge y limits=0, legend style={at={(0.5,-0.20)}, anchor=north,legend columns=-1}, 
ylabel={\bfseries Link capacity occupation}, xlabel={Model}, symbolic x coords={BP$_{111}$,BP$_{100}$,BP$_{010}$,BP$_{001}$,BP$_{CA}$,E2E}, xtick=data, nodes near coords, nodes near coords align={vertical}, ] \addplot[fill=white] coordinates {(BP$_{111}$,59) (BP$_{100}$,57) (BP$_{010}$,52) (BP$_{001}$,50) (BP$_{CA}$,50) (E2E,50)}; \addplot[fill=gray] coordinates {(BP$_{111}$,6) (BP$_{100}$,8) (BP$_{010}$, 7) (BP$_{001}$,6) (BP$_{CA}$,6) (E2E,6)}; \legend{$c_{ij} = 13$,$c_{ij} = 100$} \end{axis} \end{tikzpicture} } \label{fig:fattree-uc}% } \caption{Result charts for the three topologies examined}% \label{fig:result-charts}% \vspace{-5mm} \end{figure*} \section{Conclusion} \label{sec:conclusion} In this paper we have presented a new failure management framework for SDN and a mathematical modeling approach specifically designed to exploit the capabilities of OpenState. The framework considers both single link and single node failures. The protection scheme is based on the idea that, upon failure detection, packets can be tagged and backtracked along the primary path to signal the failure to the first convenient reroute node, automatically establishing a detour path. Such a scheme aims at zero packet loss after failure detection and does not require controller intervention. The models were tested on three well-known topologies and comparative results were obtained, showing the superiority of the scheme with respect to a classic end-to-end path protection scheme and with respect to an approach based on the OpenFlow fast-failover mechanism. We are currently working on the dimensioning problem and developing the OpenState application to experimentally validate the proposed solution. \section*{Acknowledgment} This work has been funded by an NSERC Discovery Grant and by the European Community BEBA project. Luca Pollini and Davide Sanvito were part of the team that coded the algorithms in an OpenState emulator. We are grateful for their input, which allowed us to assess upfront the feasibility of the proposed modeling approaches.
\section{Introduction} Throughout this paper, we denote the \textit{complex plane} by $\mathbb{C}$ and the set of positive integers by $\mathbb{N}$. We assume that the function $f:\mathbb{C}\rightarrow\mathbb{C}$ is a \textit{transcendental entire function} (TEF) unless otherwise stated. For any $n\in\mathbb{N}$, $f^{n}$ always denotes the $n$th \textit{iterate} of $f$. Let $ f $ be a TEF. The set of the form $$ I(f) = \{z\in \mathbb{C}:f^n(z)\rightarrow \infty \textrm{ as } n\rightarrow \infty \} $$ is called the \textit{escaping set} of $f$, and any point $ z \in I(f) $ is called an \textit{escaping point}. For a TEF $f$, the escaping set $I(f)$ was first studied by A. Eremenko \cite{ere}. He showed that $I(f)\not= \emptyset$; the boundary of this set is the Julia set $ J(f) $ (that is, $ J(f) =\partial I(f) $); $I(f)\cap J(f)\not = \emptyset$; and $\overline{I(f)}$ has no bounded component. Note that the complement of the Julia set $ J(f) $ in the complex plane $ \mathbb{C} $ is the \textit{Fatou set} $F(f)$. We confine our study to the Fatou set, Julia set and escaping set of a transcendental semigroup and its conjugate semigroup. It is an obvious fact that a set of transcendental entire maps on $ \mathbb{C} $ naturally generates a semigroup under composition. Here, we take a set $ A $ of transcendental entire maps and construct a semigroup $ S $ consisting of all elements that can be expressed as a finite composition of elements of $ A $. We call such a semigroup $ S $ the \textit{transcendental semigroup} generated by the set $ A $. Our particular interest is the study of the dynamics of families of transcendental entire maps. For a collection $\mathscr{F} = \{f_{\alpha}\}_{\alpha \in \Delta} $ of such maps, let $$ S =\langle f_{\alpha} \rangle $$ be the \textit{transcendental semigroup} generated by them. The index set $ \Delta $ to which $ \alpha $ belongs is allowed to be infinite in general unless otherwise stated. Here, each $f \in S$ is a transcendental entire function and $S$ is closed under functional composition. Thus, $f \in S$ is constructed through the composition of a finite number of functions $f_{\alpha_k},\; (k=1, 2, 3,\ldots, m) $. That is, $f =f_{\alpha_1}\circ f_{\alpha_2}\circ f_{\alpha_3}\circ \cdots\circ f_{\alpha_m}$. A semigroup generated by finitely many transcendental functions $f_{i}, (i = 1, 2, \ldots, n) $ is called a \textit{finitely generated transcendental semigroup}. We write $S= \langle f_{1}, f_{2}, \ldots,f_{n} \rangle$. The transcendental semigroup $S$ is \textit{abelian} if $f_i\circ f_j =f_j\circ f_i$ for all generators $f_{i}$ and $f_{j}$ of $ S $. The semigroup $ S $ is right cancellative if $ f \circ g = h \circ g \Longrightarrow f = h $, left cancellative if $ f \circ g = f \circ h \Longrightarrow g = h$ for all $ f, g, h \in S $, and cancellative if it is both right and left cancellative. The family $\mathscr{F}$ of complex analytic maps forms a \textit{normal family} in a domain $ D $ if, given any composition sequence $ (f_{\alpha}) $ generated by the members of $ \mathscr{F} $, there is a subsequence $( f_{\alpha_{k}}) $ which is uniformly convergent or divergent on all compact subsets of $D$. If there is a neighborhood $ U $ of the point $ z\in\mathbb{C} $ such that $\mathscr{F} $ is a normal family in $U$, then we say $ \mathscr{F} $ is normal at $ z $. If $\mathscr{F}$ is a family of members of the transcendental semigroup $ S $, then we simply say that $ S $ is normal in a neighborhood of $ z $, or that $ S $ is normal at $ z $.
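To illustrate this notation with a simple example, take $A=\{f_{1},f_{2}\}$ with $f_{1}(z)=e^{z}$ and $f_{2}(z)=\lambda e^{z}$ for some $\lambda\neq 0$. Then $S=\langle f_{1},f_{2}\rangle$ consists of all finite compositions of $f_{1}$ and $f_{2}$; for instance $$(f_{1}\circ f_{2}\circ f_{1})(z)=\exp\left(\lambda e^{e^{z}}\right)$$ belongs to $S$, and every such composition is again a transcendental entire function, so $S$ is indeed a transcendental semigroup.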
The semigroup $ S $ is \textit{iteratively divergent} at $ z $ if $f^n(z)\rightarrow \infty \; \textrm{as} \; n \rightarrow \infty$ for all $ f \in S $. Based on the Fatou-Julia-Eremenko theory of a complex analytic function, the Fatou set, Julia set and escaping set in the setting of a transcendental semigroup are defined as follows. \begin{dfn}[\textbf{Fatou set, Julia set and escaping set}]\label{2ab} The \textit{Fatou set} of the transcendental semigroup $S$ is defined by $$ F (S) = \{z \in \mathbb{C}: S\;\ \textrm{is normal in a neighborhood of}\;\ z\}, $$ and the \textit{Julia set} $J(S) $ of $S$ is the complement of $ F(S) $, whereas the escaping set of $S$ is defined by $$ I(S) = \{z \in \mathbb{C}: S \; \text{is iteratively divergent at} \;z \}. $$ We call each point of the set $ I(S) $ an \textit{escaping point}. \end{dfn} There is a slightly larger family of transcendental semigroups that fulfills most of the results for abelian transcendental semigroups. We call these semigroups nearly abelian; they form a more general class than the abelian semigroups. \begin{dfn}[\textbf{Nearly abelian semigroup}] \label{1p} We say that a transcendental semigroup $ S $ is \textit{nearly abelian} if there is a family $ \Phi = \{\phi_{i} \} $ of conformal maps such that \begin{enumerate} \item $ \phi_{i}(F(S)) = F(S) $ for all $ \phi_{i}\in \Phi $ and \item for all $ f, g \in S $, there is a $ \phi \in \Phi $ such that $ f \circ g = \phi \circ g\circ f $. \end{enumerate} \end{dfn} \begin{dfn}[\textbf{Commutator}]\label{com} Let $ S $ be a transcendental semigroup. The set of the form $$\Phi(S) = \{\phi:\; \text{there are}\;\ f, g\in S\;\ \text{such that}\;\ f\circ g = \phi \circ g\circ f\} $$ is called the set of \textit{commutators} of $ S $. We write $ \phi = [f, g] $ if $ f \circ g = \phi \circ g\circ f $. \end{dfn} The notion of commutator is very useful for obtaining the conjugate map of each generator $ f_{i} $ of the semigroup $ S $ and the conjugate semigroup of $ S $. \begin{dfn}[\textbf{Conjugate semigroup}]\label{csg} Let $S =\langle f_{1}, f_{2}, f_{3}, \ldots, f_{n} \rangle$ be a finitely generated transcendental semigroup and let $ \Phi(S) $ be its set of commutators. Define \begin{equation}\label{1t} S^{'} = \langle \phi \circ f_{1} \circ \phi^{-1}, \; \phi \circ f_{2} \circ \phi^{-1}, \ldots, \; \phi \circ f_{n} \circ \phi^{-1} \rangle, \end{equation} where $ \phi \in \Phi(S) $ is such that $ \phi = [f_{i}, f_{j}]$ and $ \phi^{-1} = [f_{j}, f_{i}]$, as defined before. If we let $ g_{i} = \phi \circ f_{i} \circ \phi^{-1} $, then we say the function $ f_{i}$ is conjugate to $g_{i}$ by the map $ \phi : \mathbb{C}\rightarrow \mathbb{C}$. The semigroup $ S^{'} $ is then called a \textit{conjugate semigroup} of the semigroup $ S $. \end{dfn} The images of the Fatou set, Julia set and escaping set of a nearly abelian semigroup under a commutator $ \phi \in \Phi(S) $ are, respectively, the Fatou set, Julia set and escaping set of its conjugate semigroup. \begin{theorem}\label{cgs2} Let $S =\langle f_{1}, f_{2}, f_{3}, \ldots, f_{n} \rangle$ be a nearly abelian transcendental semigroup whose commutators are maps of the form $\phi(z) = az + b $ for some non-zero $ a $, and let $S^{'} = \langle \phi \circ f_{1} \circ \phi^{-1},\; \phi \circ f_{2} \circ \phi^{-1}, \ldots, \phi \circ f_{n} \circ \phi^{-1} \rangle$ be a conjugate semigroup of $ S $. Then $ \phi (I(S)) = I(S^{'}), \; \phi (J(S)) = J(S^{'}) $ and $\phi (F(S)) = F(S^{'})$.
\end{theorem} Analogously to {\cite[Theorem 4.3]{hin}}, every function of a nearly abelian transcendental semigroup $ S $ can be written as the composition of an element of the set of commutators $ \Phi(S) $ (or of the group generated by $ \Phi(S) $) with a composition of certain powers of its generators. \begin{theorem}\label{cgs4} Let $S =\langle f_{1}, f_{2}, f_{3}, \ldots, f_{n},\ldots \rangle$ be a nearly abelian cancellative transcendental semigroup. Then every element $ f \in S $ can be written in the form $$ f = \phi \circ f^{t_{1}}_{1}\circ f^{t_{2}}_{2}\circ f^{t_{3}}_{3}\circ\cdots \circ f^{t_{m}}_{m}, $$ where $ \phi \in \Phi(S) $ if $ \Phi(S) $ is a group or semigroup, and otherwise $ \phi \in G$, where $ G =\langle\Phi(S) \rangle $ is the group generated by $ \Phi(S) $, and the $ t_{i} $ are non-negative integers. \end{theorem} \section{The Notion of Conjugate Semigroup and the Proof of Theorem \ref{cgs2}} Let $ S $ be a transcendental semigroup. If for a pair of functions $ f, g \in S $ there is a holomorphic function $ \phi $ such that $f\circ g = \phi \circ g\circ f $, then $ \phi $ is called the commutator of $ f $ and $ g $. Note that such a commutator is unique for every pair of transcendental entire functions. Recall that $$\Phi(S) = \{\phi: f\circ g = \phi \circ g\circ f\; \text{for some pair of functions}\; f, g\in S \}$$ is the set of commutators of the transcendental semigroup $ S $. If $ S $ is abelian, then every commutator $ \phi $ is the identity function. As in Definition \ref{com}, we write $ \phi =[f, g] $ if $ f\circ g = \phi \circ g\circ f$. Note that $ [f, g]^{-1} = [g, f] $ and, for any $ f\in S$, $[f, f]$ is the identity. So $\Phi(S)$ contains an identity element and the inverse of each $ \phi\in\Phi(S) $. It is not clear in general whether $\Phi(S)$ has a group or semigroup structure, but we can form a group or semigroup $ G = \langle \Phi(S) \rangle $ generated by the elements of $ \Phi(S) $ whenever it is necessary. Note that there are some commutator identities of groups which can be verified in $ \Phi(S) $ if $ S $ is a cancellative transcendental semigroup. For example: \begin{enumerate} \item $ [f,\; g \circ f^{n}] = [f,\; g] $. \item $[f,\; f^{n} \circ g] \circ f^{n} = f^{n} \circ [f,\; g]$. \item $[f\circ g, \; g \circ f] \circ g \circ f = f \circ g \circ [g,\; f] $. \end{enumerate} In practice, commutators can be computed for given pairs of transcendental entire functions. For example: \begin{exm} Let $ f(z) = e^{z^{2}} + \lambda $ and $ g(z) = - f(z)$, where $ \lambda \in \mathbb{C} $. It is easy to see that $ (f \circ g)(z) = e^{(e^{z^{2}} + \lambda )^{2}}+ \lambda = \phi(-e^{(e^{z^{2}} + \lambda)^{2}}- \lambda) = (\phi \circ g \circ f)(z)$, where $ \phi(z) = -z $. Likewise, if $f(z) = \lambda \cos z, \; (\lambda \in \mathbb{C}) $ and $ g(z) = -f(z) $, then $ (f \circ g)(z) = \lambda \cos (\lambda \cos z) = \phi (-\lambda \cos (\lambda \cos z)) =(\phi \circ g \circ f)(z) $, where $ \phi(z) = -z $. \end{exm} The family $\Phi(S)$ of holomorphic functions $ \phi $ for which $f \circ g = \phi \circ g\circ f$ for some $ f, g \in S $ is assumed to be pre-compact. This means that any sequence $ (\phi_{i}) $ of elements of $ \Phi(S) $ contains a subsequence $ (\phi_{i_{k}}) $ that converges to a holomorphic function (but not to a constant function) uniformly on $ \mathbb{C} $. On the basis of the notion of commutator, we can obtain the conjugate map of each generator $ f_{i} $ of the semigroup $ S $ and the conjugate semigroup of $ S $, as defined in Definition \ref{csg} above.
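For instance, for the second pair in the example above, take $f(z)=\lambda\cos z$, $g(z)=-f(z)$ and $\phi=[f,g]$ given by $\phi(z)=-z$, so that $\phi^{-1}=\phi$. Since $f$ is an even function, a direct computation gives $$ (\phi\circ f\circ\phi^{-1})(z)=-f(-z)=-\lambda\cos z=g(z) \quad\text{and}\quad (\phi\circ g\circ\phi^{-1})(z)=-g(-z)=\lambda\cos z=f(z), $$ so conjugation by $\phi$ simply interchanges the two generators, and the conjugate semigroup of $S=\langle f, g\rangle$ in the sense of (\ref{1t}) is $S^{'}=\langle g, f\rangle=S$. The same happens for the pair $f(z)=e^{z^{2}}+\lambda$, $g=-f$, since this $f$ is also even.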
If our semigroup is nearly abelian, most of our work with commutators becomes easier, so that the dynamics of such semigroups can be handled properly. We prove the following result, which shows that the conjugate semigroup is nearly abelian if and only if the semigroup itself is nearly abelian. \begin{theorem}\label{csg1} The conjugate semigroup $S^{'} = \langle \phi \circ f_{1} \circ \phi^{-1},\; \phi \circ f_{2} \circ \phi^{-1},\; \ldots, \phi \circ f_{n} \circ \phi^{-1} \rangle$ of a transcendental semigroup $S =\langle f_{1}, f_{2}, f_{3}, \ldots, f_{n} \rangle$ is nearly abelian if and only if $ S $ is nearly abelian. \end{theorem} \begin{proof} Let $ S $ be a nearly abelian transcendental semigroup. Then $f_{i}\circ f_{j} = \phi \circ f_{j}\circ f_{i}$ for all generators $ f_{i},\; f_{j}\in S $ and some $ \phi \in \Phi (S) $. Now for any $ \phi \circ f_{i} \circ \phi^{-1},\; \phi \circ f_{j} \circ \phi^{-1} \in S^{'}$, we have \begin{align*} (\phi \circ f_{i} \circ \phi^{-1}) \circ (\phi \circ f_{j} \circ \phi^{-1}) = & \phi \circ f_{i} \circ f_{j} \circ \phi^{-1} \\ = & \phi \circ \xi \circ f_{j} \circ f_{i} \circ \phi^{-1}\;\; \text{for some}\;\; \xi \in \Phi(S) \\ = & \xi \circ \phi \circ f_{j} \circ f_{i} \circ \phi^{-1}\\ = & \xi \circ (\phi \circ f_{j} \circ \phi^{-1}) \circ (\phi \circ f_{i} \circ \phi^{-1}). \end{align*} This shows that the conjugate semigroup $S^{'} $ of a nearly abelian transcendental semigroup $ S $ is nearly abelian. Conversely, suppose that the semigroup $ S^{'} $ is nearly abelian. Then $g_{i}\circ g_{j} = \phi \circ g_{j}\circ g_{i}$ for some $ \phi \in \Phi (S) $ and for all generators $ g_{i}, g_{j}\in S^{'} $, where $ g_{i} = \phi \circ f_{i} \circ \phi^{-1} $ and $ g_{j} = \phi \circ f_{j} \circ \phi^{-1} $, from which we get $ f_{i} = \phi^{-1} \circ g_{i} \circ \phi $ and $ f_{j} = \phi^{-1} \circ g_{j} \circ \phi $. Now, for any $ f_{i}, \; f_{j} \in S $, we have \begin{align*} f_{i} \circ f_{j} = & (\phi^{-1} \circ g_{i} \circ \phi) \circ (\phi^{-1} \circ g_{j} \circ \phi) \\ = & \phi^{-1} \circ g_{i} \circ g_{j} \circ\phi \\ = & \phi^{-1} \circ \phi \circ g_{j} \circ g_{i} \circ\phi \\ = & g_{j} \circ g_{i} \circ \phi \\ = & \phi \circ f_{j} \circ \phi^{-1} \circ \phi \circ f_{i} \circ \phi^{-1} \circ \phi \\ = & \phi \circ f_{j} \circ f_{i}. \end{align*} This shows that the semigroup $ S $ is nearly abelian if its conjugate semigroup $ S^{'} $ is nearly abelian. \end{proof} To prove Theorem \ref{cgs2}, we need the following lemma. \begin{lem}\label{cgs3} Let $ f $ and $ g $ be two transcendental entire functions and let $ \phi $ be an entire function of the form $ z \mapsto az + b $, where $ a \neq 0 $, such that $ \phi \circ f = g \circ \phi $. Then $ \phi (I(f)) = I(g), \; \phi (J(f)) = J(g) $ and $\phi (F(f)) = F(g)$. \end{lem} \begin{proof} Let $ w \in \phi (I(f)) $; then there is $ z \in I(f) $ such that $ w = \phi (z) $. The condition $ z \in I(f) $ means that $f^{n}(z) \to \infty $ as $ n \to \infty $. Now $ g^{n}(w) = g^{n}(\phi(z)) =( g^{n} \circ \phi)(z) = (g^{n-1}\circ g \circ \phi)(z) = (g^{n-1}\circ \phi \circ f)(z) =( g^{n-2}\circ \phi \circ f^{2})(z) = \ldots = (\phi \circ f^{n})(z) = \phi (f^{n}(z)) $. Since $ \phi(z) = az + b, \, (a \neq 0) $ and $f^{n}(z) \to \infty $ as $ n \to \infty$, we must have $ g^{n}(w)\to \infty$ as $ n \to \infty $. This shows that $ \phi (I(f)) \subset I(g) $. For the opposite inclusion, let $ w \in I(g) $; since $ \phi $ is invertible, we can write $ w = \phi(z) $ for some $ z \in \mathbb{C} $.
As above, $ \phi(f^{n}(z)) = g^{n}(\phi(z)) = g^{n}(w) \to \infty $ as $ n \to \infty $, and since $ \phi^{-1}(z) = (z-b)/a $ is also an affine map, it follows that $ f^{n}(z) \to \infty $, that is, $ z \in I(f) $. This shows that $ w = \phi(z) \in \phi (I(f)) $, and so $I(g) \subset \phi(I(f)) $. This proves that $ \phi (I(f)) = I(g)$. The remaining equalities are obtained from the facts $ \partial I(f) = J(f) $ and $ F(f) =\mathbb{C} \setminus J(f) $, since $ \phi $ is a homeomorphism of $ \mathbb{C} $. \end{proof} \begin{proof}[Proof of the Theorem \ref{cgs2}] Let $ \phi \circ f_{i} \circ \phi^{-1} = g_{i} $ for all $i = 1, 2, \cdots, n$, from which we get $ \phi \circ f_{i} = g_{i}\circ \phi $ for all $i = 1, 2, \cdots, n$. Any $ f \in S $ can be written as $ f = f_{i_{1}}\circ f_{i_{2}} \circ \ldots \circ f_{i_{n}} $, and the corresponding element of $ S^{'} $ is $ g = g_{i_{1}}\circ g_{i_{2}} \circ \ldots \circ g_{i_{n}} $. From this we get $ \phi \circ f = \phi \circ f_{i_{1}}\circ f_{i_{2}} \circ \ldots \circ f_{i_{n}} = g_{i_{1}} \circ \phi \circ f_{i_{2}} \circ \ldots \circ f_{i_{n}} = g_{i_{1}} \circ g_{i_{2}} \circ \phi \circ \ldots \circ f_{i_{n}} = \ldots = g_{i_{1}} \circ g_{i_{2}} \circ \ldots \circ g_{i_{n}} \circ \phi = g \circ \phi $ for every such pair $ f \in S $ and $ g \in S^{'} $. Since $S =\langle f_{1}, f_{2}, f_{3}, \ldots, f_{n} \rangle$ is a nearly abelian transcendental semigroup, it follows from {\cite[Theorem 1.1]{sub5}} that $I(S) = I(f)$, $J(S) = J(f) $ and ${F(S)} = {F(f)} $ for all $f \in S$. Now $I(S) = I(f)$ implies $\phi (I(S)) = \phi (I(f))$. By Lemma \ref{cgs3}, $\phi(I(f)) = I(g)$. By Theorem \ref{csg1}, the semigroup $ S^{'}$ is nearly abelian, so again by {\cite[Theorem 1.1]{sub5}} we have $ I(S^{'}) = I(g)$. Thus we get $ \phi (I(S)) = I(S^{'})$. The other two equalities are obtained in a similar fashion. \end{proof} \section{Proof of Theorem \ref{cgs4}} There is a nice way of writing an arbitrary element of a nearly abelian transcendental semigroup. Before doing so, we will see that any $ f \in S $ semi-conjugates each member of $ \Phi(S) $ to a member of the group generated by $ \Phi(S) $, as shown in the following lemma. Note that the statement and the proof of this lemma are analogous to {\cite[Lemma 4.2]{hin}}. \begin{lem}\label{lemma} Let $ S $ be a nearly abelian cancellative transcendental semigroup. Then for any $ f \in S $ and for any $ \phi \in \Phi(S) $, there is a map $ \xi\in G $, where $ G = \langle \Phi(S) \rangle $ is the group generated by the elements of $ \Phi(S) $, such that $ f \circ \phi = \xi \circ f $. \end{lem} \begin{proof} For any $ \phi \in \Phi (S) $, there are $ g, h \in S $ such that $ g \circ h = \phi \circ h\circ g $. Then, for any $ f \in S $, we can write \begin{equation}\label{eq1} f \circ g \circ h =f \circ \phi \circ h\circ g. \end{equation} Furthermore, \begin{equation}\label{eq3} f \circ g \circ h =\xi_{1} \circ g \circ f\circ h =\xi_{1}\circ \xi_{2} \circ f \circ h\circ g \end{equation} for some $ \xi_{1}, \xi_{2}\in \Phi(S) $. Since $ S $ is a cancellative semigroup, from equations (\ref{eq1}) and (\ref{eq3}) we get $$ f \circ \phi = \xi_{1}\circ \xi_{2} \circ f = \xi \circ f, $$ where $ \xi = \xi_{1}\circ \xi_{2} \in G $. \end{proof} From equation (\ref{eq3}) we see that the composite of two commutators may not itself be a commutator. We investigate a couple of examples of transcendental semigroups for which the essence of Lemma \ref{lemma} above holds. \begin{exm} If a semigroup $ S $ is generated by the functions $ f(z) = \lambda \cos z $ and $ g(z) = -f(z) $, then it is nearly abelian, where $ \phi (z) = -z$ is the commutator of $ f $ and $ g $ (see for instance {\cite[Example 2.2]{sub5}}).
Since $ \phi ^{2}(z) = z$, the map $\phi^{2}$ is the identity element of the group $ G =\langle \Phi(S) \rangle = \{\text{Identity}, \; \phi \}$, and we have $f \circ \phi = f =\text{Identity} \circ f $ and $g \circ \phi = g = \text{Identity} \circ g $, so Lemma \ref{lemma} holds with $ \xi = \text{Identity} $. \end{exm} \begin{exm} If a semigroup $ S $ is generated by the functions $ f(z) = e^{z^{2}} + \lambda $ and $ g(z) = -f(z) $, then it is nearly abelian, where $ \phi (z) = -z $ is the commutator of $ f $ and $ g $ (see for instance {\cite[Example 2.2]{sub5}}). Since $ \phi ^{2}(z) = z$, the map $\phi^{2}$ is the identity element of the group $ G =\langle \Phi(S) \rangle = \{\text{Identity}, \; \phi \}$, and again $f \circ \phi = f =\text{Identity} \circ f $ and $g \circ \phi = g = \text{Identity} \circ g $. \end{exm} Also note that in both of these examples we have $\phi\circ f = -f \neq f = f \circ\xi $ for every $ \xi \in G $. That is, for a given $ \phi \in \Phi(S) $, it may not always be possible to find an element $ \xi \in G $ satisfying $\phi\circ f = f\circ\xi $. \begin{proof}[Proof of Theorem \ref{cgs4}] The proof of this theorem follows from inductive application of Lemma \ref{lemma} above to each element $ f = f_{i_{1}}\circ f_{i_{2}}\circ \ldots \circ f_{i_{n}} $ of $ S $. \end{proof}
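To see Theorem \ref{cgs4} at work in the simplest case, consider again $ S = \langle f, g\rangle $ with $ f(z) = \lambda\cos z $, $ g = -f $ and the commutator $ \phi = [f,g] $, $ \phi(z) = -z $. The element $ f\circ g $ is already of the required form, $ f\circ g = \text{Identity}\circ f^{1}\circ g^{1} $, while for $ g\circ f $ we use that $ f $ is even: since $ (f\circ g)(z) = f(-f(z)) = f(f(z)) $, we get $$ (g\circ f)(z) = -f(f(z)) = \phi\big(f(f(z))\big) = \big(\phi\circ f^{1}\circ g^{1}\big)(z). $$ Thus both products are of the form $ \phi \circ f^{t_{1}}_{1}\circ f^{t_{2}}_{2} $ with $ \phi \in \{\text{Identity}, [f,g]\} \subset G $ and $ t_{1} = t_{2} = 1 $.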
\section{Introduction} Let $\Gamma$ be a totally ordered abelian group, and $\Gamma^{+}:=\{x\in\Gamma : x \ge 0\}$ the positive cone of $\Gamma$. A dynamical system $(A,\Gamma^{+},\alpha)$ is a system consisting of a $C^*$-algebra $A$ and an action $\alpha:\Gamma^{+}\rightarrow \operatorname{End}(A)$ of $\Gamma^{+}$ by endomorphisms $\alpha_{x}$ of $A$ such that $\alpha_{0}={\rm id}_{A}$. Since we do not require the algebra $A$ to have an identity element, we need to assume that every endomorphism $\alpha_{x}$ extends to a strictly continuous endomorphism $\overline{\alpha}_{x}$ of the multiplier algebra $M(A)$, as in \cite{Adji1,Larsen}; note that the extension may satisfy $\overline{\alpha}_{x}(1_{M(A)})\neq 1_{M(A)}$. A partial-isometric covariant representation, the analogue of an isometric covariant representation, of the system $(A,\Gamma^{+},\alpha)$ is defined in \cite{LR}, where the endomorphisms $\alpha_{s}$ are represented by partial isometries instead of isometries. The partial-isometric crossed product $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ is defined there as the Toeplitz algebra studied in \cite{F} associated to a product system of Hilbert bimodules arising from the underlying dynamical system $(A,\Gamma^{+},\alpha)$. This algebra is universal for covariant partial-isometric representations of the system. The success of the theory of isometric crossed products \cite{Adji2,Adji,ALNR,murphy,murphy2,murphy3} has led the authors of \cite{LR} to study the structure of the partial-isometric crossed product of the distinguished system $(B_{\Gamma^{+}},\Gamma^{+},\tau)$, where $\tau_{x}$ acts on the subalgebra $B_{\Gamma^{+}}$ of $\ell^{\infty}(\Gamma^{+})$ as the right translation. However, the analogue for partial-isometric crossed products of the view of isometric crossed products as full corners in crossed products by groups \cite{Adji1,Laca,stacey} remains unavailable. Providing such a description is the main task undertaken in the present work. We construct a covariant partial-isometric representation of $(A,\Gamma^{+},\alpha)$ in the $C^{*}$-algebra ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$ of adjointable operators on the Hilbert $A$-module $\ell^{2}(\Gamma^{+},A)$, and we show that the corresponding representation of the crossed product is an isomorphism of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ onto a full corner in a subalgebra of ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$. We use the idea from \cite{KS} for the construction: the embedding $\pi_{\alpha}$ of $A$ into ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$, together with the isometric representation $S:\Gamma^{+}\rightarrow {\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$, satisfies the equation $\pi_{\alpha}(a)S_{x}=S_{x}\pi_{\alpha}(\alpha_{x}(a))$ for all $a\in A$ and $x\in\Gamma^{+}$, and then the algebra ${\mathcal{T}}_{(A,\Gamma^{+},\alpha)}$ generated by $\pi_{\alpha}(A)$ and $S(\Gamma^{+})$ contains $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ as a full corner. However, since the results in \cite{KS} are developed to compute $KK$-groups and to show that the $KK$-groups of ${\mathcal{T}}_{(A,\Gamma^{+},\alpha)}$ and $A$ are equivalent, the theory there is set up for unital $C^*$-algebras and unital endomorphisms: if the algebra is not unital, they use the smallest unitization $\tilde{A}$, and then the extension of each endomorphism to $\tilde{A}$ is unital. Here we use the (largest unitization) multiplier algebra $M(A)$ of $A$, and every endomorphism is extendible to $M(A)$. So we generalize the arguments of \cite{KS} to the context of the multiplier algebra.
When the endomorphisms in a given system are unital, we are in the context of \cite{KS}, and therefore the $C^*$-algebra $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ enjoys all the properties of the algebra ${\mathcal{T}}_{(A,\Gamma^{+},\alpha)}$ described in \cite{KS}. Moreover, if the action is by automorphisms, we show that $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ is a full corner in a crossed product by a group action. Using the corner realization of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$, we identify the kernel of the natural surjective homomorphism $i_{A}\times i_{\Gamma^{+}}: A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+} \rightarrow A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+}$ induced by the canonical isometric covariant pair $(i_{A},i_{\Gamma^{+}})$ of $(A,\Gamma^{+},\alpha)$, and we recover the exact sequence of \cite{KS} and the Pimsner-Voiculescu exact sequence in \cite{PV}. We begin the paper with a preliminary section containing background material on partial-isometric and isometric crossed products, and then identify the spanning elements of the kernel of the natural homomorphism from the partial-isometric crossed product onto the isometric crossed product of a system $(A,\Gamma^{+},\alpha)$. In Section 3, we construct a covariant partial-isometric representation of $(A,\Gamma^{+},\alpha)$ in ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$ which gives an isomorphism of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ onto a full corner of a subalgebra of ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$. In Section 4, we show that when the semigroup $\Gamma^{+}$ is ${\mathbb N}$, the kernel of that natural homomorphism is a full corner in the algebra of compact operators on $\ell^{2}({\mathbb N},A)$. In Section 5 we discuss the theory of partial-isometric crossed products for systems given by automorphic actions of the semigroup $\Gamma^{+}$. We show that $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ is a full corner in the classical crossed product $(B_{\Gamma}\otimes A)\times \Gamma$ of a dynamical system by a group of automorphisms. \section{Preliminaries} A \emph{partial isometry} $V$ on a Hilbert space $H$ is an operator which satisfies $\|Vh\|=\|h\|$ for all $h\in (\ker V)^{\perp}$. A bounded operator $V$ is a partial isometry if and only if $VV^{*}V=V$, and then the adjoint $V^{*}$ is a partial isometry too. Furthermore, the two operators $V^{*}V$ and $VV^{*}$ are the orthogonal projections onto the initial space $(\ker V)^{\perp}$ and the range $VH$, respectively. Accordingly, an element $v$ of a $C^*$-algebra $A$ is called a partial isometry if $vv^{*}v=v$. A \emph{partial-isometric representation} of $\Gamma^{+}$ on a Hilbert space $H$ is a map $V:\Gamma^{+}\rightarrow B(H)$ such that $V_{s}:=V(s)$ is a partial isometry and $V_{s}V_{t}=V_{s+t}$ for every $s,t\in\Gamma^{+}$. The product $ST$ of two partial isometries $S$ and $T$ is not always a partial isometry; it is one when $S^{*}S$ commutes with $TT^{*}$ (Proposition 2.1 of \cite{LR}). A partial isometry $S$ is called a power partial isometry if $S^{n}$ is a partial isometry for every $n\in{\mathbb N}$. So a partial-isometric representation of ${\mathbb N}$ is determined by a single power partial isometry $V_{1}$, because $V_{n}=V_{1}^{n}$. Proposition 3.2 of \cite{LR} says that if $V$ is a partial-isometric representation of $\Gamma^{+}$, then every $V_{s}$ is a power partial isometry, $V_{s}V_{s}^{*}$ commutes with $V_{t}V_{t}^{*}$, and $V_{s}^{*}V_{s}$ commutes with $V_{t}^{*}V_{t}$.
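For a concrete example of such a representation, let $W$ denote the unilateral shift on $\ell^{2}({\mathbb N})$, $(W\xi)(n)=\xi(n-1)$ for $n\ge 1$ and $(W\xi)(0)=0$, so that $W$ is a non-unitary isometry. Each power $(W^{*})^{n}$ is a partial isometry, since $(W^{*})^{n}W^{n}(W^{*})^{n}=(W^{*})^{n}$, and therefore $V_{n}:=(W^{*})^{n}$ defines a partial-isometric representation of ${\mathbb N}$ in which $V_{n}V_{n}^{*}=(W^{*})^{n}W^{n}=1$, while $V_{n}^{*}V_{n}=W^{n}(W^{*})^{n}$ is the projection onto the range of $W^{n}$; these projections clearly commute with one another, as predicted by Proposition 3.2 of \cite{LR}.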
A \emph{covariant partial-isometric representation} of $(A,\Gamma^{+},\alpha)$ on a Hilbert space $H$ is a pair $(\pi,V)$ consisting of a non-degenerate representation $\pi:A\rightarrow B(H)$ and a partial-isometric representation $V:\Gamma^{+}\rightarrow B(H)$ which satisfies \begin{equation}\label{cov} \pi(\alpha_{s}(a))=V_{s}\pi(a)V_{s}^{*} \quad \mbox{and} \quad V_{s}^{*}V_{s}\pi(a)=\pi(a) V_{s}^{*}V_{s} \quad \mbox{ for } s\in\Gamma^{+}, a\in A. \end{equation} Every covariant representation $(\pi,V)$ of $(A,\Gamma^{+},\alpha)$ extends to a covariant representation $(\overline{\pi},V)$ of $(M(A),\Gamma^{+},\overline{\alpha})$. Lemma 4.3 of \cite{LR} shows that $(\pi,V)$ is a covariant representation of $(A,\Gamma^{+},\alpha)$ if and only if \[ \pi(\alpha_{s}(a))V_{s}=V_{s}\pi(a) \quad \mbox{and} \quad V_{s}V_{s}^{*} =\overline{\pi}(\overline{\alpha}_{s}(1)) \quad \mbox{ for } s\in\Gamma^{+}, a\in A.\] Every system $(A,\Gamma^{+},\alpha)$ admits a nontrivial covariant partial-isometric representation \cite[Example 4.6]{LR}. \begin{definition}\label{def} A partial-isometric crossed product of $(A,\Gamma^{+},\alpha)$ is a triple $(B,i_{A},i_{\Gamma^{+}})$ consisting of a $C^{*}$-algebra $B$, a non-degenerate homomorphism $i_{A}:A\rightarrow B$, and a partial-isometric representation $i_{\Gamma^{+}}:\Gamma^{+}\rightarrow M(B)$ such that \begin{itemize} \item[(i)] the pair $(i_{A},i_{\Gamma^{+}})$ is a covariant representation of $(A,\Gamma^{+},\alpha)$ in $B$; \item[(ii)] for every covariant partial-isometric representation $(\pi,V)$ of $(A,\Gamma^{+},\alpha)$ on a Hilbert space $H$ there is a non-degenerate representation $\pi\times V$ of $B$ on $H$ which satisfies $(\pi\times V)\circ i_{A}=\pi$ and $\overline{(\pi\times V)}\circ i_{\Gamma^{+}}=V$; and \item[(iii)] the $C^*$-algebra $B$ is spanned by $\{i_{\Gamma^{+}}(s)^{*}i_{A}(a)i_{\Gamma^{+}}(t): a\in A, s,t \in\Gamma^{+}\}$. \end{itemize} \end{definition} \begin{remark} Proposition 4.7 of \cite{LR} shows that such a triple $(B,i_{A},i_{\Gamma^{+}})$ always exists, and it is unique up to isomorphism: if $(C,j_{A},j_{\Gamma^{+}})$ is a triple that satisfies properties (i), (ii) and (iii), then there is an isomorphism of $B$ onto $C$ which carries $(i_{A},i_{\Gamma^{+}})$ into $(j_{A},j_{\Gamma^{+}})$. We use the standard notation $A\times_{\alpha}\Gamma^{+}$ for the crossed product of $(A,\Gamma^{+},\alpha)$, and we write $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ when we want to distinguish it from the other kind of crossed product. Theorem 4.8 of \cite{LR} asserts that a covariant representation $(\pi,V)$ of $(A,\Gamma^{+},\alpha)$ on $H$ induces a faithful representation $\pi\times V$ of $A\times_{\alpha}\Gamma^{+}$ if and only if $\pi$ is faithful on $(V_{s}^{*}H)^{\perp}$ for all $s>0$, and this condition is equivalent to saying that $\pi$ is faithful on the range of $(1-V_{s}^{*}V_{s})$ for all $s>0$. \end{remark} \subsubsection{Isometric crossed products} The above definition of the partial-isometric crossed product is analogous to the one for the isometric crossed product: the endomorphisms $\alpha_{s}$ are implemented by partial isometries instead of isometries. We recall that an \emph{isometric representation} $V$ of $\Gamma^{+}$ on a Hilbert space $H$ is a homomorphism $V:\Gamma^{+}\rightarrow B(H)$ such that each $V_{s}$ is an isometry and $V_{s+t}=V_{s}V_{t}$ for all $s,t\in\Gamma^{+}$.
A pair $(\pi,V)$ consisting of a non-degenerate representation $\pi$ of $A$ and an isometric representation $V$ of $\Gamma^{+}$ on $H$ is a \emph{covariant isometric representation} of $(A,\Gamma^{+},\alpha)$ if $\pi(\alpha_{s}(a))=V_{s}\pi(a)V_{s}^{*}$ for all $a\in A$ and $s\in\Gamma^{+}$. The \emph{isometric crossed product} $A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+}$ is generated by a universal isometric covariant representation $(i_{A},i_{\Gamma^{+}})$, such that there is a bijection $(\pi,V)\mapsto \pi\times V$ between covariant isometric representations of $(A,\Gamma^{+},\alpha)$ and non-degenerate representations of $A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+}$. We note that some systems $(A,\Gamma^{+},\alpha)$ may not have nontrivial covariant isometric representations, in which case their isometric crossed products give no information about the systems. When $\alpha:\Gamma^{+}\rightarrow \operatorname{End}(A)$ is an action of $\Gamma^{+}$ such that every $\alpha_{x}$ is an automorphism of $A$, every isometry $V_{s}$ in a covariant isometric representation $(\pi,V)$ is a unitary. Thus $A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+}$ is isomorphic to the classical group crossed product $A\times_{\alpha}\Gamma$. For the more general situation, \cite{Adji1,Laca} show that, by dilating the system $(A,\Gamma^{+},\alpha)$, we get a $C^{*}$-algebra $B$ and an action $\beta$ of the group $\Gamma$ by automorphisms of $B$ such that $A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+}$ is isomorphic to the full corner $p(B\times_{\beta}\Gamma)p$, where $p$ is the projection determined by the unit $1_{M(A)}$. Consider the distinguished system $(B_{\Gamma^{+}},\Gamma^{+},\tau)$, where $B_{\Gamma^{+}}:=\overline{\operatorname{span}}\{1_{s}\in\ell^{\infty}(\Gamma^{+}): s\in\Gamma^{+}\}$ is the unital $C^*$-algebra spanned by the characteristic functions $1_{s}(x)=\left\{ \begin{array}{ll} 1 & \mbox { if } x\ge s \\ 0 & \mbox{ if } x< s, \end{array} \right.$ and the action $\tau:\Gamma^{+}\rightarrow \operatorname{End}(B_{\Gamma^{+}})$ is given by the translation on $\ell^{\infty}(\Gamma^{+})$, which satisfies $\tau_{t}(1_{s})=1_{s+t}$. Then \cite{ALNR} shows that any isometric representation $V$ of $\Gamma^{+}$ induces a unital representation $\pi_{V}:1_{s}\mapsto V_{s}V_{s}^{*}$ of $B_{\Gamma^{+}}$ such that $(\pi_{V},V)$ is a covariant isometric representation of $(B_{\Gamma^{+}},\Gamma^{+},\tau)$, and the representation $\pi_{V}\times V$ of $B_{\Gamma^{+}}\times_{\tau}^{\operatorname{iso}}\Gamma^{+}$ is faithful provided the $V_{s}$ are non-unitary for $s>0$. Since the isometric representation given by the Toeplitz representation $T:s\mapsto T_{s}$ of $\Gamma^{+}$ on $\ell^{2}(\Gamma^{+})$ is non-unitary, $\pi_{T}\times T$ is an isomorphism of $B_{\Gamma^{+}}\times_{\tau}^{\operatorname{iso}}\Gamma^{+}$ onto the Toeplitz algebra ${\mathcal{T}}(\Gamma)$. \bigskip We consider the two crossed products $(A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+},i_{A},i_{\Gamma^{+}})$ and $(A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},j_{A},j_{\Gamma^{+}})$ of a dynamical system $(A,\Gamma^{+},\alpha)$. The equation $i_{\Gamma^{+}}(s)^{*}i_{\Gamma^{+}}(s)i_{A}(a)=i_{A}(a)i_{\Gamma^{+}}(s)^{*}i_{\Gamma^{+}}(s)$ is automatic because $i_{\Gamma^{+}}$ is an isometric representation of $\Gamma^{+}$.
Therefore we have a covariant partial-isometric representation $(i_{A},i_{\Gamma^{+}})$ of $(A,\Gamma^{+},\alpha)$ in the $C^{*}$-algebra $A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+}$, and the universal property of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ gives a non degenerate homomorphism \[ \phi:=i_{A}\times i_{\Gamma^{+}}:(A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},j_{A},j_{\Gamma^{+}}) \longrightarrow (A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+},i_{A},i_{\Gamma^{+}}), \] which satisfies $\phi(j_{\Gamma^{+}}(x)^{*}j_{A}(a)j_{\Gamma^{+}}(y))=i_{\Gamma^{+}}(x)^{*}i_{A}(a)i_{\Gamma^{+}}(y)$ for all $a\in A$ and $x,y\in\Gamma^{+}$. Consequently $\phi$ is surjective, and then we have a short exact sequence \begin{equation}\label{xact} 0\longrightarrow\ker\phi\longrightarrow A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+} \stackrel{\phi}{\longrightarrow} A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+} \longrightarrow 0. \end{equation} In the next proposition, we identify spanning elements for the ideal $\ker\phi$. \begin{prop}\label{surj} Suppose $(A,\Gamma^{+},\alpha)$ is a dynamical system. Then \begin{equation}\label{kernel-piso-iso} \ker\phi=\overline{\operatorname{span}}\{j_{\Gamma^{+}}(x)^{*}j_{A}(a)(1-j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(t))j_{\Gamma^{+}}(y) : a\in A,\mbox{ and } x,y,t \in\Gamma^{+} \}. \end{equation} \end{prop} Before we prove this proposition, we first want to show the following lemma. \begin{lemma}\label{p-jstar} For $t\in\Gamma^{+}$, let $P_{t}$ be the projection $1-j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(t)$. Then the set $\{P_{t} : t \in\Gamma^{+}\}$ is a family of increasing projections in the multiplier algebra $M(A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+})$, which satisfy the following equations: $j_{A}(a)P_{t}=P_{t}j_{A}(a)$ for $a\in A$ and $t\in\Gamma^{+}$, \[ P_{x} j_{\Gamma^{+}}(y)^{*}=\left\{\begin{array}{ll} 0 & \quad \mbox{ if } x\le y \\ j_{\Gamma^{+}}(y)^{*} P_{x-y} & \quad \mbox{ if } x>y, \end{array} \right. \quad \text{ and } \quad P_{x} P_{y} =\left\{\begin{array}{ll} P_{x} \quad \mbox{ if } x\le y \\ P_{y} \quad \mbox{ if } x> y. \end{array} \right. \] \end{lemma} \begin{proof} For $s\ge t$ in $\Gamma^{+}$, \begin{align*} P_{s}-P_{t} & = (1-j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s))-(1-j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(t)) \\ & = j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(t) - j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s) \\ & = j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(t) - j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(s-t)^{*}j_{\Gamma^{+}}(s-t)j_{\Gamma^{+}}(t) \\ & = j_{\Gamma^{+}}(t)^{*} P_{s-t} j_{\Gamma^{+}}(t)= j_{\Gamma^{+}}(t)^{*} P_{s-t} P_{s-t} j_{\Gamma^{+}}(t)\\ & = [P_{s-t}j_{\Gamma^{+}}(t)]^{*} [P_{s-t}j_{\Gamma^{+}}(t)]. \end{align*} So $P_{s}-P_{t}\ge 0$, and hence $P_{s}\ge P_{t}$. 
If $x\le y$, then \begin{align*} P_{x} j_{\Gamma^{+}}(y)^{*} & = (1-j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x))j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(y-x)^{*}\\ & = [j_{\Gamma^{+}}(x)^{*}-j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}]j_{\Gamma^{+}}(y-x)^{*}=0, \end{align*} and if $x>y$, we have \begin{align*} P_{x} j_{\Gamma^{+}}(y)^{*} & = j_{\Gamma^{+}}(y)^{*}-j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(y)^{*} \\ & = j_{\Gamma^{+}}(y)^{*}-j_{\Gamma^{+}}(y)^{*}j_{\Gamma^{+}}(x-y)^{*}j_{\Gamma^{+}}(x-y)j_{\Gamma^{+}}(y)j_{\Gamma^{+}}(y)^{*} \\ & = j_{\Gamma^{+}}(y)^{*}-j_{\Gamma^{+}}(y)^{*} j_{\Gamma^{+}}(x-y)^{*}j_{\Gamma^{+}}(x-y)\overline{j}_{A}(\overline{\alpha}_{y}(1)) \\ & = j_{\Gamma^{+}}(y)^{*}-j_{\Gamma^{+}}(y)^{*} \overline{j}_{A}(\overline{\alpha}_{y}(1)) j_{\Gamma^{+}}(x-y)^{*}j_{\Gamma^{+}}(x-y)\\ & = j_{\Gamma^{+}}(y)^{*}-[j_{\Gamma^{+}}(y)^{*} j_{\Gamma^{+}}(y)j_{\Gamma^{+}}(y)^{*}] j_{\Gamma^{+}}(x-y)^{*}j_{\Gamma^{+}}(x-y) \\ & = j_{\Gamma^{+}}(y)^{*}P_{x-y}. \end{align*} Next we use the equation \[ j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(y)^{*}j_{\Gamma^{+}}(y)=j_{\Gamma^{+}}(\max\{x,y\})^{*}j_{\Gamma^{+}}(\max\{x,y\}) \text{ for any } x,y \in\Gamma^{+}, \] to see that \begin{align*} P_{x}P_{y} & = (1-j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x))(1-j_{\Gamma^{+}}(y)^{*}j_{\Gamma^{+}}(y))\\ & = 1-j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)-j_{\Gamma^{+}}(y)^{*}j_{\Gamma^{+}}(y)+j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(y)^{*}j_{\Gamma^{+}}(y)\\ & = 1-j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)-j_{\Gamma^{+}}(y)^{*}j_{\Gamma^{+}}(y)+j_{\Gamma^{+}}(\max\{x,y\})^{*}j_{\Gamma^{+}}(\max\{x,y\})\\ & = \left\{\begin{array}{ll} P_{x} \quad \mbox{ if } x\le y \\ P_{y} \quad \mbox{ if } x> y. \end{array} \right. \end{align*} \end{proof} \begin{proof}[Proof of Proposition \ref{surj}] We clarify that the right hand side of (\ref{kernel-piso-iso}) \[ \mathcal{I}:=\overline{\operatorname{span}}\{j_{\Gamma^{+}}(x)^{*}j_{A}(a)(1-j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(t))j_{\Gamma^{+}}(y) : a\in A, \mbox{ and } x,y,t \in\Gamma^{+} \}\] is an ideal of $(A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},j_{A},j_{\Gamma^{+}})$, by showing that $j_{A}(b){\mathcal{I}}$ and $j_{\Gamma^{+}}(s){\mathcal{I}}$, $j_{\Gamma^{+}}(s)^{*}{\mathcal{I}}$ are contained in ${\mathcal{I}}$ for all $b\in A$ and $s\in\Gamma^{+}$. The last containment is trivial. For the first two, we compute using the partial isometric covariance of $(j_{A},j_{\Gamma^{+}})$ to get the following equations for $b\in A$, $s, x\in\Gamma^{+}$: \[ j_{A}(b)j_{\Gamma^{+}}(x)^{*}=[j_{\Gamma^{+}}(x) j_{A}(b^{*})]^{*}=[j_{A}(\alpha_{x}(b^{*}))j_{\Gamma^{+}}(x)]^{*}= j_{\Gamma^{+}}(x)^{*}j_{A}(\alpha_{x}(b)), \] and \[ j_{\Gamma^{+}}(s)j_{\Gamma^{+}}(x)^{*}=\left\{\begin{array}{ll} j_{\Gamma^{+}}(x-s)^{*}j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}= j_{\Gamma^{+}}(x-s)^{*}\overline{j}_{A}(\overline{\alpha}_{x}(1)) & \mbox{ if } s<x \\ j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}=\overline{j}_{A}(\overline{\alpha}_{x}(1)) & \mbox{ if } s=x \\ j_{\Gamma^{+}}(s-x)j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}=\overline{j}_{A}(\overline{\alpha}_{s}(1)) j_{\Gamma^{+}}(s-x) & \mbox{ if } s>x. \end{array} \right. 
\] Consequently we have \[ j_{A}(b)j_{\Gamma^{+}}(x)^{*}j_{A}(a)P_{t} j_{\Gamma^{+}}(y) = j_{\Gamma^{+}}(x)^{*}j_{A}(\alpha_{x}(b)a)P_{t}j_{\Gamma^{+}}(y) \in {\mathcal{I}}, \] and \[ j_{\Gamma^{+}}(s)j_{\Gamma^{+}}(x)^{*}j_{A}(a)P_{t} j_{\Gamma^{+}}(y) = j_{\Gamma^{+}}(x-s)^{*}j_{A}(\overline{\alpha}_{x}(1)a)P_{t} j_{\Gamma^{+}}(y) \in {\mathcal{I}} \] whenever $b\in A$ and $t, s\le x$ in $\Gamma^{+}$. If $s>x$, then \[ P_{t}j_{\Gamma^{+}}(s-x)^{*}=\left\{ \begin{array}{ll} 0 & \mbox{ for } t\le s-x \\ j_{\Gamma^{+}}(s-x)^{*}P_{t-(s-x)} & \mbox { for } t>s-x. \end{array}\right. \] Therefore \begin{align*} j_{\Gamma^{+}}(s)j_{\Gamma^{+}}(x)^{*}j_{A}(a)P_{t} j_{\Gamma^{+}}(y) & = \overline{j}_{A}(\overline{\alpha}_{s}(1))j_{\Gamma^{+}}(s-x)j_{A}(a)P_{t} j_{\Gamma^{+}}(y)\\ & = \overline{j}_{A}(\overline{\alpha}_{s}(1))j_{A}(\alpha_{s-x}(a))j_{\Gamma^{+}}(s-x)P_{t}j_{\Gamma^{+}}(y)\\ & = j_{A}(\overline{\alpha}_{s}(1)\alpha_{s-x}(a))~[P_{t}j_{\Gamma^{+}}(s-x)^{*}]^{*}~j_{\Gamma^{+}}(y), \end{align*} which is the zero element of ${\mathcal{I}}$ for $t\le s-x$, and is the element \[ j_{A}(\overline{\alpha}_{s}(1)\alpha_{s-x}(a))P_{t-(s-x)}j_{\Gamma^{+}}(s-x+y) \text{ of } {\mathcal{I}} \text{ for } t>s-x. \] So $j_{\Gamma^{+}}(s)j_{\Gamma^{+}}(x)^{*}j_{A}(a)P_{t} j_{\Gamma^{+}}(y)$ belongs to ${\mathcal{I}}$, and ${\mathcal{I}}$ is an ideal of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$. We are now showing the equation $\ker \phi={\mathcal{I}}$. The first inclusion ${\mathcal{I}}\subset \ker\phi$ follows from the fact that ${\mathcal{I}}$ is an ideal of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$, and that $\overline{\phi}(P_{t})=1-i_{\Gamma^{+}}(t)^{*}i_{\Gamma^{+}}(t)=0$ for all $t\in\Gamma^{+}$. For the other inclusion, suppose $\rho$ is a non degenerate representation of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ on a Hilbert space $H$ with $\ker\rho={\mathcal{I}}$. Then the pair $(\pi:=\rho\circ j_{A}, V:=\overline{\rho}\circ j_{\Gamma^{+}})$ is a covariant partial-isometric representation of $(A,\Gamma^{+},\alpha)$ on $H$. We claim that every $V_{t}$ is an isometry. To see this, let $(a_{\lambda})$ be an approximate identity for $A$. Then we have \[ 0 =\rho(j_{A}(a_{\lambda})(1-j_{\Gamma^{+}}(t)^{*}j_{\Gamma^{+}}(t))) =\pi(a_{\lambda})(1-V_{t}^{*}V_{t}) \text{ for all } \lambda, \] and $\pi(a_{\lambda})(1-V_{t}^{*}V_{t})$ converges strongly to $1-V_{t}^{*}V_{t}$ in $B(H)$. Therefore $1-V_{t}^{*}V_{t}=0$. Consequently the pair $(\pi,V)$ is a covariant isometric representation of $(A,\Gamma^{+},\alpha)$ on $H$, and hence there exists a non degenerate representation $\psi$ of $(A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+},i_{A},i_{\Gamma^{+}})$ on $H$ which satisfies $\psi(i_{A}(a))=\rho(j_{A}(a))$ and $\overline{\psi}(i_{\Gamma^{+}}(x))=\overline{\rho}(j_{\Gamma^{+}}(x))$ for all $a\in A$ and $x\in\Gamma^{+}$. So $\psi\circ\phi=\rho$ on the spanning elements of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$, thus $\ker\phi\subset\ker\rho$. \end{proof} \begin{prop} If $\Gamma$ is a subgroup of ${\mathbb R}$, then $\ker \phi$ is an essential ideal of the crossed product $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$. \end{prop} \begin{proof} Let $J$ be a non zero ideal of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$, we want to show that $J\cap \ker\phi\neq \{0\}$. Assume that $\ker\phi\neq \{0\}$. Take a non degenerate representation $\pi\times V$ of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ on $H$ such that $\ker\pi\times V= J$. 
Since $J\neq \{0\}$, $\pi\times V$ is not a faithful representation. Consequently, by \cite[Theorem 4.8]{LR}, $\pi$ does not act faithfully on $(V_{s}^{*}H)^{\perp}$ for some $s\in \Gamma^{+}\backslash\{0\}$ . So there is $a\neq 0$ in $A$ such that $\pi(a)(1-V_{s}^{*}V_{s})=0$. It follows from \[ 0=\pi(a)(1-V_{s}^{*}V_{s})=\pi\times V (j_{A}(a)(1-j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s))), \] that $j_{A}(a)(1-j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s))$ belongs to $\ker\pi\times V=J$. Moreover $j_{A}(a)(1-j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s))$ is also contained in $\ker\phi$ because $\overline{\phi}(P_{s})=0$, hence it is contained in $\ker\phi \cap J$. Next we have to clarify that $j_{A}(a)(1-j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s))$ is nonzero. If it is zero, then $1-j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s)=0$ because $j_{A}(a)\neq 0$ by injectivity of $j_{A}$. Thus $j_{\Gamma^{+}}(s)$ is an isometry, and so is $j_{\Gamma^{+}}(ns)$ for every $n\in{\mathbb N}$. We claim that every $j_{\Gamma^{+}}(x)$ is an isometry, and consequently $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ is isomorphic to $A\times_{\alpha}^{\operatorname{iso}}\Gamma^{+}$. Therefore $\ker\phi =0$, and $j_{A}(a)(1-j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s))$ can not be zero. To justify the claim, note that if $x<s$ then $s-x<s$, and we have \begin{align*} j_{\Gamma^{+}}(s-x)^{*} j_{\Gamma^{+}}(s) & = j_{\Gamma^{+}}(s-x)^{*}j_{\Gamma^{+}}(s-x)j_{\Gamma^{+}}(s-(s-x)) \\ & = [j_{\Gamma^{+}}(s-x)^{*}j_{\Gamma^{+}}(s-x)][j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}]j_{\Gamma^{+}}(x) \\ & = [j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}][j_{\Gamma^{+}}(s-x)^{*}j_{\Gamma^{+}}(s-x)]j_{\Gamma^{+}}(x) \\ & =j_{\Gamma^{+}}(x) j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s)=j_{\Gamma^{+}}(x). \end{align*} So the equation $j_{\Gamma^{+}}(s)^{*}=j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(s-x)^{*}$ implies \[ 1=j_{\Gamma^{+}}(s)^{*}j_{\Gamma^{+}}(s)=j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(s-x)^{*} j_{\Gamma^{+}}(s)= j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x). \] Thus $j_{\Gamma^{+}}(x)$ is an isometry for every $x< s$. For $x>s$, by the Archimedean property of $\Gamma$, there exists $n_{x}\in{\mathbb N}$ such that $x< n_{x} s$, and since $j_{\Gamma^{+}}(n_{x}s)$ is an isometry, applying the previous arguments, we see that $j_{\Gamma^{+}}(x)$ is an isometry. \end{proof} \section{The partial-isometric crossed product as a full corner.} Suppose $(A,\Gamma^{+},\alpha)$ is a dynamical system. We consider the Hilbert $A$-module $\ell^{2}(\Gamma^{+},A)=\{f:\Gamma^{+}\rightarrow A:\sum_{x\in\Gamma^{+}} f(x)^{*}f(x) \text{ converges in the norm of }A \}$ with the module structure: $(f\cdot a)(x) =f(x)a$ and $\langle f,g\rangle=\sum_{x\in\Gamma^{+}} f(x)^{*}g(x)$ for $f,g\in\ell^{2}(\Gamma^{+},A)$ and $a\in A$. One can also want to consider the Hilbert $A$-module $\ell^{2}(\Gamma^{+})\otimes A$, the completion of the vector space tensor product $\ell^{2}(\Gamma^{+})\odot A$, that has a right (incomplete) inner product $A$-module structure: $(x\otimes a)\cdot b=x\otimes ab$ and $\langle x\otimes a,y\otimes b\rangle=(y|x) a^{*}b$ for $x,y\in \ell^{2}(\Gamma^{+})$ and $a, b\in A$. The two modules are naturally isomorphic via the map defined by $\phi: x\otimes a\mapsto \phi(x\otimes a)(t)=x(t)a$ for $x\in\ell^{2}(\Gamma^{+}), t\in\Gamma^{+}, a\in A$. 
Let $\pi_{\alpha}:A\rightarrow {\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$ be a map of $A$ into the $C^*$-algebra ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$ of adjointable operators on $\ell^{2}(\Gamma^{+},A)$, defined by \[ (\pi_{\alpha}(a)f)(t)=\alpha_{t}(a)f(t) \text{ for } a\in A, f\in \ell^{2}(\Gamma^{+},A). \] It is a well-defined map as we can see that $\pi_{\alpha}(a)f\in \ell^{2}(\Gamma^{+},A)$: \[ \sum_{t\in\Gamma^{+}}(\alpha_{t}(a)f(t))^{*}(\alpha_{t}(a)f(t))=\sum_{t\in\Gamma^{+}}f(t)^{*}\alpha_{t}(a^{*}a)f(t)\le \|\alpha_{t}(a^{*}a)\|\sum_{t\in\Gamma^{+}}f(t)^{*}f(t).\] Moreover $\pi_{\alpha}$ is an injective *-homomorphism, which could be degenerate (for example when each of endomorphism $\alpha_{t}$ acts on a unital algebra $A$ and $\alpha_{t}(1)\neq 1$). Let $S\in {\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$ defined by \[ S_{t}(f)(i)=\left\{ \begin{array}{ll} f(i-t) & \text{ if } i\ge t \\ 0 & \text{ if } i<t. \end{array} \right. \] Then $S_{t}^{*}S_{t}=1$, $S_{t}S_{t}^{*} \neq 1$, and the pair $(\pi_{\alpha},S)$ satisfies the following equations: \begin{equation}\label{pii-S} \pi_{\alpha}(a)S_{t}=S_{t}\pi_{\alpha}(\alpha_{t}(a)) \text{ and } (1-S_{t}S_{t}^{*}) \pi_{\alpha}(a)=\pi_{\alpha}(a)(1-S_{t}S_{t}^{*}) \text{ for all } a\in A, t\in\Gamma^{+}. \end{equation} Next we consider the vector subspace of ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$ spanned by $\{S_{x}\pi_{\alpha}(a)S_{y}^{*}: a\in A, x,y\in\Gamma^{+}\}$. Using the equations in (\ref{pii-S}), one can see this space is closed under the multiplication and adjoint, we therefore have a $C^*$-subalgebra of ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$, namely \begin{equation}\label{t-alpha} {\mathcal{T}}_{\alpha}:=\overline{\operatorname{span}}\{S_{x}\pi_{\alpha}(a)S_{y}^{*}: a\in A, x,y\in\Gamma^{+}\}. \end{equation} One can see that $x\in\Gamma^{+}\mapsto S_{x}\in M({\mathcal{T}}_{\alpha})$ is a semigroup of non unitary isometries, and $\pi_{\alpha}(A)\subseteq{\mathcal{T}}_{\alpha}$. We show in Lemma \ref{pi-alpha-bar} that $\pi_{\alpha}$ extends to the strictly continuous homomorphism $\overline{\pi}_{\alpha}$ on the multiplier algebra $M(A)$, and the equations in (\ref{pii-S}) remain valid. The algebra ${\mathcal{T}}_{\alpha}$ defined in (\ref{t-alpha}) satisfies the following natural properties. If $(A,\Gamma^{+},\alpha)$ and $(B,\Gamma^{+},\beta)$ are two dynamical systems with extendible endomorphism actions, let $S_{x}\pi_{\alpha}(a)S_{y}^{*}$ and $T_{x}\pi_{\beta}(b)T_{y}^{*}$ denote spanning elements for ${\mathcal{T}}_{\alpha}$ and ${\mathcal{T}}_{\beta}$ respectively. If $\phi:A\rightarrow B$ is a non degenerate homomorphism such that $\phi\circ \alpha_{t}=\beta_{t}\circ\phi$ for every $t\in\Gamma^{+}$, then by using the identification $\ell^{2}(\Gamma^{+},A)\otimes_{A} B~\simeq~\ell^{2}(\Gamma^{+},B)$, we have a homomorphism $\tau_{\phi}:{\mathcal{T}}_{\alpha}\rightarrow {\mathcal{T}}_{\beta}$ which satisfies $\tau_{\phi}(S_{x}\pi_{\alpha}(a)S_{y}^{*})=T_{x}\pi_{\beta}(\phi(a))T_{y}^{*}$ for all $a\in A$ and $x,y\in\Gamma^{+}$. Note that if $\phi$ is injective then so is $\tau_{\phi}$. This property is consistent with the extendibility of endomorphism $\alpha_{t}$ and $\beta_{t}$. Since the canonical map $\iota_{A}:A\rightarrow M(A)$ is injective and non degenerate, it follows that we have an injective homomorphism $\tau_{\iota_{A}}: {\mathcal{T}}_{\alpha} \rightarrow {\mathcal{T}}_{\overline{\alpha}}$ such that $\tau_{\iota_{A}}({\mathcal{T}}_{\alpha})$ is an ideal of ${\mathcal{T}}_{\overline{\alpha}}$. 
Moreover since the non degenerate homomorphism $\phi:A\rightarrow B$ extends to $\overline{\phi}$ on the multiplier algebras which satisfies $\overline{\phi}\circ\overline{\alpha}_{t}=\overline{\beta}_{t}\circ\overline{\phi}~\forall t\in\Gamma^{+}$, therefore $\overline{\phi}$ induces the homomorphism $\tau_{\overline{\phi}}:{\mathcal{T}}_{\overline{\alpha}}\rightarrow {\mathcal{T}}_{\overline{\beta}}$, and it satisfies $\tau_{\overline{\phi}}\circ \tau_{\iota_{A}}=\tau_{\iota_{B}}\circ \tau_{\phi}$. \begin{lemma}\label{pi-alpha-bar} The homomorphism $\pi_{\alpha}:A\rightarrow M({\mathcal{T}}_{\alpha})$ extends to the strictly continuous homomorphism $\overline{\pi}_{\alpha}$ on the multiplier algebra $M(A)$, such that the pair $(\overline{\pi}_{\alpha},S)$ satisfies $\overline{\pi}_{\alpha}(m) S_{t}=S_{t}\overline{\pi}_{\alpha}(\overline{\alpha}_{t}(m))$ and $(1-S_{t}S_{t}^{*}) \overline{\pi}_{\alpha}(m)=\overline{\pi}_{\alpha}(m)(1-S_{t}S_{t}^{*})$ for all $m\in M(A)$ and $t\in\Gamma^{+}.$ \end{lemma} \begin{proof} We want to find a projection $p\in M({\mathcal{T}}_{\alpha})$ such that $\pi_{\alpha}(a_{\lambda})$ converges strictly to $p$ in $M({\mathcal{T}}_{\alpha})$ for an approximate identity $(a_{\lambda})$ in $A$. Consider the map $p$ defined on $\ell^{2}(\Gamma^{+},A)$ by \[ (p (f))(t)=\overline{\alpha}_{t}(1)f(t). \] First we clarify that $p(f)$ belongs to $\ell^{2}(\Gamma^{+},A)$ for all $f\in \ell^{2}(\Gamma^{+},A)$. Let $t\in\Gamma^{+}$, then we have \begin{align*} (p(f))(t)^{*}(p(f))(t)=(\overline{\alpha}_{t}(1)f(t))^{*}(\overline{\alpha}_{t}(1)f(t))=f(t)^{*}\overline{\alpha}_{t}(1)f(t). \end{align*} Since $\overline{\alpha}_{t}(1)$ is a positive element of $M(A)$, it follows that \[ f(t)^{*}\overline{\alpha}_{t}(1)f(t)\le \|\overline{\alpha}_{t}(1)\|f(t)^{*} f(t)\le f(t)^{*} f(t). \] Consequently $0\le \sum_{t\in F} (p(f))(t)^{*}p(f)(t)\le \sum_{t\in F}f(t)^{*} f(t)$ for every finite set $F\subset \Gamma^{+}$. Moreover we know that the sequence of partial sums of $\sum_{t\in\Gamma^{+}} f(t)^{*} f(t)$ is Cauchy in $A$ because $f\in \ell^{2}(\Gamma^{+},A)$. Therefore $\sum_{t\in\Gamma^{+}} (p(f))(t)^{*}p(f)(t)$ converges in $A$, and hence $p(f)\in \ell^{2}(\Gamma^{+},A)$. On can see from the definition of $p$ that it is a linear map, and the computations below show it is adjointable, which particularly it satisfies $p^{*}=p$ and $p^{2}=p$. So $p$ is a projection in ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$: \begin{align*} \langle p(f),g\rangle & = \sum_{t\in\Gamma^{+}} (p(f)(t))^{*}g(t) = \sum_{t\in\Gamma^{+}} (\overline{\alpha}_{t}(1)f(t))^{*}g(t) = \sum_{t\in\Gamma^{+}} f(t)^{*}\overline{\alpha}_{t}(1)g(t) \\ & = \sum_{t\in\Gamma^{+}} f(t)^{*} (p(g)(t)) = \langle f, p(g)\rangle. \end{align*} To see that $p$ belongs to $M({\mathcal{T}}_{\alpha})$, a direct computation on every $f\in \ell^{2}(\Gamma^{+},A)$ shows that $[(p~(S_{x}\pi_{\alpha}(a)S_{y}^{*}))~f](t)=[S_{x}\pi_{\alpha}(\overline{\alpha}_{x}(1)a)S_{y}^{*}~ f](t)$ and $[((S_{x}\pi_{\alpha}(a)S_{y}^{*})~p)~f](t)=[S_{x}\pi_{\alpha}(a \overline{\alpha}_{y}(1))S_{y}^{*}~ f](t)$. Thus $p$ multiples every spanning element of ${\mathcal{T}}_{\alpha}$ into itself, so $p\in M({\mathcal{T}}_{\alpha})$. Now we want to prove that $(\pi_{\alpha}(a_{\lambda}))_{\lambda\in\Lambda}$ converges strictly to $p$ in $M({\mathcal{T}}_{\alpha})$. 
For this we show that $\pi_{\alpha}(a_{\lambda})S_{x}\pi_{\alpha}(a)S_{y}^{*}$ and $S_{x}\pi_{\alpha}(a)S_{y}^{*}\pi_{\alpha}(a_{\lambda})$ converge in ${\mathcal{T}}_{\alpha}$ to $p~S_{x}\pi_{\alpha}(a)S_{y}^{*}$ and $S_{x}\pi_{\alpha}(a)S_{y}^{*}~p$ respectively. Note that $\pi_{\alpha}(a_{\lambda})S_{x}\pi_{\alpha}(a)S_{y}^{*}=S_{x}\pi_{\alpha}(\alpha_{x}(a_{\lambda})a)S_{y}^{*}\in{\mathcal{T}}_{\alpha}$ and $S_{x}\pi_{\alpha}(a)S_{y}^{*}\pi_{\alpha}(a_{\lambda})=S_{x}\pi_{\alpha}(a\alpha_{y}(a_{\lambda}))S_{y}^{*} \in {\mathcal{T}}_{\alpha}$. Since $\alpha_{x}(a_{\lambda})a\rightarrow \overline{\alpha}_{x}(1)a$ in $A$ by the extendibility of $\alpha_{x}$, it follows that $S_{x}\pi_{\alpha}(\alpha_{x}(a_{\lambda})a)S_{y}^{*} \rightarrow S_{x}\pi_{\alpha}(\overline{\alpha}_{x}(1)a) S_{y}^{*}=p~(S_{x}\pi_{\alpha}(a)S_{y}^{*})$ and $S_{x}\pi_{\alpha}(a\alpha_{y}(a_{\lambda}))S_{y}^{*}\rightarrow S_{x}\pi_{\alpha}(a\overline{\alpha}_{y}(1))S_{y}^{*}= (S_{x}\pi_{\alpha}(a)S_{y}^{*})~p$ in ${\mathcal{T}}_{\alpha}$. Thus we have shown that $\pi_{\alpha}$ is extendible, and therefore we have $\overline{\pi}_{\alpha}(1_{M(A)})=p$. Next we want to clarify the equation $\overline{\pi}_{\alpha}(m)S_{x}=S_{x}\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(m))$ in $M({\mathcal{T}}_{\alpha})$. Let $(a_{\lambda})$ be an approximate identity for $A$. The extendibility of $\pi_{\alpha}$ implies $\pi_{\alpha}(a_{\lambda}m)\rightarrow \overline{\pi}_{\alpha}(m)$ strictly in $M({\mathcal{T}}_{\alpha})$, and hence $\pi_{\alpha}(a_{\lambda}m)S_{x}\rightarrow \overline{\pi}_{\alpha}(m)S_{x}$ strictly in $M({\mathcal{T}}_{\alpha})$. But $\pi_{\alpha}(a_{\lambda}m)S_{x}=S_{x}\pi_{\alpha}(\alpha_{x}(a_{\lambda}m))$ converges strictly to $S_{x}\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(m))$ in $M({\mathcal{T}}_{\alpha})$. Therefore $\overline{\pi}_{\alpha}(m)S_{x}=S_{x}\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(m))$. Similar arguments show that $\overline{\pi}_{\alpha}(m)(1-S_{t}S_{t}^{*})=(1-S_{t}S_{t}^{*})\overline{\pi}_{\alpha}(m)$ in $M({\mathcal{T}}_{\alpha})$. \end{proof} We have already shown that $\pi_{\alpha}:A\rightarrow M({\mathcal{T}}_{\alpha})$ is extendible in Lemma \ref{pi-alpha-bar}. Therefore we have a projection $\overline{\pi}_{\alpha}(1_{M(A)})=p$ in $M({\mathcal{T}}_{\alpha})$. Note that $p$ is the identity of $pM({\mathcal{T}}_{\alpha})p$, and $\pi_{\alpha}(a)=\pi_{\alpha}(1_{M(A)}a1_{M(A)})=p\pi_{\alpha}(a)p\in pM({\mathcal{T}}_{\alpha})p$. We claim that the homomorphism $\pi_{\alpha}:A\rightarrow pM({\mathcal{T}}_{\alpha})p$ is non degenerate. To see this, let $(a_{\lambda})$ be an approximate identity for $A$, and $\xi:=S_{x}\pi_{\alpha}(b)S_{y}^{*}$. Then $\pi_{\alpha}(a_{\lambda})p\xi p=S_{x}\pi_{\alpha}(\alpha_{x}(a_{\lambda})b)S_{y}^{*}p$ converges to $S_{x}\pi_{\alpha}(\overline{\alpha}_{x}(1)b)S_{y}p=p\xi p$ in $p{\mathcal{T}}_{\alpha}p$. Similar arguments show that $p\xi p \pi_{\alpha}(a_{\lambda})\rightarrow p\xi p$ in $p{\mathcal{T}}_{\alpha}p$. In the next proposition we show that the algebra $p{\mathcal{T}}_{\alpha}p$ is a partial-isometric crossed product of $(A,\Gamma^{+},\alpha)$. \begin{prop}\label{piso-ptp} Suppose $(A,\Gamma^{+},\alpha)$ is a system such that every $\alpha_{x}\in \operatorname{End}(A)$ is extendible. 
Let $p=\overline{\pi}_{\alpha}(1_{M(A)})$, and let \[ k_{A}: A \rightarrow p{\mathcal{T}}_{\alpha}p \quad \text{and} \quad w:\Gamma^{+} \rightarrow M(p{\mathcal{T}}_{\alpha}p)\] be the maps defined by \[ k_{A}(a)= \pi_{\alpha}(a) \quad \text{and} \quad w_{x}=pS_{x}^{*}p. \] Then the triple $(p{\mathcal{T}}_{\alpha}p,k_{A},w)$ is a partial-isometric crossed product of $(A,\Gamma^{+},\alpha)$, and therefore $\psi:=k_{A}\times w: (A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},i_{A},v) \rightarrow p{\mathcal{T}}_{\alpha}p$ is an isomorphism which satisfies $\psi(i_{A}(a))=k_{A}(a)$ and $\psi(v_{x})=w_{x}$. Moreover, the crossed product $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ is Morita equivalent to the algebra ${\mathcal{T}}_{\alpha}$. \end{prop} Before we prove the proposition, we show the following lemma. \begin{lemma}\label{phi-injective} The pair $(k_{A},w)$ forms a covariant partial-isometric representation of $(A,\Gamma^{+},\alpha)$ in $p{\mathcal{T}}_{\alpha}p$, and that the homomorphism $\varphi:= k_{A}\times w: A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+} \rightarrow p{\mathcal{T}}_{\alpha}p$ is injective. \end{lemma} \begin{proof} Each of $w_{x}$ is a partial isometry: $w_{x}=pS_{x}^{*}p=\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*}\Rightarrow w_{x} w_{x}^{*} w_{x}=\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*}=w_{x}$, and \[ w_{x}w_{y} = \overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*}\overline{\pi}_{\alpha}(\overline{\alpha}_{y}(1))S_{y}^{*} = \overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))\overline{\pi}_{\alpha}(\overline{\alpha}_{x+y}(1))S_{x+y}^{*} = w_{x+y} \quad \text{for } x,y \in\Gamma^{+}. \] The computations below show that $(k_{A},w)$ satisfies the partial-isometric covariance relations: \begin{align*} w_{x}k_{A}(a)w_{x}^{*} & =\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*}[\pi_{\alpha}(a)S_{x}]\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1)) \\ & = \overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))\pi_{\alpha}(\alpha_{x}(a))\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1)) = \pi_{\alpha}(\alpha_{x}(a)) =k_{A}(\alpha_{x}(a)), \end{align*} and \begin{align*} w_{x}^{*}w_{x}k_{A}(a) & =S_{x}\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*}\pi_{\alpha}(a) = S_{x} \pi_{\alpha}(\overline{\alpha}_{x}(1)\alpha_{x}(a))S_{x}^{*} \\ & = S_{x} \pi_{\alpha}(\alpha_{x}(a)\overline{\alpha}_{x}(1))S_{x}^{*} =S_{x} \pi_{\alpha}(\alpha_{x}(a))\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*}\\ & = \pi_{\alpha}(a)S_{x}\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*}= \pi_{\alpha}(a)w_{x}^{*}w_{x}= k_{A}(a)w_{x}^{*}w_{x}. \end{align*} So we get a non degenerate homomorphism $\varphi:=k_{A}\times w: A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+} \rightarrow p{\mathcal{T}}_{\alpha}p$. We want to see it is injective. Put $p{\mathcal{T}}_{\alpha}p$ by a faithful and non degenerate representation $\gamma$ into a Hilbert space $H$. Then we want to prove that the representation $\gamma\circ \varphi$ of $(A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},i_{A},v)$ on $H$ is faithful. Let $\sigma=\gamma\circ \varphi\circ i_{A}$ and $t=\overline{\gamma\circ \varphi}\circ v$. By \cite[Theorem 4.8]{LR}, we have to show that $\sigma$ acts faithfully on the range of $(1-t_{x}^{*}t_{x})$ for every $x>0$ in $\Gamma^{+}$. If $x>0$ in $\Gamma^{+}$, $a\in A$, and $\sigma(a)|_{{\rm range}(1-t_{x}^{*}t_{x})}=0$, then we want to see that $a=0$. 
First note that $\sigma(a)(1-t_{x}^{*}t_{x})=\gamma\circ\varphi(i_{A}(a)(1-v_{x}^{*}v_{x}))$, and \begin{align*} \varphi(i_{A}(a)(1-v_{x}^{*}v_{x})) & =\varphi(i_{A}(a))(\overline{\varphi}(1)-\varphi(v_{x}^{*}v_{x})) =\varphi(i_{A}(a))(p-\overline{\varphi}(v_{x}^{*})\overline{\varphi}(v_{x})) \\ & = k_{A}(a)(p-w_{x}^{*}w_{x}) \\ & = \pi_{\alpha}(a)(\overline{\pi}_{\alpha}(1)-S_{x}\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))\overline{\pi}_{\alpha}(\overline{\alpha}_{x}(1))S_{x}^{*})\\ & =\pi_{\alpha}(a)(\overline{\pi}_{\alpha}(1)-\overline{\pi}_{\alpha}(1)S_{x}S_{x}^{*}\overline{\pi}_{\alpha}(1)) \\ & =\pi_{\alpha}(a)(1-S_{x}S_{x}^{*})\overline{\pi}_{\alpha}(1) =\pi_{\alpha}(a)\overline{\pi}_{\alpha}(1)(1-S_{x}S_{x}^{*}) \\ & =\pi_{\alpha}(a)(1-S_{x}S_{x}^{*}). \end{align*} So $\sigma(a)(1-t_{x}^{*}t_{x})=0$ implies $\pi_{\alpha}(a)(1-S_{x}S_{x}^{*})=0$ in ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$. But for $f\in \ell^{2}(\Gamma^{+},A)$, we have \[ ((1-S_{x}S_{x}^{*})f)(y)=\left\{\begin{array}{ll} 0 & \text{ for } y\ge x >0 \\ f(y) & \text{ for } y< x. \end{array}\right. \] Thus evaluating the operator $\pi_{\alpha}(a)(1-S_{x}S_{x}^{*})$ on a chosen element $f\in \ell^{2}(\Gamma^{+},A)$ where $f(y)=a^{*}$ for $y=0$ and $f(y)=0$ for $y\neq 0$, we get \begin{align*} (\pi_{\alpha}(a)(1-S_{x}S_{x}^{*})(f))(y) & = \left\{\begin{array}{ll} \alpha_{y}(a)f(y) & \text{ for } y=0 \\ 0 & \text{ for } y\neq 0 \end{array}\right. = \left\{\begin{array}{ll} aa^{*} & \text{ for } y=0 \\ 0 & \text{ for } y\neq 0. \end{array}\right. \end{align*} Therefore $aa^{*}=0\in A$, and hence $a=0$. \end{proof} \begin{proof}[Proof of Proposition \ref{piso-ptp}] Let $(\rho,W)$ be a covariant partial-isometric representation of $(A,\Gamma^{+},\alpha)$ on a Hilbert space $H$. We want to construct a non degenerate representation $\Phi$ of $p{\mathcal{T}}_{\alpha}p$ on $H$ such that $\Phi(p S_{i}\pi_{\alpha}(a)S_{j}^{*}p)=W_{i}^{*}\rho(a)W_{j}$ for all $a\in A,i,j\in\Gamma^{+}$. It follows from this equation that $\Phi(k_{A}(a))=\rho(a)$ for all $a\in A$, and $\overline{\Phi}(w_{i})=W_{i}$ for $i\in\Gamma^{+}$ because $\Phi(p\pi_{\alpha}(a_{\lambda})S_{i}^{*}p)=\rho(a_{\lambda})W_{i}$ for all $i\in\Gamma^{+}$, $\rho(a_{\lambda})W_{i}$ converges strongly to $W_{i}$ in $B(H)$, and \begin{align*} \Phi(p\pi_{\alpha}(a_{\lambda})S_{i}^{*}p) & =\Phi(\pi_{\alpha}(a_{\lambda}))\overline{\Phi}(pS_{i}^{*}p) =\rho(a_{\lambda})\overline{\Phi}(pS_{i}^{*}p) \rightarrow \overline{\Phi}(pS_{i}^{*}p) \text{ strongly in } B(H). \end{align*} So we want the representation $\Phi$ to satisfy \[ \Phi\left(\sum \lambda_{i,j} p S_{i}\pi_{\alpha}(a_{i,j})S_{j}^{*}p\right) = \sum \lambda_{i,j}\Phi(p S_{i}\pi_{\alpha}(a_{i,j})S_{j}^{*}p)= \sum \lambda_{i,j}W_{i}^{*}\rho(a_{i,j})W_{j}.\] We prove that this formula gives a well-defined linear map $\Phi$ on $\operatorname{span}\{pS_{i}\pi_{\alpha}(a)S_{j}^{*}p : a\in A, i,j \in\Gamma^{+}\}$, and simultaneously $\Phi$ extends to $p{\mathcal{T}}_{\alpha}p$ by showing that \[ \left\|\sum \lambda_{i,j}W_{i}^{*}\rho(a_{i,j})W_{j}\right\|\le \left\|\sum \lambda_{i,j} p S_{i}\pi_{\alpha}(a_{i,j})S_{j}^{*}p\right\|. 
\] Note that the non degenerate representation $\rho\times W$ of $(A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},i_{A},v)$ on $H$ satisfies $\rho\times W(v_{i}^{*}i_{A}(a)v_{j})=W_{i}^{*}\rho(a)W_{j}$, and the injective homomorphism $\varphi:(A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},i_{A},v)\rightarrow p{\mathcal{T}}_{\alpha}p$ in Lemma \ref{phi-injective} satisfies $\varphi(v_{i}^{*}i_{A}(a)v_{j})=w_{i}^{*}k_{A}(a)w_{j}=pS_{i}\pi_{\alpha}(a)S_{j}^{*}p$. Now we compute \begin{align*} \left\|\sum_{i,j\in\Gamma^{+}} \lambda_{i,j}W_{i}^{*}\rho(a_{i,j})W_{j}\right\| & = \left\|\rho\times W\left(\sum \lambda_{i,j}v_{i}^{*}i_{A}(a_{i,j})v_{j}\right)\right\| \\ & \le \left\|\sum \lambda_{i,j}v_{i}^{*}i_{A}(a_{i,j})v_{j}\right\| \\ & = \left\|\varphi\left(\sum\lambda_{i,j}v_{i}^{*}i_{A}(a_{i,j})v_{j}\right)\right\| \quad \text{by injectivity of } \varphi \\ & = \left\|\sum \lambda_{i,j}pS_{i}\pi_{\alpha}(a_{i,j})S_{j}^{*}p\right\|. \end{align*} Next we verify that $\Phi$ is a *-homomorphism. It certainly preserves the adjoint, and we claim by our arguments below that it also preserves the multiplication. Note that \[ \xi:= (pS_{i}\pi_{\alpha}(a)S_{j}^{*}p)~(pS_{n}\pi_{\alpha}(b)S_{m}^{*}p) =\left\{\begin{array}{ll} pS_{i}\pi_{\alpha}(a\overline{\alpha}_{j}(1)b)S_{m}^{*}p & \text{ for } j=n \\ pS_{i}\pi_{\alpha}(a\alpha_{j-n}(\overline{\alpha}_{n}(1)b))S_{j-n+m}^{*}p & \text{ for } j>n \\ pS_{i+n-j}\pi_{\alpha}(\alpha_{n-j}(a)\overline{\alpha}_{n}(1)b)S_{m}^{*} p & \text{ for } j<n. \end{array}\right. \] Then use the covariance of $(\rho,W)$ to see that $\Phi(\xi)=(W_{i}^{*}\rho(a)W_{j})(W_{n}^{*}\rho(a)W_{m})$ for all cases of $j$ and $n$. So $\Phi$ preserves the multiplication. Thus $\Phi$ is a representation of $p{\mathcal{T}}_{\alpha}p$ on $H$. We want to see that $\Phi$ is non degenerate. The representation $\rho$ of $A$ is non degenerate and $\rho(a)=\Phi(\pi_{\alpha}(a))$, therefore \begin{align*} H & = \overline{\operatorname{span}}\{\rho(a)h: a\in A, h\in H\} & \subset \overline{\operatorname{span}}\{\Phi(pS_{i}\pi_{\alpha}(a)S_{j}^{*}p)h:a\in A,i,j\in\Gamma^{+},h\in H\}, \end{align*} so $\Phi$ is non-degenerate. The $C^*$-algebra $p{\mathcal{T}}_{\alpha}p$ is spanned by $\{w_{i}^{*}i_{A}(a)w_{j}: a\in A,i,j \in\Gamma^{+}\}$ because $w_{i}^{*}i_{A}(a)w_{j}=pS_{i}p\pi_{\alpha}(a)pS_{j}^{*}p=pS_{i}\pi_{\alpha}(a)S_{j}^{*}p$. Thus $p{\mathcal{T}}_{\alpha}p$ and $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ are isomorphic. Finally we prove the fullness of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ in ${\mathcal{T}}_{\alpha}$. It is enough by \cite[Example 3.6]{RW} to show that ${\mathcal{T}}_{\alpha}p{\mathcal{T}}_{\alpha}$ is dense in ${\mathcal{T}}_{\alpha}=\overline{\operatorname{span}}\{S_{i}\pi_{\alpha}(a)S_{j}^{*} : i,j\in\Gamma^{+}, a\in A\}$. Take a spanning element $S_{i}\pi_{\alpha}(a)S_{j}^{*}\in {\mathcal{T}}_{\alpha}$ and an approximate identity $(a_{\lambda})$ for $A$. Then $S_{i}\pi_{\alpha}(a)S_{j}^{*}=\lim_{\lambda} S_{i}\pi_{\alpha}(aa_{\lambda})S_{j}^{*}$, and since $S_{i}\pi_{\alpha}(aa_{\lambda})S_{j}^{*}=S_{i}\pi_{\alpha}(a)S_{0}^{*}pS_{0}\pi_{\alpha}(a_{\lambda})S_{j}^{*}\in {\mathcal{T}}_{\alpha}p{\mathcal{T}}_{\alpha}$, therefore a linear combination of spanning elements in ${\mathcal{T}}_{\alpha}$ can be approximated by elements of ${\mathcal{T}}_{\alpha}p{\mathcal{T}}_{\alpha}$. Thus $\overline{{\mathcal{T}}_{\alpha}p{\mathcal{T}}_{\alpha}}={\mathcal{T}}_{\alpha}$. 
\end{proof} \begin{remark} When dealing with systems $(A,\Gamma^{+},\alpha)$ in which $\overline{\alpha}_{t}(1)=1$, then $p=\overline{\pi}_{\alpha}(1)$ is the identity of ${\mathcal{L}}(\ell^{2}(\Gamma^{+},A))$, and the assertion of Proposition \ref{piso-ptp} says that $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ is isomorphic to ${\mathcal{T}}_{\alpha}$. \end{remark} \section{The partial-isometric crossed product of a system by a single endomorphism.} In this section we consider a system $(A,{\mathbb N},\alpha)$ of a (non unital) $C^*$-algebra $A$ and an action $\alpha$ of ${\mathbb N}$ by extendible endomorphisms of $A$. The module $\ell^{2}({\mathbb N},A)$ is the vector space of sequences $(x_{n})$ such that the series $\sum_{n\in{\mathbb N}} x_{n}^{*}x_{n}$ converges in the norm of $A$, with the module structure $(x_{n})\cdot a =(x_{n}a)$, and the inner product $\langle(x_{n}),(y_{n})\rangle=\sum_{n\in{\mathbb N}} x_{n}^{*}y_{n}$. The homomorphism $\pi_{\alpha}:A\rightarrow {\mathcal{L}}(\ell^{2}({\mathbb N},A))$ defined by $\pi_{\alpha}(a)(x_{n})=(\alpha_{n}(a)x_{n})$ is injective, and together with the non unitary isometry $S\in {\mathcal{L}}(\ell^{2}({\mathbb N},A))$ \[ S(x_{0},x_{1},x_{2},\cdots)=(0,x_{0},x_{1},x_{2},\cdots) \] satisfy the following equation \begin{equation}\label{pi-S} \pi_{\alpha}(a) S_{i} = S_{i}\pi_{\alpha}(\alpha_{i}(a))\quad \text{for all } a\in A, i\in{\mathbb N}. \end{equation} Note that $S_{n}\pi_{\alpha}(ab^{*})(1-SS^{*})S_{m}^{*}=\theta_{f,g}$ where $f(n)=a$ and $f(i)=0$ for $i\neq n$, $g(m)=b$ and $g(i)=0$ for $i\neq m$. So we can identify the $C^*$-algebra ${\mathcal{K}}(\ell^{2}({\mathbb N},A))$ as \[ \overline{\operatorname{span}}\{S_{n}\pi_{\alpha}(ab^{*})(1-SS^{*})S_{m}^{*}: n,m\in{\mathbb N}, a,b\in A\}. \] Let $(A\times_{\alpha}^{\operatorname{iso}}{\mathbb N},j_{A},T)$ be the isometric crossed product of $(A,{\mathbb N},\alpha)$, and consider the natural homomorphism $\phi=(i_{A}\times T) :A\times_{\alpha}^{\operatorname{piso}}{\mathbb N}\rightarrow A\times_{\alpha}^{\operatorname{iso}}{\mathbb N}$. From the Proposition \ref{surj}, we know that \begin{equation}\label{kernel-span2} \ker \phi=\overline{\operatorname{span}}\{v_{m}^{*}i_{A}(a)(1-v^{*}v)v_{n}: a\in A, m,n\in{\mathbb N}\}. \end{equation} We show in the next theorem that the ideal $\ker \phi$ is a corner in $A\otimes K(\ell^{2}({\mathbb N}))$. \begin{theorem}\label{ext} Suppose $(A,{\mathbb N},\alpha)$ is a dynamical system in which every $\alpha_{n}:=\alpha^{n}$ extends to a strictly continuous endomorphism on the multiplier algebra $M(A)$ of $A$. Let $p=\overline{\pi}_{\alpha}(1_{M(A)}) \in {\mathcal{L}}(\ell^{2}({\mathbb N},A))$. Then the isomorphism $\psi:A\times_{\alpha}^{\operatorname{piso}}{\mathbb N}\rightarrow p{\mathcal{T}}_{\alpha}p$ in Proposition \ref{piso-ptp} takes the ideal $\ker \phi$ of $A\times_{\alpha}^{\operatorname{piso}}{\mathbb N}$ given by {\rm (\ref{kernel-span2})} isomorphically to the full corner $p~[K(\ell^{2}({\mathbb N},A))]~p$. So there is a short exact sequence of $C^*$-algebras \begin{equation}\label{diagram1} \begin{diagram}\dgARROWLENGTH=0.5\dgARROWLENGTH \node{0} \arrow{e} \node{p~[K(\ell^{2}({\mathbb N},A))]~p} \arrow{e,t}{\Psi} \node{A\times_{\alpha}^{\operatorname{piso}}{\mathbb N}} \arrow{e,t}{\phi} \node{A\times_{\alpha}^{\operatorname{iso}}{\mathbb N}} \arrow{e} \node{0,} \end{diagram} \end{equation} where $\Psi(pS_{m}\pi_{\alpha}(a)(1-SS^{*})S_{n}^{*}p)=v_{m}^{*}i_{A}(a)(1-v^{*}v)v_{n}$. 
\end{theorem} \begin{proof} We compute the image $\psi(\mu)$ of a spanning element $\mu:=v_{m}^{*}i_{A}(a)(1-v^{*}v)v_{n}$ of $\ker\phi$ \[ \psi(\mu) = pS_{m}p\pi_{\alpha}(a)\psi(1-v^{*}v)pS_{n}^{*}p = pS_{m}\pi_{\alpha}(a)(p-pSpS^{*}p)pS_{n}^{*}p, \] \[ pSpS^{*}=(\overline{\pi}_{\alpha}(1)S)\overline{\pi}_{\alpha}(1)S^{*}=S\overline{\pi}_{\alpha}(\overline{\alpha}(1))S^{*}= S(S\overline{\pi}_{\alpha}(\overline{\alpha}(1))^{*}=S(\overline{\pi}_{\alpha}(1)S)^{*}=SS^{*}p\] and \[ pS_{n}^{*}p=\overline{\pi}_{\alpha}(1)(\overline{\pi}_{\alpha}(1)S_{n})^{*} = \overline{\pi}_{\alpha}(1)(S_{n}\overline{\pi}_{\alpha}(\overline{\alpha_{n}}(1)))^{*} =\overline{\pi}_{\alpha}(\overline{\alpha_{n}}(1))S_{n}^{*}=(\overline{\pi}_{\alpha}(1)S_{n})^{*} =S_{n}^{*}p. \] Therefore we have \begin{equation}\label{psi-span} \psi(v_{m}^{*}i_{A}(a)(1-v^{*}v)v_{n}) = p~(S_{m}\pi_{\alpha}(a)(1-SS^{*})S_{n}^{*})~p. \end{equation} Since $S_{m}\pi_{\alpha}(a)(1-SS^{*})S_{n}^{*}=\lim_{\lambda}S_{m}\pi_{\alpha}(aa_{\lambda}^{*})(1-SS^{*})S_{n}^{*}$ where $(a_{\lambda})$ is an approximate identity in $A$, and $S_{m}\pi_{\alpha}(aa_{\lambda}^{*})(1-SS^{*})S_{n}^{*}=\theta_{\xi,\eta_{\lambda}}$ for which $\xi,\eta_{\lambda} \in \ell^{2}({\mathbb N},A)$ are given by $\xi(m)=a$ and $\xi(i)=0$ for $i\neq m$, $\eta_{\lambda}(n)=a_{\lambda}$ and $\eta_{\lambda}(i)=0$ for $i\neq n$, it follows that $\psi(\mu)\in p~[{\mathcal{K}}(\ell^{2}({\mathbb N},A))]~p$. Thus $\psi(\ker \phi)\subset p~[K(\ell^{2}({\mathbb N},A))]~p$. Conversely by similar computations to the way we get the equation (\ref{psi-span}), we have $pS_{m}\pi_{\alpha}(ab^{*})(1-SS^{*})S_{n}^{*}p= \psi(v_{m}^{*}i_{A}(ab^{*})(1-v^{*}v)v_{n})$. Hence $p~[K(\ell^{2}({\mathbb N},A))]~p\subset \psi(\ker \phi)$. This corner is full because the algebra $K(\ell^{2}({\mathbb N},A)) p K(\ell^{2}({\mathbb N},A))$ is dense in $K(\ell^{2}({\mathbb N},A))$: for an approximate identity $(a_{\lambda})$ in $A$, we have \[ S_{m}\pi_{\alpha}(a)(1-SS^{*})S_{n}^{*} = \lim_{\lambda}S_{m}\pi_{\alpha}(aa_{\lambda})(1-SS^{*})S_{n}^{*} \] and $S_{m}\pi_{\alpha}(aa_{\lambda})(1-SS^{*})S_{n}^{*} =(S_{m}\pi_{\alpha}(a)(1-SS^{*})S_{0}^{*}) p (S_{0}\pi_{\alpha}(a_{\lambda})(1-SS^{*})S_{n}^{*}$ is contained in $K(\ell^{2}({\mathbb N},A)) p K(\ell^{2}({\mathbb N},A))$. \end{proof} \begin{remark} The external tensor product $\ell^{2}({\mathbb N})\otimes A$ and $\ell^{2}({\mathbb N},A)$ are isomorphic as Hilbert $A$-modules \cite[Lemma 3.43]{RW}, and the isomorphism is given by \[ \varphi(f\otimes a)(n)=(f(0)a,f(1)a,f(2)a,\cdots) \text{ for }f\in\ell^{2}({\mathbb N}) \text{ and } a\in A. \] The isomorphism $\psi: T\in{\mathcal{L}}(\ell^{2}({\mathbb N},A))~\mapsto~\varphi^{-1} T\varphi \in {\mathcal{L}}(\ell^{2}({\mathbb N})\otimes A)$ satisfies $\psi(\theta_{\xi,\eta})=\varphi^{-1} ~ \theta_{\xi,\eta} ~\varphi=\theta_{\varphi^{-1}(\xi),\varphi^{-1}(\eta)}$ for all $\xi,\eta \in\ell^{2}({\mathbb N},A)$. Therefore $\psi({\mathcal{K}}(\ell^{2}({\mathbb N},A)))={\mathcal{K}}(\ell^{2}({\mathbb N})\otimes A)$. So $\psi(p)=\varphi^{-1} p\varphi=:\tilde{p}$ is a projection in ${\mathcal{L}}(\ell^{2}({\mathbb N})\otimes A)$. To see how $\tilde{p}$ acts on $\ell^{2}({\mathbb N})\otimes A$, let $f\in \ell^{2}({\mathbb N})$, $a\in A$ and $\{e_{n}\}$ the usual orthonormal basis in $\ell^{2}({\mathbb N})$. 
Then $\tilde{p}(f\otimes a)=\varphi^{-1}(p\varphi(f\otimes a))$, and \[ p\varphi(f\otimes a)=(f(i)\overline{\alpha}_{i}(1)a)_{i\in{\mathbb N}}=\lim_{k\rightarrow\infty} \varphi(\sum_{i=0}^{k}f(i)e_{i}\otimes \overline{\alpha}_{i}(1)a). \] Therefore $\tilde{p}(f\otimes a)=\varphi^{-1}(p\varphi(f\otimes a))=\lim_{k\rightarrow\infty}\sum_{i=0}^{k}f(i)e_{i}\otimes\overline{\alpha}_{i}(1)a$, and hence $p[{\mathcal{K}}(\ell^{2}({\mathbb N},A))]p\simeq \tilde{p}[{\mathcal{K}}(\ell^{2}({\mathbb N})\otimes A)]\tilde{p}$. \end{remark} \begin{example}\label{example-LR} We now want to compare our results with \cite[\S 6]{LR}. Consider a system consisting of the $C^*$-algebra ${\bf c}:=\overline{\operatorname{span}}\{1_{n} : n\in{\mathbb N}\}$ of convergent sequences, and the action $\tau$ of ${\mathbb N}$ generated by the usual forward shift (non unital endomorphism) on ${\bf c}$. The ideal ${\bf c_{0}}:=\overline{\operatorname{span}}\{1_{x}-1_{y} : x< y\in{\mathbb N}\}$, of sequences in ${\bf c}$ convergent to $0$, is an extendible $\tau$-invariant in the sense of \cite{Adji1,AH}. So we can also consider the systems $({\bf c_{0}},{\mathbb N},\tau)$ and $({\bf c}/{\bf c_{0}},{\mathbb N},\tilde{\tau})$, where the action $\tilde{\tau}_{n}$ of the quotient ${\bf c}/{\bf c_{0}}$ is given by $\tilde{\tau}_{n}(1_{x}+{\bf c_{0}})=\tau_{n}(1_{x})+{\bf c_{0}}$. We show that the three rows of exact sequences in \cite[Theorem 6.1]{LR}, are given by applying our results to $({\bf c},{\mathbb N},\tau)$, $({\bf c_{0}},{\mathbb N},\tau)$ and $({\bf c}/{\bf c_{0}},{\mathbb N},\tilde{\tau})$. The crossed product ${\bf c} \times_{\tau}^{\operatorname{piso}}{\mathbb N}$ of $({\bf c},{\mathbb N},\tau)$ is, by \cite[Proposition 5.1]{LR}, the universal algebra generated by a power partial isometry $v$: a covariant partial-isometric representation $(i_{c},v)$ of $({\bf c},{\mathbb N},\tau)$ is defined by $i_{c}(1_{n})=v_{n}v_{n}^{*}$. Let $p=\pi_{\tau}(1)$ be the projection in ${\mathcal{T}}_{{\bf c},\tau}$, and the partial-isometric representation $w:n\mapsto w_{n}=pS_{n}^{*}p$ of ${\mathbb N}$ in $p{\mathcal{T}}_{{\bf c},\tau}p$ gives a representation $\pi_{w}$ of ${\bf c}$ where $\pi_{w}(1_{x})=w_{x}w_{x}^{*}$, such that $(\pi_{w},w)$ is a covariant partial-isometric representation of $({\bf c},{\mathbb N},\tau)$ in $p{\mathcal{T}}_{{\bf c},\tau}p$. This $\pi_{w}$ is the homomorphism $k_{\bf c}:{\bf c}\rightarrow p{\mathcal{T}}_{{\bf c},\tau}p$ defined by Proposition \ref{piso-ptp}, and the covariant representation $(\pi_{w},w)$ is $(k_{\bf c},w)$. So $\pi_{w}\times w = k_{\bf c}\times w$ is an isomorphism of ${\bf c} \times_{\tau}^{\operatorname{piso}}{\mathbb N}$ onto the $C^*$-algebra $p{\mathcal{T}}_{{\bf c},\tau}p$. Moreover, the injective homomorphism $\Psi:p~[K(\ell^{2}({\mathbb N},{\bf c}))]~p \rightarrow ({\bf c}\times_{\tau}^{\operatorname{piso}}{\mathbb N},i_{{\bf c}},v)$ in Theorem \ref{ext} satisfies \[ \Psi(pS_{i}\pi_{\tau}(1_{n})(1-SS^{*})S_{j}^{*}p)=v_{i}^{*}i_{{\bf c}}(1_{n})(1-v^{*}v)v_{j}=v_{i}^{*}v_{n}v_{n}^{*}(1-v^{*}v)v_{j}, \] and the latter is a spanning element $g_{i,j}^{n}$ of $\ker\varphi_{T}$ by \cite[Lemma 6.2]{LR}. Consequently the ideal $p~[K(\ell^{2}({\mathbb N},{\bf c}))]~p$, in our Theorem \ref{ext}, is the $C^*$ algebra ${\mathcal{A}}=\pi^{*}(\ker \varphi_{T})$ of \cite[Proposition~6.9]{LR}, where the homomorphism $\varphi_{T}:{\bf c} \times_{\tau}^{\operatorname{piso}}{\mathbb N}\rightarrow {\mathcal{T}}({\mathbb Z})$ is induced by the Toeplitz representation $n \mapsto T_{n}$. 
Now the Toeplitz (isometric) representation $T:n \mapsto T_{n}$ on $\ell^{2}({\mathbb N})$ gives the isomorphism of ${\bf c} \times_{\tau}^{\operatorname{iso}}{\mathbb N}$ onto the Toeplitz algebra ${\mathcal{T}}({\mathbb Z})$, and ${\bf c_{0}} \times_{\tau}^{\operatorname{iso}}{\mathbb N}$ onto the algebra $K(\ell^{2}({\mathbb N}))$ of compact operators on $\ell^{2}({\mathbb N})$. Then the second row exact sequence in \cite[Theorem 6.1]{LR} follows from the commutative diagram: \begin{equation}\label{diagram-c} \begin{diagram}\dgARROWLENGTH=0.5\dgARROWLENGTH \node{0} \arrow{e} \node{p~[K(\ell^{2}({\mathbb N}, {\bf c}))]~p} \arrow{s,r}{\Psi}\arrow{e,t}{\Psi} \node{{\bf c} \times_{\tau}^{\operatorname{piso}}{\mathbb N}} \arrow{s,r}{\operatorname{id}}\arrow{e,t}{\phi} \node{{\bf c} \times_{\tau}^{\operatorname{iso}}{\mathbb N}} \arrow{s,r}{T}\arrow{e} \node{0.} \\ \node{0} \arrow{e} \node{\ker(\varphi_{T})\stackrel{\pi^{*}}{\simeq}{\mathcal{A}}} \arrow{e,t}{(\pi^{*})^{-1}} \node{{\bf c} \times_{\tau}^{\operatorname{piso}}{\mathbb N}} \arrow{e,t}{\varphi_{T}} \node{{\mathcal{T}}({\mathbb Z})} \arrow{e} \node{0.} \end{diagram} \end{equation} \end{example} Next we do similarly for $({\bf c_{0}},{\mathbb N},\tau)$ and $({\bf c}/{\bf c_{0}},{\mathbb N},\tilde{\tau})$ to get the first and third row exact sequences of diagram (6.1) in \cite[Theorem 6.1]{LR}. We know from \cite[Theorem~2.2]{AH} that ${\bf c_{0}}\times_{\tau}^{\operatorname{piso}}{\mathbb N}$ embeds in $({\bf c} \times_{\tau}^{\operatorname{piso}}{\mathbb N},i_{c},v)$ as the ideal $D=\overline{\operatorname{span}}\{v_{i}^{*}i_{c}(1_{s}-1_{t})v_{j} : s<t, i,j \in {\mathbb N}\}$, such that the quotient $({\bf c}\times_{\tau}^{\operatorname{piso}}{\mathbb N})/({\bf c_{0}}\times_{\tau}^{\operatorname{piso}}{\mathbb N})\simeq {\bf c}/{\bf c_{0}}\times_{\tilde{\tau}}^{\operatorname{piso}}{\mathbb N}$. Then the isomorphism $\Phi$ in \cite[Corollary 3.1]{AH} together with the isomorphism $\pi$ in \cite[Proposition 6.9]{LR} give the relations ${\bf c_{0}}\times_{\tau}^{\operatorname{piso}}{\mathbb N}\stackrel{\Phi}{\simeq}\ker(\varphi_{T^{*}})\stackrel{\pi}{\simeq} {\mathcal{A}}$, where the homomorphism $\varphi_{T^{*}}:{\bf c} \times_{\tau}^{\operatorname{piso}}{\mathbb N}\rightarrow {\mathcal{T}}({\mathbb Z})$ is associated to partial-isometric representation $n \mapsto T_{n}^{*}$. Let $q=\overline{\pi}_{\tau}(1_{M({\bf c_{0}})})$ be the projection in $M({\mathcal{T}}_{{\bf c_{0}},\tau})$. Then we have \[ q~[K(\ell^{2}({\mathbb N},{\bf c_{0}}))]~q =\overline{\operatorname{span}}\{q S_{i}\pi_{\tau}(1_{m}-1_{m+1})(1-SS^{*})S_{j}^{*} q: i,j\le m\}, \] and \[ \xi_{ijm}:=\Psi(q S_{i}\pi_{\tau}(1_{m}-1_{m+1})(1-SS^{*})S_{j}^{*} q) = g_{i,j}^{m}-g_{i,j}^{m+1}=f^{m}_{m-i,m-j} - f^{m+1}_{m-i,m-j}\] where $g_{i,j}^{m}$ and $f_{i,j}^{m}$ are defined in \cite[Lemma 6.2]{LR}. So $\xi_{ijm}$ is, by \cite[Lemma 6.4]{LR}, the spanning element of the ideal ${\mathcal{I}}:=\ker(\varphi_{T^{*}})\cap \ker(\varphi_{T})$. 
We use the isomorphism $\pi$ given by \cite[Proposition 6.5]{LR} to identify ${\mathcal{I}}$ with ${\mathcal{A}}_{0}$, to have the commutative diagram: \begin{equation}\label{diagram-c0} \begin{diagram}\dgARROWLENGTH=0.3\dgARROWLENGTH \node{0} \arrow{e} \node{q~[K(\ell^{2}({\mathbb N}, {\bf c_{0}}))]~q} \arrow{s,r}{\Psi} \arrow{e,t}{\Psi} \node{{\bf c_{0}} \times_{\tau}^{\operatorname{piso}}{\mathbb N}} \arrow{s,r}{\Phi}\arrow{e,t}{\phi} \node{{\bf c_{0}} \times_{\tau}^{\operatorname{iso}}{\mathbb N}} \arrow{s,r}{T} \arrow{e} \node{0} \\ \node{0} \arrow{e} \node{{\mathcal{I}}\stackrel{\pi}{\simeq}{\mathcal{A}}_{0}} \arrow{e,t}{{\rm id}} \node{\ker(\varphi_{T^{*}})\stackrel{\pi}{\simeq} {\mathcal{A}}}\arrow{e,t}{\epsilon_{\infty}} \node{{\mathcal{K}}(\ell^{2}({\mathbb N}))}\arrow{e} \node{0} \end{diagram} \end{equation} Finally for the system $({\bf c}/{\bf c_{0}},{\mathbb N},\tilde{\tau})$, we first note that it is equivariant to $({\mathbb C},{\mathbb N},{\rm id})$. So in this case, we have $r K(\ell^{2}(N,{\mathbb C}))r=K(\ell^{2}({\mathbb N}))$, and ${\mathbb C}\times_{\rm id}^{\operatorname{piso}}{\mathbb N} \stackrel{\rho}{\simeq} {\mathcal{T}}({\mathbb Z})$ where the isomorphism $\rho$ is given by the partial-isometric representation $n\mapsto T_{n}^{*}$, and identify $({\mathbb C}\times_{\rm id}^{\operatorname{iso}}{\mathbb N}, j_{{\mathbb N}}) \simeq {\mathbb C}\times_{\rm id}{\mathbb Z}\simeq (C^{*}({\mathbb Z}),u)$ with the algebra $C({\mathbb T})$ of continuous functions on ${\mathbb T}$ using $\delta: j_{{\mathbb N}}(n)\mapsto u_{-n}\in C^{*}({\mathbb Z}) \mapsto (z \mapsto \overline{z}^{n}) \in C({\mathbb T})$. Then we get the third row exact sequence of diagram (6.1) of \cite[Theorem 6.1]{LR}: \begin{equation}\label{diagram-CC} \begin{diagram}\dgARROWLENGTH=0.3\dgARROWLENGTH \node{0} \arrow{e} \node{K(\ell^{2}({\mathbb N}))} \arrow{se}\arrow{e,t}{\Psi} \node{{\mathbb C} \times_{\operatorname{id}}^{\operatorname{piso}}{\mathbb N}} \arrow{s,r}{\rho}\arrow{e,t}{\phi} \node{{\mathbb C} \times_{\operatorname{id}}^{\operatorname{iso}}{\mathbb N}} \arrow{s,r}{\delta}\arrow{e} \node{0} \\ \node{ } \node{ } \node{{\mathcal{T}}({\mathbb Z})} \arrow{e,t}{\psi_{T}} \node{C({\mathbb T})} \arrow{ne} \node{} \end{diagram} \end{equation} \begin{remark} We have seen in Example \ref{example-LR} the three rows exact sequences of \cite[Diagram 6.1]{LR} are computed from our results. The three columns exact sequences can actually be obtained by \cite[Theorem 2.2,Corollary 3.1]{AH}. Although these do not imply the commutativity of all rows and columns (because we have not obtained the analogous theorem of \cite[Theorem 2.2]{AH} for the algebra ${\mathcal{T}}_{(A,{\mathbb N},\alpha)}$), nevertheless it follows from our results that the algebras ${\mathcal{A}}$ and ${\mathcal{A}}_{0}$ appeared in \cite[Diagram 6.1]{LR} are Morita equivalent to ${\bf c}\otimes K(\ell^{2}({\mathbb N}))$ and ${\bf c_{0}}\otimes K(\ell^{2}({\mathbb N}))$ respectively. It is a helpful fact in particular for describing the primitive ideal space of ${\bf c}\times_{\tau}^{\operatorname{piso}}{\mathbb N}$. \end{remark} \begin{example} If $(A,{\mathbb N},\alpha)$ is a system of a $C^{*}$-algebra for which $\overline{\alpha}(1)=1$, then (\ref{diagram1}) is the exact sequence of \cite[Theorem 1.5]{KS}. 
This is because $p=\overline{\pi}_{\alpha}(1)$ is the identity of ${\mathcal{T}}_{(A,{\mathbb N},\alpha)}$, so $A\times_{\alpha}^{\operatorname{piso}}{\mathbb N}$ is isomorphic to ${\mathcal{T}}_{(A,{\mathbb N},\alpha)}$ and $p[{\mathcal{K}}(\ell^{2}({\mathbb N},A))]p={\mathcal{K}}(\ell^{2}({\mathbb N},A))$. Let $(A_{\infty},\beta^{n})_{n}$ be the limit of direct sequence $(A_{n})$ where $A_{n}=A$ for every $n$ and $\alpha_{m-n}:A_{n}\rightarrow A_{m}$ for $n\le m$. All the bonding maps $\beta^{i}:A_{i}\rightarrow A_{\infty}$ extend trivially to the multiplier algebras and preserve the identity. Therefore we have $(A\times^{\operatorname{iso}}_{\alpha}{\mathbb N},j_{A},j_{{\mathbb N}})\simeq (A_{\infty}\times_{\alpha_{\infty}}{\mathbb Z},i_{\infty},u)$ in which the isomorphism is given by $\iota(j_{{\mathbb N}}(n)^{*}j_{A}(a)j_{{\mathbb N}}(m)=u_{n}^{*}i_{\infty}(\beta^{0}(a))u_{m}$, and then the commutative diagram follows: \begin{equation}\label{diagram-KS} \begin{diagram}\dgARROWLENGTH=0.3\dgARROWLENGTH \node{0} \arrow{e} \node{p~[K(\ell^{2}({\mathbb N}, A))]~p} \arrow{s,r}{\operatorname{id}} \arrow{e,t}{\Psi} \node{A\times_{\alpha}^{\operatorname{piso}}{\mathbb N}} \arrow{s,r}{\psi}\arrow{e,t}{\phi} \node{A \times_{\alpha}^{\operatorname{iso}}{\mathbb N}} \arrow{s,r}{\iota} \arrow{e} \node{0} \\ \node{0} \arrow{e} \node{K(\ell^{2}({\mathbb N}, A))} \arrow{e,t}{{\rm id}} \node{{\mathcal{T}}_{(A,{\mathbb N},\alpha)}}\arrow{e,t}{q} \node{A_{\alpha_{\infty}}\times_{\alpha_{\infty}}{\mathbb Z}}\arrow{e} \node{0.} \end{diagram} \end{equation} \end{example} \section{The partial-isometric crossed product of a system by a semigroup of automorphisms.} Suppose $(A,\Gamma^{+},\alpha)$ is a system of an action $\alpha:\Gamma^{+} \rightarrow \operatorname{Aut}{A}$ by automorphisms on $A$, and consider the distinguished system $(B_{\Gamma^{+}},\Gamma^{+},\tau)$ of the commutative $C^*$-algebra $B_{\Gamma^{+}}$ by semigroup of endomorphisms $\tau_{x}\in\operatorname{End}(B_{\Gamma^{+}})$. Then $x\mapsto \tau_{x}\otimes \alpha_{x}^{-1}$ defines an action $\gamma$ of $\Gamma^{+}$ by endomorphisms of $B_{\Gamma^{+}}\otimes A$. So we have a system $(B_{\Gamma^{+}}\otimes A,\Gamma^{+},\gamma)$ by a semigroup of endomorphisms. We prove in the proposition below that the isometric-crossed product $(B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+}$ is $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$. \begin{prop}\label{piso-iso} Suppose $\alpha:\Gamma^{+} \rightarrow \operatorname{Aut}{A}$ is an action by automorphisms on a $C^*$-algebra $A$ of the positive cone $\Gamma^{+}$ of a totally ordered abelian group $\Gamma$. Then the partial-isometric crossed product $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ is isomorphic to the isometric crossed product $((B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+},j)$. More precisely, the $C^{*}$-algebra $(B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+}$ together with a pair of homomorphisms $(k_{A},k_{\Gamma^{+}}):(A,\Gamma^{+},\alpha)\rightarrow M((B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+})$ defined by $k_{A}(a)=j_{B_{\Gamma^{+}}\otimes A}(1\otimes a)$ and $k_{\Gamma^{+}}(x)=j_{\Gamma^{+}}(x)^{*}$ is a partial-isometric crossed product for $(A,\Gamma^{+},\alpha)$. 
\end{prop} \begin{proof} Every $k_{\Gamma^{+}}(x)$ satisfies $k_{\Gamma^{+}}(x)k_{\Gamma^{+}}(x)^{*}=j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)=1$, and $(k_{A},k_{\Gamma^{+}})$ is a partial-isometric covariant representation for $(A,\Gamma^{+},\alpha)$: \begin{align*} j_{B_{\Gamma^{+}}\otimes A}(1\otimes \alpha_{x}(a)) & = j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)j_{B_{\Gamma^{+}}\otimes A}(1\otimes \alpha_{x}(a))j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x) \\ & = j_{\Gamma^{+}}(x)^{*}j_{B_{\Gamma^{+}}\otimes A}(\tau_{x}\otimes \alpha_{x}^{-1}(1\otimes \alpha_{x}(a)))j_{\Gamma^{+}}(x)\\ & = j_{\Gamma^{+}}(x)^{*}j_{B_{\Gamma^{+}}\otimes A}(1_{x}\otimes a)j_{\Gamma^{+}}(x) = j_{\Gamma^{+}}(x)^{*}j_{B_{\Gamma^{+}}}(1_{x})j_{A}(a)j_{\Gamma^{+}}(x)\\ & = j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}j_{A}(a)j_{\Gamma^{+}}(x) = j_{\Gamma^{+}}(x)^{*}j_{B_{\Gamma^{+}}\otimes A}(1\otimes a)j_{\Gamma^{+}}(x), \end{align*} and $j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}j_{B_{\Gamma^{+}}\otimes A}(1\otimes a)= j_{B_{\Gamma^{+}}\otimes A}(1\otimes a)j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}$ because $j_{B_{\Gamma^{+}}\otimes A}(1_{x}\otimes a)=j_{A}(a)j_{B_{\Gamma^{+}}}(1_{x})$ Suppose $(\pi,V)$ is a partial-isometric covariant representation of $(A,\Gamma^{+},\alpha)$ on $H$. We want to get a non degenerate representation $\pi\times V$ of the isometric crossed product $(B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+}$ which satisfies $(\pi\times V)\circ k_{A}(a)=\pi(a)$ and $(\overline{\pi\times V})\circ k_{\Gamma^{+}}(x)=V_{x}$ for all $a\in A$ and $x\in\Gamma^{+}$. Since $V_{x}V_{x}^{*}=1$ for all $x\in\Gamma^{+}$, $x\mapsto V_{x}^{*}$ is an isometric representation of $\Gamma^{+}$, and therefore $\pi_{V^{*}}(1_{x})=V_{x}^{*}V_{x}$ defines a representation $\pi_{V^{*}}$ of $B_{\Gamma^{+}}$ such that $(\pi_{V^{*}},V^{*})$ is an isometric covariant representation of $(B_{\Gamma^{+}},\Gamma^{+},\tau)$. Moreover $\pi_{V^{*}}$ commutes with $\pi$ because $\pi_{V^{*}}(1_{x})\pi(a)=V_{x}^{*}V_{x}\pi(a)=\pi(a)V_{x}^{*}V_{x}=\pi(a)\pi_{V^{*}}(1_{x})$. Thus $\pi_{V^{*}}\otimes \pi$ is a non degenerate representation of $B_{\Gamma^{+}}\otimes A$ on $H$, and $\pi_{V^{*}}\otimes \pi(1_{y}\otimes a)=\pi_{V^{*}}(1_{y})\pi(a)=\pi(a)\pi_{V^{*}}(1_{y})$. We clarify that $(\pi_{V^{*}}\otimes \pi, V^{*})$ is in fact an isometric covariant representation of the system $(B_{\Gamma^{+}}\otimes A,\Gamma^{+},\gamma)$: \begin{align*} \pi_{V^{*}}\otimes \pi(\tau_{x}\otimes \alpha_{x}^{-1}(1_{y}\otimes a)) & = \pi_{V^{*}}(\tau_{x}(1_{y}))\pi(\alpha_{x}^{-1}(a)) =V_{x}^{*}\pi_{V^{*}}(1_{y})V_{x}\pi(\alpha_{x}^{-1}(a)) \\ & =V_{x}^{*}\pi_{V^{*}}(1_{y})\pi(\alpha_{x}(\alpha_{x}^{-1}(a)))V_{x} \mbox{ by piso covariance of } (\pi,V) \\ & =V_{x}^{*}\pi_{V^{*}}(1_{y})\pi(a)V_{x} =V_{x}^{*}(\pi_{V^{*}}\otimes \pi)(1_{y}\otimes a)V_{x}. \end{align*} Then $\rho:=(\pi_{V^{*}}\otimes \pi)\times V^{*}$ is the non degenerate representation of $(B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$ which satisfies the requirements \[ \rho(k_{A}(a))=\rho(j_{B_{\Gamma^{+}}\otimes A }(1\otimes a))=\pi_{V^{*}}\otimes\pi(1\otimes a)=\pi(a) \] and $\overline{\rho}(k_{\Gamma^{+}}(x))=\overline{\rho}(j_{\Gamma^{+}}(x)^{*})=V_{x}$. 
Finally, the span of $\{k_{\Gamma^{+}}(x)^{*}k_{A}(a)k_{\Gamma^{+}}(y)\}$ is dense in $(B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$ because \[ k_{\Gamma^{+}}(x)^{*}k_{A}(a)k_{\Gamma^{+}}(y)=j_{\Gamma^{+}}(y)^{*}j_{B_{\Gamma^{+}}\otimes A }(1_{x+y}\otimes \alpha_{x+y}^{-1}(a))j_{\Gamma^{+}}(x). \] \end{proof} Proposition \ref{piso-iso} gives an isomorphism $k: (A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+},i)\rightarrow ((B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+},j)$ which satisfies $k(i_{\Gamma^{+}}(x))=j_{\Gamma^{+}}(x)^{*}$ and $k(i_{A}(a))=j_{B_{\Gamma^{+}}\otimes A}(1\otimes a)$. This isomorphism maps the ideal $\ker\phi$ of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ in Proposition \ref{kernel-piso-iso} isomorphically onto the ideal \[ {\mathcal I} := \overline{\operatorname{span}}\{j_{B_{\Gamma^{+}\otimes A}}(1\otimes a) j_{\Gamma^{+}}(x) [1-j_{\Gamma^{+}}(t)j_{\Gamma^{+}}(t)^{*}] j_{\Gamma^{+}}(y)^{*} : a\in A, x,y,t \in\Gamma^{+}\} \] of $(B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+}$. We identify this ideal in Lemma \ref{ker-iso}. First we need to recall from \cite{Adji1} the notion of extendible ideals, it was shown there that \[ B_{\Gamma^{+},\infty}:=\overline{\operatorname{span}}\{1_{x}-1_{y} : x<y \in\Gamma^{+}\} \] is an extendible $\tau$-invariant ideal of $B_{\Gamma^{+}}$. Thus $B_{\Gamma^{+},\infty}\otimes A$ is an extendible $\gamma$-invariant ideal of $B_{\Gamma^{+}}\otimes A$. We can therefore consider the system $(B_{\Gamma^{+},\infty}\otimes A),\Gamma^{+},\gamma)$. Extendibility of ideal is required to assure the crossed product $(B_{\Gamma^{+},\infty}\otimes A)\times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$ embeds naturally as an ideal of $(B_{\Gamma^{+}}\otimes A)\times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$ such that the quotient is the crossed product of the quotient algebra $B_{\Gamma^{+}}\otimes A /B_{\Gamma^{+},\infty}\otimes A$ \cite[Theorem 3.1]{Adji1}. \begin{lemma} \label{ker-iso} The ideal ${\mathcal I}$ is $(B_{\Gamma^{+},\infty}\otimes A) \times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$. \end{lemma} \begin{proof} We know from \cite[Theorem 3.1]{Adji1} that the ideal $(B_{\Gamma^{+},\infty}\otimes A)\times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$ is spanned by \[ \{j_{\Gamma^{+}}(v)^{*} j_{B_{\Gamma^{+}\otimes A}}((1_{s}-1_{t})\otimes a)j_{\Gamma^{+}}(w): s< t, v,w \text{ in } \Gamma^{+}, a\in A\}.\] So to prove the Lemma, it is enough to show that ${\mathcal I}$ and $(B_{\Gamma^{+},\infty}\otimes A)\times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$ contain each other. We compute on their generator elements in next paragraph using the fact that the covariant representation $(j_{B_{\Gamma^{+}}\otimes A},j_{\Gamma^{+}})$ gives a unital homomorphism $j_{B_{\Gamma^{+}}}$ which commutes with the non degenerate homomorphism $j_{A}$, and that the pair $(j_{B_{\Gamma^{+}}},j_{\Gamma^{+}})$ is a covariant representation of $(B_{\Gamma^{+}},\Gamma^{+},\tau)$. Each isometry $j_{\Gamma^{+}}(x)$ is not a unitary, so the pair $(j_{A},j_{\Gamma^{+}})$ fails to be a covariant representation of $(A,\Gamma^{+},\alpha^{-1})$. However it satisfies the equation $j_{A}(\alpha^{-1}_{x}(a))j_{\Gamma^{+}}(x)=j_{\Gamma^{+}}(x)j_{A}(a)$ for all $a\in A$ and $x\in\Gamma^{+}$. Let $\xi$ be a spanning element of ${\mathcal I}$. 
If $x<y$ and $t$ are in $\Gamma^{+}$, then $j_{\Gamma^{+}}(y)^{*}=j_{\Gamma^{+}}(x)^{*}j_{\Gamma^{+}}(y-x)^{*}$, and \begin{align*} j_{\Gamma^{+}}(x)[1-j_{\Gamma^{+}}(t)j_{\Gamma^{+}}(t)^{*}] j_{\Gamma^{+}}(y)^{*} & = (j_{\Gamma^{+}}(x)j_{\Gamma^{+}}(x)^{*}-j_{\Gamma^{+}}(x+t)j_{\Gamma^{+}}(x+t)^{*})j_{\Gamma^{+}}(y-x)^{*}\\ & = \overline{j}_{B_{\Gamma^{+}}\otimes A}((1_{x}-1_{x+t})\otimes 1_{M(A)}) j_{\Gamma^{+}}(y-x)^{*}, \end{align*} so \begin{align*} \xi=j_{B_{\Gamma^{+}}\otimes A}((1_{x}-1_{x+t})\otimes a)j_{\Gamma^{+}}(y-x)^{*} & = j_{\Gamma^{+}}(y-x)^{*} j_{B_{\Gamma^{+}}\otimes A}(\gamma_{y-x}((1_{x}-1_{x+t})\otimes a)) \\ & = j_{\Gamma^{+}}(y-x)^{*} j_{B_{\Gamma^{+}}\otimes A}((1_{y}-1_{y+t})\otimes \alpha_{y-x}^{-1}(a)). \end{align*} If $x\ge y$, then $j_{\Gamma^{+}}(x) =j_{\Gamma^{+}}(x-y) j_{\Gamma^{+}}(y)$, and \begin{align*} j_{\Gamma^{+}}(x) & [1-j_{\Gamma^{+}}(t)j_{\Gamma^{+}}(t)^{*}] j_{\Gamma^{+}}(y)^{*} = j_{\Gamma^{+}}(x-y)[j_{\Gamma^{+}}(y)j_{\Gamma^{+}}(y)^{*}-j_{\Gamma^{+}}(y+t)j_{\Gamma^{+}}(y+t)^{*}] \\ & = j_{\Gamma^{+}}(x-y)\overline{j}_{B_{\Gamma^{+}}\otimes A}((1_{y}-1_{y+t})\otimes 1_{M(A)})j_{\Gamma^{+}}(x-y)^{*}j_{\Gamma^{+}}(x-y) \\ & = \overline{j}_{B_{\Gamma^{+}}\otimes A}((1_{x}-1_{x+t})\otimes 1_{M(A)})j_{\Gamma^{+}}(x-y), \end{align*} so $\xi=j_{B_{\Gamma^{+}}\otimes A}((1_{x}-1_{x+t})\otimes a)j_{\Gamma^{+}}(x-y)$, and therefore ${\mathcal I}\subset (B_{\Gamma^{+},\infty}\otimes A)\times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$. For the other inclusion, let $\eta=j_{B_{\Gamma^{+}}\otimes A} ((1_{s}-1_{t})\otimes a)j_{\Gamma^{+}}(x)$ be a generator of $(B_{\Gamma^{+},\infty}\otimes A)\times_{\gamma}^{\operatorname{iso}}\Gamma^{+}$. Then $\eta=j_{A}(a) [j_{\Gamma^{+}}(s)j_{\Gamma^{+}}(s)^{*}-j_{\Gamma^{+}}(t)j_{\Gamma^{+}}(t)^{*}]j_{\Gamma^{+}}(x)$, and a similar computation shows that \begin{align*} & [j_{\Gamma^{+}}(s)j_{\Gamma^{+}}(s)^{*} - j_{\Gamma^{+}}(t)j_{\Gamma^{+}}(t)^{*}] j_{\Gamma^{+}}(x) \\ &=\left\{ \begin{array}{ll} j_{\Gamma^{+}}(s)[1-j_{\Gamma^{+}}(t-s)j_{\Gamma^{+}}(t-s)^{*}]j_{\Gamma^{+}}(s-x)^{*} & \mbox{ for } x\le s< t \\ j_{\Gamma^{+}}(x)[1-j_{\Gamma^{+}}(t-x)j_{\Gamma^{+}}(t-x)^{*}] & \quad \quad s<x <t \\ 0 & \mbox{ for } t=x \mbox{ or } s<t<x, \end{array} \right. \end{align*} which implies that $\eta\in {\mathcal I}$. \end{proof} \bigskip An isometric crossed product is isomorphic to a full corner in the ordinary crossed product by the dilated action. The action $\tau:\Gamma^{+}\rightarrow\operatorname{End}(B_{\Gamma^{+}})$ is dilated to the action $\tau:\Gamma\rightarrow \operatorname{Aut}(B_{\Gamma})$ where $\tau_{s}(1_{x})=1_{x+s}$ acts on the algebra $B_{\Gamma}=\overline{\operatorname{span}}\{1_{x} : x \in\Gamma\}$. We refer to Lemma 3.2 of \cite{Adji2} to see that a dilation of $(B_{\Gamma^{+}}\otimes A,\Gamma^{+},\gamma)$ gives the system $(B_{\Gamma}\otimes A,\Gamma,\gamma_{\infty})$, in which $\gamma_{\infty}=\tau\otimes\alpha^{-1}$ acts by automorphisms on the algebra $B_{\Gamma}\otimes A$. The bonding homomorphism $h_{s}$ for $s\in\Gamma^{+}$, is given by \[ h_{s}: (1_{x}\otimes a)\in B_{\Gamma^{+}}\otimes A ~ \mapsto ~ (1_{x}\otimes a)\in \overline{\operatorname{span}}\{1_{y}: y\ge -s\}\otimes A \hookrightarrow B_{\Gamma}\otimes A. \] This homomorphism extends to the multiplier algebras, we write as $\overline{h}_{0}$, and it carries the identity $1_{0}\otimes 1_{M(A)} \in M(B_{\Gamma^{+}}\otimes A)$ into the projection $\overline{h}_{0}(1_{0}\otimes 1_{M(A)}) \in M(B_{\Gamma}\otimes A)$. 
Let \[ p:=\overline{j}_{B_{\Gamma}\otimes A}(\overline{h}_{0}(1_{0}\otimes 1_{M(A)})) \] be the projection in the crossed product $M((B_{\Gamma}\otimes A)\times_{\gamma_{\infty}}\Gamma)$. Then it follows from \cite[Theorem 2.4]{Adji1} or \cite[Theorem 2.4]{Laca} that $(B_{\Gamma^{+}}\otimes A) \times_{\gamma}^{\operatorname{iso}} \Gamma^{+}$ is isomorphic onto the full corner $p~[(B_{\Gamma}\otimes A)\times_{\gamma_{\infty}}\Gamma]~p$. \begin{cor}\label{morita} There is an isomorphism of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ onto the full corner $p~[(B_{\Gamma}\otimes A)\times_{\gamma_{\infty}}\Gamma]~p$ of the crossed product $(B_{\Gamma}\otimes A)\times_{\gamma_{\infty}}\Gamma$, such that the ideal $\ker\phi$ of $A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}$ in Proposition \ref{kernel-piso-iso} is isomorphic onto the ideal $p~[(B_{\Gamma,\infty}\otimes A)\times_{\gamma_{\infty}}\Gamma]~p$, where $B_{\Gamma,\infty}=\overline{\operatorname{span}}\{1_{s}-1_{t} : s<t \in\Gamma\}$. \end{cor} \begin{cor}\label{trivial-action} Suppose $\alpha:\Gamma^{+}\rightarrow {\rm Aut}(A)$ is the trivial action $\alpha_{x}={\rm identity}$ for all $x$, and let ${\mathcal C}_{\Gamma}$ denote the commutator ideal of the Toeplitz algebra ${\mathcal T}(\Gamma)$. Then there is a short exact sequence \begin{equation}\label{diagram2} \begin{diagram}\dgARROWLENGTH=0.5\dgARROWLENGTH \node{0} \arrow{e} \node{A\otimes {\mathcal C}_{\Gamma}} \arrow{e} \node{A\times_{\alpha}^{\operatorname{piso}}\Gamma^{+}} \arrow{e,t}{\phi} \node{A\times_{\alpha}\Gamma} \arrow{e} \node{0.} \end{diagram} \end{equation} \end{cor} \begin{proof} We have already identified in Lemma \ref{ker-iso} that the ideal ${\mathcal{I}}$ is $(B_{\Gamma^{+},\infty}\otimes A)\times_{\tau\otimes{\rm id}}^{\operatorname{iso}}\Gamma^{+}$. We know that we have a version of \cite[Lemma 2.75]{Dana} for isometric crossed product, which says that if $(C,\Gamma^{+},\gamma)$ is a dynamical system and $D$ is any $C^*$-algebra, then $(C\otimes_{\max}D)\times_{\gamma\otimes{\rm id}}^{\operatorname{iso}}\Gamma^{+}$ is isomorphic to $(C\times_{\gamma}^{\operatorname{iso}}\Gamma^{+})\otimes_{\max} D$. Applying this to the system $(B_{\Gamma^{+},\infty},\Gamma^{+},\tau)$ and the $C^*$-algebra $A$, we get \[ (B_{\Gamma^{+},\infty}\otimes A)\times_{\tau\otimes{\rm id}}^{\operatorname{iso}}\Gamma^{+}\simeq (B_{\Gamma^{+},\infty}\times_{\tau}^{\operatorname{iso}}\Gamma^{+})\otimes A\simeq {\mathcal C}_{\Gamma}\otimes A, \] and hence we obtained the exact sequence. \end{proof} \begin{remark} Note that \[ A\times_{\rm id}^{\operatorname{piso}}\Gamma^{+}\simeq (B_{\Gamma^{+}}\otimes A)\times_{\tau\otimes{\rm id}}^{\operatorname{iso}}\Gamma^{+}\simeq (B_{\Gamma^{+}}\times_{\tau}^{\operatorname{iso}}\Gamma^{+})\otimes A\simeq {\mathcal{T}}(\Gamma)\otimes A, \] and $A\times_{\rm id}^{\operatorname{iso}}\Gamma^{+}\simeq A\times_{\rm id}\Gamma \simeq A\otimes C^{*}(\Gamma)\simeq A\otimes C(\hat{\Gamma})$. So (\ref{diagram2}) is the exact sequence \[ \begin{diagram}\dgARROWLENGTH=0.5\dgARROWLENGTH \node{0} \arrow{e} \node{A\otimes {\mathcal C}_{\Gamma}} \arrow{e} \node{A\otimes {\mathcal{T}}(\Gamma)} \arrow{e,t}{\phi} \node{A\otimes C(\hat{\Gamma})} \arrow{e} \node{0,} \end{diagram} \] which is the (maximal) tensor product with the algebra $A$ to the well-known exact sequence $0\rightarrow {\mathcal C}_{\Gamma}\rightarrow {\mathcal{T}}(\Gamma)\rightarrow C(\hat{\Gamma})\rightarrow 0$. 
\end{remark} \subsection{The extension of Pimsner Voiculescu} Consider a system $(A,\Gamma^{+},\alpha)$ in which every $\alpha_{x}$ is an automorphism of $A$. Let $(A\times_{\alpha}\Gamma,j_{A},j_{\Gamma})$ be the corresponding group crossed product. The Toeplitz algebra ${\mathcal{T}}(\Gamma)$ is the $C^{*}$-algebra generated by semigroup $\{T_{x}: x\in\Gamma^{+}\}$ of non unitary isometries $T_{x}$, and the commutator ideal ${\mathcal C}_{\Gamma}$ of ${\mathcal{T}}(\Gamma)$ generated by the elements $T_{s}T_{s}^{*}-T_{t}T_{t}^{*}$ for $s<t$ is given by $\overline{\operatorname{span}}\{T_{r}(1-T_{u}T_{u}^{*})T_{t}^{*}: r,u,t\in\Gamma^{+}\}$ of ${\mathcal{T}}(\Gamma)$. Consider the $C^*$-subalgebra ${\mathcal{T}}_{PV}(\Gamma)$ of $M((A\times_{\alpha}\Gamma)\otimes {\mathcal{T}}(\Gamma))$ generated by $\{j_{A}(a)\otimes I: a\in A\}$ and $\{j_{\Gamma}(x)\otimes T_{x} : x\in\Gamma^{+}\}$. Let ${\mathcal S}(\Gamma)$ be the ideal of ${\mathcal{T}}_{PV}(\Gamma)$ generated by $\{j_{A}(a)\otimes (T_{s}T_{s}^{*}-T_{t}T_{t}^{*}): s<t \in\Gamma^{+},a\in A\}$. We claim that $(A\times_{\alpha^{-1}}^{\operatorname{piso}}\Gamma^{+},i_{A},i_{\Gamma^{+}}) \simeq {\mathcal{T}}_{PV}(\Gamma)$, and the isomorphism takes the ideal $\ker(\phi)$ onto ${\mathcal S}(\Gamma)$. To see this, let $\pi(a):=j_{A}(a)\otimes I$ and $V_{x}:=j_{\Gamma}(x)^{*}\otimes T_{x}^{*}$. Then $(\pi,V)$ is a partial-isometric covariant representation of $(A,\Gamma^{+},\alpha^{-1})$ in the $C^*$-algebra $M((A\times_{\alpha}\Gamma)\otimes{\mathcal{T}}(\Gamma))$. So we have a homomorphism $\psi:A\times_{\alpha^{-1}}^{\operatorname{piso}}\Gamma^{+}\rightarrow (A\times_{\alpha}\Gamma)\otimes{\mathcal{T}}(\Gamma)$ such that \[ \psi( i_{A}(a)) = j_{A}(a)\otimes I \mbox{ and } \overline{\psi}(i_{\Gamma^{+}}(x))=j_{\Gamma}(x)^{*}\otimes T_{x}^{*} \mbox{ for } a \in A, x\in\Gamma^{+}. \] Moreover for $a\in A$ and $x>0$, we have \begin{eqnarray*} \pi(a)(1-V_{x}^{*}V_{x}) & = & (j_{A}(a)\otimes I)(1-(j_{\Gamma}(x)\otimes T_{x})(j_{\Gamma}(x)^{*}\otimes T_{x}^{*})) \\ & = & (j_{A}(a)\otimes I) - (j_{A}(a)\otimes I)(j_{\Gamma}(x)\otimes T_{x})(j_{\Gamma}(x)^{*}\otimes T_{x}^{*}) \\ & = & (j_{A}(a)\otimes I) - (j_{A}(a)\otimes T_{x}T_{x}^{*}) \\ & = & j_{A}(a)\otimes (I-T_{x}T_{x}^{*}). \end{eqnarray*} Since $T_{x}T_{x}^{*}\neq I$, the equation $\pi(a)(1-V_{x}^{*}V_{x})=0$ must imply $j_{A}(a)=0$ in $A\times_{\alpha}\Gamma$, and hence $a=0$ in $A$. So by Theorem 4.8 \cite{LR} the homomorphism $\psi$ is faithful. Thus $A\times_{\alpha^{-1}}^{\operatorname{piso}}\Gamma^{+}\simeq \psi(A\times_{\alpha^{-1}}^{\operatorname{piso}}\Gamma^{+})={\mathcal{T}}_{PV}(\Gamma)$. The isomorphism $\psi:A\times_{\alpha^{-1}}^{\operatorname{piso}}\Gamma^{+}\rightarrow {\mathcal{T}}_{PV}(\Gamma)$ takes the ideal $\ker \phi$ of $A\times_{\alpha^{-1}}\Gamma^{+}$ to the algebra ${\mathcal S}(\Gamma)$. \begin{cor}[The extension of Pimsner and Voiculescu] \hspace{3cm}\\ Let $(A,{\mathbb N},\alpha)$ be a system in which $\alpha\in\operatorname{Aut}(A)$. Then there is an exact sequence $0\rightarrow A\otimes {\mathcal{K}}(\ell^{2}({\mathbb N}))\rightarrow {\mathcal{T}}_{PV}\rightarrow A\times_{\alpha}{\mathbb Z}\rightarrow 0$. 
\end{cor} \begin{proof} Apply Theorem \ref{ext} to the system $(A,{\mathbb N},\alpha^{-1})$, and then use the identifications $A\times_{\alpha^{-1}}^{\operatorname{piso}}{\mathbb N}\simeq {\mathcal{T}}_{PV}({\mathbb Z})$, $\ker\phi\simeq {\mathcal S}(\Gamma)\simeq {\mathcal{K}}(\ell^{2}({\mathbb N},A))$ and $A\times_{\alpha} {\mathbb Z}\simeq A\times_{\alpha^{-1}} {\mathbb Z}$. \end{proof}
\section{Introduction} Let $G$ be a quasi-split reductive group defined over a $p$-adic field. The Plancherel measure is an analytic invariant associated with a parabolic induction on $G$. It was conjectured by Langlands, \cite{Lang}, that this invariant is a ratio of certain $L$-functions and hence of arithmetic significance. Under some mild assumptions, this conjecture was proven for generic inducing data by Shahidi in \cite{Sha 90} where the Plancherel measure was utilized to derive some results in harmonic analysis on $p$-adic reductive groups. Since, up to a well-understood positive constant, the Plancherel measure equals an inverse of a scalar arising from a composition of two standard intertwining integrals, Shahidi was able to use his theory of local coefficients for that proof. In this paper we follow Shahidi's approach and compute the Plancherel measure for coverings of $SL_2(F)$ even though uniqueness of the Whittaker model fails. Fix an integer $n \geq 1$. Let $F$ be a p-adic field which contains the full group of $n^{\operatorname{th}}$ roots of 1. We shall also assume that its residual characteristic is prime to $n$. Let $\widetilde{G(F)}$ be the $n$-fold cover of $G(F)=SL_2(F)$ afforded by the Kubota cocycle, see \cite{Kub}. Let $H(F)$ be the diagonal subgroup inside $G(F)$, let $N(F)$ be the group of upper triangular unipotent matrices in $G(F)$ and let $B(F)=H(F) \ltimes N(F)$. $\widetilde{H(F)}$, the inverse image in $\widetilde{G(F)}$ of $H(F)$, is not Abelian. Rather, it is a two-step nilpotent group. In Section \ref{car rep} we construct the irreducible genuine admissible representations of $\widetilde{H(F)}$ using an analog of Kazhdan-Patterson's standard maximal Abelian subgroup, \cite{KP}. It is interesting to note the similarity to the $n=2$ case: if $n$ is even then the construction involves the Weil index of a character of second degree. This phenomenon is not present in the construction in \cite{KP}. Let $\tau$ be a genuine smooth admissible irreducible representation of $\widetilde{H(F)}$. Let $I(\tau,s)=\operatorname{Ind}^{\widetilde{G(F)}}_{\widetilde{B(F)}} \tau_s$ be the representation of $\widetilde{G(F)}$ parabolically induced from $\tau_s$. Here $s$ is a complex parameter. Let $$A_w(\tau,s): I(\tau,s) \rightarrow I(\tau^w,-s)$$ be the standard intertwining integral associated with the unique non-trivial Weyl element of $G(F)$. Then $A_w(\tau,s)$ induces a map $$\widehat{A}_w(\tau,s): Wh_{\psi}(\tau^w,-s) \rightarrow Wh_{\psi}(\tau,s) $$ where $\psi$ is a non-trivial character of $N(F)$ and $Wh_{\psi}(\tau,s) $ is the space of $\psi$-Whittaker functionals on $I(\tau,s)$. Unless $n \leq 2$, $Wh_{\psi}(\tau,s)$ is not one-dimensional. In fact, $$\dim \bigl( Wh_{\psi}(\tau,s) \bigr)=\frac {n}{gcd(n,2)}.$$ In Section \ref{meta sha} we fix certain bases for $Wh_{\psi}(\tau^w,-s)$ and $Wh_{\psi}(\tau,s)$ and compute the matrix $D(\tau,s)$ representing $\widehat{A}_w(\tau,s)$ with respect to these bases.
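For instance, for the double cover $n=2$ the space $Wh_{\psi}(\tau,s)$ is one-dimensional and $D(\tau,s)$ reduces to a scalar, while already for $n=3$ it is three-dimensional; in general $D(\tau,s)$ is a $\frac{n}{gcd(n,2)} \times \frac{n}{gcd(n,2)}$ matrix.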
Using $D(\tau,s)$ and $D(\tau^w,-s)$ we prove in Section \ref{irr res} our main result: if we take the measure in the integral defining the intertwining operators to be the self-dual measure with respect to $\psi$ then \begin{equation} \label{Plancherel as L} A_{w^{-1}} (\tau^w,-s) \circ A_{w}(\tau,s)=q^{e(\chi^n,\psi)}\frac{L \bigl(ns,\chi^n \bigr)L \bigl(-ns,\chi^{-n}\bigr)}{L \bigl(1-ns,\chi^{-n} \bigr)L \bigl(1+ns,\chi^{n}\bigr)}\operatorname{Id}. \end{equation} Here $\chi$ is a character of $F^*$, $\chi^n$ is an invariant of $\tau$ and $q^{e(\chi^n,\psi)}$ is a positive constant defined in Section \ref{basics}. Observe that while $D(\tau,s)$ and $D(\tau^w,-s)$ depend on $\psi$, the constant in the right hand side of \eqref{Plancherel as L} is independent of $\psi$ except for the choice of measure. Moreover, if $\tau$ and $\psi$ are unramified then $q^{e(\chi^n,\psi)}=1$ and \eqref{Plancherel as L} may be deduced from the metaplectic Gindikin-Karpilevich formula. Indeed, if $I(\tau,s)$ is unramified and $\phi^0_{\tau,s}$ is its normalized spherical vector then from Theorem 12.1 of \cite{Mc2} it follows that $$ A_{w}(\tau,s)\bigl(\phi^0_{\tau,s}\bigr)=\frac{L \bigl(ns,\chi^n \bigr)}{L \bigl(1+ns,\chi^{n}\bigr)}\phi^0_{\tau^w,-s}.$$ As an immediate application of \eqref{Plancherel as L} we find all the reducible genuine principal series representations of $\widetilde{G(F)}$ induced from unitary data, see Theorem \ref{savin}. The matrix $D(\tau,s)$ is a higher dimensional metaplectic analog of (the inverse of) Shahidi local coefficients. In Lemma 1.33 of \cite{KP}, Kazhdan and Patterson have computed this matrix in the context of $n$-fold covers of $GL_r(F)$. Their result involved Gauss sums. However, while the computation in \cite{KP} addresses only representations induced from unramified data, our computation applies to all inducing data. Moreover, our methods deviate from those used in \cite{KP}. In the cases where $n$ is odd we have used the Tate $\gamma$-factor and functional equation in a fundamental way: in Section \ref{ari and meta ari} we follow an elegant argument used by Ariturk for the $n=3$ case and prove a functional equation suitable for our computation. In the cases where $n$ is even we prove a similar functional equation involving the metaplectic $\widetilde{\gamma}$-factor arising from the functional equation proven in \cite{Sz 3}. The role of $\widetilde{\gamma}$ seems to be as natural and important here as the role of the Tate $\gamma$-factor. We note that $\widetilde{\gamma}$ is represented by a certain Tate-type integral. This integral initially appeared in unpublished notes of W. Jay Sweet, \cite{Sweet}, where it was computed and utilised for the study of degenerate principal series representations of the metaplectic double cover of $\spn$. It then appeared in \cite{Sz 2} which contains the $n=2$ case of the computation presented here. We expect $\widetilde{\gamma}$ to play a similar role in the representation theory of even-fold covers of classical groups. In \cite{Sweet}, Sweet thanks Kudla for a helpful suggestion about the computation of $\widetilde{\gamma}$. To the best of our knowledge, this computation is not available in print. In an appendix we correct this situation and reproduce the results given in \cite{Sweet}. The main motivation for this work is an ongoing project \cite{FGS} in which we study the analog of Kazhdan-Patterson's exceptional representations for coverings of classical and similitude groups.
The technique presented here is sufficient for the completion of the project, namely, for producing an analog of Lemma 1.33 of \cite{KP} for all the groups in discussion. Since our method is applicable to coverings of $GL_r(F)$, our result can be used to identify the Gauss sums in \cite{KP} as $\epsilon$-factors. Although we shall not develop this point here, our computations show that even-fold covers are more delicate than odd-fold covers as the analytic properties of $D(\tau,s)$ depend on the choice of Whittaker character for these groups. This was already noted in the $n=2$ case, see \cite{Sz 2}. This phenomenon is responsible for the fact that the irreducible modules that appear in the restriction of the unramified distinguished representation studied by Gelbart and Piatetski-Shapiro in \cite{GP80} and \cite{GP81} to the double cover of $\slt$ have a Whittaker model with respect to exactly one orbit of Whittaker characters. We expect this phenomenon to reappear in the study of even-fold covers of classical groups. In a recent paper, \cite{GanGao}, Gan and Gao raised the question of whether the Langlands-Shahidi method could be extended to metaplectic groups other than the metaplectic double cover of $\spn$. We hope that the results contained in these notes will contribute to this discussion. We would like to thank Solomon Friedberg for helpful discussions on the subject matter and Wee Teck Gan and Freydoon Shahidi for their valuable comments. We would also like to thank Gordan Savin for making his notes available to us. \section{Preparations} \subsection{Basic notations} \label{basics} Let $\F$ be a p-adic field. Denote by $p$ its residual characteristic and by $q$ the cardinality of its residue field. Denote by $\Of$ its ring of integers. Fix $\varpi$, a generator of $\Pf$, the maximal ideal of $\Of$. We normalize the absolute value on $F$ such that $\ab \varpi \ab =q^{-1}$. For $\psi$, a non-trivial character of $F$ and $\chi$, a character of $\F^*$, we define $e(\psi)$ and $e(\chi)$ to be the conductors of $\psi$ and $\chi$ respectively and we define $$e(\psi,\chi)=e(\psi)-e(\chi).$$ Let $S(F)$ be the space of Schwartz functions on $\F$. Given $\phi \in S(F)$ we define $\widehat{\phi} \in S(F)$ to be its $\psi$-Fourier transform, i.e., $$\widehat{\phi}(x)=\int_F \phi(y) \psi(xy) \, d_\psi y.$$ Here $d_\psi y$ is the self-dual measure with respect to $\psi$. We define $d_\psi^*x=\ab x \ab^{-1} d_\psi x$. It is a Haar measure on $F^*$. If $\psi$ is unramified we write $dx$ and $d^*x$ for $d_\psi x$ and $d^*_\psi x$ respectively. For $a\in F^*$ we denote by $\psi_a$ the character of $\F$ given by $x \mapsto \psi(xa)$. Let $\gamma_F(\psi)$ be the unnormalized Weil index of the character of second degree $x \mapsto \psi(x^2)$, see \cite{Weil}, defined by $$\lim_{r \rightarrow \infty} \int_{\Pf^{-r}} \psi(x^2) \, d_\psi x.$$ It is known that $\gamma^8_F(\psi)=1$. If $\F$ is of odd residual characteristic and $\psi$ is spherical then $\gamma_F(\psi)=1$. Let $\gamma_\psi:F^* \rightarrow \C$ be the normalized Weil index defined by $$\gamma_\psi(a)=\frac {\gamma_F(\psi_a)}{\gamma_F(\psi)}.$$ Recall that $\gamma_\psi({\F^*}^2)=1$ and that for $x,y \in F^*$ we have $$\gamma_\psi(xy)=\gamma_\psi(x)\gamma_\psi(y)(x,y)_2,$$ where $(\cdot,\cdot)_2$ is the quadratic Hilbert symbol. \begin{remark} \label{remark on split} If $\F$ is of odd residual characteristic and $-1 \in {F^*}^2$ then $\gamma_\psi(\F^*) \in \mu_2$.
If, in addition, $\psi$ is unramified then $\gamma_\psi(\Of^*)=1$ and for exactly one representative $u$ of $\Of^* / {\Of^*}^2$ we have $\gamma_\psi(u\varpi)=1$. \end{remark} \subsection{Tate and metaplectic-Tate gamma factors.} \label{T and meta T} Let $\chi$ be a character of $\F^*$, let $\psi$ be a non-trivial character of $\F$ and let $s$ be a complex parameter. For $\phi \in S(F)$ we let $\zeta(s,\chi,\phi)$ be its Mellin transform, namely, the meromorphic continuation of $$\int_{F^*} \phi(x) \ab x \ab^s \chi(x) \, d_\psi^*x.$$ Let $$\gamma(s,\chi,\psi)=\epsilon(s,\chi,\psi)\frac{L(1-s,\chi^{-1})}{L(s,\chi)}$$ be the Tate $\gamma$-factor, \cite{T}. It is defined via the functional equation \begin{equation} \label{tate def} \zeta(1-s,\chi^{-1},\widehat{\phi})=\gamma(s,\chi,\psi)\zeta(s,\chi,\phi). \end{equation} Recall that both Mellin transforms in the equation above are given by absolutely convergent integrals in some common vertical strip. The following are stated in \cite{T2}. \begin{equation} \label{Tate gamma} {\epsilon}(1-s,\chi^{-1},\psi)=\chi(-1)\epsilon^{-1}(s,\chi,\psi), \end{equation} \begin{equation} \label{epsilon twist} \epsilon(s+t,\chi,\psi)=q^{e(\psi,\chi)t}\epsilon(s,\chi,\psi),\end{equation} \begin{equation} \label{epsilon change psi} \epsilon(s,\chi,\psi_a)=\chi(a)\ab a \ab^{s-\half}\epsilon(s,\chi,\psi). \end{equation} Observe that \eqref{Tate gamma} and \eqref{epsilon twist} imply that \begin{equation} \label{epsilon product} \epsilon(s,\chi,\psi)\epsilon(-s,\chi^{-1},\psi)=\chi(-1)q^{-e(\psi,\chi)}.\end{equation} For $\phi \in S(F)$ we now define $\widetilde{\phi}:\F^* \rightarrow \C$ by $$\widetilde{\phi}(x)=\int_{F^*} \phi(y)\gamma_\psi^{-1}(xy) \psi(xy) d_\psi y.$$ Although $\widetilde{\phi}(x)$ is typically not an element of $S(\F)$ it was proven in \cite{Sz 3} that $$\int_{F^*} \widetilde{\phi}(x) \chi(x) \ab x \ab^{s} d^*_\psi x$$ converges absolutely for $a<Re(s)<a+1$, for some $a \in \R$, to some rational function in $q^{-s}$ (it was assumed in \cite{Sz 3} that $\psi$ is spherical but this is unnecessary). This enables the natural definition of $\zeta(s,\chi,\widetilde{\phi})$. Moreover, $\zeta(1-s,\chi^{-1},\widetilde{\phi})$ and $\zeta(s,\chi,\phi)$ are both given by absolutely convergent integrals in some common vertical strip and there exists a function $\widetilde{\gamma}(s,\chi,\psi)$ such that $$\zeta(1-s,\chi^{-1},\widetilde{\phi})=\zeta(s,\chi,\phi)\widetilde{\gamma}(s,\chi,\psi)$$ for all $\phi \in S(F)$. It was also proven in \cite{Sz 3} that $\widetilde{\gamma}(1-s,\chi^{-1},\psi)$ is the meromorphic continuation of $$\lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\chi(x) \ab x \ab^s \gamma_\psi(x)^{-1} \psi(x) \, d_\psi^* x.$$ The computation of this integral is contained in unpublished notes of W. Jay Sweet, \cite{Sweet}, and is provided here in the appendix. We have \begin{equation} \label{meta gama formula} \widetilde{\gamma}(1-s,\chi^{-1},\psi)=\gamma_F^{-1}(\psi_{-1}) \chi(-1)\gamma^{-1}(2s,\chi^{2},{\psi_{_2}}) \gamma(s+\half,\chi,\psi). \end{equation} We note here that although the arguments in \cite{Sz 3} are correct, the formula given there for $\widetilde{\gamma}(s,\chi,\psi)$ is slightly mistaken. The formula given here is the correct one. \subsection{$n^{th}$ power Hilbert symbol} Fix an integer $n \geq 1$. We shall assume that $\F^*$ contains the full group of $n^{th}$ roots of unity. Denote this cyclic group by $\mu_n$. We identify $\mu_n$ with the group of $n^{th}$ roots of unity in $\C^*$ and suppress this identification.
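Note that, since the residual characteristic will shortly be assumed to be prime to $n$, the assumption that $\mu_n \subset \F^*$ simply amounts to the condition $n \mid q-1$.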
Let $$( \cdot, \cdot):F^* \times F^* \rightarrow \mu_n$$ be the $n^{th}$ power Hilbert symbol. It is a bilinear form on $\F^*$ that defines a non-degenerate bilinear form on $\F^* / {F^*}^n$. It is known that for all $x,y \in F^*$ $$(x,y)(y,x)=(x,-x)=1.$$ Recall also that for all $x \in F^*$, $$(x,x)=(-1,x) \in \mu_2.$$ For $r \in \N$ let $\F^*_r$ be the subgroup of $\F^*$ of elements whose valuation lies in $r\Z$. Clearly $\F^*_r \simeq \Of^* \times r\Z$. \begin{lem} \label{kernal p n 1}If $gcd(p,n)=1$ then $$\{x\in \F^* \mid (x,y)=1 \, \forall y\in \F^*_n \}=\F^*_n.$$ \end{lem} \begin{proof} Section 5 of Chapter XIII of \cite{Weil book}. \end{proof} For $x \in F^*$ define $\eta_x$ to be the character of $F^*$ given by $$\eta_x(y)=(x,y).$$ \begin{lem} Suppose $gcd(p,n)=1$. Then $\eta_x$ is trivial on $1+\Pf$. Furthermore, $\eta_x$ is unramified if and only if $x\in F^*_n$. \end{lem} \begin{proof} By Hensel's lemma, $1+\Pf \subseteq {\F^*}^n$ provided that $gcd(p,n)=1$. This proves the first assertion. The second is clear. \end{proof} From this point we assume that $gcd(p,n)=1$. This assumption is relaxed in the appendix and in remark \ref{last remark}. We shall denote $$d=\frac{n} {gcd(n,2)}.$$ Define $$\beta_\varpi:F_d^* \rightarrow F^*$$ by $$\beta_\varpi(u\varpi^{md})=u\varpi^m.$$ It is an isomorphism. Note that $\beta_\varpi$ depends on the choice of a uniformizer. \begin{lem} Suppose that $n$ is even. For $x,y \in F_d^*$ we have $(x,y)^2=1$ and $$(x, y)=\bigr(\beta_\varpi(x),\beta_\varpi(y) \bigl)_2$$ \end{lem} \begin{proof} Given $x,y \in F^*$ write $x=u\varpi^{dm}, \, y=u'\varpi^{dm'}$ where $u,u' \in \Of^*$ we have $$(x,y)=(u,\varpi)^{dm'}(u',\varpi)^{-dm}(-1,\varpi)^{d^2mm'}.$$ Since $n=2d$ the first assertion follow. Denote by $\chi_o$ the unique non-trivial quadratic character of $\Of^*$. With this notation we have $$(x,y)=\chi^{m'}_{o}(u)\chi^{m}_{o}(u')(-1,\varpi)^{d^2mm'}$$ and $$\bigr(\beta_\varpi(x),\beta_\varpi(y) \bigl)_2=\chi^{m'}_{o}(u)\chi^{m}_{o}(u')(-1,\varpi)_2^{mm'}.$$ Thus, the proof is done once we show that $$(-1,\varpi)^{d^2mm'}=(-1,\varpi)_2^{mm'}.$$ Indeed, if $n=0 \, (\operatorname{mod }4)$ then $d$ is even and since $F$ contains a primitive $4^{\operatorname{th}}$ root of 1 we have $-1 \in {F^*}^2$. This shows that $$(-1,\varpi)^{d}=(-1,\varpi)_2=1.$$ Suppose now that $n=2 \, (\operatorname{mod }4)$. It is sufficient to show that $$(-1,\varpi)=(-1,\varpi)_{2}.$$ We note that $$ (-1,\varpi)_{_2}=\begin{cases} 1 & -1 \in {F^*}^2; \\ -1 & -1 \not \in {F^*}^2,\end{cases} \, \, \, (-1,\varpi)=\begin{cases} 1 & -1 \in {F^*}^n; \\ -1 & -1 \not \in {F^*}^n. \end{cases}$$ One can easily see that the assertion that $-1 \in {F^*}^2$ is equivalent to the assertion that $-1 \in {F^*}^n$ (in fact, both assertions are equivalent to the assertion that $F^*$ contains the full group of $2n^{\operatorname{th}}$ roots of 1). \end{proof} \begin{lem} \label{split hilbert} Define $\xi_{\psi,\varpi}:F^*_d \rightarrow \C^1$ to be the trivial map if n is odd and $\gamma_\psi^{-1} \circ \beta_\varpi$ if $n$ is even. Then $\xi_\psi$ splits the Hilbert symbol on $F_d^* \times F_d^*$, i.e., $$\xi_{\psi,\varpi}(xy)=\xi_{\psi,\varpi} (x)\xi_{\psi,\varpi} (y)(x,y)$$ for all $x,y \in \F^*_d$.\\ \end{lem} \begin{proof} Clear. \end{proof} \begin{lem} \label{split porp}Suppose that $n$ is even.\\ 1. If $d$ is odd then $\xi_{\psi,\varpi}=\gamma_\psi^{-1}$. \\ 2. 
If $d$ is even then $$\xi_{\psi,\varpi}(x)= \begin{cases} \gamma_\psi^{-1}(x) & x \in F^*_n; \\ \gamma_\psi^{-1}(x\varpi) & x \not \in F^*_n, \end{cases}=\gamma_\psi^{-1}(x)\begin{cases} 1 & x \in F^*_n; \\ \gamma_\psi^{-1}(\varpi)(\varpi,x)^d & x \not \in F^*_n. \end{cases}$$ \begin{comment} 3. For $u_0 \in \Of^*$, $x=u\varpi^{kd}$ we have $$\xi_{\psi,u_0\varpi}(x)=\xi_{\psi,\varpi}(x) \bigl(\gamma_\psi(u_0)\chi_{_0}(u_0)\bigr)^{k(d-1)}.$$ In particular, if $d$ is odd then $\xi_{\psi,u_0\varpi}$ is independent of $\varpi$ while if $d$ is even then $$\xi_{\psi,u_0\varpi}(x)=\xi_{\psi,\varpi}(x)\begin{cases} (-1)^k & u \not \in {F^*}^2, \, e(\psi)\in 2\Z, \\ 1 & Otherwise \end{cases}.$$ 4. For both even on odd $d$ $$\xi_{\psi_a,\varpi}(x)=\xi_{\psi,\varpi}(x)\bigl(a,\beta_\varpi(x) \bigr)_{_2}.$$ In particular, For both even and odd $d$, the restrictions of $\xi_{\psi_a,\varpi}$ and $\xi_{\psi,\varpi}$ to ${F^*}^d$ if and and only $a\in {F^*}^2 $. \\ 5. If $d$ is even, $e(\psi) \in 2\Z$ and $u_0 \in \Of^*$ then $\xi_{\psi,u_0\varpi}=\xi_{\psi_{u_0},\varpi}$. \end{comment} \end{lem} \begin{proof} Part 1 follows from the fact that if $n=2 \, ( \operatorname{mod} \, 4)$ then $\beta_\varpi(y) \in y{F^*}^2$. Part two follows from the fact that if $n=0 \, ( \operatorname{mod}\, 4)$ then for $y \in F^*_n$ we have $\beta_\varpi(y) \in y{F^*}^2$ while for $y \not \in F^*_n$ we have $\beta_\varpi(y) \in \varpi y{F^*}^2$. \end{proof} \subsection{Characteristic functions} Denote by $F^*_{n,k}$ the set of elements of $\F^*$ of the form $u\varpi^m$ where $u \in \Of^*$ and $m\in k+n\Z$. Thus, $F^*_n=F^*_{n,0}$. Fix a unit $u_{_0}$ such that $\xi=(u_{_0},\varpi)$ is a primitive $n^{th}$ root of 1. Define a function $$\beta_{n,k}:F^* \rightarrow \C$$ by $$\beta_{n,k}(x)=\frac 1 n \sum_{l=0}^{n-1}(u_{_0},x\varpi^{-k})^l.$$ \begin{lem} $\beta_{n,k}$ is the characteristic function of $F^*_{n,k}$. \end{lem} \begin{proof} Clearly, if $x \in F^*_{n,k}$ then $\beta_k(x)=1$. On the other hand, if $x=u \varpi^m$ where $m \not \in k+n\Z$ then $1 \neq (u_{_0},x\varpi^{-k}) \in \mu_n.$ Thus, $\beta_k(x)=0$. \end{proof} Since $x\mapsto (u_{_0},x)$ is an unramified character then $$(u_{_0},x)=\ab x \ab^c$$ for some purely imaginary number $c$. Thus, $\xi=q^{-c}$ and \begin{equation} \label{beta as sum for inv} \beta_{n,k}(x)=\frac 1 n \sum_{l=0}^{n-1} \xi^{-kl}\ab x \ab^{lc}=\frac 1 n \sum_{l=0}^{n-1} \xi^{kl}\ab x \ab^{-lc}. \end{equation} \section{Functional equations.} \label{ari and meta ari} In this Section we generalize an argument that appears in \cite{Ariturk}. Precisely, Lemmas 3.1 and 3.2 in \cite{Ariturk} are the $n=3$ case of Theorem \ref{few functional equations} below. For $\phi \in S(F)$ define $$\zeta_{n,k}(s,\chi,\phi)=\zeta(s,\chi,\phi \cdot \beta_{n,k}).$$ \begin{thm} \label{few functional equations} For $0\leq k<n$ we have \begin{equation} \label{aritur} \zeta_{n,k}(s,\chi,\widehat{\phi})=\sum_{m=0}^{n-1}\theta_{m}(s,\chi,\psi) \zeta_{n,m+e(\psi,\chi)-k}(1-s,\chi^{-1},\phi), \end{equation} where, for unramified $\chi$, $$\theta_{m}(s,\chi,\psi) =\epsilon^{-1}(s,\chi,\psi)L(ns,\chi^n) \bigl(\chi(\varpi)q^{-s}\bigr)^m \times \begin{cases} (1-q^{-1}) & 0 \leq m \leq n-2; \\ \\ L^{-1}(1-ns,\chi^{-n}) & m=n-1, \end{cases}$$ and where for ramified $\chi$, $$\theta_{m}(s,\chi,\psi) =\begin{cases} \chi(-1)\epsilon^{-1}(s,\chi,\psi) & m=0; \\ \\ 0 & m \neq 0. 
\end{cases}$$ \end{thm} \begin{proof} It is sufficient to prove this result for $s$ such that all Mellin transforms in \eqref{aritur} are given by absolutely convergent integrals. We have $$\zeta_{n,k}(s,\chi,\widehat{\phi})=\int_{F^*} \beta_{n,k}(x)\widehat{\phi}(x) \chi(x) \ab x \ab^s d^*_\psi x=\frac 1 n \sum_{l=0}^{n-1} \xi^{-kl} \int_{F^*} \widehat{\phi}(x) \chi(x) \ab x \ab^{s+lc} d^*_\psi x=$$ $$\frac 1 n \sum_{l=0}^{n-1} \xi^{-kl} \gamma\bigl(1-(s+lc),\chi^{-1},\psi \bigr) \int_{F^*}\phi(x) \chi^{-1}(x) \ab x \ab^{1-s-lc} d^*_\psi x=$$ $$\int_{F^*} \delta_{k,\psi,\chi,s}(x)\phi(x) \chi^{-1}(x) \ab x \ab^{1-s} d^*_\psi x$$ where $$\delta_{k,\psi,\chi,s}(x)=\frac 1 n \sum_{l=0}^{n-1} \xi^{-kl} \gamma\bigl(1-(s+lc),\chi^{-1},\psi \bigr)\ab x \ab^{-lc}.$$ It remains to show that $$\delta_{k,\psi,\chi,s}=\sum_{m=0}^{n-1}\theta_{m}(s,\chi,\psi) \beta_{n,m+e(\psi,\chi)-k}.$$ Suppose first that $\chi$ is ramified. In this case, by \eqref{Tate gamma} and \eqref{epsilon twist} we have $$\delta_{k,\psi,\chi,s}(x)=\chi(-1)\epsilon^{-1}(s,\chi, \psi) \frac 1 n \sum_{l=0}^{n-1} \xi^{l\bigl(e(\psi,\chi)-k \bigr)} \ab x \ab^{-lc}.$$ By \eqref{beta as sum for inv} we are done. Suppose now that $\chi$ is unramified. By \eqref{Tate gamma} and \eqref{epsilon twist} we now have $$\delta_{k,\psi,\chi,s}(x)= \epsilon^{-1}(s,\chi, \psi) \frac 1 n \sum_{l=0}^{n-1} \ab x \ab^{-lc}\xi^{l(e(\psi)-k)} \frac{1-q^{-1}q^{s}\chi^{-1}(\varpi)\xi^{-l}}{1-q^{-s}\chi(\varpi) \xi^l}=$$ $$\prod_{m=0}^{n-1}(1-q^{-s}\chi(\varpi) \xi^m)^{-1}\epsilon^{-1}(s,\chi, \psi) \times $$ $$ \frac 1 n \sum_{l=0}^{n-1} \Bigl(\ab x \ab^{-lc}\xi^{l(e(\psi)-k)} (1-q^{-1}q^{s}\chi^{-1}(\varpi)\xi^{-l})\! \! \! \! \! \prod_{0 \leq m \leq n-1, \, m\neq l} \! \! \! \! \! \bigl(1-q^{-s}\chi(\varpi) \xi^m\bigr) \Bigr).$$ Since $\xi$ is a primitive element in $\mu_n$, we have the elementary identities $$\prod_{m=0}^{n-1}(1-q^{-s}\chi(\varpi) \xi^m)=\bigl(1-q^{-ns}\chi^n(\varpi)\bigr)$$ and $$\prod_{0 \leq m \leq n-1, \, m\neq l} \! \! \! \bigl(1-q^{-s}\chi(\varpi) \xi^m\bigr)=\sum_{m=0}^{n-1} q^{-ms}\chi^m(\varpi) \xi^{lm}.$$ Thus, $$\delta_{k,\psi,\chi,s}(x)=\chi(-1)\epsilon^{-1}(s,\chi, \psi)L(ns,\chi^n) \times $$ $$\Biggl( \frac{\bigl(q^{-s}\chi(\varpi)\bigr)^{n-1}}{L(1-ns,\chi^{-n})}\frac 1 n \sum_{l=0}^{n-1} \ab x \ab^{-lc}\xi^{l(e(\psi)-k-1)}+(1-q^{-1})\sum_{m=0}^{n-2}(q^{-s}\chi(\varpi))^m \Bigl(\frac 1 n \sum_{l=0}^{n-1} \ab x \ab^{-lc}\xi^{l(e(\psi)-k+m)} \Bigr) \Biggr).$$ Using \eqref{beta as sum for inv} once more we complete the proof. \end{proof} We now define $$\zeta_{n,k}(s,\chi,\widetilde{\phi})=\zeta(s,\chi,\widetilde{\phi}\beta_{n,k})$$ and prove a metaplectic analog to Theorem \ref{few functional equations}. \begin{thm} \label{meta few functional equations} Suppose that $n$ is even. For $0\leq k<n$ we have \begin{equation} \label{meta ari}\zeta_{n,k}(s,\chi,\widetilde{\phi})=\sum_{m=0}^{n-1}\widetilde{\theta}_{m}(s,\chi,\psi) \zeta_{n,m+e(\psi,\chi^2)-k}(1-s,\chi^{-1},\phi), \end{equation} where $\widetilde{\theta}_{m}(s,\chi,\psi)$ is defined as follows. If $\chi$ is unramified then $$\widetilde{\theta}_{m}(s,\chi,\psi) =\gamma_F^{-1}(\psi_{-1}) \epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half,\chi,\psi) \times $$ $$\begin{cases} q^{-\half}q^{s}\chi^{-1}(\varpi) & m=n-1; \\ \\ (1-q^{-1})\bigl(q^{-s}\chi(\varpi)\bigr)^{m}L(ns,\chi^n) & m\in 2\Z; \\ \\ 0 & otherwise.
\end{cases}$$ If $\chi$ is ramified but $\chi^2$ is unramified then, $$\widetilde{\theta}_{m}(s,\chi,\psi) =\gamma_F^{-1}(\psi_{-1}) \epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half,\chi,\psi) \bigl(q^{-s}\chi(\varpi)\bigr)^{m-1}L(\chi^n,ns) \times$$ $$\begin{cases} L^{-1}(1-ns,\chi^{-n}) & m=n-1; \\ \\ (1-q^{-1}) & m\in 1+2\Z, \, m \neq n-1; \\ \\ 0 & otherwise. \end{cases}$$ If $\chi^2$ is ramified then, $$\widetilde{\theta}_{m}(s,\chi,\psi) =\begin{cases} \gamma_F^{-1}(\psi_{-1}) \chi(-1)\epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half, \chi,\psi) & m=0; \\ \\ 0 & m \neq 0. \end{cases}$$ \end{thm} \begin{proof} It is sufficient again to prove this result for $s$ such that all Mellin transforms in \eqref{meta ari} are given by absolutely convergent integrals. Similar to the proof of Theorem \ref{few functional equations} we have $$\zeta_{n,k}(s,\chi,\widetilde{\phi})=\int_{F^*} \widetilde{\delta}_{k,\psi,\chi}(x)\phi(x) \chi^{-1}(x) \ab x \ab^{1-s} d^*_\psi x,$$ where $$\widetilde{\delta}_{k,\psi,\chi,s}(x)=\sum_{l=0}^{n-1} \xi^{-kl} \widetilde{\gamma}\bigl(1-(s+lc),\chi^{-1},\psi \bigr)\ab x \ab^{-lc}.$$ We now continue using \eqref{meta gama formula}. Note that since $n$ is even, $F$ is of odd residual characteristic. This implies that $e(\psi_2)=e(\psi)$. Suppose first that $\chi^2$ is ramified. We claim that this implies that $e(\chi)=e(\chi^2)$. Indeed, if $e(\chi)=1$ then there is nothing to proof. If $e(\chi)>1$ this equality follows from the fact that $1+\Pf \subseteq {F^*}^2$. Thus, $$\widetilde{\delta}_{k,\psi,\chi,s}(x)=\gamma_F^{-1}(\psi_{-1}) \chi(-1)\epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half,\chi,\psi)\sum_{l=0}^{n-1} \xi^{l\bigl(e(\psi,\chi)-k\bigr)}\ab x \ab^{-lc}.$$ By arguments we used already in the proof of Theorem \ref{aritur} we are done in this case. Suppose now that $\chi$ is unramified. We have $$\widetilde{\gamma}\bigl(1-(s+lc),\chi^{-1},\psi \bigr)=\gamma_F^{-1}(\psi_{-1}) \epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half,\chi,\psi) \xi^{le(\psi)} \frac {(1-q^{-1})+q^{-\half}(q^{s}\chi^{-1}\bigl(\varpi)-q^{-s}\chi(\varpi)\bigr)}{1-q^{-2s}\chi^2(\varpi)\xi^{2l}}.$$ One now repeats similar arguments to those used in the unramified case in Theorem \ref{few functional equations}. In the course of the computation one uses the fact that since $\xi^2$ is a primitive element in $\mu_d$ we have $$\prod_{m=0}^{n-1}(1-q^{-2s}\chi^2(\varpi) \xi^{2m})=\bigl(1-q^{-ns}\chi^n(\varpi)\bigr)^2$$ and $$\prod_{0 \leq m \leq n-1 \, m\neq l} \! \! \! 
\bigl(1-q^{-2s}\chi^2(\varpi) \xi^m\bigr)=\bigl(1-q^{-ns}\chi^n(\varpi)\bigr)\sum_{m=0}^{d-1} q^{-2ms}\chi^{2m}(\varpi) \xi^{2lm}.$$ This ultimately gives $$\widetilde{\delta}_{k,\psi,\chi,s}(x)=\gamma_F^{-1}(\psi_{-1}) \chi(-1)\epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half,\chi,\psi) \times$$ $$ \Bigl( q^{-\half}q^s\ \chi^{-1}(\varpi)\beta_{n-1-k+e(\psi)}(x)+(1-q^{-1})L(ns,\chi^n)\sum_{m=0}^{d-1} \bigl(q^{-s}\chi(\varpi)\bigr)^{m} \beta_{2m-k+e(\psi)}(x)\Bigr).$$ If $\chi$ is ramified but $\chi^2$ is unramified then $$\widetilde{\gamma}\bigl(1-(s+lc),\chi^{-1},\psi \bigr)=\gamma_F^{-1}(\psi_{-1}) \chi(-1) \epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half,\chi,\psi) \xi^{(\bigl(le(\psi)-1\bigr)} \frac {1-q^{-1}q^{2s}\chi^{-2}(\varpi)\xi^{2l}}{1-q^{-2s}\chi^2(\varpi)\xi^{-2l}}.$$ Repeating the same arguments as above we obtain $$\widetilde{\delta}_{k,\psi,\chi,s}(x)=\gamma_F^{-1}(\psi_{-1}) \chi(-1)\epsilon^{-1}(2s,\chi^{2},{\psi_{_2}}) \epsilon(s+\half,\chi,\psi)L(ns,\chi^n) \times $$ $$\Bigl( \bigl( q^{-s}\chi(\varpi) \bigr)^{n-2}L^{-1}(1-ns,\chi^{n})\beta_{n,e(\psi)-k+(n-1)}+(1-q^{-1})\sum_{m=0}^{d-1}\bigl( q^{-s}\chi(\varpi) \bigr)^{2m}\beta_{n,e(\psi)-k+(2m-1)} \bigr).$$ \end{proof} \section{$n$ fold cover of $SL_2(F)$} \subsection{Construction of the cover} Let ${SL_2(F)}$ be the group of two by two matrices with entries in $F$ whose determinant is 1. Let $N(F) \simeq F$ be the group of upper triangular unipotent matrices. Let $H(F)\simeq F^*$ be the group of diagonal elements inside $SL_2(F)$. Denote $B(F)=H(F) \ltimes N(F)$. For $x \in F$, and $a\in F^*$ we shall write $$n(x)=\begin{pmatrix} _{1} & _{x}\\_{0} & _{1 } \end{pmatrix}, \quad h(a)=\begin{pmatrix} _{a} & _{0}\\_{0} & _{a^{-1} } \end{pmatrix}, \quad w=\begin{pmatrix} _{0} & _{1}\\_{-1} & _{0} \end{pmatrix}.$$ Let $\widetilde{SL_{2}(F)}$ be the topological central extension of $\slt$ by $\mu_n$ constructed using the Kubota cocylce, \cite{Kub}. More precisely, we realize $\widetilde{SL_2(F)}$ as the set $SL_2(F) \times \mu_n$ along with the multiplication $$\bigl(g,\epsilon \bigr)\bigl(g',\epsilon \bigr)=\bigl(gg',c(g,g')\epsilon \epsilon'\bigr),$$ where \begin{equation}\label{rao}c(g_1,g_2)=\bigl(x(g_1g_2)x^{-1}(g_1),x(g_1g_2)x^{-1}(g_2)\bigr).\end{equation} Here $$x \begin{pmatrix} _{a} & _{b}\\_{c} & _{d}\end{pmatrix}=\begin{cases} c & c \neq 0; \\ d & c=0. \end{cases}$$ We shall denote by $s$ the map from ${SL_{2}(F)}$ to $\widetilde{SL_{2}(F)}$ given by $s(g)=(g,1)$. For a subset $A$ of $SL_{2}(F)$ we shall denote by $\widetilde{A}$ its primage in $\widetilde{SL_{2}(F)}$. \subsection{Representations of the Cartan subgroup} \label{car rep} From \eqref{rao} it follows that $$c \bigl(h(a),h(b) \bigr)=(b,a).$$ This implies that $s(h(a))$ and $s(h(b))$ commute if and only if $(b,a)^2=1$. Let $\widetilde{H_0(F)}$ be the center of $\widetilde{H(F)}$. One immediately sees that $$\widetilde{H_0(F)}= \widetilde{H^d(F)}.$$ Define now $$H_d(F)=\{h(a) \mid a\in F^*_d \}.$$ From Lemmas \ref{kernal p n 1} and \ref{split hilbert} it follows that $\widetilde{H_d(F)}$ is a maximal Abelian subgroup of $\widetilde{H(F)}$. It is an analog to Kazhdan-Patterson's standard maximal Abelian subgroup, see \cite{KP}. Observe that if $n$ is odd then $c \bigl(H_d(F),H_d(F) \bigr)=\{1\}$. This means that $\mslt$ splits over $H_d(F)$ via the trivial section. However, if $n$ is even then $c \bigl(H_d(F),H_d(F) \bigr)=\mu_2$. A representation of $\mslt$ or any of its subgroups is called genuine if $(I_2,\epsilon)$ acts by $\epsilon$. 
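Note that the commutation criterion above is immediate from the multiplication law: $s\bigl(h(a)\bigr)s\bigl(h(b)\bigr)=\bigl(h(ab),(b,a)\bigr)$ while $s\bigl(h(b)\bigr)s\bigl(h(a)\bigr)=\bigl(h(ab),(a,b)\bigr)$, and since $(a,b)(b,a)=1$ these two products agree precisely when $(b,a)^{2}=1$.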
By Lemma \ref{split hilbert} $$\bigl(h(a),\epsilon \bigr) \mapsto \epsilon \xi_{\varpi,\psi}(a)$$ is a genuine character of $\widetilde{H_d(F)}$. Since the quotient of two genuine characters of $\widetilde{H_d(F)}$ is a character which factors through the projection to $H_d(F)$, it follows that any genuine character of $\widetilde{H_d(F)}$ is given by $$\bigl(h(a),\epsilon\bigr) \mapsto \chi_{\varpi,\psi}\bigl(h(a),\epsilon\bigr)=\epsilon \chi(a) \xi_{\varpi,\psi}(a)$$ where $\chi$ is a character of $F^*_d$. Remark: Suppose that $n$ is even. If $-1 \in {F^*}^2$ then we may think of $\xi_{\varpi,\psi}$ as a map into $\mu_n$. In this case $\bigl(h(a),\epsilon \bigr) \mapsto \bigl(h(a),\epsilon \xi_{\varpi,\psi}(a) \bigr)$ defines an isomorphism from $\widetilde{H_d(F)}$ to $H_d(F) \times \mu_n$. We shall not use this fact since the parametrization given above of the genuine characters of $\widetilde{H_d(F)}$ is sufficient for our purposes. \begin{lem} \label{cartan rep} Any genuine smooth admissible irreducible representation of $\widetilde{H(F)}$ may be realized as $$i\bigl(\chi_{\varpi,\psi}\bigr)=\operatorname{Ind}_{\widetilde{H_d(F)}}^{\widetilde{H}} \chi_{\varpi,\psi}.$$ The representations $i\bigl(\chi_{\varpi,\psi}\bigr)$ and $i\bigl(\chi'_{\varpi,\psi}\bigr)$ are isomorphic if and only if $$\chi'=\chi\eta_\varpi^{2m}$$ for some integer $m$. \end{lem} \begin{proof} By a variation of the Stone-von Neumann Theorem, see Theorem 3 of \cite{Mc}, the genuine smooth admissible irreducible representations of $\widetilde{H(F)}$ are parameterized by genuine characters of $\widetilde{H_{_0}}$. A realization of a genuine smooth admissible irreducible representation $\tau$ of $\widetilde{H(F)}$ whose central character is $\chi_{\tau}$ is constructed by first extending $\chi_{\tau}$ to a character of a maximal Abelian subgroup of $\widetilde{H(F)}$ and then by inducing. This proves the first assertion. To prove the second assertion note that any character of $\widetilde{H_{_0}}$ has exactly $[\widetilde{H_d(F)}:\widetilde{H_{_0}}]=d$ extensions to $\widetilde{H_d(F)}$ and that given a genuine character $\chi_{\varpi,\psi}$ of $\widetilde{H_d(F)}$, the set $$\{(\chi \eta_\varpi^{2m})_{\varpi,\psi} \mid m=0,1,\cdots, d-1 \}$$ consists of $d$ characters of $\widetilde{H_d(F)}$ whose restrictions to $\widetilde{H_{_0}}$ are equal. \end{proof} Given a complex parameter $s$ and a genuine character $\chi_{\varpi,\psi}$ of $\widetilde{H_d(F)}$ we define another genuine character $\bigl(\chi_{\varpi,\psi},s \bigr)$ of $\widetilde{H_d(F)}$ by $$g=(a,\epsilon) \mapsto \chi_{\varpi,\psi}(g)\ab a \ab^s.$$ Note that $\chi_{\varpi,\psi}=\bigl(\chi_{\varpi,\psi},0 \bigr).$ Similarly, given a genuine smooth admissible irreducible representation $\tau$ of $\widetilde{H(F)}$ whose central character is $\chi_\tau$ and a complex parameter $s$ we define $\tau_s$ to be the genuine smooth admissible irreducible representation of $\widetilde{H(F)}$ whose central character is $$(a,\epsilon)\mapsto \chi_\tau(a,\epsilon) \ab a \ab^s.$$ Observe that if $\tau \simeq i\bigl(\chi_{\varpi,\psi}\bigr)$ then $\tau_s \simeq i\bigl(\chi_{\varpi,\psi},s \bigr)$. \subsection{Genuine principal series representations and Whittaker functionals} \label{prin whi} $\mslt$ splits over $N(F)$ via the trivial section and $\widetilde{H(F)}$ normalizes $s\bigl( N(F) \bigr)$. Therefore, any representation of $\widetilde{H(F)}$ can be extended to a representation of $\widetilde{B(F)}$ by defining it to be trivial on $s\bigl( N(F) \bigr)$.
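Here and below $\delta$ denotes the modulus character of $B(F)$; since $h(a)n(x)h(a)^{-1}=n(a^{2}x)$ we have $\delta\bigl(h(a)\bigr)=\ab a \ab^{2}$ and $\delta^{\half}\bigl(h(a)\bigr)=\ab a \ab$.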
Similar to the linear case we shall identify the representations of $\widetilde{B(F)}$ and $\widetilde{H(F)}$. A genuine principal series representation of $\mslt$ is defined to be a representation parabolically induced from a genuine smooth admissible representation of $\widetilde{B(F)}$. Thus, using induction in stages, any genuine principal series representation of $\mslt$ may be realized as $$I\bigl(\chi_{\varpi,\psi},s\bigr)=\operatorname{Ind}_{\widetilde{B_d(F)}}^{\mslt} \bigl(\chi_{\varpi,\psi},s \bigr),$$ where $B_d(F)=H_d(F)N(F)$. We shall assume that parabolic inductions are normalized. Let $\psi'$ be (another) non-trivial character of $F$. A $\psi'$-Whittaker functional $\lambda$ on a representation $(\pi,V)$ of $\mslt$ is a functional on $V$ which satisfies $$\lambda \circ \pi\bigl(s(n(x))\bigr)=\psi'(x)\lambda.$$ Let $Wh_{\psi'}\bigl(\chi_{\varpi,\psi},s \bigr)$ be the space of $\psi'$-Whittaker functionals on $I\bigl(\chi_{\varpi,\psi},s\bigr)$. Unlike the linear case the dimension of this space is not 1. Arguing exactly as in Lemmas 1.3.1 and 1.3.2 of \cite{KP} we have \begin{lem} Let $\bigl(\chi_{\varpi,\psi},s \bigr)$ be a genuine character of $\widetilde{H_d(F)}$.\\ 1. $\dim Wh_{\psi'}\bigl(\chi_{\varpi,\psi},s \bigr)=[\widetilde{H}:\widetilde{H_d(F)}]=d.$\\ \\ 2. If $\ab \chi\bigl(\varpi^d\bigr) \ab <q^{Re(s)d}$ then for any $h \in \widetilde{H}$ and $f \in I\bigl(\chi_{\varpi,\psi},s\bigr)$ the integral $$\int_F f\bigl(hs(wn(x))\bigr)\psi'^{-1}(x) d_{\psi'}x$$ converges absolutely to a polynomial in $q^{-s}$. Moreover, for all $s$, $$\lim_{r \rightarrow \infty} \int_{\Pf^{-r}} f\bigl(hs(wn(x))\bigr)\psi'^{-1}(x) d_{\psi'}x$$ exists.\\ \\ 3. Let $\lambda_{h,\psi',\chi_{\varpi,\psi},s}(f)$ denote the analytic continuation of the integral defined in Part 2. The map $f \mapsto \lambda_{h,\psi',\chi_{\varpi,\psi},s}(f)$ is a $\psi'$-Whittaker functional on $I\bigl(\chi_{\varpi,\psi},s\bigr)$.\\ \\ 4. Let $A$ be a set of representatives of $\widetilde{H} / \widetilde{H_d(F)}$. The set $$\{\lambda_{h,\psi',\chi_{\varpi,\psi},s} \mid h \in A \}$$ is a basis for $Wh_{\psi'}\bigl(\chi_{\varpi,\psi},s \bigr)$. \end{lem} We shall now fix once and for all the set $$A=\{ s\bigl(h(\varpi^i)\bigr) \mid i=0,1,\cdots, d-1 \}$$ as a set of representatives of $\widetilde{H} / \widetilde{H_d(F)}$ and we shall write $$\lambda_{i,\psi',\chi_{\varpi,\psi},s}=\lambda_{ s(h(\varpi^i)),\psi',\chi_{\varpi,\psi},s}.$$ \begin{remark} \label{last remark} If we drop the assumption that the residual characteristic of $F$ is prime to $n$, $\widetilde{H^d(F)}$ is still the center of $\widetilde{H(F)}$ but $\widetilde{H_d(F)}$ is not always Abelian. However, one can verify at once that the inverse image of $$ \{h(x^d\varpi^k) \mid x\in F^*, \, k \in \Z \}$$ inside $\msl$ is always an Abelian subgroup of $\widetilde{H(F)}$. The index of this subgroup is $[\Of^*:{\Of^*}^d]$. This index is an upper bound for the dimension of the irreducible genuine smooth admissible representations of $\widetilde{H(F)}$ and for the dimension of the space of Whittaker functionals on a genuine principal series representation of $\msl$. \end{remark} \section{Metaplectic Shahidi local coefficients} \label{meta sha} \subsection{Definition} Fix a genuine character $\chi_{\varpi,\psi}$ of $\widetilde{H_d(F)}$. If $\ab \chi\bigl(\varpi^d\bigr) \ab <q^{Re(s)d}$ then for any $f \in I\bigl(\chi_{\varpi,\psi},s\bigr)$ the integral $$\int_F f\bigl(s(w)s(n(x))g \bigr) d_{\psi'}x$$ converges absolutely to a rational function in $q^{-s}$.
We shall denote its meromorphic continuation by $A_{w}\bigl(\chi_{\varpi,\psi},s\bigr)(f)$. Away from its poles, $$A_{w}\bigl(\chi_{\varpi,\psi},s\bigr):I\bigl(\chi_{\varpi,\psi},s\bigr) \rightarrow I\bigl(\chi^{-1}_{\varpi,\psi},-s\bigr)$$ is a well-defined $\mslt$ map. Define now $$\lambda^w_{i,\psi',\chi_{\varpi,\psi},s}=\lambda_{i,\psi',\chi^{-1}_{\varpi,\psi},-s}\circ A_{w}\bigl(\chi_{\varpi,\psi},s\bigr).$$ We have $$\lambda^w_{i,\psi',\chi_{\varpi,\psi},s}=\sum_{j=0}^{d-1} \tau(i,j,\chi_{_{\varpi,\psi}},s,\psi')\lambda_{j,\psi',\chi_{\varpi,\psi},s},$$ where the functions $\tau(i,j,\chi_{_{\varpi,\psi}},s,\psi')$ are rational in $q^{-s}$. Define now $D(\chi_{_{\varpi,\psi}},s,\psi')$ to be the $d \times d$ matrix whose entries are $\tau(i,j,\chi_{_{\varpi,\psi}},s,\psi')$. It is a higher dimensional metaplectic analog to Shahidi local coefficients. Precisely, if $n=1$ then $D(\chi_{_{\varpi,\psi}},s,\psi')=\tau(0,0,\chi,s,\psi')$ is the inverse of the local coefficient defined by Shahidi in \cite{Sha 1}. For the metaplectic $n=2$ case see \cite{Sz 2}. \subsection{Reduction to an integral} \begin{lem} If $\ab \chi\bigl(\varpi^d\bigr) \ab <q^{Re(s)d}$ then \begin{equation} \label{local as int again }\tau(i,j,\chi_{_{\varpi,\psi}},s,\psi')=q^{j-i} (\chi(\varpi)q^{-s})^{-i-j} \times \lim_{r \rightarrow \infty} \int_{F^*_{d,i+j}\cap \Pf^{-r}} \ab z \ab^s \chi(z)\eta_\varpi(z)^{i-j}\xi_{\varpi,\psi}\bigl(\varpi^{-i-j}z\bigr)\psi'(z) \, d_{\psi'}^*z .\end{equation} \end{lem} \begin{proof} Define $$K_m=\{g\in \slt \mid g=I_{2} \, ( \operatorname{mod} \, \Pf^m) \}.$$ $\widetilde{K_m}$ is an open subgroup of $\mslt$. For $m$ sufficiently large, $\mslt$ splits over $K_m$ via the trivial section. Denote $K_m^+=\widetilde{K_m}\cap s\bigl(N(F)\bigr)$. Suppose that $m>\max\{e(\chi), e(\psi')\}$. Let $f_{m,j,\chi_{_{\varpi,\psi}},s} \in I\bigl(\chi_{\varpi,\psi},s\bigr)$ be the function supported in $$\widetilde{B_d(F)}s\bigl( h(\varpi^j) K_m w\bigr)=\widetilde{B_d(F)} s\bigl(h(\varpi^j)\bigr)s(w) K^+_m $$ normalized such that $$f_{m,j,\chi_{_{\varpi,\psi}},s}(bs\bigl(h(\varpi^j)\bigr)s(w)k)=\operatorname{Vol}^{-1}_{\psi'}(\Pf^m)\bigl(\chi_{\varpi,\psi},s\bigr)(b) \cdot \delta^\half(b)$$ for all $b \in \pbm, \, k \in K_m^+$. Arguing as in Lemma 1.31 of \cite{KP} we have $$\lambda_{i,\psi',\chi_{\varpi,\psi},s}\bigl( f_{m,j,\chi_{_{\varpi,\psi}},s} \bigr)=\begin{cases} 1 & i=j ;\\ 0 & i \neq j . \end{cases}$$ Thus, it is sufficient to show that if $\ab \chi\bigl(\varpi^d\bigr) \ab <q^{Re(s)d}$ then \begin{equation} \label{this is what we need} \lambda^w_{i,\psi',\chi_{\varpi,\psi},s}\bigl( f_{m,j,\chi_{_{\varpi,\psi}},s} \bigr)=q^{j-i} (\chi(\varpi)q^{-s})^{-i-j} \times \lim_{r \rightarrow \infty} \int_{F^*_{d,i+j}\cap \Pf^{-r}} \ab z \ab^s \chi(z)\eta_\varpi(z)^{i-j}\xi_{\varpi,\psi}\bigl(\varpi^{-i-j}z\bigr)\psi'(z) \, d_{\psi'}^*z .\end{equation} We have $$\lambda^w_{i,\psi',\chi_{\varpi,\psi},s}\bigl( f_{m,j,\chi_{_{\varpi,\psi}},s} \bigr)=$$ $$\lim_{r \rightarrow \infty} \int_{\Pf^{-r}}\Bigl(A_{w}\bigl(\chi_{\varpi,\psi},s\bigr)\bigl( f_{m,j,\chi_{_{\varpi,\psi}},s} \bigr)\Bigr)\bigl(s\bigl(h(\varpi^i)\bigr)s(w)^{-1}s(n(x))\bigr) \psi'^{-1}(x) \, d_{\psi'}x=$$ \begin{equation} \label{the double integral} \lim_{r \rightarrow \infty} \int_{\Pf^{-r}} \Bigl(\int_{F^*} f_{m,j,\chi_{_{\varpi,\psi}},s}\bigl(s(w)s(n(y))s\bigl(h(\varpi^i)\bigr)s(w)s(n(x))\bigr) \, d_{\psi'}y \Bigr) \psi'^{-1}(x) \, d_{\psi'}x.
\end{equation} By a matrix multiplication and by using the cocycle formula \eqref{rao} we have $$s(w)s \bigl(n(y)\bigr)s\bigl(h(\varpi^i)\bigr)s(w)s\bigl(n(x)\bigr)=\left(\begin{pmatrix} _{-\varpi^i y^{-1}} & _{\varpi^{-i}}\\_{0} & _{-\varpi^{-i}y}\end{pmatrix}wn(x-\varpi^{2i}y^{-1}),(\varpi,-1)^i(-\varpi^i,y) \right).$$ Hence, we may write the inner integral in \eqref{the double integral} as $$\int_{F^*} \ab y \ab f_{m,j,\chi_{_{\varpi,\psi}},s}\left(\begin{pmatrix} _{-\varpi^i y^{-1}} & _{\varpi^{-i}}\\_{0} & _{-\varpi^{-i}y}\end{pmatrix}wn(x-\varpi^{2i}y^{-1}), (\varpi,-1)^i(-\varpi^i,y) \right) \, d_{\psi'}^*y.$$ Making the change of variables $z=-\varpi^{2i}y^{-1}$ we obtain $$ q^{-2i} \int_{F^*} \ab z \ab^{-1} f_{m,j,\chi_{_{\varpi,\psi}},s}\left(\begin{pmatrix} _{\varpi^{-i}z} & _{-\varpi^{-i}}\\_{0} & _{z^{-1}\varpi^{i}}\end{pmatrix}wn(x+z),\eta^i_\varpi(z)(-1,z) \right) \, d_{\psi'}^*z. $$ Observe now that unless $x+z \in \Pf^m$ the last integrand vanishes. On the other hand ,if $x+z \in \Pf^m$ then $\psi'^{-1}(x)=\psi'(z)$. By the right invariance property of $f_{m,j,\chi_{_{\varpi,\psi}},s}$ we have $$\lambda^w_{i,\psi',\chi_{\varpi,\psi},s}\bigl( f_{m,j,\chi_{_{\varpi,\psi}},s} \bigr)=$$ $$ q^{-2i} \, \lim_{r \rightarrow \infty} \int_{\Pf^{-r}} \left( \int_{z \in F^*, \, x-\varpi^i z \in \Pf^m} \ab z \ab^{-1} f_{m,j,\chi_{_{\varpi,\psi}},s}\left(\begin{pmatrix} _{\varpi^{-i}z} & _{-\varpi^{-i}}\\_{0} & _{z^{-1}\varpi^i}\end{pmatrix}w,\eta^i_\varpi(z) (-1,z) \right) \psi'(z) \, d_{\psi'}^*z \right) \, d_{\psi'}x.$$ Changing the order of integration gives $$q^{-2i} \, \lim_{r \rightarrow \infty} \int_{z \in F^*} \ab z \ab^{-1} f_{m,j,\chi_{_{\varpi,\psi}},s} \left(\begin{pmatrix} _{\varpi^{-i}z} & _{-a^{-i}}\\_{0} &_{z^{-1}\varpi^i}\end{pmatrix}w,\eta^i_\varpi(z) (-1,z)\right) \psi'(z) \phi(z,r,m) \, d_{\psi'}^*z$$ where $$\phi(z,r,m)=\operatorname{Vol}_{\psi'} \bigl(\Pf^{-r} \cap z+\Pf^m \bigr).$$ Thus, $$\lambda^w_{i,\psi',\chi_{\varpi,\psi},s}\bigl( f_{m,j,\chi_{_{\varpi,\psi}},s} \bigr)=$$ $$q^{-2i}\operatorname{Vol}_{\psi'} \bigl(\Pf^m \bigr) \lim_{r \rightarrow \infty} \int_{F^* \cap \Pf^{-r}} \ab z \ab^{-1} f_{m,j,\chi_{_{\varpi,\psi}},s}\left(\begin{pmatrix} _{\varpi^{-i}z} & _{-\varpi^{-i}}\\_{0} & _{z^{-1}\varpi^i}\end{pmatrix}w,\eta^i_\varpi(z) (-1,z) \right) \psi'(z) \, d_{\psi'}^*z.$$ We now write $$\left(\begin{pmatrix} _{\varpi^{-i}z} & _{-\varpi^{-i}}\\_{0} & _{z^{-1}\varpi^i}\end{pmatrix}w,\eta^i_\varpi(z) (-1,z)\right)=\left(\begin{pmatrix} _{(\varpi)^{-i-j}z} & _{-\varpi^{j-i}}\\_{0} & _{z^{-1}\varpi^{i+j}}\end{pmatrix},\eta^{i-j}(z)\right) s\left(h(\varpi^j)\right) s(w).$$ Recalling the definition of $f_{m,j,\chi_{_{\varpi,\psi}},s}$, \eqref{this is what we need} now follows. \end{proof} Note that the factor $q^{j-i}$ in \eqref{local as int again } is eliminated if we redefine $\lambda_{i,\psi',\cdot,\cdot}$ by multiplying it by $\delta^{\half}h(\varpi^i)=q^{-i}$. We shall not do so since we want our coefficients to be compatible with those defined in \cite{KP}. \subsection{Explicit Formulas} \label{Explicit Formulas} We shall now assume, without loss of generality, that $\psi$ is spherical. Following remark \ref{remark on split} we shall also assume and that if $n=0 \, (\operatorname{mod }4)$ then $\gamma_\psi(\varpi)=1$. We shall now give formulas for $D(\chi_{_{\varpi,\psi}},s,\psi')$ under the assumption that $\psi'=\psi$. This last assumption does affect the analytic behaviour of the local coefficients. 
However, the computation below is sufficient for our purposes, i.e., the computation of the Plancherel measure in Theorem \ref{main formula} below. Moreover, our computation can easily be modified to include all Whittaker characters. Recall that once $\psi$ and $\varpi$ have been fixed, the actual inducing data for $I\bigl(\chi_{\varpi,\psi},s\bigr)$ is $s \, \bigl(\operatorname{mod} \frac{2\pi i}{d \ln(q)} \Z \bigr)$ and the restriction of $\chi$ to ${F^*}^d$. Using Lemma \ref{cartan rep} we may assume that if $n$ is odd then $\chi$ is either trivial or $\chi^{n}$ is ramified. For even $n$ we have to take another case into consideration, i.e., $\chi=\eta_\varpi$. Last, we formulate our results using $\epsilon$-factors and $L$-functions. Thus, it is convenient to think of $\chi$ as a character of $F^*$ rather than a character of $F^*_d$. Of course, our formulas for $\tau(i,j,\chi_{_{\varpi,\psi}},s,\psi)$ depend only on the restriction of $\chi$ to $F^*_d$. For an integer $k$ define $k'$ to be the unique number such that $$k'=k \, ( \operatorname{mod} \, n), \, 0 \leq k' \leq n-1.$$ \begin{lem} \label{explicit odd} Suppose that $n$ is odd. We can omit the subscript ${\varpi,\psi}$ from $\chi_{_{\varpi,\psi}}$.\\ 1. We have $$\tau(j,j,1,s,\psi)= L(ns,1) \times \begin{cases} (1-q^{-1}) & 2j<n-1; \\ \\ L^{-1}(1-ns,1) & 2j=n-1; \\ \\ q^{ns}(1-q^{-1}) & 2j>n-1. \end{cases}$$ and for $i \neq j$ we have $$\tau(i,j,1,s,\psi)= \begin{cases} q^{j-i+s(n-1)}\epsilon^{-1} \bigl(s,\eta_\varpi^{i-j},\psi \bigr) & j+i=n-1 ; \\ \\ 0 & otherwise . \end{cases}$$ 2. If $\chi^n$ is ramified then $$\tau(i,j,\chi,s,\psi)= \begin{cases} \chi(-1)q^{j-i} \bigl(\chi(\varpi)q^{-s}\bigr)^{-i-j}\epsilon^{-1} \bigl(s,\chi \eta_\varpi^{i-j},\psi \bigr) & j+i+e(\chi)=0 \, \operatorname{mod}(n) ; \\ \\ 0 & otherwise . \end{cases}$$ \end{lem} \begin{proof} Define $\phi=q^{-r}\1_{1+\Pf^r}$. A standard computation shows that $\widehat{\phi}(x)=\psi(x)1_{\Pf^{-r}}(x)$. Thus, $$ \int_{F^*_{d,i+j} \cap \Pf^{-r}} \ab z \ab^s \chi(z)\eta_\varpi^{i-j}(z)\psi'(z) \, d^*z=\zeta_{n,k}(s,\chi\eta_\varpi^{i-j},\widehat{\phi}).$$ It is easy to see that if $r>\max \{1,e(\chi)\}$ then $$\zeta_{n,k}(1-s,\chi^{-1}\eta_\varpi^{j-i},\phi)=\begin{cases} 1 & k=0 \, ( \operatorname{mod} \, n) ;\\ \\ 0 & otherwise. \end{cases}$$ Therefore, by Theorem \ref{few functional equations} we have $$\tau(i,j,\chi,s,\psi)=q^{j-i} (\chi(\varpi)q^{-s})^{-i-j}\theta_{\bigl(i+j+e(\eta_\varpi^{i-j}\chi)\bigr)'}(s,\chi\eta_\varpi^{i-j},\psi).$$ The proof is now completed by a straightforward case-by-case computation. \end{proof} For $n \in 2\Z$ define $$\alpha(n)=\begin{cases} 1 & n=0 \, ( \operatorname{mod} \, 4) ;\\ \\ 0 & n=2 \, ( \operatorname{mod} \, 4). \end{cases}$$ \begin{lem} \label{explicit even} Suppose that $n$ is even.\\ 1. We have $$\tau(j,j,1_{\varpi,\psi},s,\psi)=\begin{cases} \frac {L(ns,1) L(\frac {1-ns}{2},1)} {L(1-ns,1)L(\frac{1+ns}{2},1)}& \, j=\frac{d-1}{2} ;\\ \\ (1-q^{-1}) L(ns,1) & otherwise \end{cases},$$ and for $i \neq j$ we have $$\tau(i,j,1_{\varpi,\psi},s,\psi)= \begin{cases} q^{j-i+s(d-1)}\epsilon^{-1}(2s,\eta_\varpi^{2(i-j)},{\psi_{_2}}) \epsilon(s+\half,\eta_\varpi^{i-j},\psi) & j+i=d-1 ; \\ \\ 0 & otherwise. \end{cases}$$ 2.
If $j-i=1 \, \operatorname{mod}(d),$ then $$\tau(i,j,({\eta_\varpi})_{\varpi,\psi},s,\psi)=q^{(s+1)(j-i)}\epsilon(s+\half,\eta^d_\varpi,\psi)L(1,ns) \begin{cases} L^{-1}(1-ns,1) & (i,j)=(d-1,0) ; \\ \\ (1-q^{-1}) & otherwise \end{cases},$$ and if $j-i \neq1 ,\, \operatorname{mod}(d)$ then $$\tau(i,j,({\eta_\varpi})_{\varpi,\psi},s,\psi)= \begin{cases} q^{(j-i)+s(d-1)} \epsilon(s+\half, \eta^{i-j+1}_\varpi,\psi)\epsilon^{-1}(2s,\eta^{2(i-j+1)}_\varpi,\psi_2)& i+j=d-1 ; \\ \\ 0 & otherwise . \end{cases}$$ 3. Suppose that $\chi^n$ is ramified. If $j+i+e(\chi)=0 \, ( \operatorname{mod} \, d)$, then $$\tau(i,j,\chi_{\varpi,\psi},s,\psi)=$$ $$\ q^{j-i} (\chi(\varpi)q^{-s})^{-i-j} \chi(-1)\epsilon^{-1}(2s,\chi^2\eta_\varpi^{2(i-j)},{\psi_{_2}}) \epsilon(s+\half,\chi\eta_\varpi^{(d+1)(i-j)+\alpha(n)(e(\chi)+i+j)},\psi), $$ and otherwise $\tau(i,j,\chi_{\varpi,\psi},s,\psi)=0$. \end{lem} \begin{proof} By Lemma \ref{split porp} we have $$\tau(i,j,\chi_{\varpi,\psi},s,\psi)=q^{j-i} (\chi(\varpi)q^{-s})^{-i-j} \times $$ $$\Bigl( \int_{F^*_{n,i+j}} \ab z \ab^s \chi(z)\eta_\varpi(z)^{(d+1)(i-j)}\gamma_\psi\bigl(z\bigr)\psi'(z) \, d^*z+\int_{F^*_{n,d+i+j}} \ab z \ab^s \chi(z)\eta_\varpi(z)^{(d+1)(i-j)+d\alpha(n)}\gamma_\psi\bigl(z\bigr)\psi'(z) \, d^*z \Bigr).$$ Define again $\phi=q^{-r}\1_{1+\Pf^r}$. It was shown in Section 2 of \cite{Sz 3} that $$\widetilde{\phi}(x)=\gamma_\psi^{-1}(x)\psi(x)\1_{{\Pf^{-r}}}(x).$$ Utilizing Theorem \ref{meta ari} and arguing as in the proof of Theorem \ref{explicit odd} we obtain $$\tau(i,j,\chi_{_{\psi}},s,\psi)=q^{j-i} (\chi(\varpi)q^{-s})^{-i-j} \times$$ \begin{equation} \label{to be reminded soon} \Bigl(\widetilde{\theta}_{\bigl(e(\chi^2 \eta_\varpi^{2(i-j)})+i+j\bigr)'}(s,\chi\eta_\varpi^{(d+1)(i-j)},\psi)+\widetilde{\theta}_{\bigl(e(\chi^2 \eta_\varpi^{2(i-j)})+d+i+j\bigr)'}(s,\chi\eta_\varpi^{(d+1)(i-j)+d\alpha(n)},\psi) \Bigr). \end{equation} The proof is now reduced to a straight forward case by case computation. \end{proof} \begin{comment} $\begin{thm} Suppose that $n=0 \, \operatorname{mod}(4)$.\\ \\ 1. If $\chi=1$ then. $$\tau(j,j,1_{\varpi_\psi},s,\psi)= (1-q^{-1}) L(1,ns)$$ and for $i \neq j$ we have $$\tau(i,j,1_{\varpi_\psi},s,\psi)= \begin{cases} q^{j-i+s(d-1)}\epsilon^{-1}(\eta_\varpi^{2(i-j)},2s,{\psi_{_2}}) \epsilon(\eta_\varpi^{i-j},s+\half,\psi) & j+i=d-1 \\ \\ 0 & otherwise \end{cases}$$ 2. If $\chi=\eta_\varpi$ we hav If $j-i=1 \, \operatorname{mod}(d)$ then $$\tau(i,j,(\eta^d_\varpi)_\psi,s,\psi)=q^{(s+1)(j-i)}\epsilon(\eta^d_\varpi,s+\half,\psi)L(1,ns) \begin{cases} L^{-1}(1,1-ns) & (i,j)=(d-1,0) \\ \\ (1-q^{-1}) & otherwise \end{cases}$$ and if $j-i \neq1 ,\, \operatorname{mod}(d)$ then $$\tau(i,j,(\eta_\varpi)_\psi,s,\psi)= \begin{cases} q^{(j-i)+s(d-1)} \epsilon(\eta^{i-j+1}_\varpi,s+\half,\psi)\epsilon^{-1}(\eta^{2(i-j+1)}_\varpi,2s,\psi_2)& i+j=d-1 \\ \\ 0 & otherwise \end{cases}$$ 3. 
If $\chi^n$ is ramified (this means that for any integer $k$ we have$(\chi \eta_\varpi^k)^2$ is ramified and $e(\chi)=e(\chi \eta_\varpi^k).$) In this case we have: if $$j+i+e(\chi)=0 \, ( \operatorname{mod} \, d)$$ then $$\tau(i,j,\chi_{\varpi_\psi},s,\psi)=$$ $$ q^{j-i} (\chi(\varpi)q^{-s})^{-i-j} \chi(-1)\epsilon^{-1}(\chi^2\eta_\varpi^{2(i-j)},2s,{\psi_{_2}}) \epsilon(\chi\eta_\varpi^{(d+1)(i-j)+e(\chi)+i+j},s+\half,\psi) $$ \end{thm} \begin{proof} This time we have $$\tau(i,j,\chi_{_{\psi}},s,\psi)=$$ $$q^{j-i} (\chi(\varpi)q^{-s})^{-i-j} \Bigl(\widetilde{\theta}_{\bigl(e(\chi^2 \eta_\varpi^{2(i-j)})+i+j\bigr)'}(s,\chi\eta_\varpi^{(d+1)(i-j)},\psi)+\widetilde{\theta}_{\bigl(e(\chi^2 \eta_\varpi^{2(i-j)})+d+i+j\bigr)'}(s,\chi\eta_\varpi^{(d+1)(i-j)+d},\psi) \Bigr).$$ \end{proof} \end{comment} \section{An Irreducibility result.} \label{irr res} Fix $\tau$, a genuine smooth admissible irreducible representation of $\widetilde{H(F)}$. By Lemma \ref{cartan rep} $\tau \simeq i\bigl(\chi_{\varpi,\psi} \bigr)$ for some character $\chi$ of $F^*_d$. Extend $\chi$ to a character of $F^*$. Using Lemma \ref{cartan rep} again we observe that $\chi^n$ is independent of the particular extension chosen and of the splitting $\xi_{\varpi,\psi}$. Thus, it is an invariant of $\tau$. Define now $$I(\tau,s)=\operatorname{Ind}_{\widetilde{B(F)}}^{\mslt} \tau_s.$$ Let $$A_{w}(\tau,s ):I(\tau,s ) \rightarrow I(\tau^w,-s)$$ be the meromorphic continuation of the integral \begin{equation} \label{noreal} \int_F f\bigl(s(w)s(n(x))g\bigr)\, d_\psi x. \end{equation} Here $\tau^w$ is the representation $h \mapsto \tau\bigl(s(w)h s(w)^{-1} \bigr)$. For all but finitely many values of $q^s$, $A_{w}(\tau,s )$ is analytic and $$\operatorname{Hom}_{\mslt} \Bigl(I(\tau,s ),I(\tau,s) \Bigr)$$ is one-dimensional. Thus, there exists a rational function in $q^{-s}$, $\mu\bigl(\tau,s\bigr)$, such that $$A_{w^{-1}}\bigl(\tau^w,-s\bigr) \circ A_{w}\bigl(\tau,s \bigr)=\mu^{-1}(\tau,s) \operatorname{Id}.$$ \begin{thm} \label{main formula} $$\mu^{-1}(\tau,s)=q^{e(\chi^n,\psi)}\frac{L \bigl(ns,\chi^n \bigr)L \bigl(-ns,\chi^{-n}\bigr)}{L \bigl(1-ns,\chi^{-n} \bigr)L \bigl(1+ns,\chi^{n}\bigr)}.$$ \end{thm} \begin{proof} Fixing isomorphisms $I(\tau,s ) \simeq I\bigl(\chi_{\varpi,\psi},s \bigr)$, $I(\tau^w,-s) \simeq I \bigl(\chi^{-1}_{\varpi,\psi},-s \bigr)$ one immediately sees that $$A_{w^{-1}}\bigl(\chi^{-1}_{\varpi,\psi},-s\bigr)\circ A_{w}\bigl(\chi_{\varpi,\psi},s\bigr)=\mu^{-1}\bigl(\tau,s\bigr)\operatorname{Id}$$ (we choose the Haar measure to be the same one used in \eqref{noreal}). Since $d_\psi x=q^{ \frac {e(\psi)}{2}} \, dx$ it is sufficient to prove this theorem assuming that $\psi$ is spherical. We also assume, without loss of generality, that $\gamma_\psi(\varpi)=1$ and that $\chi$ is one of the characters that appear in Lemmas \ref{explicit odd} and \ref{explicit even}. Next we note that since $$A_{w^{-1}}\bigl(\chi^{-1}_{\varpi,\psi},-s\bigr)=\chi(-1)A_{w}\bigl(\chi^{-1}_{\varpi,\psi},-s\bigr),$$ the proof of the theorem is done once we show $$D(\chi_{_{\varpi,\psi}},s,\psi)D(\chi^{-1}_{_{\varpi,\psi}},-s,\psi)=\chi(-1)q^{-e(\chi^n)}\frac{L \bigl(ns,\chi^n \bigr)L \bigl(-ns,\chi^{-n}\bigr)}{L \bigl(1-ns,\chi^{-n} \bigr)L \bigl(1+ns,\chi^{n}\bigr)}\operatorname{Id}.$$ This identity is proven by a case-by-case matrix multiplication using the formulas in Lemmas \ref{explicit odd} and \ref{explicit even}.
More precisely, these two lemmas give the formula for $D(\chi^{-1}_{_{\varpi,\psi}},-s,\psi)$ for all $\chi$ under discussion except in the case where $n$ is even and $\chi=\eta_\varpi$. The formula in this case follows directly from \eqref{to be reminded soon} as well. During the course of the computation one should use \eqref{Tate gamma}, \eqref{epsilon twist} and \eqref{epsilon product} along with the fact that for all the cases, except for the case where $n$ is even and $\chi=\eta_\varpi$, we have $e(\chi^n)=e(\chi)$. This last assertion is proven by a similar argument to the one we used in the proof of Theorem \ref{meta ari} utilizing the fact that $1+\Pf \in {F^*}^n$. \end{proof} The Knapp-Stein Dimension Theorem for quasisplit $p$-adic groups, \cite{Sil}, gives a reducibility criterion for parabolic induction in terms of the Plancherel measure. The proof of the analogous result for metaplectic groups should follow using similar arguments to those that appear in \cite{Sil}. Although at this point there is no written proof, it is rather standard to assume this result for covering groups. See \cite{Bud} for an example involving an $n$-fold cover of $GL_2(F)$. Moreover, in \cite{Li} Li assumed the theory of R-groups for $\mspn$, the metaplectic double cover of $\spn$, which is based on the Knapp-Stein Dimension Theorem. Assuming the Knapp-Stein Dimension Theorem, it was proven in \cite{Sz 4} that all genuine principal representations of $\mspn$ induced from a unitary character are irreducible. This result was proved unconditionally in \cite{HMa}. It can be easily shown that the irreducibility results for principal series representations of the metaplectic double cover of $\gspn$ given in \cite{Sz 5} and \cite{Sz 6} also agree with the Knapp-Stein Dimension Theorem. The results in \cite{Sz 5} and \cite{Sz 6} are also unconditional since these are based on \cite{Sz 4}. In the case at hand the Knapp-Stein Dimension Theorem is reduced to the following statement: \begin{prop} Suppose that $\tau$ is unitary. Then, $I(\tau,0)$ is reducible if and only if $\tau \simeq \tau^w$ and $\mu^{-1}(\tau,s)$ is analytic at $s=0$. \end{prop} This proposition is contained in \cite{SA}, where the Knapp-Stein Dimension Theorem is proven for unitary parabolic induction from $P$ to $G$ and from $\widetilde{P}$ to $\widetilde{G}$, where $G$ is a reductive group defined over $F$, $P$ is a maximal parabolic subgroup of $G$, $\widetilde{G}$ is a central extension of $G$ by $\mu_n$ and $\widetilde{P}$ is the preimage of $P$ in $\widetilde{G}$. \begin{thm} \label{savin}Suppose that $\tau$ is unitary. Then $I(\tau,0)$ is reducible if and only if $n$ is odd and the projection of the central character of $\tau$ to $H^d(F)$ is a non-trivial quadratic character. \end{thm} \begin{proof} We first note that if $\tau \simeq i\bigl(\chi_{\varpi,\psi} \bigr)$ then $\tau^w \simeq i\bigl(\chi^{-1}_{\varpi,\psi} \bigr)$. Thus, by Lemma \ref{cartan rep}, $\tau \simeq \tau^w$ if and only if $\chi=\chi^{-1}\eta_\varpi^{2m}$ for some $m=1,2, \ldots,d-1$. If we extend $\chi$ to $\F^*$ then the last equality is equivalent to $\chi^{2d}=1$. Suppose first that $n$ is even. In this case we have just shown that $\tau \simeq \tau^w$ implies that $\chi^n$ is trivial. By Theorem \ref{main formula} we conclude that $\mu^{-1}(\tau,s)$ has a pole at $s=0$. Suppose now that $n$ is odd and that $\tau \simeq \tau^w$. In this case $\chi^{2n}$ is trivial. Thus, if $\chi^n$ is not trivial then $\mu^{-1}(\tau,s)$ is analytic at $s=0$.
The assertions $\chi^{2n}=1$, \, $\chi^n \neq 1$ are equivalent to the assertion that the projection of the central character of $\tau$ to $H^d(F)$ is a non-trivial quadratic character. \end{proof} \section{Appendix: A result of W. Jay Sweet.} In this appendix we shall not assume any restriction on the $p$-adic field $F$. Let $\chi$ be a character of $\F^*$ and let $\psi$ be a non-trivial character of $\F$. We intend to prove here that for $Re(s)>>0$ \begin{equation} \label{J.Sweet} \lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\chi(x) \ab x \ab^s \gamma_\psi(x)^{-1} \psi(x) \, d_\psi^* x=\gamma_F^{-1}(\psi_{-1}) \chi(-1)\gamma^{-1}(2s,\chi^{2},{\psi_{_2}}) \gamma(s+\half,\chi,\psi). \end{equation} Our proof follows closely the original proof of Sweet in \cite{Sweet} although we did not keep all of his notation. \begin{lem} Suppose that \eqref{J.Sweet} holds provided that $\psi$ is unramified. Then, it holds for all non-trivial $\psi$. \end{lem} \begin{proof} Assume that $e(\psi)=n$. Then $\psi'=\psi_{(\varpi^n)}$ is unramified and $d_\psi x=q^{\frac n 2}d_{\psi'} x=q^{\frac n 2}dx$. By definition of the Weil index we have $$\gamma_\psi(x)=\gamma_{\psi'}(x)(x,\varpi^n)_2.$$ This implies that $$\lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\gamma_\psi^{-1}(x)\chi(x) \ab x \ab^s \psi(x) \, d_\psi^*x=q^{\frac n 2}\lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\gamma_{\psi'}^{-1}(x)(x,\varpi^n)_2\chi(x) \ab x \ab^s \psi'(x\varpi^{-n}) \, d^*x.$$ We make the change integration variables $a=x \varpi^{-n}$ and obtain $$\lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\gamma_\psi^{-1}(x)\chi(x) \ab x \ab^s \psi(x) \, d_\psi^*x=q^{\frac n 2 -ns}\gamma_{\psi'}^{-1}(\varpi^n)(-1,\varpi^n)\chi(\varpi^n)\lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\gamma_{\psi'}^{-1}(a)\chi(a) \ab a \ab^s \psi'(a) \, d^*a.$$ Since $\psi'$ is unramified it follows from our assumption that $$\lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\gamma_\psi^{-1}(x)\chi(x) \ab x \ab^s \psi(x) \, d_\psi^*x=$$ $$q^{\frac n 2 -ns}\gamma_{\psi'}^{-1}(\varpi^n)(-1,\varpi^n)\chi(\varpi^n) \gamma_F^{-1}(\psi'_{-1})\chi(-1)\gamma^{-1}(2s,\chi^{2},\psi'_2) \gamma(s+\half,\chi,\psi').$$ By \eqref{epsilon change psi} we have $$\lim_{r \rightarrow \infty}\int_{\Pf^{-r}}\gamma_\psi^{-1}(x)\chi(x) \ab x \ab^s \psi(x) \, d_\psi^*x=$$ $$\gamma_{\psi'}^{-1}(\varpi^n)(-1,\varpi^n) \gamma_F^{-1}(\psi_{-1}')\chi(-1)\gamma^{-1}(2s,\chi^{2},{\psi_{_2}}) \gamma(s+\half,\chi,\psi).$$ It is left to show that $$\gamma_{\psi'}(\varpi^n)(-1,\varpi^n) \gamma_F(\psi_{-1}')=\gamma_F(\psi_{-1}).$$ Indeed, $$\gamma_{\psi'}(\varpi^n)(-1,\varpi^n) \gamma_F(\psi_{-1}')=\gamma_{\psi'}(-\varpi^{-n})\gamma_{\psi'}^{-1}(-1) \gamma_F(\psi_{-1}')=\gamma_F(\psi'_{(-\varpi^{-n})})=\gamma_F(\psi_{-1}).$$ \end{proof} We shall assume from this point that $\psi$ is normalized. Define $$e(2,\F)=log_q[\Of:2\Of]=-log_q \ab 2 \ab$$ and define $$c_\psi(a)= \lim_{r \rightarrow \infty}\int_{\Pf^{-r}} \psi(ax^2) \, dx.$$ With this notation: $$ \gamma_\psi(a)=\ab a \ab ^\half c_\psi(a)c^{-1}_\psi(1).$$ Proposition 3.3 of \cite{CC} states that \begin{equation} \label{Weil id} \gamma_\F(a\psi)=\ab 2a \ab^{\half}c_\psi(a).\end{equation} We fix an integer $M$ such that $M \geq \max\{2e+1,m(\chi)\}$. By Hensel's lemma, $1+\Pf^{2e+1} \subseteq {\F^*}^2$. This implies (Proposition 3.1 of \cite{CC}) that for $\ab a \ab \geq q^{-M}$ we have \begin{equation} \label{CC identity} c_\psi(a)=\int_{\Pf^{-M}} \psi(ax^2)\, dx. 
\end{equation} Note that since $\gamma_\psi \in \C^{1}$, \eqref{CC identity} implies \begin{equation} \label{from CC identity} \gamma_\psi^{-1}=\gamma_{(\psi_{-1})}. \end{equation} The following is Lemma 2.2 of \cite{Sweet}. We give an elementary proof. \begin{lem} \label{sweet lemma} For $z \in \F^*$ we have $$\int_{\Pf^{-M}} \psi^{-1}(zc^2-2c) \, dc=\psi(z^{-1})\ab z \ab ^{-\half}\gamma_\psi^{-1}(z)c_\psi(-1)1_{\Pf^{-M}}(z^{-1}).$$ \end{lem} \begin{proof} Write $x=c-z^{-1}.$ This gives \begin{equation} \label{var c}\int_{\Pf^{-M}} \psi^{-1}(zc^2-2c) \, dc=\psi(z^{-1}) \int _{-z^{-1}+\Pf^{-M}} \psi(-zx^2) \, dx.\end{equation} Assume first that $\ab z \ab \geq q^{-M}$. In this case $-z^{-1} \in \Pf^{-M}$. Thus, by \eqref{CC identity} and \eqref{from CC identity} we have $$\int_{\Pf^{-M}} \psi^{-1}(zc^2-2c)\, dc=\psi(z^{-1}) \int _{\Pf^{-M}} \psi(-zx^2) \, dx=c_\psi(-z)=\ab z \ab ^{-\half}\gamma_\psi^{-1}(z)c_\psi(-1).$$ The Lemma is now proven for this case. Suppose now that $\ab z \ab < q^{-M}$. We shall write $z=u_{_0}\varpi^k$, where $u \in \Of^*$, $k>M$. Note that in this case $$-z^{-1}+\Pf^{-M}=-z^{-1}(1+\Pf^{k-M}).$$ Thus, by changing $x=-z^{-1}y$ in the right hand side of \eqref{var c}, we only need to show that \begin{equation} \label{show this} \int _{1+\Pf^{k-M}} \psi(-z^{-1}y^2) \, dy=0.\end{equation} Suppose that $2M > k >M$. In this case we can pick integer $t$ such that $$\max \{2k-2M,2e+1\} \leq t <k.$$ For any $u\in \Of$, we have $1+u\varpi^t \in {F^*}^2$. We claim that we can always find a solution for the equation $x^2=1+u\varpi^t$ such that $x \in 1+\Pf^{k-M}$. Indeed, if $$\ab x^2-1 \ab \leq q^{-t} \leq q^{-(2k-2M)},$$ then since $\ab -x-1 \ab \cdot \ab x-1 \ab =\ab x^2 -1 \ab$ it is not possible that both $\ab -x-1 \ab$ and $\ab x-1 \ab$ are greater then $q^{-(k-M)}$. We shall denote this solution by $\sqrt{1+u\varpi^t}$. It follows that for any $u \in \Of$ we can change $y\mapsto y\sqrt{1+u\varpi^t}$ in the left hand side of \eqref{show this}, without changing the measure or the domain of integration. This gives $$\int _{1+\Pf^{k-M}} \psi(-z^{-1}y^2) \, dy=\int_{\Of} \int_{1+\Pf^{k-M}} \psi_{-u_0^{-1}}\bigl(\varpi^{-k}y^2(1+u\varpi^t)\bigr) \, dy \, du=$$ $$\int_{1+\Pf^{k-M}} \psi_{-u_0}(\varpi^{k} y^2) \Bigl( \int_{\Of} \psi_{-y^2u_0^{-1}}(\varpi^{t-k}u) \, du \Bigr) \, dy.$$ Since $k>t$, the last inner integral vanishes. We now deal with the case $k \geq 2M$. We change $y=1+u \varpi^{k-M}$ in the left hand side of \eqref{show this}. We have to show that $$\int _{\Of} \psi_{-u_{_0}^{-1}}(2\varpi^{-M}u) \psi_{-u_{_0}^{-1}}(\varpi^{k-2M}u^2) \, du=0.$$ This follows from the fact that since $k \geq 2M$, $\psi_{-u_{_0}^{-1}}(\varpi^{k-2M}u^2)=1$ for all $u \in \Of$ and from the fact that the map $u \mapsto \psi_{-u_{_0}^{-1}}(2\varpi^{-M}u)$ is a non-trivial character of $\Of$. \end{proof} In what follows, we shall have two different additive characters. To stress the dependence of the Fourier transom of $\phi \in S(F)$ on the additive character we shall write $\phi_\psi$ instead of $\widehat{\phi}$. While we take the $\psi$-self dual measure for the integration defining the $\psi$-Fourier transform we shall assume here that the measure in the integral defining the Mellin transforms is $d^*x$. This last convenient assumption does not change the definition of the Tate $\gamma-$factor in \eqref{tate def}. 
Define \begin{equation} \label{f def} f^{^M}(x)=\psi(2x) \1_{\Pf^{-M}}(x).\end{equation} By a straightforward computation one sees that \begin{equation} \label{f fou} f_{{\psi_{_2}}}^{^M}(x)=q^{M-\frac e 2} \1_{1+\Pf^{M-e}}(-x) \end{equation} and that \begin{equation} \label{f fou mel} \zeta(s,\chi^2,f_{{\psi_{_2}}}^{^M})=q^{\frac e 2}=\ab 2 \ab ^{-\half}.\end{equation} We now come to the heart of the proof given in \cite{Sweet}. Denote $$I(M,\chi,\psi,s)=\int_{\Pf^{-M}}\gamma_\psi^{-1}(x)\chi(x) \ab x \ab^s \psi(x) \, d^*x.$$ \begin{thm} \label{sweet thm}$$I(M,\chi,\psi,s)=\gamma_F^{-1}(\psi_{-1})\chi(-1)\gamma^{-1}(2s,\chi^{2},{\psi_{_2}}) \gamma(s+\half,\chi,\psi).$$ \end{thm} \begin{proof} $$\int_{\Pf^{-M}}\gamma_\psi^{-1}(x)\chi(x) \ab x \ab^s \psi(x) \, d^*x=\int_{\F}\gamma_\psi^{-1}(x)\chi(x) \ab x \ab^s \psi(x)1_{\Pf^{-M}}(x) \, d^*x=[x \mapsto x^{-1}]=$$ $$\int_{\F}\gamma_\psi^{-1}(x)\chi^{-1}(x) \ab x \ab^{-s} \psi(x^{-1})1_{\Pf^{-M}}(x^{-1}) \, d^*x=$$ $$c^{-1}_\psi(-1)\int_{\F}\chi^{-1}(x) \ab x \ab^{\half-s} \Bigl( \ab x \ab^{-\half}\gamma_\psi^{-1}(x)c_\psi(-1) \psi(x^{-1})1_{\Pf^{-M}}(x^{-1}) \Bigr)\, d^*x.$$ By Lemma \ref{sweet lemma}, $$I(M,\chi,\psi,s)=c^{-1}_\psi(-1)\int_{\F}\chi^{-1}(x) \ab x \ab^{\half-s} \Bigl( \int_{\Pf^{-M}} \psi^{-1}(xc^2-2c) \, dc \Bigr) \, d^*x=[x \mapsto -x]=$$ $$\chi(-1)c^{-1}_\psi(-1)\int_{\F}\chi^{-1}(x) \ab x \ab^{\half-s} \Bigl( \int_{\Pf^{-M}} \psi(xc^2) \psi(2c) \, dc \Bigr) \, d^*x.$$ Recalling \eqref{f def} we obtain \begin{equation} \label{inter} I(M,\chi,\psi,s)=c^{-1}_\psi(-1)\chi(-1)\int_{\F}\chi^{-1}(x) \ab x \ab^{\half-s} \Bigl( \int_{\F} \psi(xc^2) f^{^M}(c) \, dc \Bigr) \, d^*x.\end{equation} Let $\phi \in S(\F)$ be such that $\phi_\psi(0)=0$ and such that $\zeta(s,\chi,\phi)$ is not the zero function and define: \begin{equation} \label{I tag}I'(M,\chi,\psi,s)=I(M,\chi,\psi,s)\zeta(s+\half,\chi,\phi)\zeta(1-2s,\chi^{-2},f_{{\psi_{_2}}}^{^M}). \end{equation} By \eqref{Weil id}, \eqref{f fou mel} and \eqref{inter} we have $$I'(M,\chi,\psi,s)=\gamma^{-1}_F(\psi_{-1})\chi(-1)\int_{\F^*} \phi(y) \chi(y) \ab y \ab ^{s+\half} \, d^* \! y \int_{\F}\chi^{-1}(x) \ab x \ab^{\half-s} \Bigl( \int_{\F} \psi(xc^2) f^{^M}(c) \, dc \Bigr) \, d^*x$$ (note that since $Re(s)>>0$, both integrals in the right hand side are absolutely convergent). Recalling that $d^*y=\frac {dy}{\ab y \ab}$ we get $$I'(M,\chi,\psi,s)=\gamma^{-1}_F(\psi_{-1})\chi(-1)\int_{\F^*}\int_{\F^*} \chi(yx^{-1})\phi(y)\ab xy^{-1} \ab ^{\half-s} \Bigl( \int_{\F} \psi(xc^2) f^{^M}(c) \, dc \Bigr) \, d^*x \, dy=$$ $$ [x=yz]=\gamma^{-1}_F(\psi_{-1})\chi(-1)\int_{\F^*}\int_{\F^*} \chi^{-1}(z)\phi(y)\ab z \ab ^{\half-s} \Bigl( \int_{\F} \psi(yzc^2) f^{^M}(c) \, dc \Bigr) \, d^*z \, dy.$$ We would like to change the $z-y$ order of integration. Therefore, we need to show that $$G(y,z)=\chi^{-1}(z)\phi(y)\ab z \ab ^{-\half-s} \Bigl( \int_{\F} \psi(yzc^2) f^{^M}(c) \, dc \Bigr)$$ is integrable on $\F \times \F$ (with respect to $dy \, dz$). Indeed, since $\phi \in S(F)$, $G(y,z)$ vanishes for large values of $\ab y \ab$. By Lemma \ref{sweet lemma}, $G(y,z)$ also vanishes when $\ab yz \ab<q^{-M}$. This implies that, as $\ab z \ab \rightarrow \infty$, $G(y,z)$ is bounded by $c\ab z \ab ^{-\half-s}$. We also change the $y-c$ order of integration. This change is easily justified since, for each fixed $z$, $$K(c,y)=\phi(y) \ab z \ab ^{\half-s} \psi(yzc^2) f^{^M}(c)$$ is a bounded compactly supported function in the $c-y$ plane.
Thus, $$I'(M,\chi,\psi,s)=\gamma^{-1}_F(\psi_{-1})\chi(-1)\int_{\F^*}\int_{\F^*} \chi^{-1}(z)f^{^M}(c)\ab z \ab ^{\half-s} \Bigl( \int_{\F}\phi(y) \psi(yzc^2) \, dy \Bigr) \,dc \, d^*z=$$ $$\gamma^{-1}_F(\psi_{-1})\chi(-1)\int_{\F^*}\int_{\F^*} \chi^{-1}(z)f^{^M}(c)\ab z \ab ^{\half-s} \phi_\psi(zc^2) \,dc \, d^*z.$$ We note that $$F(c,z)=\chi^{-1}(z)f^{^M}(c)\ab z \ab ^{\half-s} \phi_\psi(zc^2)$$ is compactly supported on the $c-z$ plane, and since we assume that $\phi_\psi$ is supported away from 0, it follows that $F(c,z)$ is bounded. This implies that we can change the $c-z$ order of integration. We then change $zc^2=t$ in the inner $d^*z$ integral and obtain: $$I'(M,\chi,\psi,s)=\gamma^{-1}_F(\psi_{-1})\chi(-1)\int_{\F^*}\int_{\F^*} \chi^{-1}(tc^{-2})f^{^M}(c)\ab tc^{-2} \ab ^{\half-s} \phi_\psi(t) \, d^*t \, dc= $$ $$\gamma_F^{-1}(\psi_{-1})\chi(-1)\int_{\F^*}\phi_\psi(t) \chi^{-1}(t) \ab t \ab^{\half-s} \, d^*t \int_{\F^*} f^{^M}(c) \chi^{2}(c) \ab c \ab^{2s} \, d^*c.$$ Recalling \eqref{I tag}, we have shown that $$\gamma^{-1}_F(\psi_{-1})\chi(-1) \zeta(-s+\half,\chi^{-1},\phi_\psi,)\zeta(2s,\chi^2,f^{^M})=I(M,\chi,\psi,s)\zeta(s+\half,\chi,\phi)\zeta(1-2s,\chi^{-2},f_{{\psi_{_2}}}^{^M}).$$ By \eqref{tate def}, the functional equation defining the Tate $\gamma$-factor, the theorem now follows. \end{proof}
\section{Introduction} Probabilistic databases (PDBs) provide a framework for representing uncertain data in databases. In database theory, they have been intensely studied since the late 1990s \cite{Suciu+2011,VandenBroeckSuciu2017}. Most efforts have been directed towards tuple-independent relational databases \emph{under a set semantics}. However, it is well-known that many relational database systems use a bag semantics, where identical tuples may appear several times in the same relation. A bag semantics for PDBs has only been considered recently \cite{GroheLindner2020,GroheLindner2020a}, in the context of infinite PDBs. Note that even in the traditional setting of a PDB where only finitely many facts appear with non-zero probability, under a bag semantics we have to consider infinite probability spaces, simply because there is no a-priori bound on the number of times a fact may appear in a bag. In this work, we investigate the complexity of evaluating queries on probabilistic databases using a bag semantics. Formally, probabilistic databases are probability distributions over conventional database instances. In a database instance, the answer to a Boolean query under set semantics is either \emph{true} or \emph{false}. In a probabilistic database, the answer to such a query becomes a random variable. The problem of interest is \emph{probabilistic query evaluation}, that is, computing the probability that a Boolean query returns true, when given a probabilistic database.\footnote{We focus on Boolean queries; the problem whether a specific tuple is in the result of a higher-arity query is equivalent.} As most of the database theory literature, we study the \emph{data complexity} of query evaluation \cite{Vardi1982}, that is, the complexity of the problem, when the query $Q$ is fixed, and the PDB is the input. The standard model for complexity theoretic investigations is that of \emph{tuple-independent} PDBs, where the distinct facts constitute independent events. Probabilistic query evaluation is well-understood for the class of \emph{unions of conjunctive queries} (UCQs) on PDBs that are tuple-independent (see the related works section below). However, all existing results discuss the problem under set semantics. Here, on the contrary, we discuss the probabilistic query evaluation under \emph{bag semantics}. % For tuple-independent (set) PDBs, a variety of representation systems have been proposed (cf.\ \cite{GreenTannen2006,Suciu+2011}), although for complexity theoretic discussions, it is usually assumed that the input is just given as a table of facts, together with their marginal probabilities \cite{VandenBroeckSuciu2017}. In the bag version of tuple-independent PDBs \cite{GroheLindner2020a}, different facts are still independent. Yet, the individual facts (or, rather, their multiplicities) are $\mathbb N$-valued, instead of Boolean, random variables. This highlights the need to discuss representations of bag PDBs, before we can discuss the complexity of computational problems. Once we have settled on a suitable class of representations, we investigate the problem of probabilistic query evaluation again, subject to representation system $\mathrm{Rep}$. Note that under bag semantics, the answer to a Boolean query on a single database instance is a non-negative integer, rather than true or false. Because of this, the answer on a PDB is a probability distribution on $\mathbb N$. 
This gives rise to two natural computational problems regarding the evaluation of queries on PDBs under bag semantics: $\mathsf{EXPECTATION}_{\mathrm{Rep}}(Q)$, which is computing the expected outcome, and $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ which is computing the probability that the outcome is at most $k$. For set semantics, these problems coincide, because the expected value of a $\{0,1\}$-valued random variables coincides with the probability that the outcome is $1$. But for bag-semantics, the two problems turn out to be quite different. Recall that for the class of UCQs, the corresponding problem for set semantics can either be solvable in polynomial time, or be $\sharp\@complexitystyle{P}$-hard \cite{DalviSuciu2012}. However, under only mild assumptions on the representation system, we can show that expected answer counts of UCQs can always be computed in polynomial time. For computing the probability of concrete values, the problem appears to be less accessible. Moreover, it is not at all clear, how the complexity of $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ relates to $\mathsf{PQE}_{\mathrm{Rep}}(Q,k')$ for $k \neq k'$. Again, with respect to some assumptions on the representation, we prove that the dichotomy of Dalvi and Suciu \cite{DalviSuciu2007a} stays intact for self-join free CQs. More precisely, we find that for hierarchical Boolean self-join free CQs, $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ is efficiently solvable for all $k \in \mathbb N$ given that we have fast access to the probabilities of fact multiplicities. In sufficiently expressive representation systems with the above access property, $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ remains hard if its set-version is hard (which even holds for UCQs). As a last step, we show that for Boolean self-join free CQs hardness of $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ transfers to hardness of $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ for all other $k \in \mathbb N$. We highlight that computing expectations is easier under bag semantics than under set semantics, because semantically, disjunctions and existential quantification turn into sums. This allows us to exploit the linearity of expectation. On the other hand, this change in semantics keeps us from directly applying some central ideas from \cite{DalviSuciu2012} when analyzing $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$. Our hardness result therefore uses a novel, non-trivial reduction. The algebraic techniques that are used therein could potentially be of independent interest. \paragraph*{Related Work} The most prominent result regarding probabilistic query evaluation is the Dichotomy theorem by Dalvi and Suciu \cite{DalviSuciu2012} that provides a (polynomially time decidable) separation between unions of conjunctive queries for which probabilistic evaluation is possible in polynomial time, and such where the problem becomes $\sharp\@complexitystyle{P}$-hard. They started their investigations with self-join free conjunctive queries \cite{DalviSuciu2007a} and later extended their results to general CQs \cite{DalviSuciu2007b} and then UCQs \cite{DalviSuciu2012}. Beyond the queries they investigate, there are a few similar results for fragments with negations or inequalities \cite{FinkOlteanu2016,OlteanuHuang2008,OlteanuHuang2009}, for homomorphism-closed queries \cite{AmarilliCeylan2020} and others \cite{ReSuciu2009}, and on restricted classes of PDBs \cite{Amarilli+2016}. Good overviews over related results are given in \cite{VandenBroeckSuciu2017,Suciu2020}. 
In recent developments, the original dichotomies for self-join free CQs, and for general UCQs have been shown to hold even under severe restrictions to the fact probabilities that are allowed to appear \cite{AmarilliKimelfeld2021,KenigSuciu2021}. The bag semantics for CQs we use here is introduced in \cite{ChaudhuriVardi1993}. A detailed analysis of the interplay of bag and set semantics is presented in \cite{Cohen2009}. Considering multiplicities as semi-ring annotations \cite{Green+2007,GraedelTannen2017}, embeds bag semantics into a broader mathematical framework. % % \section{Preliminaries} \label{sec:prelim} In this section, we introduce the relevant background, especially from logic and database theory, that is needed in this work. We denote by $\mathbb N$ and $\mathbb N_+$ the sets of non-negative, and of positive integers, respectively. We denote open, closed and half open intervals of real numbers by $(a,b)$, $[a,b]$, $[a,b)$ and $(a,b]$, respectively, where $a\leq b$. With a subscript $\mathbb Q$, we denote the restriction of these intervals to rational numbers, like in $[0,1]_{\mathbb Q}$. By $\binom{n}{k}$ we denote the binomial coefficient and by $\binom{ n }{ n_1, \dots, n_k }$ the multinomial coefficient. \subsection{Probabilistic Bag Databases} We fix a countable, non-empty set $\mathrm{dom}$ (the \emph{domain}). A \emph{database schema} $\tau$ is a finite, non-empty set of relation symbols. Every relation symbol $R$ has an arity $\ar(R) \in \mathbb N_+$. A \emph{fact} over $\tau$ and $\mathrm{dom}$ is an expression $R( \tup a )$ where $\tup a \in \mathrm{dom}^{\ar(R)}$. A \emph{(bag) database instance} is a bag (i.\,e. multiset) of facts. Formally, a bag (instance) is specified by a function $\#_D$ that maps every fact $f$ to its multiplicity $\#_D( f )$ in $D$. The \emph{active domain} $\adom(D)$ is the set of domain elements $a$ from $\mathrm{dom}$ for which there exists a fact $f$ containing $a$ such that $\#_D( f ) > 0$. A \emph{probabilistic (bag) database} (or, \emph{(bag) PDB}) $\ensuremath{\mathcal{D}}$ is a pair $(\mathbb{D}, P)$ where $\mathbb{D}$ is a set of bag instances and $P \from 2^{\mathbb{D}} \to [0,1]$ is a probability distribution over $\mathbb{D}$. Note that, even when the total number of different facts is finite, $\mathbb{D}$ may be infinite, as facts may have arbitrarily large multiplicities. We let $\#_{\ensuremath{\mathcal{D}}}( f )$ denote the random variable $D \mapsto \#_D( f )$ for all facts $f$. If $\ensuremath{\mathcal{D}} = (\mathbb{D},P)$ is a PDB, then $\adom( \ensuremath{\mathcal{D}} ) \coloneqq \bigcup_{ D \in \mathbb{D} } \adom( D )$. We call a PDB \emph{fact-finite} if the set $\set{ f \with \#_D(f) > 0 \text{ for some } D \in \mathbb{D} }$ is finite. In this case, $\adom( \ensuremath{\mathcal{D}} )$ is finite, too. A bag PDB $\ensuremath{\mathcal{D}}$ is called \emph{tuple-independent} if for all $k \in \mathbb N$, all pairwise distinct facts $f_1, \dots, f_k$, and all $n_1, \dots, n_k \in \mathbb N$, the events $\#_D( f_i ) = n_i$ are independent, i.\,e., \[ \Pr_{ D \sim \ensuremath{\mathcal{D}} }\big( \#_D(f_i) = n_i \text{ for all } i = 1, \dots, k \big) = \prod_{ i = 1 }^{ k } \Pr_{ D \sim \ensuremath{\mathcal{D}} }\big( \#_D(f_i) = n_i \big)\text. \] \emph{Unless it is stated otherwise, all probabilistic databases we treat in this paper are assumed to be fact-finite and tuple-independent.} \subsection{UCQs with Bag Semantics} Let $\spacestyle{V}$ be a countably infinite set of variables. 
An \emph{atom} is an expression of the shape $R(\tup t)$ where $R \in \tau$ and $\tup t \in ( \mathrm{dom} \cup \spacestyle{V} )^{\ar(R)}$. A \emph{conjunctive query (CQ)} is a formula $Q$ of first-order logic (over $\tau$ and $\mathrm{dom}$) of the shape \[ Q = \exists x_1 \dots \exists x_m \with R_1( \tup t_1 ) \wedge \dots \wedge R_n( \tup t_n )\text, \] in which we always assume that the $x_i$ are pairwise different, and that $x_i$ appears in at least one of $\tup t_1,\dots,\tup t_n$ for all $i = 1,\dots, m$. Such a CQ is called \emph{self-join free} if the relation symbols $R_1,\dots,R_n$ are pairwise different. If $Q$ is a CQ of the above shape, we let $Q^*$ denote the quantifier-free part $R_1( \tup t_1) \wedge \dots \wedge R_n( \tup t_n )$ of $Q$, and we call $R_i( \tup t_i )$ an \emph{atom of $Q$} for all $i = 1, \dots, n $. A \emph{union of conjunctive queries (UCQ)} is a formula of the shape $ Q = Q_1 \vee \dots \vee Q_N $ where $Q_1,\dots,Q_N$ are CQs. We use the terms query and formula synonymously. A query is called \emph{Boolean} if it contains no free variables (that is, there are no occurrences of variables that are not bound by a quantifier). \emph{From now on, and throughout the remainder of the paper, we only discuss Boolean (U)CQs.} \medskip Recall that $\#_D$ denotes the multiplicity function of a bag instance $D$. The bag semantics of CQs and UCQs is an extension of $\#_D$ to queries. For Boolean CQs $Q = \exists x_1 \dots \exists x_m \with R_1( \tup t_1 ) \wedge \dots \wedge R_n( \tup t_n )$ we define \begin{equation}\label{eq:CQsem} \#_D( Q ) \coloneqq \sum_{ \tup a \in \adom(D)^m } \prod_{ i = 1 }^{ n } \#_D\big( R_i( \tup t_i[ \tup x / \tup a ] ) \big)\text, \end{equation} where $\tup x = (x_1,\dots,x_m)$ and $\tup a = (a_1,\dots,a_m)$, and $R_i(\tup t_i[ \tup x / \tup a ])$ denotes the fact that emerges from $R_i(\tup t_i)$ by replacing, for all $j = 1, \dots, m$, every occurrence of $x_j$ by $a_j$. If $Q = Q_1 \vee \dots \vee Q_N$ is a Boolean UCQ, then each of the $Q_i$ is a Boolean CQ. We define \begin{equation}\label{eq:UCQsem} \#_D( Q ) \coloneqq \#_D( Q_1 ) + \dots + \#_D( Q_N )\text. \end{equation} Whenever convenient, we write $\#_D Q$ instead of $\#_D(Q)$. \begin{remark} We point out that in \eqref{eq:CQsem}, conjunctions should intuitively be understood as joins rather than intersections. Our definition \eqref{eq:CQsem} for the bag semantics of CQs matches the one that was given in \cite{ChaudhuriVardi1993}. This, and the extension \eqref{eq:UCQsem} for UCQs, are essentially special cases of how semiring annotations of formulae are introduced in the provenance semiring framework \cite{Green+2007,GraedelTannen2017}, the only difference being that we use the active domain semantics. For UCQs, however, this is equivalent since the value of \eqref{eq:CQsem} stays the same when the quantifiers range over arbitrary supersets of $\adom(D)$. \end{remark} Note that the result $\#_D Q$ of a Boolean UCQ on a bag instance $D$ is a non-negative integer. Thus, when evaluated over a PDB $\ensuremath{\mathcal{D}} = (\mathbb{D}, P)$, this yields an $\mathbb N$-valued random variable $\#_{\ensuremath{\mathcal{D}}} Q$ with \[ \Pr\big( \#_{\ensuremath{\mathcal{D}}} Q = k \big) = \Pr_{ D \sim \ensuremath{\mathcal{D}} }\big( \#_D Q = k \big)\text. \] \begin{example} Consider a tuple-independent bag PDB $\ensuremath{\mathcal{D}}$ over the facts $R(a)$ and $S(a)$, where $R(a)$ has multiplicity $2$ or $3$, both with probability $\frac12$, and $S(a)$ has multiplicity $1$, $2$ or $3$, with probability $\frac13$ each.
Then, \begin{gather*} \Pr\big( \#_{\ensuremath{\mathcal{D}}}( R(a)\wedge S(a)) = 6 \big) = \Pr\big( \#_{\ensuremath{\mathcal{D}}}( R(a) ) = 2 \big) \Pr\big( \#_{\ensuremath{\mathcal{D}}}( S(a) ) = 3 \big)\\ + \Pr\big( \#_{\ensuremath{\mathcal{D}}}( R(a) ) = 3 \big) \Pr\big( \#_{\ensuremath{\mathcal{D}}}( S(a) ) = 2 \big) = \frac13 \text. \end{gather*} \end{example} There are now two straightforward ways to formulate the problem of answering a Boolean UCQ over a probabilistic database. We could either ask for the expectation $\E\big( \#_{\ensuremath{\mathcal{D}}} Q \big)$, or compute the probability that $\#_{\ensuremath{\mathcal{D}}} Q$ is at most / at least / equal to $k$. These two options coincide for set semantics, as $\#_{\ensuremath{\mathcal{D}}} Q$ is $\set{0,1}$-valued in this setting.\footnote{In fact, in the literature both approaches have been used to introduce the problem of probabilistic query evaluation \cite{Suciu+2011,VandenBroeckSuciu2017}.} For bag PDBs, these are two separate problems to explore. Complexity-wise, we focus on \emph{data complexity} \cite{Vardi1982}. That is, the query (and for the second option, additionally the number $k$) is a parameter of the problem, so that the input is only the PDB. Before we can start working on these problems, we first need to discuss how bag PDBs are presented as an input to an algorithm. This is the purpose of the next section. \section{Representation Systems} \label{sec:rep} Recall that a (tuple-independent) probabilistic database without multiplicities can be encoded as a table containing all facts together with their \emph{marginal probability}. This does not easily extend to bag PDBs, as the distributions of $\#_{\ensuremath{\mathcal{D}}}( f )$ for facts $f$ may have infinite support. \begin{example}\label{exa:geometric} Consider the PDB $\ensuremath{\mathcal{D}} = ( \mathbb{D}, P )$ over the single fact $f$ where $\#_{\ensuremath{\mathcal{D}}}( f )$ is geometrically distributed with parameter $\frac12$. That is, $\Pr_{ D \sim \ensuremath{\mathcal{D}} }\big( \#_D(f) = k \big) = 2^{-(k+1)}$ for all $k \in \mathbb N$. Despite $f$ being the only available fact, the sample space of $\ensuremath{\mathcal{D}}$ is infinite, as $\mathbb{D} = \set[\big]{ \@ifstar\@bag\@@bag{}, \@ifstar\@bag\@@bag{ f }, \@ifstar\@bag\@@bag{ f, f }, \dots }$, where each of the instances has positive probability in $\ensuremath{\mathcal{D}}$. \end{example} To pass PDBs as inputs to algorithms, we use the notion of representation systems for PDBs \cite{GreenTannen2006}. The computational problems we investigate will always be stated with respect to a representation system. Our definition follows the general setup of \cite{GreenTannen2006}. \begin{definition}\label{def:repsys} A \emph{representation system} (for bag PDBs) $\mathrm{Rep}$ is a pair $\mathrm{Rep} = \big( \mathbf{T}_{\mathrm{Rep}}, \sem{\?}_{\mathrm{Rep}} \big)$ where $\mathbf{T}_{\mathrm{Rep}}$ is a non-empty set (the elements of which we call \emph{tables}), and $\sem{\?}_{ \mathrm{Rep} }$ is a function that maps every $T \in \mathbf{T}_{\mathrm{Rep}}$ to a probabilistic database $\sem{T}_{ \mathrm{Rep} }$. \end{definition} We abuse notation and also use $T$ when referring to the PDB $\sem{ T }_{ \mathrm{Rep} }$. This definition is not tailored to tuple-independence yet, and can be used to describe representation systems for arbitrary bag PDBs. We will, however, focus on a particular kind of representation systems, akin to the representation of tuple-independent set PDBs.
Instead of annotating facts with their marginal probability, they are labeled with the parameters of some parameterized distribution over multiplicities. For example, if, as in \cref{exa:geometric}, the multiplicity of $f$ should be geometrically distributed with parameter $\frac{1}{2}$, the annotation of $f$ could be $(\mathrm{Geometric},1/2)$. In general, the tables of a representation system may feature multiple parameterized distributions. We now make this idea formal and, at the same time, specify the encoding of a table in such a representation system, as it would be passed to the problems and algorithms we discuss later. \begin{definition}\label{def:paramtirep} Let $\Lambda$ be a non-empty set (the \emph{parameter set}), and let $\Sigma$ be a finite non-empty set of symbols (the \emph{encoding alphabet}). Let $\cod{ \? } \from \Lambda \to \Sigma^*$ be an invertible function and for every $\lambda \in \Lambda$, let $P_{\lambda}$ be a probability distribution on $\mathbb N$. A \emph{parameterized TI representation system} is a representation system $\mathrm{Rep} = ( \mathbf{T}_{\mathrm{Rep}}, \sem{\?}_{\mathrm{Rep}} )$ where \begin{itemize} \item $\mathbf{T}_{\mathrm{Rep}}$ is the family of all finite sets $T$ of pairs $( f, \cod{ \lambda_f } )$ with pairwise different facts $f$ and $\lambda_f \in \Lambda$ for all $f$; and \item $\sem{\?}_{\mathrm{Rep}}$ maps $T = ( f, \cod{ \lambda_f } )_f$ to the tuple-independent bag PDB $\ensuremath{\mathcal{D}}$ with multiplicity probabilities $\Pr\big( \#_{\ensuremath{\mathcal{D}}} f = k \big) = P_{\lambda_f}( k )$ (which are independent for different $f$). \end{itemize} \end{definition} \begin{figure} \centering \begin{tabular}{ c l }\toprule Relation $R$ & Parameter\\\midrule $(1,1)$ & $(\mathrm{Bernoulli},1/2)$\\ $(1,2)$ & $(\mathrm{Binomial},10,1/2)$\\ $(2,1)$ & $(\mathrm{Geometric},1/3)$\\ $(2,2)$ & $(\mathrm{Poisson},3)$\\\bottomrule \end{tabular} \caption{Example of a parameterized TI representation.} \label{fig:repexample} \end{figure} \begin{example}\label{exa:dists} \Cref{fig:repexample} shows a table $T$ from a parameterized TI representation system $\mathrm{Rep}$, illustrating how the parameters can be used to encode several multiplicity distributions. Here, we let \begin{multline*} \Lambda = \big( \set{ \mathrm{Bernoulli} } \times [0,1]_{\mathbb Q} \big) \cup \big( \set{ \mathrm{Binomial} } \times \mathbb N_+ \times [0,1]_{\mathbb Q} \big)\\ \cup \big( \set{ \mathrm{Geometric} } \times (0,1]_{\mathbb Q} \big) \cup \big( \set{ \mathrm{Poisson} } \times [0,\infty)_{\mathbb Q} \big)\text. \end{multline*} We assume every $\lambda \in \Lambda$ is encoded by concatenating the symbolic name of its distribution with the representation of its parameters, where rational numbers are given in terms of a numerator-denominator pair. The annotation $( \mathrm{Binomial}, 10, 1/2 )$ of $R(1,2)$ specifies that $\#_T R(1,2) \sim \mathrm{Binomial}\big(10,\frac12\big)$. That is, \[ \Pr\big( \#_T R(1,2) = k \big) = \begin{cases} \binom{10}{k} \big(\frac12\big)^k \big(\frac12\big)^{10-k} & \text{if } 0 \leq k \leq 10\text{ and}\\ 0 & \text{if }k > 10\text. \end{cases} \] The multiplicity probabilities of the other facts are given analogously in terms of the Bernoulli, geometric, and Poisson distributions, respectively. Note that the multiplicity of $R(1,1)$ is (almost surely) $0$ or $1$. The multiplicity of $R(1,2)$ can be any number between $0$ and $10$. Finally, the facts $R(2,1)$ and $R(2,2)$ can be present with arbitrarily high multiplicity.
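For illustration (reading $\mathrm{Geometric}(p)$, as in \cref{exa:geometric}, as the distribution on $\mathbb N$ with $\Pr( k ) = (1-p)^k \, p$, and $\mathrm{Poisson}(\lambda)$ as usual), the annotations of the last two facts yield \[ \Pr\big( \#_T R(2,1) = k \big) = \tfrac13 \big( \tfrac23 \big)^{k} \qquad\text{and}\qquad \Pr\big( \#_T R(2,2) = k \big) = e^{-3} \, \frac{3^k}{k!} \] for all $k \in \mathbb N$.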
A noteworthy difference of the geometric, and the Poisson distribution is that the multiplicity probabilities for the geometric distribution always remain rational (provided that the parameter is rational). This is not true for the Poisson distribution (with any rational parameter $\lambda \neq 0$). \end{example} \begin{example}\label{exa:bernoulli} The traditional representation system for the set version of tuple-independent PDBs is reobtained from \cref{def:paramtirep} by only using the Bernoulli distribution in the way introduced in \cref{exa:dists}. \end{example} % % \section{Expectations and Variances} \label{sec:evar} Before computing the probabilities of answer counts, we discuss the computation of the expectation and the variance of the answer count. Recall that in PDBs without multiplicities, the answer to a Boolean query (under set semantics) is either $0$ (i.\,e., $\mathsf{false}$) or $1$ (i.\,e., $\mathsf{true}$). That is, the answer count is a $\set{0,1}$-valued random variable there, meaning that its expectation coincides with the probability of the answer count being $1$. Because of this correspondence, the semantics of Boolean queries on (set) PDBs are sometimes also defined in terms of the expected value \cite{VandenBroeckSuciu2017}. For bag PDBs, the situation is different, and this equivalence no longer holds. Thus, computing expectations, and computing answer count probabilities have to receive a separate treatment. For the results in this section, we will need that the moments of facts in the input PDBs can be obtained in polynomial time. \begin{definition}\label{def:polymom} Let $\ell \in \mathbb N_+$. We say that a parameterized TI representation system $\mathrm{Rep}$ has \emph{polynomially computable moments of order $\ell$} if the function $\cod{\vec\lambda} \mapsto \sum_{n \in \mathbb N} n^\ell \cdot P_{\vec\lambda}(n)$ can be computed in polynomial time. We denote the class of parameterized TI representation systems with polynomially computable moments of order $\ell$ by $\PolyMom{\ell}$. \end{definition} \begin{example} Suppose that $\mathrm{Rep}$ is the parameterized TI representation system we introduced in \cref{exa:dists}, supporting the Bernoulli, binomial, geometric, and Poisson distributions. A direct calculation immediately yields that $\E( X^\ell ) = p$ for all $\ell$ if $X \sim \mathrm{Bernoulli}(p)$. Analogously, for all fixed $\ell$, the moment $\E( X^\ell )$ is given by a polynomial in $n$ and $p$ if $X \sim \mathrm{Binomial}(n,p)$. These computations are easy because the sum defining the expected value is finite. In general, analytical tools are needed. Then, for the most common distributions, one of the following cases applies. Either, as above, a closed form expression of $\E( X^\ell )$ is known, or, the moments of $X$ are characterized in terms of its moment generating function (mgf) $\E( e^{tX} )$ where $t$ is a real-valued variable. In the latter case, the $\ell$th moment of $X$ is obtained by taking the $\ell$th derivative of the mgf and evaluating it at $t = 0$ \cite[p.~62]{CasellaBerger2002}. An inspection of the mgfs of the geometric, and the Poisson distributions \cite[p.~621f]{CasellaBerger2002} reveals that their $\ell$th moments are polynomials in their respective parameters as well. Thus, $\mathrm{Rep} \in \PolyMom{\ell}$ for all $\ell \in \mathbb N_+$. 
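As a small illustration of the mgf route (using only the standard form of the Poisson mgf): if $X \sim \mathrm{Poisson}(\lambda)$, then $\E\big( e^{tX} \big) = \exp\big( \lambda ( e^t - 1 ) \big)$, and differentiating at $t = 0$ yields $\E( X ) = \lambda$ and $\E( X^2 ) = \lambda^2 + \lambda$. In general, $\E( X^\ell )$ is a polynomial of degree $\ell$ in $\lambda$ with non-negative integer coefficients, so that, in contrast to the individual multiplicity probabilities, the moments remain rational for rational $\lambda$.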
\end{example} \subsection{Expected Answer Count} Formally, the problem of computing expected answer counts in bag PDBs is defined as follows: \paramproblem{$\mathsf{EXPECTATION}_{\mathrm{Rep}}( Q )$} {A Boolean UCQ $Q$.} {A table $T \in \mathbf{T}_{\mathrm{Rep}}$.} {The expectation $\E\big( \#_{T}Q \big)$.} We have pointed out above that solving this problem for set PDBs under set semantics is equivalent to computing the probability that the query returns $\mathsf{true}$. There are conjunctive queries, for example, $Q = \exists x \exists y \with R(x) \wedge S(x,y) \wedge T(y)$, for which the latter problem is $\sharp\@complexitystyle{P}$-hard \cite{Graedel+1998,DalviSuciu2004}. Under set semantics, disjunctions and existential quantifiers semantically correspond to taking maxima instead of adding multiplicities. Under bag semantics, we are now able to exploit the linearity of expectation to easily compute expected values, which was not possible under set semantics. \begin{lemma}\label{lem:expectation-of-CQ} Let $\ensuremath{\mathcal{D}}$ be a tuple-independent PDB and let $Q$ be a Boolean CQ, $Q = \exists x_1 \dots \exists x_m \with R_1(\tup t_1) \wedge \dots \wedge R_n( \tup t_n )$. For every $\tup a \in \adom( \ensuremath{\mathcal{D}} )^{m}$, we let $F( \tup a )$ denote the set of facts appearing in $Q^*[ \tup x / \tup a ]$, and for every $f \in F( \tup a )$, we let $\nu( f, \tup a )$ denote the number of times $f$ appears in $Q^*[ \tup x / \tup a ]$. Then \begin{equation}\label{eq:expectation-of-CQ} \E\big( \#_{ \ensuremath{\mathcal{D}} } Q \big) = \sum_{ \tup a \in \adom( \ensuremath{\mathcal{D}} )^m } \prod_{ f \in F( \tup a ) } \E\Big( \big( \#_{\ensuremath{\mathcal{D}}} f \big)^{ \nu( f, \tup a ) } \Big)\text. \end{equation} \end{lemma} \begin{proof} By definition, \begin{equation}\label{eq:quantifier-range} \#_D Q = \sum_{ \tup a \in \adom( D )^m } \#_D( Q^*[ \tup x / \tup a ] ) = \sum_{ \tup a \in \adom( \ensuremath{\mathcal{D}} )^m } \#_D( Q^*[ \tup x / \tup a ] ) \end{equation} for every individual instance $D$ of $\ensuremath{\mathcal{D}}$. The last equation above holds because, as $Q^*$ is assumed to contain every quantified variable, $\#_D( Q^*[ \tup x / \tup a ] ) = 0$ whenever the tuple $\tup a$ contains an element that is not in the active domain of $D$. By linearity of expectation, \[ \E\big( \#_{ \ensuremath{\mathcal{D}} } Q \big) = \sum_{ \tup a \in \adom( \ensuremath{\mathcal{D}} )^m } \E\Big( \#_{\ensuremath{\mathcal{D}}}( Q^*[ \tup x / \tup a ] ) \Big)\text. \] Recall that $Q^*[ \tup x / \tup a ]$ is a conjunction of facts $R_i( \tup t_i[\tup x/\tup a] )$. Thus, \begin{equation}\label{eq:multiplicity-grounded-conj} \#_{\ensuremath{\mathcal{D}}}\bigg( \bigwedge_{ i = 1 }^{ n } R_i\big( \tup t_i[\tup x/\tup a] \big) \bigg) = \prod_{i=1}^{n} \#_{\ensuremath{\mathcal{D}}}\Big( R_i\big( \tup t_i[ \tup x / \tup a ] \big) \Big)\text. \end{equation} Because $\ensuremath{\mathcal{D}}$ is tuple-independent, any two facts in $F( \tup a )$ are either equal or independent. Therefore, \begin{equation}\label{eq:expectation-grounded-conj} \E\bigg( \prod_{i=1}^{n} \#_{\ensuremath{\mathcal{D}}}\Big( R_i\big( \tup t_i[ \tup x / \tup a ] \big) \Big) \bigg) = \prod_{ f \in F( \tup a ) } \E\Big( \big(\#_{\ensuremath{\mathcal{D}}}f\big)^{ \nu( f, \tup a ) } \Big)\text, \end{equation} as the expectation of a product of independent random variables is the product of their expectations. Together, this yields the expression from \eqref{eq:expectation-of-CQ}.
\end{proof} The linearity of expectation directly yields that the expectation of a UCQ is the sum of the expectations of its individual CQs. \begin{lemma}\label{lem:expectation-of-UCQ} Let $\ensuremath{\mathcal{D}}$ be a PDB and let $Q = \bigvee_{ i = 1 }^{ N } Q_i$ be a Boolean UCQ. Then \begin{equation}\label{eq:expectation-of-UCQ} \E\big( \#_{ \ensuremath{\mathcal{D}} } Q \big) = \sum_{ i = 1 }^{ N } \E\big( \#_{ \ensuremath{\mathcal{D}} } Q_i \big)\text. \end{equation} \end{lemma} Given that we can compute the necessary moments of fact multiplicities efficiently, \cref{lem:expectation-of-CQ,lem:expectation-of-UCQ} yield a polynomial time procedure to compute the expected value of a UCQ. \begin{proposition}\label{pro:expectation} Let $Q = \bigvee_{ i = 1 }^{ N } Q_i$ be a Boolean UCQ where every relation symbol appears at most $k$ times in each of the $Q_i$, and let\/ $\mathrm{Rep} \in \bigcap_{ \ell \leq k } \PolyMom{\ell}$. Then $\mathsf{EXPECTATION}_{\mathrm{Rep}}\big( \#_T Q \big)$ is computable in polynomial time. \end{proposition} \begin{proof} % Consider the formula \eqref{eq:expectation-of-UCQ}, with \eqref{eq:expectation-of-CQ} plugged into it. There are $\leq N \cdot \size{ \adom( T ) }^m \cdot m$ terms (where $m$ is the maximal number of atoms among the CQs $Q_1,\dots,Q_N$), and each of these terms is the moment of a fact multiplicity of order at most $k$. % \end{proof} We emphasize that the number $k$ from \cref{pro:expectation}, that dictates which moments we need to be able to compute efficiently, comes from the fixed query $Q$ and is therefore constant. \subsection{Variance of the Answer Count} With moderate additional overhead and slightly stronger assumptions on the representation system, we can also compute the variance of query answers in polynomial time. \paramproblem{$\mathsf{VARIANCE}_{\mathrm{Rep}}( Q )$} {A Boolean UCQ $Q$.} {A table $T \in \mathbf{T}_{\mathrm{Rep}}$.} {The variance $\Var\big( \#_T Q \big)$.} The steps towards a polynomial time algorithm for the problem are similar to those from the previous section. Naturally, to be able to calculate the variance efficiently, we need moments of up to the double order in comparison to the computation of the expected value. \begin{restatable}{proposition}{varianceproposition}\label{pro:variance} Let $Q = \bigvee_{ i = 1 }^{ N } Q_i$ be a Boolean UCQ where every relation symbol appears at most $k$ times in each of the $Q_i$, and let\/ $\mathrm{Rep} \in \bigcap_{ \ell \leq 2k } \PolyMom{\ell}$. Then $\mathsf{VARIANCE}_{\mathrm{Rep}}\big( \#_T Q \big)$ is computable in polynomial time. \end{restatable} To obtain this, we again break down the value we want to compute into expressions in terms of the moments of fact multiplicities. Because we have moved to variance, the arguments are slightly more complex. The full proof is contained in \cref{app:variance}. Despite the fact that the variance of query answers may be of independent interest, it can be also used to obtain bounds for the probability that the true value of $\#_T Q$ is close to its expectation, using the Chebyshev inequality \cite[Theorem 5.11]{Klenke2014}. This can be used to derive bounds on $\Pr(\#_T Q \leq k)$ in cases, when the exact value is hard to compute. Because of the limited space, we go without further details. % % \section{Answer Count Probabilities} \label{sec:pqe} In this section, we discuss the extension of the \emph{probabilistic query evaluation} problem to bag PDBs. 
\paramproblem{$\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$} {A Boolean (U)CQ $Q$, and $k \in \mathbb N$.} {A table $T \in \mathbf{T}_{\mathrm{Rep}}$.} {The probability $\Pr(\#_T Q \leq k)$.} That is, the problem consists of evaluating the cumulative distribution function of the random variable $\#_T Q$ at $k$. \begin{remark} We defined the problem as computing the probability that the answer count is \emph{at most} $k$. The problem in question could also be formulated by asking for the probability of the answer count being \emph{at least} $k$, or \emph{exactly} $k$. With respect to complexity discussions, this does not make much of a difference though: with $k$ fixed, all these variants are polynomial time equivalent. \end{remark} Throughout this section, we need to be able to do calculations involving probabilities for the multiplicities of individual facts. Thus, we focus on representation systems that adhere to the following definition. \begin{definition}\label{def:polyprob} We say that a parameterized TI representation system $\mathrm{Rep}$ has \emph{polynomially computable multiplicities} if the function $\cod{\lambda} \mapsto P_{\lambda}( k )$ can be computed in polynomial time in the encoding length $\size{ \cod{\lambda} }$ of $\lambda$, and in $k$ (where $k$ is given in unary encoding). We denote the class of parameterized TI representation systems with polynomially computable multiplicities by $\mathsf{PolyProb}$. \end{definition} Note that when we defined the classes $\PolyMom{\ell}$ in \cref{def:polymom}, we did not care about how fast algorithms run in the number $\ell$. This was, because in our applications, the needed values of $\ell$ were already fixed by fixing $Q$. For the $\mathsf{PQE}$-problem, the situation is different, as the number $k$ is an independent parameter of the problem. In particular, we want to investigate $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ for fixed $Q$ with varying values of $k$. As it turns out, solving $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ proves to be far more intricate compared to the problems of the previous section. For our investigation, we concentrate on self-join free conjunctive queries. While some simple results follow easily from the set semantics version of the problem, the complexity theoretic discussions quickly become quite involved and requires the application of a set of interesting non-trivial techniques. \subsection{Hierarchical self-join free CQs} We reuse terminology that was introduced by \cite{DalviSuciu2007b} to formulate their dichotomy theorem for Boolean self-join free CQs. Let $Q$ be a Boolean CQ and for all variables $x$ let $\sg( x )$ denote the set of relation symbols $R$ such that $Q$ contains an atom that uses relation symbol $R$ and contains $x$. A Boolean CQ is called \emph{hierarchical} if for all pairs $x$, $y$ of distinct variables $\sg( x ) \subseteq \sg( y )$ or $\sg( y ) \subseteq \sg( x )$, whenever $\sg( x ) \cap \sg( y ) \neq \emptyset$. A variable $x$ is called \emph{maximal}, if $\sg( y ) \subseteq \sg( x )$ for all $y$ with $\sg( x ) \cap \sg( y ) \neq \emptyset$. To every CQ $Q$ we associate an undirected graph $G_Q$ whose vertices are the variables appearing in $Q$, and where two variables $x$ and $y$ are adjacent if they appear in a common atom. Let $V_1, \ldots, V_m$ be the vertex sets of the connected components of $G_Q$. 
We can then write the quantifier-free part $Q^*$ of $Q$ as $Q^* = Q_0^* \wedge \bigwedge_{ i = 1 }^{ m } Q_i^*$ where $Q_0^*$ is the conjunction of the constant atoms of $Q$ and $Q_1^*,\dots,Q_m^*$ are the conjunctions of atoms corresponding to the connected components $V_1,\dots,V_m$. We call $Q_1^*,\dots,Q_m^*$ the \emph{connected components} of $Q$. \begin{remark}\label{rem:hierarchicalInComponents} If $Q$ is hierarchical, then every connected component of $Q$ contains a maximal variable.\footnote{This is true since the sets $\sg(x)$ for the variables of any connected component have a pairwise non-empty intersection, meaning that they are pairwise comparable with respect to $\subseteq$.} Moreover, if $x$ is maximal in a connected component $Q_i^*$, then $x$ appears in all atoms of $Q_i^*$. \end{remark} \begin{remark}\label{rem:altCQcomponents} For every conjunctive query $Q$ consisting of connected components $Q_1^*, \dots , Q_m^*$, and constant atoms $Q_0^*$, the answer on every instance $D$ is given by the product of the answers of the queries $Q_0,\dots,Q_m$, where $Q_i = \exists \tup x_i \with Q_i^*$ (and $Q_0 = Q_0^*$), and $\tup x_i$ are exactly the variables appearing in the component $Q_i^*$. That is, \begin{equation}\label{eq:altCQcomponents} \#_D Q = \#_D Q_0^* \cdot \prod_{ i = 1 }^{ m } \#_D\big( \exists \tup x_i \with Q_i^* \big) \text. \end{equation} (A proof can be found in \cref{app:comp}.) If convenient, we therefore use $Q_0 \wedge Q_1 \wedge \dots \wedge Q_m$ as an alternative representation of $Q$. \end{remark} The main result of this subsection is the following. \begin{restatable}{theorem}{PQEhiersjfCQeasy}\label{thm:PQEhiersjfCQeasy} Let $\mathrm{Rep} \in \mathsf{PolyProb}$, and let $Q$ be a hierarchical Boolean self-join free CQ. Then, the problem $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ is solvable in polynomial time for each $k \in \mathbb N$. \end{restatable} \begin{sketch} The theorem is established by giving a polynomial time algorithm that computes and adds up the probabilities $\Pr\big( \#_T Q = k' \big)$ for $k' = 0, \dots, k$. The important observation is that (as under set semantics) the answer counts of the components $Q_i$ of the query (and of the conjunction $Q_0$ of the constant atoms) are independent, which follows since $Q$ is self-join free. In order to compute the probability of $\#_T Q = k'$, we can thus sum over all decompositions of $k'$ into a product $k' = k_0 \cdot k_1 \cdot \dots \cdot k_m$, and reduce the problem to the computation of the probabilities $\Pr\big( \#_T Q_i = k_i \big)$. Although the case $k' = 0$ and the conjunction $Q_0$ have to be treated slightly differently for technical reasons, we can proceed recursively: every connected component contains a maximal variable, and setting this variable to any constant, the component potentially breaks up into a smaller hierarchical, self-join free CQ. Investigating the expressions shows that the total number of operations on the probabilities of fact multiplicities is polynomial in the size of $T$. \end{sketch} \begin{remark} The full proof of \cref{thm:PQEhiersjfCQeasy} can be found in \cref{app:comp}. As pointed out, the proof borrows main ideas from the algorithm for the probabilistic evaluation of hierarchical Boolean self-join free CQs on tuple-independent PDBs with set semantics, as presented in \cite[p.~30:15]{DalviSuciu2012} (originating in \cite{DalviSuciu2004,DalviSuciu2007a}). The novel component is the treatment of multiplicities using bag semantics.
In comparison to the algorithm of Dalvi and Suciu, existential quantifiers behave quite differently here, and we additionally need to argue about the possible ways to distribute a given multiplicity over subformulae or facts. \end{remark} \subsection{Beyond Hierarchical Queries} Let $\PQE\sp{\mathsf{set}}(Q)$ denote the problem of computing the probability that, given a tuple-independent set PDB, the fixed Boolean query $Q$ evaluates to true under set semantics. For the probabilistic query evaluation problem under bag semantics, the number of answers $k$ is an additional parameter to the problem, and the problem depends on the employed representation system. We first discuss the problems $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ for $k = 0$. There, we get the following statement almost \enquote{for free}, which lifts $\sharp\@complexitystyle{P}$-hardness from the set case \cite{DalviSuciu2012}. \begin{proposition}\label{pro:hardlambdas} Suppose that $P$ is a finite subset of $[0,1]_{\mathbb{Q}}$ and that $\mathrm{Rep} \in \mathsf{PolyProb}$ with parameter set $\Lambda$ that contains, for all $p \in P$, some $\lambda_p$ with $P_{\smash{\lambda_p}}(0)=1-p$. Let $Q$ be a Boolean UCQ. If\/ $\PQE\sp{\mathsf{set}}(Q)$\/ is $\sharp\@complexitystyle{P}$-hard on the class of tuple-independent (set) PDBs with marginal probabilities in $P$, then $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ is $\sharp\@complexitystyle{P}$-hard. \end{proposition} \begin{proof} Let $\ensuremath{\mathcal{D}}$ be an input to $\PQE\sp{\mathsf{set}}(Q)$ where all marginal probabilities are in $P$, specified by a list of all possible facts $f$ with their marginal probability $p_f$. Let $T = \bigcup_{p \in P}\set{ (f,\cod{\lambda_p}) \colon p_f = p }$. % Let $\delta$ be the function that maps every possible world $D$ of $T$ to its deduplication $D'$ (which is an instance of $\ensuremath{\mathcal{D}}$). Then, by the choice of the parameters, we have $\Pr_{ D \sim \sem{T} }\big(\delta(D) = D'\big) = \Pr_{\ensuremath{\mathcal{D}}} \big(\set{D'}\big)$ for all $D'$. % Moreover, $\#_D Q > 0$ if and only if $\delta(D)\models Q$. Thus, \[ \Pr_{ D \sim \sem{T} }\big( \#_D Q > 0 \big) = \Pr_{ D \sim \sem{T} }\big( \delta(D) \models Q \big) = \Pr_{ D' \sim \ensuremath{\mathcal{D}} }\big( D' \models Q \big)\text. \] Therefore, $\PQE\sp{\mathsf{set}}(Q)$ over tuple-independent PDBs with marginal probabilities from $P$ is equivalent to $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$. \end{proof} Remarkably, if $Q$ is a Boolean UCQ for which $\PQE\sp{\mathsf{set}}(Q)$ is $\sharp\@complexitystyle{P}$-hard on tuple-independent PDBs, then by \cite[Theorem 2.2]{KenigSuciu2021}, $\PQE\sp{\mathsf{set}}(Q)$ is already $\sharp\@complexitystyle{P}$-hard on the class of tuple-independent PDBs with marginal probabilities in $P = \set{c,1}$, for each $c\in(0,1)_{\mathbb Q}$. Hence, $\mathsf{PQE}_\mathrm{Rep}(Q,0)$ is also $\sharp\@complexitystyle{P}$-hard on these queries, as soon as $\mathrm{Rep}$ can represent facts that appear always ($P_{\lambda_f}(0) = 0$) and facts that appear sometimes ($P_{\lambda_f}(0) \in (0,1)_{\mathbb Q}$). The following example illustrates that hardness of the set version of $\mathsf{PQE}$ does not necessarily imply hardness of the bag version, and depends on the representation system that is used. \begin{example}\label{exa:PQEQ0-easy} Consider a parameterized TI representation system where $P_{\lambda}(0) = 0$ for all $\lambda \in \Lambda$. 
Note that for set PDBs, this would immediately mean that every PDB only has a single possible world, as every fact has probability $1$. For bag PDBs, there may still be interesting examples of this, for example, if the only available distribution is the geometric distribution in the version with support $\mathbb N_+$ instead of $\mathbb N$. For such representation systems $\mathrm{Rep}$ and Boolean UCQs $Q$, however, $\mathsf{PQE}(Q,0)$ is trivial to solve: since every fact appearing in $T$ has multiplicity at least $1$ almost surely, $\Pr\big( \#_T Q \leq 0 \big) \in \set{0,1}$ for all $T \in \mathbf{T}_{\mathrm{Rep}}$, and the value is determined by evaluating $Q$ under set semantics on the facts appearing in $T$. This does, however, not yield any insights into the hardness of $\mathsf{PQE}(Q,k)$ for $k > 0$. \end{example} Our goal in the remainder of this section is to show that for self-join free Boolean CQs $Q$, the $\sharp\@complexitystyle{P}$-hardness of $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ transfers to $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ for $k > 0$. \begin{theorem}\label{thm:zero-to-k-reduction} Let\/ $\mathrm{Rep} \in \mathsf{PolyProb}$ and let $Q$ be a Boolean self-join free CQ. Then, if\/ $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ is $\sharp\@complexitystyle{P}$-hard, $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ is $\sharp\@complexitystyle{P}$-hard for each $k \in \mathbb N$. \end{theorem} Recall that \cref{exa:PQEQ0-easy} evinces that the premise of \cref{thm:zero-to-k-reduction}, the $\sharp\@complexitystyle{P}$-hardness of $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$, does not only depend on $Q$ but also on $\mathrm{Rep}$. Let $\mathrm{Rep}$ be any fixed parameterized TI representation system and let $Q$ be a Boolean self-join free CQ. We demonstrate \cref{thm:zero-to-k-reduction} by presenting an algorithm that solves $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ in polynomial time, when given an oracle for $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ for any positive $k$. Our algorithm essentially tackles the connected components of $Q$ individually and combines the results in the end. Per connected component, our goal is to transform the input table $T$ in such a way that $\#_T Q$ has a nice representation that we can work with algebraically. Note that our algorithm still has to adhere to the representation system $\mathrm{Rep}$ that is used, and is therefore severely restricted when it comes to manipulating the probabilities of fact multiplicities. It can, however, drop entries from $T$ or introduce copies of entries that use new domain elements. By creating copies, we can \enquote{inflate} the answer count of our query. \Cref{alg:inflation} exploits this in order to construct a new table $T^{(m)}$ from $T$, such that $\#_{T^{(m)}} Q$ is the sum of answer counts on $m$ independent copies of $T$. \begin{algorithm}[ht] \caption{$\mathsf{inflate}_{Q}(T,m)$}\label{alg:inflation} \begin{algorithmic}[1] \Parameter Boolean self-join free CQ $Q$ with a single connected component and no constant atom \Input $T \in \mathbf{T}_{\mathrm{Rep}}$, $m \in \mathbb N$ \Output Inflation of order $m$ of $T$: $T^{(m)} = \bigcup_{ i = 1 }^{ m } T_{m,i} \in \mathbf{T}_{\mathrm{Rep}}$ s.\,t. \begin{enumerate}[label=(O\arabic*)] \item\label{itm:inflation1} for all $i \neq j$ we have $T_{m,i} \cap T_{m,j} = \emptyset$, \item\label{itm:inflation2} for all $i = 1, \dots, m$ we have $\#_{ T_{m,i} } Q \sim \#_T Q$ i.\,i.\,d., and \item\label{itm:inflation3} $\#_{ T^{(m)} } Q = \sum_{ i = 1 }^{ m } \#_{ T_{m,i} } Q$. \end{enumerate} \algrule% \State Initialize $T_{m,1}, \dots, T_{m,m}$ to be empty.
\ForAll{atoms $R(t_1,\dots,t_k)$ appearing in $Q$} \ForAll{pairs $\big(R(a_1,\dots,a_k),\cod{\lambda}\big) \in T$ and $i = 1, \dots, m$} \State Add $\big( R(a_{i,1},\dots,a_{i,k}), \cod{\lambda} \big)$ to $T_{m,i}$ where \[a_{i,j} \coloneqq \begin{cases} a_j^{(i)} & \text{if $t_j$ is a variable}\\ a_j & \text{otherwise\text,} \end{cases}\] and $a_{j}^{(i)}$ is a new domain element, acting as the $i$th copy of $a_j$. \EndFor \EndFor \State \Return $T^{(m)} \coloneqq \bigcup_{ i = 1 }^m T_{m,i}$ \end{algorithmic} \end{algorithm} \begin{restatable}{lemma}{inflateproperties}\label{lem:inflateproperties} For every fixed $Q$, \cref{alg:inflation} runs in time $\mathcal O\big( \size{ T } \cdot m \big)$, and satisfies the output conditions \ref{itm:inflation1}, \ref{itm:inflation2} and \ref{itm:inflation3}. \end{restatable} The proof of \cref{lem:inflateproperties} is contained in \cref{app:pqe}. The assumption that $Q$ is self-join free with just a single connected component and no constant atoms is essential to establish \ref{itm:inflation1} and \ref{itm:inflation3}, because it ensures that there are no co-dependencies among the individual tables $T_{m,1},\dots,T_{m,m}$ that we create. The following example shows that this reliance is inevitable, as the conditions of \cref{lem:inflateproperties} cannot be established in general. \begin{example}\label{exa:not-inflatable} Let $\mathrm{Rep}$ be a parameterized TI representation system with $\Lambda = \set{ \lambda }$ such that $P_{ \lambda }( 2 ) = P_{ \lambda }( 3 ) = \frac12$ (and $P_{\lambda}( k ) = 0$ for all $k \notin \set{2,3}$). Consider the query $Q = \exists x \exists y \with R(x) \wedge S(y)$, and the $\mathrm{Rep}$-table $T = \big( (R(1), \cod{\lambda}), (S(1),\cod{\lambda} ) \big)$. Note that $Q$ has two connected components and hence does not satisfy the assumptions of Lemma~\ref{lem:inflateproperties}. Then $\#_T Q$ takes the values $4$, $6$ and $9$, with probabilities $\frac{1}{4},\frac{1}{2},\frac{1}{4}$. Thus, if $X,Y \sim \#_T Q$ i.\,i.\,d., then $X+Y$ is $13$ with probability $\frac{1}{8}$. However, for every $\mathrm{Rep}$-table $T'$, the random variable $\#_{T'} Q$ almost surely takes a value that is either $0$ or composite, as it is equal to the sum of all multiplicities of $R$-facts, times the sum of all multiplicities of $S$-facts, both of these numbers being either $0$ or at least $2$. Thus, there exists no $T' \in \mathbf{T}_{\mathrm{Rep}}$ such that $\#_{T'} Q = X+Y$. \end{example} For this reason, our main algorithm will call \cref{alg:inflation} independently, for each connected component $Q_i$ of $Q$. Then, \cref{alg:inflation} does not inflate the whole table $T$, but only the part $T_i$ corresponding to $Q_i$. Replacing $T_i$ in $T$ with its inflated version of order $n$, we get a new table with answer count $(\#_{T \setminus T_i } Q') \cdot (\#_{\smash{T_{\smash{i}}^{(n)}}}Q_i )$, where $Q = Q' \wedge Q_i$. Before further describing the reduction, let us first explore some algebraic properties of this answer count. \begin{restatable}{lemma}{sumofindepcopiestimesrv}\label{lem:sum_of_indep_copies_times_rv} Let $X$ and $Y$ be independent random variables with values in $\mathbb N$ and let $k \in \mathbb N$. Suppose $X_1,X_2, \ldots $ are i.i.d.\ random variables with $X_1 \sim X$. Let $p_0 \coloneqq \Pr(X = 0)$ and $q_0 \coloneqq \Pr(Y = 0)$.
Then, there exist $z_1, \ldots, z_k \geq 0$ such that, for all $n \in \mathbb N$, \[ \Pr\bigg( Y \cdot \sum_{i = 1}^n X_i \leq k\bigg) = q_0 + (1-q_0) \cdot p_0^n + \sum_{j = 1}^k \binom{n}{j} \cdot p_0^{n - j} \cdot z_j \text. \] \end{restatable} The proof of this lemma can be found in \cref{app:pqe}. Note that some of the $z_j$ may be zero. This will be important in the following. \medskip In the next paragraphs, we discuss how $p_0$ can be deduced from $q_0$ and the values of $\Pr\big( Y \cdot \sum_{i = 1}^n X_i \leq k\big)$ whenever $q_0 < 1$ and $p_0 > 0$.% \footnote{As \smash{$\lim_{n \to \infty} \big((1-q_0) \cdot p_0^n + \sum_{j = 1}^k \binom{n}{j} \cdot p_0^{n - j} \cdot z_j\big)^{1/n} = p_0$}, \cref{lem:sum_of_indep_copies_times_rv} already yields an approximation algorithm for $p_0$. However, this convergence is too slow for the purpose of our reduction.} First, with the notation of \cref{lem:sum_of_indep_copies_times_rv} and $z_0 \coloneqq 1 - q_0$, consider \begin{equation}\label{eq:g} g(n) \coloneqq \Pr\bigg( Y \cdot \sum_{i = 1}^n X_i \leq k\bigg) - q_0 = \sum_{j = 0}^k \binom{n}{j} \cdot p_0^{n-j} \cdot z_j \text. \end{equation} Now, for $m \in \mathbb N$ and $x = 0, 1, \ldots, m$, we define \begin{equation}\label{eq:hm} \begin{aligned} h_m(x) &\coloneqq g(x + m) \cdot g(-x + m)\\ &= \sum_{j_1,j_2 = 0}^k \binom{x + m}{j_1} \cdot \binom{-x + m}{j_2} \cdot p_0^{2m - j_1 - j_2} \cdot z_{j_1} \cdot z_{j_2}\text. \end{aligned} \end{equation} Then $h_m$ is a polynomial in $x$ for every fixed $m$. As it will turn out, the leading coefficient $\lc(h_m)$ of $h_m$ can be used to recover the value of $p_0$ as follows: Let $j_{\max}$ be the maximum $j$ such that $z_j \neq 0$. Since $\binom{x + m}{j}$ and $\binom{-x + m}{j}$ are both polynomials of degree $j$ in $x$, we observe that the degree of $h_m$ is $2j_{\max}$ and its leading coefficient is \[ \lc(h_m) = (-1)^{j_{\max}} \cdot (j_{\max}!)^{-2}\cdot p_0^{2m - 2j_{\max}} \cdot z_{j_{\max}}^2\text, \] which yields $ p_0 = \sqrt{\tfrac{\lc(h_{m+1})}{\lc(h_m)}}\text. $ Thus, it suffices to determine $\lc(h_m)$ and $\lc(h_{m+1})$. However, we know neither $j_{\max}$ nor $z_{j_{\max}}$, and we only have access to the values of $h_m$ and $h_{m+1}$. To find the leading coefficients anyway, we use the method of finite differences, a standard tool from polynomial interpolation \cite[chapter 4]{Hildebrand1987}. Essentially, this employs an iterated computation of a discretized derivative. \begin{algorithm}[ht]% \caption{$\mathsf{solveComponent}_{Q}(T, i)$}\label{alg:comp} \begin{algorithmic}[1] \Parameter Boolean self-join free CQ $Q$ with connected components $Q_1,\ldots,Q_r$. \Oracle $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ \Input $T \in \mathbf{T}_{\mathrm{Rep}}$, $i \in \set{1,\ldots, r}$ \Output $\Pr(\#_{T_i}Q_i = 0)$ \algrule% \iIf{$P_{\lambda}(0) = 1$ for all $\lambda \in \Lambda$} \Return 0 \iEndIf\label{step:ensure-nontrivial-pdbs} \State Fix $\lambda$ with $P_{\lambda}(0) < 1$. \State Suppose $Q = Q' \wedge Q_i$, cf.\ \cref{rem:altCQcomponents}. \State Initialize $T'$ as the table emerging from the canonical database for $Q'$, together with $\lambda_f \coloneqq \lambda$ for each fact $f$ in $T'$. \State Calculate $q_0 \coloneqq \Pr\big( \#_{T'}Q' = 0 \big)$ and set $g(0) \coloneqq 1 - q_0$. \For{$n = 1,2,\ldots, 4k + 1$} \State Set $T_i^{(n)} \coloneqq \mathsf{inflate}_{Q_i}(T_i, n)$. \State Set $g(n) \coloneqq \smash{\Pr\big(\#_{T' \cup T_i^{(n)}}Q \leq k\big)} - q_0$, using the oracle.
\EndFor \iIf{$g(k+1) = 0$} \Return 0 \iEndIf \label{step:check-p0-zero} \For{$x = 0,1,\ldots, 2k$ and $m = 2k, 2k + 1$} \State Set $h_m(x) \coloneqq g(m + x) \cdot g(m - x)$. \EndFor \State Initialize $\ell \coloneqq k$.\label{step:start-finite-diff} \iWhile{$\sum_{s = 0}^{2\ell} (-1)^s \binom{2\ell}{s}h_{2k}(s) = 0$} $\ell \coloneqq \ell - 1$ \iEndWhile \label{step:end-finite-diff} \State \Return $\sqrt{ \big( \sum_{s = 0}^{2\ell} (-1)^s \binom{2\ell}{s} h_{2k + 1}(s) \big) / \big( \sum_{s = 0}^{2\ell} (-1)^s \binom{2\ell}{s} h_{2k}(s) \big) }$ \end{algorithmic} \end{algorithm} The full procedure that uses the above steps to calculate $p_0$ yields \Cref{alg:comp}. Recall that it focuses on a single connected component. For the other parts of the query, we utilize a table that encodes the canonical database of the remaining components.\footnote{The canonical database belonging to a self-join free CQ is the instance containing the atoms appearing in the query, with all variables being treated as constants.} Note that $k$ is always treated as a fixed constant, and our goal is to reduce $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ to $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$. \begin{lemma} Algorithm~\ref{alg:comp} runs in polynomial time and is correct.% \end{lemma} \begin{proof} With the notation introduced in the algorithm, we let $Y = \#_{T'} Q'$ and $X = \#_T Q_i = \#_{T_i} Q_i$. Then, $q_0=\Pr(Y=0)$ as in \cref{lem:sum_of_indep_copies_times_rv} and the aim of the algorithm is to return $p_0$. First, step \ref{step:ensure-nontrivial-pdbs} covers the edge case that $\mathrm{Rep}$ can only represent the empty database instance. In all other cases, we fix $\lambda$ with $P_{\lambda}(0) < 1$. Since $\#_{T'}Q' > 0$ whenever all $t$ facts of $T'$ have positive multiplicity, where $t$ is the number of atoms of $Q'$, we have $q_0 \leq 1 - (1 - P_{\lambda}(0))^t < 1$. From \cref{lem:inflateproperties}, we see that $\#_{\smash{T' \cup T_{\smash{i}}^{(n)}}}Q = Y \cdot \sum_{i=1}^n X_i$, so we are in the situation of \cref{lem:sum_of_indep_copies_times_rv}. Hence, $g$ and $h_m$ are as in \eqref{eq:g} and \eqref{eq:hm}. Now, as $g(k+1) = p_0 \cdot \sum_{j = 0}^k \binom{k+1}{j} \cdot p_{\smash{0}}^{k-j} \cdot z_j$ with $z_0 = 1-q_0 > 0$, we find that $p_0$ is zero if and only if $g(k+1)$ is zero. This is checked in line \ref{step:check-p0-zero}. Finally, the paragraphs following \cref{lem:sum_of_indep_copies_times_rv} apply, and we use the method of finite differences to determine $\ell = j_{\max}$ in lines \ref{step:start-finite-diff} and \ref{step:end-finite-diff}, and to return $p_0 = \sqrt{\big((2\ell)!\,\lc(h_{2k+1})\big) / \big((2\ell)!\,\lc(h_{2k})\big)} = \sqrt{\lc(h_{2k+1})/\lc(h_{2k})}$. For the runtime, first note that, since $\mathrm{Rep} \in \mathsf{PolyProb}$, all answers of the oracle are of polynomial size in the input. Since $k$ is fixed, the algorithm performs a constant number of computation steps and each term in the calculations is either independent of the input or of polynomial size, yielding a polynomial runtime. \end{proof} \begin{proof}[Proof of \cref{thm:zero-to-k-reduction}] Suppose that $k > 0$ is fixed and that we have access to an oracle for $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$. Let $Q = Q_0 \wedge Q_1 \wedge \ldots \wedge Q_m$ be the partition of $Q$ into connected components. Then the $\#_T Q_i$ are independent and $\#_T Q = \prod_{i = 0}^m \#_T Q_i$.
Therefore, \[ \Pr\big( \#_T Q = 0 \big) = 1 - \Pr( \#_T Q_0 \neq 0 ) \cdot \prod_{ i = 1 }^{ m }\Big( 1 - \Pr\big( \#_T Q_i = 0 \big) \Big)\text.\] As $\Pr( \#_T Q_0 \neq 0 )$ is easy to compute and as we can use \cref{alg:comp} to compute each $\Pr\big( \#_T Q_i = 0 \big)$ for $i=1,\ldots,m$ with an oracle for $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$, this yields a polynomial-time Turing reduction from $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ to $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$. \end{proof} By combining \cref{thm:PQEhiersjfCQeasy,pro:hardlambdas,thm:zero-to-k-reduction} with \cite[Theorem 3.1]{AmarilliKimelfeld2021}, we obtain a dichotomy for probabilistic query evaluation of Boolean self-join free CQs under bag semantics.\footnote{Alternatively, using \cite[Theorem 2.2]{KenigSuciu2021}, the statement of \cref{thm:ourdichotomy} also holds if $\mathrm{Rep}$ supports $P_{\lambda}(0) = 0$ and $P_{\lambda'}(0) \in (0,1)_{\mathbb Q}$.} \begin{theorem}\label{thm:ourdichotomy} Suppose $\mathrm{Rep} \in \mathsf{PolyProb}$ such that there is $\lambda \in \Lambda$ with $P_{\lambda}(0)=\frac12$ and let $Q$ be a Boolean self-join free CQ. Then: \begin{enumerate} \item If $Q$ is hierarchical, then $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ is solvable in polynomial time for each $k \in \mathbb N$. \item Otherwise, $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ is $\sharp\@complexitystyle{P}$-hard for each $k \in \mathbb N$. \end{enumerate} \end{theorem} As in \cref{exa:PQEQ0-easy}, such a dichotomy does not hold for every representation system. \begin{example} Consider representation systems $\mathrm{Rep} \in \mathsf{PolyProb}$ in which $P_{\lambda}(0) = 0$ for all $\lambda \in \Lambda$, such as in \cref{exa:PQEQ0-easy}. Then, for each $k\in\mathbb N$, $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ is also computable in polynomial time as follows: Check all valuations of the variables of $Q^*$. If there are more than $k$ of them, the answer is $0$. Otherwise, restrict the table to the facts appearing in satisfying valuations (whose number is bounded in terms of $k$) and solve the problem by brute-force. \end{example} % % \section{Conclusion} \label{sec:conclusion} The results of our paper extend the understanding of probabilistic query evaluation in a new direction by discussing bag semantics. We investigated two principal computational problems: computing expectations, and computing the probability of answer counts. Interestingly, even though these problems are equivalent for set semantics, they behave quite differently under bag semantics. Our central findings suggest that, generally, computing expectations is the easier problem. For computing answer count probabilities, in the case of self-join free CQs, we obtained a polynomial-time vs.\ $\sharp\@complexitystyle{P}$-hard dichotomy, depending on whether the query is hierarchical or not. This transfers the corresponding results of \cite{DalviSuciu2004,DalviSuciu2007a} from set to bag semantics. While our results for the expectation problem concern UCQs, the complexity of computing answer counts remains open beyond self-join free CQs. It is also unclear how the problem behaves on bag versions of other well-representable classes of set PDBs. The complexity-theoretic questions we encounter are quite subtle.
For example, none of our results resolves whether there exist a representation system in $\mathsf{PolyProb}$ and a query $Q$ such that $\mathsf{PQE}_{\mathrm{Rep}}(Q,0)$ is efficiently solvable, while $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ is $\sharp\@complexitystyle{P}$-hard for some $k > 0$. Also, the complexity of $\mathsf{PQE}_{\mathrm{Rep}}(Q,k)$ in terms of $k$ is unknown. To formally argue about the complexity of some natural distributions, such as the Poisson distribution, irrational probabilities or parameters have to be supported. This yields non-trivial complexity-theoretic questions that we leave for future work. % % \clearpage % \bibliographystyle{plainurl}
\section{Introduction} \subsection{Background and Motivations} {\color{blue} Wireless communication and sensing are both indispensable for critical machine-type applications in the 5th generation (5G) and the future 6th generation (6G) networks~\cite{Saad2020,You2020,ITUITShandbook}. Nevertheless, the proliferation of wireless sensing and communication infrastructures and devices will result in severe spectrum congestion problems~\cite{liu2020joint}. Joint communication and sensing (JCS) has emerged as one of the most promising 6G key techniques due to its potential to improve spectrum and energy efficiency. It aims to achieve wireless sensing and communication simultaneously using unified spectrum and transceivers, sharing the same transmitted signals~\cite{Zhang2019JCRS}.} \subsection{Related Works} {\color{blue} Since orthogonal frequency-division multiplexing (OFDM) is the most popular physical-layer signal solution for broadband wireless networks, the JCS techniques based on OFDM signals have been widely researched. Sturm \textit{et al}.~\cite{Sturm2011Waveform} proposed a fast Fourier transform (FFT)-based frequency-domain OFDM JCS signal processing method, realizing both active range estimation and communication. By utilizing the FFT-based JCS signal processing method, Zhang \textit{et al.}~\cite{Zhang2019JCRS} proposed a practical OFDM JCS system based on the time-division-duplex (TDD) mobile network, which is suitable for downlink (DL) echo sensing. In~\cite{Kumari2018WIFI}, the authors proposed an IEEE 802.11ad-based OFDM JCS vehicle-to-vehicle (V2V) system exploiting the preamble of a single-carrier physical layer frame to achieve V2V communication and full-duplex radar in the 60 GHz band. In~\cite{Chen2021CDOFDM}, the authors proposed a code-division OFDM JCS system by introducing code-division multiplexing into FFT-based OFDM JCS processing to improve the JCS sensing performance. As pointed out in~\cite{Andrew2021PMN}, full-duplex (FD) operation is the critical enabler for implementing DL JCS, since it allows the transceiver to simultaneously transmit JCS signals and receive reflections. Seyed Ali~\textit{et al.}~\cite{IBFDJCR} realized an FD JCS platform that detects targets while communicating with another node by canceling the self-leakage interference with analog and digital self-leakage cancelers. Despite the above studies, there is a major obstacle to utilizing the FFT-based OFDM JCS method in real applications. This method has to use consecutive OFDM subcarriers and symbols to estimate the range and velocity on a fixed grid, whose interval determines the resolution. Therefore, the range and velocity resolutions are determined by the number of used subcarriers and OFDM symbols, respectively. Thus, for mobile networks that typically have limited subcarriers and OFDM symbols, e.g., 14 OFDM symbols in each DL time slot, the sensing accuracy, especially the velocity accuracy, is largely restricted.} Besides, in~\cite{Fang2020}, the author showed that overlapping interference deteriorates sensing performance in a networking situation. Therefore, it is also important to have a sensing method that can still work effectively under a low signal-to-interference-plus-noise ratio (SINR).
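To make this limitation concrete, consider a rough, back-of-the-envelope calculation with an assumed mmWave numerology (the numbers below are purely illustrative and are not tied to any specific standard configuration): with $f_c = 28$~GHz, $\Delta f = 120$~kHz, $N_c = 1024$ subcarriers, $M_s = 14$ OFDM symbols, and a total symbol duration of $T \approx 8.92~\mu$s, an FFT-based grid offers a range resolution of roughly $c/(2 N_c \Delta f) \approx 1.2$~m, but a velocity resolution of only about $\lambda/(2 M_s T) \approx 43$~m/s, because the Doppler grid spacing is fixed by the short observation window $M_s T$. This simple calculation illustrates why grid-free estimation is attractive when only a single DL time slot is available.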
\subsection{Our Contributions} {\color{blue} To resolve the aforementioned problems, we propose a multiple signal classification (MUSIC)-based sensing scheme for OFDM JCS systems that can achieve accurate estimation of the angle of arrival (AoA), range, and velocity, adapting to various OFDM communication signals with limited OFDM subcarriers and symbols. We also propose a JCS channel state information (CSI) enhancement method that exploits the JCS sensing results for refining CSI estimation with a Kalman filter. Furthermore, we provide some theoretical lower bound of mean square error (MSE) for the proposed MUSIC-based JCS sensing algorithms.} The main contributions of this paper are summarized as follows. \begin{itemize} \item[1.] {\color{blue} We propose a novel MUSIC-based JCS range and velocity estimation scheme, which consists of expanded two-dimensional (2D) MUSIC algorithms and two-step descent searching algorithms. The proposed scheme can use communication signals to achieve accurate range and velocity estimation.} \item[2.] {\color{blue} We propose a JCS CSI enhancement method based on the Kalman filter, which exploits the JCS sensing parameters to construct the state transfer model and refines the CSI estimation using the JCS sensing results. This method can improve the bit error rate (BER) in the case of imperfect CSI.} \item[3.] {\color{blue} We derive the theoretical MSEs for the proposed MUSIC-based JCS range and velocity estimation scheme using perturbation analysis. The theoretical MSEs of range and velocity estimation match the simulation MSEs well in the high SINR regime.} \item[4.] {\color{blue} Extensive simulations are conducted to validate the proposed JCS sensing and CSI enhancement schemes and the theoretical MSEs. The results show that the proposed sensing scheme outperforms the conventional 2D-FFT method in terms of range and Doppler estimation MSEs by more than 20 dB, and the JCS CSI enhancement method can significantly improve communication performance.} \end{itemize} \subsection{Organization and Notations} The remaining parts of this paper are organized as follows. In Section \ref{sec:system-model}, we describe the DL JCS model and transmitting signal model, and propose the JCS channel model. Section \ref{sec:JCS Downlink signal processing} proposes the MUSIC-based JCS AoA, range and velocity estimation method. Section \ref{sec:JCS performance} provides a theoretical analysis of the proposed estimation method. In Section \ref{sec:JCS result}, the simulation results are presented. Section \ref{sec:conclusion} concludes this paper. 
\textbf{Notations:} Bold uppercase letters denote matrices (e.g., $\textbf{M}$); bold lowercase letters denote column vectors (e.g., $\textbf{v}$); scalars are denoted by normal font (e.g., $\gamma$); the entries of vectors or matrices are referred to with brackets, for instance, the $q$th entry of vector $\textbf{v}$ is $[\textbf{v}]_{q}$, and the entry of the matrix $\textbf{M}$ at the $n$th row and $m$th column is ${[\textbf{M}]_{n,m}}$; $\left(\cdot\right)^H$, $\left(\cdot\right)^{*}$ and $\left(\cdot\right)^T$ denote Hermitian transpose, complex conjugate and transpose, respectively; ${\left\| {\mathbf{v}}_{k} \right\|_l}$ represents the $l$-norm of ${\mathbf{v}}_{k}$; $E\left( \cdot \right)$ represents the expectation of random variables; {\color{blue} ${\bf M}_1 \in \mathbb{C}^{M\times N}$ and ${\bf M}_2 \in \mathbb{R}^{M\times N}$ represent that ${\bf M}_1$ and ${\bf M}_2$ are ${M\times N}$ complex-value and real-value matrices, respectively, and $v \sim \mathcal{CN}(m,\sigma^2)$ means $v$ follows a complex Gaussian distribution with mean $m$ and variance $\sigma^2$. } \section{System Model}\label{sec:system-model} \subsection{DL JCS Model} \label{subsec:The Downlink JCS Model} \begin{figure}[!t] \captionsetup{width=0.47\textwidth} \centering \begin{minipage}[t]{0.52\linewidth} \centering \includegraphics[width=\columnwidth]{Downlink_JCAS_scenario.pdf} \caption{DL JCS Scenario.} \label{fig: Downlink JCS Model} \end{minipage} \begin{minipage}[t]{0.47\linewidth} \centering \includegraphics[width=\columnwidth]{UPA_model.pdf} \caption{UPA model.} \label{fig: UPA model} \end{minipage}% \end{figure} As shown in Fig.~\ref{fig: Downlink JCS Model}, we consider the DL JCS process between the base station (BS) and the machine-type user equipment (MUE), such as a road-side infrastructure and a vehicle. The millimeter-wave (mmWave) signal is considered for DL JCS. It is particularly suitable for JCS given its potential for high resolution. {\color{blue} The BS and MUEs are equipped with uniform plane arrays (UPAs). The BS is equipped with two spatially well-separated UPAs and a self-leakage canceler to realize the FD capability, as detailed in~\cite{IBFDJCR}. Therefore, the self-leakage between arrays is ignored and not considered in the signal model in this paper. One BS array is used for transmitting the DL JCS signal, and the other is used for consistently receiving echoes of the JCS signal. MUE receives the JCS signal to demodulate the communication data, while BS receives the echoes to estimate the AoAs, ranges, and velocities. Moreover, we consider that both BS and MUE receive the superimposed co-channel interference from multiple reflected interference sources (ISs). The MUE is equipped with one UPA for receiving the communication signal. The array sizes of the BS and MUEs are ${P_t} \times {Q_t}$ and ${P_r} \times {Q_r}$, respectively. } \subsection{UPA Model} \label{subsec:UPA model} Fig.~\ref{fig: UPA model} demonstrates the model of UPAs. The uniform interval between the neighboring antenna elements is denoted by $d_a$. The size of the UPA is denoted by ${P} \times {Q}$. The two-dimensional (2D) AoA for receiving or the angle of departure (AoD) for transmitting the $k$th far-field signal is ${{\bf{p}}_k} = {\left( {{\varphi _k},{\theta _k}} \right)^T}$, where ${\varphi _k}$ is the azimuth angle, and ${\theta _k}$ is the elevation angle. We use ${A_{p,q}}$ to denote the ($p$,$q$)th antenna element, and ${A_{0,0}}$ to represent the reference antenna element.
Then, the phase difference between ${A_{p,q}}$ and ${A_{0,0}}$ is expressed as \begin{equation}\label{equ:phase_difference} {a_{p,q}} \!\left( {{{\bf{p}}}} \right) \! =\! \exp \!\left[ \!{ - j \!\frac{{2\pi }}{\lambda }{d_a} \!\!\left( {p\cos {\varphi} \sin {\theta} \!+ \! q\sin {\varphi} \sin {\theta}} \right)} \!\right], \end{equation} where $\lambda = c/f_c$ is the wavelength of the carrier, $c$ is the speed of light in vacuum, and $f_c$ is the carrier frequency. The steering vector for the array is \begin{equation}\label{equ:steeringVec} {\bf{a}}\left( {{{\bf{p}}_k}} \right) = \left[ {{a_{p,q}}\left( {{{\bf{p}}_k}} \right)} \right]\left| {_{p = 0,1,...,P - 1;q = 0,1,...,Q - 1}}\right., \end{equation} where ${\bf{a}}\left( {{{\bf{p}}_k}} \right)$ is a ${P}{Q} \times 1$ vector, and ${\left. {\left[ {{v_{p,q}}} \right]} \right|_{(p,q) \in {\bf{S}}1 \times {\bf{S}}2}}$ denotes the vector stacked by ${v_{p,q}}$ satisfying $p\in{\bf{S}}1$ and $q\in{\bf{S}}2$. The steering matrix for $K$ far-field signals is then represented as \begin{equation}\label{equ:steering Matrix} {\bf{A}} = \left[ {{\bf{a}}\left( {{{\bf{p}}_1}} \right),{\bf{a}}\left( {{{\bf{p}}_2}} \right),...,{\bf{a}}\left( {{{\bf{p}}_K}} \right)} \right], \end{equation} which is a matrix of dimension ${P}{Q} \times K$. \subsection{DL JCS Signal and Channel Model} \label{subsec:uplink_signal model} In this paper, we consider the JCS system using OFDM-based signals. The transmitting signal is \begin{equation}\label{equ:OFDM transmit signal} {s_D}\left( t \right)\! =\!\! \sum\limits_{m = 0}^{M_s \!-\! 1}\!{\sum\limits_{n = 0}^{N_c \!-\! 1} \!\!\!{\sqrt {P_t} d_{n,m}{e^{j2\pi \left( {{f_c} + n\Delta {f}} \right)t}}} } {\mathop{\rm Rect}\nolimits} (\frac{{t - mT}}{{T}}), \end{equation} where $P_t$ is the DL transmit power, $d_{n,m}$ is the $m$th baseband OFDM symbol of the $n$th subcarrier, $\Delta {f}$ is the subcarrier interval, $T = T_s + {T_g}$, $T_s = \frac{1}{{\Delta {f}}}$ is the duration of OFDM symbol, ${T_g}$ is the guard interval, $M_s$ and $N_c$ are the number of OFDM symbols and subcarriers, respectively, and ${\rm Rect}\left( t/T\right)$ is the rectangular window function of duration $T$. {\color{blue} When the DL preamble signal for beam alignment and CSI estimation is transmitted, $d_{n,m}$ is replaced by the preamble symbols, denoted by $\bar d_{n,m}$, which is known and deterministic to both BS and MUE. When the DL data signal is transmitted, $d_{n,m} \in {\Theta _{QAM}}$ is a random symbol, where ${\Theta _{QAM}}$ is the constellation of quadrature amplitude modulation (QAM). Note that $d_{n,m}$ is known to BS but unknown to MUE.} {\color{blue} Next, we present the JCS channel model. As illustrated in Fig.~\ref{fig: Downlink JCS Model}, the DL JCS channel comprises a communication channel and an echo sensing channel. \begin{itemize} \item The JCS communication channel consists of a line-of-sight (LoS) path and several non-line-of-sight (NLoS) scattering paths. \item The JCS echo sensing channel consists of the echo path from MUE as a scatterer, and the echo paths from other scatterers which may or may not contribute to the communication channel. Since the signals after multiple reflections are much smaller than those with only one reflection, we only consider echoes directly reflected from scatterers. 
\end{itemize} } {\color{blue} Then, the JCS sensing echo and communication channels at the $n$th subcarrier of the $m$th OFDM symbol are defined as~\cite{Chen2021CDOFDM, Zhang2019JCRS} \begin{equation}\label{equ:general_JCS_channel} {{\bf{H}}_{i,n,m}}{\rm{ = }}\sum\limits_{l = 0}^{L - 1} {\left[ {{\alpha _{i,n,m,l}}{\bf{a}}({\bf{p}}_{RX,l}^i){{\bf{a}}^T}({{\bf{p}}_{TX,l}})} \right]}, \end{equation} where ${\bf{p}}_{TX,l}$ is the AoD of BS's JCS transceivers, ${{\bf{a}}( {{\bf{p}}_{TX,l}} )} \in \mathbb{C}^{{P_t}{Q_t} \times 1}$ is the corresponding transmit steering vector, as given in \eqref{equ:steeringVec}, $L$ is the number of scatterers, $l = 0$ is for the direct path between BS and MUE, $l = 1, \cdots, L-1$ is for the reflected paths involving the $l$th scatterer. Moreover, $i = S$ and $i = C$ represent the echo sensing and communication channels, respectively; ${\bf{p}}_{RX,l}^S$ and ${\bf{p}}_{RX,l}^C$ are the AoAs of BS's echo receiver and the MUE's communication receiver, respectively; and ${\alpha _{S,n,m,l}}$ and ${\alpha _{C,n,m,l}}$ are the channel fading for the $l$th sensing echo path and communication path, respectively. } \subsubsection{JCS Echo Sensing Channel} {\color{blue} When $i = S$, ${{\bf{a}}( {\bf{p}}_{RX,l}^{S} )}\in \mathbb{C}^{{P_t}{Q_t} \times 1}$ is the receive steering vector for the $l$th echo sensing path, as given in \eqref{equ:steeringVec}. Since the mmWave array is typically small, ${\bf{p}}_{RX,l}^{S} = {{\bf{p}}_{TX,l}}$. Moreover, ${\alpha _{S,n,m,l}}$ is the fading factor for the $l$th echo (when $l = 0$, MUE acts as a scatterer), which is given by \begin{equation}\label{equ:alpha_S} {\alpha _{S,n,m,l}} = {b_{S,l}}{e^{j2\pi m{T}{f_{s,l,1}}}}{e^{ - j2\pi n\Delta {f} {{\tau _{s,l}}} }}, \end{equation} where ${f_{s,0,1}} = \frac{{2{v_{r,0,1}}}}{\lambda }$ and ${\tau _{s,0}} = \frac{{2{d_{0,1}}}}{c}$ are the echo Doppler frequency shift and time delay between BS and MUE, with $v_{r,0,1}$ and $d_{0,1}$ being the corresponding radial relative velocity and the distance, respectively; ${f_{s,l,1}} = \frac{{2{v_{r,l,1}}}}{\lambda }$ and ${\tau _{s,l}} = \frac{{2{d_{l,1}}}}{c}$ are the echo Doppler frequency shift and time delay between BS and the $l$th scatterer, with $v_{r,l,1}$ and ${d_{l,1}}$ being the corresponding radial relative velocity and distance, respectively. Moreover, ${b_{S,l}} = \sqrt {\frac{{{\lambda ^2}}}{{{{\left( {4\pi } \right)}^3}{d_{l,1}}^4}}} {\beta _{S,l}}$, and ${\beta _{S,l}}$ is the random reflection fading factor of the $l$th scatterer, following the complex Gaussian distribution with zero mean and variance $\sigma _{S\beta ,l}^2$. } \subsubsection{JCS Communication Channel} When $i = C$, ${\bf{a}}( {{\bf{p}}^{C}_{RX,l}} ) \in \mathbb{C}^{{P_r}{Q_r} \times 1}$ is the receive steering vector for the $l$th communication path, as given in \eqref{equ:steeringVec}. Moreover, ${\alpha _{C,n,m,l}}$ is the fading factor for the $l$th path, and is expressed as \begin{equation}\label{equ:alpha_C} {\alpha _{C,n,m,l}} \!= \! \left\{ \!\!\!
\begin{array}{l} {b_{C,0}}{e^{j2\pi mT{{f_{c,d,0}}} }}{e^{ - j2\pi n\Delta {f} {{\tau _{c,0}}} }},l = 0\\ \!\!\!\begin{array}{l} {b_{C,l}}{e^{j2\pi ( {{f_{d,l,1}} + {f_{d,l,2}}} )mT}} {e^{ - j2\pi n\Delta {f}( {{\tau _{c,l,1}} + {\tau _{c,l,2}}} )}} \end{array},l > 0 \end{array} \right., \end{equation} where ${f_{c,d,0}} = \frac{{{v_{r,0,1}}}}{\lambda }$ and ${\tau _{c,0}} = \frac{{{d_{0,1}}}}{c}$ are the Doppler frequency shift and time delay of the LoS path; ${f_{d,l,1}} = \frac{{{v_{r,l,1}}}}{\lambda }$, ${f_{d,l,2}} = \frac{{{v_{r,l,2}}}}{\lambda }$, ${\tau _{c,l,1}} = \frac{{{d_{l,1}}}}{c}$ and ${\tau _{c,l,2}} = \frac{{{d_{l,2}}}}{c}$ are the Doppler frequency shifts and time delay between BS and scatterer, and between the scatterer and MUE of the $l$th NLoS path, respectively, with $v_{r,l,2}$ and ${d_{l,2}}$ being the radial relative velocity and distance between the $l$th scatterer and MUE, respectively; ${b_{C,0}} = \sqrt {\frac{{{\lambda ^2}}}{{{{(4\pi {d_0})}^2}}}}$ is the propagation loss of the LoS path, and ${b_{C,l}} = \sqrt {\frac{{{\lambda ^2}}}{{{{\left( {4\pi } \right)}^3}{d_{l,1}}^2{d_{l,2}}^2}}} {\beta _{C,l}}$ is the path fading factor of the $l$th NLoS path with ${\beta _{C,l}}$ being the scattering factor of the $l$th scatterer. {\color{blue} Here, ${\beta _{C,l}}$ is the random reflecting factor of the scatterer in the $l$th path, which is assumed to follow the complex Gaussian distribution with zero mean and variance $\sigma _{C\beta ,l}^2$.} Due to the existence of ${b_{C,l}}$, the LoS path is much stronger than the NLoS path for mmWave. {\color{blue} Note that ${\bf{H}}_{C,n,m}$ is unknown and needs to be estimated by utilizing the DL preambles, $\bar d_{n,m}$. The parameters of ${\bf{H}}_{S,n,m}$ are unknown, and BS has to estimate the AoA, range and Doppler in ${\bf{H}}_{S,n,m}$. Since BS acts as both the sensing transmitter and receiver, both $d_{n,m}$ and $\bar d_{n,m}$ can be used for DL sensing. Moreover, $l = 0$ represents a special path, for which the echo time delay and Doppler are twice of those in the communication channel, which is the theoretical basis for the JCS CSI enhancement method to be introduced in Section \ref{sec:JCS_comm}. } \begin{figure}[!t] \centering \includegraphics[width=0.40\textheight]{DL_JCAS_signal_processing.pdf}% \DeclareGraphicsExtensions. \caption{DL JCS signal processing diagram.} \label{fig: DL_JCS_signal_processing} \end{figure} \subsection{JCS Received Signal Model} \label{sec:JCS Signal_reception} In this subsection, we present the expressions for DL JCS received signals. {\color{blue} \subsubsection{DL Communication Received Signal} The frequency-domain DL communication signal received by MUE at the $m$th OFDM symbol of the $n$th subcarrier is expressed as \begin{equation}\label{equ:y_DC} y_{C,n,m} = \sqrt {P_t} d_{n,m}{\left( {{\bf{w}}_{RX}} \right)^H}{\bf{H}}_{C,n,m}{\bf{w}}_{TX} + n^{X}_{C,n,m}, \end{equation} where ${\bf{w}}_{TX} \in \mathbb{C}^{{P_t}{Q_t} \times 1}$ and ${\bf{w}}_{RX} \in \mathbb{C}^{{P_r}{Q_r} \times 1}$ are the JCS transmit and communication receive beamforming (BF) vectors, respectively; $\| {{\bf{w}}_{TX}} \|_2^2 = \| {{\bf{w}}_{RX}} \|_2^2 = 1$. In this paper, the low-complexity least-square (LS) method is used to generate ${\bf{w}}_{TX}$ and ${\bf{w}}_{RX}$ for BF. BS utilizes the known DL preambles, i.e., $d_{n,m} = \bar d_{n,m}$, to conduct beam alignment with MUE. 
When beam alignment is completed, ${\bf{w}}_{TX} = {c_1}{[ {{{\bf{a}}^T}({\bf{\tilde p}}_{TX,0})} ]^\dag }$ and ${\bf{w}}_{RX} = {c_2}{[ {{\bf{a}}({\bf{\tilde p}}_{RX,0})} ]^\dag }$, where ${c_1}$ and ${c_2}$ are both arbitrary complex values with modulus 1, ${\left[ \cdot \right]^\dag }$ is the pseudo-inverse operation, ${\bf{\tilde p}}_{TX,0} \approx {\bf{p}}_{TX,0}$, and ${\bf{\tilde p}}_{RX,0} \approx {\bf{p}}^{C}_{RX,0} $. Simultaneously, the unknown communication CSI, $( {{\bf{w}}_{RX}} )^H{\bf{H}}_{C,n,m}{\bf{w}}_{TX}$, can be estimated by processing the received preambles. Moreover, $n^{X}_{C,n,m} = n_{C,n,m} + \xi _{C,n,m}$ is the sum of noise and interference, $n_{C,n,m} = {\left( {{\bf{w}}_{RX}} \right)^H}{\bf{n}}_{C,n,m}$ and $\xi _{C,n,m} = {\left( {{\bf{w}}_{RX}} \right)^H}{\bf{x}}_{C,n,m}$ are transformed noise and interference, the dimensions of ${\bf{n}}_{C,n,m}$ and ${\bf{x}}_{C,n,m}$ are both ${P_r}{Q_r} \times 1$, ${\bf{n}}_{C,n,m}$ is Gaussian noise vectors with each element following ${\cal C}{\cal N}(0,\sigma _N^2)$, and ${\bf{x}}_{C,n,m}$ is the reflected interference signals from other network devices. We assume there are ${N_{ic}}$ ISs, and the reflected fading for each IS follows a Gaussian distribution. Since the superimposed one of multiple random OFDM signals is noise-like, the $p$th element of ${\bf{x}}_{C,n,m}$ can be given as ${\left[ {{\bf{x}}_{C,n,m}} \right]_p} = \sum\limits_{i = 0}^{{N_{ic}} - 1} {\sqrt {{P_{i,c}}} } \beta _{i,p}^I$, where ${P_{i,c}}$ is the power of incident signal from the $i$th IS, and $\beta _{i,p}^I \sim \mathcal{CN}(0,1)$. Let ${P_{IC}}{\rm{ = }}\sum\limits_{i = 0}^{{N_{ic}} - 1} {{P_{i,c}}}$. The interference to noise power ratio (INR) is $\gamma _C^{IN} = \frac{{{P_{IC}}}}{{\sigma _N^2}}$. Further, we define the communication SINR (C-SINR) as \begin{equation}\label{equ:gamma_c} {\gamma _{C,n,m}} = \frac{{P_t\left\| {{h_{C,n,m}}} \right\|_2^2}}{{{P_{IC}} + \sigma _N^2}}, \end{equation} where ${h_{C,n,m}} = {\left( {{\bf{w}}_{RX}} \right)^H}{\bf{H}}_{C,n,m}{\bf{w}}_{TX}$ is the gain of DL communication signal at each antenna element. } {\color{blue} \subsubsection{DL Echo Sensing Received Signal} \label{sec:DL Echo Sensing Received Signal} The echo signal that BS receives for the $m$th OFDM symbol at the $n$th subcarrier is given by \begin{equation}\label{equ:sensing_signal} {\bf{y}}_{S,n,m} = \sqrt {P_t} d_{n,m}{\bf{H}}_{S,n,m}{\bf{w}}_{TX} + {\bf{n}}_{S,n,m}^{X} = \sqrt {P_t} d_{n,m}\sum\limits_{l = 0}^{L - 1} {\left[\!\! \begin{array}{l} ( {{\alpha _{S,n,m,l}}} )\chi _{TX,l} {\bf{a}}( {{\bf{p}}_{RX,l}^{S}} ) \end{array} \!\!\right]} \!+\! {\bf{n}}_{S,n,m}^{X}, \end{equation} where $\chi _{TX,l} = {{\bf{a}}^T}( {{\bf{p}}_{TX,l}} ){\bf{w}}_{TX}$ represents the gain of the DL JCS transmit BF, ${\bf{n}}_{S,n,m}^{X} = {\bf{n}}_{S,n,m} + {\bf{x}}_{S,n,m}$ is the sum of noise and interference, ${\bf{n}}_{S,n,m}$ is the Gaussian noise vector with each element following ${\cal C}{\cal N}(0,\sigma _N^2)$, ${\bf{x}}_{S,n,m}$ is the superimposed interference vector for ${N_{is}}$ reflected ISs, and the dimensions of ${\bf{n}}_{S,n,m}$ and ${\bf{x}}_{S,n,m}$ are ${P_t}{Q_t} \times 1$. Similar to ${\bf{x}}_{C,n,m}$, the $p$th element of ${\bf{x}}_{S,n,m}$ can be given as ${\left[ {{\bf{x}}_{S,n,m}} \right]_p} = \sum\limits_{i = 0}^{{N_{is}} - 1} {\sqrt {{P_{i,s}}} } \beta _{i,p}^I$, where ${P_{i,s}}$ is the incident power of the $i$th IS, and $\beta _{i,p}^I \sim \mathcal{CN}(0,1)$. 
The aggregate power of each element of ${\bf{x}}_{S,n,m}$ is ${P_{IS}}{\rm{ = }}\sum\limits_{i = 0}^{{N_{is}} - 1} {{P_{i,s}}}$. The sensing INR is defined as $\gamma _S^{IN} = {{P_{IS}}}/{\sigma _N^2}$. Further, the sensing SINR (S-SINR) is defined as \begin{equation}\label{equ:gamma_s} {\gamma _{S,n,m}} = \frac{{P_t\left\| {{h_{S,n,m,l}}} \right\|_2^2}}{{{P_{IS}} + \sigma _N^2}}, \end{equation} where ${h_{S,n,m,l}} = {\alpha _{S,n,m,l}}\chi _{TX,l}$ is the gain of DL echo sensing signal at each antenna element. By defining $s_{n,m,l} = \sqrt {P_t} d_{n,m}{\alpha _{S,n,m,l}}\chi _{TX,l}$ and ${\bf{s}}_{n,m} = { {[ {s_{n,m,l}} ]} |_{l = 0,1,...,L - 1}}$, \eqref{equ:sensing_signal} can be expressed in matrix form as \begin{equation}\label{equ:y_s_n_m} {\bf{y}}_{S,n,m} = {{\bf{A}}_{S,RX}}{\bf{s}}_{n,m} + {\bf{n}}_{S,n,m}^{X}, \end{equation} where ${{\bf{A}}_{S,RX}} = { {[ {{\bf{a}}( {{\bf{p}}_{RX,l}^{S}} )} ]} |_{l = 0,1,...,L - 1}}$ is the steering matrix stacked by steering vectors of $L$ echoes, ${{\bf{A}}_{S,RX}} \in \mathbb{C}^{{P_t}{Q_t} \times L}$, and ${\bf{s}}_{n,m}\in \mathbb{C}^{L \times 1}$. By stacking all the $M_s$ OFDM symbols with $N_c$ subcarriers, we have \begin{equation}\label{equ:y_s_D} {\bf{Y}}_S = {{\bf{A}}_{S,RX}}{{\bf{S}}} + {\bf{N}}_{t}^{X}, \end{equation} where ${{\bf{S}}} = { {[ {{\bf{s}}_{n,m}} ]} |_{(n,m) \in [0, \cdots ,N_c] \times [0, \cdots ,M_s]}} \in \mathbb{C}^{L \times N_cM_s}$, and ${\bf{Y}}_S \in \mathbb{C}^{{P_t}{Q_t} \times {N_c}{M_s}}$. } \section{DL JCS Signal Processing}\label{sec:JCS Downlink signal processing} In this section, we demonstrate the signal processing for DL JCS sensing and communication, which is shown in Fig.~\ref{fig: DL_JCS_signal_processing}. We first present the sensing signal processing scheme, and then elaborate on the JCS CSI enhancement method. \setcounter{equation}{13} \subsection{JCS Sensing Signal Processing} In this subsection, we first present the conventional MUSIC method for estimating the 2D AoAs, and then introduce the novel MUSIC-based range and Doppler estimation method. \subsubsection{JCS MUSIC 2D Angle Detection} First, the correlation matrix of ${\bf{Y}}_S$ is obtained as \begin{equation}\label{equ:R_x_D} {\bf{R}}_{\bf{X}}{\rm{ = }}\frac{1}{{M_sN_c}} {{\bf{Y}}_S{{[ {{\bf{Y}}_S} ]}^H}} . \end{equation} By applying eigenvalue decomposition to ${\bf{R}}_{\bf{X}}$, we have \begin{equation}\label{equ:svd_R_x_D} \left[ {\bf{U}}_x,{\bf{\Sigma }}_x \right] = {\rm eig}\left( {{\bf{R}}_{\bf{X}}} \right), \end{equation} where ${\bf{\Sigma }}_x$ is the real-value eigenvalue diagonal matrix in descending order, and ${\bf{U}}_x$ is the orthogonal eigen matrix. Calculate the average of the eigenvalues and denote it as $m_x$. {\color{blue} Let ${\alpha _t}$ be a preset threshold, which is determined as elaborated in \textbf{Appendix~\ref{Appendix_alpha_t}}.} Then, the number of echo paths is determined as the number of eigenvalues no smaller than ${\alpha _t}m_x$, denoted by $N_x$. Construct ${\bf{U}}_N = {\bf{U}}_x\left( {:,N_x + 1:{P_t}{Q_t}} \right)$\footnote{${\bf{U}}_x\left( {:,N_x + 1:{P_t}{Q_t}} \right)$ denotes the submatrix consisting of the $(N_x + 1)$th to the ${P_t}{Q_t}$th columns of the matrix.} as the noise subspace basis.
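As a compact illustration of \eqref{equ:R_x_D}, \eqref{equ:svd_R_x_D}, and the eigenvalue-threshold rule above, the noise-subspace construction can be sketched as follows. This is a minimal NumPy sketch for illustration only; the function name is ours, and the threshold $\alpha_t$ is assumed to be given (e.g., from Appendix~\ref{Appendix_alpha_t}).
\begin{verbatim}
import numpy as np

def noise_subspace(Y, alpha_t):
    # Y: stacked echo snapshots of size (Pt*Qt) x (Nc*Ms), i.e., Y_S above.
    PtQt, num_snapshots = Y.shape
    R = Y @ Y.conj().T / num_snapshots          # sample correlation matrix
    eigval, eigvec = np.linalg.eigh(R)          # Hermitian EVD, ascending eigenvalues
    eigval = eigval[::-1]                       # reorder to descending
    eigvec = eigvec[:, ::-1]
    m_x = eigval.mean()                         # average eigenvalue
    N_x = int(np.sum(eigval >= alpha_t * m_x))  # estimated number of echo paths
    U_N = eigvec[:, N_x:]                       # noise subspace basis
    return U_N, N_x
\end{verbatim}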
We then use it to obtain the spatial angular spectrum function as~\cite{HAARDT2014651} \begin{equation}\label{equ:spectrum function} f_a\left( {{\bf{p}};{\bf{U}}_N} \right) = {{\bf{a}}^H}\left( {{\bf{p}}} \right) {{\bf{U}}_N{{\left( {{\bf{U}}_N} \right)}^H}} {\bf{a}}\left( {{\bf{p}}} \right), \end{equation} where ${\bf{p}} = \left( {\varphi ,\theta } \right)$ is the 2D angle, and ${\bf{a}}\left( \bf{p} \right)$ is given in \eqref{equ:steeringVec}. The spatial spectrum is represented as \cite{HAARDT2014651} \begin{equation}\label{equ:spatial_spectrum} S_a\left( {{\bf{p}};{\bf{U}}_N} \right) = {[ {{{\bf{a}}^H}\left( {\bf{p}} \right){\bf{U}}_N{{( {{\bf{U}}_N} )}^H}{\bf{a}}\left( {\bf{p}} \right)} ]^{ - 1}}. \end{equation} The maximum points of $S_a\left( {{\bf{p}};{\bf{U}}_N} \right)$, i.e., the minimum points of $f_a\left( {{\bf{p}};{\bf{U}}_N} \right)$ are the estimated AoAs~\cite{Lifu1993}. We first find $N_x$ local maximum points of $S_a\left( {{\bf{p}};{\bf{U}}_N} \right)$ using a grid searching method with relatively large granularity, then we use the Newton descent method to identify the accurate minimum point of $f_a\left( {{\bf{p}};{\bf{U}}_N} \right)$ by inputting the above local maximum points as initial points for iteration. \subsubsection{JCS Range and Doppler Detection} {\color{blue} After the AoAs are obtained, through BF at the AoA of interest, the filtered received signal at the $n$th subcarrier of the $m$th OFDM symbol can be expressed as \begin{equation}\label{equ:beamforming_receive} \bar y_{S,n,m,k} = {( {{\bf{w}}_{RX,S,k}} )^H}{\bf{y}}_{S,n,m} = \sqrt {P_t} d_{n,m}\sum\limits_{l = 0}^{L - 1} {[ {{\alpha _{S,n,m,l}}\chi _{TX,l}\varpi _{RX,l,k}} ]} + w_{t,n,m,k}, \end{equation} where $w_{t,n,m,k}{\rm{ = }}{( {{\bf{w}}_{RX,S,k}} )^H}{\bf{n}}_{S,n,m}^{X}$ is the transformed noise and interference with zero mean and variance ${\sigma _W}^2$, $\varpi _{RX,l,k} = {( {{\bf{w}}_{RX,S,k}} )^H}{\bf{a}}( {{\bf{p}}_{RX,l}^{S}} )$ is the receive BF gain, and ${{\bf{w}}_{RX,S,k}}$ is the receive BF vector for the $k$th AoA, $k \in [ {0,1,...,N_x - 1} ]$. Note that $\varpi _{RX,k,k}$ is typically larger than $\varpi _{RX,l,k}$ ($l \ne k$) due to the narrow beam feature of mmWave. } By substituting \eqref{equ:alpha_S} into \eqref{equ:beamforming_receive}, we obtain \eqref{equ:y_n_m_k}. \begin{equation}\label{equ:y_n_m_k} \begin{array}{l} \bar y_{S,n,m,k} = \sqrt {P_t} d_{n,m}{b_{S,k}}\varpi _{RX,k,k}\chi _{TX,k}{e^{j2\pi {f_{s,k,1}}m{T}}}{e^{ - j2\pi n\Delta {f}\left( {\frac{{{r_k}}}{c}} \right)}} + \\ \sum\limits_{l = 0,l \ne k}^{L - 1} {\left[ {\sqrt {P_t} d_{n,m}{b_{S,l}}\varpi _{RX,l,k}\chi _{TX,l}{e^{j2\pi {f_{s,l,1}}m{T}}}{e^{ - j2\pi n\Delta {f}\left( {\frac{{{r_l}}}{c}} \right)}}} \right]} + w_{t,n,m,k}. \end{array} \end{equation} In \eqref{equ:y_n_m_k}, there are independent complex exponential functions for range and Doppler, i.e., $e^{ - j2\pi n\Delta {f}\left( {\frac{{{r_l}}}{c}} \right)}$ and $e^{j2\pi {f_{s,l,1}}m{T}}$, respectively. \setcounter{equation}{19} Here, we define the range and Doppler steering vectors as \begin{equation}\label{equ:range_steering} {{\bf{a}}_r}\left( r \right) = { {[ {{e^{ - j2\pi n\Delta {f}\frac{r}{c}}}} ]} |_{n = 0,1,...,{N_c} - 1}}, \end{equation} \begin{equation}\label{equ:doppler_steering} {{\bf{a}}_f}\left( f \right) = { {[ {{e^{j2\pi m{T}f}}} ]} |_{m = 0,1,...,{M_s} - 1}}, \end{equation} respectively. 
The range and Doppler steering matrices are defined as \begin{equation}\label{equ:range_steering_matrix} {{\bf{A}}_{\bf{r}}} = { {[ {{{\bf{a}}_r}\left( {{r_l}} \right)} ]} |_{l = 0,1,...,L - 1}}, \end{equation} \begin{equation}\label{equ:Doppler_steering_matrix} {{\bf{A}}_{\bf{f}}} = { {[ {{{\bf{a}}_f}\left( {{f_{s,l,1}}} \right)} ]} |_{l = 0,1,...,L - 1}}, \end{equation} where ${{\bf{A}}_{\bf{r}}} \in \mathbb{C}^{N_c \times L}$, and ${{\bf{A}}_{\bf{f}}} \in \mathbb{C}^{M_s \times L}$. Stack $\bar y_{S,n,m,k}$ into a matrix ${\bf{\bar Y}}_S$ where ${\left[ {{\bf{\bar Y}}_S} \right]_{n,m}} = \bar y_{S,n,m,k}$, then erase the communication symbol matrix ${\bf{D}}_s$ where ${\left[ {{\bf{D}}_s} \right]_{n,m}} = d_{n,m}$. From ${\bf{\bar Y}}_S$, we obtain \begin{equation}\label{equ:Y_SSU} {\bf{\bar H}}_S = \frac{{{\bf{\bar Y}}_S}}{{{\bf{D}}_s}}, \end{equation} where the division is element-wise, and ${\bf{\bar H}}_S \in \mathbb{C}^{N_c \times M_s}$. According to \eqref{equ:y_n_m_k}, ${\bf{\bar H}}_S$ can be expressed by ${{\bf{A}}_{\bf{r}}}$ as \begin{equation}\label{equ:Y_SSU_As} {\bf{\bar H}}_S = {{\bf{A}}_{\bf{r}}}{\bf{S}}_{r,s} + {\bf{W}}_{tr}, \end{equation} where ${\bf{S}}_{r,s} = { {[ {{\bf{s}}_{r,m}} ]} |_{m = 0,1,...,{M_s} - 1}} \in \mathbb{C}^{L \times M_s}$, ${\bf{s}}_{r,m} = { {[ {\sqrt {P_t} {b_{S,l}}\varpi _{TX,l,k}\chi _{TX,l}} {e^{j2\pi m{T} {f_{s,l,1}}}} ]} |_{l = 0,1,...,L - 1}}$, and $\left[ {\bf{W}}_{tr} \right]_{n,m} = w_{t,n,m,k}$. On the other hand, the transpose of ${\bf{\bar H}}_S$, i.e., ${\left( {\bf{\bar H}}_S \right)^T}$, can be presented by ${{\bf{A}}_{\bf{f}}}$ as \begin{equation}\label{equ:Y_SSU_TAs} \left( {\bf{\bar H}}_S \right)^T = {{\bf{A}}_{\bf{f}}}{\bf{S}}_{f,s} + {\bf{W}}_{tf}, \end{equation} where ${\bf{S}}_{f,s} = { {[ {{\bf{s}}_{f,n}} ]} |_{n = 0,1,...,N_c - 1}} \in \mathbb{C}^{L \times N_c}$, ${\bf{s}}_{f,n} \!\!=\!\! { {[ {\sqrt {P_t} {b_{S,l}}\varpi _{TX,l,k}\chi _{TX,l}}{e^{ - j2\pi n\Delta {f} {\frac{{{r_l}}}{c}} }} ]} |_{l = 0,1,...,L - 1}}$, and ${\bf{W}}_{tf} = {[ {{\bf{W}}_{tr}} ]^T}$. {\color{blue} The range and Doppler can be estimated via the autocorrelation of ${\bf{\bar H}}_S$ and ${\left( {{\bf{\bar H}}_S} \right)^T}$, which are given by \begin{equation}\label{equ:R_x_tau} {{\bf{R}}_{{{X}},r}} = \frac{1}{{{M_s}}}{\bf{\bar H}}_S{( {{\bf{\bar H}}_S} )^H}, {{\bf{R}}_{{{X}},f}} = \frac{1}{{{N_c}}}{( {{\bf{\bar H}}_S} )^T}{( {{\bf{\bar H}}_S} )^*}, \end{equation} respectively. Denote the noise subspaces of ${{\bf{R}}_{{{X}},r}}$ and ${{\bf{R}}_{{{X}},f}}$ as ${{\bf{U}}_{x,rN}}$ and ${{\bf{U}}_{x,fN}}$, respectively. \begin{Theo} \label{Theo:1} {\rm The minimum of ${\| {{\bf{U}}{{_{x,r N}}^H}{{\bf{a}}_r}\left( r \right)} \|_2^2}$, denoted by ${r_{s,l}}$, is linked to the range via ${r_{s,l}} = 2{d_{l,1}}$. The minimum of ${\| {{\bf{U}}{{_{x,fN}}^H}{{\bf{a}}_f}\left( f \right)} \|_2^2}$ corresponds to the Doppler value, ${f_{s,l,1}}$. \begin{proof} The proof is presented in Appendix \ref{Theo:A}. 
\end{proof} } \end{Theo} By applying eigenvalue decomposition to ${\bf{R}}_{{{X}},r }$ and ${\bf{R}}_{{{X}},f}$, we have \begin{equation}\label{equ:ED_R_x_tau} \begin{aligned} \left[ {{\bf{U}}_{x,r },{\bf{\Sigma }}_{x,r }} \right] = {\rm eig}\left( {{\bf{R}}_{{{X}},r }} \right),\ \left[ {{\bf{U}}_{x,f},{\bf{\Sigma }}_{x,f}} \right] = {\rm eig}\left( {{\bf{R}}_{{{X}},f}} \right), \end{aligned} \end{equation} where ${\bf{\Sigma }}_{x,r }$ and ${\bf{\Sigma }}_{x,f}$ are the real-value diagonal matrices of eigenvalues in the descending order, and ${\bf{U}}_{x,r }$ and ${\bf{U}}_{x,f}$ are the corresponding eigenvector matrices. We use $m_{x,r }$ to denote the mean value of ${\bf{\Sigma }}_{x,r }$, and then set the threshold ${\alpha _{t,r}}$ using the method in \textbf{Appendix}~\ref{Appendix_alpha_t} by replacing ${\bf{\Sigma }}_x$ with ${\bf{\Sigma }}_{x,r }$. The number of targets in the AoA of interest, $N_{x,r }$, is then determined as the number of eigenvalues no smaller than ${\alpha _{t,r }}m_{x,r }$. Then, the noise subspace basis for range estimation is derived as ${\bf{U}}_{x,r N} = {\bf{U}}_{x,r }( {:,N_{x,r } + 1:{N_c}} )$. Since the number of targets is the same for both Doppler and range estimation, the noise subspace basis for the Doppler estimation can be derived as ${\bf{U}}_{x,fN} = {\bf{U}}_{x,f}( {:,N_{x,r} + 1:M_s} )$. We use ${\bf{U}}_{x,r N}$ and ${\bf{U}}_{x,fN}$ to derive the range and Doppler spectrum functions as \begin{equation}\label{equ:fr} \begin{aligned} f_r( r;{\bf{U}}_{x,r N} ) = {{\bf{a}}_r}{( r )^H}{\bf{U}}_{x,r N}{( {{\bf{U}}_{x,r N}} )^H}{{\bf{a}}_r}( r ),\ f_f( f;{\bf{U}}_{x,fN} ) = {{\bf{a}}_f}{( f )^H}{\bf{U}}_{x,fN}{( {{\bf{U}}_{x,fN}} )^H}{{\bf{a}}_f}( f ), \end{aligned} \end{equation} respectively. The range and Doppler spectra can be given by \begin{equation}\label{equ:Sr} \begin{aligned} {S_r}( {r;{\bf{U}}_{x,rN}} ) = {[ {{{\bf{a}}_r}{{( r )}^H}{\bf{U}}_{x,rN}{{( {{{\bf{U}}_{x,rN}}} )}^H}{{\bf{a}}_r}( r )} ]^{ - 1}},\\ {S_f}( {f;{\bf{U}}_{x,fN}} ) = {[ {{{\bf{a}}_f}{{( f )}^H}{\bf{U}}_{x,fN}{{( {{\bf{U}}_{x,fN}} )}^H}{{\bf{a}}_f}( f )} ]^{ - 1}}, \end{aligned} \end{equation} respectively. The maximum points of $S_r( r;{\bf{U}}_{x,r N} )$ and $S_f( f;{\bf{U}}_{x,fN} )$, i.e., the minimum points of $f_r( r;{\bf{U}}_{x,r N} )$ and $f_f( f;{\bf{U}}_{x,fN} )$, are the range and Doppler estimation values, denoted by ${\hat r_{s,l}}$ and $\hat f_{s,l}$, respectively. The distance, $d_{l,1}$, and radial velocity, $v_{r,0,1}$ , between BS and the target are given by ${\hat d_{l,1}} = \frac{{{{\hat r}_{s,l}}}}{2}$ and ${\hat v_{r,0,1}} = \frac{{\lambda {{\hat f}_{s,l}}}}{2}$.} The minimum of $f_r( r )$ and $f_f( f )$ can be identified using a two-step Newton descent method. We first find the local maximum points of $S_r( r )$ and $S_f( f )$ with large-granularity grid searching. Then, we use the Newton descent method to find the accurate minimum points of $f_r( r )$ and $f_f( f )$ using the above local maximum points as the initial points. The iterative expression for the Newton descent method is derived as follows. 
By applying the Taylor series decomposition to $f_r( r )$ and $f_f( f )$, and taking their first order derivative over $r$ and $f$, respectively, we obtain \begin{equation}\label{equ:Taylor_fr} \frac{{\partial f_r\left( {r} \right)}}{{\partial r}} \buildrel\textstyle.\over= \frac{{\partial f_r( {{r_0}} )}}{{\partial r}} + \frac{{{\partial ^2}[ {f_r( {{r_0}} )} ]}}{{{\partial ^2}r}}( {r - {r_0}} ), \end{equation} and \begin{equation}\label{equ:Taylor_ff} \frac{{\partial f_f( {f} )}}{{\partial f}} \buildrel\textstyle.\over= \frac{{\partial f_f( {{f_0}} )}}{{\partial f}} + \frac{{{\partial ^2}[ {f_f( {{f_0}} )} ]}}{{{\partial ^2}f}}( {f - {f_0}} ). \end{equation} By setting the above first-order derivative to be 0, the iterative descent expression for range and Doppler estimation can be given by \begin{equation}\label{equ:descent_r} {r^{( k )}} = {r^{( {k - 1} )}} - {\left[ {\frac{{{\partial ^2}f_r( {{r^{( {k - 1} )}}} )}}{{{\partial ^2}r}}} \right]^{ - 1}}\frac{{\partial f_r( {{r^{( {k - 1} )}}} )}}{{\partial r}}, \end{equation} \begin{equation}\label{equ:descent_f} {f^{( k )}} = {f^{( {k - 1} )}} - {\left[ {\frac{{{\partial ^2}f_f( {{f^{( {k - 1} )}}} )}}{{{\partial ^2}f}}} \right]^{ - 1}}\frac{{\partial f_f( {{f^{( {k - 1} )}}} )}}{{\partial f}}, \end{equation} respectively. From \eqref{equ:fr}, the first-order and second-order derivatives of ${f_f\left( f \right)}$ and ${f_r\left( r \right)}$ are expressed as \begin{equation}\label{equ:first_derivative_f_r} \frac{{\partial f_r( r )}}{{\partial r}} {\rm{ = }}2{\mathop{\rm Re}\nolimits} \{ {{{\bf{a}}^{( 1 )}_r}{{( r )}^H}{\bf{U}}_{x,r N}{{( {{\bf{U}}_{x,r N}} )}^H}{{\bf{a}}_r}( r )} \}, \end{equation} \begin{equation}\label{equ:first_derivative_f_f} \frac{{\partial f_f( f )}}{{\partial f}}{\rm{ = }}2{\mathop{\rm Re}\nolimits} \{ {{\bf{a}}_f^{( 1 )}{{( f )}^H}{\bf{U}}_{x,fN}{{( {{\bf{U}}_{x,fN}} )}^H}{{\bf{a}}_f}( f )} \}, \end{equation} \begin{equation}\label{equ:second_derivative_f_r} \frac{{{\partial ^2}f_r( r )}}{{{\partial ^2}r}} \!=\! 2{\mathop{\rm Re}\nolimits} \!\left\{\!\!\! \begin{array}{l} {{\bf{a}}_r^{( 2 )}}{( r )^H}{\bf{U}}_{x,r N}{( {{\bf{U}}_{x,r N}} )^H}{{\bf{a}}_r}( r ) + {{\bf{a}}_r^{( 1 )}}{( r )^H}{\bf{U}}_{x,r N}{( {{\bf{U}}_{x,r N}} )^H}{{\bf{a}}^{( 1 )}_r}( r ) \end{array} \!\!\! \right\}, \end{equation} \begin{equation}\label{equ:second_derivative_f_f} \frac{{{\partial ^2}f_f\left( f \right)}}{{{\partial ^2}f}}\! = \!2{\mathop{\rm Re}\nolimits} \!\left\{ {\bf{a}}_f^{( 2 )}{( f )^H}{\bf{U}}_{x,fN}{( {{\bf{U}}_{x,fN}} )^H}{{\bf{a}}_f}( f ) + {\bf{a}}_f^{( 1 )}{( f )^H}{\bf{U}}_{x,fN}{( {{\bf{U}}_{x,fN}} )^H}{\bf{a}}_f^{( 1 )}( f ) \right\}, \end{equation} where ${\bf{a}}_r^{( 1 )}{{( r )}}$, ${\bf{a}}_f^{( 1 )}{{( f )}}$, ${\bf{a}}_r^{( 2 )}{( r )}$, and ${\bf{a}}_f^{( 2 )}( f )$ are the first-order and second-order derivatives of ${{\bf{a}}_r}( r )$ and ${{\bf{a}}_f}( f )$, respectively. From \eqref{equ:range_steering} and \eqref{equ:doppler_steering}, these expressions are presented as \begin{equation}\label{equ:first_derivative_a} \begin{array}{c} {\bf{a}}_r^{( 1 )}( r ) = { {[ {( { - j2\pi n\frac{{\Delta {f}}}{c}} ){e^{ - j2\pi n\Delta {f}\frac{r}{c}}}} ]} |_{n = 0,1,...,{N_c} - 1}}, {\bf{a}}_f^{( 1 )}( f ) = { {[ {( {j2\pi m{T}} ){e^{j2\pi m{T}f}}} ]} |_{m = 0,1,...,{M_s} - 1}},\\ {\bf{a}}_r^{( 2 )}( r ) = { {[\! {{{( { - j2\pi n\frac{{\Delta f}}{c}} )}^2}{e^{ - j2\pi n\Delta {f}\frac{r}{c}}}} ]}\! |_{n = 0,1,...,{N_c} - 1}}, {\bf{a}}_f^{( 2 )}( f ) = { {[ {{{( {j2\pi m{T}} )}^2}{e^{j2\pi m{T}f}}} ]}\! 
|_{m = 0,1,...,{M_s} - 1}}. \end{array} \end{equation} {\color{blue} \subsection{JCS Communication Signal Processing} \label{sec:JCS_comm} By substituting \eqref{equ:general_JCS_channel} into \eqref{equ:y_DC}, and taking into consideration that ${\bf{w}}_{TX}$ and ${\bf{w}}_{RX}$ in \eqref{equ:y_DC} generate beams pointed at the AoD and AoA of the LoS communication path, respectively, we obtain the communication received signal as \begin{equation}\label{equ:y_Cnm} y_{C,n,m} = \sqrt {P_t} d_{n,m}h_{C,n,m} + n^{X}_{C,n,m}, \end{equation} where $h_{C,n,m} = {b_{C,0}}\varpi _{RX,0}\chi _{TX,0}{e^{j2\pi mT{f_{c,d,0}}}}{e^{ - j2\pi n\Delta {f}{\tau _{c,0}}}}$ is the true communication channel response, and $\varpi _{RX,0} = {\left( {{\bf{w}}_{RX}} \right)^H} {\bf{a}}( {{\bf{p}}^{C}_{RX,0}} )$ and $\chi _{TX,0} = {{\bf{a}}^T}({\bf{p}}_{TX,0}){\bf{w}}_{TX}$ are the BF transmitting and receiving gains. In the CSI estimation, $d_{n,m} = \bar d_{n,m}$, and we denote the received signal as $\bar y_{C,n,m}$. The CSI estimated with the LS method is expressed as \cite{2010MIMO} \begin{equation}\label{equ:h_CD_nm_hat} \hat h_{C,n,m} = \frac{{\bar y_{C,n,m}}}{{\sqrt {P_t} \bar d_{n,m}}} = h_{C,n,m} + w_{C,n,m}, \end{equation} where $w_{C,n,m} = \frac{n^{X}_{C,n,m}}{{\sqrt {P_t} \bar d_{n,m}}}$ is the transformed noise plus interference and follows $\mathcal{CN}(0, \sigma _p^2)$, $\sigma _p^2 = ({P_{IC}} + \sigma _N^2)/P_t$. The estimated communication response matrix at $M_s$ OFDM symbols is denoted by ${\bf{\hat H}}_C$, where ${[ {{\bf{\hat H}}_C} ]_{n,m}} = \hat h_{C,n,m}$. The method for estimating $\sigma _p^2$ based on ${\bf{\hat H}}_C$ is presented in \textbf{Appendix}~\ref{Appendix_sigma_p}. Conventional communication receivers use ${\bf{\hat H}}_C$ to demodulate the communication data. On the other hand, ${f_{c,d,0}}$ and ${\tau _{c,0}}$ can be estimated by JCS as ${\hat f_{c,d,0}} = {\hat f_{s,0}}/2$ and ${{\hat \tau }_{c,0}} = {\hat r_{s,0}}/(2c)$, respectively. Based on the prior information obtained by JCS sensing, we propose a Kalman filter-based JCS CSI enhancement method that leverages these sensing estimates to improve the CSI. For the $m$th OFDM symbol, $\hat h_{C,n,m}$ can be regarded as the observation of $h_{C,n,m}$ as given in~\eqref{equ:h_CD_nm_hat}. Since ${b_{C,0}}$ is unchanged within the same OFDM symbol, the state transfer of $h_{C,n,m}$ across subcarriers is given by \begin{equation}\label{equ:state_trasfer_h_dc} h_{C,n + 1,m} = {e^{ - j2\pi \Delta {f}\left( {{\tau _{c,0}}} \right)}}h_{C,n,m}. \end{equation} The Kalman filter algorithm that utilizes $\hat \Phi = { {[ {\hat h_{C,n,m}} ]} |_{n = 0, \cdots ,N_c - 1}}$ to recursively derive the estimation of $\Phi = { {[ {h_{C,n,m}} ]} |_{n = 0, \cdots ,N_c - 1}}$ is presented in \textbf{Algorithm~\ref{DL_Kalman_CSI}}; we refer to~\cite{2017Kalman} for the details of the Kalman filter. Note that we obtain $h_{C,n,m} = {e^{ - j2\pi n\Delta {f} {{\tau _{c,0}}} }}h_{C,0,m}$ from \eqref{equ:state_trasfer_h_dc}, based on which we can further estimate the initial observation variance as \begin{equation}\label{equ:init_obs_pw} {p_{w,0}} = \frac{1}{{N_c - 1}}\sum\limits_{n = 1}^{N_c - 1} {\left\| {{e^{j2\pi n\Delta {f}\left( {{{\hat \tau }_{c,0}}} \right)}}\hat h_{C,n,m} - \hat h_{C,0,m}} \right\|_2^2}. \end{equation} \begin{algorithm}[!t] \caption{JCS CSI Enhancement method}\label{DL_Kalman_CSI} \KwIn{The observation variance $\sigma _p^2$; The variance of initial estimation ${p_{w,0}}$; The initial observation $\hat h_{C,0,m}$; The transfer factor $A = {e^{ - j2\pi \Delta {f}{{\hat \tau }_{c,0}}}}$; The observation sequence $\hat \Phi$. } \KwOut{Filtered sequence ${ {[ {\bar h_{C,n,m}} ]} |_{n = 0, \cdots ,N_c - 1}}$.} \textbf{Step} 1: $\bar h_{C,0,m} = \hat h_{C,0,m}$. \textbf{Step} 2: \For{$n$ = {\rm 1} to $N_c - 1$} { $\hat h_{n,m}^ - = A\bar h_{C,n-1,m}$\; $p_{w,n}^ - = A{p_{w,n - 1}}{A^*}$\; ${K_k} = {( {p_{w,n}^ - } )^*}{( {p_{w,n}^ - + \sigma _p^2} )^{ - 1}}$\; $\bar h_{C,n,m} = \hat h_{n,m}^ - + ( {\hat h_{C,n,m} - \hat h_{n,m}^ - } ) {K_k}$\; ${p_{w,n}} = ( {1 - {K_k}} )p_{w,n}^ - $\; } \Return ${ {[ {\bar h_{C,n,m}} ]} |_{n = 0, \cdots ,N_c - 1}}$. \end{algorithm} After ${ {[ {\bar h_{C,n,m}} ]} |_{n = 0, \cdots ,N_c - 1}}$ for $m = 0,...,M_s - 1$ are all derived via \textbf{Algorithm~\ref{DL_Kalman_CSI}}, we can form the enhanced CSI matrix ${\bf{\bar H}}_C$, where ${[{\bf{\bar H}}_C]_{n,m}} = \bar h_{C,n,m}$, to demodulate the data symbols. First, $y_{C,n,m}$ given in \eqref{equ:y_DC} is equalized as ${\hat r_{C,n,m}} = {y_{C,n,m}}/({\sqrt {P_t} \bar h_{C,n,m}})$, and then we use the maximum likelihood (ML) method to estimate $d_{n,m}$ as ${\hat d_{n,m}} = \mathop {\arg \min }\limits_{d \in {\Theta _{QAM}}} \| {{{\hat r}_{C,n,m}} - d} \|_2^2$, where ${\Theta _{QAM}}$ is the constellation. } \section{Performance analysis of the JCS Processing}\label{sec:JCS performance} In this section, the analytical MSE results of AoAs, range, Doppler, and location estimation of the proposed MUSIC-based JCS processing are derived using the perturbation method. \subsection{Analysis of 2D Angle Detection MSE} \setcounter{equation}{43} From \eqref{equ:y_s_D}, the noise-plus-interference term ${\bf{N}}_{t}^{X}$ can be treated as the perturbation to the useful signal, which is expressed as \begin{equation}\label{equ:noised_signal} {\bf{Y}}_S = {\bf{Y}}_{S,R} + {\bf{N}}_{t}^{X}, \end{equation} where ${\bf{Y}}_{S,R} = {{\bf{A}}_{S,RX}}{{\bf{S}}}$ is the useful signal. The singular value decomposition of ${\bf{Y}}_{S,R}$ can be expressed as \begin{equation}\label{equ:svd_of_useful_signal} {\bf{Y}}_{S,R} \!=\! {\bf{U\Sigma }}{{\bf{V}}^H} \!=\! \left[ {{{\bf{U}}_s},\!{{\bf{U}}_0}} \right]\!\left[\! { {\begin{array}{*{20}{c}} {{{\bf{\Sigma }}_s}}&{\bf{0}}\\ {\bf{0}}&{\bf{0}} \end{array}} \!} \right]\!\!\left[\! {\begin{array}{*{20}{c}} {{\bf{V}}_s^H}\\ {{\bf{V}}_0^H} \end{array}} \!\right] \!\!=\!\!{{\bf{U}}_s}{{\bf{\Sigma }}_s}{\bf{V}}_s^H, \end{equation} where ${{\bf{U}}_0}$ is the noise subspace basis, and ${{\bf{U}}_0}^H{\bf{Y}}_{S,R} = {\bf{0}}$. Further, since ${\bf{S}}$ generally has full row rank, ${\bf{Y}}_{S,R}$ and ${{\bf{A}}_{S,RX}}$ span the same column space, and hence ${{\bf{U}}_0}^H{{\bf{A}}_{S,RX}} = {\bf{0}}$.
With noise as perturbation, ${\bf{Y}}_S$ can be expressed as \begin{equation}\label{equ:svd_of_YSD} \begin{aligned} {\bf{Y}}_S = \left[ {{{{\bf{\tilde U}}}_s},{{{\bf{\tilde U}}}_0}} \right]\left[ { {\begin{array}{*{20}{c}} {{{{\bf{\tilde \Sigma }}}_s}}&{\bf{0}}\\ {\bf{0}}&{{{{\bf{\tilde \Sigma }}}_{\bf{0}}}} \end{array}}} \right]\left[ {\begin{array}{*{20}{c}} {{\bf{\tilde V}}_s^H}\\ {{\bf{\tilde V}}_0^H} \end{array}} \right], \end{aligned} \end{equation} where ${{\bf{\tilde \Sigma }}_{\bf{0}}} = {\bf{\Delta }}{{\bf{\Sigma }}_{\bf{0}}}$, ${{\bf{\tilde \Sigma }}_s} = {{\bf{\Sigma }}_s} + {\bf{\Delta }}{{\bf{\Sigma }}_s}$, and ${{\bf{\tilde U}}_0} = {{\bf{U}}_0} + \Delta {{\bf{U}}_0}$. Here, ${{\bf{\tilde U}}_0}$ and ${{\bf{\tilde U}}_s}$ are both orthogonal unitary matrices, and ${{\bf{\tilde U}}_0}^H{\bf{Y}}_S{\rm{ = }}{\bf{\Delta }}{{\bf{\Sigma }}_{\bf{0}}}{\bf{\tilde V}}_0^H$. In the high SINR regime, solving the perturbation problem is equivalent to seeking the optimal $\Delta {\bf{U}}_0$ to minimize $\| {{{{\bf{\tilde U}}}_0}^H{\bf{Y}}_S} \|_2$ subject to the constraint ${{\bf{\tilde U}}_0}^H{{\bf{\tilde U}}_0} = {\bf{I}}$~\cite{Lifu1993}. By substituting \eqref{equ:noised_signal} into $\| {{{{\bf{\tilde U}}}_0}^H{\bf{Y}}_S} \|_2$, we have \begin{equation}\label{equ:U_YS} \| {{{{\bf{\tilde U}}}_0}^H{\bf{Y}}_S} \|_2{\rm{ = }}\| {{{( {{{\bf{U}}_0} + {\bf{\Delta }}{{\bf{U}}_{{0}}}} )}^H}( {{\bf{Y}}_{S,R} + {\bf{N}}_t^{X}} )} \|_2. \end{equation} The second-order perturbation ${( {{\bf{\Delta }}{{\bf{U}}_{{0}}}} )^H}{\bf{N}}_t$ and ${{\bf{U}}_0}^H{\bf{Y}}_{S,R}{\rm{ = }}\textbf{0}$ can be discarded. By using the LS method~\cite{Lifu1993}, ${\bf{\Delta }}{{\bf{U}}_0}$ can be presented as \begin{equation}\label{equ:delta_U0} {\bf{\Delta }}{{\bf{U}}_0} = - {{\bf{U}}_s}{{\bf{\Sigma }}_s^{ - 1}}{\bf{V}}_s^H{[ {{\bf{N}}_t} ]^H}{{\bf{U}}_0}. \end{equation} The MUSIC 2D angle estimation result is distorted by the noise perturbation, which is expressed as ${{\bf{\tilde p}}_k} = {{\bf{p}}_k} + {\bf{\Delta }}{{\bf{p}}_k}$, where ${{\bf{p}}_k}$ is the actual value of AoA. Apply Taylor series decomposition to $f_a( {{{{\bf{\tilde p}}}_k};{{{\bf{\tilde U}}}_0}} )$ in \eqref{equ:spectrum function}, and take the first three terms. Applying first-order derivative to the truncated Taylor series, we have \begin{equation}\label{equ:f_p} \frac{{\partial f_a( {{{\bf{\tilde p}}_k};{{{\bf{\tilde U}}}_0}} )}}{{\partial {\bf{p}}}} \buildrel\textstyle.\over= \frac{{\partial f_a( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} )}}{{\partial {\bf{p}}}} + \frac{{{\partial ^2}f_a( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} )}}{{{\partial ^2}{\bf{p}}}}\Delta {{\bf{p}}_k}. \end{equation} By setting \eqref{equ:f_p} to be 0, we can obtain \begin{equation}\label{equ:delta_pk} \Delta {{\bf{p}}_k} = - {{\bf{H}}_{\bf{p}}}^{ - 1}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} ){{\bf{G}}_\textbf{p}}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} ), \end{equation} where ${{\bf{H}}_{\bf{p}}}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} ) = \frac{{{\partial ^2}f_a( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} )}}{{{\partial ^2}{\bf{p}}}} \in \mathbb{C}^{2 \times 2}$ is the Hessian matrix of $f_a$, and ${{\bf{G}}_\textbf{p}}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} ) = \frac{{\partial f_a( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} )}}{{\partial {\bf{p}}}} \in \mathbb{C}^{2 \times 1}$ is the gradient vector of $f_a$. 
With the perturbation expression, we can obtain \begin{equation}\label{equ:Gp_expression} {{\bf{G}}_\textbf{p}}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} ) = {{\bf{G}}_\textbf{p}}( {{{\bf{p}}_k};{{\bf{U}}_0}} ) + \Delta {{\bf{G}}_\textbf{p}}, \end{equation} \begin{equation}\label{equ:Hp_expression} {{\bf{H}}_{\bf{p}}}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} ) = {{\bf{H}}_{\bf{p}}}( {{{\bf{p}}_k};{{\bf{U}}_0}} ) + \Delta {{\bf{H}}_{\bf{p}}}. \end{equation} From \eqref{equ:spectrum function}, we have \begin{equation}\label{equ:G_p_gradient} {{\bf{G}}_\textbf{p}}( {{{\bf{p}}_k};{\bf{U}}} ){\rm{ = }}\frac{{\partial f_a( {{\bf{p}};{\bf{U}}} )}}{{\partial {\bf{p}}}} = 2{\mathop{\rm Re}\nolimits} \{ {{\bf{a}}_{\bf{p}}^{( 1 )}{{( {\bf{p}} )}^H}{\bf{U}}{{\bf{U}}^H}{\bf{a}}( {\bf{p}} )} \}, \end{equation} \begin{equation}\label{equ:Hp} \text{vec}[ {{{\bf{H}}_{\bf{p}}}( {{\bf{p}};{\bf{U}}} )} ] = 2{\mathop{\rm Re}\nolimits} \left\{ \begin{array}{l} \text{vec}[ {{\bf{a}}_{\bf{p}}^{( 1 )}{{( {\bf{p}} )}^H}{\bf{U}}{{\bf{U}}^H}{\bf{a}}_{\bf{p}}^{( 1 )}( {\bf{p}} )} ] + {\bf{a}}_{\bf{p}}^{( 2 )}{( {\bf{p}} )^H}{\bf{U}}{{\bf{U}}^H}{\bf{a}}( {\bf{p}} ) \end{array} \right\}, \end{equation} where $\text{vec}( \cdot )$ is to vectorize a matrix, ${\bf{a}}_{\bf{p}}^{( 1 )}( {\bf{p}} )$ and ${\bf{a}}_{\bf{p}}^{( 2 )}( {\bf{p}} )$ are the first-order and second-order derivatives of ${\bf{a}}( {\bf{p}} )$ over ${\bf{p}}$, respectively, which can be derived from~\eqref{equ:steeringVec}. Since ${{\bf{U}}_0}^H{{\bf{A}}_{S,RX}} = 0$, we can obtain \begin{equation}\label{equ:G_p_0} {{\bf{G}}_{\bf{p}}}( {{{\bf{p}}_k};{{\bf{U}}_0}} ) = {\bf{0}}, \end{equation} \begin{equation}\label{equ:H_p_0} {{\bf{H}}_{\bf{p}}}( {{{\bf{p}}_k};{{\bf{U}}_0}} ) \!=\! 2{\mathop{\rm Re}\nolimits} \{ {{\bf{a}}_{\bf{p}}^{( 1 )}{{( {{{\bf{p}}_k}} )}^H}{{\bf{U}}_0}{{\bf{U}}_0}^H{\bf{a}}_{\bf{p}}^{( 1 )}( {{{\bf{p}}_k}} )} \}. \end{equation} We use ${{\bf{H}}_{{\bf{p0}}}}$ to represent ${{\bf{H}}_{\bf{p}}}( {{{\bf{p}}_k};{{\bf{U}}_0}} )$. By substituting \eqref{equ:G_p_0} and \eqref{equ:H_p_0} into \eqref{equ:Gp_expression} and \eqref{equ:Hp_expression}, \eqref{equ:delta_pk} can be rewritten as \begin{equation}\label{equ:delta_pk_last} \Delta {{\bf{p}}_k} = - {( {{{\bf{H}}_{{\bf{p0}}}} + \Delta {{\bf{H}}_{\bf{p}}}} )^{ - 1}}( {\Delta {{\bf{G}}_{{\textbf{p}}}}} ) = - \left( \begin{array}{l} {\bf{I}} - {{\bf{H}}_{{\bf{p0}}}}^{ - 1}\Delta {{\bf{H}}_{\bf{p}}} + \\ {( {{{\bf{H}}_{{\bf{p0}}}}^{ - 1}\Delta {{\bf{H}}_{\bf{p}}}} )^2} + ... \end{array} \right){{\bf{H}}_{{\bf{p0}}}}^{ - 1}\Delta {{\bf{G}}_{{\textbf{p}}}}. \end{equation} Discarding the perturbation terms that are higher than second-order in \eqref{equ:delta_pk_last}, we can rewrite \eqref{equ:delta_pk_last} as \begin{equation}\label{equ:delta_pk_lastlast} \Delta {{\bf{p}}_k} = - {{\bf{H}}_{{\bf{p0}}}}^{ - 1}\Delta {{\bf{G}}_{{\textbf{p}}}}, \end{equation} where the perturbation expression of $\Delta {{\bf{G}}_\textbf{p}}$ is derived in \textbf{Appendix} \ref{Expressions:G} as \begin{equation}\label{equ:delta_G_p_last} \Delta {{\bf{G}}_{\bf{p}}} \!\!=\!\! 2{\mathop{\rm Re}\nolimits} \{\! { - {\bf{a}}_{\bf{p}}^{( 1 )}{{( {{{\bf{p}}_k}} )}^H}{{\bf{U}}_0}{{\bf{U}}_0}^H{{[ {{\bf{N}}_t} ]}^H}{{\bf{V}}_s}{{\bf{\Sigma }}_s^{ - 1}}{{\bf{U}}_s}^H{\bf{a}}( {{{\bf{p}}_k}} )} \!\}, \end{equation} By substituting \eqref{equ:delta_G_p_last} and \eqref{equ:H_p_0} into \eqref{equ:delta_pk_lastlast}, we can obtain $\Delta {{\bf{p}}_k}$ as shown in \eqref{equ:delta_pk_lastlastlast}. 
\begin{equation}\label{equ:delta_pk_lastlastlast} \Delta {{\bf{p}}_k} = {[ {{\mathop{\rm Re}\nolimits} \{ {{\bf{a}}_{\bf{p}}^{( 1 )}{{( {{{\bf{p}}_k}} )}^H}{{\bf{U}}_0}{{\bf{U}}_0}^H{\bf{a}}_{\bf{p}}^{( 1 )}( {{{\bf{p}}_k}} )} \}} ]^{ - 1}}{\mathop{\rm Re}\nolimits} \{ {{\bf{a}}_{\bf{p}}^{( 1 )}{{( {{{\bf{p}}_k}} )}^H}{{\bf{U}}_0}{{\bf{U}}_0}^H[ {{\bf{N}}_t} ]{{\bf{V}}_s}{{\bf{\Sigma }}_s}^{ - 1}{{\bf{U}}_s}^H{\bf{a}}( {{{\bf{p}}_k}} )} \}. \end{equation} The MSE of angle estimation can be expressed as \setcounter{equation}{60} \begin{equation}\label{equ:MSE_delta_G_p_last} MSE( {{{\bf{p}}_k}} ) = E\{ {diag( {\Delta {{\bf{p}}_k}{{[ {\Delta {{\bf{p}}_k}} ]}^H}} )} \}. \end{equation} \subsection{Analysis of Range and Doppler Detection MSE} \subsubsection{Analysis of Range Detection MSE} The noisy signal for the range estimation, as shown in \eqref{equ:Y_SSU_As}, is rewritten as \begin{equation}\label{equ:Y_SS_U} {\bf{\bar H}}_S = {\bf{\bar H}}_{S,p} + {\bf{W}}_{tr}, \end{equation} where ${\bf{\bar H}}_{S,p} = {{\bf{A}}_{\bf{r}}}{\bf{S}}_{r,s}$ is the useful signal. The singular value decomposition of ${\bf{\bar H}}_{S,p}$ is \begin{equation}\label{equ:svd_of_Y_SS_p} {\bf{\bar H}}_{S,p} \!\!=\!\! \left[ {{{\bf{U}}_{r,s}},{{\bf{U}}_{r,0}}} \right]\left(\!\! {\begin{array}{*{20}{c}} {{{\bf{\Sigma }}_{r,s}}}&{\bf{0}}\\ {\bf{0}}&{\bf{0}} \end{array}} \!\right)\left[\! {\begin{array}{*{20}{c}} {{\bf{V}}_{r,s}^H}\\ {{\bf{V}}_{r,0}^H} \end{array}} \!\right] \!\! =\!\! {{\bf{U}}_{r,s}}{{\bf{\Sigma }}_{r,s}}{\bf{V}}_{r,s}^H, \end{equation} where ${{\bf{U}}_{r,s}}$ and ${{\bf{U}}_{r,0}}$ are orthogonal unitary matrices, and ${{\bf{U}}_{r,0}}^H{\bf{\bar H}}_{S,p} = {\bf{0}}$. We further obtain ${{\bf{U}}_{r,0}}^H{{\bf{A}}_{\bf{r}}} = {\bf{0}}$. By treating ${\bf{W}}_{tr}$ as a perturbation term, ${\bf{\bar H}}_S$ can be decomposed as \begin{equation}\label{equ:svd_of_Y_SS_U} {\bf{\bar H}}_S = \left[ {{{{\bf{\tilde U}}}_{r,s}},{{{\bf{\tilde U}}}_{r,0}}} \right]\left( {\begin{array}{*{20}{c}} {{{{\bf{\tilde \Sigma }}}_{r,s}}}&{\bf{0}}\\ {\bf{0}}&{{\bf{\Delta }}{{\bf{\Sigma }}_{r,{\bf{0}}}}} \end{array}} \right)\left[ {\begin{array}{*{20}{c}} {{\bf{\tilde V}}_{r,s}^H}\\ {{\bf{\tilde V}}_{r,0}^H} \end{array}} \right], \end{equation} where ${{\bf{\tilde \Sigma }}_{r,{\bf{0}}}} = {\bf{\Delta }}{{\bf{\Sigma }}_{r,{\bf{0}}}}$, and ${{\bf{\tilde U}}_{r,0}} = {{\bf{U}}_{r,0}} + \Delta {{\bf{U}}_{r,0}}$. Because ${{\bf{\tilde U}}_{r,s}}$ and ${{\bf{\tilde U}}_{r,0}}$ are orthogonal unitary matrices, we have ${{\bf{\tilde U}}_{r,0}}^H{\bf{\bar H}}_S{\rm{ = }}{\bf{\Delta }}{{\bf{\Sigma }}_{r,{\bf{0}}}}{\bf{\tilde V}}_{r,0}^H$. In the high SINR regime, solving the perturbation problem is equivalent to seeking the optimal $\Delta {{\bf{U}}_{r,0}}$ to minimize $\| {{{{\bf{\tilde U}}}_{r,0}}^H{\bf{\bar H}}_S} \|_2$ with the constraint ${{\bf{\tilde U}}_{r,0}}^H{{\bf{\tilde U}}_{r,0}} = {\bf{I}}$. By substituting \eqref{equ:Y_SS_U} and ${{\bf{\tilde U}}_{r,0}} = {{\bf{U}}_{r,0}} + \Delta {{\bf{U}}_{r,0}}$ into the problem, then discarding the term ${{\bf{U}}_{r,0}}^H{\bf{\bar H}}_{S,p} = {\bf{0}}$ and the second-order perturbation $\Delta {{\bf{U}}_{r,0}}^H{\bf{W}}_{tr}$, we can obtain \begin{equation}\label{equ:Ur_YSSU} \| {{{{\bf{\tilde U}}}_{r,0}}^H{\bf{\bar H}}_S} \|_2 \buildrel\textstyle.\over= \Delta {{\bf{U}}_{r,0}}^H{\bf{\bar H}}_{S,p} + {{\bf{U}}_{r,0}}^H{\bf{W}}_{tr}. 
\end{equation} Using the LS method and substituting \eqref{equ:svd_of_Y_SS_p} into \eqref{equ:Ur_YSSU}, we can obtain $\Delta {{\bf{U}}_{r,0}}$ as \begin{equation}\label{equ:Ur0} \Delta {{\bf{U}}_{r,0}} = - {{\bf{U}}_{r,s}}{{\bf{\Sigma }}_{r,s}}^{ - 1}{\bf{V}}_{r,s}^H{( {{\bf{W}}_{tr}} )^H}{{\bf{U}}_{r,0}}. \end{equation} Next, we derive the expression for the perturbation of range estimation, i.e., $\Delta r = r - {r_k}$, where $r_k$ is the actual value of range, and $r$ is the estimation value. Apply Taylor series decomposition to \eqref{equ:fr} at $r_k$, and keep the first three terms. Applying the first-order derivative to the truncated series with respect to $r$, we obtain the range perturbation as \begin{equation}\label{equ:taylor_fr} \frac{{\partial f_r( {r;{{{\bf{\tilde U}}}_{r,0}}} )}}{{\partial r}} = \frac{{\partial f_r( {{r_k};{{{\bf{\tilde U}}}_{r,0}}} )}}{{\partial r}} + \frac{{{\partial ^2}f_r( {{r_k};{{{\bf{\tilde U}}}_{r,0}}} )}}{{{\partial ^2}r}}\Delta r. \end{equation} Because the Newton descent method identifies the optimal point with $\frac{{\partial f_r( {r;{{{\bf{\tilde U}}}_{r,0}}} )}}{{\partial r}}{\rm{ = }}0$, the range perturbation can be expressed as \begin{equation}\label{equ:range_perturbation} \Delta r = - {[ {H_r( {r_k;{{{\bf{\tilde U}}}_{r,0}}} )} ]^{ - 1}}G_r( {r_k;{{{\bf{\tilde U}}}_{r,0}}} ), \end{equation} where \begin{equation}\label{equ:Gr_expression} G_r( {r;{\bf{U}}} ) = \frac{{\partial f_r( {r_k;{\bf{U}}} )}}{{\partial r}} = 2{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_r^{( 1 )}{{( r )}^H}{\bf{U}}{{{\bf{U}}}^H}{{\bf{a}}_r}( r )} ], \end{equation} and \begin{equation}\label{equ:Hr_expression} H_r( {r;{\bf{U}}} ) = \frac{{{\partial ^2}f_r( {r;{\bf{U}}} )}}{{{\partial ^2}r}}\\ = 2{\mathop{\rm Re}\nolimits} \left[ \begin{array}{l} {\bf{a}}_r^{\left( 2 \right)}{\left( r \right)^H}{\bf{U}}{ {\bf{U}}^H}{{\bf{a}}_r}\left( r \right) + {\bf{a}}_r^{\left( 1 \right)}{\left( r \right)^H}{\bf{U}}{{\bf{U}}^H}{\bf{a}}_r^{\left( 1 \right)}\left( r \right) \end{array} \right]. \end{equation} Using the perturbation form to express $G_r( {r_k;{{{\bf{\tilde U}}}_{r,0}}} )$ and ${H_r( {r_k;{{{\bf{\tilde U}}}_{r,0}}} )}$, we have \begin{equation}\label{equ:Gr_U0bar} G_r( {r_k;{{{\bf{\tilde U}}}_{r,0}}} ) = G_r( {r_k;{{\bf{U}}_{r,0}}} ) + \Delta G_r, \end{equation} and \begin{equation}\label{equ:Hr_U0bar} H_r( {r_k;{{{\bf{\tilde U}}}_{r,0}}} ) = H_r( {r_k;{{\bf{U}}_{r,0}}} ) + \Delta H_r. \end{equation} Because ${{\bf{U}}_{r,0}}^H{{\bf{A}}_{\bf{r}}} = 0$, we have \begin{equation}\label{equ:Gr_rk} G_r( {r_k;{{\bf{U}}_{r,0}}} ) = 0, \end{equation} and \begin{equation}\label{equ:Hr_rk} H_r( {{r_k};{{\bf{U}}_{r,0}}} ) \!\!=\!\! 2{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_r^{( 1 )}{{( {{r_k}} )}^H}{{\bf{U}}_{r,0}}{{( {{{\bf{U}}_{r,0}}} )}^H}{\bf{a}}_r^{( 1 )}( {{r_k}} )} ]\!\! =\!\! H_{r0}, \end{equation} By substituting \eqref{equ:Gr_rk} and \eqref{equ:Hr_rk} into \eqref{equ:Gr_U0bar} and \eqref{equ:Hr_U0bar}, respectively, \eqref{equ:range_perturbation} becomes \begin{equation}\label{equ:delta_r} \Delta r_k = - {\{ {H_{r0}[ {1 + {{( {H_{r0}} )}^{ - 1}}\Delta H_r} ]} \}^{ - 1}}\Delta G_r\\ \buildrel\textstyle.\over= - {( {H_{r0}} )^{ - 1}}\Delta G_r, \end{equation} where the last equation is obtained by discarding the second-order perturbation terms. 
The perturbation expression of $\Delta G_r$ is derived in \textbf{Appendix} \ref{Expressions:G}, given by \begin{equation}\label{equ:delta_Gr} \Delta G_r = 2{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_r^{( 1 )}{{( r )}^H}( {{{\bf{U}}_{r,0}}\Delta {{\bf{U}}_{r,0}}^H} ){{\bf{a}}_r}( r )} ], \end{equation} By substituting \eqref{equ:Hr_rk}, \eqref{equ:delta_Gr}, and \eqref{equ:Ur0} into \eqref{equ:delta_r}, we obtain \begin{equation}\label{equ:delta_r_last} \Delta r_k \!\!=\!\! \frac{{{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_r^{\!( 1 )}{{\!( {{r_k}} )}^H}\!{{\bf{U}}_{r,0}}{{\bf{U}}_{r,0}}^H \!{\bf{W}}_{tr}{{\bf{V}}_{r,s}}{{\bf{\Sigma }}_{r,s}}^{\!\! - 1}\!{{\bf{U}}_{r,s}}^H{{\bf{a}}_r}( {{r_k}} )} ]}}{{{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_r^{( 1 )}{{( {{r_k}} )}^H}{{\bf{U}}_{r,0}}{{( {{{\bf{U}}_{r,0}}} )}^H}{\bf{a}}_r^{( 1 )}( {{r_k}} )} ]}}, \end{equation} where ${{\bf{a}}_r}( {{r_k}} )$ is given in \eqref{equ:range_steering}, and ${\bf{a}}_r^{( 1 )}{( {{r_k}} )}$ is given in \eqref{equ:first_derivative_a}. The MSE of the MUSIC-based JCS range estimation can be expressed as \begin{equation}\label{equ:MSE_r} MSE( r ) = E[ {\Delta {r_k^2}} ]. \end{equation} \subsubsection{Analysis of Doppler Detection MSE} Similar to the range estimation, the perturbation of Doppler estimation can be derived as \begin{equation}\label{equ:delta_f} \Delta {f_d}\!\! =\!\! \frac{{{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_f^{( 1 )}{{\! ( {{f_k}} )}^H}\!{{\bf{U}}_{f,0}}\!{{\bf{U}}_{f,0}}^H \!{\bf{W}}_{tf}{{\bf{V}}_{f,s}}{{\bf{\Sigma }}_{f,s}}^{\!\! - 1}\!{{\bf{U}}_{f,s}}^{\!\! H} \!{{\bf{a}}_f} \!( {{f_k}} )} ]}}{{{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_f^{( 1 )}{{( {{f_k}} )}^H}{{\bf{U}}_{f,0}}{{( {{{\bf{U}}_{f,0}}} )}^H}{\bf{a}}_f^{( 1 )}( {{f_k}} )} ]}}, \end{equation} where $f_k$ is the real Doppler value, ${{{\bf{a}}_f}( f_k )}$ is given in \eqref{equ:doppler_steering}, and ${\bf{a}}_f^{( 1 )}( f_k )$ is given in \eqref{equ:first_derivative_a}. Furthermore, the perturbation of the radial velocity estimation is \begin{equation}\label{equ:delta_v} \Delta v = \lambda \Delta f_d. \end{equation} \subsection{Analysis of Location MSE} The location of the target can be obtained after the AoA, ${{\bf{p}}_k} = \left( {{\varphi _k},{\theta _k}} \right)$, and the range, ${r_k}$, are detected. The expression for the actual location is given by \begin{equation}\label{equ:ploc} {{\bf{p}}_{loc}}\!( {{r_k},{\varphi _k},{\theta _k}} )\! = \! ( {{x_k},{y_k},{z_k}} ) \\ \!= \! ( {{r_k}\sin {\theta _k}\cos {\varphi _k},r_k\sin {\theta _k}\sin {\varphi _k},r_k\cos {\theta _k}} ). 
\end{equation} With the AoA and range estimation perturbation, $\Delta {{\bf{p}}_k} = ( {\Delta {\varphi _k},\Delta {\theta _k}} )$ and $\Delta {r_k}$, the location of the target is \begin{equation}\label{equ:ploc_perturbation} {{\bf{p}}_{loc}}( {{r_k} + \Delta {r_k},{\varphi _k} + \Delta {\varphi _k},{\theta _k} + \Delta {\theta _k}} )\\ = \left[ \begin{array}{l} ( {{r_k} + \Delta {r_k}} )\sin ( {{\theta _k} + \Delta {\theta _k}} )\cos ( {{\varphi _k} + \Delta {\varphi _k}} ),\\ ( {{r_k} + \Delta {r_k}} )\sin ( {{\theta _k} + \Delta {\theta _k}} )\sin ( {{\varphi _k} + \Delta {\varphi _k}} ),\\ ( {{r_k} + \Delta {r_k}} )\cos ( {{\theta _k} + \Delta {\theta _k}} ) \end{array} \right], \end{equation} Comparing \eqref{equ:ploc} with \eqref{equ:ploc_perturbation} and discarding the second-order perturbation, we can represent the perturbation of $x$, $y$, and $z$ axes coordinates as \begin{equation}\label{equ:delta_x} \Delta x \buildrel\textstyle.\over= \Delta {r_k}\sin {\theta _k}\cos {\varphi _k} + {r_k}\left(\! \begin{array}{l} \Delta {\theta _k}\cos {\theta _k}\cos {\varphi _k} - \Delta {\varphi _k}\sin {\theta _k}\sin {\varphi _k} \end{array} \!\right), \end{equation} \begin{equation}\label{equ:delta_y} \Delta y \buildrel\textstyle.\over= \Delta {r_k}\sin {\theta _k}\sin {\varphi _k} + {r_k}\left(\! \begin{array}{l} \Delta {\varphi _k}\sin {\theta _k}\cos {\varphi _k} + \Delta {\theta _k}\cos {\theta _k}\sin {\varphi _k} \end{array} \!\right), \end{equation} and \begin{equation}\label{equ:delta_z} \Delta z \buildrel\textstyle.\over= \Delta {r_k}\cos {\theta _k} - {r_k}\Delta {\theta _k}\sin {\theta _k}. \end{equation} Finally, the location error can be expressed as \begin{equation}\label{equ:ploc_delta} E\{ {\| {\Delta {{\bf{p}}_{loc}}} \|_2^2} \} = E\{ {{{( {\Delta x} )}^2} + {{( {\Delta y} )}^2} + {{( {\Delta z} )}^2}} \}. \end{equation} {\color{blue} \subsection{Cramer–Rao bound of JCS Sensing} We further derive the Cramer–Rao bound (CRB) to characterize the minimum lower bound for sensing. Based on the signal model presented in Section~\ref{sec:DL Echo Sensing Received Signal}, the echo signal of the $l$th target received by the $(p,q)$th antenna element at the $n$th subcarrier of the $m$th OFDM symbol is \begin{equation}\label{equ:CRB_RX_signal} y_{S,n,m,l}^{p,q} \!\!=\!\! \sqrt {P_t} d_{n,m}{\alpha _{S,n,m,l}}\chi _{TX,l}{a_{p,q}}( {{{\bf{p}}_l}} ) + n_{S,n,m}^{p,q} + x_{S,n,m}^{p,q}, \end{equation} where ${\alpha _{S,n,m,l}} = {b_{S,l}}{e^{j2\pi mT_s2{v_{s,l}}/\lambda }}{e^{ - j2\pi n\Delta {f}2r{_{s,l}}/c}}$ is given as~\eqref{equ:alpha_S}, $v_{s,l} = v_{r,l,1}$ and $r_{s,l} = d_{l,1}$ are the radial relative velocity and distance between BS and the $l$th target, respectively; ${a_{p,q}}\left( {{{\bf{p}}_l}} \right)$ is given in~\eqref{equ:phase_difference}, ${{\bf{p}}_l} = {\bf{p}}_{TX,l} = ({\varphi _l},{\theta _l})$ is the 2D AoA of the $l$th target; $\chi _{TX,l}$ is the transmitting BF gain; $n_{S,n,m}^{p,q}$ and $x_{S,n,m}^{p,q}$ are the noise and interference at the $(p,q)$th antenna element. Let $n_{S,n,m}^{X,p,q} \triangleq n_{S,n,m}^{p,q} + x_{S,n,m}^{p,q}$, then $n_{S,n,m}^{X,p,q}$ is independent and identically distributed, following $\mathcal{CN}(0, \sigma _W^2)$, where $\sigma _W^2 = {P_{IS}} + \sigma _N^2$. Let ${\bf{\psi }} = \left( {{r_{s,l}},{v_{s,l}},{\varphi _l},{\theta _l}} \right)$ be the set of estimation parameters. Then, the distribution of $y_{S,n,m,l}^{p,q}$ is \begin{equation}\label{equ:possibility_psi} p( {y;{\bf{\psi }}} ) \!\! = \!\! 
\frac{1}{{\pi \sigma _W^2}}{e^{ - \| {y - \sqrt {P_t} d_{n,m}{\alpha _{S,n,m,l}}\chi _{TX,l}{a_{p,q}}( {{{\bf{p}}_l}} )} \|_2^2/\sigma _W^2}}, \end{equation} Because there are $N_cM_s{P_t}{Q_t}$ independent symbols used for estimation, the joint distribution of these symbols is \begin{equation}\label{equ:possibility_y_vec} p\left( {{\bf{y}};{\bf{\psi }}} \right) = \rho {e^{ - \sum\limits_{(n,m,p,q)}^{N_cM_s{P_t}{Q_t}} {\| {y_{n,m}^{p,q} - {s_{n,m}}{\alpha _{S,n,m,l}}{a_{p,q}}( {{{\bf{p}}_l}} )} \|_2^2/\sigma _W^2} }}, \end{equation} where $\rho = {( {\frac{1}{{\pi \sigma _W^2}}} )^{N_cM_s{P_t}{Q_t}}}$, and ${s_{n,m}} = \sqrt {P_t} d_{n,m}\chi _{TX,l}$. Note that $d_{n,m}$ is independent and identically distributed with $E( {\| {d_{n,m}} \|_2^2} ) = 1$. According to \cite{Levy2008Principles, CRB2010}, the CRB of ${\psi _i}$, ${\psi _i} \in \left( {{r_{s,l}},{v_{s,l}},{\varphi _l},{\theta _l}} \right)$, is given by \begin{equation}\label{equ:CRB_def} {C_{{\psi _i}}} = - {\left\{ {E\left[ {\frac{{{\partial ^2}\ln p\left( {{\bf{y}};{\bf{\psi }}} \right)}}{{{\partial ^2}{\psi _i}}}} \right]} \right\}^{ - 1}}. \end{equation} With \eqref{equ:possibility_y_vec} and \eqref{equ:CRB_def}, the sensing CRBs are derived as \begin{equation} \label{equ:CRB} \begin{array}{c} {C_{{r_{s,l}}}} = \frac{{{c^2}}}{{32{\pi ^2}{\gamma _S}M_s{P_t}{Q_t}\sum\limits_{n = 0}^{N_c - 1} {{n^2}{{( {\Delta {f}} )}^2}} }}, {C_{{v_{s,l}}}} = \frac{{{\lambda ^2}}}{{32{\pi ^2}{\gamma _S}N_c{P_t}{Q_t}\sum\limits_{m = 0}^{M_s - 1} {{m^2}{{( {{T}} )}^2}} }},\\ {C_{{\varphi _l}}} = \frac{{{\lambda ^2}}}{{8{\pi ^2}d_a^2{\gamma _S}N_cM_s\sum\limits_{p,q}^{} {{{\left( {q\cos {\varphi _l}\sin {\theta _l} - p\sin {\varphi _l}\sin {\theta _l}} \right)}^2}} }}, {C_{{\theta _l}}} = \frac{{{\lambda ^2}}}{{8{\pi ^2}d_a^2{\gamma _S}N_cM_s\sum\limits_{p,q}^{} {{{\left( {p\cos {\varphi _l}\cos {\theta _l} + q\sin {\varphi _l}\cos {\theta _l}} \right)}^2}} }}, \end{array} \end{equation} where $\gamma_S$ is the S-SINR as given in \eqref{equ:gamma_s}. \begin{figure*}[!t] \centering \subfigure[Range detection spectrum.]{\includegraphics[width=0.32\textheight] {Range_spectrum.pdf} \label{figs:Spectrum_range} } \subfigure[Velocity detection spectrum.]{\includegraphics[width=0.32\textheight] {Velocity_spectrum.pdf} \label{figs:Spectrum_velocity} } \caption{Detection spectra of \textit{schemes} 1, 2 and 3.} \label{fig:Detection_Spectrum} \end{figure*} } {\color{blue} \subsection{Complexity Analysis and Comparison}\label{sec:cases_complexity} In this section, we analyze and compare the complexity of the proposed MUSIC-based JCS method with the conventional FFT-based methods. We consider three schemes: \textit{Scheme} 1 is the proposed MUSIC-based method; \textit{Scheme} 2 is the original FFT-based method in~\cite{Sturm2011Waveform}; and \textit{Scheme} 3 is the Code-division OFDM (CD-OFDM) FFT-based method in~\cite{Chen2021CDOFDM}. \textit{Scheme} 1: The main complexity is associated with the eigenvalue decomposition of ${{\bf{R}}_{X,r}}$ and ${{\bf{R}}_{X,f}}$ and the derivation of detection spectra. Therefore, for range and Doppler estimation, the computation complexities are ${\cal{O}}[ {{{( {N_c} )}^3}} ]$ and ${\cal{O}}[ {{{( {M_s} )}^3}} ]$, respectively. Because the MUSIC-based JCS method can work in parallel, the total complexity is ${\cal{O}}\left( {\max \left\{ {{M_s}^3,{N_c}^3} \right\}} \right)$. \textit{Scheme} 2: The complexity is mainly from two serial FFT operations for the $N_c \times M_s$ echo sensing channel matrix. 
Therefore, the complexity of \textit{Scheme} 2 is ${\cal{O}}\left( {M_sN_c\log \left( {M_sN_c} \right)} \right)$. \textit{Scheme} 3: The complexity is mainly from code-division multiplex demodulation and two serial FFT operations for the $N_c \times M_s$ echo sensing channel matrix. Therefore, the complexity of \textit{Scheme} 3 is ${\cal{O}}[ {{{( {N_c} )}^2}M_s + M_sN_c\log ( {M_sN_c} )} ]$. It can be seen that \textit{Scheme} 2 has the lowest complexity. The complexity of \textit{Scheme} 3 increases due to the additional code-division multiplex processing. The proposed MUSIC-based JCS method has the highest complexity, which is the price paid for super-resolution detection. } \section{Numerical and Simulation Results}\label{sec:JCS result} In this section, we present extensive simulation results for the proposed MUSIC-based JCS processing method, in comparison with \textit{Schemes} 2 and 3 as described in~Section~\ref{sec:cases_complexity}, and verify them against the analytical performance bounds derived in Section \ref{sec:JCS performance}. We also compare the BER results of communication demodulation for the proposed JCS CSI enhancement method with those in conventional communication systems. \subsection{System Setup} The system setup largely follows the specification in the 3GPP Vehicles-to-Everything (V2X) applications~\cite{3GPPV2X}. The carrier frequency is 63 GHz, the antenna interval, $d_a$, is half of the wavelength, and the antenna array sizes of the BS and the MUE are $P_t \times Q_t = 8 \times 8$ and $P_r \times Q_r = 1\times 1$, respectively. {\color{blue} The subcarrier interval is $\Delta {f} =$ 480 kHz, the subcarrier number is set to $N_c =$ 256, and the number of consecutive OFDM symbols is $M_s = $ 64. Therefore, the bandwidth for JCAS is ${{B = }}{N_c}\Delta f = $ 122.88 MHz.} The range and radial velocity resolutions are $\Delta r = \frac{c}{{2B}} = 1.22$ m and $\Delta v = \frac{{\lambda \Delta {f}}}{{2M_s}} = 17.8571$ m/s, respectively~\cite{Sturm2011Waveform}. The variance of the Gaussian noise is $\sigma_N^2 = kFTB = 4.9177\times10^{-12} $ W, where $k = 1.38 \times 10^{-23}$ J/K is the Boltzmann constant, $F = $ 10 is the noise factor, and $T = 290$ K is the standard temperature. The INRs for communication and sensing signals are $\gamma _C^{IN} = \gamma _S^{IN} = $ 3 dB. {\color{blue} \begin{figure*}[!t] \centering \subfigure[AoA detection MSE.]{\includegraphics[width=0.31\textheight]{AoA_MSE.pdf} \label{figs:phe_theta_Ms64_Nc128} } \subfigure[Range detection MSE.]{\includegraphics[width=0.31\textheight] {Range_MSE.pdf} \label{figs:Range_Ms64_Nc128} } \\ \subfigure[Velocity detection MSE.]{\includegraphics[width=0.31\textheight] {Velocity_MSE.pdf} \label{figs:Velocity_Ms64_Nc128} } \subfigure[Location detection MSE.]{\includegraphics[width=0.31\textheight] {location_MSE.pdf} \label{figs:location_Ms64_Nc128} }\\ \caption{MSEs for sensing parameter estimation. 
In Fig.~\ref{figs:phe_theta_Ms64_Nc128}, the solid curves are for the MSEs and CRBs of azimuth and elevation angles obtained in simulation, and the dashed curves are for the numerical ones via the theoretical perturbation results; in Figs.~\ref{figs:Range_Ms64_Nc128}, \ref{figs:Velocity_Ms64_Nc128} and \ref{figs:location_Ms64_Nc128}, the solid curves are for the MSEs and CRBs computed via simulation results, and the dashed curves are for theoretical perturbation results.} \label{fig:Detection_Results_Ms64_Nc128} \end{figure*} } Moreover, the location of the BS transmitting array is ${{\bf{p}}_{loc,u}} = (\rm{50, 4.75, 7} )$ m. {\color{blue} MUE moves on the $x$-axis and its antenna's location is ${{\bf{p}}_{loc,u}} = ({x, 0, 2} )$ m, where $x$ follows uniform distribution from 50 m to 155 m. The scatterer is generated uniformly in a sphere centered at BS with a radius of 100 m. BS is static, while the velocity of MUE is $(-11.11, 0, 0)$ m/s. The reflection factors of the targets are $\sigma _{C\beta ,l}^2 = \sigma _{S\beta ,l}^2 = $ 1. The BS array spins 45 degrees along the $z$-axis and has a downtilt angle of 20 degrees. For each test, the AoAs, ranges, and radial velocities between BS and MUE are then generated from the above parameters, and the JCS communication and echo sensing channel are further generated following the expressions in Section~\ref{subsec:uplink_signal model}. The transmit power of BS for each test, $P_t$, is determined using \eqref{equ:gamma_s} for the given values of S-SINR and INR. The MSEs of AoA, range, velocity, and location estimation are defined as the mean values of the squared errors of all the estimates. } \subsection{Sensing Performance} We first demonstrate the sensing spectra of \textit{schemes} 1, 2, and 3. The normalized range spectrum and radial velocity spectrum are shown in Figs. \ref{figs:Spectrum_range} and \ref{figs:Spectrum_velocity}, respectively. The S-SINR is ${\gamma _{S,n,m}} = - 20$ dB. For range estimation as shown in Fig.~\ref{figs:Spectrum_range}, the peak to sidelobe ratio (PSLR) of \textit{scheme} 1 is about 26 dB. By contrast, the PSLRs of \textit{schemes} 2 and 3 are both around 10 dB. For radial velocity estimation as shown in Fig.~\ref{figs:Spectrum_velocity}, the PSLR of \textit{scheme} 1 is about 33 dB, while the PSLRs of \textit{schemes} 2 and 3 are around 10 dB. The improvement of PSLR of the proposed MUSIC-based JCAS method is credited to the eigenvalue (or singular value) decomposition process, which separates the interference-plus-noise (IN) and signal subspace and reduces the influence of the noise on signal detection. {\color{blue} Fig.~\ref{figs:phe_theta_Ms64_Nc128} presents the AoA estimation MSE of various S-SINRs. With the increase of S-SINR, the AoA estimation MSE decreases as the receiving signal power increases. As the S-SINR is larger than $-$27 dB, the AoA estimation MSE is less than 0.5 square degrees. Since the range of azimuth angle, ${\varphi _k}$, is larger than the elevation angle, ${\theta _k}$, the MSE of ${\varphi_k}$ is larger than ${\theta_k}$ at first. With S-SINR becoming large enough, the MSE of ${\varphi _k}$ approaches that of ${\theta _k}$. Fig.~\ref{figs:Range_Ms64_Nc128} and Fig.~\ref{figs:Velocity_Ms64_Nc128} demonstrate the range and radial velocity estimation MSEs for \textit{schemes} 1, 2, and 3 under various S-SINRs, respectively. 
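The CRB curves appearing in these figures follow directly from \eqref{equ:CRB}. For reference, the short script below evaluates the range and velocity CRBs for the simulated configuration; the OFDM symbol duration is taken as $T = 1/\Delta f$ (the cyclic prefix is neglected), which is an assumption made here rather than a value quoted above.
\begin{verbatim}
# Numerical evaluation of the range/velocity CRBs of the
# performance-analysis section for the simulated configuration.
# Assumption: symbol duration T = 1/Delta_f (cyclic prefix neglected).
import numpy as np

c      = 3e8                 # speed of light (m/s)
fc     = 63e9                # carrier frequency (Hz)
lam    = c / fc
df     = 480e3               # subcarrier spacing (Hz)
Nc, Ms = 256, 64             # subcarriers / OFDM symbols
Pt, Qt = 8, 8                # BS array size
T      = 1.0 / df            # assumed symbol duration (s)

n2 = np.sum(np.arange(Nc) ** 2) * df ** 2
m2 = np.sum(np.arange(Ms) ** 2) * T ** 2

for sinr_db in (-30, -20, -10):
    g = 10 ** (sinr_db / 10)                      # linear S-SINR
    C_r = c ** 2 / (32 * np.pi ** 2 * g * Ms * Pt * Qt * n2)
    C_v = lam ** 2 / (32 * np.pi ** 2 * g * Nc * Pt * Qt * m2)
    print(f"S-SINR {sinr_db:+d} dB:  CRB_r = {C_r:.2e} m^2, "
          f"CRB_v = {C_v:.2e} (m/s)^2")
\end{verbatim}
These values set the floors that the MSE curves of \textit{scheme} 1 approach at high S-SINR.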
\textit{Scheme} 3 outperforms \textit{scheme} 2 in terms of the range and velocity estimation MSEs because the code-division multiplex processing in \textit{scheme} 3 can suppress the interference to a certain extent. \textit{Scheme} 1 achieves much lower MSEs than both \textit{schemes} 2 and 3, closer to the CRBs in the high SINR regime. This is because the resolutions of \textit{schemes} 2 and 3 are constrained by their FFT-based sensing, with $({\Delta r})^2 = 1.5$ $m^2$ and ${\left( {\Delta v} \right)^2} = 318 \, {\left( {m/s} \right)^2}$ in this simulation setting. In contrast, our proposed MUSIC-based method can sample the continuous range and velocity spectra and achieves range and velocity MSEs lower than $10^{-3}$ $m^2$ and $10^{-3}$ $(m/s)^2$, respectively. The MSEs for \textit{scheme} 1 are about 25 dB lower than those for \textit{scheme} 3, closer to the range and velocity CRBs. These results demonstrate that the proposed MUSIC-based JCS method achieves super-resolution sensing. Moreover, the theoretical MSEs are shown to be close to the simulation MSEs in the high SINR regime. A higher QAM order results in larger MSEs for \textit{scheme} 1, because it leads to larger transformed noise, as can be seen from \eqref{equ:y_n_m_k} and \eqref{equ:Y_SSU}. Fig.~\ref{figs:location_Ms64_Nc128} shows the location MSE versus S-SINR. With the estimated AoA and range, the location can be determined by \eqref{equ:ploc}. Given the sensing SINR, the MUSIC-based JCS method achieves a lower location MSE than \textit{scheme} 3. The gaps between \textit{scheme} 1 and \textit{scheme} 3 are not so large in the high SINR regime. This is because the AoA estimation error dominates the location MSE. More specifically, $E\{{({\hat r_{s,l}} - { r_{s,l}})}^2\}$ is smaller than $10^{-1}$ $m^2$, while the location errors given by \eqref{equ:delta_x}, \eqref{equ:delta_y}, and \eqref{equ:delta_z} can be much larger than $E\{{({\hat r_{s,l}} - { r_{s,l}})}^2\}$, because they scale with $r_{s,l}$. } {\color{blue} \subsection{Communication Performance} We first present the BERs of demodulating communication signals using the CSI obtained by the JCS CSI enhancement method, compared with using the original CSI. For simplicity of description, we predefine four cases for comparison: \textit{Cases} A and B are for demodulating communication signals using the perfect CSI and the original estimated CSI, respectively. \textit{Cases} C and D are for demodulating communication signals with the CSI enhanced by the MUSIC-based JCS sensing results and the CSI processed with FFT-based JCS sensing results, respectively. Fig.~\ref{fig: BER} shows the BER results when 64-QAM is used for communication. Note that when the detected target is the communication user, the relation between C-SINR and S-SINR is $\frac{{{\gamma _{C,n,m}}}}{{{\gamma _{S,n,m}}}} = \frac{{\left\| {{h_{C,n,m}}} \right\|_2^2}}{{\left\| {{h_{S,n,m,l}}} \right\|_2^2}}$, according to \eqref{equ:gamma_c} and \eqref{equ:gamma_s} under the assumption $\gamma _C^{IN} = \gamma _S^{IN}$ = 3 dB. Due to the CSI estimation error caused by noise and interference, the BER for \textit{case} B is significantly larger than that for \textit{case} A. As the C-SINR increases, the BER for \textit{case} C decreases rapidly and becomes lower than that for \textit{case} B once the C-SINR is larger than 20 dB. This is because the JCS CSI enhancement method exploits the accurate sensing results and improves the estimated CSI. 
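For reference, the refinement applied in \textit{case} C follows \textbf{Algorithm~\ref{DL_Kalman_CSI}}; a minimal, self-contained Python sketch of the recursion is given below. The test channel is a synthetic single-path (LoS) example with an assumed delay and noise level of ours, not the simulated V2X channel.
\begin{verbatim}
# Compact sketch of the sensing-aided CSI refinement of Algorithm 1.
# Variable names follow the algorithm; the test data are illustrative only.
import numpy as np

def jcs_csi_enhance(h_hat, A, sigma_p2, p_w0):
    """Kalman-filter smoothing of the per-subcarrier CSI estimates."""
    h_bar = np.empty_like(h_hat)
    h_bar[0], p_w = h_hat[0], p_w0
    for n in range(1, len(h_hat)):
        h_pred = A * h_bar[n - 1]               # state prediction
        p_pred = A * p_w * np.conj(A)           # predicted variance
        K = np.conj(p_pred) / (p_pred + sigma_p2)
        h_bar[n] = h_pred + (h_hat[n] - h_pred) * K
        p_w = (1.0 - K) * p_pred
    return h_bar

# Illustrative test: LoS channel h_n = b * exp(-j*2*pi*n*df*tau).
rng = np.random.default_rng(1)
Nc, df, tau = 256, 480e3, 200e-9                # assumed example values
n = np.arange(Nc)
h_true = 0.8 * np.exp(-2j * np.pi * n * df * tau)
h_hat = h_true + 0.2 * (rng.standard_normal(Nc) +
                        1j * rng.standard_normal(Nc)) / np.sqrt(2)
A = np.exp(-2j * np.pi * df * tau)              # from the sensed delay
h_bar = jcs_csi_enhance(h_hat, A, sigma_p2=0.04, p_w0=0.04)
print(np.mean(abs(h_hat - h_true) ** 2),        # raw CSI error
      np.mean(abs(h_bar - h_true) ** 2))        # refined CSI error
\end{verbatim}
The refined sequence $\bar h_{C,n,m}$ is what replaces $\hat h_{C,n,m}$ in the equalization step of Section~\ref{sec:JCS_comm}.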
By comparing \textit{case} D with \textit{cases} B and C, we can see that the BER for \textit{case} D is much larger, which indicates that the FFT-based JCS sensing results are not helpful for improving the CSI and for communication. Referring to the sensing MSEs of the MUSIC-based and the FFT-based JCS in Fig.~\ref{figs:Range_Ms64_Nc128}, we can see that the more accurate the sensing results are, the better the CSI enhancement performance is, as the accuracy of range estimation directly determines the accuracy of $A$ in \textbf{Algorithm~\ref{DL_Kalman_CSI}}. This is also the reason that the BER for \textit{case} C decreases rapidly when the sensing MSE becomes sufficiently low. } \begin{figure}[!t] \centering \includegraphics[width=0.33\textheight]{BER.pdf} \caption{BERs of DL JCS communication.} \label{fig: BER} \end{figure} \section{Conclusion}\label{sec:conclusion} {\color{blue} In this paper, we proposed a novel JCS system that can achieve accurate AoA, range, and velocity estimation based on improved MUSIC algorithms, together with improved communication performance. Compared with the conventional FFT-based sensing method, our proposed MUSIC-based sensing method can achieve much higher accuracy in range and radial velocity estimation. The proposed JCS CSI enhancement method exploits the JCS sensing results in the design of a Kalman filter for refining the CSI estimate. It is shown to significantly improve the communication performance at high SNRs, approaching the performance with perfect CSI. Moreover, we derived the theoretical MSEs of the proposed range and velocity estimators using perturbation analysis, as well as the corresponding CRBs. Simulation results demonstrate that the theoretical results match the simulation results well, particularly at higher SNRs. } \begin{appendices} {\color{blue} \section{Derivation of $\alpha_t$} \label{Appendix_alpha_t} First, we denote the eigenvalue vector as ${{\bf{v}}_x} = \text{diag}({{\bf{\Sigma }}_x})$, where $\text{diag}(\bf X)$ denotes the vector of diagonal entries of $\bf X$. The mean value of ${{\bf{v}}_x}$ is denoted by ${m_x}$, and ${{\bf{v}}_x} \in \mathbb{R}^{N \times 1}$. We assume there are $L$ incident signals. According to the property of the MUSIC algorithm, the $i$th entry of ${{\bf{v}}_x}$ can be expressed as \cite{MUSIC1986} \begin{equation}\label{equ:vx} {[ {{{\bf{v}}_x}} ]_i} = \left\{ \begin{array}{l} {P_i} + \sigma _N^2, \ i \le L\\ \sigma _N^2, \ i > L \end{array}, \right. \end{equation} where $P_i$ is the power of the $i$th incident signal and $\sigma _N^2$ is the noise power. We define the differential vector of ${{\bf{v}}_x}$ as ${{\bf{v}}_\Delta }$, where ${[ {{{\bf{v}}_\Delta }} ]_i}{\rm{ = }}{[ {{{\bf{v}}_x}} ]_i} - {[ {{{\bf{v}}_x}} ]_{i + 1}}$ and ${{\bf{v}}_\Delta } \in \mathbb{R}^{( {N - 1} ) \times 1}$. Obviously, ${[ {{{\bf{v}}_\Delta }} ]_i} \approx 0$ when $i > L$, while ${[ {{{\bf{v}}_\Delta }} ]_i} \gg 0$ when $i \le L$. Since mmWave suffers from large propagation loss, $L$ is typically much smaller than $N$. Then, we represent the mean value of the latter half of ${{\bf{v}}_\Delta }$ as $\bar v = {\sum\nolimits_{k = \left\lfloor {(N - 1)/2} \right\rfloor }^{N - 1} {{{[ {{{\bf{v}}_\Delta }} ]}_k}} }/( {N - \left\lfloor {(N - 1)/2} \right\rfloor } )$, and $\bar v$ is close to 0. 
Therefore, the number of detected targets is determined as \begin{equation} \hat L = \mathop {\arg \max }\limits_i {[ {{{\bf{v}}_\Delta }} ]_i} > ( {1 + \varepsilon } )\bar v, \end{equation} i.e., $\hat L$ is the largest index $i$ whose eigenvalue gap ${[ {{{\bf{v}}_\Delta }} ]_i}$ still exceeds the threshold $( {1 + \varepsilon } )\bar v$, where $\varepsilon $ is a parameter used to avoid false detection caused by a small error. In the simulation, we set $\varepsilon = 1$. Therefore, ${\alpha _t}$ is set as ${\alpha _t} = {{{[ {{{\bf{v}}_x}} ]}_{\hat L}}}/{{m_x}}$. It is a key parameter and has an important impact on the sensing accuracy. When ${\alpha _t}$ is too large, the selected noise subspace will include part of the signal subspace, and thus the target may be missed; when ${\alpha _t}$ is too small, the noise subspace is not selected completely, and thus large noise may be taken into the signal subspace. \section{Derivation of $\sigma _p^2$} \label{Appendix_sigma_p} We first derive the eigenvalue matrix of ${\bf{\hat H}}_C{( {{\bf{\hat H}}_C} )^H}$ as ${{\bf{\Sigma }}_p}$, and obtain the eigenvalue vector as ${{\bf{v}}_p} = \text{diag}( {{{\bf{\Sigma }}_p}} )$. When the LoS signal dominates the communication channel, i.e., $L = 1$, from \eqref{equ:vx}, we can estimate $\sigma _p^2$ as \begin{equation}\label{equ:sigma_p_2} \hat \sigma _p^2 = \frac{1}{{N_c - 1}}\sum\limits_{i = 2}^{N_c} {{{[ {{{\bf{v}}_p}} ]}_i}} . \end{equation} } \section{Proof of Theorem~\ref{Theo:1}} \label{Theo:A} ${\bf{U}}_{x,r }$ can be divided as ${\bf{U}}_{x,r } = [ {{\bf{S}}_{x,r },{\bf{U}}_{x,r N}} ]$. Because ${\bf{U}}_{x,r }$ is a unitary matrix, ${[ {{\bf{S}}_{x,r }} ]^H}{\bf{U}}_{x,r N}{\rm{ = }}{\bf{0}}$ and ${[ {{\bf{U}}_{x,r N}} ]^H}{\bf{U}}_{x,r N}{\rm{ = }}{\bf{I}}$ hold. On one hand, since ${\bf{U}}_{x,r N}$ is the noise subspace of ${\bf{R}}_{{{X}},r }$, we have \begin{equation} \label{equ: Rx_tau_Utau} {\bf{R}}_{{{X}},r }{\bf{U}}_{x,r N} = {\sigma _W}^2{\bf{U}}_{x,r N}, \end{equation} where ${\sigma _W}^2$ is the noise-plus-interference variance. On the other hand, we have \begin{equation} \label{equ:Rx_tau} {\bf{R}}_{{{X}},r } = {{E}}( {{\bf{\bar H}}_S{{[ {\bf{\bar H}}_S ]}^H}} ) = {{\bf{A}}_{\bf{r}}}E\{ {{\bf{S}}_{r,s}{{[ {{\bf{S}}_{r,s}} ]}^H}} \}{[ {{{\bf{A}}_{\bf{r}}}} ]^H} + {\sigma _W}^2{\bf{I}}. \end{equation} Therefore, \begin{equation} \label{equ:Rx_tau_U} {\bf{R}}_{{{X}},r }{\bf{U}}_{x,r N} = {{\bf{A}}_{\bf{r}}}E\{ {{\bf{S}}_{r,s}{{[ {{\bf{S}}_{r,s}} ]}^H}} \}{[ {{{\bf{A}}_{\bf{r}}}} ]^H}{\bf{U}}_{x,r N} + {\sigma _W}^2{\bf{U}}_{x,r N}. \end{equation} By comparing \eqref{equ: Rx_tau_Utau} with \eqref{equ:Rx_tau_U}, we obtain \begin{equation} \label{equ:Ar_U_x_N} {{\bf{A}}_{\bf{r}}}E\{ {{\bf{S}}_{r,s}{{[ {{\bf{S}}_{r,s}} ]}^H}} \}{[ {{{\bf{A}}_{\bf{r}}}} ]^H}{\bf{U}}_{x,r N} = {\bf{0}}. \end{equation} Thus, \begin{equation} \label{equ:U_x_N_Ar_U_x_N} {[ {{\bf{U}}_{x,r N}} ]^H}{{\bf{A}}_{\bf{r}}}E\{ {{\bf{S}}_{r,s}{{[ {{\bf{S}}_{r,s}} ]}^H}} \}{[ {{{\bf{A}}_{\bf{r}}}} ]^H}{\bf{U}}_{x,r N} = {\bf{0}}. \end{equation} Since $E\{ {{\bf{S}}_{r,s}{{[ {{\bf{S}}_{r,s}} ]}^H}} \}$ is full-rank, ${[ {{\bf{U}}_{x,r N}} ]^H}{{\bf{A}}_{\bf{r}}} = {\bf{0}}$. Therefore, the product of ${[ {{\bf{U}}_{x,r N}} ]^H}$ and each column of ${{\bf{A}}_{\bf{r}}}$ is 0, i.e., ${( {{\bf{U}}_{x,r N}} )^H}{{\bf{a}}_r}( {{r_l}} ) = 0$ holds. 
Thus, the minimum points of ${\| {{\bf{U}}{{_{x,r N}}^H}{{\bf{a}}_r}( r )} \|_2^2}$ are the ranges. Similarly, by comparing the two expressions of ${\bf{R}}_{{{X}},f}{\bf{U}}_{x,fN}$, we obtain \begin{equation} \label{equ:U_x_N_Af_U_x_N} {[ {{\bf{U}}_{x,fN}} ]^H}{{\bf{A}}_{\bf{f}}}E\{ {{\bf{S}}_{f,s}{{[ {{\bf{S}}_{f,s}} ]}^H}} \}{[ {{{\bf{A}}_{\bf{f}}}} ]^H}{\bf{U}}_{x,fN} = {\bf{0}}. \end{equation} Because $E\{ {{\bf{S}}_{f,s}{{[ {{\bf{S}}_{f,s}} ]}^H}} \}$ is full-rank, ${[ {{\bf{U}}_{x,fN}} ]^H}{{\bf{A}}_{\bf{f}}} = {\bf{0}}$ holds. Hence, the multiplication between ${[ {{\bf{U}}_{x,fN}} ]^H}$ and each column of ${{\bf{A}}_{\bf{f}}}$ is 0, i.e., ${( {{\bf{U}}_{x,fN}} )^H}{{\bf{a}}_f}( {{f_{s,l,1}}} ) = 0$ holds. Thus, the minimum points of $\| {({\bf{U}}{{_{x,fN}}})^H{{\bf{a}}_f}( f )} \|_2^2$ are the Doppler results. The proof of \textbf{Theorem} \ref{Theo:1} is completed. \section{} \label{Expressions:G} \subsubsection{The derivatives for $\Delta {{\bf{G}}_{\bf{p}}}$} The expanded expression for ${{\bf{G}}_p}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} )$ can be given by \begin{equation} \label{equ:delta_p_derivation} {{\bf{G}}_p}( {{{\bf{p}}_k};{{{\bf{\tilde U}}}_0}} ) \!=\! 2{\mathop{\rm Re}\nolimits} \left\{ {\bf{a}}_{\bf{p}}^{( 1 )}{( {{{\bf{p}}_k}} )^H}( {{{\bf{U}}_0} + \Delta {{\bf{U}}_0}} ) \times {( {{{\bf{U}}_0} + \Delta {{\bf{U}}_0}} )^H}{\bf{a}}( {{{\bf{p}}_k}} ) \right\}. \end{equation} Then, according to \eqref{equ:Gp_expression}, $\Delta {{\bf{G}}_{\bf{p}}}$ can be expressed as \begin{equation} \label{equ:delta_Gp} \begin{aligned} \Delta {{\bf{G}}_{\bf{p}}} = 2{\mathop{\rm Re}\nolimits} \left\{ {{\bf{a}}_{\bf{p}}^{( 1 )}{{( {{{\bf{p}}_k}} )}^H}\left( \begin{array}{l} {{\bf{U}}_0}\Delta {{\bf{U}}_0}^H + \Delta {{\bf{U}}_0}{{\bf{U}}_0}^H + \Delta {{\bf{U}}_0}\Delta {{\bf{U}}_0}^H \end{array} \right){\bf{a}}( {{{\bf{p}}_k}} )} \right\}. \end{aligned} \end{equation} By discarding the second-order perturbation $\Delta {{\bf{U}}_0}\Delta {{\bf{U}}_0}^H$ and ${{\bf{U}}_0}^H{\bf{a}}( {{{\bf{p}}_k}} ) = 0$, and substituting \eqref{equ:delta_U0} into \eqref{equ:delta_Gp}, we obtain \[\Delta {{\bf{G}}_{\bf{p}}} \!\!=\!\! 2{\mathop{\rm Re}\nolimits}\! \{ \!{ - {\bf{a}}_{\bf{p}}^{( 1 )}{{( {{{\bf{p}}_k}} )}^H}{{\bf{U}}_0}{{\bf{U}}_0}^H{{[ {{\bf{N}}_t} ]}^H}{{\bf{V}}_s}{{\bf{\Sigma }}_s}^{\! - 1}{{\bf{U}}_s}^{\! H} \!{\bf{a}}( {{{\bf{p}}_k}} )\!} \}.\] \subsubsection{The derivatives for $\Delta G_r$} The expanded expression for $\Delta G_r$ can be given by \begin{equation} \label{equ:delta_Gr_medium} \Delta G_r = G_r( {r;{{{\bf{\tilde U}}}_{r,0}}} ) - G_r( {r;{{\bf{U}}_{r,0}}} )\\ = 2{\mathop{\rm Re}\nolimits} [ {{\bf{a}}_r^{( 1 )}{{( r )}^H} \! ( {{{\bf{U}}_{r,0}} \!+\! \Delta {{\bf{U}}_{r,0}}} ){{\! ( {{{\bf{U}}_{r,0}} \!+\! \Delta {{\bf{U}}_{r,0}}} )}^H}\!{{\bf{a}}_r}( r )} \!]. \end{equation} By discarding the second-order perturbation term, ${{\bf{U}}_{r,0}}^H{{\bf{A}}_{\bf{r}}} = {\bf{0}}$, and substituting \eqref{equ:Ur0} into \eqref{equ:delta_Gr_medium}, we obtain \[\Delta G_r \!\! = \!\! {\rm{2Re}}[ {{\bf{a}}_r^{( 1 )}{{\! ( {{r_k}} )}^{\!H}}\!{{\bf{U}}_{r,0}}\!{{\bf{U}}_{r,0}}^{\!H}\! {\bf{W}}_{tr}{{\bf{V}}_{r,s}}{{\bf{\Sigma }}_{r,s}}^{\!\!\!\! - 1}{{\bf{U}}_{r,s}}^{\! \! H}{{\bf{a}}_r}\! ( {{r_k}} )} ].\] \end{appendices} {\small \bibliographystyle{IEEEtran}
\section{Models of gas expulsion} \subsection{ICs for embedded clusters in MOND}\label{ics} MOND has been tested for decades. On large scales, MOND encounters several unsolved challenges. The strong and weak lensing data from clusters of galaxies showed that MOND also requires dark matter such as neutrinos \citep{Natarajan_Zhao2008,Angus+2007}, and one of the most famous examples is the Bullet Cluster 1E0657-56 \citep{Clowe+2006}. Although there are several relativistic versions of MOND, including Bekenstein's Tensor-Vector-Scalar theory \citep[TeVeS,][]{Bekenstein2004} and Milgrom's BIMOND \citep{Milgrom2009b,Milgrom2010}, claiming that the convergence map of the cluster can be reproduced without any dark matter, these theories bring in new problems. The convergence map of the Bullet Cluster remains a major challenge for MOND so far. Moreover, the ring-like dark matter structure around the rich galaxy cluster Cl0024+17 is also hard to understand within the framework of MOND \citep{Jee+2012}. Besides the difficulties on the scale of clusters of galaxies, the Cosmic Microwave Background (CMB) radiation is another problem for MOND. To be consistent with the observations of the third peak of the acoustic power spectrum of the CMB, neutrinos as dark matter are required \citep{Skordis+2006,Angus2009,Zhao2008}. Moreover, even in the presence of neutrinos, it is difficult for MOND to reproduce the correct amplitude of the mass function of galaxy clusters \citep{Angus_Diaferio2011,Angus+2013,Angus+2014b}. Although MOND faces these challenges on large scales, it is very successful and promising on the scales of galaxies and star clusters. It can successfully predict the rotation curves for all galaxies, including our Milky Way Galaxy \citep[e.g.,][]{Sanders_McGaugh2002,Famaey_McGaugh2012,Famaey_Binney2005,Famaey+2007}. Moreover, MOND provides a unified explanation of the escape velocity of galaxies \citep{Wu+2007,Wu+2008,Banik_Zhao2018}, the rotational speed in polar ring galaxies \citep{Lughausen+2013}, the formation of the shell structure in NGC 3923 \citep{Bilek+2013,Bilek+2014}, the velocity dispersion of M31 dwarf galaxies \citep{McGaugh_Milgrom2013,McGaugh2016}, the mass discrepancy-acceleration correlation of disc galaxies \citep{Milgrom1983b,Sanders1990,McGaugh2004,Wu_Kroupa2015} and of pressure-supported galaxies \citep{Scarpa2006}, and the relation between the baryonic and dynamical central surface densities for disc galaxies \citep{Milgrom2016}. Based on a critical test of MOND developed in Bonn \citep{Baumgardt+2005}, Newtonian dynamics can explain well the kinematics and dynamics of some distant GCs, including NGC 2419 \citep{Baumgardt+2009,Ibata+2011a,Ibata+2011b}, Palomar 4 \citep{Frank+2012} and Palomar 14 \citep{Jordi+2009}. These star clusters behave as Newtonian systems, with small values of the velocity dispersion. However, \citet{Gentile+2010} argued that the small sample of stellar kinematic data (17 stars) for Pal 14 is insufficient to falsify MOND. \citet{Sanders2012a,Sanders2012b} showed that polytropic models in MOND for NGC 2419 can fit the observed surface brightness and velocity dispersion profiles well, and therefore NGC 2419 might not be a problem for MOND. On the other hand, recently, a number of observations and simulations of Galactic GCs suggest departures from Newtonian gravitation. 
In these studies, the line-of-sight (LoS) velocity dispersion profiles of such GCs, $\sigma_{\rm LoS}(r)$, are flat at large radii \citep{Scarpa+2007,Scarpa+2011,Scarpa_Falomo2010,Lane+2009,Lane+2010,Hernandez_Jimenez2012,Hernandez+2013,Durazo+2017}. The observed flat $\sigma_{\rm LoS}(r)$ profiles are apparently at odds with Newtonian dynamics, which predicts that $\sigma_{\rm LoS}(r)\propto r^{-1/2}$ at large radii, but MOND can naturally reproduce the observed outer $\sigma_{\rm LoS}(r)$ of the distant GCs \citep{Milgrom1994}. It turns out that MOND gives a good description of the kinematics and dynamics on the scales of GCs and also of the central \citep{Milgrom2009} and outer regimes (\citealt{Kroupa+2012,Kroupa2012,Kroupa2015} and also see the review of \citealt{Famaey_McGaugh2012}) of galaxies. \begin{table} \begin{center}\vskip -0.0cm \caption{Parameters of ICs for the embedded clusters with a Plummer density profile: the $1_{\rm st}-5_{\rm th}$ columns are: ID of the ICs, the total mass $M$, the Plummer radius $\rp$, the half-mass radius $\rh$ and the central density $\rhoh \equiv \frac{M_{\rm b}}{2} \rh^{-3}$. The sixth column is the $3$-dimensional velocity dispersion, $\sigma$, of the overall system. The $7_{\rm th}-8_{\rm th}$ columns are the radius enclosing $90\%$ of the mass, $r_{\rm 90}$, and the crossing time, $\tcr \equiv r_{\rm 90} / \sigma$. We refer to initial models with larger $\rp$ and $\rh$ as being initially more diffuse.} \begin{tabular}{lcccccccc} \hline ICs& $M$ & $\rp$ & $\rh$ & $\rhoh$& $\sigma$ & $r_{\rm 90}$& $\tcr$ \\ \hline & $\msun$ & $\pc$& $\pc$ & $\msun \pc^{-3}$ & $\kms$ & $\pc$& $\Myr$ \\ \hline 1 & $10^7$ &5.0& 6.5 & $1.80\times10^4$ & 50.7 & 18.5 & 0.4 \\ 2 & $10^6$ &5.0& 6.5 & $1.80\times10^3$ & 16.8 & 18.5& 1.1\\ 3 & $10^5$ &5.0& 6.5 & $1.80\times10^2$ & 6.5 & 18.5 &2.8 \\ 4 & $10^5$ &10.0&13.1& $2.25\times10^1$ & 5.8 & 36.7 & 6.3\\ \hline \end{tabular} \label{plummer} \end{center} \end{table} The MOND Poisson equation that satisfies the conservation laws of energy, momentum and angular momentum is \citep{BM1984} \beq\label{mond} \divergence [\mu(X) { \vg}]=4\pi G\rho_{\rm b}, \,\,\,\, X=|{\vg}|/a_0 \, , \eeq where ${\vg}$ is the gravitational acceleration in MOND, and $a_0=3.7\pc\Myr^{-2}$ is Milgrom's constant. The interpolating function $\mu\rightarrow 1$ when $X \gg 1$ and $\mu \rightarrow X$ when $X \ll 1$, corresponding to the Newtonian and MONDian limits, respectively. In the deep MOND limit, the gravitational acceleration is $g=\sqrt{a_0 g_{\rm N}}$, where $g_{\rm N}$ is the Newtonian gravitational acceleration. The circular velocity, $v_{\rm c}$, following from the centrifugal acceleration, is \beq\label{vcirc} v_{\rm c} = (GMa_0)^{1/4},\eeq where $M$ is the baryonic mass of the system. Eq. \ref{vcirc} implies that a baryonic system is embedded in a logarithmic phantom dark matter halo potential, when interpreted in terms of Newtonian dynamics. This is an effective dark matter halo. The mass of the baryonic matter together with the phantom dark matter halo is the Newtonian dynamical mass of the system. In what follows, a simple form of the $\mu$ function \citep{Famaey_Binney2005} will be used, \beq \mu(X)=\frac{X}{1+X}, \eeq which fits the terminal velocities of the Milky Way and NGC 3198 better. 
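To make these relations concrete, note that in spherical symmetry the internal acceleration follows algebraically from the Newtonian one, $\mu(g/a_0)\,g = g_{\rm N}$, which for the simple $\mu$ function gives $g = g_{\rm N}/2 + \sqrt{g_{\rm N}^2/4 + g_{\rm N} a_0}$. The short Python sketch below evaluates this inversion and the asymptotic circular velocity of Eq. \ref{vcirc} in the units of Table \ref{plummer}; the numerical value of $G$ in these units is supplied by us and is not quoted in the text.
\begin{verbatim}
# Sketch: MOND acceleration for the simple mu-function (spherical
# symmetry) and the asymptotic circular velocity v_c = (G M a0)^(1/4),
# evaluated in the units of Table 1 (pc, Myr, Msun).
import numpy as np

a0 = 3.7                      # Milgrom's constant [pc Myr^-2]
G  = 4.50e-3                  # assumed value of G [pc^3 Msun^-1 Myr^-2]
PC_MYR_TO_KMS = 0.9778        # 1 pc/Myr in km/s

def g_mond_simple(gN):
    """Invert mu(g/a0) g = gN for the simple mu(X) = X/(1+X):
       g^2 - gN*g - gN*a0 = 0, positive root."""
    return 0.5 * (gN + np.sqrt(gN ** 2 + 4.0 * gN * a0))

# deep-MOND behaviour: g approaches sqrt(gN * a0) for gN << a0
print(g_mond_simple(1e-3 * a0), np.sqrt(1e-3 * a0 * a0))

# asymptotic circular velocity for the masses listed in Table 1
for M in (1e7, 1e6, 1e5):
    vc = (G * M * a0) ** 0.25
    print(f"M = {M:.0e} Msun:  v_c = {vc * PC_MYR_TO_KMS:.1f} km/s")
\end{verbatim}
For the $10^5\,\msun$ models the resulting asymptotic $v_{\rm c}$ is of the same order as the velocity dispersions listed in Table \ref{plummer}.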
The simple $\mu$ function makes the transition from the deep MOND limit to the Newtonian limit more gradual than the `standard' $\mu$ function of \citet{Milgrom1983b}, and it works better in both very weak and very strong gravity \citep{Zhao_Famaey2006}. Before the gas expulsion, the embedded clusters are more compact and more massive than the present-day GCs. \citet{Plummer1911}'s profile is chosen for the density distribution of the embedded clusters as follows, \beq \rho_{\rm b} (r)=\left(\frac{3\mb}{4\pi \rp^3} \right)\left(1+\frac{r^2}{\rp^2}\right)^{-5/2}, \eeq where $\mb$ and $\rp$ are the total mass and the scale length of the Plummer model. The stellar and gaseous density profiles are assumed to follow the Plummer profile with the same Plummer radius, an assumption which has proven to lead to successful modelling of the Orion Nebula Cluster and the Pleiades \citep{Kroupa+2001}, of NGC 3603 \citep{Banerjee_Kroupa2015}, and of R136 \citep{Banerjee_Kroupa2012}, and is physically plausible in that the local star formation rate is approximately proportional to the local gas density. The MONDian potentials of the Plummer models are calculated using a MOND Poisson solver \citep{Nipoti+2007,nmody}. The N-body ICs with an isotropic velocity dispersion in MOND gravity are constructed using Lucy's method \citep{Lucy1974}, and the code was originally implemented by \citet{Gerhard1991} for constructing both isotropic and anisotropic N-body models in Newtonian gravity. Here we simply replace the Newtonian potential and circular velocity with the MONDian potential and circular velocity in the N-body generator. A series of self-consistent isotropic N-body ICs is thereby constructed. The parameters of the models are summarised in Table \ref{plummer}. Note that our ICs for the embedded clusters are in equilibrium, i.e., the virial ratio, $Q_0 \equiv \frac{T_0}{|W_0|} =0.5$, where $T_0$ and $W_0$ are the initial kinetic energy and initial potential energy for the models. This is a physically plausible assumption because stars form in the cloud core in a few crossing times and not instantly at the same time, and the bulk of the forming embedded cluster will be close to the $Q_0=0.5$ state when the gas expulsion occurs. There are $100,000$ equal-mass particles in each model. \subsection{N-body realisation for the gas expulsion}\label{gas} Since MOND introduces a larger dynamical mass compared to a pure Newtonian system with the same density distribution, especially for diffuse systems, a MOND gravitational potential can bind more stars with high energy. Therefore, a lower value of the SFE is allowed in MOND. To examine this MOND effect, we assume that the values of the SFE range from $10\%$ to $50\%$ in increments of $10\%$. Further, the SFE is reduced to $5\%$ and $2.5\%$ for all the embedded cluster models. Star clusters cannot survive in Newtonian gravity with such small values of the SFE in any of the existing studies \citep{Lada+1984,Goodwin1997,Boily_Kroupa2003,Fellhauer_Kroupa2005,Baumgardt_Kroupa2007,Shukirgaliyev+2017}. Because the models are massive and the number of particles is large, two-body relaxation can be ignored in the simulations. It is therefore sufficient to simulate the gas expulsion by means of a collisionless N-body code. The orbits of the N-body systems can be integrated using the particle-mesh N-body code \emph{NMODY} \citep{nmody}, which solves gravitational accelerations and potentials in both standard Newtonian and Milgromian dynamics. 
More details and tests on the code in both dynamics can be found in \citet{Nipoti+2007,Nipoti+2008,Nipoti+2011,Wu+2013} and \citet{Wu+2017}. In the following simulations, we use a grid resolution of $n_r\times n_{\theta} \times n_{\phi}=256 \times 32 \times 64$, where $n_r,~n_\theta,~n_\phi$ are the numbers of grid cells in the radial, polar and azimuthal dimensions. The radial grids are segmented by $r_i = r_s\times \tan \left[(i+0.5)0.5\pi /(n_r+1)\right]$ with $r_s=20\pc$ and $i=0,1,2,...,n_r$, the angular grids are equally segmented, and the angular resolution of the spherical harmonic expansion for the Poisson solver at each time step is $l_{\max}=8$. \begin{figure} \includegraphics[width=90mm]{rtidal.pdf} \caption{Upper panel: the values of the tidal radius, $\rt$, of the initial star clusters assuming a Galactocentric distance of $100~\kpc$. Lower panel: the values of the virial radii, $\refe$, of the initial embedded star clusters. } \label{rtidal} \end{figure} The $3$-dimensional velocity dispersion, $\sigma$, of the overall embedded clusters including gas and stars is calculated and shown in Table \ref{plummer}. The crossing time of the embedded clusters is defined as $\tcr \equiv r_{\rm 90} /\sigma$, where $r_{\rm 90}$ is the $90\%$ Lagrangian radius. Note that the crossing time in MOND is shorter than that in a Newtonian model with the same baryonic mass density distribution, because the value of $\sigma$ is larger in the deeper MOND potential. The sudden gas expulsion is modelled by instantaneously reducing the mass of all particles, i.e., the initial particle mass is multiplied by the SFE in each simulation. It has been shown that a more gradual removal of gas will leave a bound remnant even for a lower SFE \citep{Baumgardt_Kroupa2007}. We will focus on the most extreme case of gas expulsion in MOND. The effect of gradual gas expulsion or the effect of a different initial stellar mass function is beyond the scope of this paper, and will be studied in a future project. The global time steps are defined as $\dt= \frac{0.1}{\sqrt{\max |\nabla \cdot {\bf g}|}}$, and thus there are about 10 time steps for a circular orbit in the densest regime. We freely evolve the GCs in their new post-gas expulsion self-gravity for about $1~\Gyr$. This is over $150~\tcr$ for the most diffuse initial embedded cluster model (model $4$) and is over $300~\tcr$ for the other models. \subsection{Truncation radius for the final products} For an isolated protocluster model in MOND, ideally, no stars can escape from the logarithmic potential. However, the external field truncates the logarithmic potential well to a $1/r$ dependence at large radii and enables stars to escape \citep{Famaey+2007,Wu+2007}. The external field defines the tidal radius of a star cluster in a host galaxy \citep{Zhao_Tian2006}, \bey &\rt&=\left( \frac{M_{\rm b}^{\rm GC}}{(1+\eta)M_{\rm gal}} \right)^{1/3}D_0, \\ &\eta&=-\frac{d\ln g}{d\ln D_0},\nonumber \eey where $\eta \rightarrow 2$ in the Newtonian limit and $\eta \rightarrow 1$ in the deep MOND limit. $M_{\rm b}^{\rm GC}$ and $M_{\rm gal}$ are the baryonic mass of the embedded star cluster (i.e., the mass of the stellar component of the embedded cluster before the gas expulsion; the mass of the gas component is not included) and of the galaxy, respectively. $D_0$ is the distance between the young star cluster and the centre of the host galaxy. 
The baryonic Besan{\c c}on Milky Way model \citep{Wu+2007} is used here as the host galaxy, with $M_{\rm gal} \approx 6.5\times 10^{10}\msun$. A Galactocentric distance of $100~\kpc$ is assumed for the star clusters, which is representative of remote GCs such as AM 1 ($123~\kpc$), Pal 3 ($96~\kpc$), Pal 4 ($112~\kpc$) and NGC 2419 ($89~\kpc$, \citealt{Harris1996}). The values of the tidal radii for the initial embedded star clusters as a function of SFE are shown in the upper panel of Fig. \ref{rtidal}. The tidal radius of a star cluster in MOND is larger than that in Newtonian gravity, since $M_{\rm gal}$ in Newtonian dynamics should include both baryonic and dark matter, and is therefore much larger than the purely baryonic mass of the MONDian Galaxy, and because the cluster generates a phantom dark matter halo around itself, boosting its effective Newtonian mass. Besides the tidal truncation of the star clusters, the uniform gravitational background field from the Galaxy plays an important role. A self-bound system in Newtonian dynamics is not affected by such a uniform external field, and this is known as the Strong Equivalence Principle (SEP), which is one of the basic assumptions of Einstein's theory of general relativity. However, the SEP is violated in MOND \citep{Milgrom1983a,BM1984}. Possible evidence for SEP violation has been found by \citet{Wu+2010,Wu+2017} and \citet{Thomas+2017}. The dynamics of a self-bound system is governed by both the internal and the external gravitational fields, i.e., ${\bf g} = \gint + \gext$ in Eq. \ref{mond}. A truncation radius can be roughly estimated in MOND within an external field, \beq \refe = \sqrt{GM_{\rm b}^{\rm GC}a_0}/g_{\rm ext}, \eeq where the strength of the internal gravity equals that of the external field. At radii larger than $\refe$, the system is dominated by the external field. $\refe$ is the virial radius for a self-bound system \citep{Wu_Kroupa2015}. The phantom dark matter halo is truncated at $\refe$, and the mass of the phantom dark matter halo, sourced purely by the stars (the gas component is not used here as it is removed rapidly), is \beq M_{\rm phantom} = M_{\rm b}^{\rm GC}a_0/g_{\rm ext}. \eeq At a Galactocentric distance of $100~\kpc$, the strength of the external field from the Galaxy is $g_{\rm ext}\approx 0.087a_0$. Since the external field from the Milky Way is weak, the star clusters are simulated in isolation. The external field effect mainly determines the truncation radii of the star clusters. The virial radii of the embedded star clusters are displayed in the lower panel of Fig. \ref{rtidal}. The virial radii are smaller than the tidal radii for all the models, indicating that the uniform external field dominates the dynamics in the star clusters at a smaller radius than the tidal field. Therefore, the external field truncates a star cluster at $\refe$. At radii $r>\refe$, the rotation curve of a system behaves Newtonian-like, but with the velocities being a factor $\frac{1}{\mu_{\rm ext}}=\frac{a_0}{|g_{\rm ext}|}$ larger within the weak external field. \section{Results}\label{bound} \subsection{The loss of mass}\label{mloss} The bound mass fraction at the end of the simulations, $f_{\rm bound}$, as a function of the SFE in MOND is shown in Fig. \ref{fbound}. The bound particles are defined as stars with binding energy $E_{\rm bind}=T+W < 0$ within the initial truncation radius of the stellar component in the embedded cluster shown in Fig. \ref{rtidal}. 
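The external-field truncation radius, the phantom-halo mass and the bound-particle bookkeeping just described can be summarised by the short Python sketch below; the per-particle kinetic energies, potential energies and radii are assumed to come from a simulation snapshot, and the unit system (pc, km/s, $\msun$) is chosen only for illustration.
\begin{verbatim}
import numpy as np

G, A0 = 4.301e-3, 3.7      # pc (km/s)^2 / Msun  and  a0 in (km/s)^2 / pc
G_EXT = 0.087 * A0         # external field at D_0 = 100 kpc (see text)

def efe_truncation_radius(m_star):
    """r_efe = sqrt(G M a0) / g_ext: radius where the internal gravity
    equals the external field strength (in pc)."""
    return np.sqrt(G * m_star * A0) / G_EXT

def phantom_halo_mass(m_star):
    """Mass of the phantom dark-matter halo sourced by the stars (Msun)."""
    return m_star * A0 / G_EXT

def bound_fraction(kin, pot, r, m_star_initial, m_particle):
    """f_bound: mass of particles with E = T + W < 0 that lie inside the
    initial truncation radius, divided by the initial stellar mass."""
    inside = r < efe_truncation_radius(m_star_initial)
    bound = (kin + pot < 0.0) & inside
    return bound.sum() * m_particle / m_star_initial
\end{verbatim}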
$f_{\rm bound}$ is the ratio between the mass of the bound stars at the end of the simulation and the initial stellar mass in the embedded cluster. The upper panel shows the fraction of bound mass when the clusters are truncated at the tidal radius, and the lower panel presents the fraction of bound mass when they are truncated by the external field. In general, for a given model with a fixed SFE, the fraction of bound mass within the external-field truncation is smaller than that within the tidal radius. Since both effects should be taken into account in a MOND system, we shall truncate the star cluster at $\refe$, which is much smaller than $r_{\rm tidal}$ (see Fig. \ref{rtidal}). The values of $f_{\rm bound}$ for the surviving star clusters truncated at $\refe$ are also listed in Table \ref{tab-fbound}. The first row of the table indicates the values of the SFE and the $2^{\rm nd}$--$5^{\rm th}$ rows show $f_{\rm bound}$ for different values of the SFE. In models with larger values of the SFE, i.e., $40\%$ and $50\%$, self-bound cores can survive after the gas expulsion in Newtonian dynamics. Thus it is possible to compare the models, with the same density profiles and the same SFE, defined in MOND and in Newtonian dynamics. Here, we find that all the MOND models leave the majority of the mass, i.e., over $85\%$, self-bound after the gas is expelled immediately. In previous studies in Newtonian dynamics, e.g., in \citet{Boily_Kroupa2003}, $f_{\rm bound}$ is only $66\%$ for a SFE of $50\%$, and in \citet{Baumgardt_Kroupa2007}, $11\%-34\%$ and $13\%-72\%$ of the mass remains bound at the end of their simulations for a SFE of $40\%$ and $50\%$, respectively. The reasons are: i) the tidal radii at the same Galactocentric distance are much smaller in Newtonian dynamics; ii) the external field effect in MOND dominates the dynamics of the star cluster only in the outer regime where $r>\refe$. Therefore, with the same SFE, a larger fraction of stars can remain bound to the star cluster after sudden gas expulsion. With a low value of the SFE, $10\%$, all the MOND models can leave a bound core. $f_{\rm bound}$ decreases with the reduction of the SFE, especially for the most massive initial model, model $1$. Approximately $6\%$ of the stellar component is bound at the end of the simulation with a SFE$=10\%$, and about $37\%$ of the stars remain bound with a SFE$=20\%$. The less massive models (i.e., models $2$-$4$) have larger values of $f_{\rm bound}$, since these models are more dominated by deep MOND gravity, and their potentials are significantly deeper than in Newtonian dynamics. Remarkably, for models $2$, $3$ and $4$, the values of $f_{\rm bound} \approx 0.37,~0.55$ and $0.44$, respectively, for a SFE of $10\%$. For the three models, $f_{\rm bound}(r<\refe)>75\%$ with SFE$ \ge 20\%$. This implies that most of the stars are bound to the originally diffuse models after gas expulsion. This is never expected in a Newtonian model, in which GCs are completely destroyed for such an abrupt gas expulsion event with a SFE lower than $33\%$ \citep{Goodwin1997,Boily_Kroupa2003,Baumgardt_Kroupa2007}. Although it is possible for a star cluster to form a bound core with a lower value of the SFE in Newtonian gravity, for instance, for adiabatic gas expulsion \citep{Baumgardt_Kroupa2007}, an initially dynamically cold star cluster \citep{Goodwin2009} or star clusters forming in the deeper potential of a star-cluster complex \citep{Fellhauer_Kroupa2005,Smith+2011}, these physical effects should also be valid in MOND, i.e., they would further increase the survival of star clusters. 
We shall not introduce such physics into this work, given that a single star cluster formed in equilibrium undergoing an immediate gas expulsion presents an evident and robust difference between MOND and Newtonian dynamics. Moreover, simulations of the process of gas expulsion with a SFE of $5\%$ and $2.5\%$ are performed here. The results of bound mass in these simulations are presented in Fig. \ref{fbound} as well. Remarkably, it is possible to leave a bound core with a SFE of $2.5\%$ for models $2$-$4$, and $f_{\rm bound}(r<\refe) \approx 2\%-3\%$. The bound fraction is over $10\%$ for the three models with SFE=$5\%$. Note that two-body relaxation is ignored in the simulations. For the least massive models, models $3$ and $4$, with a SFE of $2.5\%$, the mass of the embedded clusters are only $2500~\msun$, and the bound fraction of the stars is only a few percent. Therefore two-body relaxation could play an important role in these ultra-low SFE systems. The bound core might not be able to survive for $1~\Gyr$ considering the effect of two-body relaxation. For models with a SFE of $5\%$, the bound fraction of stars is one order of magnitude larger than that with a SFE of $2.5\%$, and the two-body relaxation effects can be safely ignored. The effect of two-body relaxation for low mass systems with low SFE in MOND will need to be studied in future work. \begin{table} \begin{center}\vskip -0.0cm \caption{The fraction of bound mass, $f_{\rm bound}$, of the surviving star clusters after gas expulsion evaluated at an age of $1~\Gyr$. The star clusters are truncated at $\refe$ by the external field effect. Columns $2-8$ show $f_{\rm bound}$ for different values of the SFE. ICs are tabulated in Table \ref{plummer}.} \begin{tabular}{lcccccccc} \hline ICs& SFE$=0.5$ & $0.4$ & $0.3$ & $0.2$& $0.1$ & $0.05$& $0.025$ \\ \hline 1& $f_{\rm bound}=$0.96 & 0.87 & 0.69 & 0.35 & 0.06 & 0.01 & 0.000 \\ 2& 1.00 & 0.98 & 0.93 & 0.79 & 0.37 & 0.10 & 0.018 \\ 3& 0.99 & 0.98 & 0.95 & 0.86 & 0.55 & 0.20 & 0.034 \\ 4& 0.97 & 0.94 & 0.89 & 0.76 & 0.44 & 0.14 & 0.022 \\ \hline \end{tabular} \label{tab-fbound} \end{center} \end{table} \begin{figure} \includegraphics[width=90mm]{fbound.pdf} \caption{Bound mass fraction as a function of the SFE in MOND. $f_{\rm bound}$ is calculated assuming a truncation by $\rt$ in the upper panel, and a truncation by $\refe$ in the lower panel.} \label{fbound} \end{figure} \subsection{Lagrangian radii and half mass radius}\label{rmass} \begin{figure*} \begin{centering}\includegraphics[width=140mm]{lr2.pdf} \caption{Lagrangian radii (labelled as $r_{\rm L}$ in the vertical axes) increasing from $10\%$ to $90\%$ in steps of $10\%$, $L_{0.1},~...,~L_{0.9}$, of the star clusters undergoing sudden gas expulsion. The panels in columns from left to right present models $1-4$, and the panels in rows from top to bottom show the results for models with the SFE from $50\%$ to $2.5\%$. The green and red lines indicate the initial tidal radius, $\rt$, and the initial external field truncation radius, $\refe$, respectively.} \end{centering} \label{lr} \end{figure*} Fig. \ref{lr} illustrates the evolution of Lagrangian radii, i.e., the spherical radii enclosing different fractions of the initial stellar mass in a model, increasing in the range of $10\%-90\%$ in steps of $10\%$, $L_{0.1},~L_{0.2},~...~,L_{0.9}$. 
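For equal-mass particles, the Lagrangian radii plotted in Fig. \ref{lr} can be extracted from a snapshot as in the following short sketch; the particle positions are assumed to have already been centred on the cluster, and the fractions refer to the initial stellar mass since no particles are removed from the simulation.
\begin{verbatim}
import numpy as np

def lagrangian_radii(r, fractions=np.arange(0.1, 1.0, 0.1)):
    """Radii enclosing the given fractions of the initial stellar mass,
    for equal-mass particles; r holds the distances of all (initial)
    stellar particles from the cluster centre."""
    r_sorted = np.sort(r)
    n = r_sorted.size
    idx = np.clip((fractions * n).astype(int) - 1, 0, n - 1)
    return r_sorted[idx]
\end{verbatim}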
The initial tidal radius, $\rt$ (green lines), and initial external field truncation radius, $\refe$ (red lines), for the embedded star clusters are shown in the panels if they are $ < L_{0.95}$, where $ L_{0.95}$ is the Lagrangian radius enclosing $95\%$ of the mass of the system. For all the models with a SFE of $50\%$ and of $40\%$, the systems revirialised within a few tens of $\Myr$ after sudden gas expulsion. The $80\%$ Lagrangian radii, $L_{0.8}$, of all the models stay constant after the revirialisation, showing tiny oscillations with an amplitude of $\approx 1\%$. This implies that all the models with a SFE of $40\%-50\%$ can survive with $80\%$ of their initial mass. This is consistent with the results of the bound mass fraction in Sec. \ref{mloss}. With a SFE of $30\%$, which is close to the critical value to leave a bound object in Newtonian dynamics, the $10\%$-$70\%$ Lagrangian radii for the most massive model, model $1$, remain stable after the expansion caused by the removal of gas, and the virial radius, $\refe$, cuts the model off at a radius of $\approx L_{0.7}$. For the less massive models, models $2$-$4$, the evolution of all Lagrangian radii is very similar to that of the same models with larger values of the SFE. When the value of the SFE is reduced to $20\%$, $\refe$ cuts model $1$ off at a radius slightly larger than the $L_{0.3}$ radius. The Lagrangian radii larger than the tidal radius keep expanding with time, which implies that stars outside the tidal radius are unbound and are escaping from the system. Interestingly, in the less massive models, models $2$-$4$, the $70\%$ Lagrangian radii remain stable after the revirialisation. When the value of the SFE is further reduced to $10\%$, although $\rt>L_{0.1}$ for model $1$, $\refe$ is smaller than $L_{0.1}$. The $10\%$ Lagrangian radii are not fully revirialised within $1~\Gyr$ and the fraction of bound mass is $\approx 6\%$ in the lower panel of Fig. \ref{fbound} for this model. A larger fraction of bound mass for models $2$-$4$ is apparent in Fig. \ref{lr}. The $10\%$-$30\%$ Lagrangian radii, $L_{0.1}-L_{0.3}$, are constant after $200~\Myr$. In model $3$, $L_{0.4}$ is also stable. This is consistent with the results shown in Sec. \ref{mloss}. The values of the SFE are further decreased to $5\%$ and $2.5\%$, and all models are truncated by their tidal radii. The whole system of model $1$ expands with time and nothing is bound at the end of the simulations. The stable portion of the Lagrangian radii increases with declining initial mass of the models at the same SFE. However, due to the external field effect, the models are cut off at $\refe \ll \rt$. The innermost $10\%$ Lagrangian radii for models $2$-$4$ with a SFE of $5\%$ are stable in the late stage of the simulations. The evolution of the Lagrangian radii with such a low SFE in MOND significantly differs from that in Newtonian dynamics. In the latter, no flat portion of the Lagrangian radii can exist since nothing can be bound after the gas expulsion. Note that the smallest Lagrangian radius presented in Fig. \ref{lr} is $L_{0.1}$. Models $2$-$4$ with SFE=$2.5\%$ can survive with $2\%-3\%$ of their initial masses, but these cases are not displayed in the bottom panels of Fig. \ref{lr} to avoid cluttering the figure. Furthermore, there is a clear trend that for the same model, for example model $2$, the Lagrangian radii at the end of the simulations are larger when the value of the SFE decreases. 
Such a trend indicates that the size of the remnant is larger with smaller SFE for the same initial density distribution of an embedded cluster. To quantify the expansion of the star clusters in MOND, the $3$-dimensional half mass radius of the final product (truncated at $\refe$), $\rhf$, and the expansion of size, i.e., the ratio between $\rhf$ and the initial $\rh$ of the embedded clusters, are displayed in Fig. \ref{rh}. The values of $\rhf$ grow with the decrease of the SFE in the range of $[10\%,~50\%]$ for all models, and approach the maximal value when SFE$=10\%$. $\rhf$ drops again when the SFE is smaller than $10\%$ for each model.\footnote{For model $1$ with a SFE of $5\%$ and $2.5\%$, the fraction of bound mass is almost zero (see Fig. \ref{fbound}). Therefore, the last two data points for model $1$ can be ignored.} The shapes of the $\rhf/\rh$ curves in Fig. \ref{rh} are very similar to those of $\rhf$ for all the models. In the most massive model, model $1$, $\rhf/\rh \approx 3.3$ for a SFE of $50\%$. This is a bit larger than that in the Newtonian dynamics simulations of \citet{Baumgardt_Kroupa2007}, which is $2.95$ for an isolated star cluster. This should be attributed to the larger fraction of bound mass in MOND. $\rhf/\rh$ increases up to around $30$ for a SFE of $10\%$ for model $1$, corresponding to $\rhf \approx 200~\pc$. This is a very diffuse remnant, with a bound mass of $\approx 6\times 10^4\msun$ (Table \ref{tab-fbound}). Such a stellar system is very similar to the ultra-faint and diffuse (UFD) Milky Way satellites, such as Ursa Major II, Leo T and Canes Venatici II \citep{Simon_Geha2007}. For the less massive models, models $2$-$4$, $\rhf/\rh$ is much smaller than in model $1$. With a SFE of $50\%$, $\rhf/\rh \in [1.3,~2.0]$ for models $2-4$, which is smaller than for the isolated model in \citet{Baumgardt_Kroupa2007}. The more diffuse the initial model is, the smaller the value of $\rhf/\rh$ is at the same SFE. For the most diffuse model $4$, the value of $\rhf/\rh$ increases from $1.3$ to $1.6$ when the SFE decreases from $50\%$ to $10\%$, and then it declines to $1.2$ with a SFE$=2.5\%$. The size of the bound remnant does not increase as much as that in model $1$, given that model $4$ is in the deep MOND limit. In addition, with a SFE of $30\%$, which is often observed in embedded clusters and is very close to the canonical value for cluster survival after gas expulsion in Newtonian simulations, the value of $\rhf$ for model $4$ is about $20~\pc$ (see Fig. \ref{rh}), while the bound mass is about $2.7\times 10^4\msun$. The stellar mass and size of the final star cluster are very similar to those of ultra-faint Milky Way satellites like Willman I and Segue I \citep{Simon_Geha2007}. There are several UFD satellite galaxies with half-mass radii larger than $200~\pc$, including Ursa Major I, Canes Venatici I and Hercules. We also note that these satellites are located at Galactocentric distances larger than $100~\kpc$. As a result, both the virial radii and bound masses are larger in MOND. Therefore, MOND naturally provides a possible explanation for the formation of the UFD satellite galaxies without dark matter. To summarise, for an initially compact model, the size expansion is larger in MOND simulations owing to a larger fraction of bound mass, while for an initially diffuse model, the size expansion is smaller due to the much deeper MOND potential. 
The influence of the sudden removal of gas on the size of the models is more significant in the Newtonian limit than in the deep MOND limit, because in MOND the process is less destructive. In addition, only in mild MOND gravity (i.e., model $1$) can a very large expansion factor ($>20$) be reached. \begin{figure} \includegraphics[width=90mm]{rh.pdf} \caption{Expansion after gas expulsion. Upper panel: $3$-dimensional half mass radius of the final remnants, $\rhf$. Lower panel: the ratios between $\rhf$ and $\rh$ of the embedded clusters listed in Table \ref{plummer}.} \label{rh} \end{figure} \subsection{Mass density profiles} \begin{figure*} \begin{centering} \includegraphics[width=150mm]{den4models.pdf} \caption{The normalised density profiles of the four models. Different colours indicate the bound star clusters surviving gas expulsion with different values of the SFE. The dotted curves are the analytical Plummer density profile and the black dashed curves represent the density of the initial star clusters (ICs). The radius is normalised by $\rhf$ for the final bound systems and by $\rh$ for the ICs. The density is normalised by the average density, $\rho_{\rm h}$, within $\rhf$ ($\rh$ for the ICs). The models are truncated at $r=\refe$. } \label{den} \end{centering} \end{figure*} Fig. \ref{den} presents the mass density profiles of the initial embedded star clusters (the black dashed curves) and the bound star clusters after removing the gaseous component (solid coloured curves). The horizontal axes are normalised by the $3$-dimensional half mass radius, $\rh$, for the initial star clusters (ICs) and $\rhf$ for the final products. The mass density is normalised by the mean density within $\rhf$ (or $\rh$ for the ICs), \beq \rho_{\rm h}\equiv \frac{0.5M_*}{\rhf^3} = 0.5M \times {\rm SFE}\times f_{\rm bound}\rhf^{-3}, \eeq where $M_*$ is the bound mass of the star clusters after sudden gas expulsion. Since each model star cluster is considered individually and has its own density profile, the normalised density profiles make it possible to compare the concentration and slope of the density profiles across the models. In model $1$, compared to the ICs, the mass density profiles of the bound remnants are more concentrated within $0.3\rhf$ after gas expulsion with a SFE $\in [10\%,~50\%]$. The density profiles also fall off more steeply than that of the ICs. This is very different from the behaviour of the clusters surviving in Newtonian dynamics. In Fig. 2 of \citet{Boily_Kroupa2003}, the final profiles of the clusters with a SFE of $45\%-75\%$ in Newtonian dynamics still follow Plummer's mass distribution, i.e., the density distribution of the final products in the central region is flat. Here, however, model $1$ with a SFE of $30\%$ has the steepest density profile. The $\rho(r)$ profile for the remnant with a SFE of $5\%$ is much less concentrated than that of the ICs and appears rather flat within $\rhf$, with (particle Poisson) noise. Note that model $1$ with a SFE of $2.5\%$ does not leave a bound core at the end of the simulation. In models $2$ and $3$, the density profiles of the bound star clusters are also steeper than those of the ICs, but not as steep as in model $1$. Moreover, the density profiles of the star clusters surviving the gas expulsion are less concentrated than those of the ICs when the SFE $=2.5\%$, which is similar to the case of model $1$ with a SFE of $5\%$. In other words, a core with constant density has been left in these models. 
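The normalisation used in Fig. \ref{den} can be summarised by the following sketch, which assumes equal-mass bound particles with strictly positive, pre-centred radii; the number of radial bins is illustrative.
\begin{verbatim}
import numpy as np

def normalised_density_profile(r, m_particle, n_bins=30):
    """Spherically averaged density profile with radii in units of the
    half-mass radius and densities in units of rho_h = 0.5 M_* / r_h^3."""
    r = np.sort(r)
    m_total = m_particle * r.size
    r_half = r[r.size // 2]                 # 3-dimensional half-mass radius
    rho_h = 0.5 * m_total / r_half**3
    edges = np.logspace(np.log10(r[0]), np.log10(r[-1]), n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shell_volume = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid / r_half, counts * m_particle / shell_volume / rho_h
\end{verbatim}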
For the most diffuse embedded cluster model, model $4$, the density profiles of the final products with all SFEs are less concentrated than those of the ICs. The $\rho(r)$ profiles of the bound remnants decline slowly with radius in the central region where $r<\rhf$, and drop rapidly in the outer regions ($r>\rhf$) with a slope similar to that of the ICs. To summarise, after gas expulsion in MOND, an originally massive embedded cluster leaves a bound star cluster with a more cuspy central density profile, while an initially less massive and more diffuse embedded cluster leaves a bound star cluster with a flat density profile within about its half-mass radius. \section{Discussion and Conclusions}\label{summary} In this work, we presented the first simulations of star clusters undergoing gas expulsion in Milgromian gravitation. A series of simulations with the SFE ranging from $2.5\%$ to $50\%$ is performed for embedded clusters with initial masses from $10^5\msun$ to $10^7\msun$. The fractions of bound mass, the Lagrangian radii, the half-mass radii and the mass density profiles of the surviving star clusters are studied. We summarise and discuss our main results here. The kinematics (velocity dispersion profiles and velocity anisotropy profiles) will be presented in a subsequent paper. The tidal radius for a system in MOND is much larger than that in Newtonian dynamics, while the uniform background gravitational field has a much stronger effect. Consequently, the star clusters are truncated at the virial radius, where the strengths of the external and internal fields are comparable \citep{Wu_Kroupa2015}. For a given SFE, after gas expulsion, the fraction of bound mass is larger in the deep MOND limit than in quasi-Newtonian gravity (mild MOND). In general, the star clusters can survive a low value of the SFE, $10\%$, for all the models in MOND, which is impossible if the gas expulsion is applied to the models in Newtonian dynamics. Furthermore, the initially deep MOND models can leave a bound core with a SFE as low as $5\%$ and $2.5\%$. Within the framework of Newtonian dynamics, in order to leave a bound star cluster, the SFE should be at least $33\%$ in sudden gas expulsion \citep{Baumgardt_Kroupa2007}, which is apparently much larger than some observed SFEs in the dense cloud clumps of GMCs \citep[e.g., ][]{Megeath+2016}. By introducing additional physical processes in Newtonian gravitation, such as gradual gas expulsion, star clusters forming in complexes or initially non-equilibrium protoclusters, the SFE can be reduced to $15\%$ to leave a bound object \citep{Baumgardt_Kroupa2007,Fellhauer_Kroupa2005,Goodwin2009,Smith+2011,Shukirgaliyev+2017}. What is more, these additional physical processes can be easily incorporated in MOND, which would further reduce the critical SFE. The ultra-low SFEs allowed in MOND to yield bound stellar systems are relevant for the formation of the distant UFD satellites in the outer regions of the Milky Way, such as Hercules and Leo IV \citep{Geha+2013}. Moreover, the formation of ultra-faint tidal dwarf galaxies as seen in the Tadpole galaxy \citep{Kroupa2015} could be another application of the ultra-low SFE in low-density molecular clouds. The MOND computations show that a larger fraction of mass is bound to a surviving star cluster when the initial model for the embedded cluster is less massive. This implies that with a fixed value of the SFE, more stars are bound to the surviving star cluster when the embedded cluster model is in the deep MOND limit. 
The studies of the Lagrangian radii and the half-mass radii show that a more diffuse model expands less after the removal of gas. This implies that gas expulsion has a more substantial influence on the size of the surviving remnants in a mild MOND (quasi-Newtonian) system than in the deep MOND limit. For a given SFE, the increase of size for a deep MOND system after sudden gas expulsion is much smaller than that for a quasi-Newtonian system. The mass density profiles of the surviving star clusters are more cuspy in the centre for an originally massive embedded cluster model dominated by quasi-Newtonian gravity, while the central density profile is flat for an originally less massive and more diffuse model dominated by MOND gravity. Finally, since the potentials of the bound GCs in MOND are significantly deeper than those in Newtonian dynamics, the kinematics of the final GCs should be very different in the two dynamics. We shall present an analysis of the kinematics in the two dynamics in a follow-up project. \section{Acknowledgments} The authors thank Luca Ciotti, Pasquale Londrillo and Carlo Nipoti for sharing their NMODY code. The authors acknowledge Ortwin Gerhard for sharing the code for generating the self-consistent N-body ICs using Lucy's method. XW acknowledges support through NSFC grants 11503025 and 11421303, Anhui NSF grant 1708085MA20, and ``the Fundamental Research Funds for the Central Universities''. XW also acknowledges support from the ``Hundred Talents Project of Anhui Province''. This project was partially started when XW was an Alexander von Humboldt Fellow at the University of Bonn. \bibliographystyle{aasjournal}
1,116,691,497,483
arxiv
\section{Introduction} Conditional density estimation (CDE) refers to the problem of estimating a conditional density $p(y\vert x)$ for the input $x$ and target $y$. In contrast to classification where the target $y$ is simply a discrete class label, $y$ is typically continuous or high-dimensional in CDE. Furthermore, we want to estimate the full conditional density (as opposed to its conditional mean in regression), an important task when the conditional distribution has multiple modes. CDE problems in which both $x$ and $y$ are high-dimensional have a wide range of important applications, including video prediction, cross-modality prediction (e.g. image-to-caption), model estimation in model-based reinforcement learning, and so on. Classical non-parametric conditional density estimators typically rely on local Euclidean distance in the original input and target spaces \citep{holmes:cde}. This approach quickly becomes ineffective in high dimensions from both computational and statistical points of view. Recent advances in deep generative models have led to new parametric models for high-dimensional CDE tasks, namely the conditional variational autoencoder (CVAE) \citep{sohn:cvae}. CVAEs have been applied to a variety of problems, such as MNIST quadrant prediction, segmentation \citep{sohn:cvae}, attribute-based image generation \citep{yan:attribute2image}, and machine translation \citep{zhang:vaemt}. But CVAEs suffer from two statistical deficiencies. First, they do not learn the distribution of the input $x$. We argue that in the case of high-dimensional input $x$ where there might exist a low-dimensional representation (such as a low-dimensional manifold) of the data, recovering this structure is important, even if the task at hand is to learn the conditional density $p(y|x)$. Otherwise, the model is susceptible to overfitting. Second, for many CDE tasks, the acquisition of labeled points is costly, motivating the need for semi-supervised CDE. A purely conditional model would not be able to utilize any available unlabeled data.\footnote{We define a ``labeled point'' to be a paired $(x, y)$ sample, and an ``unlabeled point'' to be unpaired $x$ or $y$.} We note that while variational methods \citep{kingma:vae,rezende:vae} have been applied to semi-supervised classification (where $y$ is a class label) \citep{kingma:ssl,lars:auxiliary}, semi-supervised CDE (where $y$ is high-dimensional) remains an open problem. We focus on a set of deep conditional generative models, which we call \emph{bottleneck conditional density estimators} (BCDEs). In BCDEs, the input $x$ influences the target $y$ via layers of bottleneck stochastic variables $z=\set{z_i}$ in the generative path. The BCDE naturally has a joint generative sibling model, which we denote the \emph{bottleneck joint density estimator} (BJDE), where the bottleneck $z$ generates $x$ and $y$ independently. Motivated by \citet{bishop:blend}, we propose a hybrid training framework that regularizes the conditionally-trained BCDE parameters toward the jointly-trained BJDE parameters. This is the key feature that enables semi-supervised learning for conditional density estimation in the BCDEs. Our BCDE hybrid training framework is a novel approach for leveraging unlabeled data for conditional density estimation. Using this framework, we establish new benchmarks for the quadrant prediction task \citep{sohn:cvae} in the semi-supervised regime for MNIST, SVHN, and CelebA. 
Our experiments show that {\bf 1)} hybrid training is competitive for fully-supervised CDE, {\bf 2)} in semi-supervised CDE, hybrid training helps to avoid overfitting, performs significantly better than conditional training with unlabeled data pre-training, and achieves state-of-the-art results, and {\bf 3)} hybrid training encourages the model to learn better and more robust representations. \section{Background} \subsection{Variational Autoencoders} The variational autoencoder (VAE) is a deep generative model for density estimation. It consists of a latent variable $z$ with unit Gaussian prior $z\sim \Normal(0,I_k)$, which in turn generates an observable vector $x$. The observation is usually conditionally Gaussian $x \vert z \sim \Normal\big(\mu_\theta(z), \diag(\sigma^2_\theta(z))\big)$, where $\mu$ and $\sigma^2$ are neural networks whose parameters are represented by $\theta$.\footnote{For discrete $x$, one can use a deep network to parameterize a Bernoulli or a discretized logistic distribution.} The VAE can be seen as a non-linear generalization of probabilistic PCA \citep{tipping:pca}, and thus, can recover non-linear manifolds in the data. However, the VAE's flexibility makes posterior inference of the latent variables intractable. This inference issue is addressed via a recognition model $q_\phi(z\vert x)$, which serves as an amortized variational approximation of the intractable posterior $p_\theta(z\vert x)$. Learning in VAEs is done by jointly optimizing the parameters of both the generative and recognition models so as to maximize an objective that resembles a regularized autoencoder reconstruction loss \citep{kingma:vae}, i.e., \begin{align} &\sup_{\theta,\phi} \; \Expect_{q_\phi(z\vert x)} \big[\ln p_\theta(x \vert z)\big] - \KL\big(q_\phi(z\vert x)\;||\;p(z)\big). \label{eq:vae-obj-ae} \end{align} We note that the objective \cref{eq:vae-obj-ae} can be rewritten in the following form that exposes its connection to the variational lower bound of the log-likelihood \begin{align} \sup_{\theta} \Big(\ln p_\theta(x) &- \inf_\phi \; \KL\big(q_\phi(z\vert x) \; || \; p_\theta(z\vert x)\big) \Big)\nonumber\\ &=\sup_{\theta,\phi} \; \Expect_{q_\phi(z\vert x)}\brac{ \ln \frac{p_\theta(x, z)}{q_\phi(z \vert x)}}. \label{eq:vae-obj-posterior-reg} \end{align} We make two remarks regarding the minimization of the term $\KL\big(q_\phi(z\vert x) \;||\; p_\theta(z\vert x)\big)$ in Eq. \ref{eq:vae-obj-posterior-reg}. First, when $q(\cdot|\cdot)$ is a conditionally independent Gaussian, this approximation is at best as good as the mean-field approximation that minimizes $\KL\big(q \;||\; p_\theta(z\vert x)\big)$ over all independent Gaussian $q$'s. Second, this term serves as a form of amortized posterior regularization that encourages the posterior $p_\theta(z\vert x)$ to be close to an amortized variational family \citep{dayan:helmholtz,ganchev:posterior,hinton:wakesleep}. In practice, both $\theta$ and $\phi$ are jointly optimized in \cref{eq:vae-obj-ae}, and the reparameterization trick \citep{kingma:vae} is used to transform the expectation over $z\sim {q_{\phi}(z \vert x)}$ into $\epsilon\sim \Normal(0,I_k);\ z=\mu_\phi(x) + \diag\big(\sigma_\phi(x)\big)\epsilon$, which leads to an easily obtained stochastic gradient.
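Before turning to the conditional model, the single-sample, reparameterized estimate of the bound in \cref{eq:vae-obj-ae} can be sketched in a few lines of NumPy; the linear encoder and decoder with random weights below are placeholders for the actual networks (a Bernoulli decoder is used, as in the footnote above), and the snippet is an illustration rather than the implementation used in the experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) per example, in nats."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def bernoulli_log_lik(x, logits):
    """log p(x | z) for a Bernoulli decoder parameterized by logits."""
    return np.sum(x * logits - np.logaddexp(0.0, logits), axis=-1)

def elbo_single_sample(x, encode, decode):
    """One-sample Monte Carlo estimate of the objective: reparameterize
    z = mu + sigma * eps with eps ~ N(0, I), then reconstruction - KL."""
    mu, log_var = encode(x)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterization trick
    return bernoulli_log_lik(x, decode(z)) - gaussian_kl(mu, log_var)

# Toy linear encoder/decoder with random weights, purely for illustration.
D, K = 784, 50
W_enc = 0.01 * rng.standard_normal((D, 2 * K))
W_dec = 0.01 * rng.standard_normal((K, D))
encode = lambda x: np.split(x @ W_enc, 2, axis=-1)
decode = lambda z: z @ W_dec

x = rng.integers(0, 2, size=(8, D)).astype(float)   # a fake binarized batch
print(elbo_single_sample(x, encode, decode).mean())
\end{verbatim}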
\subsection{Conditional VAEs (CVAEs)} In \citet{sohn:cvae}, the authors introduce the conditional version of variational autoencoders. The conditional generative model is similar to the VAE, except that the latent variable $z$ and the observed vector $y$ are both conditioned on the input $x$. The conditional generative path is \begin{align} p_\theta(z \mid x) &= \Normal\Big(z \mid \mu_{z,\theta}(x), \text{diag}\big(\sigma^2_{z,\theta}(x)\big)\Big)\\ p_\theta(y \mid x, z) &= \Normal\Big(y \mid \mu_{y,\theta}(x, z), \text{diag}\big(\sigma^2_{y,\theta}(x, z)\big)\Big), \end{align} or, when a Bernoulli decoder is used, \begin{align} p_\theta(y \mid x, z) = \Ber\big(y \mid \mu_{y,\theta}(x, z)\big). \end{align} Here, $\theta$ denotes the parameters of the neural networks used in the generative path. The CVAE is trained by maximizing a lower bound of the conditional likelihood \begin{align} \ln p_\theta(y \vert x) \ge \Expect_{q_\phi(z \vert x, y)}\brac{\ln \frac{p_\theta(z \vert x)p_\theta(y \vert x, z)}{q_\phi(z \vert x, y)}},\label{eq:cvae} \end{align} but with a recognition network $q_\phi(z \vert x, y)$, which is typically Gaussian $\Normal\left(z \vert \mu_\phi(x, y), \text{diag}\big(\sigma^2_\phi(x, y)\big)\right)$, and takes both $x$ and $y$ as input. \input{figures/hybrid.tex} \subsection{Blending Generative and Discriminative}\label{sec:bishop} It is well-known that a generative model may yield sub-optimal performance when compared to the same model trained discriminatively \citep{ng:versus}, a phenomenon attributable to the generative model being mis-specified \citep{bishop:blend}. However, generative models can easily handle unlabeled data in the semi-supervised setting. This is the main motivation behind blending generative and discriminative models. \citet{bishop:blend} proposed a principled method for hybrid blending by duplicating the parameters of the generative model into a discriminatively trained $\theta$ and a generatively trained $\thetap$, i.e., \begin{equation} p(\X_l, \Y_l, \X_u, \thetap, \theta) = p(\thetap, \theta)p(\X_u \vert \thetap) p(\X_l \vert \thetap) p(\Y_l \vert \X_l, \theta). \label{eq:bishop-blend} \end{equation} The discriminatively trained parameter $\theta$ is regularized toward the generatively trained parameter $\thetap$ via a prior $p(\thetap,\theta)$ that prefers small $\|\theta-\thetap\|^2$. As a result, in addition to learning from the labeled data $(\X_l,\Y_l)$, the discriminative parameter $\theta$ can be informed by the unlabeled data $\X_u$ via $\thetap$, enabling a form of semi-supervised, discriminatively trained generative model. However, this approach is limited to simple generative models (e.g., naive Bayes and HMMs), where exact inference of $p(y\vert x, \theta)$ is tractable. \section{Neural Bottleneck Conditional Density Estimation} While \citet{sohn:cvae} has successfully applied the CVAE to CDE, the CVAE suffers from two limitations. First, the CVAE does not learn the distribution of its input $x$, and thus, is far more susceptible to overfitting. Second, it cannot incorporate unlabeled data. To resolve these limitations, we propose a new approach to high-dimensional CDE that blends a discriminative model that learns the conditional distribution $p(y|x)$ with a generative model that learns the joint distribution $p(x, y)$. \subsection{Overview} Figure \ref{fig:bde} provides a high-level overview of our approach, which consists of a new architecture and a new training procedure. Our new architecture imposes a bottleneck constraint, resulting in a class of conditional density estimators, which we call \emph{bottleneck conditional density estimators} (BCDEs). 
Unlike CVAE, the BCDE generative path prevents $x$ from directly influencing $y$. Following the conditional training paradigm in \citet{sohn:cvae}, conditional/discriminative training of the BCDE means maximizing the lower bound of a conditional likelihood similar to \eqref{eq:cvae},i.e., \begin{align*} \ln p_\theta(y \vert x) &\ge \C(\theta, \phi; x, y) \\ &=\Expect_{q_\phi(z \vert x, y)}\brac{\ln \frac{p_\theta(z \vert x)p_\theta(y \vert z)}{q_\phi(z \vert x, y)}}. \end{align*} When trained over a dataset of paired $(\X, \Y)$ samples, the overall conditional training objective is \begin{align} \C(\theta, \phi; \X, \Y) &= \sum_{x, y \in \X, \Y} \C(\theta, \phi; x, y). \label{eq:cond} \end{align} However, this approach suffers from the same limitations as CVAE and imposes a bottleneck that limits the flexibility of the generative model. Instead, we propose a {\em hybrid} training framework that takes advantage of the bottleneck architecture to avoid overfitting and supports semi-supervision. One component in our hybrid training procedure tackles the problem of estimating the \emph{joint} density $p(x,y)$. To do this, we use the joint counterpart of the BCDE: the bottleneck joint density estimator (BJDE). Unlike conditional models, the BJDE allows us to incorporate unpaired $x$ and $y$ data during training. Thus, the BJDE can be trained in a semi-supervised fashion. We will also show that the BJDE is well-suited to \emph{factored inference} (see \cref{sec:factor}), i.e., a factorization procedure that makes the parameter space of the recognition model more compact. The BJDE also serves as a way to regularize the BCDE, where the regularization constraint can be viewed as soft-tying between the parameters of these two models' generative and recognition networks. Via this regularization, BCDE benefits from unpaired $x$ and $y$ for conditional density estimation. \input{figures/1z_models.tex} \input{tables/shared.tex} \subsection{Bottleneck Joint Density Estimation} In the BJDE, we wish to learn the joint distribution of $x$ and $y$. The bottleneck is introduced in the generative path via the bottleneck variable $z$, which points to $x$ and $y$ (see \cref{fig:vaex,fig:vaey,fig:vaexy}). Thus, the variational lower bound of the joint likelihood is \begin{align} \ln p_\thetap(x, y) &\ge \J_{xy}(\thetap, \phip; x, y) \nonumber \\ &=\Expect_{q_\phip(z \vert x, y)} \brac{\ln \frac{p(z)p_\thetap(x \vert z) p_\thetap(y \vert z)}{q_\phip(z \vert x, y)}}. \end{align} We use $\{\thetap, \phip\}$ to indicate the parameters of the BJDE networks and reserve $\set{\theta, \phi}$ for the BCDE parameters. For samples in which $x$ or $y$ is unobserved, we will need to compute the variational lower bound for the marginal likelihoods. Here, the {\em bottleneck} plays a critical role. If $x$ were to directly influence $y$ in a non-trivial manner, any attempt to incorporate unlabeled $y$ would require the recognition model to infer the unobserved $x$ from the observed $y$\textemdash a conditional density estimation problem which might be as hard as our original task. In the bottleneck architecture, the conditional independence of $x$ and $y$ given $z$ implies that only the low-dimensional bottleneck needs to be marginalized. 
Thus, the usual variational lower bounds for the marginal likelihoods yield \begin{align} \ln p_\thetap(x) &\ge \J_x(\thetap, \phip; x) = \Expect_{q_\phip(z \vert x)} \brac{\ln \frac{p(z)p_\thetap(x \vert z)}{q_\phip(z \vert x)}},\nonumber \\ \ln p_\thetap(y) &\ge \J_y(\thetap, \phip; y) = \Expect_{q_\phip(z \vert y)} \brac{\ln \frac{p(z)p_\thetap(y \vert z)}{q_\phip(z \vert y)}}. \nonumber \end{align} Since $z$ takes on the task of reconstructing both $x$ and $y$, the BJDE is sensitive to the distributions of $x$ and $y$ and learns a joint manifold over the two data sources. Thus, the BJDE provides the following benefits: \textbf{1}) learning the distribution of $x$ makes the inference of $z$ given $x$ robust to perturbations in the inputs, \textbf{2}) $z$ becomes a joint-embedding of $x$ and $y$, \textbf{3}) the model can leverage unlabeled data. Following the convention in \cref{eq:cond}, the joint training objectives is \begin{align} &\J(\thetap, \phip; \X_u, \Y_u, \X_l, \Y_l) = \label{eq:joint}\\ &\phantom{\hspace{0.5cm}} \J_x(\thetap, \phip; \X_u) + \J_y(\thetap, \phip; \Y_u) + \J_{xy}(\thetap, \phip; \X_l, \Y_l), \nonumber \end{align} where $(\X_l, \Y_l)$ is a dataset of paired $(x, y)$ samples, and $\X_u$ and $\Y_u$ are datasets of unpaired samples. \subsection{Blending Joint and Conditional Deep Models} Because of potential model mis-specifications, the BJDE is not expected to yield good performance if applied to the conditional task. Thus, we aim to blend the BJDE and BCDE models in the spirit of \citet{bishop:blend}. However, we note that \eqref{eq:bishop-blend} is not directly applicable since the BCDE and BJDE are two different models, and not two different views (discriminative and generative) of the same model. Therefore, it is not immediately clear how to tie the BCDE and BJDE parameters together. Further, these models involve conditional probabilities parameterized by deep networks and have no closed form for inference. Any natural prior for the BCDE parameter $\theta$ and the BJDE parameter $\thetap$ should encourage $p_{\text{BCDE}}(y\vert x,\theta)$ to be close to $p_{\text{BJDE}}(y\vert x, \thetap)$. In the presence of the latent variable $z$, it is then natural to encourage $p(z \vert x, \theta)$ to be close to $p(z \vert x, \thetap)$ and $p(y \vert z, \theta)$ to be close to $p(y \vert z, \thetap)$. However, enforcing the former condition is intractable as we do not have a closed form for $p_{\text{BJDE}}(z \vert x, \thetap)$. Fortunately, an approximation of $p_{\text{BJDE}}(z \vert x, \thetap)$ is provided by the recognition model $q(z \vert x, \phip)$. Thus, we propose to softly tie together the parameters of networks defining $p(z\vert x, \theta)$ and $q(z\vert x, \phip)$. This strategy effectively leads to a joint prior over the model network parameters, as well as the recognition network parameters $p(\phip, \thetap, \phi, \theta)$. As a result, we arrive at the following hybrid blending of deep stochastic models and its variational lower bound \begin{align} &\ln p(\X_l, \Y_l, \X_u, \Y_u, \thetap, \phip, \theta, \phi) \ge \ln p(\thetap, \phip, \theta, \phi) ~+ \nonumber \\ &\phantom{\hspace{1cm}} \J_x(\thetap, \phip; \X_u) + \J_y(\thetap, \phip; \Y_u) ~+ \nonumber \\ &\phantom{\hspace{1cm}} \J_x(\thetap, \phip; \X_l) + \C(\theta, \phi; \X_l, \Y_l). 
\label{eq:hybrid} \end{align} We interpret $\ln p(\thetap, \phip, \theta, \phi)$ as an $\ell_2$-regularization term that softly ties the joint parameters $(\thetap, \phip)$ and conditional parameters $(\theta, \phi)$ in an appropriate way. For the BCDE and BJDE, there is a natural one-to-one mapping from the conditional parameters to a subset of the joint parameters. For the joint model described in \cref{fig:vaexy} and the conditional model in \cref{fig:cvae}, the parameter pairings are provided in \cref{table:shared}. Formally, we define $\gamma = \{\theta, \phi\}$ and use the index $\gamma_{a\vert b}$ to denote the parameter of the neural network on the Bayesian network link $b\rightarrow a$ in the BCDE. For example, $\gamma_{z|x}=\theta_{z|x}$, $\gamma_{z|x,y}=\phi_{z|x,y}$. Similarly, let $\gammap = \{\thetap, \phip\}$. In the BJDE, the same notation yields $\gammap_{z|x}=\phip_{z|x}$. The hybrid blending regularization term can be written as \begin{align} \ln p(\theta, \phi, \thetap, \phip) = -\frac{\lambda}{2} \sum_{i\in I} \| \gamma_i - \gammap_i \|_2^2 + \text{const}, \end{align} where $I$ denotes the set of common indices of the joint and conditional parameters. When the index is $z\vert x$, it effectively means that $p(z \vert x, \theta)$ is softly tied to $q(z \vert x, \phip)$, i.e., \begin{align*} \| \gamma_{z \vert x} - \gammap_{z \vert x} \|_2^2 = \| \theta_{z \vert x} - \phip_{z \vert x} \|_2^2\;. \end{align*} Setting $\lambda=0$ unties the BCDE from the BJDE and effectively yields a conditionally trained BCDE, while letting $\lambda \rightarrow \infty$ forces the corresponding parameters of the BCDE and BJDE to be identical. Interestingly, \cref{eq:hybrid} does not contain the term $\J_{xy}$. Since explicit training of $\J_{xy}$ may lead to learning a better joint embedding in the space of $z$, we note the following generalization of \cref{eq:hybrid} that trades off the contribution between $\J_{xy}$ and $\brac{\J_{x} + \C}$, \begin{align} &\ln p(\X_l, \Y_l, \X_u, \Y_u, \thetap, \phip, \theta, \phi) \nonumber \\ &\phantom{\hspace{0.3cm}}\ge \mathcal{H}(\thetap, \phip, \theta, \phi; \X_l, \Y_l, \X_u, \Y_u) \nonumber \\ &\phantom{\hspace{0.3cm}}=\ln p(\thetap, \phip, \theta, \phi) ~+ \nonumber \\ &\phantom{\hspace{1cm}} \J_x(\thetap, \phip; \X_u) + \J_y(\thetap, \phip; \Y_u) ~+ \nonumber \\ &\phantom{\hspace{1cm}} \alpha\cdot \J_{xy}(\thetap, \phip; \X_l, \Y_l) ~+ \nonumber \\ &\phantom{\hspace{1cm}} (1 - \alpha)\cdot\brac{\J_x(\thetap, \phip; \X_l) + \C(\theta, \phi; \X_l, \Y_l)}. \label{eq:final-hybrid} \end{align} Intuitively, the equation computes the lower bound of $p(\X_l, \Y_l)$ either using the joint parameters $\thetap, \phip$, or by factorizing $p(\X_l, \Y_l)$ into $p(\X_l) p(\Y_l \mid \X_l)$ and then computing the lower bound of $p(\Y_l \mid \X_l)$ with the conditional parameters. A proof that the lower bound holds for any $0\le\alpha\le 1$ is provided in \cref{sec:derivation}. For simplicity, we set $\alpha=0.5$ and do not tune $\alpha$ in our experiments. \subsection{Factored Inference}\label{sec:factor} The inference network $q_\phi(z \vert x, y)$ is usually parameterized as a single neural network that takes both $x$ and $y$ as input. Using the precision-weighted merging scheme proposed by \citet{sonderby:lvae}, we also consider an alternative parameterization of $q_\phi(z \vert x, y)$ that takes a weighted-average of the Gaussian distribution $q_\phi(z \vert x)$ and a Gaussian likelihood term $\hat{\ell}(z; y)$ (see \cref{sec:factored}). 
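The soft-tying penalty defined above and this precision-weighted merge can be sketched as follows; parameters are assumed to be flattened into arrays keyed by their Bayesian-network link as in \cref{table:shared}, the merge operates on diagonal Gaussians, and the default $\lambda$ simply echoes the value quoted later in the experiments.
\begin{verbatim}
import numpy as np

def soft_tying_penalty(cond_params, joint_params, lam=1e-2):
    """-log p(theta, phi, theta', phi') up to a constant: the penalty
    (lambda / 2) * sum_i ||gamma_i - gamma'_i||^2 over shared indices."""
    shared = cond_params.keys() & joint_params.keys()
    return 0.5 * lam * sum(
        np.sum((cond_params[k] - joint_params[k])**2) for k in shared)

def precision_weighted_merge(mu_x, var_x, mu_y, var_y):
    """Combine q(z|x) = N(mu_x, var_x) with a Gaussian likelihood term
    centred on mu_y, yielding the factored q(z|x, y)."""
    precision = 1.0 / var_x + 1.0 / var_y
    mu = (mu_x / var_x + mu_y / var_y) / precision
    return mu, 1.0 / precision
\end{verbatim}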
This factored parameterization offers a more compact recognition model and more parameter sharing between the BCDE and BJDE (e.g., see the bottom two rows in \cref{table:shared}), but at the cost of lower flexibility for the variational family $q_\phi(z \vert x, y)$. \input{tables/q1q2q3.tex} \section{Experiments} We evaluated the performance of our hybrid training procedure on the permutation-invariant quadrant prediction task \citep{sohn:multimodal,sohn:cvae} for MNIST, SVHN, and CelebA. The quadrant prediction task is a conditional density estimation problem where an image data set is partially occluded. The model is given the observed region and is evaluated by its perplexity on the occluded region. The quadrant prediction task consists of four sub-tasks depending on the degree of partial observability. 1-quadrant prediction: the bottom left quadrant is observed. 2-quadrant prediction: the left half is observed. 3-quadrant prediction: the bottom right quadrant is \emph{not} observed. Top-down prediction: the top half is observed. In the fully-supervised case, the original MNIST training set $\set{x_i'}_{i=1}^{50000}$ is converted into our CDE training set $\set{\X_l, \Y_l} = \set{x_i, y_i}_{i=1}^{50000}$ by splitting each image into its observed $x$ and unobserved $y$ regions according to the quadrant prediction task. Note that the training set does not contain the original class label information. In the $n_l$-label semi-supervised case, we randomly sub-sampled $n_l$ pairs to create our labeled training set $\set{x_i, y_i}_{i=1}^{n_l}$. The remaining $n_u$ paired samples are decoupled and put into our unlabeled training sets $\X_u = \set{x_i}_{i=1}^{n_u}, \Y_u = \set{y_i}_{i=1}^{n_u}$. Test performance is the conditional density estimation performance on the entire test set, which is also split into input $x$ and target $y$ according to the quadrant prediction task. An analogous procedure is used for SVHN and CelebA. For comparison against \citet{sohn:cvae}, we evaluate the performance of our models on the MNIST 1-quadrant, 2-quadrant, and 3-quadrant prediction tasks. The MNIST digits are statically binarized by sampling from the Bernoulli distribution according to their pixel values \citep{salakhutdinov:dbn}. We use a sigmoid layer to learn the parameter of the Bernoulli observation model. We provide the performance on the top-down prediction task for SVHN and CelebA. We used a discretized logistic observation model \citep{kingma:iaf} to model the pixel values for SVHN and a Gaussian observation model with fixed variance for CelebA. For numerical stability, we rely on the implementation of the discretized logistic distribution described in \citet{salimans:pixel}. In all cases, we extracted a validation set of $10000$ samples for hyperparameter tuning. While our training objective uses a single (IW=$1$) importance-weighted sample \cite{burda:iwae}, we measure performance using IW=$100$ to get a tighter bound on the test log-likelihood \citep{sohn:cvae}. We run replicates of all experiments and report the mean performance with standard errors. For a more expressive variational family \citep{ranganath:hvm}, we use two stochastic layers in the BCDE and perform top-down inference \cite{sonderby:lvae}. We use multi-layered perceptrons (MLPs) for MNIST and SVHN, and convolutional neural networks (CNNs) for CelebA. All neural networks are batch-normalized \citep{ioffe:batchnorm} and updated with \emph{Adam} \citep{kingma:adam}. The number of training epochs is determined based on the validation set. 
The dimensionality of each stochastic layer is $50$, $100$, and $300$ for MNIST, CelebA, and SVHN, respectively. All models were implemented in Python\footnote{\href{https://github.com/ruishu/bcde}{github.com/ruishu/bcde}} using TensorFlow \citep{tensorflow}. \subsection{Conditional Log-Likelihood Performance} \Cref{table:q1,table:q2,table:q3,table:svhn,table:celeba} show the performance comparisons between the CVAE and the BCDE. For baselines, we use the CVAE, the BCDE trained with the conditional objective, and the BCDE initialized via pre-training $\J_x(\cdot)$ and $\J_y(\cdot)$ using the available $x$ and $y$ data separately (and then trained conditionally). Against these baselines, we measure the performance of the BCDE (with and without factored inference) trained with the hybrid objective $\mathcal{H}(\cdot)$. We tuned the regularization hyperparameter $\lambda \in \set{10^{-3}, 10^{-2}, \ldots, 10^{3}}$ on the MNIST 2-quadrant semi-supervised tasks and settled on using $\lambda = 10^{-2}$ for all tasks. \textbf{Fully-supervised regime}. Comparing performance in the fully-supervised regime for MNIST (\cref{table:q1,table:q2,table:q3}, $n_l = 50000$), we show that the hybrid BCDE achieves competitive performance against the pretrained BCDE and out-performs previously reported results for the CVAE \cite{sohn:cvae}. \textbf{Semi-supervised regime}. As the labeled training set size $n_l$ decreases, the benefit of the hybrid training procedure becomes more apparent. The BCDEs trained with the hybrid objective function tend to significantly improve upon their conditionally-trained counterparts. On MNIST, hybrid training of the factored BCDE achieves the best performance. Both hybrid models achieve over a 1-nat improvement over the pre-trained baseline in some cases\textemdash a significant difference for binarized MNIST \cite{wu:decoder}. The conditional BCDE performs very poorly in the semi-supervised tasks due to overfitting. On CelebA, hybrid training of the factored BCDE also achieves the best performance. Both hybrid models significantly out-perform the conditional baselines and yield better visual predictions than the conditional BCDE (see \cref{sec:visualization}). The hybrid models also outperform the pre-trained BCDE with only half the amount of labeled data. On SVHN, the hybrid BCDE with the standard inference model significantly out-performs the conditional baselines. However, the use of factored inference results in much poorer performance. Since the decoder is a discretized logistic distribution with learnable scale, it is possible that the factored inference model is not expressive enough to model the posterior distribution. \textbf{Model entropy.} In \Cref{fig:entropy}, we sample from $p_\theta(y \vert x)$ for the conditional BCDE and the hybrid BCDE. We show that the conditionally-trained BCDE achieves poorer performance because it learns a lower-entropy model. In contrast, hybrid training learns a lower-perplexity model, resulting in a high-entropy conditional image generator that spreads the conditional probability mass over the target output space \citep{theis:note}. \input{figures/entropy.tex} \subsection{Conditional Training Overfits} To demonstrate hybrid training's regularization behavior, we show the test set performance during training (\cref{fig:overfit}) on the 2-quadrant MNIST task ($n_l = 10000$). Even with pre-trained initialization of parameters, models that were trained conditionally quickly overfit, resulting in poor test set performance. 
In contrast, hybrid training regularizes the conditional model toward the joint model, which is much more resilient against overfitting. \input{figures/overfit.tex} \subsection{Robustness of Representation} Since hybrid training encourages the BCDE to consider the distribution of $x$, we can demonstrate that models trained in a hybrid manner are robust against structured perturbations of the data set. To show this, we experimented with two variants of the MNIST quadrant task called the \emph{shift-sensitive} and \emph{shift-invariant} top-bottom prediction tasks. In these experiments, we set $\lambda = 0.1$. \subsubsection{Shift-Sensitive Estimation} In the shift-sensitive task, the objective is to learn to predict the bottom half of the MNIST digit ($y$) when given the top half ($x$). However, we introduce structural perturbation to the top and bottom halves of the image in our training, validation, and test sets by randomly shifting each pair $(x, y)$ horizontally by the same number of pixels (shift varies between $\set{-4, -3, \ldots, 3, 4}$). We then train the BCDE using either the conditional or hybrid objective in the fully-supervised regime. Note that compared to the original top-down prediction task, the perplexity of the conditional task remains the same after the perturbation is applied. \input{tables/td_shift_sense.tex} \Cref{table:sensitive} shows that hybrid training consistently achieves better performance than conditional training. Furthermore, the hybridly trained models were less affected by the introduction of the perturbation, demonstrating a higher degree of robustness. Because of its more compact recognition model, hybrid + factored is less vulnerable to overfitting, resulting in a smaller gap between performance on the shifted and original data. \subsubsection{Shift-Invariant Estimation} The shift-invariant task is similar to the shift-sensitive top-bottom task, but with one key difference: we \emph{only} introduce structural noise to the top half of the image in our training, validation, and test sets. The goal is thus to learn that the prediction of $y$ (which is always centered) is invariant to the shifted position of $x$. \input{tables/td_shift.tex} \input{figures/td_shift_connect.tex} \Cref{table:invariant} shows similar behavior to \Cref{table:sensitive}. Hybrid training continues to achieve better performance than conditional models and suffers a much smaller performance gap when structural corruption in $x$ is introduced. In \cref{fig:invariant}, we show the PCA projections of the latent space sub-region populated by the digit $2$, color-coding all points according to the degree of shift. We observe that hybrid and conditional training of the BCDE result in very different learned representations in the stochastic layer. Because of regularization toward the joint model, the hybrid BCDE's latent representation retains information about $x$ and learns to untangle shift from other features. As expected, conditional training does not encourage the BCDE to be aware of the distribution of $x$, resulting in a latent representation that is ignorant of the shift feature of $x$. \section{Conclusion} We presented a new framework for high-dimensional conditional density estimation. The building blocks of our framework are a pair of sibling models: the Bottleneck Conditional Density Estimator (BCDE) and the Bottleneck Joint Density Estimator (BJDE). These models use layers of stochastic neural networks as a bottleneck between the input and output data. 
While the BCDE learns the conditional distribution $p(y \vert x)$, the BJDE learns the joint distribution $p(x, y)$. The bottleneck constraint implies that only the bottleneck needs to be marginalized when either the input $x$ or the output $y$ is missing during training, thus enabling the BJDE to be trained in a semi-supervised fashion. The key component of our framework is our hybrid objective function that regularizes the BCDE towards the BJDE. Our new objective is a novel extension of \citet{bishop:blend} that enables the principle of hybrid blending to be applied to deep variational models. Our framework provides a new mechanism for the BCDE, a conditional model, to become more robust and to learn from unlabeled data in semi-supervised conditional density estimation. Our experiments showed that hybrid training is competitive against pre-training in the fully-supervised regime and achieves superior performance in the semi-supervised quadrant prediction task in comparison to conditional models, achieving new state-of-the-art performance on MNIST, SVHN, and CelebA. Even with pre-trained weight initializations, the conditional model is still susceptible to overfitting. In contrast, hybrid training is significantly more robust against overfitting. Furthermore, hybrid training transfers the nice embedding properties of the BJDE to the BCDE, allowing the BCDE to learn a better and more robust representation of the input $x$. The success of our hybrid training framework makes it a prime candidate for other high-dimensional conditional density estimation problems, especially in semi-supervised settings. \clearpage
\section{\label{sec:introductionl}Introduction} \subsection{\label{subsec:description}Problem Description} With the advancement of experimental probes of quantum spin systems in both one dimension (1D)\cite{Jompol2009} and two dimensions (2D)\cite{Gross2017}, rich ground state (GS) phases such as the disordered Tomonaga-Luttinger spin liquid and the ordered non-collinear antiferromagnetic state have been revealed. An ideal model describing them is the antiferromagnetic spin-$\frac{1}{2}$ Heisenberg model. Such a model on an infinite 1D lattice is not ordered, with power-law-decaying spin-spin correlations\cite{Bethe1931}. But, "more is different"\cite{Anderson1972}. When a collection of infinite 1D lattices is isotropically coupled to form an infinity-by-$N$ square lattice called a spin ladder, the GS is predicted to be not ordered, with exponentially decaying correlations, for a few coupled quasi-1D lattices\cite{Greven1996} but ordered in 2D\cite{Manousakis1991}. Hereafter, a 2D lattice refers to an infinity-by-infinity square lattice in the thermodynamic limit, and we confine the discussion to even $N$. This implies that there is at least one dimensional transition from 1D to 2D, either asymptotically at $N=\infty$, as the nonlinear sigma model (NLSM) predicts\cite{Chakravarty1996,Sierra1996}, or critically at some finite width $N_c$. Note that this change of dimensional characteristics occurs purely due to a critical change of lattice topology, different from changes caused by the variation of temperature or spin-spin coupling anisotropy\cite{Hoang1977,Kung2017,Raczkowski2013,Disseler2015}. Owing to the perturbative nature of mapping the spin-$\frac{1}{2}$ ladder to the NLSM, its prediction that the gap decays exponentially with increasing $N$ implies the existence of some threshold beyond which the perturbation would be inapplicable. In fact, its prediction of the existence of a gap was numerically checked only for $M$-by-$N$ lattices with $M\gg N$ and $N$ up to $6$\cite{White1994,Dagotto1996}. Moreover, the latest size scaling of $N$-by-$N$ or $M$-by-$N$ lattices in the literature has not yet handled larger $N$ and hence has not captured any dimensional transition\cite{Stoudenmire2012,Ramos2014}, though the possibility is not excluded\cite{Landsman2013}. Therefore, it is worth exploring the possibility of such a quantum dimensional transition at finite $N$ for a true infinity-by-$N$ lattice of larger $N$, by monitoring emerging order parameters such as the staggered magnetization and the spin-spin correlation at infinite separation. However, this is not an easy task for either analytic or numerical methods. 
Other than the aforementioned NLSM, on the analytic side, the Bethe ansatz\cite{Bethe1931} only works for $N=1$; bosonization\cite{Luther1974,Luther1975} predicts a power-law decay of the spin-spin correlation $C\left(r\right)\equiv\langle S^z_{\left(i,1\right)} S^z_{\left(i+r,1\right)}\rangle\propto r^{-1}$ for $N=1$, $r$ being the spin-spin separation; conformal field theory (CFT)\cite{Affleck1990,Affleck1998a} further predicts a logarithmic correction multiplying the power function, which was confirmed\cite{Wang2012} to become effective asymptotically beyond about $1000$ lattice separations; CFT\cite{Shelton1996} also gives solutions in limiting cases for $N>1$, such as the 2D Ornstein-Zernike form of the spin-spin correlations for weakly coupled spin ladders; spin wave theory (SWT)\cite{Chernyshev2009,wang2011} essentially provides an approximation in the continuum limit and assumes magnon excitations, excluding spinon excitations, hence giving no decisive observation of the quantum dimensional transition. Numerically, the problem is not feasible for statistical methods such as the Monte Carlo method\cite{Ceperley1980}, which is otherwise powerful in searching for the energy of a finite system. Finite $M$-by-$N$ lattices, with $M\gg N$, were simulated\cite{Greven1996} in an attempt to scale away the finite-size effect. A similar finite lattice was also simulated\cite{Ramos2014} by the density matrix renormalization group method (DMRG)\cite{White1992,White1993}. But sweeping an infinity-by-$N$ lattice to establish long-range spin-spin correlations for large $N$ is not yet practical. Variants of DMRG such as the infinite time-evolving block decimation (iTEBD) method, among others\cite{Kariyado2015}, were applied to the case of $N=2$\cite{Furukawa2010}, yielding results conflicting with those of the infinite quasi-1D entanglement perturbation theory (iqEPT)\cite{Wang2015}, but not to larger $N$ because of the rapid increase of the number of density matrix elements needed for sufficient accuracy. Tensor network state (TNS)\cite{Nishino2000,Nishio2004} based methods such as the infinite projected entangled pair state (iPEPS)\cite{Jordan2008,Shi2016}, illustrated in Fig.\ref{fig:TNSandMPS}(a) and natively designed for an infinite 2D system, still do not show sufficient efficiency to tackle tensor bond index sizes greater than a few dozen\cite{Shi2016,Ehlers2017}. This hinders their application to investigating the very fine structure of the spin-spin correlation covering large separations within large systems\cite{Li2012}. Nevertheless, understanding the dimensional transition, from non-ordered 1D (including quasi-1D) spin lattices to the ordered 2D lattice, is important for choosing the right numerical strategy to deal with strong correlations in low-dimensional quantum systems. For instance, DMRG works extremely well in 1D but not in 2D. When dealing with a 2D lattice, the wave function obtained in DMRG has a matrix product state (MPS)\cite{Fannes1992,Oestlund1995,Verstraete2004,Chung2006,Chung2007,Chung2009,Wang2012,Garcia2007,Crosswhite2008,McCulloch2008a} form that is built on a 1D lattice winding through, and resembling, the 2D lattice\cite{Stoudenmire2012,Ehlers2017}. See Fig.\ref{fig:TNSandMPS}(b) for an illustration. According to the area law\cite{Eisert2010}, the required MPS rank (bond index size) characterizing the entanglement in the wave function increases too rapidly in this way\cite{Schollwoeck2005}. 
It is obvious that rather than treating the infinity-by-$N$ lattice as a winding 1D lattice, it can also be treated as a 1D lattice by converting the $N$ physical sites in each rung into an effective site\cite{Wang2015} (Fig.\ref{fig:lattice}(b)). \begin{figure} \begin{center} $\begin{array}{ccc} &\mbox{(a) original lattice} & \mbox{(b) effective lattice} \\ & \includegraphics[width=10.5pc]{original_lattice}& \includegraphics[width=10.5pc]{effective_lattice}\\ \end{array}$ \caption{\label{fig:lattice} Spin-$\frac{1}{2}$ antiferromagnetic Heisenberg model on an infinity-by-$N$ ladder, $N=4$ for example. The circles represent lattice sites, and the lines and curves connecting the nearest neighboring sites represent the spin-spin interactions. Periodic boundary conditions are assumed. (a) Original lattice. (b) Effective lattice, whose single site is indicated by the dashed rectangle enclosing the $N$ lattice sites in the rung of the original lattice.} \end{center} \end{figure} We take the latter approach in this study and use the infinity-by-$N$ lattice to investigate whether the system wave function, no matter which dimensional character its GS turns out to have, can be universally represented by an MPS whose bonding topology is the same as the lattice linking architecture (compare the effective lattice structure in Fig.\ref{fig:lattice}(b) and the MPS structure in Fig.\ref{fig:TNSandMPS}(c)). Hence, we hope to tame the increase of the MPS rank with $N$ at a manageable rate. \begin{figure} \begin{center} $\begin{array}{ccc} &\mbox{(a) TNS} & \mbox{(b) MPS(winding)} \\ & \includegraphics[width=8.5pc]{TNS} & \includegraphics[width=8.5pc]{MPS_winding}\\ \end{array}$ $\begin{array}{ccc} & \mbox{(c) MPS(effective 1D)}& \\ &\includegraphics[width=10.pc]{MPS_effective}& \\ \end{array}$ \caption{\label{fig:TNSandMPS} Various designs for the lattice wave function using tensor/matrix products. $\xi$ denotes a tensor. The solid lines refer to the bonding indices while the dashed lines refer to the space indices. (a) Tensor network state (TNS). Each tensor has four bonding indices, which resemble the lattice architecture shown in Fig.\ref{fig:lattice}, and one space index, which accounts for a lattice site. TNS is employed in iPEPS, etc. (b) Matrix product state (MPS) built on a winding lattice chain. Each tensor has two bonding indices that differ from the lattice linking architecture and one space index for a site. The DMRG wave function after projections is reduced to this form of MPS. (c) MPS built on a lattice chain with translational symmetry. The two bonding indices resemble the architecture of the effective lattice shown in Fig.\ref{fig:lattice}(b); the combination of dashed lines, each of which corresponds to a physical site of each $\xi$, is treated as a single space index running from 1 to $2^N$ for spin-$\frac{1}{2}$.} \end{center} \end{figure} We study the model described by \begin{equation} \label{eq:hamiltonian} H=J\sum_{\langle \left(i,j\right),\left(i',j'\right)\rangle}{{\vec S}_{\left(i,j\right)}\cdot {\vec S}_{\left(i',j'\right)}}, \end{equation} where ${\vec S}_{\left(i,j\right)}$ is the spin vector operator on the ${\left(i,j\right)}^{\text{th}}$ lattice site, with $\it{i}$ running from $-\infty$ to $\infty$ in the longitudinal direction (LD) and $\it{j}$ running from $1$ to $N$ in the rung. $\langle\rangle$ denotes summation over pairs of nearest neighboring sites. $J$ is the spin-spin coupling integral and is normalized to $1$ hereafter. The periodic boundary condition (PBC) is assumed in both directions. 
See Fig.\ref{fig:lattice}(a) for the schematic of the lattice geometry and interaction configurations. \subsection{\label{subsec:methodology}Methodology} As mentioned, we divide an infinity-by-$N$ square lattice into an infinite chain of effective sites, each of which is converted from the $N$ sites in the rung. See Fig.\ref{fig:lattice}(b) for illustration. Each effective site has $2^N$ degrees of freedom. The wave function is written in a matrix product state that is built on the effective sites as, \begin{equation} \label{eq:mps} \mid \psi\rangle=\sum_{\cdots r^{i-1}r^i\cdots}{tr\left(\cdots \xi_{r^{i-1}}\cdot \xi_{r^i}\cdots\right)\cdots\mid \phi_{r^{i-1}}^{i-1}\rangle\mid \phi_{r^{i}}^{i}\rangle\cdots} \end{equation} . This is composed of only one tensor $\xi$ after implementation of the antiferromagnetic checkerboard transformation (Sec.\ref{sec:checkerboard}). $\xi$ has three indices, as in Fig.\ref{fig:TNSandMPS}(c). The first index is associated with the local quantum state of the $i^{\text{th}}$ effective site, $\mid \phi_{r^i}^i \rangle$. Therefore, it runs from $1$ to $2^N$. The other two legs are the left/right bond indices contracting with the right/left bond indices of the front/rear tensors. These two legs run from $1$ to $P$; $P$ is a chosen parameter characterizing the entanglement in the MPS wave function, and a larger $P$ gives a more precise representation of the wave function\cite{Schollwoeck2005}. Meanwhile, the Hamiltonian $H$ is transformed to a matrix product operator (MPO)\cite{Crosswhite2008,Chung2007,Chung2009,Wang2012,Wang2015,Verstraete2004a,Pirvu2010,Schollwoeck2005} via the density operator $e^{-\beta H}$, with $\beta$ being a small positive constant\cite{Chung2006,Chung2007,Wang2012,Wang2015}, as \begin{widetext} \begin{align} \label{eq:mpo} e^{-\beta H}=&\sum_{\substack{\cdots r^{i-1}r^i\cdots\\\cdots s^{i-1}s^i\cdots}}{tr\left(\cdots \Gamma_{r^{i-1}s^{i-1},mn}\left(\beta\right)\cdot \Gamma_{r^is^i,no}\left(\beta\right)\cdots\right) \cdots\mid \phi_{r^{i-1}}^{i-1}\rangle\mid \phi_{r^{i}}^{i}\rangle\cdots\langle \phi_{s^{i-1}}^{i-1}\mid\langle \phi_{s^{i}}^{i}\mid\cdots} \end{align} \end{widetext} . See Sec.\ref{sec:mpo} for details. Note that the checkerboard symmetry is applied as well to retain only one tensor $\Gamma\left(\beta\right)$, which has four legs. The first two legs are similar to the first leg of the MPS tensor, associated with the local quantum state of the $i^{\text{th}}$ effective site $\mid \phi_{r^i}^i \rangle$ and its conjugate $\langle \phi_{s^i}^i \mid$, running from 1 to $2^N$. The other two legs are the left/right bond indices. But different from the MPS bond indices, they run from 1 to $Q\equiv4^N$ (Fig.\ref{fig:mpo}, Sec.\ref{sec:mpo}). Namely, they are explicitly determined by the lattice topology and the interaction configuration. Both the MPS and the MPO are entangled quantities in that they cannot be simply expressed as a product of multiplicative terms, each of which only involves local quanta. The entanglement in the MPS is extensively studied\cite{Eisert2010,Schollwoeck2005}, while that of the MPO is rarely addressed. 
Another entangled quantity, whose entanglement is also rarely addressed, is the observation of the energy (via the density operator), which arises straightforwardly from the MPS and MPO as \begin{widetext} \begin{align} \label{eq:observation} \langle \psi \mid e^{-\beta H} \mid \psi\rangle =&tr\left[\cdots\left(\xi_{s^{i-1},\alpha_1\delta_1}\Gamma_{s^{i-1}r^{i-1},\alpha_2\delta_2}\xi_{r^{i-1},\alpha_3\delta_3}\right)\cdot \left(\xi_{s^{i},\delta_1\eta_1}\Gamma_{s^{i}r^{i},\delta_2\eta_2}\xi_{r^{i},\delta_3\eta_3}\right)\cdots\right] \end{align} \end{widetext} , where each parenthesized term in the trace separated by the product is a new tensor, the bond indices between which can be combined into new compound indices that give the matrix rank in equation (\ref{eq:observation}). The matrix ranks in equations (\ref{eq:mps}), (\ref{eq:mpo}), and (\ref{eq:observation}) characterize the entanglement in the MPS, the MPO, and the energy observation, respectively. It is clear that the rank of the last quantity is determined by the rank of the MPS and that of the MPO together. Explicitly, it equals $QP^2$. The entanglement in $\langle e^{-\beta H}\rangle$ is important for controlling the burden of the numerical simulation because evaluating this observation is a precursor step to variationally optimizing the wave function (see Sec.\ref{sec:mps}). It leads to a singular value decomposition (SVD) of rank $QP^2$ for the building unit of $\langle e^{-\beta H}\rangle$. The fast increase of this rank with $N$ dominates the other processes, as explained below. On the other hand, varying $\langle e^{-\beta H}\rangle$ with respect to the MPS tensor $\xi$ yields a generalized eigenvalue equation (GEE) of rank $2^NP^2$, where $2^N$ accounts for the local quantum space, as will be explained in Sec.\ref{sec:mps}. The GEE is formed using the trial MPS wave function at the very beginning and is then updated by the solved eigenvector corresponding to the largest eigenvalue at each iteration. Eventually the eigenvector approaches a fixed state for a given rank $P$. Adding small new elements to the obtained MPS matrix of rank $P$ to form a new trial MPS matrix of slightly larger rank $P+\Delta P$, we carry on the previous process until convergence. Thus, the obtained quantities converge with $P$. The final largest eigenvalue, i.e., $e^{-\beta \epsilon_0}$, gives the GS energy $\epsilon_0$. Obviously, for $N>1$, the rank $QP^2$ of the SVD dominates over the rank $2^NP^2$ of the GEE. Nevertheless, the essential concept that distinguishes iqEPT from other MPS methods is that it expresses the Hamiltonian as a parameterized MPO\cite{Chung2006,Chung2007,Wang2012}, and this parameter was used to reduce the linking complexity between the building units of the MPO\cite{Wang2015}. In this work, we further point out that this is equivalent to treating the entanglement in the MPO perturbatively, as will be explained in Sec.\ref{sec:ept}. In fact, owing to the small positive parameter $\beta$, \begin{equation} \label{eq:ompept} \Gamma\left(\beta\right)=\Gamma_0\left(\beta\right)+\Gamma_2\left(\beta^2\right) \end{equation} where the first term on the right-hand side collects the elements of the MPO that are of zeroth and first order in $\beta$, while the second term includes terms of higher order. If one sets $\beta$ to be a value as small as $10^{-7}$, only $\Gamma_0$ needs to be retained. A merit is that $\Gamma_0$ is extremely sparse. 
Hence, it can be reduced to a much smaller tensor whose rank (bond index size) is $3N+1$, in contrast to the original scaling of $4^N$; see Sec.\ref{sec:ept}. After this major reduction of the MPO rank $Q$, the GEE scale $2^NP^2$ becomes dominant over the new SVD rank $\left(3N+1\right)P^2$. Then, we integrate the Jacobi-Davidson method for the GEE\cite{Sleijpen1996} with both the MPO and the MPS, without explicitly forming the GEE. The details are given in Sec.\ref{sec:davidson}. As a result, we were able to handle an unprecedented GEE rank as large as $2^{14} \times 350^2 = 2.0 \times 10^9$ when $P = 350$ for $N = 14$. We organize the remainder of the discussion as follows. We first discuss the parameterized MPO for an infinity-by-$N$ spin-$\frac{1}{2}$ antiferromagnetic Heisenberg model in Sec.\ref{sec:mpo} and the variation of the MPS in the presence of the MPO in Sec.\ref{sec:mps}. Next, we discuss the entanglement perturbation of the MPO and of the Hilbert space in Sec.\ref{sec:ept}. There, we show that the area law is not applicable in this study. Implementation of the antiferromagnetic checkerboard symmetry is given in Sec.\ref{sec:checkerboard} to simplify the formulations in the previous sections. It is followed by the integration of the Davidson eigenvalue solver with the MPS and MPO in Sec.\ref{sec:davidson}. A useful relationship between the spin-spin correlations and the staggered magnetization is discussed in Sec.\ref{sec:correlation}. There, we also demonstrate that $Ln\left(LnC_r-LnC_{r+1}\right)$, $C_r$ being the spin-spin correlation at separation $r$, can be used to interpret order/disorder. Sec.\ref{sec:reduction} introduces the space reduction in the MPS. Sec.\ref{sec:result} discusses the results. Conclusions are drawn in Sec.\ref{sec:Conclusion}, with an outlook in Sec.\ref{sec:outlook}. \section{\label{sec:mpo}Parameterized matrix product operator for the Heisenberg spin-$\frac{1}{2}$ model on infinity-by-$N$ square lattices} \begin{figure} \begin{center} $\begin{array}{cc} &\mbox{(a) construction of MPO} \\ & \includegraphics[width=20.pc]{mpo}\\ & \mbox{(b) symbolic MPO}\\ & \includegraphics[width=13.pc]{symbolic_mpo} \end{array}$ \caption{\label{fig:mpo} Illustration of the MPO for $N=4$. (a) The construction. The local quantum space is represented by rectangles enclosing $4$ physical sites (circles in gray color). Out of the page is the conjugate of those spaces. The building units of the MPO, symbolized by $\Gamma^1$ and $\Gamma^2$ in (b), are enclosed within the tilted rectangles in between the spaces. Going from the space to its conjugate, the first four pairs of green-red solid circles are stacked shell-by-shell in sequence. They account for the on-effective-site interactions, with the green/red circles denoting the $f$/$g$ operators mentioned in the text. The dashed lines denote the contraction of bond indices running from $1$ to $4$. The solid lines denote the inner product between operators operating sequentially on the same physical lattice site. Following the on-effective-site operators are one shell of $f$/$g$ operators bonding the $\left(i-1\right)^{th}$/$i^{th}$ effective sites, and another shell of $g$/$f$ operators bonding the $\left(i-2\right)^{th}$/$\left(i-1\right)^{th}$ effective sites and the $i^{th}$/$\left(i+1\right)^{th}$ effective sites, respectively. They are stacked layer-by-layer from bottom to top, forming inter-effective-site interactions and hence giving rise to the entanglement in the MPO, represented schematically by the horizontal dashed lines in (b). 
The individual space indices $r_{i=1,4}$/$s_{i=1,4}$ are combined into the single indices of the vertical solid lines in (b), while the individual bond indices $m_{i=1,4}$/$n_{i=1,4}$ are combined into the horizontal dashed lines in (b).} \end{center} \end{figure} In what follows, the Einstein summation convention is implied for repeated indices in a formula unless stated otherwise. $\left(x\leftrightarrow y\right)$ denotes pairing between two physical sites $x$ and $y$. $\lfloor \alpha_1, \cdots ,\alpha_m\rfloor$ combines $m$ individual indices $\alpha_1=1,\cdots, k_1,\cdots , \alpha_m=1,\cdots, k_m$ into a single flattened index $\alpha=1,\cdots ,\prod_{i}^m{k_i}$. After the physical sites in the rung are converted into single effective sites, the Hamiltonian is further rewritten in the following form, \begin{equation} \label{eq:hbond1} H=\sum_{\{a,b\}}{\left(H_a+H_b\right)} \end{equation} , where the summation runs over the two sets of bonds, $\{a\}$ and $\{b\}$, between nearest neighboring physical sites. If the two physical sites of a bond reside on different effective sites, the bond is collected in the inter-effective-site set $\{a\}$; otherwise it is collected in the intra-effective-site set $\{b\}$. Fig.\ref{fig:mpo} illustrates the various bonds in the case of $N=4$. In this case, each effective site, say the $i^{th}$ site, has four intra-effective-site bonds $b_1^i$, $b_2^i$, $b_3^i$ and $b_4^i$. It also participates in eight inter-effective-site bonds. The first four are labeled with index $i-1$: $a_1^{i-1}$, $a_2^{i-1}$, $a_3^{i-1}$ and $a_4^{i-1}$. They bond physical sites residing on the $\left(i-1\right)^{th}$ and $i^{th}$ effective sites. The last four are labeled with index $i$; they are $a_1^i$, $a_2^i$, $a_3^i$ and $a_4^i$, bonding physical sites residing on the $i^{th}$ and $\left(i+1\right)^{th}$ effective sites. These sets of bonds are used to rewrite the Hamiltonian as follows, \begin{align} \label{eq:density} e^{-\beta H} \approx \prod_{i}{\left(\prod_{k}{e^{-\beta H_{a_k^i}}}\prod_{l}{e^{-\beta H_{b_l^i}}}\right)}+\bigcirc\left(\beta^{2}\right) \end{align} where $i=1,\cdots ,\infty$ and $k,l=1,\cdots ,N$. Although the ordering of the bonds does not matter when they are grouped in a single exponent, it matters on the right hand side of equation (\ref{eq:density}) in that the ordering of $i$, $k$, $l$ is equivalent to permuting the Hamiltonian matrix. The permutation does not affect physical properties but requires a corresponding linear manipulation of the representation basis. We choose to operate $\prod_{k}{e^{-\beta H_{a_k^i}}}$ on $\prod_{l}{e^{-\beta H_{b_l^i}}}$ for a given $i$. A physical site of the $i^{th}$ effective site is denoted as $x^i$, $x=1,\cdots ,N$ from bottom to top in the rung. For the set $\{b^i_l\}$, it is ordered such that $l=1:\left(1^i \leftrightarrow 2^i\right)$, $2:\left(2^i \leftrightarrow 3^i\right),\cdots,$ $N:\left(N^i \leftrightarrow 1^i\right)$. For the set $\{a^i_k\}$, it is ordered such that $k=1:\left(1^i \leftrightarrow 1^{i+1}\right)$, $2:\left(2^i \leftrightarrow 2^{i+1}\right),\cdots,$ $N:\left(N^i \leftrightarrow N^{i+1}\right)$. It is clear that the successive product of $e^{-\beta H_{b_l^i}}$ involves only one effective site. The $N$ operations are stacked in a shell-by-shell manner in and out of the page. It is followed by the successive product of $e^{-\beta H_{a_k^i}}$, which operates on physical sites across two effective sites. They are stacked from bottom to top in a layer-by-layer manner. See Fig.\ref{fig:mpo} for details. 
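The bond-wise splitting in equation (\ref{eq:density}) can be checked numerically on a very small cluster. The following Python sketch (an illustration only; the cluster, the bond list, and the values of $\beta$ are arbitrary toy choices) compares $e^{-\beta H}$ of a four-site plaquette with the ordered product of its bond factors and confirms that the deviation shrinks by roughly two orders of magnitude when $\beta$ is reduced by one order, consistent with an error of order $\beta^2$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators.
sx = 0.5 * np.array([[0., 1.], [1., 0.]])
sy = 0.5 * np.array([[0., -1j], [1j, 0.]])
sz = 0.5 * np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def embed(op1, site, n):
    """Embed a single-site operator at position `site` of an n-site cluster."""
    mats = [I2] * n
    mats[site] = op1
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def bond(i, j, n):
    """Heisenberg coupling S_i . S_j on an n-site cluster."""
    return sum(embed(s, i, n) @ embed(s, j, n) for s in (sx, sy, sz))

n = 4
bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]     # the four edges of a 2 x 2 plaquette
H = sum(bond(i, j, n) for i, j in bonds)

for beta in (1e-2, 1e-3):
    prod = np.eye(2**n, dtype=complex)
    for i, j in bonds:                        # ordered product of bond factors
        prod = prod @ expm(-beta * bond(i, j, n))
    print(beta, np.max(np.abs(expm(-beta * H) - prod)))
\end{verbatim}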
Each individual density operator for a general bond $\left(i\leftrightarrow j\right)$ on the right side of equation (\ref{eq:density}) is Taylor-expanded, utilizing the small positive $\beta$, \begin{align} \label{eq:density2} e^{-\beta\left(S^x_i S^x_j+S^y_i S^y_j+S^z_i S^z_j\right)}\approx &I-\beta\left(S^x_i S^x_j+S^y_i S^y_j+S^z_i S^z_j\right)\notag\\ \equiv &f_{\alpha}\otimes g_{\alpha} \end{align} . $f_{\alpha=1,\cdots,4}$, operating on the first physical site of a bond, are the $2\times 2$ matrices (identity, $\sqrt{\beta}S^x$, $\sqrt{\beta}{\bar{S}}^y$ and $\sqrt{\beta}S^z$); $g_{\alpha=1,\cdots,4}$, operating on the second physical site, are the $2\times 2$ matrices (identity, $-\sqrt{\beta}S^x$, $\sqrt{\beta}{\bar{S}}^y$ and $-\sqrt{\beta}S^z$). Note that the $g$'s differ from the $f$'s in that the former carry the minus signs; ${\bar{S}}^y_i\otimes{\bar{S}}^y_j=-S^y_i\otimes S^y_j$, with ${\bar{S}}^y$ being a real version of $S^y$ introduced to avoid any complex element in the matrices, and this extra minus sign is why no explicit sign is attached to the $\alpha=3$ terms. Now consider the product of the individual density operators of two bonds $c$ and $c'$. Two rules follow, applied in different cases. Rule A. $e^{-\beta H_c}e^{-\beta H_{c'}}=f_{\alpha}\otimes g_{\alpha}\otimes f_{\gamma}\otimes g_{\gamma}$ if there is no shared physical site. Rule B. $e^{-\beta H_c}e^{-\beta H_{c'}}=f_{\alpha}\otimes \left( g_{\alpha}\cdot f_{\gamma}\right) \otimes g_{\gamma}$ if there is a shared physical site. Rule A is transparent. The formula shown in Rule B applies to the case where the shared site is the first site of bond $c^{\prime}$ and the second site of bond $c$. Nevertheless, the combinations of the positions of the shared site in $c$ and $c'$ are diverse, such as first vs first, second vs second, etc. They all appear in Fig.\ref{fig:mpo}. The formula in Rule B is adjusted slightly accordingly. After applying these rules, the density operator involving the $i^{th}$ effective site is \begin{align} \label{eq:omp} \left(f_{n_1}\cdot g_{m_1}\cdot g_{l_4}\cdot f_{l_1}\right)\notag\\ \otimes \left(f_{n_2}\cdot g_{m_2}\cdot f_{l_2}\cdot g_{l_1}\right)\notag\\ \otimes \left(f_{n_3}\cdot g_{m_3}\cdot f_{l_3}\cdot g_{l_2}\right)\notag\\ \otimes \left(f_{n_4}\cdot g_{m_4}\cdot f_{l_4}\cdot g_{l_3}\right) \end{align} . Each index of the bond set $\{b^i\}$, i.e., $l_1$, $l_2$, $l_3$ and $l_4$, appears twice in expression (\ref{eq:omp}), implying self-contraction of the intra-effective-site bonds. The un-contracted indices $m_1$, $m_2$, $m_3$, and $m_4$ entangle the $i^{th}$ effective site with the $\left(i-1\right)^{th}$ effective site through the bond set $\{a^{i-1}\}$; the other un-contracted indices $n_1$, $n_2$, $n_3$, and $n_4$ entangle the $i^{th}$ effective site with the $\left(i+1\right)^{th}$ effective site through the bond set $\{a^i\}$. In fact, fixing the indices $m$'s and $n$'s to $\{m\}_0$ and $\{n\}_0$, the resultant quantity in each parenthesis of (\ref{eq:omp}) is a local density matrix $\left(\rho_{r_u,s_u}\right)$, where $u=1,\cdots,4$ refers to the four parentheses. Each of the $r$'s and $s$'s runs from $1$ to $2$, accounting for spin-$\frac{1}{2}$. The combinations $r^i\equiv\lfloor r_1,r_2,r_3,r_4\rfloor$ and $s^i\equiv \lfloor s_1,s_2,s_3,s_4\rfloor$ run from 1 to $2^N$. The direct product of the four local density matrices spans a resultant density matrix $\Gamma_{r^is^i,\{m\}_0\{n\}_0}$ of rank $2^N$ for the $i^{th}$ effective site. Allowing the $m$'s and $n$'s to vary, $\Gamma$ becomes a four-leg tensor $\Gamma_{r^is^i,mn}$. 
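As a minimal consistency check of the two-site expansion in equation (\ref{eq:density2}), the sketch below verifies that $\sum_{\alpha}f_{\alpha}\otimes g_{\alpha}$ reproduces $I-\beta\left(S^x_i S^x_j+S^y_i S^y_j+S^z_i S^z_j\right)$ exactly; here ${\bar{S}}^y$ is taken to be the real matrix $iS^y$, which is one possible choice consistent with the description above.
\begin{verbatim}
import numpy as np

beta = 1e-4
sx = 0.5 * np.array([[0., 1.], [1., 0.]])
sy = 0.5 * np.array([[0., -1j], [1j, 0.]])
sz = 0.5 * np.array([[1., 0.], [0., -1.]])
sybar = (1j * sy).real            # a real version of S^y (assumed choice)
I2 = np.eye(2)
r = np.sqrt(beta)

# The operator pairs (f_alpha, g_alpha).
f = [I2,  r * sx,  r * sybar,  r * sz]
g = [I2, -r * sx,  r * sybar, -r * sz]

lhs = sum(np.kron(fa, ga) for fa, ga in zip(f, g))
rhs = np.eye(4) - beta * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
print(np.allclose(lhs, rhs))      # True
\end{verbatim}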
The combinations $m\equiv\lfloor m_1,m_2,m_3,m_4\rfloor$ and $n\equiv \lfloor n_1,n_2,n_3,n_4\rfloor$ are the bond indices and run from 1 to $4^N$. Considering the bipartite structure due to the antiferromagnetic nature, equation \eqref{eq:density} is transformed to a parameterized MPO as follows, \begin{widetext} \begin{equation} \label{eq:omp2} e^{-\beta H}=\sum_{\substack{\cdots r^{i-1}r^i\cdots\\\cdots s^{i-1}s^i\cdots}}{tr\left(\cdots\Gamma_{r^{i-1}s^{i-1},mn}^1\left(\beta\right)\cdot\Gamma_{r^is^i,no}^2\left(\beta\right)\cdots\right)\cdots\mid \phi_{r^{i-1}}^{i-1}\rangle\mid \phi_{r^i}^i\rangle\cdots \langle \phi_{s^{i-1}}^{i-1}\mid \langle\phi_{s^i}^i\mid\cdots } \end{equation} \end{widetext} . Note that, in equations \eqref{eq:density} and \eqref{eq:omp2}, the small parameter $\beta$ is used to express $e^{-\beta H}$ as a successive product of operators according to the BCH formula, implying that part of the high-order terms in $\beta$ are already omitted in a controlled way. Nevertheless, many other high-order terms in $\beta$ still remain in the operator, which serves as a consistency check of formula (\ref{eq:omp}). In Sec.\ref{sec:ept} below, we show that the remaining high-order terms in $\beta$ can be treated perturbatively to reduce the last two (bond) indices of $\Gamma^{1,2}$ from $4^N$ to $3N+1$. But for the moment we first discuss how to variationally optimize the MPS wave function in the presence of the MPO. \section{\label{sec:mps}Variational optimization of the MPS in the presence of the MPO} The system wave function is expressed as an MPS \begin{equation} \label{eq:mps1} \mid\Psi\rangle=\sum_{\cdots r^{i-1}r^i\cdots}{tr\left(\cdots\xi^1_{r^{i-1}}\cdot\xi^2_{r^i}\cdots\right)\cdots\mid\phi_{r^{i-1}}^{i-1}\rangle\mid\phi_{r^i}^i\rangle\cdots} \end{equation} , which has the $\xi^{1,2}$ repetition structure, similar to that of the MPO, due to the antiferromagnetic condition. Note that $\xi^{1,2}$ are 3-leg tensors. For example, $\xi^2_{r^i}$ for the $i^{th}$ effective site is explicitly denoted with the index of the first leg, $r^i=1,2,\cdots ,2^N$, which refers to the local quantum state $\mid \phi_{r^{i}}^i\rangle$. Explicitly writing down the other two legs, $m$ and $n$ in $\xi_{r^i,mn}^2$ are the left/right indices of a matrix, fixing $r^i$. Therefore, each of $\xi^{1,2}$ has $2^NP^2$ variables when the matrix rank is $P$. Given a configuration (Fock vector) of the local states of all effective sites, a specific $P \times P$ matrix is assigned to each effective site; the left/right indices of that matrix contract in a closed form with the right/left indices of the matrices of the front/rear effective sites to yield a trace. The resultant scalar is the superposition coefficient of the configuration in the wave function. Optimizing the wave function by pinpointing the superposition coefficients is equivalent to optimizing those $P\times P$ matrices, $2\times 2^N$ in total, in the MPS with the bipartite structure. The MPS matrices can be optimized in various ways. One way, used in DMRG, is to start with an exact solution of a small part of the system and then to renormalize the representation basis every time new parts interacting with the processed part are added. The MPS matrix is a fixed point after many projections of the DMRG solution\cite{Schollwoeck2005}. The other way is to vary the energy observation with respect to the MPS matrices and hence optimize them simultaneously. 
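Before turning to the energy observation, the evaluation of a single superposition coefficient described above can be made concrete. The short sketch below (randomly initialized tensors and arbitrary toy values of $N$, $P$ and the chain length, for illustration only) selects one $P\times P$ matrix per effective site according to a Fock configuration and traces out the closed matrix product, as in equation (\ref{eq:mps1}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, P, M = 3, 4, 6            # rung width, MPS rank, number of effective sites (even)
d = 2**N                     # local dimension of an effective site

xi1 = rng.normal(size=(d, P, P))   # the two alternating MPS tensors
xi2 = rng.normal(size=(d, P, P))

def coefficient(config):
    """Trace of the closed product of P x P matrices selected by a Fock vector."""
    mat = np.eye(P)
    for site, r in enumerate(config):
        mat = mat @ (xi1[r] if site % 2 == 0 else xi2[r])
    return np.trace(mat)

config = rng.integers(0, d, size=M)     # one configuration of the effective chain
print(config, coefficient(config))
\end{verbatim}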
Illustrated in Fig.\ref{fig:observation}, the energy observation is expressed as \begin{widetext} \begin{align} \label{eq:observation2} &\langle \psi \mid e^{-\beta H} \mid \psi\rangle\notag\\ =&\left[\sum_{\cdots s^{i-1}s^i\cdots}{tr\left(\cdots\xi^1_{s^{i-1},\alpha_1\gamma_1}\cdot\xi^2_{s^i,\gamma_1\eta_1}\cdots\right)\cdots\langle\phi_{s^{i-1}}^{i-1}\mid\langle\phi_{s^i}^i\mid\cdots}\right]\left[\sum_{\cdots r^{i-1}r^i\cdots}{tr\left(\cdots\xi^1_{r^{i-1},\alpha_3\gamma_3}\xi^2_{r^i,\gamma_3\eta_3}\cdots\right)\cdots\mid\phi_{r^{i-1}}^{i-1}\rangle\mid\phi_{r^i}^i\rangle\cdots}\right]\notag\\ & \left[\sum_{\substack{\cdots z^{i-1}z^i\cdots\\\cdots w^{i-1}w^i\cdots}}{tr\left(\cdots\Gamma_{z^{i-1}w^{i-1},\alpha_2\gamma_2}^1\Gamma_{z^iw^i,\gamma_2\eta_2}^2\cdots\right)\cdots\mid \phi_{z^{i-1}}^{i-1}\rangle\mid \phi_{z^i}^i\rangle\cdots \langle \phi_{w^{i-1}}^{i-1}\mid \langle\phi_{w^i}^i\mid\cdots }\right]\notag\\ =&tr\left(\cdots A_{\alpha\gamma}B_{\gamma\eta}\cdots\right) \end{align} \end{widetext} where the Kronecker delta function $\langle \phi_{s^i}^i \mid \phi_{z^i}^i \rangle=\delta_{s^i,z^i}$, etc, is used to reduce the summations. And, \begin{align} \label{eq:relabel} A_{\alpha\equiv\lfloor\alpha_1\alpha_2\alpha_3\rfloor ,\gamma\equiv\lfloor\gamma_1\gamma_2\gamma_3\rfloor}\equiv &\xi_{s^{i-1},\alpha_1\gamma_1}^1\Gamma_{s^{i-1}r^{i-1},\alpha_2\gamma_2}^1\xi^1_{r^{i-1},\alpha_3\gamma_3}\notag\\ B_{\gamma\equiv\lfloor\gamma_1\gamma_2\gamma_3\rfloor ,\eta\equiv\lfloor\eta_1\eta_2\eta_3\rfloor}\equiv &\xi_{s^i,\gamma_1\eta_1}^2\Gamma_{s^ir^i,\gamma_2\eta_2}^2\xi_{r^i,\gamma_3\eta_3}^2 \end{align} . $\alpha\equiv\lfloor\alpha_1\alpha_2\alpha_3\rfloor$ denotes the combination of indices $\alpha_1,\alpha_3=1,2,\cdots,P$ and $\alpha_2=1,2,\cdots,4^N$, giving rise to a single index $\alpha=1,2,\cdots,4^N P^2$, etc. Meanwhile, the normalization factor is \begin{widetext} \begin{align} \label{eq:normalization} &\langle \psi \mid \psi\rangle\notag\\ =&\left[\sum_{\cdots s^{i-1}s^i\cdots}{tr\left(\cdots\xi^1_{s^{i-1},\alpha_1\gamma_1}\xi^2_{s^i,\gamma_1\eta_1}\cdots\right)\cdots\langle\phi_{s^{i-1}}^{i-1}\mid\langle\phi_{s^i}^i\mid\cdots}\right]\left[\sum_{\cdots r^{i-1}r^i\cdots}{tr\left(\cdots\xi^1_{r^{i-1},\alpha_3\gamma_3}\xi^2_{r^i,\gamma_3\eta_3}\cdots\right)\cdots\mid\phi_{r^{i-1}}^{i-1}\rangle\mid\phi_{r^i}^i\rangle\cdots}\right]\notag\\ =&tr\left(\cdots C_{\alpha\gamma}D_{\gamma\eta}\cdots\right) \end{align} \end{widetext} . Matrices C and D are formed as \begin{align} \label{eq:relabel1} C_{\alpha\equiv\lfloor\alpha_1\alpha_3\rfloor ,\gamma\equiv\lfloor\gamma_1\gamma_3\rfloor}\equiv &\xi_{r^{i-1},\alpha_1\gamma_1}^1\xi^1_{r^{i-1},\alpha_3\gamma_3}\notag\\ D_{\gamma\equiv\lfloor\gamma_1\gamma_3\rfloor ,\eta\equiv\lfloor\eta_1\eta_3\rfloor}\equiv &\xi_{r^i,\gamma_1\eta_1}^2\xi_{r^i,\gamma_3\eta_3}^2 \end{align} where the combination of indices $\alpha_1,\alpha_3=1,2,\cdots,P$ gives rise to a single index $\alpha=1,2,\cdots,P^2$, etc. Equation (\ref{eq:observation2}) is rewritten as \begin{equation} \label{eq:observation3} \langle e^{-\beta H}\rangle=\frac{\langle \psi \mid e^{-\beta H} \mid \psi\rangle}{\langle \psi \mid \psi\rangle}=\lim\limits_{M\rightarrow\infty}{\frac{tr\left(AB\right)^M}{tr\left(CD\right)^M}} \end{equation} . 
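The composite tensors $A$ and $C$ of equations (\ref{eq:relabel}) and (\ref{eq:relabel1}) are plain index contractions. The sketch below (random tensors of toy size, for illustration of the index bookkeeping only) builds them with \texttt{numpy.einsum} and confirms the ranks $QP^2$ and $P^2$ quoted above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, P = 2, 3
d, Q = 2**N, 4**N            # local dimension and (unreduced) MPO bond size

xi1   = rng.normal(size=(d, P, P))      # MPS tensor   xi^1_{r, alpha gamma}
gamma = rng.normal(size=(d, d, Q, Q))   # MPO tensor   Gamma^1_{s r, alpha gamma}

# A_{<a1 a2 a3>,<g1 g2 g3>} = xi^1_{s,a1 g1} Gamma^1_{s r,a2 g2} xi^1_{r,a3 g3}
A = np.einsum('sac,srbd,ref->abecdf', xi1, gamma, xi1).reshape(Q * P**2, Q * P**2)

# C_{<a1 a3>,<g1 g3>} = xi^1_{r,a1 g1} xi^1_{r,a3 g3}
C = np.einsum('rac,rbd->abcd', xi1, xi1).reshape(P**2, P**2)

print(A.shape, C.shape)      # (Q P^2, Q P^2) and (P^2, P^2)
\end{verbatim}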
Then, setting the first derivative of $\langle e^{-\beta H}\rangle$ with respect to $\xi^{1,2}$, say $\xi^1$, to zero leads to \begin{equation} \label{eq:derivative} \frac{\partial \langle \psi \mid e^{-\beta H} \mid \psi\rangle}{\partial \xi^1}=\langle e^{-\beta H}\rangle\frac{\partial \langle \psi \mid \psi\rangle}{\partial \xi^1} \end{equation} Substituting the numerator and denominator on the right hand side of equation (\ref{eq:observation3}) into the two sides of equation (\ref{eq:derivative}), respectively, we arrive at \begin{equation} \label{eq:derivative1} tr\left[\left(AB\right)^{M-1}\frac{\partial A}{\partial \xi^1}B\right]=\langle e^{-\beta H}\rangle tr\left[\left(CD\right)^{M-1}\frac{\partial C}{\partial \xi^1}D\right] \end{equation} . Singular-value-decomposing the building units in equation (\ref{eq:observation3}), we have \begin{align} \label{eq:ABCD} AB=& u\Lambda v\notag\\ CD=& u'\Delta v' \end{align} Substituting equation (\ref{eq:ABCD}) into equation (\ref{eq:derivative1}) gives \begin{equation} \label{eq:derivative2} v\frac{\partial A}{\partial \xi^1}Bu=\langle e^{-\beta H}\rangle\left(\frac{\Delta_1}{\Lambda_1}\right)^{M-1}v'\frac{\partial C}{\partial \xi^1}Du' \end{equation} In the above, only the largest eigenvalues $\Lambda_1\left(\Delta_1\right)$ and the corresponding right/left eigenvectors $v/u$ ($v'/u'$) survive when $M\rightarrow \infty$. On the other hand, substituting equation (\ref{eq:ABCD}) into equation (\ref{eq:observation3}) leads to \begin{equation} \label{eq:derivative3} \langle e^{-\beta H}\rangle^{\frac{1}{M}}=\frac{\Lambda_1}{\Delta_1} \end{equation} Substituting equation (\ref{eq:derivative3}) into equation (\ref{eq:derivative2}), we arrive at \begin{equation} \label{eq:derivative4} v\frac{\partial A}{\partial \xi^1}Bu=\langle e^{-\beta H}\rangle^{\frac{1}{M}}v'\frac{\partial C}{\partial \xi^1}Du' \end{equation} In fact, equation (\ref{eq:derivative4}) is a generalized eigenvalue equation. To confirm this, it is instructive to explicitly rewrite both sides as \begin{align} \label{eq:gee} v\frac{\partial A}{\partial \xi^1}Bu=&X_{\lfloor r^{i-1},\alpha_1,\gamma_1\rfloor ,\lfloor s^{i-1},\alpha_3,\gamma_3\rfloor}\chi^1_{\lfloor s^{i-1},\alpha_3,\gamma_3\rfloor}&=X\chi^1\notag\\ v'\frac{\partial C}{\partial \xi^1}Du'=&Y_{\lfloor r^{i-1},\alpha_1,\gamma_1\rfloor ,\lfloor s^{i-1},\alpha_3,\gamma_3\rfloor}\chi^1_{\lfloor s^{i-1},\alpha_3,\gamma_3\rfloor}&=Y\chi^1 \end{align} where $\chi^1_{\lfloor s^{i-1},\alpha_3,\gamma_3\rfloor}$ is the vector version of the 3-leg tensor $\xi^1_{s^{i-1},\alpha_3\gamma_3}$. And, \begin{align} \label{eq:gee1} &X_{\lfloor r^{i-1},\alpha_1,\gamma_1\rfloor ,\lfloor s^{i-1},\alpha_3,\gamma_3\rfloor}\notag\\ =& v_{\alpha_1\alpha_2\alpha_3}\Gamma_{r^{i-1}s^{i-1},\alpha_2\gamma_2}^1\xi_{r^{i},\gamma_1\eta_1}^2\Gamma^2_{r^{i}s^{i},\gamma_2\eta_2}\xi_{s^{i},\gamma_3\eta_3}^2 u_{\eta_1\eta_2\eta_3}\notag\\ &Y_{\lfloor r^{i-1},\alpha_1,\gamma_1\rfloor ,\lfloor s^{i-1},\alpha_3,\gamma_3\rfloor}\notag\\ =& {v'}_{\alpha_1\alpha_3}\delta_{r^{i-1},s^{i-1}}\xi_{r^{i},\gamma_1\eta_1}^2\xi_{r^{i},\gamma_3\eta_3}^2 {u'}_{\eta_1\eta_3} \end{align} are matrices. Thus, \begin{align} \label{eq:gee2} X\chi^1=&\langle e^{-\beta H}\rangle^{\frac{1}{M}}Y\chi^1\notag\\ X'\chi^2=&\langle e^{-\beta H}\rangle^{\frac{1}{M}}Y'\chi^2 \end{align} where another generalized eigenvalue equation is generated for $\chi^2$, the vector version of the 3-leg tensor $\xi^2$. It is straightforward that $X/X'$ and $Y/Y'$ are functionals of $\chi^2/\chi^1$. 
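For matrices small enough to be formed explicitly, the largest eigenpair of such a GEE can be obtained with a dense solver; a toy illustration (random $X$, and a symmetric positive-definite $Y$ chosen only so that the problem is well posed) is sketched below. The matrix-free treatment required at the actual ranks is described in Sec.\ref{sec:davidson}.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n = 40
X = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
Y = B @ B.T + n * np.eye(n)        # symmetric positive definite

w, V = eig(X, Y)                   # generalized eigenvalue problem X v = w Y v
k = np.argmax(w.real)              # eigenpair with the largest (real) eigenvalue
x = V[:, k]
print(np.allclose(X @ x, w[k] * (Y @ x)))   # True
\end{verbatim}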
If we start to generate the GEEs with the trial vectors $\chi_{0,P_0}^{1,2}$, the rank of each being as small as $2^NP_0^2$, a new set of vectors $\chi_{1,P_0}^{1,2}$ is obtained by solving those GEEs and is used to update the GEEs. Iterations are carried on until the norm $\| \chi_{m+1,P_0}^{1,2}-\chi_{m,P_0}^{1,2} \|$ is less than a threshold value. The converged vectors for $P_0$ are used to generate the trial vectors for a slightly larger rank $P_0+\Delta P$ by adding small new elements to the enlarged vectors. A second type of convergence, with respect to $P$, should eventually bring the identical largest eigenvalue $\lambda=\frac{\Lambda_1}{\Delta_1}$ for equation (\ref{eq:gee2}). We have \begin{equation} \label{eq:energy} \langle e^{-\beta H}\rangle=\lambda^M \end{equation} . The GS energy of an infinity-by-$N$ lattice is \begin{equation} \label{eq:energy1} \epsilon_0=-\beta^{-1}ln{\langle e^{-\beta H}\rangle}=-M\beta^{-1}ln{\lambda} \end{equation} . The GS energy per spin is \begin{equation} \label{eq:energy2} \bar{\epsilon}_0=\frac{\epsilon_0}{2MN}=-\left(2N\beta\right)^{-1}ln{\lambda} \end{equation} \begin{figure} \begin{center} $\begin{array}{ccc} &\mbox{(a) energy observation} & \mbox{(b) normalization} \\ & \includegraphics[width=9.5pc]{energy_observation}& \includegraphics[width=9.5pc]{normalization}\\ \end{array}$ \caption{\label{fig:observation} (a) Energy observation in the presence of the MPS and MPO. The quantities enclosed in an ellipse are combined into new tensors $A$/$B$, each of which has a larger rank of $P^2Q$, with $P$ and $Q$ being the MPS and MPO ranks, respectively. (b) Normalization of the MPS wave function. The quantities enclosed in an ellipse are combined into new tensors $C$/$D$, each of which has a rank of $P^2$.} \end{center} \end{figure} \section{\label{sec:ept}Quasi-1D entanglement perturbation theory for an infinity-by-$N$ Heisenberg square lattice} \subsection{\label{sec:mpoept} Entanglement Perturbation in Hamiltonian Space} There are two kinds of eigenvalue equations to be solved in this method. The first kind is the SVDs in equation (\ref{eq:ABCD}) for the left/right eigenvectors of two asymmetric matrices. The first matrix has rank $R_1=4^NP^2$ while the second has rank $R^{\prime}_1=P^2$; the second kind is the GEEs in equation (\ref{eq:gee2}) of rank $R_2=2^NP^2$. It is obvious that $R_1$ dominates $R_2$ when $N>1$. The largest simulation scale in our study is for $N=14$ and $P=350$. It corresponds to $R_1=3.3\times 10^{13}$ and demands $30$ Tbytes of memory to store even a single eigenvector, not to mention that solving an eigenvalue equation requires much more memory than merely storing a single eigenvector. This data scale is apparently not practical for modern computers. Fortunately, there is a simple way to overcome this difficulty. Examining $f_{\alpha=1,\cdots,4}$ and $g_{\alpha=1,\cdots,4}$, the $\alpha=1$ terms are the identity matrix, of zeroth order in $\beta$; the terms with $\alpha=2,3,4$ are all of order $\sqrt{\beta}$. But whenever there is a term of order $\sqrt{\beta}$, there is a counterpart of the same order at the other end of the bond, and together they generate terms of first order in $\beta$. On the other hand, according to formula (\ref{eq:omp}), the bond index of $\Gamma^{1,2}$, say $m\equiv\lfloor m_1,\cdots,m_N\rfloor$, is a combination of $m_1,\cdots,m_N$, each of which is the index of $f$ or $g$ running from $1$ to $4$. 
Because $\beta\le 10^{-7}$, it is safe to discard the terms of order $\beta^2$ and beyond. Therefore, it amounts to keeping, among the $4^N$ combinations of $\lfloor m_1,\cdots,m_N\rfloor$, those terms with at most one of the $m$'s not being 1. The new bond index of $\Gamma^{1,2}$, after reduction, now runs from 1 to $3N+1$ instead of from $1$ to $4^N$. In the case of $N=4$, they are $1$ for $1111$, $2$ for $1112$, $3$ for $1113$, $4$ for $1114$, $5$ for $1121$, $6$ for $1131$, etc. At this point, we have reduced the rank of the MPO tensor perturbatively in the small positive parameter $\beta$. Now we show that this rank reduction corresponds to an entanglement reduction in the MPO. The MPO is an entangled quantity in that it cannot be expressed as the product of individual multiplicative quantities that are associated with local quanta, namely the local states of the effective sites in this study. Like the Hilbert space for the wave function of a quantum lattice, $\cdots\otimes\{\mid \phi_i\rangle\}\otimes\{\mid \phi_{i+1}\rangle\}\otimes\cdots$, we define a space $\mathcal{H}$, $\cdots\otimes\{\mid \phi_i\rangle\langle\phi_i^{\prime}\mid\}\otimes\{\mid \phi_{i+1}\rangle\langle\phi_{i+1}^{\prime}\mid\}\otimes\cdots$, for the Hamiltonian. It is not hard to show that it is indeed a vector space after defining an inner product, i.e., vector-vector multiplication, as \begin{align} \label{eq:hamiltonianspace} &\vec{h}_1\cdot\vec{h}_2\notag\\=&\cdots\left(\langle \varphi_i^{\prime}\mid\phi_i\rangle \langle \phi_i^{\prime}\mid\varphi_i\rangle\right)\left(\langle \varphi_{i+1}^{\prime}\mid\phi_{i+1}\rangle \langle \phi_{i+1}^{\prime}\mid\varphi_{i+1}\rangle\right)\cdots\\ &\begin{array}{c} \forall\vec{h}_1,\vec{h}_2\in \mathcal{H}\\ \vec{h}_1=\cdots\otimes\{\mid \phi_i\rangle\langle\phi_i^{\prime}\mid\}\otimes\{\mid \phi_{i+1}\rangle\langle\phi_{i+1}^{\prime}\mid\}\otimes\cdots\notag\\ \vec{h}_2=\cdots\otimes\{\mid \varphi_i\rangle\langle\varphi_i^{\prime}\mid\}\otimes\{\mid \varphi_{i+1}\rangle\langle\varphi_{i+1}^{\prime}\mid\}\otimes\cdots \end{array} \end{align} . An MPO is exactly a vector in the form of an MPS in $\mathcal{H}$. Analogously, the entanglement in this vector is characterized by the matrix rank\cite{Schollwoeck2005}. Without treating the MPO perturbatively in $\beta$, this entanglement is explicitly determined by the lattice topology and the types of interactions. Nevertheless, the entanglement in the MPO is reduced in a simple yet systematic way in this method. It is called the entanglement perturbation theory for a quantum Hamiltonian. The benefit of entanglement reduction in the MPO is immediate in that it reduces an eigenvalue problem of rank $4^N P^2$, which dominates over the GEEs of rank $2^N P^2$ in terms of computational burden, to one of rank $\left(3N+1\right)P^2$. It is clear that the bottleneck of the simulation of an infinity-by-$N$ Heisenberg lattice becomes how to efficiently solve the GEEs. When $N=14$ and $P=350$ (the parameters causing the most computational burden in the present work), their rank is more than two billion. Yet, solving such a large GEE is not feasible for any existing numerical tool. Thus, we integrate the Jacobi-Davidson method with the MPS and MPO to solve the GEEs without explicitly forming them. The details are explained in Sec.\ref{sec:davidson}. 
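The retained bond-index combinations can be enumerated explicitly. The short sketch below (illustrative only) keeps, for several widths $N$, all flattened indices $\lfloor m_1,\cdots,m_N\rfloor$ with at most one sub-index different from 1 and confirms that their number is $3N+1$; for $N=4$ the first few entries reproduce the ordering $1111$, $1112$, $1113$, $1114$, $1121$, $1131$ quoted above.
\begin{verbatim}
from itertools import product

def retained(N):
    """Flattened bond indices of Gamma with at most one sub-index different from 1."""
    return [m for m in product(range(1, 5), repeat=N)
            if sum(mk != 1 for mk in m) <= 1]

for N in (2, 4, 6, 8):
    combos = retained(N)
    assert len(combos) == 3 * N + 1    # 1 identity string + 3 choices x N positions
    print(N, len(combos), combos[:6])
\end{verbatim}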
\subsection{\label{subsec:mpsept} Entanglement Entropy and Area Law in Hilbert Space} It is the entanglement in Hilbert space between an isolated quantum lattice set $\it{I}$ and the surrounding environment $\it{E}$ that is of special interest in the design of a many-body method when using MPS, since the area law\cite{Eisert2010} states that \begin{align} \label{eq:entropy1} s\left(\it{I}\right)\propto\mid \partial \it{I}\mid \end{align} and \begin{align} \label{eq:entropy2} s\left(\it{I}\right)\le 2 log_2\left(P\right) \end{align} where $\mid \partial \it{I}\mid$ is the area of boundary $\partial \it{I}$ and $P$ is the MPS rank. The $\it{von}$ $\it{Neumann}$ entanglement entropy $s\left(\it{I}\right)$ is defined as \begin{align} \label{eq:entropy} s\left(\it{I}\right)\equiv -\sum_{i}{\rho_i log_2{\rho_i}} \end{align} . Now that the local Hilbert space $H_I\equiv \{\mid \phi_{i}\rangle\}$ is embedded in the complete Hilbert space $H$ to write down a system wave function $\mid \Psi\rangle$, the reduced density matrix is \begin{align} \label{eq:densitymatrix0} \rho_{ij}=Tr_E\langle \phi_{i}\mid \Psi\rangle\langle \Psi\mid \phi_{j}\rangle \end{align} . In the case where $\mid \Psi\rangle$ is expressed as a MPS, the reduced density matrix is evaluated as shown in equation \eqref{eq:construction}. See Sec.\ref{sec:reduction} for details. After diagonalizing the reduced density matrix, the entanglement entropy can be readily computed using the diagonal density matrix elements. Fig.\ref{fig:area_law}(a) shows the applicable scenario of the area law. An isolated partition increases from $A_1$ to $A_2$, embedded in a given large 2D quantum lattice whose sites (shown as circles) interact in both directions (shown as solid lines). Note that the boundaries $\partial A_1$ and $\partial A_2$ are composed of both vertical and horizontal dashed lines. When the isolation is enlarged, its boundary increases nearly linearly. According to equation \eqref{eq:entropy2}, the MPS rank $P$ which is required to obtain a certain precision for the nearly linearly increasing entanglement entropy should almost exponentially increase. Since the existing many-body methods such as DMRG, the density matrix embedding theory (DMET)\cite{Knizia2012} and the dynamic mean field theory (DMFT)\cite{Metzner1989} all start from or focus on an isolated quantum lattice set surrounded by a large environment as shown in Fig.\ref{fig:area_law}(a), they encounter the same difficulty inherited from the area law of entanglement entropy. In our method of converting the $N$ lattice sites in the rung into an effective site and then building a MPS on the effective chain, there are two major differences from the scenario shown in Fig.\ref{fig:area_law}(a). First, two lattices of $N_1$ shown in (c) and $N_2$ in (d) of Fig.\ref{fig:area_law} actually have two different Hamiltonians. Second, imagining $N=\infty$ in both (a) and (d) of Fig.\ref{fig:area_law} such that their Hamiltonians are identical, their boundaries $\partial A_{\infty}$ and $\partial D_{\infty}$ are topologically different, because the former is connected and the latter is disconnected. Therefore, the entanglement of isolations $A$ in (a) and $D$ in (d) is different. That is to say, the area law of entanglement entropy is not applicable in our method. 
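For reference, the sketch below (a toy spectrum with arbitrary weights, for illustration only) evaluates equation (\ref{eq:entropy}) from a diagonalized reduced density matrix and numerically verifies the extensivity relation $s\left(c\it{I}\right)=cs\left(\it{I}\right)$ derived in Case 1 below for a perfectly extensive, product-form density matrix.
\begin{verbatim}
import numpy as np

def entropy(weights):
    """von Neumann entropy  s = -sum_i rho_i log2(rho_i)  of a diagonal density matrix."""
    w = np.asarray(weights, dtype=float)
    w = w[w > 0]
    return -np.sum(w * np.log2(w))

x, y = 0.8, 0.2                        # arbitrary single-block spectrum, x + y = 1
rho1 = np.array([x, y])
for c in (1, 2, 3, 5):
    rho_c = rho1.copy()
    for _ in range(c - 1):             # c-fold direct product of the single-block spectrum
        rho_c = np.kron(rho_c, rho1)
    print(c, entropy(rho_c), c * entropy(rho1))   # the two columns coincide
\end{verbatim}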
However, Fig.\ref{fig:decoupled} and Fig.\ref{fig:entanglement_decoupled}(c) in Sec.\ref{subsec:decoupled} confirm that the area law coincidentally applies when we solve the decoupled spin ladder (Fig.\ref{fig:area_law}(b)) using our method. There are two reasons. First, the Hamiltonian $H(n)$ of a decoupled spin ladder of width $N=n$ now has the perfect extensive property, i.e., $H(n)=nH(1)$, where $H(1)$ is the spin chain Hamiltonian. Second, there is no difference whether or not the boundary is enclosed by the dotted horizontal lines because there is no vertical interaction. \begin{figure} \begin{center} $\begin{array}{c} \includegraphics[width=16.pc]{area_law}\\ \end{array}$ \caption{\label{fig:area_law} Schematic of applicable and inapplicable scenarios of the area law. Applicable: (a) isolations embedded in a large 2D lattice whose sites (circles) interact in both directions (solid lines); the boundaries, drawn with dashed lines, are closed. (b) The boundary of an isolation embedded in a decoupled lattice shows no difference with or without the horizontal dotted lines that enclose it. Inapplicable: from (c) to (d), the width $N$ of the ladder is increasing, and the two lattices are described by two distinct Hamiltonians. When $N=\infty$ in both (a) and (d), the boundaries are topologically different: the former is connected while the latter is disconnected.}\end{center} \end{figure} In fact, the entanglement entropy is crucially controlled by the density matrix of an effective site when treating a quantum ladder by our method. Let us consider the following two ideal cases. Case 1. The Hamiltonian has the perfect extensive property, i.e., $H\left(c\right)=cH\left(1\right)$ with $c$ being an integer. Obviously, the decoupled ladder qualifies for this category. In this case, the diagonalized density matrix $\left(\rho\left(c\it{I}\right)\right)$ of the isolation $c\it{I}$ is the direct product of $\left(\rho\left(\it{I}\right)\right)=\left(\substack{x,0\\0,y}\right)$ taken $c$ times. Then the diagonal elements of $\left(\rho\left(c\it{I}\right)\right)$ form the set $\{x^k y^{c-k};k=0,\cdots ,c\}$ with the degeneracy set $\{\left(\substack{c\\k} \right);k=0,\cdots ,c \}$. We have \begin{align} \label{eq:entrop3} s\left(c\it{I}\right)=&-\sum_{k=0}^{c}{\left(\substack{c\\k}\right) x^{c-k}y^k log_2\left(x^{c-k}y^k\right) }\notag\\ =&c\left[\sum_{k=0}^{c-1}{\left(\substack{c-1\\k}\right)x^{c-1-k}y^k}\right] \left(-xlog_2{x}-ylog_2{y}\right)\notag\\ =&c\left(x+y\right)^{c-1}s\left(\it{I}\right) \end{align} . Since $x+y=1$, \begin{align} \label{eq:entrop4} s\left(c\it{I}\right)=cs\left(\it{I}\right) \end{align} . The area law apparently applies. If $x=y$, i.e., the diagonalized density matrix is always equally weighted regardless of the size of $\it{cI}$, there is no dominant element. The MPS rank $P$ that reveals physical properties such as the GS energy to a certain precision should then equal the rank that captures the entanglement (i.e., the Fock vector configurations in the wave function). Equation \eqref{eq:entropy2} dictates that this $P$ increases strictly exponentially with $N$ for a demanded energy accuracy. In contrast, the diagonalized density matrix of a decoupled ladder of width $N$ always has dominant elements because $x\ne y$ for a single chain. In turn, our result in Sec.\ref{subsec:decoupled} shows that the increase of $P$ is slower than an exponential function of $N$ for smaller $N$ but asymptotically approaches the exponential function $2^N$ when $N\rightarrow \infty$. Case 2. There is a limited number of dominant diagonal elements. 
We discuss an extreme example: \begin{align} \rho_1=1-2^{-c}\notag\\ \sum_{i=2}^{\it{R}\left(\it{H}_{c\it{I}}\right)}{\rho_i}=2^{-c} \end{align} . Here, $\it{R}\left(\it{H}_{c\it{I}}\right)$ is the Hilbert space rank of $c\it{I}$, and $\rho_{i=2,\cdots,\it{R}\left(\it{H}_{c\it{I}}\right)}$ are equally weighted. In this case, it is easy to show that $s\left(c\it{I}\right)< cs\left(\it{I}\right)$ and that the entanglement entropy saturates when $c$ is large. The area law does not apply. In a realistic strongly coupled infinity-by-$N$ spin lattice, our results in Sec.\ref{sec:result} confirm that the area law of entanglement entropy does not apply and that the density matrix of an effective site has a few (not just one) dominant diagonal elements, whose number saturates with increasing $N$. Letting an effective site have a large local space, our method takes the dominant basis vectors into account so as to simulate the physical quantities efficiently with a smaller MPS rank. Packing the entanglement contributed by the dominant basis vectors into a smaller MPS by blocking the $N$ lattice sites in the rung is an implicit entanglement perturbation in the Hilbert space. Meanwhile, space reduction according to the significance of the diagonalized density matrix elements is possible as well. We present the details in Sec.\ref{sec:reduction}. For the moment, we discuss other specific properties of the model in the following three sections. \section{\label{sec:checkerboard} Implementation of Checkerboard Symmetry} \begin{figure} \begin{center} $\begin{array}{cc} \mbox{(a)} &\mbox{(b)} \\ \includegraphics[width=7.5pc]{before_checkerboard} &\includegraphics[width=7.5pc]{after_checkerboard}\\ \end{array}$ \caption{\label{fig:checkerboard} (a) Owing to the antiferromagnetic nature, the tensors of the MPO and MPS have a bipartite checkerboard symmetry in the original spin basis. (b) After applying the checkerboard transformation to one sub-lattice, for instance the blank sub-lattice as shown, those tensors no longer need to be distinguished in the new spin basis.} \end{center} \end{figure} So far, we have assumed a bipartite structure for both the MPO and the MPS according to the antiferromagnetic nature of the studied model. All of the formulation can be straightforwardly extended to even more complicated structures. In the opposite limit, we specifically simplify the MPO and MPS to employ single tensors $\Gamma$ and $\xi$, respectively, by a checkerboard transformation applied to one sub-lattice, shown in Fig.\ref{fig:checkerboard}(a), as follows, \begin{align} \label{eq:newspin} \mid \uparrow\prime\rangle\equiv\mid \downarrow\rangle\notag\\ \mid \downarrow\prime\rangle\equiv\mid \uparrow\rangle \end{align} . Assuming the second site within a bond is on the transformed sub-lattice, we rewrite $H_{bond}$ as follows, \begin{equation} H_{bond}=\vec{S}_i\cdot\vec{S}_j=S^z_i\cdot S^z_j+\frac{1}{2}\left(S^{+}_i\cdot S^{-}_j+S^{-}_i\cdot S^{+}_j\right) \end{equation} where $S^{+}$ and $S^{-}$ are the spin flip-up and flip-down operators, respectively. They operate on the transformed site as follows \begin{align} \label{eq:checkerboard} S^z\mid \uparrow\prime\rangle=&-\frac{1}{2}\mid \uparrow\prime\rangle;\hspace{5pc} & S^z\mid \downarrow\prime\rangle=&\frac{1}{2}\mid \downarrow\prime\rangle\notag\\ S^{+}\mid \uparrow\prime\rangle=&\mid \downarrow\prime\rangle; \hspace{5pc} &S^{+}\mid \downarrow\prime\rangle=& 0\notag\\ S^{-}\mid \uparrow\prime\rangle=& 0; \hspace{5pc} &S^{-}\mid \downarrow\prime\rangle=& \mid\uparrow\prime\rangle \end{align} . 
Equation (\ref{eq:checkerboard}) is used to rewrite $H_{bond}$ in the checkerboard-transformed basis as \begin{equation} \label{eq:checkerboard1} H_{bond}=-S^z_i\cdot S^z_j+\frac{1}{2}\left(S^{+}_i\cdot S^{\prime +}_j+S^{-}_i\cdot S^{\prime -}_j\right) \end{equation} , where $S^{\prime +}$ ($S^{\prime -}$) has the same matrix representation of $S^{+}$ ($S^{-}$) but flips up (down) the newly defined down (up) spins in equation (\ref{eq:newspin}). After the transformation, we obtain the MPS format in equation (\ref{eq:mps1}) and MPO format in equation (\ref{eq:mpo}). Accordingly, the other related formulation will be simplified and we only present the simplified formulation when they are specifically needed in the remaining formula of this manuscript. Note that a proper sign is needed when a physical quantity is being calculated with the solved wave function in the checkerboard-transformed basis. Also, note that compared with equation \eqref{eq:energy2} before the checkerboard transformation, the GS energy per spin now becomes \begin{equation} \label{eq:energy3} \bar{\epsilon}_0=-\left(N\beta\right)^{-1}ln{\lambda} \end{equation} where $\lambda$ is the largest eigenvalue of the single GEE. \section{\label{sec:davidson} Integration of Jacobi-Davidson eigenvalue solver for GEE with MPO and MPS} We discuss how to obtain the largest eigenvalue and the corresponding eigenvector by iteratively solving a GEE given by, \begin{equation} \label{eq:gee4} Xx=\lambda Yx \end{equation} , where $X$ and $Y$ are $n\times n$ square matrices; $x$ is an eigenvector and $\lambda$ is the corresponding eigenvalue. When only a few eigenvectors are needed (only the largest eigenvalue is needed in this study) and when $n$ is very large, an iterative approach such as the Jacobi-Davidson method\cite{Sleijpen1996} is more desirable than the typical method of factorization. It starts with an initial $n\times m$ matrix $W_0$ whose columns are n-element vectors $w_i,i=1,\cdots,m\ll n$. It is used to transform $X$ and $Y$ to those of rank of $m$ as follows, \begin{align} \label{eq:davidson1} E_0\equiv W^T_0 X W_0\notag\\ F_0\equiv W_0^T Y W_0 \end{align} . The following new GEE \begin{equation} \label{eq:davidson2} E_0y^0=\tau^0F_0y^0 \end{equation} is much easier to solve for all its eigenvalues $\{\tau^0_i\}$ and the corresponding eigenvectors $\{y^0_i\}$. $\{\tau^0_i\}$ are ordered such that $\tau^0_1>\tau^0_2>\cdots$. Then, a new vector is constructed as follows \begin{equation} \label{eq:davidson3} Q_0=\left(XW_0-\tau^0_1YW_0\right)y^0_1 \end{equation} . To accelerate the convergence, the vector $Q_0$ is processed to obtain a new vector $w_{m+1}$ as follows, \begin{equation} \label{eq:davidson4} w_{m+1}\left(i\right)=\frac{Q_0\left(i\right)}{\tau^0_1Y\left(i,i\right)-X\left(i,i\right)}; i=1,\cdots,n \end{equation} . $w_{m+1}$ is attached to $W_0$ forming an $n\times \left(m+1\right)$ matrix $W_1$. In this procedure, Gram-Schmidt method is used to make $w_{m+1}$ orthonormal to the existing column vectors. On the other hand, the approximated eigenvector corresponding to the largest eigenvalue is \begin{equation} \label{eq:davidson6} x_0=W_0y_1^0 \end{equation} . The precision of this approximation is checked by whether $Xx_0$ and $Yx_0$ are parallel. If so, the iteration stops with the solution of the largest eigenvalue $\tau^0_1$ and the corresponding eigenvector $x_0$. 
If not, we carry on the process with \begin{align} \label{eq:davidson7} E_1\equiv W_1^T X W_1\notag\\ F_1\equiv W_1^T Y W_1 \end{align} until $Xx_j$ and $Yx_j$ are parallel at the $j^{th}$ step. Satisfactory convergence is obtained after about $100$ steps in this study, even for GEEs whose rank is in the billions. So far, we have introduced the basic steps of the Jacobi-Davidson method for the GEE. These steps are merely formal, because the matrices $X$ and $Y$ in fact never appear in the explicit form shown in equations (\ref{eq:davidson1}), (\ref{eq:davidson3}) and \eqref{eq:davidson7}. They cannot even be generated and stored when their rank exceeds $10^5$, which is still far below what is required to investigate the long-range spin-spin correlation in this study, due to insufficient computational resources. As a workaround, we integrate the aforementioned Jacobi-Davidson method with the MPS and MPO, without explicitly forming the left/right matrices of the GEE. After the checkerboard transformation, we only need one MPO tensor $\Gamma$ that is still formulated as in \eqref{eq:omp}. $\Gamma$ is extremely sparse. We only store its nonzero elements in $\{\Omega_j:\mid \Omega_j\mid >0\}$ and the set $\{\left(r_j,s_j,\alpha_j,\gamma_j\right)\}$, each of whose elements is an array of the indices of the $j^{th}$ nonzero element $\Omega_j\equiv\Gamma_{r_js_j,\alpha_j\gamma_j}$. Now, we simplify $X$ and $Y$ in equation \eqref{eq:gee1} as \begin{align} \label{eq:newxy} X_{\lfloor r,\alpha_1,\gamma_1\rfloor ,\lfloor s,\alpha_3,\gamma_3\rfloor}&=v_{\alpha_1\alpha_2\alpha_3}\Gamma_{rs,\alpha_2\gamma_2} u_{\gamma_1\gamma_2\gamma_3}\notag\\ Y_{\lfloor r,\alpha_1,\gamma_1\rfloor ,\lfloor s,\alpha_3,\gamma_3\rfloor}&={v'}_{\alpha_1\alpha_3}\delta_{r,s} {u'}_{\gamma_1\gamma_3} \end{align} . These are dense matrices. The notion of an effective site's label is no longer needed, since the lattice is translationally symmetric after the checkerboard transformation. Apart from explicitly evaluating equation \eqref{eq:newxy} for the diagonal elements of $X$ and $Y$ that are needed in equation (\ref{eq:davidson4}), the matrix-vector multiplication, the only operation in which $X$ and $Y$ explicitly participate (equations (\ref{eq:davidson1}), (\ref{eq:davidson3}) and \eqref{eq:davidson7}), can be evaluated as follows, \begin{equation} \label{eq:davidson8} z_{\lfloor r,\alpha_1,\gamma_1\rfloor}=\sum_{j}{v_{\alpha_1\alpha_j\alpha_3}\Gamma_{r_js_j,\alpha_j\gamma_j} u_{\gamma_1\gamma_j\gamma_3} w_{\lfloor s_j,\alpha_3\gamma_3\rfloor}} \end{equation} , where $r$ is dynamically updated by $r_{j}$ during the summation over $j$. We further rewrite equation (\ref{eq:davidson8}) as follows \begin{equation} \label{eq:davidson9} z_{\lfloor r,\alpha_1,\gamma_1\rfloor}=\sum_{j}{\Gamma_{r_js_j,\alpha_j\gamma_j}\left(\pi^{\alpha_j}\cdot\varpi^{s_j}\cdot\rho^{\gamma_j}\right)_{\alpha_1 \gamma_1}} \end{equation} . In the above, three newly defined matrices participate in a chain of products to yield the resultant matrix in the parentheses. Equation \eqref{eq:davidson9} is equivalent to updating $z$ with the resultant matrix $L$ times, where $L$ is the number of nonzero elements of $\Gamma$ and these elements are the updating coefficients. The three matrices in the parentheses are defined using the tensors appearing in \eqref{eq:davidson8} as \begin{align} \label{eq:davidson10} \left(\pi^k\right)_{ij}\equiv v_{ikj}; \hspace{1pc}\left(\varpi^k\right)_{ij}\equiv w_{\lfloor kij\rfloor}; \hspace{1pc}\left(\rho^k\right)_{ij}\equiv u_{jki} \end{align} . 
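The matrix-free evaluation of equations (\ref{eq:davidson8})--(\ref{eq:davidson10}) can be sketched as follows. This is an illustrative Python fragment with assumed tensor shapes (the middle dimensions of $v$ and $u$, denoted $W$ below, stand for the retained singular values), not the actual production routine of this work.
\begin{verbatim}
import numpy as np

def apply_X(w, nonzeros, v, u, P, nloc):
    """Matrix-free evaluation of z = X w, following eqs. (davidson8)-(davidson10).

    w        : input vector with composite index [s, alpha3, gamma3]
    nonzeros : list of (r_j, s_j, alpha_j, gamma_j, Omega_j) for the sparse MPO tensor
    v, u     : tensors of assumed shape (P, W, P); W is the number of kept singular values
    nloc     : local space rank of an effective site (2**N here)
    """
    w = w.reshape(nloc, P, P)                # (varpi^s)_{alpha3 gamma3} = w[s]
    z = np.zeros_like(w)
    for (r, s, a, g, omega) in nonzeros:
        pi = v[:, a, :]                      # (pi^{alpha_j})_{alpha1 alpha3}
        rho = u[:, g, :].T                   # (rho^{gamma_j})_{gamma3 gamma1}
        z[r] += omega * (pi @ w[s] @ rho)    # chain of small matrix products
    return z.reshape(-1)
\end{verbatim}
Each term then costs only a few $P\times P$ matrix products instead of a dense multiplication with the full $2^NP^2$-dimensional matrix.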
Note the difference in index order between the right-hand sides of the first and third equations in (\ref{eq:davidson10}), which reflects the sandwich structure constructed by the MPS wave function, its conjugate and the MPO. The transformation from the expression of the matrix-vector multiplication in equation (\ref{eq:davidson8}) to that in (\ref{eq:davidson9}) is crucial, in that the summations are broken into successive matrix products, which reduces the computational cost by a few orders of the matrix rank $P$. Again, the method of replacing many loops of index summation with matrix products helps reduce the computational burden when solving the GEE in the presence of the MPS and MPO. Meanwhile, the SVDs in equation \eqref{eq:ABCD} are simplified after the checkerboard transformation as \begin{align} \label{eq:ABCD1} A=& u\Lambda v\notag\\ C=& u'\Delta v' \end{align} where \begin{align} \label{eq:relabel2} A_{\alpha\equiv\lfloor\alpha_1\alpha_2\alpha_3\rfloor ,\gamma\equiv\lfloor\gamma_1\gamma_2\gamma_3\rfloor}\equiv &\xi_{r,\alpha_1\gamma_1}\Gamma_{rs,\alpha_2\gamma_2}\xi_{s,\alpha_3\gamma_3}\notag\\ C_{\alpha\equiv\lfloor\alpha_1\alpha_3\rfloor ,\gamma\equiv\lfloor\gamma_1\gamma_3\rfloor}\equiv &\xi_{r,\alpha_1\gamma_1}\xi_{r,\alpha_3\gamma_3} \end{align} which have ranks $T=\left(3N+1\right)P^2$ and $O=P^2$, respectively. Both of them can be very large. For instance, the largest $T$ value encountered in this study is $1.0\times 10^8$, for $P=2000$ and $N=8$. This is not accessible to ordinary SVD solvers. In this study, the SVDs are solved by the power method without forming $A$ and $C$. The matrix-vector multiplication is transformed into a succession of matrix products similar to equation \eqref{eq:davidson9}. The composite matrices $A$ and $B$ defined in equation \eqref{eq:relabel} have rank $\left(3N+1\right)P^2$; $C$ and $D$ defined in equation \eqref{eq:relabel1} have rank $P^2$. The composite-matrix-vector multiplication is the time-controlling factor in the Davidson method. It is decomposed into a chain of lower-rank matrix-vector products after decomposing the composite matrix as in equation \eqref{eq:davidson9}. The lower rank is $P$. Thus, the time consumption of iqEPT scales bilinearly with $P^{2.5}$ (when using a Lapack routine) and $2^N$ (which determines the number of nonzero elements of $\Gamma$). At the largest simulation scale of this work, it took $2\times 10^4$ seconds to solve the GEE in a series of iterations for $N=10$ and $P=1400$, using $6$ dual-Intel-Xeon nodes, each of which has $20$ cores and $256$ GB of memory installed. It took $50$ iterations to obtain converged data for that set of $N$ and $P$. \section{\label{sec:correlation} Spin-spin correlation and local magnetization} The spin-spin correlation is defined as $C_r\equiv\langle S^z_{\left(i,j\right)} S^z_{\left(i+r,j\right)}\rangle$, where the operators are separated by $r$ spins in the LD of an infinity-by-$N$ lattice. Without loss of generality, we set $j=1$. After converting the lattice into a chain of effective lattice sites, the operators are redefined as \begin{align} \label{eq:operator} S^z_{i,\text{eff}}\equiv S^z_{\left(i,1\right)}\otimes I_{\left(i,2\right)}\otimes\cdots\otimes I_{\left(i,N\right)} \end{align} , where $I_{\left(i,m\right)}$ is the identity operator acting on the $\left(i,m\right)^{th}$ physical lattice site. It is straightforward to construct a tensor $\Theta$ for $S^z_{i,\text{eff}}$. After implementing the checkerboard transformation, the same tensor also applies to $S^z_{i+r,\text{eff}}$. 
Therefore, \begin{widetext} \begin{align} \label{eq:correlation} C_r=\frac{tr\left[\cdots\left(\xi_{s^{i},\alpha_i\alpha_{i+1}}\Theta_{s^{i}r^{i}}\xi_{r^{i},\gamma_i\gamma_{i+1}}\right)\cdot\left(\xi_{s^{i+1},\alpha_{i+1}\alpha_{i+2}}\xi_{s^{i+1},\gamma_{i+1}\gamma_{i+2}}\right)\cdots \left(\xi_{s^{i+r},\alpha_{i+r}\alpha_{i+r+1}}\Theta_{s^{i+r}r^{i+r}}\xi_{r^{i+r},\gamma_{i+r}\gamma_{i+r+1}}\right)\cdots\right]}{tr\left[\cdots \left(\xi_{s^{i},\alpha_{i}\alpha_{i+1}}\xi_{s^{i},\gamma_{i}\gamma_{i+1}}\right)\cdot \left(\xi_{s^{i+1},\alpha_{i+1}\alpha_{i+2}}\xi_{s^{i+1},\gamma_{i+1}\gamma_{i+2}}\right) \cdots\right]} \end{align} \end{widetext} We convert the quantities in the first and second parentheses in the numerator of equation (\ref{eq:correlation}) into matrices $G$ and $B$, as we did in equation \eqref{eq:relabel} in Sec.\ref{sec:mps}. Equation (\ref{eq:correlation}) becomes \begin{align} \label{eq:correlation1} C_r=\lim_{M\rightarrow \infty}{\frac{tr\left(G\cdot B^r\cdot G\cdot B^{M-r-2}\right)}{tr\left(B^M\right)}} \end{align} . Since equation (\ref{eq:correlation1}) only yields positive values, a proper sign should be given to $C_r$ according to whether $r$ is even or odd, to reflect the checkerboard transformation. We singular-value-decompose $B$ as $B=\mu \varrho \nu$. $\{\varrho_i\}$ are sorted in descending order of absolute magnitude. Equation \eqref{eq:correlation1} leads to \begin{align} \label{eq:correlation2} C_r=\frac{tr\left({\nu}^T_1\cdot G\cdot B^r\cdot G\cdot{\mu}_1\right)}{\varrho_1^{r+2}} \end{align} . When $r\rightarrow \infty$, we have \begin{align} \label{eq:correlation3} C_{\infty}=\frac{F_1^2}{\varrho_1^2} \end{align} where \begin{align} \label{eq:F} F_1\equiv {\nu}^T_1\cdot G\cdot{\mu}_1 \end{align} . The numerator may be zero or nonzero, corresponding to a disordered lattice (including quasi-long-range order, QLRO) or an ordered lattice, respectively. Moreover, it is straightforward to show that the local magnetization is \begin{align} \label{eq:correlation4} \bar{M}\equiv\langle S^z\rangle=\lim_{M\rightarrow \infty}{\frac{tr\left(G\cdot B^{M-1}\right)}{tr\left(B^M\right)}}=\frac{F_1}{\varrho_1} \end{align} . Thus, we arrive at a theorem stating that the spin-spin correlation at infinite separation is the square of the local magnetization, \begin{align} \label{eq:correlation5} C_{\infty}=\bar{M}^2 \end{align} . This theorem is a useful supplement to the commonly used definitions relating spin-spin correlations to the staggered magnetization\cite{Manousakis1991}, though these are not universally agreed upon\cite{Kaplan1989}. A new quantity \begin{align} \label{eq:tau} \tau_r\equiv Ln\left(LnC_r-LnC_{r+1}\right) \end{align} proves useful in Sec.\ref{subsec:coupled}. Although the largest eigenvalue of $B$ alone determines the asymptotic $C_\infty$, the first eigenvalue $\varrho_{\bar{k}}$ that has a significant non-vanishing \begin{align} \label{eq:Fk} F_{\bar{k}}\equiv {\nu}^T_1\cdot G\cdot{\mu}_{\bar{k}}={\nu}^T_{\bar{k}}\cdot G\cdot{\mu}_1 \end{align} is also important in determining how $C_r$ approaches $C_\infty$ asymptotically. Since \begin{align} \label{eq:correlation6} C_r\approx \frac{F_1^2}{\varrho_1^2}+\frac{F_{\bar{k}}^2}{\varrho_1^2}\left(\frac{\varrho_{\bar{k}}}{\varrho_1}\right)^{r-1} \end{align} , we have \begin{align} \label{eq:correlation7} \tau_r \approx \left(r-1\right)\left(Ln\varrho_{\bar{k}}-Ln\varrho_1\right)+\it{f} \end{align} where \begin{align} \label{eq:correlation8} \it{f}\equiv 2\left(LnF_{\bar{k}}-LnF_1\right)+Ln\left(1-\varrho_{\bar{k}}/\varrho_1\right) \end{align} . 
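To make the preceding relations concrete, the following illustrative Python sketch evaluates $C_r$, the local magnetization and $\tau_r$ from given matrices $G$ and $B$ via equations (\ref{eq:correlation2}), (\ref{eq:correlation4}) and (\ref{eq:tau}); the matrices are placeholders for those actually assembled from the converged MPS tensors.
\begin{verbatim}
import numpy as np

def correlation_from_transfer(G, B, r_max):
    """C_r, local magnetization and tau_r from the transfer-like matrices G and B."""
    mu, rho, nu = np.linalg.svd(B)         # B = mu * diag(rho) * nu
    mu1, nu1, rho1 = mu[:, 0], nu[0, :], rho[0]

    F1 = nu1 @ G @ mu1                     # eq. (F)
    magnetization = F1 / rho1              # eq. (correlation4)
    C_inf = magnetization ** 2             # eq. (correlation5)

    C, Br = [], B.copy()
    for r in range(1, r_max + 1):
        C.append((nu1 @ G @ Br @ G @ mu1) / rho1 ** (r + 2))   # eq. (correlation2)
        Br = Br @ B
    C = np.array(C)

    # tau_r = Ln(LnC_r - LnC_{r+1}), eq. (tau); defined where C_r is decreasing
    lnC = np.log(np.abs(C))
    tau = np.log(lnC[:-1] - lnC[1:])
    return C, magnetization, C_inf, tau
\end{verbatim}
In particular, one can check numerically that the computed $C_r$ approaches $\bar{M}^2$ for large $r$, in line with equation (\ref{eq:correlation5}).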
Equation \eqref{eq:correlation4} tells us that $F_1$ does not vanish for an ordered lattice. However, $F_1$ also does not vanish for a disordered lattice when the simulated staggered magnetization $\bar{M}$ has not yet converged to zero. In both cases, equation \eqref{eq:correlation7} implies that $\tau_r$ is linear in $r$. There are three different scenarios of $\tau_r$ versus $r$ after $F_1$ is converged with respect to $P$. The first two scenarios are the following. Case 1: $F_1\ne 0$. The slope of $\tau_r$ versus $r$ is a nonzero constant $p_0 \equiv Ln\varrho_2-Ln\varrho_1$. Then, $LnC_r$ and hence $C_r$ become constant for large $r$. The lattice is ordered. Case 2: $F_1=0$. Equation \eqref{eq:correlation7} loses its definition. If $\varrho_{\bar{k}}$ is significantly smaller than $\varrho_1$, equation \eqref{eq:correlation6} reduces to \begin{align} \label{eq:correlation9} C_r\approx \frac{F_{\bar{k}}^2}{\varrho_1^2}\left(\frac{\varrho_{\bar{k}}}{\varrho_1}\right)^{r-1} \end{align} . The slope of $\tau_r$ versus $r$ is zero. $C_r$ decays exponentially with $r$. The lattice is disordered and gapped. In the third scenario, $\tau_r$ behaves differently. Case 3: The first few largest eigenvalues are almost degenerate with $\varrho_1$, but they have less significant $F_j$. They give very small contributions to the correlation that decay slowly. The eigenvalues that have significant $F_j$'s are, however, definitely smaller than $\varrho_1$. They give contributions to the correlation that are large for small $r$ but decay exponentially. Summed up, the resultant correlation function shows a power-law decay over a large range of $r$. $\tau_r$ is linear in $Ln\left(r\right)$ instead of $r$. The lattice is QLRO. See Appendix \ref{sec:appendix} for an example of a spin chain. \section{\label{sec:reduction} Space reduction in matrix product state} The use of the density matrix is crucial in simulating quantum lattice models in that it guides the reduction of the Hilbert space/subspace, which increases exponentially with the system/subsystem size. Given a bipartite structure $A$ and $E$ of a system, the system wave function $\mid \Psi\rangle$ is an entangled quantity composed of basis vectors from the subspaces $\{\mid \phi_i\rangle\}$ for $A$ and $\{\mid \psi_i\rangle\}$ for $E$, \begin{equation} \label{eq:systemwavefunction} \mid \Psi\rangle=\sum_{i,j}{X_{ij}\mid \phi_{i}\rangle\mid \psi_{j}\rangle} \end{equation} where $X$ is a tensor entangling $A$ and $E$. The density operator for a subspace, say $\{\mid \phi_i\rangle\}$, is defined as $\hat{\rho}\equiv Tr_E\mid \Psi\rangle\langle \Psi\mid$, where $Tr_E$ means that the degrees of freedom in subsystem $E$ are traced out. Its matrix representation is \begin{align} \label{eq:densitymatrix} \left(\rho_{ij}\right)\equiv \langle \phi_i\mid \hat{\rho}\mid \phi_j\rangle=X_{ik}X^*_{jk} \end{align} . Note that the normalization of $\mid \Psi\rangle$ implies $tr\rho=1$. The system wave function can be reconstructed as $\mid \Psi^{\prime}\rangle$ using a reduced space $\{\mid\theta_i\rangle\}$ consisting of $M$ basis vectors for $A$, along with the unaltered space $\{\mid \psi_i \rangle\}$ for $E$. The density matrix built in $\{\mid \phi_i\rangle\}$ is used to make the residual vector $\mid R\rangle\equiv \mid \Psi\rangle-\mid \Psi^{\prime}\rangle$ have a minimum norm\cite{Schollwoeck2005,White1992,White1993}. Explicitly, the density matrix is diagonalized and only $M$ eigenvectors $\{v_i;i=1,\cdots ,M\}$ need to be retained. 
They correspond to the most significant $M$ diagonal elements $\{\eta_i\}$. New basis vectors are constructed as \begin{align} \label{eq:basis_transformation} \mid \theta_i\rangle=v_i\left(k\right)\mid \phi_k\rangle \end{align} where $v_i\left(k\right)$ is the $k^{th}$ element of the eigenvector $v_i$. Then, $\mid \Psi^{\prime}\rangle$ is constructed as \begin{align} \label{eq:reduced_function} \mid \Psi^{\prime}\rangle=Y_{ij}\mid \theta_{i}\rangle\mid \psi_{j}\rangle \end{align} where \begin{align} \label{eq:function_transformation} Y_{ij}=v_i\left(k\right)X_{kj} \end{align} . If the formal reduction is unitary (i.e., no reduction), substituting equations \eqref{eq:basis_transformation} and \eqref{eq:function_transformation} into equation \eqref{eq:reduced_function} restores the wave function in equation \eqref{eq:systemwavefunction}. When the truncation of the space of $A$ takes place, i.e., when the eigenvector matrix kept is rectangular and hence no longer unitary, we have \begin{equation} \label{eq:restore_function} \mid \Psi^{\prime}\rangle = X_{kj}v_i(k)v_i(l)\mid \phi_{l}\rangle\mid \psi_{j}\rangle = \mid \Psi\rangle-\mid R\rangle \end{equation} where \begin{equation} \label{eq:residual} \mid R\rangle=X_{kj}\Delta_{kl}\mid \phi_{l}\rangle\mid \psi_{j}\rangle \end{equation} . Here, $\Delta_{ij}=\sum_{k=M+1}^{n}v_i\left(k\right)v_j\left(k\right)$. It is straightforward to show that $||\mid R\rangle||^2=\sum_{i=M+1}^{n}{\eta_i}$. Following this line of local space reduction, DMRG uses the density matrix to keep a fixed number of transformed basis vectors for an enlarged part of the system. Meanwhile, DMET provides an alternative to DMFT, using the density matrix to improve the impurity state of a fragment embedded in a background. We implement the density matrix in a different way, where it is used to reduce the local spaces in the MPS. Dividing a quantum lattice into $L$ blocks, each of which contains $N$ physical sites, an MPS is built as follows \begin{align} \label{eq:mps2} \mid \Psi\rangle=\sum_{\cdots r^i\cdots r^L}{tr\left(\cdots \xi_{r^{i-1}}^{i-1}\cdot \xi_{r^i}^i\cdots\right)\cdots\mid \phi_{r^{i-1}}^{i-1}\rangle\mid \phi_{r^{i}}^{i}\rangle\cdots} \end{align} where each MPS tensor $\xi^i$ is associated with a block, say the $i^{th}$ block. Different from equation \eqref{eq:mps1}, the MPS in equation \eqref{eq:mps2} has a more general form. Its tensor does not necessarily have a unit cell structure, and the space index $r^i$ runs from $1$ to $Q\equiv R^N$, meaning that each of the $N$ physical sites in a block has a general space rank $R$. The computational burden of the variational optimization of each MPS tensor is determined by both the bonding rank $P$ and the space rank $Q$. One needs a tractable strategy to balance the choices of $P$ and $Q$. Choosing blocks that contain more physical sites has the following benefits. First, there are fewer tensors to solve. Second, a smaller $P$ achieves the same precision. In the extreme case when a block contains the whole system, one just needs an MPS tensor of rank $1$ to precisely represent the system wave function. However, since $Q$ increases exponentially with $N$, which excludes the possibility of building the MPS on a block containing the whole system, one still needs to solve multiple MPS tensors, while the same computational resources only allow a smaller $P$ when $N$ is larger. We propose a scheme to overcome this difficulty when building an MPS on a blocked quantum lattice; before presenting it, the sketch below illustrates the basic density-matrix truncation described above. 
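The following Python fragment is a minimal, self-contained illustration of the density-matrix truncation of equations (\ref{eq:densitymatrix})--(\ref{eq:residual}); the coefficient tensor $X$ is a random stand-in for an actual entangled state and the dimensions are arbitrary.
\begin{verbatim}
import numpy as np

def truncate_subspace(X, M):
    """Keep the M most significant density-matrix eigenvectors of subsystem A.

    X : coefficients of |Psi> = sum_ij X_ij |phi_i>|psi_j>  (eq. systemwavefunction)
    Returns the transformed coefficients Y (eq. function_transformation)
    and the discarded weight, which equals || |R> ||^2."""
    X = X / np.linalg.norm(X)              # normalize so that tr(rho) = 1
    rho = X @ X.conj().T                   # eq. (densitymatrix)
    eta, v = np.linalg.eigh(rho)           # eigenvalues in ascending order
    eta, v = eta[::-1], v[:, ::-1]         # sort in descending order
    Y = v[:, :M].conj().T @ X              # eq. (function_transformation)
    discarded = eta[M:].sum()              # = sum of the dropped eta_i
    return Y, discarded

# a random bipartite state as an illustration
X = np.random.randn(16, 64)
Y, err = truncate_subspace(X, M=8)
print(err, 1.0 - np.linalg.norm(Y) ** 2)   # the two values agree
\end{verbatim}
The printed discarded weight coincides with $1-||\mid\Psi^{\prime}\rangle||^2$, i.e., with $||\mid R\rangle||^2=\sum_{i=M+1}^{n}\eta_i$.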
An MPS of rank $P_1$ in the original spaces, $\mid \Psi\rangle_{\perp P= P_1}$, is used to construct the density matrix for each block, say the $i^{th}$ block, as \begin{widetext} \begin{align} \label{eq:construction} \rho_{ab}^{i}=tr\left[\cdots\left(\xi^{i-1}_{r^{i-1},\alpha_1\gamma_1}\xi^{i-1}_{r^{i-1},\alpha_2\gamma_2}\right)\left(\xi^i_{a,\gamma_1\eta_1}\xi^i_{b,\gamma_2\eta_2}\right)\left(\xi^{i+1}_{r^{i+1},\eta_1\theta_1}\xi^{i+1}_{r^{i+1},\eta_2\theta_2}\right)\cdots\right] \end{align} \end{widetext} . In $\xi^i_{a,\gamma_1\eta_1}$, $a$ denotes the space index, and the bond indices $\gamma_1$ and $\eta_1$ are shown explicitly here. Note that the density matrix elements in equation \eqref{eq:construction} should be adjusted according to the MPS normalization. Density matrices are constructed for all $L$ tensors (blocks) and are diagonalized simultaneously. For each block, only the eigenvectors corresponding to the most significant $M$ diagonal elements are used to transform the space $\{\mid \phi^i_j\rangle\}$ into the smaller one $\{\mid \theta^i_j \rangle\}$ according to equation \eqref{eq:basis_transformation}, where the superscript refers to the block's label. Note that the MPS in the reduced space set $\{H^{i,\prime}\equiv\{\theta^i_j\}\}$ for $P\le P_1$, $\mid \Psi^{\prime}\rangle_{\perp P\le P_1}$, can be reconstructed from the existing MPS $\mid \Psi\rangle_{\perp P\le P_1}$ as \begin{align} \label{eq:newmps} \mid \Psi^{\prime}\rangle=\sum_{\cdots s^i\cdots s^L}{tr\left(\cdots \kappa_{s^{i-1}}^{i-1}\cdot \kappa_{s^i}^i\cdots\right)\cdots\mid \theta_{s^{i-1}}^{i-1}\rangle\mid \theta_{s^{i}}^{i}\rangle\cdots} \end{align} where \begin{align} \label{eq:newmpstensor} \kappa^i_{a}=v_a\left(b\right)\xi^i_{b} \end{align} . $\mid \Psi^{\prime}\rangle_{\perp P> P_1}$ is then variationally determined. Alternatively, the MPS in the reduced spaces can be variationally determined over the whole range of $P$. The variational solution is described in Sec.\ref{sec:ept}. In both cases, since $\{H^{i,\prime}\}$ has a smaller rank $Q^{\prime}<Q$, the same computational resources now allow a larger $P$, yielding better accuracy in turn. For the spin ladder studied in this work, we showed that the linking complexity between the building units of the MPO is reduced to a linear dependence on $N$, and that the large-size GEE is efficiently handled by integrating the Jacobi-Davidson method with the MPS and MPO. The only remaining factor whose complexity grows exponentially with increasing lattice width is the space rank $2^N$ of an effective site. This rank has been the critical obstacle to handling even larger lattices or more complex scenarios with iqEPT. This exponential complexity is now systematically overcome by the space reduction in the MPS. Meanwhile, the density matrix can be used to reveal the system properties, since it determines which kind of basis vectors contribute the most to the GS wave function. To this end, we define the following quantity to reveal the spontaneous spin rotational symmetry breaking, \begin{align} \label{eq:neworder} \bar{S^z}_j \equiv \langle \theta_j^i \mid \sum_{k=1}^{N}{\left(-1\right)^k S^z_{\left(i,k\right)}} \mid \theta_j^i\rangle \end{align} where $\mid \theta^i_j\rangle$ is the $j^{th}$ new basis vector in the space (only unitarily transformed, not necessarily reduced) of the $i^{th}$ effective site, and $\left(i,k\right)$ are the 2D coordinates of a physical site. For a wave function with broken symmetry, a spin configuration is no longer equivalent to its up-side-down counterpart. 
One is expected to have a larger amplitude than the other in GS. It leads to nonzero values of $\bar{S^z}_j$ with the same sign for those most significant $j's$. By contrast, this quantity is zero when there is no spin rotational symmetry breaking. \section{\label{sec:result}results} \subsection{\label{subsec:decoupled} Benchmark and comparison} \begin{table*}[ht] \centering \begin{tabular}{c|cccc|cc} \hline & \multicolumn{4}{c|}{iqEPT} & DMRG&\\ \cline{2-5} N & $P^{\prime}$ & $\epsilon_{P^{\prime}}$ & extrapolation & $\Delta$ & extrapolation & MC Loop\\ \hline $2$ & $40$ & $-0.578043$ & $-0.578043$ & & $-0.578043$ & $-0.57802$\\ $3$ & $500$ & $-0.600538$ & $-0.600538$ & & $-0.600537$ & $-0.60063$\\ $4$ & $640$ & $-0.618567$ & $-0.618567$ & & $-0.618566$ & $-0.61873$\\ $5$ & $1200$ & $-0.627781$ & $-0.627787$ & $9.6\times 10^{-6}$ & $-0.62776$ & $-0.62784$\\ $6$ & $1600$ & $-0.634681$ & $-0.634690$ & $1.4\times 10^{-5}$ & $-0.6346$ & $-0.635(1)$\\ \hline \end{tabular} \caption{\label{table:compare}Comparison of GS energy per site among iqEPT, DMRG and MC loop algorithm for spin ladders with open boundary condition imposed in the rung.} \end{table*} \begin{figure} \begin{center} $\begin{array}{c} \mbox{(a)}\\ \includegraphics[width=18pc]{decoupled} \\ \mbox{(b)}\\ \includegraphics[width=18pc]{fixed_point_decoupled}\\ \end{array}$ \caption{\label{fig:decoupled} (a) Relative error versus MPS rank $P$ for decoupled ladders of $N=1, 2, 3, 4, 5, 6, 7$ and $8$, from left hand side to right hand side. (b) Ratio of $P_N$ (for $N$) to $P_{N-1}$ (for $N-1$), with respect to $\frac{1}{N}$. At both $P_N$ and $P_{N-1}$ the same accuracy is obtained. The comparison is made for accuracies $99.9\%$, $99.7\%$, and $99\%$ from top to bottom. The tendency of $\frac{P_N}{P_{N-1}}\rightarrow 2$ for $\frac{1}{N}\rightarrow 0$ implies that the MPS rank $P$ will asymptotically increase as an exponential function $2^N$.} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{c} \mbox{(a)}\\ \includegraphics[width=18.pc]{entropy_extrapolation_decoupled}\\ \mbox{(b)}\\ \includegraphics[width=18.pc]{entropy_extrapolation_OBC}\\ \mbox{(c)}\\ \includegraphics[width=18.pc]{entropy_decoupled_OBC}\\ \end{array}$ \caption{\label{fig:entanglement_decoupled} Entanglement entropy of a rung versus $P^{-1/3}$ for (a) decoupled ladders and (b) coupled ladders with OBC in the rung. (c) Entanglement entropy versus $N$. Open and solid circles represent coupled and decoupled ladders, respectively.} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{c} \mbox{(a)}\\ \includegraphics[width=18pc]{log_log_fit5}\\ \mbox{(b)}\\ \includegraphics[width=18pc]{openbc5} \end{array}$ \caption{\label{fig:loglogfit} (a) Least square versus GS energy per site and (b) log-log view of relative error versus MPS rank $P$ for an isotropically coupled ladder of $N=5$ with OBC in the rung. The minimum of curve in (a) gives the extrapolation at $P\rightarrow\infty$ for GS Energy per site needed to calculate the relative error in (b).} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{c} \mbox{(a)}\\ \includegraphics[width=18pc]{compare_DMRG_MC4}\\ \mbox{(b)}\\ \includegraphics[width=18pc]{compare_DMRG_MC6}\\ \end{array}$ \caption{\label{fig:compare} Comparison of GS energy per site for spin ladders of $N$ (a) $=4$ and (b) $=6$ both in the energy scale of $0.003$, by iqEPT, DMRG and MC loop algorithm. 
The discrepancy between the short-dotted DMRG extrapolation and the converged iqEPT result, which overlaps with its dashed extrapolation, increases by two orders of magnitude when $N$ increases from $4$ to $6$. Meanwhile, the MC loop algorithm result is not reliable on the energy scale shown.} \end{center} \end{figure} In order to check the correctness of our algorithm, we benchmark the method on decoupled spin-$\frac{1}{2}$ ladders for various widths $N$. Regardless of $N$, the ground state energy per site $\epsilon_0$ should be equal to the exact value $-0.443147$ given by the Bethe ansatz\cite{Bethe1931} for a single spin chain. Previously, results for $N=1$ along a similar line of reasoning were reported\cite{Wang2012} to agree with the exact results. That work did not need the entanglement reduction in the MPO described in Sec.\ref{sec:mpoept} and did not apply the algorithm extensions presented in this work. For $N=2$, our extended algorithm reproduced the exact energy at $P=2000$, proving its correctness. Fig.\ref{fig:decoupled}(a) shows a linear log-log relationship between the error of the iqEPT data, relative to the exact value, and the MPS rank $P$. Fig.\ref{fig:decoupled}(b) further shows that the ratio of $P_N$ (for $N$) to $P_{N-1}$ (for $N-1$) approaches $2$ when $1/N\rightarrow 0$. This indicates that $P$ will asymptotically increase as an exponential function $2^N$ when $N\rightarrow \infty$. This increase is very rapid. For example, the MPS rank needed to obtain an energy accuracy of $99.99\%$ is about $6\times 10^5$ for $N=6$. It is clear that treating a decoupled spin ladder by iqEPT is inefficient. We compute the entanglement entropy according to equation \eqref{eq:entropy} after diagonalizing the density matrix of a rung obtained in equation \eqref{eq:construction}. Fig.\ref{fig:entanglement_decoupled}(a) shows that it has a linear dependence on $P^{-1/3}$. This can be used to make the reliable extrapolations used in (c). The linear dependence of the entanglement entropy on $N$, shown as open circles, confirms our prediction made in Sec.\ref{subsec:mpsept} that the area law of entanglement entropy coincidentally applies to decoupled ladders in our method. Note that the convergence of the entropy is continuous in (a) for the gapless decoupled ladders. In what follows, however, the scenario changes drastically for an isotropically coupled spin ladder with either PBC or open boundary condition (OBC) imposed in the rung. First, we consider the ladder with OBC in the rung for $N$ up to $6$, to compare directly with existing methods in the literature. The solid circles in Fig.\ref{fig:entanglement_decoupled}(c) show that the area law does not apply to the coupled ladder (see Sec.\ref{subsec:mpsept} for an explanation). Moreover, the sudden convergence of the entanglement entropy in (b) shows that the coupled ladder with OBC in the rung is gapped\cite{Eisert2010} for $N=2, 4$ and $6$. However, the gap does not decay exponentially with increasing $N$, because otherwise the entanglement entropy would be linear\cite{Irani2010,Gottesman2010,Eisert2010} in $N$. This observation does not yet violate NLSM's prediction of the exponential decay of the gap, because this prediction only applies to the coupled ladder with PBC in the rung. To this end, the entanglement entropy of the target model, i.e., the coupled ladder with PBC, is computed. Fig.\ref{fig:entanglement_coupled}(c) partially confirms the prediction for $N=2, 4$ and $6$, as the entropy segment within this interval of $N$ is indeed linear. 
Nevertheless, the saturation starting from $N=8$ suggests that NLSM's prediction does not apply to larger $N$'s. See Sec.\ref{subsec:coupled} for details. For the moment, we continue to use the model with OBC to compare with the existing methods. Since there is no exact result from which to calculate the relative error directly, we extrapolate the asymptotic energy $\bar{\epsilon}\equiv \epsilon_{P\rightarrow\infty}$ to obtain the relative error. Assuming a power relationship between the relative error and $P$ in iqEPT for isotropically coupled ladders, the parameterized least square is defined as follows \begin{align} \label{eq:fitting} \it{l}\left(\bar{\epsilon}\right)\equiv \sum_{i}\left(b_1 logP_i+b_2-log\frac{\bar{\epsilon}-\epsilon_{P_i}}{\bar{\epsilon}}\right)^2 \end{align} where \begin{align} \label{eq:fitting1} b_1=&\frac{m\sum_i{logP_i log\frac{\bar{\epsilon}-\epsilon_{P_i}}{\bar{\epsilon}}}-\sum_i{logP_i}\sum_i{log\frac{\bar{\epsilon}-\epsilon_{P_i}}{\bar{\epsilon}}} }{m\sum_i{\left(logP_i\right)^2}-\left(\sum_i{logP_i}\right)^2}\notag\\ b_2=&\frac{1}{m}\left(\sum_i{log\frac{\bar{\epsilon}-\epsilon_{P_i}}{\bar{\epsilon}}}-b_1\sum_i{logP_i}\right) \end{align} . Minimizing equation \eqref{eq:fitting} gives the optimal extrapolation $\bar{\epsilon}$; a small numerical sketch of this procedure is given at the end of this subsection. Fig.\ref{fig:loglogfit}(a) shows an example of such an extrapolation for $N=5$. Fig.\ref{fig:loglogfit}(b) confirms the assumed power relationship, making the extrapolation self-consistent. Extrapolations are made for all ladders in the comparison. Fig.\ref{fig:compare} shows that the GS energies for $N=4$ and $6$ have converged on the scale shown. Table \ref{table:compare} compares the results of iqEPT, DMRG\cite{Ramos2014} and the MC loop algorithm\cite{Frischmuth1996}. The DMRG data in \cite{Ramos2014} are not the directly obtained data but their extrapolation after two loops of scaling, which vary both the finite ladder length and the number of kept diagonal elements of the density matrix. Both Fig.\ref{fig:compare} and Table \ref{table:compare} show that the discrepancy between the iqEPT results (including extrapolation) and the DMRG extrapolation rapidly increases when $N$ increases from $4$ to $6$. Explicitly, it is $1.62\times 10^{-6}$ ($1.62\times 10^{-6}$) for $N=4$ and $1.28 \times 10^{-4}$ ($1.42 \times 10^{-4}$) for $N=6$, increasing by two orders of magnitude. For larger $N$, the discrepancy is expected to be progressively larger. As the next subsection will show, for the ladder of interest, which has PBC imposed in both directions, the relative error of the iqEPT result at $P=2000$ for $N=8$ is about $7.5\times 10^{-5}$. For $N=12$, it is about $10^{-3}$. See Table \ref{table:energy} for details. It is obvious that the relative error in iqEPT scales much more slowly with respect to $N$ than that in DMRG for a spin ladder. One last interesting observation is that the ladders with OBC in the rung are more computationally demanding in iqEPT. For instance, to obtain the same relative error of $3.0\times 10^{-5}$ for $N=6$, $P=900$ is needed for OBC while $P=560$ suffices for PBC. By contrast, DMRG favors OBC\cite{Schollwoeck2005}. This difference has a two-fold meaning. First, the entanglement entropy of a rung in the ladder with OBC (solid circles in Fig.\ref{fig:entanglement_coupled}(c)) is greater than that with PBC (Fig.\ref{fig:entanglement_coupled}(b)) for each $N$ in the comparison. This means that OBC in the rung is necessarily more challenging to simulate. Second, DMRG's difficulty with PBC is caused by the winding MPS form\cite{Eisert2010} (Fig.\ref{fig:TNSandMPS}(b)). 
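The extrapolation of equations (\ref{eq:fitting}) and (\ref{eq:fitting1}) can be sketched as follows; this is an illustrative Python fragment using synthetic $(P_i,\epsilon_{P_i})$ pairs that follow an assumed power law, not our simulation output.
\begin{verbatim}
import numpy as np

def extrapolate_energy(P, eps, candidates):
    """Return the candidate eps_bar minimizing the least square of eq. (fitting)."""
    logP = np.log(np.asarray(P, dtype=float))
    eps = np.asarray(eps, dtype=float)
    best, best_l = None, np.inf
    for eps_bar in candidates:
        rel = (eps_bar - eps) / eps_bar
        if np.any(rel <= 0.0):               # log undefined, skip this candidate
            continue
        y = np.log(rel)
        m = len(P)
        b1 = (m * (logP * y).sum() - logP.sum() * y.sum()) / \
             (m * (logP ** 2).sum() - logP.sum() ** 2)        # eq. (fitting1)
        b2 = (y.sum() - b1 * logP.sum()) / m
        l = ((b1 * logP + b2 - y) ** 2).sum()                 # eq. (fitting)
        if l < best_l:
            best, best_l = eps_bar, l
    return best

# synthetic data following an assumed power law in P
P = [100, 200, 400, 800, 1600]
eps = [-0.6720 + 0.02 * p ** -0.8 for p in P]
print(extrapolate_energy(P, eps, np.linspace(-0.6730, -0.6710, 2001)))
\end{verbatim}
For exact power-law data the least square vanishes at the true $\bar{\epsilon}$, which mirrors the minimum seen in Fig.\ref{fig:loglogfit}(a).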
\subsection{\label{subsec:coupled} GS properties of the target model} \begin{figure} \begin{center} $\begin{array}{c} \mbox{(a)}\\ \includegraphics[width=18.pc]{energy_p}\\ \mbox{(b)}\\ \includegraphics[width=18.pc]{energy_inversep}\\ \mbox{(c)}\\ \includegraphics[width=18.pc]{energy_extrapolation} \end{array}$ \caption{\label{fig:energy} (a) The convergence of the GS energy per site with respect to the MPS rank $P$, for $N=4, 6, 8, 10$ and $12$ from bottom to top. The inset is for $N=2$ on a distinct energy scale. (b) The convergence of the energy with respect to $1/P$. The inset is for $N=4$ on a distinct energy scale. These give the extrapolations used in (c). (c) The GS energy per site approaches the thermodynamic value as a fourth-order function of $\frac{1}{N}$ when $\frac{1}{N}\rightarrow 0$, one order faster than the approach from $N$-by-$N$ lattices. Only data for $N=6, 8, 10$ and $12$ are shown due to the very fast decay of $N^{-4}$.} \end{center} \end{figure} Our target model is the isotropically coupled ladder of even $N$'s, with PBC imposed in both directions. One of our main results is the GS energy per site for $N$ up to $14$, shown in Fig.\ref{fig:energy}. In (a), the variation of the energy with respect to the MPS rank $P$ is hardly noticeable at large $P$ values on the energy scale of $0.1$ shown, for $N$ up to $12$. Recall that Fig.\ref{fig:decoupled} shows a power relationship between the relative error and $P$, with negative linear coefficients $b_1$ as defined in equation \eqref{eq:fitting1}. Therefore, the straight lines of energy versus $1/P$ should give reliable extrapolations when $1/P \rightarrow 0$, which is a simpler alternative to the extrapolating process in equation \eqref{eq:fitting}; a minimal sketch of this simpler extrapolation is given below. Fig.\ref{fig:energy}(b) shows the energy versus $1/P$ on the scale of $0.01$ for $N=6, 8, 10$ and $12$ from bottom to top, and on the scale of $0.001$ for $N=4$ in the inset. Indeed, the straight lines steadily approach the extrapolations. Table \ref{table:energy} lists the largest MPS rank $P^{\prime}$ used for each $N$, the simulated GS energy per site $\epsilon_{P^{\prime}}$, the extrapolated energy $\bar{\epsilon}$ and the relative error $\Delta$. Note that the number of converged digits for $N=2$, $-0.8593457\left(1\right)$ when $P\ge 16$, is much larger than for the other $N$'s. We plot the extrapolated energies for each $N$ versus $N^{-4}$ in Fig.\ref{fig:energy}(c). Only data for $N=6, 8, 10$ and $12$ are shown due to the very fast decay of $N^{-4}$. They quickly approach the thermodynamic limit value of $-0.66984$ with an uncertainty of $9.6\times 10^{-6}$. Our value agrees well with accepted values such as $-0.6696\pm0.0003$ by series expansions\cite{Singh1989} and $-0.6693(1)$ by the cluster algorithm\cite{Wiese1994}. It can be compared with the DMRG result of $-0.6768$\cite{Ramos2014}. It is worth mentioning that the finite-size effect fades away in our work by one order of $1/N$ faster than when approaching from $N$-by-$N$ lattices. The energy for an infinity-by-$N$ ($N=12$) lattice has a $2.5\times 10^{-4}$ difference relative to the thermodynamic limit value, as close as that for a $22 \times 22$ lattice (interpolated from Fig. 5 of \cite{Manousakis1991}). 
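A minimal sketch of this simpler $1/P$ extrapolation reads as follows; the $(P,\epsilon_P)$ pairs are placeholders, not the values of Table \ref{table:energy}.
\begin{verbatim}
import numpy as np

# placeholder (P, energy) pairs; in practice these are the converged iqEPT energies
P   = np.array([400.0, 800.0, 1200.0, 1600.0, 2000.0])
eps = np.array([-0.67010, -0.67042, -0.67053, -0.67058, -0.67061])

slope, intercept = np.polyfit(1.0 / P, eps, 1)   # straight-line fit in 1/P
print("extrapolated energy at 1/P -> 0:", intercept)
\end{verbatim}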
\begin{table} \begin{center} \begin{tabular}{ccccc} \hline N & $P^{\prime}$ & $\epsilon_{P^{\prime}}$ &$\bar{\epsilon}$ &$\Delta$ \\ \hline 2 & $32$ & $-0.85935$ & $-0.85935$ & \\ 4 & $266$ & $-0.68328$ & $-0.68329$ & $1.5\times 10^{-5}$\\ 6 & $560$ & $-0.67277$ & $-0.67279$ & $3.0\times 10^{-5}$\\ 8 & $2000$& $-0.67074$ & $-0.67078$ & $6.0\times 10^{-5}$\\ 10 & $1400$& $-0.66996$ & $-0.67017$ & $3.1\times 10^{-4}$\\ 12 & $560$ & $-0.66871$ & $-0.67001$ & $1.9\times 10^{-3}$\\ 14 & $350$ & $-0.66636$ & $-0.66993^*$ & $5.3\times 10^{-3}$\\ \hline \end{tabular} \caption{\label{table:energy}The simulated GS energy per site $\epsilon_{P^{\prime}}$ at the largest MPS rank tried $P^{\prime}$, the extrapolated energy $\bar{\epsilon}$ and the relative error $\Delta$ for various $N$'s of an infinity-by-$N$ lattice with periodic BC imposed in both directions.$^*$ the extrapolation for $N=14$ is replaced by the interpolation in Fig.\ref{fig:energy}(c).} \end{center} \end{table} \begin{figure} \begin{center} $\begin{array}{c} \mbox{(a)} \\ \includegraphics[width=18.5pc]{entanglement_inversen}\\ \mbox{(b)}\\ \includegraphics[width=18.5pc]{determine_entanglement}\\ \end{array}$ \caption{\label{fig:entanglement} (a) Ratio of the MPS rank $P_N$ (for $N$) to $P_{N-2}$ (for $N-2$), with respect to $\frac{1}{N}$. At both $P_N$ and $P_{N-2}$ the same accuracy is obtained. The tendency of $\frac{P_N}{P_{N-2}}\rightarrow 1$ for $\frac{1}{N}\rightarrow 0$ implies the saturating MPS rank with increasing $N$. The inset shows a larger scale starting from $\frac{P_2}{P_1}$. (b) The horizontal dashed line intercepts the curves of 'relative error versus $1/P$', giving $P_N$'s used in (a), given certain relative error, say $1.9\times 10^{-3}$. The comparison between $N=12$ and $14$ is made for the relative error of $5.3 \times 10^{-3}$, where $P=195$ for $N=12$ and $P=350$ for $N=14$, respectively.} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{c} \mbox{(a)}\\ \includegraphics[width=18.5pc]{entropy_extrapolation_PBC}\\ \mbox{(b)}\\ \includegraphics[width=18.5pc]{entropy_PBC}\\ \end{array}$ \caption{\label{fig:entanglement_coupled} Entanglement entropy of an effective site for coupled ladders with PBC in the rung. (a) Entanglement entropy versus $P^{-1/3}$. The steady tendency to $P^{-1/3}=0$ gives extrapolation for $N=8, 10$ and $12$ shown as rectangles, circles and triangles, respectively. Those of $N=2, 4$ and $6$ from bottom to top in the inset exhibit sudden convergence. (b) Entanglement entropy versus $N$. The dashed guiding line shows that the curve are three-segmented. The second segment is linear, covering $N=2,4$ and $6$. The third segment starts to bend when $N\ge 8$ and will saturate when $N\rightarrow\infty$.} \end{center} \end{figure} Meanwhile, the most intriguing information from the energy observation is the plot in Fig.\ref{fig:entanglement}(a). It shows the ratio of the MPS rank for a given accuracy, say, $1.9\times 10^{-3}$ for $N$ to that for $N-2$, with respect to $1/N$. Fig.\ref{fig:entanglement}(b) explains how to get each $P_N$. In (a), the dashed guiding line shows the tendency that $P_N/P_{N-2} = 1$ when $1/N\rightarrow 0$. It implies that the increase of entanglement in a MPS wave function built on an effective 1D lattice, whose site is converted from the $N$ sites in the rung of an infinity-by-$N$ lattice, will slow down with $N$ and possibly will be saturated for larger $N$. 
The mechanism of this saturation of $P$ with $N$ is accounted for by the saturating entanglement entropy of an effective site, shown in Fig.\ref{fig:entanglement_coupled}(b). It is now clear that treating an infinity-by-$N$ lattice as if it were 1D does bypass the area law of entanglement entropy for the strongly correlated 2D quantum system, provided that larger $N$ can be reached. Fig.\ref{fig:entanglement_coupled}(a) shows that the computed entanglement entropy has a linear dependence on $P^{-1/3}$. For the ladders of $N=2, 4$ and $6$, it does converge suddenly\cite{Eisert2010} when $P$ reaches a threshold, forming the plateaus shown in the inset. Recall that the definitely gapless decoupled ladder shows a continuous convergence of the entanglement entropy with respect to the MPS rank in Fig.\ref{fig:entanglement_decoupled}(a). Now that the ladders of $N=8, 10$ and $12$ show no plateau either, it is necessary to check whether the sudden convergence of a gapped ladder has simply not been reached yet or whether the ladder is gapless. These two possibilities shall be explored with more physical quantities. At the moment, however, an immediate assertion can be made that, starting from $N=8$, the lattice is outside the applicable regime of NLSM's prediction that the ladder has a gap which decays exponentially with increasing width. Otherwise, the entanglement entropy would be linear for all $N$'s. We now show that the ladder is still gapped for $N=8$ and that it is ordered, hence gapless, for $N\ge 10$. We study $C_r$ versus $r$, where $C_r$ is the spin-spin correlation at separation $r$ in the LD. Hereafter, we discuss the absolute value of the correlations, even though they have alternating signs due to the antiferromagnetism. \begin{figure} \begin{center} $\begin{array}{cc} \mbox{(a)}&\\ &\includegraphics[width=18pc]{correlation2}\\ \mbox{(b)}&\\ &\includegraphics[width=18pc]{correlation4}\\ \mbox{(c)}&\\ &\includegraphics[width=18pc]{correlation6} \end{array}$ \caption{\label{fig:correlation} $LnC_r$ versus $r$. The straight tilted lines on the logarithmic scale indicate the exponential decay of the correlations with respect to separation. They approach the fixed line from the bottom at $P=4$ to the top at $P=28$ (in steps of $4$ in $P$) for $N=2$ in (a), and from the bottom at $P=40$ to the top at $P=200$ (in steps of $40$) for $N=4$ in (b). However, the beginning lines are flat at the top in (c) for $N=6$, starting from the top at $P=190$ to the last flat one in the middle zone at $P=250$ (in steps of $20$). The curve suddenly jumps down to the tilted line at the bottom at $P=270$ and approaches the fixed tilted line at $P=560$ (in steps of $20$).} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{cc} \mbox{(a)}&\\ & \includegraphics[width=18pc]{N8-correlation-decay0}\\ \mbox{(b)}&\\ & \includegraphics[width=18pc]{N8-correlation-decay}\\ \mbox{(c)}&\\ & \includegraphics[width=18pc]{N8-correlation-decay1}\\ \end{array}$ \caption{\label{fig:N8C} $C_r$ and $\tau_r\equiv Ln\left(LnC_r-LnC_{r+1}\right)$ for $N=8$. (a) $\tau_r$ versus $P^{-3/4}$ at separations $r=10, 20, 30, 40$ and $50$ from top to bottom. (b) $\tau_r$ versus $r$ at various $P$'s. They asymptotically approach the dashed curve obtained by the extrapolation in (a). 
Tracing along the asymptotic curve in (b), starting from $LnC_1=Ln\frac{\bar{\epsilon}_0}{6}$, yields the asymptotic curve of $LnC_r$ versus $r$ in (c).} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{cc} \mbox{(a)}&\\ & \includegraphics[width=18pc]{N10-correlation-decay0}\\ \mbox{(b)}&\\ & \includegraphics[width=18pc]{N10-correlation-spacing}\\ \mbox{(c)}&\\ & \includegraphics[width=18pc]{N10-correlation-decay}\\ \mbox{(d)}&\\ & \includegraphics[width=18pc]{correlation10-1} \end{array}$ \caption{\label{fig:N10C} $C_r$ and $\tau_r\equiv Ln\left(LnC_r-LnC_{r+1}\right)$ for $N=10$. (a) $\tau_r$ versus $P^{-3/4}$ at $r=10, 20, 30, 40$ and $50$ from top to bottom. (b) $\left(\tau_r-\tau_{r+10}\right)/\left(\tau_{10}-\tau_{20}\right)$ versus $P^{-3/4}$ at $r=20, 30$ and $40$. (c) $\tau_r$ versus $r$ at various $P$'s. They asymptotically approach the dashed curve obtained by the extrapolation in (a). (d) Tracing along the asymptotic curve in (c) yields the asymptotic curve of $LnC_r$ versus $r$.} \end{center} \end{figure} Fig.\ref{fig:correlation} is shown on a semi-logarithmic scale for $N=2, 4$ and $6$. The straight tilted lines indicate exponential decay with respect to the spin-spin separation. Comparing the spin-spin separations needed to reach the same value of the correlation gives the ratio of the correlation lengths for these three lattices; it is $1:4:9$. The behavior for $N=6$ is worth noting. The curve stays flat when $P\le 250$, then jumps down to the bottom at $P=270$, and finally converges to the fixed line. This is a clear indication of the competition between order and disorder. For $N\ge 8$, we did not obtain a converged plot of $C_r$ versus $r$ due to the larger entanglement entropy. For these cases, we study a new quantity, $\tau_r\equiv Ln\left(LnC_r-LnC_{r+1}\right)$, which measures the rate at which $LnC_r$ varies. If this rate is a negative constant, the correlation decays exponentially with $r$. If the rate instead keeps decreasing with $r$, the correlation approaches a constant at infinite separation; hence, the lattice is ordered. Fig.\ref{fig:N8C}(b) and Fig.\ref{fig:N10C}(c) both show that this quantity is linear in $r$ at various MPS ranks $P$, as explained in Sec.\ref{sec:correlation}. But their asymptotic ($P\rightarrow \infty$) behaviors are different. For $N=8$, $\tau_r$ asymptotically becomes a negative constant for large $r$, shown as the dashed curve in Fig.\ref{fig:N8C}(b). This negative constant is obtained when the lines of $\tau_r$ versus $P^{-3/4}$ for various large $r$ converge to the same value as $P\rightarrow \infty$ in (a). Starting from $C_1=\bar{\epsilon}_0/6$ and then tracing along the asymptotic curve in (b), we obtain the dependence of $LnC_r$ on $r$ in (c). It is seen that $C_r$ for $N=8$ decays exponentially with $r$. Nevertheless, for $N=10$, Fig.\ref{fig:N10C}(a) shows that the lines of $\tau_r$ versus $P^{-3/4}$ do not converge to the same value as $P\rightarrow \infty$. Panel (b) further shows that $\tau_{r}-\tau_{r+10}$ are equal for $r=10, 20, 30$ and $40$, implying that the lines in (a) are equally spaced. Thus, the dashed asymptotic curve in (c) has a constant negative slope for large $r$. Tracing along the asymptotic curve, we obtain the dependence of $LnC_r$ on $r$ in (d). $LnC_r$, and hence $C_r$, becomes a nonzero constant at infinite spin-spin separation. The ladder of $N\ge 10$ is ordered. Since no external pinning magnetic field\cite{White2007} is applied, this implies that the spin rotational symmetry is spontaneously broken. 
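As a self-contained illustration of how $\tau_r$ separates these scenarios (cf. Sec.\ref{sec:correlation}), the following sketch applies equation (\ref{eq:tau}) to two synthetic correlation functions, an exponential decay and a decay towards a nonzero constant; the data are artificial and only meant to show the behavior of the diagnostic, not to reproduce our simulation results.
\begin{verbatim}
import numpy as np

def tau(C):
    """tau_r = Ln(LnC_r - LnC_{r+1}), eq. (tau), for a positive, decreasing C_r."""
    lnC = np.log(C)
    return np.log(lnC[:-1] - lnC[1:])

r = np.arange(1, 61)
C_gapped  = np.exp(-r / 7.0)                  # pure exponential decay (disordered)
C_ordered = 0.09 + 0.5 * np.exp(-r / 7.0)     # saturates at a nonzero constant (ordered)

# slope of tau_r versus r: ~0 for the gapped case, negative for the ordered case
print(np.polyfit(r[:-1], tau(C_gapped), 1)[0])
print(np.polyfit(r[:-1], tau(C_ordered), 1)[0])
\end{verbatim}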
Our finding that lattices of $N\le 6$ are not ordered is fully consistent with the previous report\cite{Greven1996} that a gap exists for lattices of $300 \times N$, $N\le6$. The gap leads to the fast, exponential-like decay of the spin-spin correlations reported for those lattices. Nevertheless, for the first time we show with strong numerical evidence that the spin rotational symmetry spontaneously breaks for a spin-$\frac{1}{2}$ lattice of $N\ge 10$. Since a spontaneously ordered GS is regarded as a 2D characteristic by the existing theories, such as SWT, NLSM and the Mermin-Wagner theory\cite{Mermin1966,Coleman1973}, the spontaneous symmetry breaking defines a quantum dimensional transition from 1D (including quasi-1D) to 2D at a finite ladder width $N$. \subsection{\label{subsec:space-reduction} Effects of space reduction in matrix product state} \begin{figure} \begin{center} $\begin{array}{cc} \mbox{(a)}&\\ & \includegraphics[width=18pc]{compare-10-p}\\ \mbox{(b)}&\\ & \includegraphics[width=18pc]{compare-10-inversep}\\ \mbox{(c)}&\\ & \includegraphics[width=18pc]{new-extrapolation-10} \end{array}$ \caption{\label{fig:energy1}Effect of space reduction in MPS for $N=10$. (a) GS energy per site versus MPS rank $P$. Open circles denote the solution before reduction. The result at $P_1=100$ is used to reduce the space rank to $128$, $256$, $384$ and $512$, yielding new solutions shown as dot-dashed, dotted, dashed and solid curves. (b) GS energy versus $1/P$. Tangents of the convergence yield the extrapolated energies for the various space ranks, $128$, $256$, $384$, $512$ and $1024$ (unreduced) from top to bottom. (c) Extrapolation of the energy in the unreduced space using those obtained in the reduced spaces.} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{c} \includegraphics[width=18.pc]{compare-14-inversep}\\ \end{array}$ \caption{\label{fig:energy2}Use of space reduction in MPS for $N=14$. Tangents of the convergence of the GS energy per site versus $1/P$ yield extrapolated energies for the various space ranks, $512$, $1024$, $1536$ and $2048$ from top to bottom. The inset extrapolates the GS energy per site in the unreduced space using the extrapolations obtained in the reduced spaces.} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{cc} \mbox{(a)}&\\ & \includegraphics[width=18.pc]{space_error}\\ \mbox{(b)}&\\ & \includegraphics[width=18.pc]{space_error1}\\ \end{array}$ \caption{\label{fig:spaceerror} Relative error versus (a) $M/2^N$ (reduction ratio) and (b) $2^N/M$ (inverse ratio). In (a), the dotted line gives a reference of zero error, while the dashed line intercepts each curve to give the reduction ratio at a certain accuracy. (b) Energies obtained in reduced spaces for lattices of larger $N$ approach those in the unreduced space more linearly. Fig.\ref{fig:energy1}(c) and Fig.\ref{fig:energy2} show such examples for $N=10$ and $14$, respectively.} \end{center} \end{figure} \begin{figure} \begin{center} $\begin{array}{ccc} &\includegraphics[width=18.pc]{space_fixpoint}& \\ \end{array}$ \caption{\label{fig:fixedpoint} $M_N/M_{N-2}$ (ratio of the numbers of basis vectors kept for $N$ and $N-2$, respectively) versus $1/N$. The linear fit overlaps with the dashed guiding line towards $1$ when $N\rightarrow \infty$.} \end{center} \end{figure} The effect of the space reduction in MPS is shown with the example of $N=10$ in Fig.\ref{fig:energy1}. In (a), the simulated data in the original space of rank $2^{10}$ are shown as open circles. 
At $P_1=100$, the solution is used to reduce the space rank to $128$, $256$, $384$ and $512$, yielding the solutions shown as dot-dashed, dotted, dashed and solid curves, respectively. Except for the reduced rank $128$, the simulations for the other reductions reproduce the solution before reduction when $P\le P_1$. The closing gaps between the flattening curves are confirmed in (b), where the energies versus $1/P$ are plotted for space ranks $128$, $256$, $384$ and $512$ from top to bottom. The simulation in the original space is also continued beyond $P_1$, shown as the bottom curve in the same plot. All curves show convergence. Extrapolation by the tangents of those converging curves yields energies in spaces of both the various reduced sizes and the original size. Those in the reduced spaces are used to extrapolate the energy in the unreduced space, as shown in (c). There, the linear fit yields $-0.6704$, agreeing well with $-0.67022$ obtained by extrapolation using the data obtained before the space reduction in (b). Note that this scheme, which extrapolates the result in the original space from the data obtained in reduced spaces, is much more computationally efficient and thus allows simulations at larger $P$ values. We run simulations for $N=14$ in various reduced spaces of ranks $512$, $1024$, $1536$ and $2048$ up to $P=500$, shown from top to bottom in Fig.\ref{fig:energy2}. The lowest energy without extrapolation is $-0.66676$ at $P=500$ in the reduced space of size $2048$, lower than the previously reported value of $-0.66636$ at $P=350$, which is the largest $P$ value manageable in the unreduced spaces. Meanwhile, the inset extrapolates to $-0.66998$, a difference of $5 \times 10^{-5}$ from $-0.66993$, which was obtained by the interpolation in Fig.\ref{fig:energy}(c). Fig.\ref{fig:spaceerror}(a) shows $\epsilon$ versus $M/2^N$ for various $M$'s and $N$'s, where $2^N$ is the original space rank and $\epsilon$ is the relative error between the energies obtained before and after the reduction. It is seen that only $1/8$ of the original space size $2^{12}$ is needed for $N=12$ to achieve a relative error of $4.1\times 10^{-3}$. In comparison, the same accuracy for $N=4$ is obtained with $15$ out of $2^4$ basis vectors. For $N=2$, no reduction achieves good accuracy. Fig.\ref{fig:spaceerror}(b) shows the dependence of the relative error on $2^N/M$. Larger lattices ($N\ge 8$) show a linear dependence, which reconfirms the reliability of extrapolating results using simulations in reduced spaces. Fig.\ref{fig:energy1}(b) and (c) illustrate such an example for a lattice of $N=10$, and Fig.\ref{fig:energy2} shows another example for $N=14$. We plot in Fig.\ref{fig:fixedpoint} $M_N/M_{N-2}$ (the ratio of the numbers of basis vectors kept to achieve the same accuracy for $N$ and $N-2$, respectively) versus $1/N$. It shows that this ratio tends to approach $1$ when $N\rightarrow \infty$. As discussed in Sec.\ref{subsec:mpsept}, a saturating number of significant diagonal density matrix elements of an effective site is responsible for the saturating entanglement entropy, and hence for the saturating MPS rank $P$, when $N$ increases. Fig.\ref{fig:entanglement}(b) and Fig.\ref{fig:fixedpoint} are indeed consistent. \section{\label{sec:Conclusion}Conclusion} In conclusion, treating the infinity-by-$N$ quantum lattice as an effective 1D lattice, converting the $N$ sites in the rung into an effective site, enables us to handle an unprecedented lattice size with $N$ up to $14$. 
We show that both the number of significant diagonal density matrix elements and the entanglement entropy of an effective site saturate with increasing $N$. The former is responsible for the latter. This bypasses the area law of entanglement entropy for the 2D quantum lattice. Our results for such a lattice with OBC in the rung are progressively more accurate than DMRG for larger $N$'s. For the target model with PBC in both the rung and the LD, NLSM's prediction that the lattice has a gap which decays exponentially with $N$ until $N\rightarrow\infty$ is shown to apply fully only to $N \le 6$ and partially to $N=8$, whose gap does not decay exponentially. By contrast, our data reveal a quantum dimensional transition from 1D (including quasi-1D) to 2D that takes place at a critical width $N=10$, with emerging\cite{Anderson1972} order parameters. Finally, it is worth comparing our observation of the quantum dimensional transition with the assertion of the Mermin-Wagner theory\cite{Mermin1966,Coleman1973}. It states that the Heisenberg model cannot have spontaneous ordering (spin rotational symmetry breaking) at any finite temperature in either 1D or 2D. For such a model, spontaneous symmetry breaking of the GS is a different matter. Nevertheless, despite the possible failure\cite{Vojta2005} of the quantum-classical mapping, this mapping is used to argue within the framework of the Mermin-Wagner theory that the Heisenberg model supports spontaneous ordering in 2D but excludes magnetic order in pure 1D. When an infinity-by-$N$ square lattice is converted into an effective 1D chain, we show that this effective chain may or may not support spontaneous ordering, depending on $N$. This is consistent with previous findings of spontaneous ordering for a 1D chain that is not pure, in the sense that it includes unequal spins\cite{Tian1997,Kolezhuk1997}. \section{\label{sec:outlook}Outlook} The saturating entanglement guarantees that the MPS rank, which would otherwise increase exponentially with $N$, saturates as well, relieving the major computational burden related to the MPS size. It is instructive to enumerate the other factors that cause an exponential growth of the computational burden with respect to $N$ in this method. The first such factor is the linking complexity in the MPO, which in this work is reduced to a linear relationship with $N$ by the entanglement perturbation of the MPO. The second, and also the last, such factor is the exponentially increasing number of local quantum states of the effective site. It is $2^N$ for spin-$\frac{1}{2}$. The limited number of significant diagonal density matrix elements of an effective site enables an efficient reduction of the space in the MPS and hence eliminates the last exponential factor in this method. It is possible that a 2D infinity-by-infinity quantum lattice physically behaves like a 1D lattice that has a limited number of significant local states on a slice and is linked with limited entanglement between neighboring slices, when viewed from either of its two dimensions. The method used in this work is a promising numerical tool for studying 2D strong correlation in this way. Meanwhile, the emerging local magnetization in the infinity-by-$N$ lattices with $N\ge 10$ shows a finite-size effect different from that of an $N$-by-$N$ or $\alpha N$-by-$N$\cite{White2007} ($\alpha$ is a small integer) lattice. Since no pinning magnetic field $B$ is needed, extrapolating the thermodynamic limit value will be simpler. 
Staggered magnetization, one of the most fundamental physical quantities for quantum spins, is worth further investigation along this line. Note that the space reduction in MPS shown in this study can be readily extended to any MPS- or TNS-based method, such as PEPS, whenever it is built on a blocked quantum system. \begin{acknowledgments} This work was supported by NRF (National Honor Scientist Program 2010-0020414) and KISTI (KSC-2017-C3-0081). We thank D. C. Yang for proofreading and discussion. \end{acknowledgments}
\section{Introduction} \label{sec:intro} Speaker verification addresses the task of determining whether two utterances are spoken by the same person. A recent shift towards neural network based speaker verification systems resulted in significantly better performance compared to the more traditional i-vector based systems \cite{x_vectors, x_vector_wide}. Current speaker verification systems consist of a Deep Neural Network (DNN), initially trained to classify utterances of a large number of training speakers. The most popular architectures are Time Delay Neural Networks (TDNN)~\cite{tdnn} and ResNet~\cite{resnet} based systems. A powerful improvement is the incorporation of an angular margin penalty into the standard softmax classification loss~\cite{arcface}. The statistics pooling layer that maps the variable-length input to a fixed-length representation can be enhanced through a temporal attention mechanism~\cite{self_att, att_stat}. After training the network on the speaker identification task, fixed-length speaker characterizing embeddings can be constructed from the activations of the penultimate layer in the network. Subsequently, these embeddings can be used to score a speaker verification trial between new speakers. The most straightforward scoring method is the evaluation of the cosine distance between the enrollment and test embeddings of the trials. An alternative Bayesian scoring technique is Probabilistic Linear Discriminant Analysis (PLDA) \cite{plda}. Often, this is followed by a score normalization step such as adaptive s-norm \cite{s_norm_2, s_norm_3}. Finally, logistic regression based score calibration can be used to map the output scores to reliable log-likelihood-ratios~\cite{bosaris}. In this work, we increase the discriminative power of the neural network embedding extractor by introducing a large margin fine-tuning strategy. We show that using longer training segments allows the use of a larger margin penalty in the Additive Angular Margin (AAM)~\cite{arcface} loss, which in turn avoids the expected overfitting to the training speakers. This fine-tuning approach increases the inter-speaker distances between the more reliable speaker centers, while simultaneously ensuring compact speaker classes. We further enhance the speaker verification performance with a quality-aware calibration method. This logistic regression based calibration method is able to model various conditions of the trial utterances by including quality metrics. This results in more consistent speaker similarity scores across a wide range of conditions, and thus better performance given a single decision threshold in the evaluation metrics. The paper is organized as follows: Section 2 describes our baseline speaker verification system. Sections 3 and 4 explain and motivate our proposed large margin fine-tuning and quality-aware calibration strategies, respectively. Section 5 describes the experimental setup to analyze the proposed methods, while Section 6 explains and analyzes the results. Finally, Section 7 gives some concluding remarks. \section{Baseline system} \label{sec:baseline} Building further on our previously established work, we use the ECAPA-TDNN architecture~\cite{ecapa_tdnn} as our baseline speaker verification system. This TDNN based speaker embedding extractor improves the popular x-vector architecture~\cite{x_vectors} by incorporating a channel- and context-dependent attention system in the statistics pooling layer. 
It also introduces a 1-dimensional variant of Squeeze-Excitation (SE) blocks~\cite{se_block} to inject global information into the frame-level layers of the network. The integration of Res2-blocks~\cite{res2net} improves performance while simultaneously reducing the total parameter count. Finally, multi-layer feature aggregation and feature summation allow the network to efficiently exploit knowledge learned in the preceding layers. The complete architecture is depicted in Figure~\ref{fig:res}; more details can be found in the ECAPA-TDNN publication~\cite{ecapa_tdnn}. We scale up the network compared to~\cite{ecapa_tdnn} by using 2048 feature channels and add a fourth SE-Res2Block with a dilation factor of 5 for optimized verification performance. \begin{figure}[h] \begin{minipage}[h]{1.0\linewidth} \centering \centerline{\includegraphics[scale=0.28]{images/full_ecapa.png}} \end{minipage} \caption{ECAPA-TDNN baseline system architecture. $T$ indicates the number of input frames, $C$ the number of intermediate feature channels and $S$ the number of classification speakers. In the SE-Res2Block, \textit{k} and \textit{d} denote the kernel size and dilation factor, respectively. See~\cite{ecapa_tdnn} for more details.} \label{fig:res} \end{figure} \section{Large margin fine-tuning} \label{sec:fine-tuning} One of the most successful loss functions in fine-grained classification and verification problems is AAM-softmax. It extends the standard softmax loss by introducing an $L_2$-normalization step on the embeddings and by applying an angular margin penalty during the estimation of the log-likelihood of the target class. This forces the network to increase inter-speaker distances while ensuring intra-speaker compactness. The AAM-softmax loss is given by: \begin{equation} \label{aam_softmax} L = -\frac{1}{n} \sum^{n}_{i=1} \log \frac{e^{s(\cos(\theta_{y_{i}} +m ))}}{e^{s(\cos(\theta_{y_{i}} +m))} + \sum_{j=1, j \neq y_i}^N e^{s(\cos(\theta_{j}))}} \end{equation} with $n$ indicating the batch size, $N$ the number of training speakers and $\theta_{y_i}$ the angle between the current embedding $\textbf{x}_{i}$ and the AAM-softmax class prototype $\textbf{y}_{i}$ of speaker identity $y_i$. The margin penalty is indicated with $m$. A scaling factor $s$ is applied to increase the range of the output log-likelihoods. Higher values of $m$ result in more compact classes with larger inter-class distances, which should allow the network to better characterize differences between speakers. However, initial training of the network with a high margin penalty is difficult and often leads to poor results. Therefore, it is common to train the network with a lower and probably sub-optimal margin. We propose a large margin fine-tuning strategy which further refines a network that was trained to convergence with a low margin value of $m$. Several changes to the data preprocessing, data sampling and learning rate scheduling are proposed to stabilize and optimize the fine-tuning stage at high margin settings. \subsection{Fine-tuning configuration} \label{ssec:fine-tuning_config} During fine-tuning, we increase the duration of the training utterances. Most neural network based speaker verification systems are trained with short random temporal crops of 2 to 3 seconds to prevent overfitting. In this training configuration, high margin penalties are too challenging; longer training sequences alleviate this issue by offering more speaker-specific information to the system. 
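For reference, a minimal PyTorch-style sketch of the AAM-softmax loss in Eq.~(\ref{aam_softmax}) is given below. It is an illustrative re-implementation under our own naming conventions (e.g.\ the hypothetical \texttt{AAMSoftmaxLoss} class), not the exact training code of our system; during large margin fine-tuning only the margin attribute would be raised (e.g.\ from 0.2 to 0.5) while training continues.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAMSoftmaxLoss(nn.Module):
    """Additive Angular Margin softmax loss (ArcFace-style)."""
    def __init__(self, emb_dim, n_speakers, margin=0.2, scale=30.0):
        super().__init__()
        # One trainable prototype (class centre) per training speaker.
        self.weight = nn.Parameter(torch.randn(n_speakers, emb_dim))
        self.margin, self.scale = margin, scale

    def forward(self, embeddings, labels):
        # Cosine similarities between L2-normalised embeddings and prototypes.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin m to the target-speaker logit only.
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * logits, labels)
\end{verbatim}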
Moreover, training on longer crops can potentially correct the duration mismatch between training and test conditions~\cite{magneto}. An effective method to decrease GPU memory requirements and to prevent overfitting when training with longer utterances is to freeze the pre-pooling layers of the model~\cite{garcia_fine_tune}. However, we argue that this can prevent these layers from sufficiently adapting to the increased duration condition, especially when such layers share global statistics through the SE-blocks in the ECAPA-TDNN architecture. Thus, all weights in the network remain trainable during the fine-tuning stage. To further prevent overfitting, we switch to a Cyclical Learning Rate (CLR) schedule~\cite{clr} with a lower maximum learning rate and a shorter cycle length. We also enable the Hard Prototype Mining (HPM) sampling algorithm~\cite{sdsvc_paper} to create mini-batches with utterances from the speakers that confuse the model the most. This speaker confusion is modelled through a speaker similarity matrix constructed by calculating the cosine distance between all AAM speaker prototype pairs. The sampling algorithm constructs mini-batches by iterating randomly over all $N$ training speakers. Each iteration selects $S$ speakers, irrespective of their similarity; for each of these, $U$ random utterances are sampled from each of its $I$ most similar speakers, including the selected speaker itself. This implies that $S \times U \times I$ should be equal to the batch size $n$. When we have iterated over all training speakers, the similarity matrix is updated and the batch generation process is repeated. \section{Quality-Aware Score Calibration} \label{sec:quality-aware_score_calibration} Score calibration is a post-processing step in speaker verification systems to map similarity output scores to log-likelihood-ratios that can be converted to interpretable probabilities~\cite{bosaris}. Well-calibrated scores allow a theoretical estimation of the optimal evaluation metric decision threshold for a wide range of possible decision error costs and prior probabilities of target and non-target verification trials. It also allows for easy score fusion by producing a weighted average across the calibrated system scores~\cite{bosaris}. Research indicates that including quality measurements in the calibration stage can make the output scores more robust against score-shifting caused by variability in recording quality and duration conditions \cite{bosaris, quality_cal_2013, quality_cal_2016}. However, most of this work was established with i-vector based speaker verification systems. We argue that neural network based systems can benefit from quality measurements in the calibration step as well, especially in the case of varying duration conditions, since most of these systems are trained with fixed-length audio crops for computational efficiency. Calibration can be based on logistic regression, which learns a weight $w_{s}$ and a bias $b$ from a set of calibration trials to scale and shift the original output score and obtain a log-likelihood-ratio $l(s)$. However, this corresponds to a monotonic mapping from the score $s$ to $l(s)$. As a result, single-system calibration will not improve speaker verification performance metrics based on a fixed decision threshold, such as the EER and MinDCF. 
The proposed quality-aware calibration mapping from input score $s$ to log-likelihood-ratio $l(s)$ is represented by: \begin{equation} \label{standard_cal} l(s) = w_{s}s + \textbf{w}_{q}^{T} \textbf{q} + b \end{equation} By introducing additional quality measurements $\textbf{q}$ with learnable weights $\textbf{w}_{q}$, the calibration system becomes able to improve verification performance as the decision threshold implicitly becomes condition dependent. The next subsections will describe and motivate various quality measurements included in our analysis. \subsection{Duration-based quality measures} \label{ssec:duration} The most straightforward quality measurement is the duration of the utterance, which we will define as the number of input frames. The longer the audio input, the more confident the speaker verification system should be about its decision. However, the fraction of relevant speech frames could vary between utterances. Therefore, we also consider the amount of speech frames detected by the Voice Activity Detection (VAD) preprocessing module of SPRAAK~\cite{spraak} as a quality measure. Optionally, duration-based measurements can be clipped to a maximum length to reduce the impact on unexpectedly long recordings. \subsection{Embedding-based quality measures} \label{ssec:magnitude} The magnitude of an embedding generated by a speaker embedding extractor trained with a softmax-based loss function could contain quality information about the original utterance \cite{ring_loss}. Small magnitudes could potentially indicate lower quality embeddings. Additionally, small changes on the embeddings close to the origin can have a big impact on the angles with the speaker prototypes. \subsection{Imposter-based quality measures} \label{ssec:imposter} Prior to calibration, score normalization is often used to enhance speaker verification performance. The most common technique is s-normalization, which calculates a mean $\mu$ and standard deviation $\sigma$ of the distances between a test embedding and an imposter cohort. Adaptive s-normalization improves performance further by restricting the cohort to the most similar imposter speakers of the considered embedding. The expected mean imposter score $\mu$ can be used as a quality metric, as the test utterance noise characteristics will have an impact on this similarity score. Moreover, a high $\mu$ indicates the system is wrongly confusing the test speaker with some of the training speakers and is probably not modelling the unseen speaker properly. However, experiments indicate that the magnitudes of the test embeddings play a crucial role. The average inner product between the test embedding and the speaker embeddings of its corresponding adaptive s-norm imposter cohort outperforms the average cosine similarity as an imposter-based quality metric~\cite{voxsrc_2020_technical_report}. Further research is required to analyze this interaction between embedding magnitudes and embedding cosine similarity. \subsection{Symmetric Quality Measure Functions (QMFs)} \label{ssec:side} Since we want the role of the enrollment utterance and test utterance to be interchangeable during the verification trial, we enforce symmetric Quality Measure Functions (QMFs). The most straightforward way of combining the quality measurements of both utterances is by taking the arithmetic mean. However, this could result in the loss of valuable quality information as the output similarity score is potentially affected the most by the lowest-quality side of the trial. 
A simple solution would be to only consider the minimum quality measurement value along both sides of the trial. By also adding the maximum of the measurements as a separate feature we give the system the potential to model the quality difference between two utterances. \begin{table*} \centering \begin{tabular}{clcccccccc} \toprule \multicolumn{1}{c}{} & \multicolumn{1}{l}{\textbf{System Configuration}} & \multicolumn{2}{c}{\textbf{VoxCeleb1}} & \multicolumn{2}{c}{\textbf{VoxCeleb1-E}} & \multicolumn{2}{c}{\textbf{VoxCeleb1-H}} & \multicolumn{2}{c}{\textbf{VoxSRC-20 Val}} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \multicolumn{2}{c}{\textbf{}} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF}} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF}} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF}} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF}}\\ \midrule & ECAPA-TDNN (C=2048) & 0.86 & 0.0960 & 1.08 & 0.1223 & 2.01 & 0.2004 & 3.25 & 0.2646\\ & ECAPA-TDNN (fine-tuned) & 0.68 & 0.0753 & 0.91 & 0.1006 & 1.72 & 0.1695 & 2.89 & 0.2274\\ \midrule \midrule A.1 & + duration QMF & 0.64 & 0.0764 & 0.88 & 0.0970 & 1.65 & 0.1638 & 2.68 & 0.2226 \\ A.2 & + speech duration QMF & 0.63 & 0.0760 & 0.88 & 0.0970 & 1.64 & \textbf{0.1631} & 2.67 & 0.2218 \\ A.3 & + magnitude QMF & 0.66 & 0.0765 & 0.90 & 0.1009 & 1.67 & 0.1694 & 2.87 & 0.2268 \\ A.4 & + imposter mean QMF & 0.64 & \textbf{0.0700} & 0.89 & 0.1001 & 1.65 & 0.1724 & 2.81 & 0.2257 \\ \midrule A.5 & + \makecell{speech duration QMF \& \\ imposter mean QMF} & \textbf{0.56} & 0.0743 & \textbf{0.84} & \textbf{0.0969} & \textbf{1.57} & 0.1644 & \textbf{2.59} & \textbf{0.2185}\\ \bottomrule \end{tabular} \caption{Performance impact of large margin fine-tuning and quality-aware score calibration on the ECAPA-TDNN system.} \label{tab:exp_results} \end{table*} \section{Experimental setup} \label{sec:exp_setup} We train the ECAPA-TDNN baseline model on the development set of the popular VoxCeleb2 dataset~\cite{vox2}. This training set contains over one million utterances across 5994 different speakers. We also create 6 additional augmented copies using the MUSAN~\cite{musan} corpus (babble, noise), the RIR~\cite{rirs} (reverb) dataset and the SoX (tempo up, tempo down) and FFmpeg (compression) libraries. The system is trained on random crops of 2~s to prevent overfitting. The input features are 80-dimensional MFCCs with a window size of 25~ms and a frame shift of 10~ms. To further improve robustness of the model, we apply SpecAugment \cite{specaugment} to the log mel-spectrograms which randomly masks 0 to 5 frames in the time-domain and 0 to 8 frequency bands. Subsequently, the MFCCs of the cropped segment are cepstral mean normalized. The initial margin penalty of the AAM-softmax layer is set to 0.2. We also apply a weight decay of 2e-5 on the weights in the network, except for the AAM-softmax layer, which uses a slightly higher value of 2e-4. The system is trained using the Adam optimizer \cite{adam} with a Cyclical Learning Rate (CLR) using the \textit{triangular2} policy as described in~\cite{clr}. The maximum and minimum learning rates are set at 1e-3 and 1e-8 respectively. We use a cycle length of 130k iterations with a batch size of 128. The model is trained for three full cycles. We use adaptive s-normalization for all experiments in this paper. 
The imposter cohort consists of the average of the length-normalized utterance-based embeddings of each training speaker. The imposter cohort size is set to 100. \subsection{Large margin fine-tuning setup} \label{ssec:verification} We apply our proposed large margin fine-tuning strategy on the converged baseline model. The margin of the AAM-softmax layer is increased to 0.5. SpecAugment is disabled and the length of the random crop is increased to 6~s, we noticed no further performance improvements by choosing a longer duration. The CLR cycle length is decreased to 60k, with the maximum learning rate lowered to \mbox{1e-5}. These shorter and less exploratory learning rate cycles should prevent the model from diverging too much from its initial starting position during fine-tuning. No layers in the network are frozen. Finally, the sampling strategy is changed to HPM as described in Section~\ref{ssec:fine-tuning_config} with parameter values $S = 16$, $I = 8$ and $U = 1$. \subsection{Quality-aware calibration} \label{ssec:exp_calibration} To train our calibration system we create a set of trials from the VoxCeleb2 training dataset. We want our quality-aware calibration system to be robust against varying levels of duration in the trials. Subsequently, we define three types of trials: \textit{short-short}, \textit{short-long} and \textit{long-long} with \textit{short} indicating an utterance between 2 and 6 seconds and \textit{long} ranging from 6~s to the maximum length utterance in the VoxCeleb2 dataset. We include 10k trials of each type in our calibration trial set, resulting in a total of 30k trials. The amount of target and non-target trials is balanced. \subsection{Evaluation protocol} \label{ssec:eval_prot} The proposed methods are evaluated on the public VoxCeleb1~\cite{vox1} test sets, which all have a similar duration distribution as the VoxCeleb2 training set. We also include system performance on the VoxSRC-20~\cite{voxsrc_2020_paper} validation set, which contains out-of-domain data. We report the EER and MinDCF metric using a $P_{target}$ value of $10^{-2}$ with $C_{FA}$ and $C_{Miss}$ set to 1. For the VoxSRC-20~\cite{voxsrc_2020_paper} test set results reported in Table~\ref{tab:ablation_voxsrc}, the MinDCF is evaluated as defined in the challenge with a $P_{target}$ value of $0.05$. Only the discriminatory ability of the system is evaluated by these metrics. We do not evaluate the actual calibration quality by e.g.\ ActDCF or $C_{llr}$~\cite{bosaris}. \section{Results} \label{sec:results} In Table~\ref{tab:exp_results}, we show the performance impact of the large margin fine-tuning and quality-aware calibration on all VoxCeleb1 data sets. The ECAPA-TDNN baseline achieves strong results on all sets. Large-margin fine-tuning is beneficial, and results in an average relative improvement of 15\% in EER and 17\% in MinDCF. Experiments \textit{A.1-5} show that quality-aware calibration with the minimum and maximum QMF further improves these results. Evaluations \textit{A.1} and \textit{A.2} show that calibration with a duration-based QMF is very effective. The speech duration metric only delivers marginal gains compared to the basic total duration metric, indicating that the speech fraction of each utterance in VoxCeleb is rather consistent. Adding speech duration information during score calibration leads to an average improvement of 6\% in EER and 2\% in MinDCF across all datasets. The embedding magnitude is shown to be the weakest quality metric in experiment \textit{A.3}. 
However, it still improves performance which strengthens our belief to use the inner product as a similarity metric during the calculation of the imposter mean QMF in experiment \textit{A.4}. As shown in experiment \textit{A.5}, the speech duration and the imposter mean QMFs are clearly complementary and improve the performance by 11\% in EER and 3\% in MinDCF on average on all datasets compared to the fine-tuned baseline. We found no additional benefits by fusing more quality metrics or using QMFs with a logarithmic function. \begin{table}[h] \centering \begin{tabular}{clcc} \toprule & \textbf{Systems} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF}} \\ \midrule & No Fine-Tuning & 3.25 & 0.2646 \\ & Large Margin Fine-Tuning & \textbf{2.89} & \textbf{0.2274} \\ \midrule \midrule B.1 & No Margin Increase & 3.36 & 0.2672 \\ B.2 & No Duration Increase & 3.58 & 0.2884 \\ B.3 & No CLR Decrease & 4.87 & 0.3689 \\ B.4 & No Hard Sampling & 2.95 & 0.2345 \\ B.5 & Frozen Pre-Pooling Layers & 3.12 & 0.2399 \\ \bottomrule \end{tabular} \caption{Ablation study of large margin fine-tuning on the \mbox{VoxSRC-20} validation set.} \label{tab:ablation_ft} \end{table} A detailed ablation study on our proposed large margin fine-tuning protocol is given in Table~\ref{tab:ablation_ft}. We observe significant relative performance improvements over the baseline system of 11\% in EER and 14\% in MinDCF on the VoxSRC-20 validation set. Experiments \textit{B.1-3} indicate the importance of combining the large margin setting with the selection of longer training utterances and reduction of the maximum learning rate in the CLR. In~\textit{B.3} the training becomes unstable and the results significantly degrade compared to the baseline. Hard sampling during fine-tuning is beneficial as indicated in experiment \textit{B.4}. Experiment \textit{B.5} shows that by freezing the pre-pooling layers during fine-tuning, potential performance gains are lost. Extra experiments on the imposter mean QMF indicate that it is crucial to combine a top imposter cohort selection with the inner product speaker similarity metric. Imposter mean QMF variants that use cosine similarity or score against all training speakers do not deliver performance gains. For a concise ablation study of the proposed mean imposter QMF we refer to our technical report submitted to the VoxSRC-20 workshop~\cite{voxsrc_2020_technical_report}. \begin{table}[h] \centering \begin{tabular}{lcc} \toprule \textbf{Systems} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF}} \\ \midrule Fusion & 4.20 & 0.2052 \\ Fusion + Fine-Tuning & 4.06 & 0.1890 \\ Fusion + Fine-Tuning + QMFs & \textbf{3.73} & \textbf{0.1772} \\ \bottomrule \end{tabular} \caption{Evaluation of the proposed fine-tuning and quality-aware calibration (QMFs) on the VoxSRC-20 test set.} \label{tab:ablation_voxsrc} \end{table} For a final evaluation of our proposed methods, we provide a result overview in Table~\ref{tab:ablation_voxsrc} of our winning submission in the VoxSRC-20 closed speaker verification track. The fusion system is a score-level fusion between 6 ECAPA-TDNN and 4 Resnet34 variants, see~\cite{voxsrc_2020_technical_report} for more details. Large-margin fine-tuning of all models results in a relative improvement of 3\% in EER and 8\% in MinDCF. Using quality-aware score calibration of the fused score with the speech duration and imposter mean QMFs resulted in an additional 8\% EER and 6\% MinDCF relative improvement. 
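To make the calibration back-end of Section~\ref{sec:quality-aware_score_calibration} concrete, the sketch below shows one possible implementation of the quality-aware mapping of Eq.~(\ref{standard_cal}) with the symmetric min/max QMFs, using scikit-learn. It is an illustration under our own naming conventions (e.g.\ the hypothetical \texttt{trial\_features} helper), not the exact code behind our submissions; the quality vectors would contain, for instance, the speech duration and imposter mean measures discussed above.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def trial_features(score, q_enrol, q_test):
    # Symmetric QMFs: element-wise min and max of the quality vectors of
    # both sides of the trial, prepended by the raw similarity score.
    q_enrol, q_test = np.asarray(q_enrol, float), np.asarray(q_test, float)
    return np.concatenate(([score], np.minimum(q_enrol, q_test),
                           np.maximum(q_enrol, q_test)))

def fit_calibration(scores, q_enrols, q_tests, labels):
    # labels: 1 for target trials, 0 for non-target trials (balanced set).
    X = np.stack([trial_features(s, qe, qt)
                  for s, qe, qt in zip(scores, q_enrols, q_tests)])
    return LogisticRegression(C=1e4).fit(X, labels)

def calibrated_llr(model, score, q_enrol, q_test):
    # decision_function returns w_s*s + w_q^T q + b; with a balanced
    # calibration trial set this approximates the log-likelihood-ratio.
    x = trial_features(score, q_enrol, q_test)[None, :]
    return float(model.decision_function(x)[0])
\end{verbatim}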
\section{Conclusion} \label{sec:conclusion} Large margin DNN fine-tuning can result in the generation of more speaker discriminative embeddings, provided that longer training utterances and a reduced learning rate are used. In addition, quality-aware score calibration that uses quality metrics of a verification trial is able to produce more robust log-likelihood-ratios. Applying both techniques on an ECAPA-TDNN model resulted in state-of-the-art performance on all VoxCeleb1 test sets. Our submission with system fusion in the VoxSRC-20 competition delivered a relative improvement of 11\% in EER and 14\% in MinDCF by applying both enhancements. This approach resulted in a first-place finish on both supervised speaker verification tracks. \vfill\pagebreak \bibliographystyle{IEEEbib}
1,116,691,497,486
arxiv
\section{Introduction} A multiplet of the Standard Model gauge group $SU(2)_L\times U(1)_Y$ \cite{hisano03} would be the simplest candidate for dark matter, as it requires no additional `ad hoc' interactions or couplings. Its electroweak interaction leads to the right thermal relic density if its mass is in the multi-TeV range, and its stability is guaranteed automatically for a certain higher-dimensional multiplet \cite{minimal_dm}. Another important feature of such an ``Electro-Weak Dark Matter'' (EWDM) is the presence of non-perturbative effects in its annihilation into gauge bosons, which significantly modify the tree-level results in the determination of its thermal relic density and its indirect detection rate \cite{hisano03,cirelli07}. In this paper, we study such a non-perturbative correction for a Majorana or Dirac fermion EWDM over a wide range of its mass. That is, we will consider a ``non-minimal'' EWDM, allowing an unspecified non-standard cosmology for the generation of the right relic density and a certain discrete symmetry for its stability. In the present Universe, the dark matter is highly non-relativistic and thus the wave functions of the effective non-relativistic two-body EWDM states can get strongly modified by the non-abelian electroweak (EW) potential in the process of their pair annihilation \cite{hisano03}. This is a generalization of the ``Sommerfeld effect'' \cite{sommerfeld}, well known for a single-component dark matter candidate carrying an abelian gauge charge. Recall that the non-perturbative correction enhances (reduces) the annihilation cross section in the case of an attractive (repulsive) Coulomb potential \cite{cirelli07}. It is interesting to note that an EWDM exhibits not only the usual Sommerfeld enhancement and resonance peaks but also a suppressed cross section for certain choices of the parameters. This is a realization of the ``Ramsauer-Townsend (RT) effect'' \cite{ramsauer} observed in low-energy electron scattering off gas atoms. We will show that such effects are caused mainly by the electromagnetic (EM) interaction in the two-body states of the charged components of an electroweak multiplet. An important feature is that the velocity distribution of dark matter particles has to be included in the non-perturbative calculation of the EWDM annihilation rate, as the ``Sommerfeld-Ramsauer-Townsend'' (SRT) resonance effect occurs when a certain condition is met among the model parameters, including the kinetic energy, and thus the speed, of the annihilating states. The non-perturbative correction to the EWDM annihilation cross section is important for setting a limit on the EWDM, the most stringent constraints coming from the PAMELA measurement of the cosmic anti-proton flux \cite{pamela2008,pamela2010} and from the recent FERMI-LAT measurement of the diffuse gamma-ray emission in dwarf spheroidal galaxies \cite{fermi11}. In the present paper we will focus on the PAMELA antiproton limit, deriving a constraint which is slightly stronger than the FERMI-LAT limit, and comparable to the result of Ref.~\cite{belanger12} with the `MED' propagation astrophysical parameters and a fixed secondary background. In Section \ref{General-EWDM}, we will give a general description of various electroweak multiplet dark matter candidates and present formulae to calculate the non-perturbative effect, summarizing the results in Refs.~\cite{hisano03,cirelli07}. Depending on whether the EWDM carries a hypercharge or not, it can be a Dirac or a Majorana fermion. 
In the first case, the neutral Dirac components have to split into two Majorana fermions with a mass gap sufficient to suppress the (inelastic) nucleonic scattering through the exchange of a $Z$ boson. In Section \ref{sec:srt}, we discuss various features of the non-perturbative correction showing the SRT effect for the simplest example of a triplet EWDM with no hypercharge. Section \ref{sec:direct} discusses values of the mass splitting which are large enough to avoid the current direct detection bound but still detectable in future experiments, depending on the dark matter mass. The antiproton yield from the EWDM annihilation to gauge bosons is analyzed in Section \ref{sec:pbar} to place a limit from the current cosmic antiproton flux measurements. We then calculate non-perturbative annihilation cross sections for various EWDM candidates to put a mass limit from the PAMELA data on the antiproton flux in Section 6. We conclude in Section \ref{sec:conclusions}. \section{General EWDM and non-perturbative effect} \label{General-EWDM} The dark matter particle can be the neutral component of an $SU(2)_L \times U(1)_Y$ fermion multiplet. As a specific example, we will consider a vector-like (Dirac) doublet with $Y=\pm 1/2$ (Higgsino-like), a (Majorana) triplet with $Y=0$ (wino-like) and a vector-like (Dirac) triplet with $Y=\pm1$. Note that a certain symmetry like $Z_2$ has to be imposed for the stability of these EWDM candidates. Furthermore, the dark matter component of a Dirac multiplet is charged under $U(1)_Y$ and thus its scattering cross section with nuclei through $Z$ boson exchange is far above the current limit from direct detection experiments. This constraint is however invalidated if there is a mass splitting in the Dirac dark matter fermion and thus the heavier Majorana fermion component cannot be excited by the nucleonic scattering of the lighter one (assumed to be the dark matter). A detailed analysis will be presented later to show that a mass splitting of order 0.2 MeV would be detectable while still allowed by the current data. Such a mass splitting can come from a higher dimensional operator between the EWDM and the Higgs doublet. For instance, the Higgsino-like dark matter multiplet, denoted by $\chi_u = (\chi^+, \chi^0)$ and $\chi_d= (\chi^0_d, \chi^-_d)$ in the chiral representation, allows the dimension-four operators: \begin{equation} {1\over \Lambda} (H_u \chi_d)^2, \quad {1\over \Lambda} (H_d \chi_u)^2,\quad {1\over \Lambda} (H_u \chi_d) (H_d \chi_u), \end{equation} where $H_{u}=(H^+, H^0)$ and $H_d=\epsilon H_u^*$ represent the Higgs doublets coupling to the up and down type quarks, respectively. Similarly, for the triplet EWDM multiplet with $Y=\pm1$ consisting of two chiral fermions, $\chi_u=(\chi_u^{++}, \chi_u^+, \chi_u^0)$ and $\chi_d=(\chi_d^{0}, \chi_d^{-}, \chi_d^{--})$, the mass splitting between the Dirac pair $\chi_{u,d}^0$ can arise from: \begin{equation} {1\over \Lambda^3} (H_u H_u \chi_d)^2, \quad {1\over \Lambda^3} (H_d H_d \chi_u)^2,\quad {1\over \Lambda^3} (H_u H_u \chi_d) (H_d H_d \chi_u) . \end{equation} On the other hand, the wino-like EWDM multiplet, a triplet with $Y=0$ denoted by $\chi=(\chi^+, \chi^0, \chi^-)$ has only one Majorana neutral component. Note that a mass splitting between the charged and neutral components of order 0.1 GeV arises from the electroweak one-loop correction \cite{minimal_dm}. 
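As a rough, order-of-magnitude illustration (our own estimate, with couplings and $\tan\beta$-dependent factors of order unity omitted), the doublet operators above generate, after electroweak symmetry breaking, a Majorana mass for the neutral components and hence a splitting
\begin{equation}
\delta m_N \sim \frac{v^2}{\Lambda} \simeq 0.2\ {\rm MeV} \times \left(\frac{3\times 10^{8}\ {\rm GeV}}{\Lambda}\right),
\end{equation}
with $v\simeq 246$ GeV, so that the sub-MeV splittings considered below correspond to a rather high cutoff scale $\Lambda$.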
In the following sections, we will assume arbitrarily small mass gaps among the multiplet components which make a big impact on the non-perturbative annihilation rate of the EWDM particle. \medskip In the non-relativistic pair annihilation of the EWDM, the non-perturbative effect due to the exchange of the electroweak gauge bosons mixes together the two-body states of the multiplet components. In the case of the Higgsino-like EWDM, there are three states formed by the charged (Dirac) component and two neutral (Majorana) components: $\chi_u^+ \chi_d^-$, $\chi_1^0 \chi_1^0$ and $\chi_0^0 \chi_0^0$, where $\chi_0^0$ denotes the dark matter component. For the wino-like EWDM, we have two two-body states: $\chi^+ \chi^-$ and $\chi^0\chi^0$. The triplet EWDM with $Y=\pm1$ has four two-body states: $\chi_u^{++}\chi_d^{--}$, $\chi_u^+ \chi_d^-$, $\chi_1^0 \chi_1^0$ and $\chi_0^0 \chi_0^0$. The Green's functions $g_{ij}$ corresponding to the processes summarized above, where the indices $i$ and $j$ run over the two-body states of each EWDM candidate, verify the Schr\"odinger equation \cite{hisano03}: \begin{equation} \label{schroedinger} -{1\over m_{DM}} {\partial^2 g_{ij} (r) \over \partial r^2} + V_{ik}(r) g_{kj} (r) = K g_{ij}(r), \end{equation} with $m_{DM}$ the mass of the dark matter particle, and the boundary condition $g_{ij}(0)=\delta_{ij}$ and $\partial g_{ij}(\infty) /\partial r = i \sqrt{m_{DM} (K-V_{ii}(\infty))}g_{ij}(\infty)$. Here $K=m_{DM} \beta^2$ is the total kinetic energy of the two initial dark matter particles in the annihilation process, where $\beta$ is the velocity of the DM particle in the frame of the galactic halo. Then, the dark matter pair annihilation cross section is given by: \begin{equation} \label{sigmaDM} \sigma v ( \chi^0_0 \chi^0_0 \to A B) = 2 d_{0i} d^*_{0j} \Gamma_{ij}^{AB}, \end{equation} where $d_{0j} = g_{0j}(\infty)$ and $v=2 \beta$ is the relative velocity between the two incident DM particles. Here $A,B$ run over the gauge bosons $(W^+, W^-, Z, \gamma)$, that is, the gauge boson final states are $AB=(W^+ W^-, ZZ, \gamma Z, \gamma\gamma)$. Taking the normalization of the covariant derivative $D_\mu = \partial_\mu + i g_2 A_\mu T^A$ for each gauge boson $A$, the potential matrix in Eq.~(\ref{schroedinger}) and the tree-level annihilation matrix $\Gamma_{ij}$ are given by \cite{cirelli07}: \begin{eqnarray} && V_{ij}(r) = 2 \,\delta m_{i0}\, \delta_{ij} - \alpha_2 N_i N_j \sum_{A} \left[ T^A_{ij}\right]^2 {e^{-m_A r} \over r}, \end{eqnarray} where $\delta m_{i0} = m_{\chi_i} - m_{\chi_0}$, and: \begin{eqnarray} && \Gamma_{ij}^{AB} = {\pi \alpha_2^2 \over 2(1+\delta_{AB}) m^2_{DM}} f(x_A,x_B) N_i N_j \left\{ T^A, T^B \right\}_{ii} \left\{ T^A, T^B \right\}_{jj}\,, \\ && \mbox{where}\;\; f(x_A,x_B) \equiv { \left(1-{x_A+x_B \over 2}\right) \over \left( 1 - {x_A+x_B \over 4} \right)^2 } \sqrt{ 1 - {x_A+x_B \over 2} +{( x_A - x_B)^2 \over 16}} \; \mbox{with}\; x_A = {m_A^2 \over m^2_{DM} } \,.\nonumber \end{eqnarray} Here the normalization factor $N_i$ is $1$ or $ \sqrt{2}$ for the Dirac (charged) or Majorana (neutral) two-body state, respectively. \section{Sommerfeld-Ramsauer-Townsend effect} \label{sec:srt} In this section, we present a detailed study of the non-perturbative effect on the EWDM annihilation cross section $\sigma v^{WW+ZZ} \equiv \sigma v^{WW} + \sigma v^{ZZ}$ including both final states $W^+W^-$ and $ZZ$. 
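All cross sections discussed below are obtained by solving Eq.~(\ref{schroedinger}) numerically with the boundary conditions given above. As a simplified illustration of the numerical strategy, the following sketch (our own, with purely illustrative parameter values) computes the Sommerfeld factor for a single attractive Yukawa channel by integrating the radial equation outward and reading off the asymptotic amplitude; it reproduces the enhancement but not the Ramsauer-Townsend interference, which requires the full coupled-channel system with its long-range photon-exchange potential.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

ALPHA_2 = 0.033   # assumed SU(2) coupling alpha_2 (illustrative value)
M_W     = 80.4    # mediator mass in GeV (here: the W boson)

def sommerfeld_factor(m_dm, beta, alpha=ALPHA_2, m_v=M_W):
    # Single-channel analogue of the Schroedinger equation above:
    # u'' = m_dm*(V - K)*u with V(r) = -alpha*exp(-m_v*r)/r and
    # K = m_dm*beta**2 (natural units, energies in GeV).
    k = m_dm * beta
    def rhs(r, y):                    # y = (u, u')
        V = -alpha * np.exp(-m_v * r) / r
        return [y[1], (m_dm * V - k**2) * y[0]]
    r0, r_max = 1e-7 / m_v, 50.0 / min(m_v, k)
    sol = solve_ivp(rhs, (r0, r_max), [r0, 1.0], rtol=1e-8, atol=1e-12)
    u, du = sol.y[0, -1], sol.y[1, -1]
    # u ~ A*sin(k*r + delta) at large r; with u'(0)=1 the enhancement of
    # the s-wave annihilation rate is S = 1/(k^2 A^2).
    return 1.0 / (k**2 * u**2 + du**2)

print(sommerfeld_factor(m_dm=2000.0, beta=1e-3))  # enhancement over tree level
\end{verbatim}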
The wino-like EWDM system has the smallest number of states and parameters: two bound states ($\chi^+\chi^-$ and $\chi^0 \chi^0$) and one mass gap between them. For this reason, to simplify our discussion in this section we will focus on the example of wino-like dark matter, unless otherwise stated. As will be discussed in Section \ref{sec:pbar}, EWDM can copiously produce $W$ and $Z$ bosons leading to a sizable contribution to the antiproton flux measured by cosmic-ray detection experiments such as PAMELA. This puts strong constraints on the masses of the EWDM particles, as will be analyzed in Section \ref{PAMELA-limit}. One of the key observations in this work is that the non-perturbative effect on the EWDM annihilation cross section includes not only the Sommerfeld effect \cite{sommerfeld} which induces both an overall enhancement of the cross section and resonance peaks for particular values of masses and couplings\cite{hisano03}, but also a suppression or resonance dips. We will refer to this as the ``Ramsauer-Townsend effect'' in the DM annihilation processes. The Ramsauer-Townsend effect is a quantum mechanical phenomenon found in the scattering of electrons by noble gas atoms: the collision probability reaches a minimum when the electron kinetic energy take a certain value \cite{ramsauer}. This is analogous to what happens to the transmission coefficient of a one-dimensional potential well, which is enhanced (corresponding to a vanishing reflection probability) when certain conditions are met between the kinetic energy of the incident particle and the potential depth and width. A similar phenomenon can occur in the process of EWDM annihilation in presence of a non-perturbative electroweak potential. In order to see how such a Ramsauer-Townsend resonance arises in this system, we will perform a numerical analysis of the Schr\"odinger equation (\ref{schroedinger}) by changing the mass gaps, the electromagnetic, $W$ and $Z$ potentials, and the velocity of the DM particle. In particular, the latter is an important factor which can drastically change the behavior of the resonance peaks and dips. For this reason, in Section \ref{sec:pamela_limits} we will include in our calculation of the annihilation cross section a convolution over the Galactic velocity distribution of the impinging dark matter particles, in order to compare it to the experimental bound. We mention here that a plot showing dips for particular values of masses and couplings in the annihilation cross section of Dark Matter particles interacting in a non--perturbative electroweak potential was presented in Ref.~\cite{cirelli07}. To our knowledge, this is the only instance where the Ramsauer-Townsend effect has been shown in the literature in the context of dark matter annihilation. However, the authors of Ref.~\cite{cirelli07} did not mention this effect in their discussion. \subsection{Dependence on $\delta m_+$} In the determination of the non-perturbative effects of EWDM annihilation, the splitting between the masses of the dark matter and charged states is a crucial factor, since it controls the transition of the DM state to a particle able to experience the long-range effect of the electromagnetic(EM) interaction. The wino-like EWDM has two states $\chi^\pm$ and $\chi^0$ whose mass splitting is defined by $\delta m_+ \equiv m_{\chi^+}-m_{\chi^0}$. 
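For orientation, specializing the general expressions of the previous section to the wino-like case (with $N_{\chi^0\chi^0}=\sqrt{2}$ and $N_{\chi^+\chi^-}=1$) gives, in the basis $(\chi^0\chi^0, \chi^+\chi^-)$, the potential matrix
\begin{equation}
V(r) = \left( \begin{array}{cc} 0 & -\sqrt{2}\,\alpha_2 \frac{e^{-m_W r}}{r} \\ -\sqrt{2}\,\alpha_2 \frac{e^{-m_W r}}{r} & 2\,\delta m_+ - \frac{\alpha_{\rm em}}{r} - \alpha_2 \cos^2\theta_{\rm W} \frac{e^{-m_Z r}}{r} \end{array} \right),
\end{equation}
where $\alpha_{\rm em}=\alpha_2 \sin^2\theta_{\rm W}$ and the sign of the off-diagonal term is convention dependent. The long-range Coulomb term appears only in the charged channel, so the size of $\delta m_+$, which controls how easily that channel is accessed, plays a central role in what follows.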
In Figure \ref{Fig_5-1}, we present the cross sections of the wino-like DM annihilation to the $W^+W^-$ and $ZZ$ final states $\sigma v^{WW+ZZ}$, as a function of $m_{\rm DM}$ for the two representative values $\delta m_+ =$ 166 and 15 MeV, where 166 MeV is the typical mass splitting arising from the EW one-loop correction \cite{minimal_dm}. The velocity of the EWDM is fixed to be the typical value of $v/c=10^{-3}$. Our result with $\delta m_+=166$ MeV, which shows the Sommerfeld enhancement and a resonance peak within the mass range presented, is consistent with Ref.~\cite{hisano03}. For the smaller mass gap, one finds that the peak positions shift to smaller $m_{\rm DM}$ and a dip appears that before was missing. In this latter case, the smaller mass gap allows an easier access to the charged state, inducing a stronger non-perturbative effect which activates the Ramsauer-Townsend suppression. Actually, in the next subsection we will confirm that the Ramsauer-Townsend dips appear mainly due to the electromagnetic interaction of the charged states. \begin{figure} \begin{center}< \includegraphics[width=0.65\linewidth]{5-1.eps} \end{center} \caption{Annihilation cross sections of the wino-like EWDM for the mass splitting $\delta m_+ \equiv m_{\chi^+}-m_{\chi^0} =$166 MeV (blue solid) and 15 MeV (red dotted). The brown dot-dashed line shows the tree-level cross section.} \label{Fig_5-1} \end{figure} \subsection{Dependence on the EW interactions} In this subsection, we will modify the electroweak potentials in order to see how the SRT effect is affected. In Figure~\ref{Fig_5-2}, we first plot the annihilation cross sections to $W^+ W^-$ and $ZZ$ as a function of $m_{\rm DM}$ for $\delta m_+ =$ 15 MeV with and without the EM interaction. Then in Figure~\ref{Fig_5-3}, we display the annihilation cross section for the same parameters of the previous figure, but we vary the masses of the $Z$ or $W$ boson. In particular, in Figure~\ref{Fig_5-3}(a) we take $m_Z\rightarrow n\cdot m_Z$ and in Figure~\ref{Fig_5-3}(b) $m_W\rightarrow n\cdot m_W$, where in both cases $n=1/3, 1,$ and 3. \begin{figure} \begin{center} \includegraphics[width=0.65\linewidth]{5-2.eps} \end{center} \caption{Annihilation cross sections for the wino-like EWDM with $\delta m_+ =$ 15 MeV. The solid blue and red dotted lines show the results with and without the EM interaction, respectively. The brown dot-dashed line is the tree-level cross section.} \label{Fig_5-2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{5-3a.eps} \includegraphics[width=0.49\linewidth]{5-3b.eps} \end{center} \caption{Annihilation cross sections for the wino-like EWDM with $\delta m_+ =$ 15 MeV. Each panel shows the dependence on the interactions via the (a) $Z$ and (b) $W$ bosons. The purple dashed, blue solid, and red dotted lines respectively correspond to the cases of (a) $n\cdot m_Z$ and (b) $n\cdot m_W$ with $n=1/3, 1,$ and 3. The brown dot-dashed line is the tree-level cross section.} \label{Fig_5-3} \end{figure} Figure \ref{Fig_5-2} shows that the dip disappears when the EM interaction is turned off, confirming the role played by the EM potential in the Ramsauer-Townsend effect when the mass gap between the dark matter and charged states is small. Furthermore, one finds that the behavior of the annihilation cross section without the EM interaction is almost the same as in Figure \ref{Fig_5-1}, where the influence of the EM interaction is blocked by a larger mass gap. 
This tells us that the SRT effect is insensitive to the mass gap as far as there is no long-range EM interaction. The weak interaction via the $Z$ or $W$ boson exchange behaves like a long range interaction for a large DM mass, as can be seen in the large mass limit of Figures~\ref{Fig_5-1} and \ref{Fig_5-2}, where a sizeable SRT enhancement is present. A similar effect is expected to occur if smaller $Z$ or $W$ boson masses are taken. However, this may change the resonance conditions for the SRT peaks and dips as well. As a consequence, Figure \ref{Fig_5-3} shows that the peaks move to smaller values of the DM mass for smaller $Z$ and $W$ boson masses. Furthermore, one can find that the SRT effect is more sensitive to a change of the $W$ boson mass compared to that of the $Z$ boson mass. This can be explained by the fact that a change of the $W$ boson mass influences not only the strength of the weak interaction via the $W$ boson propagator, but also the accessibility thorough an EW charged current to a state able to interact electromagnetically. Moreover, Figure \ref{Fig_5-3} (b) shows a drastic change in the Ramsauer-Townsend effect: in particular, the RT resonance condition cannot be met for a light $W$ boson mass. From the above discussion, it is also expected that the SRT peak and dip positions move to lighter DM masses as the electroweak interaction strength is increased. This is indeed what happens in the case of the quintuplet EWDM discussed in Ref.~\cite{cirelli07}, which shows a RT dip at around $m_{DM} = 2$ TeV for a mass gap of order 100 MeV. On the other hand, no RT effect for such large values of the mass gap is observed for lower--dimensional EWDM and a dark matter mass within the multi-TeV range. \subsection{Amplitudes of the wave functions} As can be seen in Figures \ref{Fig_5-1}, \ref{Fig_5-2}, and \ref{Fig_5-3}, the annihilation cross sections of the EWDM show peaks and dips due to the SRT effect. To see their behavior in more detail, we present in Figure \ref{Fig_5-4} the amplitudes of the wave functions $d_{00}$ and $d_{0+}$ connecting the two bound states $\chi^0\chi^0$ and $\chi^+ \chi^-$ [see Eqs.~(\ref{schroedinger}) and (\ref{sigmaDM})] of the wino-like EWDM with $\delta m_+ = 15$ MeV. One finds that each wave function has peaks and dips, implying both a constructive and a destructive resonance behavior. However, some difference is observed in the behavior of the peaks and dips. In particular, while peak positions coincide in the two wave function amplitudes, the dip positions do not. As a consequence, the dip in the annihilation cross section (the red-dotted line in Figure~\ref{Fig_5-1}) appears between the two dips in the wave functions (the two lines in Figure~\ref{Fig_5-4}) and is broader. However, the dip in the cross section can be more pronounced and even as narrow as in the case of Sommerfeld peaks in situations where the dips in the two wave functions are closer. \begin{figure} \begin{center} \includegraphics[width=0.65\linewidth]{5-4.eps} \end{center} \caption{Wave function amplitudes for the two-body states of the wino-like EWDM with $\delta m_+ =$ 15 MeV. 
The blue solid and red dotted lines show $|d_{00}|$ and $|d_{0+}|$, respectively.} \label{Fig_5-4} \end{figure} \subsection{Dependence on DM velocity} \label{velocity} As discussed in previous subsections, the SRT resonances occur when certain conditions are satisfied among the EWDM parameters such as the electroweak interaction strength, the mass gap and the kinetic energy (which depends on the DM velocity). So far, we fixed the DM velocity\footnote{This is also done in other analyses, for instance, in Refs.~\protect\cite{hisano03,cirelli07}.} to $v/c = 10^{-3}$. However, the DM particles in the halo of our Galaxy have a velocity distribution which is expected, for some particular values of the parameters, to smooth out the pattern of peaks and dips produced by the SRT effect. Therefore, for a real physical system it will be crucial to include the integration over the velocity distribution in the calculation the EWDM annihilation rate. In Figure \ref{Fig_5-5}, we present the values of $\sigma v^{WW+ZZ}$ in terms of the relative velocity $v/c$ of the two DM particles for three representative values $m_{\rm DM} = 1423, 2550$, and 3380 GeV, which correspond to the positions of the peaks and the dips of the red dotted line in Figure \ref{Fig_5-1}. As expected, the DM annihilation cross section shows a dependence on the velocity of the incoming DM particles. For the analysis of the PAMELA antiproton flux limit in Section \ref{PAMELA-limit}, we will use the annihilation cross section obtained after velocity integration according to the following formula: \begin{equation} \langle \sigma v \rangle=N(v_{esc}) \int_0^{2 v_{esc}}[\sigma v(v)]v^2exp\left[-\frac{3}{4} \left (\frac{v}{v_{rms}} \right)^2\right] \;dv, \end{equation} where $v_{rms}$=270 km/s, $v_{esc}$=550 km/s and $N(v_{esc})$ is the normalization constant. \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{5-5a.eps} \includegraphics[width=0.49\linewidth]{5-5b.eps} \includegraphics[width=0.49\linewidth]{5-5c.eps} \end{center} \caption{Annihilation cross sections for the wino-like EWDM with $\delta m_+ =$ 15 MeV as functions of the relative velocity, $v/c$. For each panel, the used DM mass, $m_{\rm DM}$, corresponds to the positions of the peaks and the dips of the red dotted line in Figure \protect\ref{Fig_5-1}. The brown dot-dashed line is the tree-level cross section for each DM mass.} \label{Fig_5-5} \end{figure} \subsection{Dependence on $\delta m_N$} Before closing this section, we will check the dependence of the SRT effect on the mass splitting between the neutral states. The wino-like EWDM multiplet has only one Majorana neutral component, but, in general, EWDM multiplets can have more than one Majorana neutral component, whose masses are generically different from each other. In order to see the effect of the neutral mass splitting on the non-perturbative annihilation rate, we show $\sigma v^{WW+ZZ}$ for the Higgsino-like EWDM taking two values of the mass splitting: $\delta m_N \equiv m_{\chi_1^0}-m_{\chi_0^0} =$0.2 and 200 MeV in Figure \ref{Fig_5-6}. Here the mass difference between the DM and charged states, $\delta m_+ \equiv m_{\chi^+}-m_{\chi_0^0}$, is fixed to be 341 MeV which is the typical mass splitting arising from the EW one-loop correction of Higgsino-like EWDM \cite{minimal_dm}. As one can see in Figure \ref{Fig_5-6}, the position of the peak is shifted from $m_{\rm DM} \approx 8000$ GeV to $m_{\rm DM} \approx 6800$ GeV when $\delta m_N$ is changed from 200 MeV to 0.2 MeV. 
Note that in the previous Figure \ref{Fig_5-1}, the peak position moved from $m_{\rm DM} \approx 2200$ GeV to $m_{\rm DM} \approx 1200$ GeV when $\delta m_+$ was changed from 166 MeV to 15 MeV. Thus, one can conclude that the SRT effect for EWDM is much less sensitive to the neutral mass splitting than to the splitting between the DM and charged states. In the following analysis, we will fix $\delta m_N$ to 0.2 MeV (when applicable), corresponding to a situation in which direct detection is not excluded by the current experimental data and might yield a positive signal, as will be shown in Section \ref{sec:direct}. \begin{figure} \begin{center} \includegraphics[width=0.65\linewidth]{5-6.eps} \end{center} \caption{Annihilation cross sections for the Higgsino-like EWDM when $\delta m_N \equiv m_{\chi_1^0}-m_{\chi_0^0} =$ 0.2 MeV (blue solid) and 200 MeV (red dotted). The mass difference between the charged and DM states, $\delta m_+ \equiv m_{\chi^+}-m_{\chi_0^0}$, is fixed to 341 MeV. The brown dot-dashed line shows the tree-level cross section.} \label{Fig_5-6} \end{figure} \section{Direct detection} \label{sec:direct} As already pointed out in Section~\ref{General-EWDM}, EWDMs with non-vanishing hypercharge $Y$ have large couplings to nucleons driven by the exchange of a $Z$ boson, so their elastic cross sections on nuclei are excluded by Dark Matter direct-detection experiments, similarly to what typically happens for the simplest realizations of scenarios where the Dark Matter particle is a massive Dirac neutrino or a sneutrino. If, however, the elastic cross section for the EWDM is suppressed, inelastic scattering, in which the DM particle makes a transition to a slightly heavier neutral mass eigenstate, is possible \cite{inelastic}. Notice that the wino-like EWDM has a vanishing hypercharge and is therefore not subject to constraints from direct detection. In the case of the EWDM with $Y\ne$0, the neutral (Dirac) component of the multiplet is split into two Majorana particles (the lightest of which is the DM particle), and thus the elastic cross section vanishes and only an inelastic transition of the DM particle to the heavier neutral state EWDM$^{\prime}$ is allowed. In particular, among the cases discussed in the previous sections, this scenario can be realized for the Higgsino-like EWDM ($T=1/2$, $Y=\pm 1/2$) or for the triplet EWDM ($T=1$) with $Y=\pm 1$. The detection rate of inelastic scattering is suppressed by the relevant mass splitting $\delta m_{N}$. In particular, for a given recoil energy $E_{R}$ there exists a minimal velocity for the dark matter $\beta_{min}$ below which the kinetic energy is not sufficient to allow the transition to the excited state: \begin{equation} \beta_{min}=\sqrt{\frac{1}{2 M_N E_R}}\left (\frac{M_N E_R}{\mu}+\delta m_{N} \right ). \label{eq:beta_min} \end{equation} In the above equation $M_N$ is the nuclear mass and $\mu$ is the reduced mass of the DM particle and the nucleus. Since the incoming velocity of DM particles is bounded from above by their escape velocity $v_{esc}$ in the Galaxy, while every DM direct detection experiment can detect DM scatterings only above a given lower-energy threshold $E_{th}$, detectable values of the parameter $\delta m_{N}$ are bounded from above, typically at a few hundred keV depending on the target material and the energy threshold. In addition, a comparison of the expected signals with the measured event rates allows one to determine a lower bound on $\delta m_{N}$. 
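As a simple illustration of this kinematic ceiling, requiring $\beta_{min}<v_{max}$ in Eq.~(\ref{eq:beta_min}) for some recoil energy gives $\delta m_{N}<\mu v_{max}^{2}/2$; the short sketch below (our own, ignoring the finite analysis energy window, which tightens the bound further) evaluates this for a xenon target with the halo velocities adopted later in this section.

\begin{verbatim}
import numpy as np

GEV, KEV = 1.0, 1e-6             # energies in GeV
KM_S = 1.0 / 299792.458          # km/s expressed in units of c

M_XE  = 122.0 * GEV              # xenon nuclear mass (~131 amu), assumption
V_MAX = (550.0 + 232.0) * KM_S   # v_esc + v_earth used in the text

def beta_min(E_R, m_dm, delta_m, m_N=M_XE):
    # Minimal DM velocity for inelastic scattering at recoil energy E_R.
    mu = m_dm * m_N / (m_dm + m_N)
    return np.sqrt(1.0 / (2.0 * m_N * E_R)) * (m_N * E_R / mu + delta_m)

def delta_m_max(m_dm, m_N=M_XE, v_max=V_MAX):
    # Largest kinematically accessible splitting: delta_m < mu*v_max^2/2.
    mu = m_dm * m_N / (m_dm + m_N)
    return 0.5 * mu * v_max**2

for m_dm in (500.0, 1000.0, 3000.0):
    print(m_dm, "GeV :", round(delta_m_max(m_dm) / KEV), "keV")
# For splittings close to this ceiling only large recoil energies remain
# accessible, e.g. beta_min(40*KEV, 1000.0, 200*KEV) is just below V_MAX.
\end{verbatim}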
Moreover, the EWDM-nucleon inelastic cross section by $Z$ boson exchange is fully determined when the DM mass $m_{DM}$ is fixed \cite{minimal_dm}: \begin{equation} \sigma_{EWDM,nucleon\rightarrow EWDM',nucleon}= c\frac{G_{\rm F}^2M_{N}^2}{2\pi} Y^2(N - (1-4s_{\rm W}^2) Z)^2\,, \label{eq:direct_detection_cross_section} \end{equation} where the mass splitting between the two neutral states is neglected, $c=1$ for a fermionic EWDM, and $Z$ and $N$ are the number of protons and of neutrons in the target nucleus with mass $M_{N}$. This implies that the allowed range for $\delta m_{N}$ can be plotted as a function of $m_{DM}$. This is done in Figure \ref{fig:xenon_inelastic} for the case of the latest constraints from the XENON100 experiment \cite{xenon100}. In this figure, the solid lower curve refers to the case $Y=\pm 1$ while the dashed lower curve corresponds to $Y=\pm 1/2$. As explained above, values of $\delta m_{N}$ above the upper curve are non-detectable, since in this case $c\beta_{min} > v_{max}=v_{esc}+v_{earth}$.\footnote{Notice that $\beta_{min}$ is defined in the detector rest frame, while $v_{esc}$ is defined in the Galactic rest frame. For the Galilean boost we have assumed $v_{earth}$=232 km/s.} This is a purely kinematic constraint that does not depend on the EWDM-nucleus cross section, so the upper solid curve in the figure is common to the cases with $Y=\pm 1$ and $Y=\pm 1/2$. On the other hand, the lower curves represent the 90\% C.L. lower bounds on $\delta m_{N}$ obtained by applying Yellin's maximal gap procedure \cite{yellin} to the spectrum (consisting of two nuclear recoil candidates at recoil energies 7.1 keV and 7.8 keV) detected by XENON100 in the range 6.6 keV $< E_r <$ 43.3 keV \cite{xenon100}. In the calculation, we have assumed the standard value $\rho_{DM}$=0.3 GeV/cm$^3$ for the DM density in the neighborhood of the Sun and a Maxwellian velocity distribution truncated at $v_{esc}$=550 km/s with $v_{rms}$=270 km/s.\footnote{In Ref.~\protect\cite{xenon100}, the DM Region Of Interest (ROI) 6.6 keV $< E_r <$ 43.3 keV is used when analyzing the data with a Profile Likelihood method, but is reduced to 6.6 keV $< E_r <$ 30.5 keV when applying the maximum-gap method. While this does not imply a significant change in the limit for elastic scattering, the inelastic scattering bound is very sensitive to the upper bound of the ROI. For this reason, we derive our limit using the whole energy range of the XENON100 measurement.} \begin{figure} \begin{center} \includegraphics[width=0.50\columnwidth]{xenon_inelastic} \end{center} \caption{Values for the mass splitting $\delta m_{N}$ which are detectable and experimentally allowed by the XENON100 direct detection experiment \protect\cite{xenon100} as a function of the DM mass $m_{DM}$. The solid lower curve refers to the case $Y=1$ and the dashed lower curve to the case $Y=1/2$, while the solid upper curve does not depend on the value of the cross section and is common to both cases. When $\delta m_{N}$ is above the upper curve the recoil energy is always below the XENON100 lowest-energy threshold $E_{th}$=4 keV. On the other hand, values below the lower curves are excluded at 90\% C.L. 
by the 2 nuclear-recoil candidate events observed by XENON100 in the range 6.6 keV$< E_r<$ 43.3 keV by applying Yellin's maximal gap method \protect\cite{yellin}.} \label{fig:xenon_inelastic} \end{figure} As it will be shown in detail, antiproton fluxes are almost insensitive to the particular choice of $\delta m_N$, while are very sensitive to the other mass splittings. Nevertheless, in the following sections we will fix $\delta m_N$=200 keV (when applicable) corresponding to a situation for which direct detection is not excluded by present constraints and might yield a positive signal. \section{Constraints from antiproton fluxes} \label{sec:pbar} Since the EWDM is $SU(2)$ charged, their annihilations are expected to produce $W/Z$ bosons copiously whenever this channel is kinematically allowed, leading to a sizeable primary contribution to the antiproton flux detected by experiments measuring cosmic--rays. The antiproton primary contribution from DM annihilation must be summed to the secondary antiproton contribution produced by energetic cosmic rays impinging on the interstellar medium. Although still affected by uncertainties, the latter contribution can be calculated in a relatively reliable way, and is in fair agreement to observation. This implies that no much room is left for the additional contribution from DM annihilation, and antiproton data can put constraints on the EWDM, namely on the EWDM annihilation cross section-times-velocity $\sigma v$. This is shown in Figure \ref{fig:antiproton_fit}, where the circles represent the top-of-atmosphere antiproton flux as measured by PAMELA \cite{pamela2010}, while the dashed line is the secondary flux as calculated in Ref.~\cite{ptuskin}, and rescaled by an overall factor 0.84. In this way, the model fits the data particularly well, with $\chi^2=12.1$ with 23 degrees of freedom. For this reason, we adopt this model as an estimation of the secondary flux. As far as the primary antiproton flux is concerned, we have used both the antiproton yields per annihilation and the propagation model according to Ref.~\cite{pppc}, adopting for the latter an Einasto density profile with median values of the propagation parameters. Notice that, since the antiproton yields per annihilation corresponding to the two different final states $WW$ and $ZZ$ are practically undistinguishable, in the calculation of the expected signal the total cross section $\sigma^{WW}+\sigma^{ZZ} $ is factorized. The result of our analysis is shown in Figures~\ref{fig:antiproton_fit} and \ref{fig:pamela_sigmav_bound}. In particular, for each choice of $m_{DM}$ and $\sigma v^{WW}+\sigma v^{ZZ}$ we sum the primary and secondary contributions of the expected antiproton flux and use the PAMELA data points to calculate a $\chi^2$, for which we assume an upper bound $\chi^2<$44.2, corresponding to the 99.5\% C.L. with 23 degrees of freedom. In this way, we obtain the solid line shown in Figure \ref{fig:pamela_sigmav_bound}. In Figure \ref{fig:antiproton_fit}, the three solid lines show the expected antiproton flux for $m_{DM}$=200 GeV, 500 GeV and 1 TeV when $\sigma v^{WW}+\sigma v^{ZZ}$ lies on the boundary given in Figure \ref{fig:pamela_sigmav_bound}. \begin{figure} \begin{center} \includegraphics[width=0.50\columnwidth]{antiproton_fit_2012} \end{center} \caption{The circles show the antiproton top-of-atmosphere flux measured by PAMELA \protect\cite{pamela2010} as a function of the antiproton kinetic energy. 
The dashed line represents the secondary flux as calculated in \protect\cite{ptuskin} and rescaled by an overall factor 0.84. The solid lines show three expected fluxes from DM annihilation calculated as described in Section \protect\ref{sec:pbar} for $m_{DM}$=200 GeV,500 GeV, and 1 TeV, all corresponding to $\chi^2$=44.2.} \label{fig:antiproton_fit} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.50\columnwidth]{pamela_sigmav_bound_2012} \end{center} \caption{Estimation of the 99.5 \% C.L. upper bound on the annihilation cross section times velocity $\langle\sigma v \rangle$ as a function of the DM mass $m_{DM}$. The values of $\langle\sigma v \rangle$ above the solid line have $\chi^2>$44.2 with 23 degrees of freedom when compared to the PAMELA data \cite{pamela2010}.} \label{fig:pamela_sigmav_bound} \end{figure} \section{PAMELA limit on various EWDM}\label{PAMELA-limit} \label{sec:pamela_limits} As shown in Section \ref{sec:pbar}, $W$ and $Z$ bosons produced by the annihilations of EWDMs provide a sizable contribution to the cosmic ray antiproton flux measured by PAMELA. Thus, for various EWDM models we will examine the DM mass ranges satisfying the PAMELA antiproton flux bound on $\sigma v^{WW+ZZ}$ as obtained in Section \ref{sec:pbar}. As explained in Section \ref{velocity}, the annihilation cross section of the EWDM shows a DM velocity dependence. In this section, we will therefore calculate the annihilation cross section considering the velocity integration effect, and also, for comparison, present the result with a fixed velocity, $v/c=10^{-3}$. \subsection{Higgsino-like EWDM} We first show the results for the Higgsino-like (doublet with $Y=\pm 1/2$) EWDM which is the smallest multiplet. In Figure \ref{Fig_6-1}, we present the annihilation cross sections (blue solid lines) for two representative values $\delta m_+ =$ 341 and 8 MeV as a functions of $m_{\rm DM}$, where the velocity integration effect is included. For comparison, we also show our results for a fixed relative DM velocity $v/c=10^{-3}$ (red dotted lines). As already explained, in this plot and in the following ones the neutral mass splitting $\delta m_N$ is taken to be 0.2 MeV, when applicable. For the higher mass splitting $\delta m_+ = 341$ MeV (a), we extended the DM mass range up to 10 TeV to see the first Sommerfeld peak. If we further extended the mass range, Ramsauer-Townsend dips would appear as well. On the other hand, for the smaller mass gap $\delta m_+ = 8$ MeV (b), the SRT resonances appear in the smaller DM mass range as discussed in Section 3. \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{6-1a.eps} \includegraphics[width=0.49\linewidth]{6-1b.eps} \end{center} \caption{Annihilation cross sections to $W^+W^-$ and $ZZ$ for the Higgsino-like EWDM with $\delta m_+ =$ (a) 341 MeV and (b) 8 MeV, and $\delta m_N=$ 0.2 MeV. In each panel, the blue solid line is our final result obtained after velocity integration, while the red dotted line is the cross section with the fixed velocity, $v/c=10^{-3}$. The black long-dashed line shows the upper limit obtained from the PAMELA antiproton flux data analysis in Section \protect\ref{sec:pbar}. The brown dot-dashed line is the tree-level cross section.} \label{Fig_6-1} \end{figure} As shown in the figures, the velocity integration smooths out the peaks and dips and changes the positions of the peaks in (b). With only the tree-level cross section, the region $m_{\rm DM} \lesssim 364$ GeV is ruled out by the current PAMELA data. 
However, the annihilation cross section is enhanced by the SRT and consequently the excluded region is slightly extended, to $m_{\rm DM} \lesssim 382$ GeV, in the case of the typical charged mass splitting for the Higgsino-like EWDM, $\delta m_+ =$ 341 MeV \cite{minimal_dm}. In the case of a smaller mass splitting, the PAMELA limit can also constrain small bands around the Sommerfeld peaks, in addition to the low mass region, as can be seen in Figure \ref{Fig_6-1} (b). \subsection{Wino-like EWDM} Figure \ref{Fig_6-2} shows the annihilation cross sections for the wino-like (triplet with $Y=0$) EWDM in the same way as in the Higgsino-like EWDM case. The two values $\delta m_+ =$ 166 and 6 MeV are taken for the analysis, in order to show the dependence on the charged mass splitting. One can see from Figure \ref{Fig_6-2} that the velocity integration makes a big change in the case of the smaller mass gap, erasing some of the peaks and dips in \ref{Fig_6-2} (b). While the tree-level result excludes the mass range $m_{\rm DM} \lesssim 533$ GeV, the non-perturbative effect extends the constrained region up to $m_{\rm DM} \approx$ 664 GeV for the representative charged mass splitting of the wino-like EWDM, $\delta m_+ =$ 166 MeV \cite{minimal_dm}. For the smaller mass gap $\delta m_+ = 6$ MeV the PAMELA limit gets stronger, excluding DM masses below 900 GeV and also bands around the peaks, which are wider than in the Higgsino-like EWDM case. \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{6-2a.eps} \includegraphics[width=0.49\linewidth]{6-2b.eps} \end{center} \caption{Annihilation cross sections to $W^+W^-$ and $ZZ$ for the wino-like EWDM when $\delta m_+ =$ (a) 166 MeV and (b) 6 MeV. Each line has the same meaning as in Figure \protect\ref{Fig_6-1}. } \label{Fig_6-2} \end{figure} \subsection{Hyper-charged Triplet EWDM} Now let us consider a more complicated case, the triplet EWDM with $Y=\pm1$, which has one doubly charged, one singly charged, and two neutral components. Thus, the DM component $\chi_0^0$ can have three mass splittings: $\delta m_{++} \equiv m_{\chi^{++}}-m_{\chi_0^0}$, $\delta m_+ \equiv m_{\chi^+}-m_{\chi_0^0}$, and $\delta m_N \equiv m_{\chi_1^0}-m_{\chi_0^0}$. Figure \ref{Fig_6-3} presents the annihilation cross sections for $(\delta m_{++}, \delta m_+) = (1400, 525), (100, 525), (1400, 15),$ and $(100, 15)$ MeV fixing $\delta m_N = 0.2$ MeV. Note that $\delta m_{++} =$ 1400 MeV and $\delta m_+ =$ 525 MeV correspond to the typical mass splittings due to the EW one-loop corrections \cite{minimal_dm}. \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{6-3a.eps} \includegraphics[width=0.49\linewidth]{6-3b.eps} \includegraphics[width=0.49\linewidth]{6-3c.eps} \includegraphics[width=0.49\linewidth]{6-3d.eps} \end{center} \caption{Annihilation cross sections to $W^+W^-$ and $ZZ$ for the triplet EWDM with $Y=\pm1$. The values of $\delta m_{++, +, N}$ used are shown in each panel. Each line has the same meaning as in Figure \protect\ref{Fig_6-1}.} \label{Fig_6-3} \end{figure} One can see the dependence of the SRT effect on the doubly charged mass splitting $\delta m_{++}$ by comparing the two cases $(\delta m_{++}, \delta m_+) = (1400, 525)$ and $(100, 525)$ MeV in Figures \ref{Fig_6-3} (a) and (b), while the dependence on the singly charged mass splitting $\delta m_+$ can be understood by comparing the two cases $(\delta m_{++}, \delta m_+) = (1400, 525)$ and $(1400, 15)$ MeV in Figures \ref{Fig_6-3} (a) and (c).
In particular, one can see that a larger number of peaks and dips appears when $\delta m_{++}$ is reduced by a factor of 14 compared to the case when $\delta m_{+}$ is reduced by a factor of 35. This shows that the SRT effect is more sensitive to $\delta m_{++}$ than to $\delta m_{+}$, due to the stronger EM interaction of multiply-charged states. In addition, Figure \ref{Fig_6-3} (d) shows the combined effect of the changes of $\delta m_{++}$ and $\delta m_+$ shown separately in Figures \ref{Fig_6-3} (b) and (c). The hyper-charged triplet EWDM has stronger EW interactions compared to the Higgsino-like or wino-like EWDM and thus exhibits a stronger SRT effect. As a consequence, the excluded mass region reaches about 3 TeV for the typical mass gaps of $\delta m_{++} =$ 1400 MeV and $\delta m_+ =$ 525 MeV, while the tree-level limit is as low as about 800 GeV, as shown in Figure \ref{Fig_6-3} (a). It is interesting to see that regions of small DM mass are allowed for smaller mass gaps, as shown in Figures \ref{Fig_6-3} (b,c,d), due to the RT effect. One would expect to find more regions of lower DM mass allowed for lower mass gaps. \section{Conclusions} \label{sec:conclusions} In the present paper, we have discussed the non-perturbative effects occurring in the annihilation cross section of an ``Electro-Weak Dark Matter'' (EWDM) particle belonging to an $SU(2)_L\times U(1)_Y$ multiplet, when the splittings between the mass of the DM state and those of the other charged or neutral component(s) of the multiplet are treated as free parameters. In particular, we have considered a vector-like (Dirac) doublet with $Y=\pm 1/2$ (Higgsino-like), a (Majorana) triplet with $Y=0$ (wino-like) and a vector-like (Dirac) triplet with $Y=\pm1$. In all these examples, an ad hoc symmetry has to be imposed for the stability of the EWDM, and we have allowed for an unspecified non-standard cosmology for the generation of the right dark matter relic density, since the thermal abundance of the EWDM is typically below that required by observation unless its mass is in the multi-TeV range. Moreover, in the case of EWDM charged under $U(1)_Y$, severe constraints from direct detection searches exist on the elastic cross section off nuclei. However, these limits can be circumvented in the presence of a sufficiently large mass splitting (of the order of 0.2 MeV) in the Dirac dark matter fermion, so that only inelastic scattering is allowed, and this is kinematically suppressed. As a result of our analysis, it is shown that the EWDM exhibits not only the usual Sommerfeld enhancement of the cross section, with resonance peaks at particular values of the dark matter mass, but also a suppressed cross section for particular choices of the parameters. The latter phenomenon is a realization of the ``Ramsauer-Townsend effect'' observed in low-energy electron scattering off gas atoms. Moreover, we have shown that the EWDM mass for which non-perturbative effects become important is particularly sensitive to the mass splittings between the dark matter and the charged components of the EW multiplet, and is driven below the TeV scale when these splittings are reduced to a few MeV.
In particular, we have shown that when the mass splitting gets smaller the transition of the dark matter particles to electrically charged states is made easier, and it is the electromagnetic long-range interaction between these charged states which is responsible both for the Sommerfeld enhancement of the cross section and for the Ramsauer-Townsend suppression, even when the dark matter mass is not much larger than the EW gauge boson masses. Notice that only mass splittings larger than 100 MeV, induced by EW radiative corrections, have been considered so far in the literature, so that, before our analysis, the Sommerfeld effect had been discussed only in the context of multi-TeV scale dark matter. Based on the results explained above, we have then used available experimental constraints on the exotic component of the antiproton flux in cosmic rays to put constraints on the EWDM parameter space. Since non-perturbative effects depend on the velocity of the dark matter particles, we have calculated the annihilation cross section considering the effect of the convolution of the cross section with the velocity distribution of the dark matter particles in the Galaxy, showing that in some cases this can smear out or significantly modify the pattern of Sommerfeld-Ramsauer-Townsend peaks and dips obtained for a fixed value of the velocity. Typically, we have found constraints on the EWDM mass ranging from a few hundred GeV to a few TeV, depending on the specific EWDM realization and on the values of the mass splittings. However, we have also found that, for an appropriate choice of the mass splittings, the Ramsauer-Townsend suppression of the annihilation cross section can allow narrow intervals at lower values of the EWDM mass to be recovered. Finally, in the case of EWDM charged under $U(1)_Y$ we have found that the phenomenology described above is not particularly sensitive to the mass splitting between the two neutral Majorana states. As a consequence, this mass difference can be chosen so that the inelastic cross section of the EWDM off nuclei is allowed by present direct detection constraints and at the same time is within the reach of future experiments. \medskip \acknowledgments S.S. acknowledges support by the National Research Foundation of Korea (NRF) with a grant funded by the Korea government (MEST) no. 2011-0024836.
\section{Introduction and main results} One of the most striking properties of holomorphic functions in several variables is the fact that the property of being holomorphic can be checked separately on each variable while the others are kept as constants. This fact is known as Hartogs' theorem. Another remarkable property, which descends for instance from integral representation formulas, is that these functions are uniquely determined by their values on smaller sets such as the boundary of a domain or more generally on a generic manifold (see section \ref{S3} for the definition). It is natural to ask if these two properties can be combined or, in other words: given a function $f$ on the boundary of a domain $\Omega\subset \mathbb C^n$ such that $f$ is holomorphic when restricted to a set of complex curves (like the coordinate lines), is $f$ then the boundary value of a holomorphic function $F\in \text{Hol}(\Omega)$? This last question needs to be clarified because the function is defined on a set of dimension smaller than $2n$, and so the restriction to complex curves or with respect to a group of variables might not make sense. We shall denote by $\D=\{ \tau\in \mathbb C |\ |\tau|<1\}$ the standard unit disc in $\mathbb C$, by $\overline{\D}$ its closure, and by $\partial \D$ its boundary. For a general disc of radius $R$ and center $c$ we write $\D_{c,R}=\{\tau\in\mathbb C|\ |\tau-c|<R\}$. \begin{definition} An analytic disc in $\mathbb C^n$ for $n>1$ is a continuous, injective map $A:\overline{\D}\rightarrow \mathbb C^n$ which is holomorphic on $\D$. The set $A(\partial\D)$ is called the boundary of $A$ and is denoted by $\partial A$. We say that $A$ is attached to a set $M$ if $\partial A \subset M$. \end{definition} \begin{definition} Let $M\subset \mathbb C^n$ and $f:M\rightarrow \mathbb C$ a continuous function. If $A$ is an analytic disc attached to $M$ we say that $f$ extends holomorphically on $A$ if there exists a continuous function $F:\overline{\D} \rightarrow \mathbb C$ holomorphic on $\D$ such that $f(A(\tau))=F(\tau)$ for $\tau\in \partial\D$. \end{definition} We will use the term ``disc'' for both the map $A$ and the image $A(\D)$. We will say that a disc is straight or a line if $A(\D)$ is contained in a complex line. Given a smooth domain $\Omega\subset \mathbb C^n$ and a function $f:\partial\Omega\rightarrow \mathbb C$, by saying that $f$ extends holomorphically on a complex line $l(\tau)=a\tau +b$, $a,b\in\mathbb C^n$, we mean that if $D:=\{\tau\in \mathbb C |\ l(\tau) \in \Omega\}$ is a bounded simply connected domain and if $\phi :\D \rightarrow D$ is the Riemann map, then the function $f\circ l\circ \phi :\partial\D \rightarrow \mathbb C$ extends holomorphically. If $F$ is the holomorphic extension of $f\circ l\circ \phi$ we will refer to the extension of $f$ on the line $l$ as the function on the analytic disc $A=l\circ \phi$ (i.e. on the image of the map $A$) $f(A(\tau)) :=F(\tau)$. Let $\Lambda\subset {\mathbb R}^m$ be a set of parameters; a family of discs on $\Lambda$ is a continuous map $A:\bar{\D}\times \Lambda\rightarrow\mathbb C^n$, $A(\tau,\lambda)=A_{\lambda}(\tau)$, which is holomorphic in $\tau$. \begin{definition} Let $A_\lambda$, $\lambda\in\Lambda$ be a family of analytic discs attached to the boundary of a domain $\Omega\subset \mathbb C^n$.
We say that $A_\lambda$ is a testing family for $\mathcal{C}^k$ functions if the following holds: If $f\in \mathcal{C}^k(\partial\Omega)$ extends holomorphically on $A_\lambda$ for all $\lambda\in\Lambda$ then $f$ is the boundary value of a holomorphic function $F\in \text{Hol}(\Omega)\cap \mathcal{C}^k(\overline{\Omega})$. \end{definition} It is difficult for general domains to prove that a given family of discs is testing, and it is even harder to determine the optimal families (i.e. the smallest possible). A model case which can help to understand the problem is when $\Omega$ is the unit ball and the testing discs are chosen among the linear ones. This particular case has been widely studied. We mention here the work of \cite{A11,B13,B16,G12,G12bis,L07,L18} and the most refined version of this case, which is in \cite{G12}. In that paper the family of lines passing through three non-aligned points $a,b,c\in \mathbb C^2$ is considered, where at least one of the joining lines (say the one through $a$ and $b$) meets the interior of the ball. Such a family of lines is a testing family for continuous functions if and only if $a\cdot b \neq 1$ and either $a\cdot c\neq 1$ or $b\cdot c\neq 1$, where $\cdot$ is the Hermitian inner product. Some attempts have been made by the authors to remove the condition on the joining line (see \cite{BP18} and \cite{BF19}) at the price of requiring $f$ to be real analytic. We note that while in \cite{BP18} only two points $a,b$ are needed for real analytic functions if the joining line is tangent to the ball, for lower regularities this is not true, as the following counterexample shows. \begin{example} The function $f:\mathbb{S}^3 \rightarrow \mathbb C$ \begin{equation}\label{example} f(z_1,z_2)=\left(\frac{\bar{z}_1 -\bar{t}_1}{\bar{z}_2 -1}\right)(z_1\bar{t}_2 +z_2-1)\exp\left(\frac{-1}{\sqrt{1-z_2}}\right) \end{equation} is smooth and extends holomorphically on all lines concurrent to $(t_1,1)$ and $(t_2,1)$. Yet $f$ is not the boundary value of a holomorphic function because it is not CR (see Definition \ref{CR}). Note that multiplying by extra factors of type $(z_1\bar{t}_k+z_2-1)$ we obtain a function that extends on lines concurrent to a finite set of points $(t_k,1)$. So the set of lines concurrent to a finite number of points on the same line tangent to the sphere is not a testing family. \end{example} In this paper we treat the case when one of the joining lines, say the one through $a$ and $b$, is tangent to the unit sphere. More precisely, we prove the following. \begin{theorem}\label{t1} Let $a,b \in \mathbb C^2\setminus \mathbb B^2 $ be two distinct points such that the line joining $a$ and $b$ is tangent to the unit sphere $\mathbb{S}^3$ at a point $p$. Let $f:\partial\mathbb B^2 \rightarrow \mathbb C$ be a continuous function such that $f$ extends holomorphically on every complex line through $a$ and $b$. Then $f$ extends holomorphically on every complex line through $p$. \end{theorem} As a consequence we have the following \begin{corollary} \label{C1} Let $a,b,c \in \mathbb C^2\setminus \mathbb B^2 $ be three points not on the same complex line and assume that the line joining $a$ and $b$ is tangent to the unit sphere $\mathbb{S}^3$. Let $f:\partial\mathbb B^2 \rightarrow \mathbb C$ be a continuous function such that $f$ extends holomorphically on every complex line through $a,b$ and $c$. Then $f$ is the boundary value of a holomorphic function $F\in \text{Hol}(\mathbb B^2)\cap \mathcal{C}^0(\overline{\mathbb B^2})$. \end{corollary} The proof is inspired by the one in \cite{G12}.
The paper is divided in five sections: In section \ref{S2} we introduce a subgroup of automorphisms with which we perform an averaging procedure on $f$ that allows us to reduce our problem to one complex dimension. In section \ref{S3} we introduce the CR geometry techniques that we need in section \ref{S4} to prove that the "averaged" function is $0$. With this in hand we will prove in section \ref{S5} the Theorem \ref{t1} and from this and \cite{G12} we have Corollary \ref{C1}. \section{Reduction to a one dimensional problem }\label{S2} \subsection{The group $G$ of preserving automorphisms} It is not restrictive to assume that $a=(t_1,1)$ and $b=(t_2,1)$ with $t_1,t_2 \in \mathbb C\setminus \{ 0\}$. In order to simplify the problem, we exploit the symmetry of the ball and find the automorphisms that preserve the concurrent lines to $a$ and $b$. First we recall the general expression of an automorphism of the unit ball: \begin{definition} For a nonzero vector $a=(a_1,a_2)\in \mathbb B^2$ we define the projection over $a$ as $P_a(z):=a\frac{\left\langle z,a\right\rangle }{|a|^2}$ and the orthogonal projection $Q_a(z):=z-P_a(z)$. We also define $s_a=\sqrt{1-|a|^2}$, and for $a=0$, $P_0(z)=0$ and $Q_0(z) =z$. \end{definition} From \cite{R} if $\phi\in \text{Aut}(\mathbb B^2)$ then $\phi $ must be of the form \begin{equation}\label{aut} \phi(z)=U\left( \frac{a-P_a(z)-s_a Q_a(z)}{1- \left\langle z,a\right\rangle }\right) \end{equation} for some $a\in\mathbb B^2$ and some unitary matrix $U$. Clearly the automorphisms of the ball are meromorphic maps of $\mathbb C^2$ and they transform complex lines into complex lines. Because of this we want to find the subgroup $G$ of $\text{Aut}(\mathbb B^2)$ of automorphisms that fix the points of the line $z_2=1$. We have \begin{lemma}\label{l1} The group $G$ is given by $$G=\left\lbrace \phi_{a_2}(z)=\frac{1-\bar{a_2}}{1-z_2\bar{a_2}} \begin{pmatrix} z_1 \\ \frac{a_2-z_2}{a_2-1}\end{pmatrix}\ | \quad a_2=\frac 12(1+e^{i\theta})\ \theta\in [0,2\pi)\right\rbrace .$$ Moreover the map $\Phi:{\mathbb R}\rightarrow G $ $$ \Phi (y)=\phi_{\frac{iy}{1+iy}}$$ is a group homomorphism. \end{lemma} \begin{proof} We begin by finding those $a\in \mathbb B^2$ and $U$ such that $\phi\in G$. First we must have by \eqref{aut} that \begin{equation}\label{f1} \phi(z_1,1) =U\left( \frac{a-P_a(z)-s_a Q_a(z)}{1- \left\langle z,a\right\rangle }\right) =\begin{pmatrix} z_1 \\ 1\end{pmatrix} \quad \forall z_1\in \mathbb C \end{equation} which gives, by the linearity of $P_a$ and $Q_a$, that the denominator of the left-hand side does not depend on $z_1$, thus $a_1=0$. We have \begin{equation}\label{*} \phi(z)=\frac{1}{1-z_2\bar{a}_2} \begin{pmatrix} U_{11}&U_{12} \\ U_{21}&U_{22} \end{pmatrix}\begin{pmatrix} -\sqrt{1-|a_2|^2} z_1 \\ a_2-z_2 \end{pmatrix} \end{equation} and again by \eqref{f1} we have that $U$ is a diagonal matrix hence $U_{12}=U_{21}=0$. Moreover, since $U$ is unitary $$\begin{cases} -U_{11} \frac{\sqrt{1-|a_2|^2}}{1-\bar{a}_2}=1 &\text{ and } |U_{11}|=1 \\ U_{22}\frac{a_2-1}{1-\bar{a}_2} =1 &\text{ and } |U_{22}|=1 \end{cases}.$$ We find after elementary computations that \begin{equation}\label{**} \begin{cases} U_{22}= \frac{1-\bar{a}_2}{a_2-1} \\ U_{11}=\frac{\bar{a}_2 -1}{\sqrt{1-|a_2|^2}} \\ |a_2|^2-\text{Re} a_2=0 \end{cases}. \end{equation} The last equation implies that $a_2$ is on a circle of radius $\frac 12$ and center $\frac 12$ therefore $a_2=\frac 12 (1+e^{i\theta})$. 
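For the reader's convenience, the elementary computation behind the last relation in \eqref{**} is the following: the first condition forces $U_{11}=-\frac{1-\bar{a}_2}{\sqrt{1-|a_2|^2}}$, and imposing $|U_{11}|=1$ gives
\begin{equation*}
|1-\bar{a}_2|^2=1-|a_2|^2, \qquad \text{i.e.} \qquad 1-2\,\text{Re}\, a_2+|a_2|^2=1-|a_2|^2,
\end{equation*}
which is exactly $|a_2|^2-\text{Re}\, a_2=0$.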
Finally, by \eqref{*} and \eqref{**} we have that $$ \phi_{a_2}(z):=\frac{1-\bar{a}_2}{1-z_2\bar{a}_2} \begin{pmatrix} z_1 \\ \frac{a_2-z_2}{a_2-1}\end{pmatrix} \in G .$$ The elements of the group $G$ are in a one to one correspondence with the circle $\mathcal{C}$ of equation $|a_2|^2-\text{Re} a_2=0,\ a_2\neq 1$ and there is a bijection between $\mathcal{C}$ and ${\mathbb R}$ via the following map $\nu :{\mathbb R} \rightarrow \mathcal{C}$, $\nu(y)= \frac{iy}{iy+1}$. We check now that $\Phi (y)=\phi_{\nu(y)}$ is a group homomorphism i.e. we verify that $\Phi(y_1+y_2)=\Phi(y_1)\circ \Phi(y_2)$ or equivalently $\phi_{\nu(y_1+y_2)} =\phi_{\nu(y_1)}\circ \phi_{\nu(y_2)}$. We start by considering the fact that given $\phi_a \in G $, then the second component, which is $\frac{(1-\bar{a})(a-z_2)}{(1-z_2\bar{a})(a-1)}$, depends only on $z_2$, it has precisely one zero and such zero coincide with $a$. If we consider two such maps $\phi_a$ and $\phi_b$ in $G$, we have $\phi_b\circ\phi_a\in G$ and therefore there exists $c$ such that $\phi_b\circ\phi_a=\phi_c$. To determine $c$ we just have to find the unique zero of the second component of $\phi_b\circ\phi_a$ which is done by solving \begin{equation* b-\frac{(1-\bar{a})(a-z_2)}{(a-1)(1-z_2\bar{a})}=0 \end{equation*} which yields \begin{equation}\label{1bis} c=\frac{b(a-1)-a(1-\bar{a})}{\bar{a}b(a-1)-(1-\bar{a})}. \end{equation} Since $a\in \mathcal{C}$ we have $|a|^2=\text{Re} a$ and $1-|a|^2 =(a-1)(\bar{a}-1)$. With this identity we rewrite \eqref{1bis} and we have \begin{equation}\label{2} c=\frac{b(a-1)-a+1+|a|^2-1 }{\bar{a}b(a-1)-1+|a|^2+\bar{a}-|a|^2}= \frac{b-\bar{a}}{\bar{a}b-2\bar{a}+1}. \end{equation} We choose $a=\frac{iy_1}{1+iy_1}=\nu(y_1)$ and $b=\frac{iy_2}{1+iy_2}=\nu(y_2)$ and replacing into \eqref{2} we have $$ c=\frac{i(y_1+y_2)}{1+i(y_1+y_2)}=\nu(y_1+y_2)$$ and this gives the conclusion $\phi_{\nu(y_1)}\circ\phi_{\nu(y_2)}=\phi_{\nu(y_1+y_2)}$. \end{proof} \begin{remark}\label{r1} The group $G$ acts on the sphere $\mathbb{S}^3$ in the natural way $\star:G\times \mathbb{S}^3 \rightarrow \mathbb{S}^3$: \begin{equation*} \phi_{\nu(y)}\star (z_1,z_2)=\phi_{\nu(y)}(z_1,z_2) \end{equation*} and this action has a unique fixed point which is $(0,1)$. If $(z_1,z_2)\in\mathbb{S}^3$ then the orbit under the action $\star$ is the boundary of the straight disc passing through $(0,1)$ and $(z_1,z_2)$ deprived of the point $(0,1)$. Note that $\lim_{y\to\infty}\phi_{\nu(y)}(z_1,z_2)=(0,1)$. \end{remark} The next step is to take the averages of $f$ on the orbits of the action by taking the following integral $$ \tilde{f}(z_1,z_2):=\int_{\mathbb R} \!f(\phi_{\nu(y)}(z_1,z_2))\,dy $$ to make sure that the integral converges we multiply $f$ by a holomorphic function vanishing to a high order in $(0,1)$ and non-zero elsewhere. So instead of considering $f$ we take $g(z_1,z_2)f(z_1,z_2)$ where $g(z_1,z_2)=\exp{\left( \frac{-1}{\sqrt{1-z_2}}\right) }$. \begin{proposition}\label{p1} Let $f:\mathbb{S}^3\rightarrow \mathbb C$ be a continuous function and let $\tilde{f}$ be the following function: \begin{equation}\label{3} \tilde{f}(z_1,z_2):=\!\int_{\mathbb R} \!g(\phi_{\nu(y)}(z_1,z_2))f(\phi_{\nu(y)}(z_1,z_2))\,dy \end{equation} then $\tilde{f}$ is a continuous function on $\mathbb{S}^3\setminus\{ (0,1)\}$ and $G$ invariant: $$ \tilde{f}\circ \phi =\tilde{f}\quad \forall \phi\in G .$$ Moreover if $f$ extends holomorphically on the family of lines through a point $(t,1),\ t\in\mathbb C$, $t\neq 0$ then so does $\tilde{f}$. 
\end{proposition} \begin{proof} First we note that the integral in \eqref{3} converges. To prove this, it is enough to check the behavior of the integral for $y\to \infty$. Since $$\phi_{\nu(y)}(z_1,z_2)=\left( \frac{z_1}{iy(z_2-1)+1},\frac{iy(z_2-1)+z_2}{iy(z_2-1)+1}\right) $$ we see that it tends to the point $(0,1)$ for $y$ large with a $\frac{1}{y}$ rate, in fact: $$ \left\| \left(\frac{z_1}{iy(z_2-1)+1},\frac{iy(z_2-1)+z_2}{iy(z_2-1)+1}\right) -(0,1)\right\| \le \frac{|z_1|+|z_2-1|}{|1+iy(z_2-1)|}\le\frac{|z_1|+|z_2-1|}{|y|(1-x_2)}. $$ Therefore the integrand function in \eqref{3} is dominated by $\exp(-|y|^{\frac 12}) $ and so the integral converges; moreover $\tilde{f}$ is continuous in $\mathbb{S}^3 \setminus \{(0,1)\}$. If $\phi_{\nu(y_1)}(z_1,z_2)$ is a point in the $G$-orbit of $(z_1,z_2)$ then, because $\phi_{\nu(y)}\!\circ\!\phi_{\nu(y_1)}=\phi_{\nu(y+y_1)}$, one sees that $$ \tilde{f}(\phi_{\nu(y_1)}(z_1,z_2)) =\int_{\mathbb R} \!g(\phi_{\nu(y+y_1)}(z_1,z_2))f(\phi_{\nu(y+y_1)}(z_1,z_2))\,dy=\tilde{f}(z_1,z_2) .$$ If $A$ is a straight disc through a point $(t,1)$ then $\phi_{\nu(y)}\!\circ A$ is a straight disc through $(t,1)$ (because $\phi_{\nu(y)}$ fixes $(t,1)$ for all $y$). If $f$ extends holomorphically on every straight disc through $(t,1)$, for any such disc $A$ and for all $n\in {\mathbb N}$ we have $$ \int_{\partial \Delta}\tau^n \tilde{f}(A(\tau))\,d\tau=\int_{\partial\Delta}\! \int_{\mathbb R} \tau^n fg(\phi_{\nu(y)}(A(\tau))) \,dyd\tau = \int_{\mathbb R}\!\int_{\partial\Delta} \tau^n fg(\phi_{\nu(y)}(A(\tau))) \,d\tau dy=0 $$ where the last equality holds because $fg$ extends holomorphically on the disc $\phi_{\nu(y)}(A)$ for all $y$. So $\tilde{f}$ extends holomorphically on $A$. \end{proof} Since by Proposition \ref{p1} $\tilde{f}$ is constant on the $G$-orbits, $\tilde{f}$ induces a function on $\mathbb{S}^3/\star$. Since each orbit can be identified with a complex line through $(0,1)$ we can use the ``complex slope'' $\zeta=\frac{z_1}{z_2-1}$ of this line as a coordinate for $\mathbb{S}^3/\star$. Therefore, we define $h(\zeta):=\tilde{f}(z_1,z_2)$. Note that $\zeta=0$ corresponds to the line $z_1=0$ and we associate to the line $z_2=1$ the symbol $\zeta=\infty$. \begin{proposition}\label{p2} If $\tilde{f}$ extends holomorphically on all lines concurrent to $(t,1)$ then $h$ extends holomorphically on the family of circles centered at $\dfrac{-1}{\bar{t}}$. \end{proposition} \begin{proof} Since the map $\zeta(z_1,z_2)=\frac{z_1}{z_2-1}$ is holomorphic, if $\tilde{f}$ extends holomorphically on a disc $A$ then $h$ extends holomorphically on the image disc $\zeta (A)$. We determine now the image under the map $\zeta(z_1,z_2)=\frac{z_1}{z_2-1}$ of the discs through $(t,1)$. Let $v=(v_1,v_2)$ be a unit complex vector and let $\tau v+(t,1)$ be the complex line through $(t,1)$ and parallel to $v$. The intersection with the unit sphere yields the circle in the $\tau$-plane of equation $|\tau|^2+2\text{Re}(\tau(v_1\bar{t}+v_2)) +|t|^2=0$, which has center $c$ and squared radius $R^2$: \begin{equation}\label{cR} c=-(\bar{v_1}t+\bar{v_2}),\ R^2=|\overline{v}_1t+\overline{v}_2|^2-|t|^2 \end{equation} we call this circle $\mathcal{C}_{c,R}$. We see that to this disc there corresponds a circle in the $\zeta$-plane by \begin{equation}\label{4} \zeta(\tau v+ (t,1))=\frac{t+\tau v_1}{\tau v_2}=\frac{v_1}{v_2}+\frac{t}{v_2}\left(\frac{1}{\tau}\right) \quad \tau\in\mathcal{C}_{c,R} .
\end{equation} Since the inversion $\frac{1}{\tau}$ transforms a circle of center $c$ and radius $R$ to a circle of center $\frac{\bar{c}}{|c|^2-R^2}$ and radius $\frac{R}{|R^2-|c|^2|}$ by \eqref{cR} we have that \eqref{4} gives a circle in the $\zeta$-plane of center $\dfrac{-1}{\bar{t}}$ and radius $\sqrt{\frac{|v_2|^2(1-|t|^2)+2\text{Re}(t\overline{v}_1v_2)}{|v_2t|^2}}$. Clearly if $v_2$ is small the resulting radius tends to $\infty$ therefore by continuity we have all circles centered at $\dfrac{-1}{\bar{t}}$. \end{proof} \subsection{Behavior of $h$ at infinity} Clearly $\tilde{f}$ is continuous on $\mathbb{S}^3\setminus (0,1)$ and $\tilde{f}(0,1)=0$ because when $z_1=0, z_2=1$ the integrand function in \eqref{3} is identically zero. Since $\tilde{f}$ is constant on the $G$-orbits it cannot be continuous in $(0,1)$ unless it is constant. We can prove that the induced function $h$ is continuous and infinitesimal at $\infty$ \begin{proposition}\label{p3} If $f:\mathbb{S}^3\rightarrow\mathbb C$ is continuous then for the corresponding function $h$ we have $h(\zeta)=O_\infty(|\zeta| e^{-|\zeta|})$ . \end{proposition} \begin{proof} Since $\zeta=\frac{z_1}{z_2-1}$ we have to prove that $h(\frac{z_1}{z_2-1})=\tilde{f}(z_1,z_2)$ is small when $|\frac{z_1}{z_2-1}|$ is large. We begin with \begin{equation}\label{5}|\tilde{f}(z_1,z_2)|\le\int_{\mathbb R} |gf(\phi_{\nu(y)})|\,dy\le C\int_{\mathbb R} \left| \exp\left( -\sqrt{\frac{iy(z_2-1)+1}{1-z_2}}\right) \right|\,dy \end{equation} where $C=\max{f}$. If we put $z_2=x_2+iy_2$ the last integral in \eqref{5} is equal to \begin{equation} \int_{\mathbb R}\left| \exp\left( -\sqrt{\frac{1-x_2+iy_2}{(1-x_2)^2+y^2_2}-iy}\right) \right| \,dy =\int_{\mathbb R}\left| \exp\left( -\sqrt{\underbrace{\frac{1-x_2}{(1-x_2)^2+y^2_2}}_{=\alpha(z_2)}+iy}\right) \right| \,dy \end{equation} To estimate this last integral we consider the real part of the complex square root inside the exponential (for short we write $\alpha$ instead of $\alpha(z_2)$): $$\text{Re} \sqrt{\alpha+iy}=(\alpha^2+y^2)^{\frac{1}{4}}\cos\frac{\arg(\alpha+iy)}2 =\frac{1}{\sqrt{2}}\sqrt{\alpha+\sqrt{\alpha^2+y^2}} $$ so the integral becomes, after another change of variable \begin{align} \int_{\mathbb R} \alpha&\exp\left( -\sqrt{\frac{\alpha}{2}}\sqrt{1+\sqrt{1+\left(\frac{y}{\alpha}\right)^2}}\right) \,d\left( \frac{y}{\alpha}\right)=\int_{\mathbb R} \alpha\exp\left( -\sqrt{\frac{\alpha}{2}}\sqrt{1+\sqrt{1+y^2}}\right) \,dy \nonumber \\ &\le \int_{\mathbb R} \alpha\exp\left( -\sqrt{\frac{\alpha}{2}}\sqrt{1+|y|}\right) \,dy\le \int_{\mathbb R} \alpha\exp \left( - \frac{\sqrt{\alpha}}{2}(1+\sqrt{|y|})\right) \,dy \nonumber \\ \label{8} &\le \alpha e^{-\frac{\sqrt{\alpha}}{2}}\int_{\mathbb R} \exp \left( -\frac{\sqrt{\alpha} |y|}{2}\right) \,dy \end{align} and if $\alpha$ is large then the integral is uniformly small. If $(z_1,z_2)\in \mathbb{S}^3$ is such that $|\frac{z_1}{z_2-1}|>M$ we have \begin{equation} \begin{cases} \label{6} |z_1|^2+|z_2|^2 =1 \\ |z_1|^2>M^2|z_2-1|^2 \end{cases} \end{equation} from \eqref{6} follows $(M^2+1)|z_2|^2 -2M^2x_2+M^2-1<0$ and this implies \begin{equation} \label{7}\frac{1-x_2}{|z_2-1|^2} >\frac{M^2+1}2 \end{equation} which is $\alpha(z_2)>\frac{M^2+1}{2}$. 
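For completeness, the step from \eqref{6} to \eqref{7} can be made explicit: a direct computation gives the identity
\begin{equation*}
(M^2+1)|z_2-1|^2-2(1-x_2)=(M^2+1)|z_2|^2-2M^2x_2+M^2-1,
\end{equation*}
so the inequality displayed above is equivalent to $(M^2+1)|z_2-1|^2<2(1-x_2)$, i.e.\ to \eqref{7}.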
Therefore, for $|\zeta|>M$, if $(z_1,z_2)$ is such that $\zeta =\frac{z_1}{z_2-1}$, then by \eqref{7} and \eqref{8} we have \begin{equation}\label{21bis} h(\zeta)=\tilde{f}(z_1,z_2)=O_\infty(M e^{-M}) .\end{equation} \end{proof} \section{CR Geometry}\label{S3} We recall here some definitions from Cauchy-Riemann (in short CR) geometry that we shall need. This language is important when describing the structure and the properties of real submanifolds in a complex space. First we recall the definition of the complex structure $J$: \begin{definition} In $\mathbb C^n$ with coordinates $z=(z_1,...,z_n)=(x_1+iy_1,...,x_n+iy_n)$ the standard complex structure is the linear map $J$ defined on the tangent bundle $J:T\mathbb C^n\rightarrow T\mathbb C^n$ such that $J(\partial_{x_h})=\partial_{y_h}$ and $J(\partial_{y_h})=-\partial_{x_h}$ for $1\le h\le n$. \end{definition} Since $J$ is an anti-involution, $J^2=-Id$, we have that $J$ has two complex eigenvalues $\pm i$. This induces a decomposition $\mathbb C\otimes T_z\mathbb C^n \simeq T^{1,0}_z\mathbb C^n \oplus T^{0,1}_z\mathbb C^n$ into holomorphic and anti-holomorphic vector bundles. \begin{definition} We define $$ T^{1,0}_z\mathbb C^n=\{\sum_{i=1}^{n}a_i \partial_{z_i} |\ a_i\in\mathbb C\}$$ and similarly $$ T^{0,1}_z\mathbb C^n=\{\sum_{i=1}^{n}a_i \partial_{\bar{z}_i} |\ a_i\in\mathbb C\}.$$ \end{definition} \begin{definition} Let $M\subset \mathbb C^n$ be a real submanifold; the complex tangent space of $M$ at a point $z$ is $$ T^\mathbb C_z M:= T_zM\cap JT_z M$$ which is the largest complex space contained in $T_zM$. We say that $M$ is a CR manifold if $T^\mathbb C_z M$ has constant dimension for $z\in M$. In this case $\dim_\mathbb C T^\mathbb C M $ is called the CR-dimension of $M$. If the CR-dimension of $M$ is $0$ then we say that $M$ is totally real. We say that $M$ is generic if $T_zM+JT_zM=T_z\mathbb C^n$, and we say that $M$ is maximally totally real if it is totally real and generic. \end{definition} We want to introduce the anti-holomorphic tangent bundle of a CR manifold: this is the bundle of anti-holomorphic vectors that are tangent to $M$ $$T^{0,1}_zM := (\mathbb C\otimes T_zM)\cap T^{0,1}_z\mathbb C^n .$$ \begin{example} We recall here that manifolds of co-dimension $1$ are generic CR manifolds and that ${\mathbb R}^n\hookrightarrow \mathbb C^n$ is a maximally totally real manifold. \end{example} Analogously, we define $T^{1,0}M:=(\mathbb C\otimes TM)\cap T^{1,0}\mathbb C^n$. It is easy to check that if $M$ is a CR manifold then $T^{0,1}M$ is a bundle whose complex dimension is equal to the CR dimension of $M$ and moreover we have $\mathbb C\otimes T^\mathbb C M =T^{0,1}M\oplus T^{1,0}M$. We define now the CR functions, which are a natural generalization of holomorphic functions: \begin{definition}\label{CR} A $\mathcal{C}^1$ function $f:M\rightarrow \mathbb C$ on a CR manifold $M$ is said to be CR if for all vector fields $X\in T^{0,1}M$ we have $Xf=0$. \end{definition} The above definition can be easily extended in a weak sense, using distributions, to continuous functions. All results on CR functions that we shall use hold for this weaker notion. If $M$ is a CR manifold with boundary $N$, we say that $f:M\rightarrow \mathbb C$ is CR if it is continuous and CR in the interior of $M$. Restrictions and boundary values of holomorphic functions are CR functions; in contrast, not all CR functions are restrictions of holomorphic ones.
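For instance, on the unit sphere $\mathbb{S}^3=\{|z_1|^2+|z_2|^2=1\}\subset\mathbb C^2$ (the case relevant for this paper) the bundle $T^{0,1}\mathbb{S}^3$ has complex rank one and is spanned at every point by the vector field
\begin{equation*}
\bar{L}=z_2\,\partial_{\bar{z}_1}-z_1\,\partial_{\bar{z}_2},
\end{equation*}
so a $\mathcal{C}^1$ function $f$ on $\mathbb{S}^3$ is CR precisely when $\bar{L}f=0$; this is the condition that fails for the function in \eqref{example}.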
Whether a CR function has this property has been widely studied, and it has been proved that a crucial role is played by $T^\mathbb C M$ and by its integral manifolds. In general the bundle $T^\mathbb C M$ is not integrable, so we give the following \begin{definition} A CR manifold $M$ is called Levi-flat if the bundle $T^\mathbb C M$ is integrable. The integral leaves of $T^\mathbb C M$, which are complex manifolds of dimension equal to the CR dimension of $M$, are called Levi-leaves. \end{definition} A continuous function on a Levi-flat manifold is CR if and only if its restrictions on the Levi-leaves are holomorphic. Levi-leaves play an important role in propagation of regularity, but we will come back later to this point when we need it. We will need the following tools from CR geometry and for our considerations all manifolds will be assumed smooth. \begin{definition}\label{def} We say that a CR manifold $M'$ is attached to $M$ at a point $z\in M$ in direction $X\in T_z\mathbb C^n$ if, in a neighborhood of $z$, $M'$ is a manifold with boundary $M$, $X$ is tangent to $M'$ and $X$ points inside $M'$. A CR function $f$ on $M$ extends CR at $z\in M$ in a direction $X$ if there exists a CR manifold $M'$ attached to $M$ in direction $X$ and a continuous CR function $F$ on $M'$ such that $F|_M=f$. \end{definition} When $M$ is a hyper-surface, a manifold $M'$ attached to $M$ corresponds to one of the two components in which the ambient space is divided by $M$. In this case, the vector $X$ points inside the component. The definition \ref{def} can be extended to CR manifolds with generic boundary. \begin{definition} Let $M$ be a CR manifold with boundary $N$, $z\in N$ a point, $X\in T_z\mathbb C^n \setminus T_zM$ a tangent vector. We say that a manifold $M'$ is attached to $M\cup N$ in direction $X$ if $M'$ is a CR manifold that locally around $z$ is a manifold with boundary $N$ such that $X$ is tangent to $M'$ and points inside $M'$. \end{definition} This definition has been introduced in \cite{T97} to describe the propagation of CR extension on CR manifolds with boundary. It turns out that CR curves, i.e. curves whose velocity is in $T^\mathbb C M$, are propagators of CR extension. If $\gamma :[0,1]\rightarrow M$ is a curve with end-points $p_0=\gamma(0),\ p_1=\gamma(1)$, such that $\dot{\gamma}\in T^\mathbb C M$ and if $f$ CR-extends at $p_0$ in direction $X_0$ then $f$ CR-extends at $p_1$ to a direction $X_1$ which is related to $X_0$. The relation between $X_0$ and $X_1$ is strong and evident on the components complementary to a certain sub-space. Namely, let $S\subset M$ be the smallest manifold which contains all the CR curves through $p_0$, the so-called CR orbit of $p_0$. We have $\dim_{CR} S=\dim_{CR}M$, if $p_0\in S$ then the entire curve $\gamma \subset S$. We introduce the following bundle \begin{definition} Let $M$ be a CR manifold with generic boundary $N$, $S\subset M$ a CR manifold with boundary $S_0\subset N$ such that $\dim_{CR}M=\dim_{CR}S$. If $z\in M$ we define \begin{equation} \label{bundle} \mathscr{E}_z:= \frac{T_z\mathbb C^n}{T_zM+JT_zS} \end{equation} and for $z\in N$, \begin{equation*} \mathscr{E}_z:= \frac{T_z\mathbb C^n}{T_zN +JT_z S_0}. \end{equation*} \end{definition} Note that since $N$ is generic we have, on points of $N$, $TN+JTS_0=TM+JTS$ so we can keep \eqref{bundle} as a definition for $\mathscr{E}$ at boundary points. 
It turns out there exists a partial connection and a parallel displacement $\Pi$ on this bundle which relates the classes of $X_0$ and $X_1$ in $\mathscr{E}$. \begin{theorem}[Tumanov] \label{tumanov} Let $M$ be a CR manifold with generic boundary $N$ and let $S\subset M$ be a CR submanifold of $M$ with boundary $S_0 \subset N$ such that $\dim_{CR}M=\dim_{CR}S$. Let $p_0$ and $p_1$ be two points in $M\cup N$ joined by a CR curve $\gamma :[0,1]\rightarrow M\cup N$. Suppose that $f$ CR-extends at $p_0$ in direction $X_0$ then there exists for every $\epsilon>0$ a manifold $M''$ attached to $M\cup N$ at $p_1$ in direction $X_1$ such that \begin{equation} \left\|\Pi_\gamma ([X_0]) -[X_1]\right\|_{\mathscr{E}_{p_1}}\le \epsilon \end{equation} and $f$ CR-extends to $M''$. \end{theorem} \begin{remark} If $M$ in Theorem \ref{tumanov} is a Levi-flat hyper-surface we have that $S$ is a Levi-leaf, therefore $TS=T^\mathbb C M =JTS$ which in turn implies $\mathscr{E}_z=\frac{T_z\mathbb C^n}{T_zM}=\mathscr{N}_z(M)$ the normal bundle which has real dimension 1. Moreover, since $S$ is complex every curve $\gamma$ in $S$ is CR. The description of $\Pi$ in \cite{T97} is possible, and easier, by means of its dual $\Pi^*$ which is realized in the following way: Since $\mathscr{E}_z^*=\mathscr{N}_z^*(M)$ this space can be identified with the holomorphic $1$-forms whose real part vanishes on $T_zM$ . This bundle is generated by $\partial r$ where $r$ is an equation of $M$. Since $M$ is Levi-flat it can be shown that $\mathscr{E}^*$ is itself a CR manifold in $T^*\mathbb C^n$ and that for any CR curve $\gamma$ there exists a lift $\gamma^* (t)=(\gamma(t),\lambda(t)\partial r(\gamma(t)))$ which is a CR curve in $\mathscr{E}^*$. Thanks to this, we can express a duality relation between $X_0$ and $\Pi([X_0])$ \begin{equation}\label{duality} \left\langle \gamma^*(1),\Pi_\gamma([X_0])\right\rangle =c(\gamma)\left\langle \gamma^*(0),X_0\right\rangle \end{equation} where $c(\gamma)$ is a positive constant (for details see \cite{T97} and the references therein). \end{remark} \begin{remark}\label{remark2} If $M$ is a Levi-flat CR hyper-surface in $\mathbb C^2$ whose equation is $r=\text{Re}( g(z_1,z_2))=0$ where $g$ is a holomorphic function, then the Levi-leaves are the complex curves $g(z_1,z_2)=iC$ for $C\in {\mathbb R}$. Every leaf has a holomorphic lift to $\mathscr{N}^*(M)$ which is given by $(z_1,z_2;\partial r(z_1,z_2))$ because $\partial \text{Re} g(z_1,z_2)=\partial_{z_1} g dz_1 +\partial_{z_2}g dz_2$ is holomorphic. Therefore, $\mathscr{N}^*(M)$ is foliated by complex curves. Let $\gamma$ be a CR curve in $M$, this will be contained in a Levi-leaf, the lift $\gamma^*$ can be easily computed: $\gamma^*(t)=(\gamma(t),\partial g(\gamma(t)))$. \end{remark} \section{Proof of main theorem}\label{S4} In this section we consider the following complexification of $\mathbb C$: \begin{definition} Let $\mathbb C$ be the complex plane with coordinate $\zeta$, we define the map \begin{align*} \iota:&\mathbb C \rightarrow \mathbb C^2 \\ &\zeta\rightarrow (\zeta,\bar{\zeta}) \end{align*} which embeds the complex plane $\mathbb C$ into the maximal totally real set $\triangle:=\left\{ (\zeta,\bar{\zeta}),\ \zeta\in\mathbb C\right\}$ . \end{definition} In the complex plane for every circle $\mathcal{C}_{c,R}$ of equation $|\zeta-c|^2=R^2$ we consider its complexification in $\mathbb C^2$ which is obtained by replacing $\bar{\zeta}$ with $\eta$ in the equation $(\zeta-c)(\eta-\bar{c})=R^2$. 
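For instance, for the unit circle $\mathcal{C}_{0,1}$ one obtains the quadric $\zeta\eta=1$, which meets $\triangle$ precisely in the set $\{(e^{i\theta},e^{-i\theta})\ |\ \theta\in[0,2\pi)\}=\iota(\mathcal{C}_{0,1})$.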
The intersection of this quadric with $\triangle$ is $\iota(\mathcal{C}_{c,R})$. We identify the disc $\D_{c,R}$, whose boundary is $\mathcal{C}_{c,R}$, with the part of the quadric that projects over it which means that for $0<|\zeta-c|\le R $ we take $\eta=\bar{c}+\frac{R^2}{\zeta-c}$. Note that in this procedure the center of the disc is sent to $\infty$ and the boundary circle $\mathcal{C}_{c,R}$ is sent to $\iota(\mathcal{C}_{c,R})$. We denote by $$ \overline{\D}_{c,R}^\mathbb C :=\left\{ \left(\zeta,\bar{c}+\frac{R^2}{\zeta-c}\right) \quad :\ 0<|\zeta-c|\le R \right\} $$ If $h:\mathbb C\rightarrow \mathbb C$ extends holomorphically from the circle $\mathcal{C}_{c,R}$ then we define the extension on $\D_{c,R}^\mathbb C$ in the following way $$\tilde{h}(\zeta,\eta)=\frac{1}{2\pi i}\int_{\partial\D_{c,R}} \frac{h(w)}{w-\zeta}\, dw \quad \text{ for }(\zeta,\eta)\in \overline{\D}^\mathbb C_{c,R}.$$ Note that $\tilde{h}(\zeta,\bar{\zeta})=h(\zeta)$ for $\zeta\in \mathcal{C}_{c,R}$. Since the function $h$ that we are considering extends holomorphically on families of concentric circles we introduce the following set \begin{definition} Let $c\in \mathbb C$ we define \begin{equation} \label{mc} M_c:= \bigcup_{R\ge 0} \overline{\D}_{c,R}^\mathbb C \end{equation} \end{definition} \begin{remark} If $R_1\neq R_2$ then it is easy to check that $\overline{\D}_{c,R_1}^\mathbb C\cap \overline{\D}_{c,R_2}^\mathbb C=\emptyset$. Therefore if $h$ extends holomorphically on the family of circles centered in $c$ then the extension $\tilde{h}$ is well defined on $M_c$. We note that two circles with different centers may have non-trivial intersection. Let $\mathcal{C}_{c_1,R_1},\mathcal{C}_{c_2,R_2}$ be two circles then $\overline{\D}_{c_1,R_1}^\mathbb C \cap \overline{\D}_{c_2,R_2}^\mathbb C $ contains at most $2$ points, in fact taking the intersection of the quadrics \begin{equation}\label{10} \begin{cases} (\zeta-c_1)(\eta-\bar{c}_1)=R^2_1 \\ (\zeta-c_2)(\eta-\bar{c}_2)=R^2_2 \end{cases} \end{equation} we have that $\zeta$ has to satisfy the following equation \begin{equation}\label{11} (\bar{c}_1-\bar{c}_2)\zeta^2+(R_1^2-R^2_2-(\bar{c}_1-\bar{c}_2)(c_1+c_2))\zeta+(\bar{c}_1-\bar{c}_2)c_1c_2+R^2_2c_1-R^2_1c_2=0 \end{equation} which has only two solutions (if $c_1\neq c_2$). If the circles $\mathcal{C}_{c_1,R_1},\mathcal{C}_{c_2,R_2}$ intersect at $\zeta_1,\zeta_2$ then the solutions of \eqref{10} will be $(\zeta_i,\bar{\zeta}_i),\ i=1,2$. In order for $\overline{\D}_{c_1,R_1}^\mathbb C,\overline{\D}_{c_2,R_2}^\mathbb C$ to have non-empty intersection outside $\triangle$, the corresponding solutions of \eqref{11} must lay inside the circles bounded by $\mathcal{C}_{c_1,R_1},\mathcal{C}_{c_2,R_2}$ and this happens if and only if one of the two circles surrounds the other (see \cite{T07, G12}). \end{remark} \begin{proposition} The set $M_c\setminus \{ (c,\bar{c})\}$ is a Levi-flat hyper-surface with boundary $\triangle \setminus \{(c,\bar{c})\}$. \end{proposition} \begin{proof} We first note that $M_c$ is contained in the subset of $(\zeta,\eta)\in \mathbb C^2$ such that $(\zeta-c)(\eta-\bar{c})=R^2$ for some $R$. This in turn yields \begin{equation} M_c\subset \{ (\zeta,\eta) \in \mathbb C^2 |\ \text{Im} \left( (\zeta-c)(\eta-\bar{c})\right)=0 \} \end{equation} and the right-hand side is a Levi flat hyper-surface in $\mathbb C^2$ except at the singular point $(c,\bar{c})$. Therefore, where $M_c$ is a manifold it is Levi-flat. 
A point $(\zeta,\eta)$ is in $M_c$ if and only if \begin{equation}\label{9} \begin{cases} (\zeta-c)(\eta-\bar{c})\text{ is real and non-negative} \\ |\zeta-c|^2\le(\zeta-c)(\eta-\bar{c}). \end{cases} \end{equation} Where the second inequality is strict, we have that $M_c$ is charted by the following map \begin{equation*} (\zeta,R)\mapsto (\zeta,\bar{c} +\frac{R^2}{\zeta-c}) \end{equation*} which has rank $3$ therefore at those points $M_c$ is a manifold. The boundary of $M_c$ is given when equality holds in the second equation of \eqref{9}, this happens only at points where $\eta=\bar{\zeta}$ which is $\triangle \setminus (c,\bar{c})$. \end{proof} In the sequel, we will adopt the following convention: for $c\in \mathbb C$ we set $r_c=\text{Im} \left( (\zeta-c)(\eta-\bar{c})\right)$ and $g_c(\zeta,\eta)=(\zeta-c)(\eta-\bar{c})$ so that at regular points a local equation for $M_c$ is $r_c=0$. In the next proposition we want to study the intersection of two of such manifolds. It is not restrictive to consider the intersection of $M_0$ and $M_1$ because the general case can be easily reduced to this one. \begin{proposition} \label{p5} The intersection $M_0\cap M_1 =\triangle\cup T$ where $T=\{ (\zeta,\eta)\in {\mathbb R}^2\ |\ (\zeta\le 0,\ \eta\le \zeta )\text{ or } (\zeta\ge 1,\ \eta\ge \zeta)\}$. The two manifolds are transverse at $\triangle$ except along the points $(t,t)$ for $t\in{\mathbb R}$. \end{proposition} \begin{proof} We begin by computing the intersection of $M_0$ and $M_1$. Every point of the intersection can be found as the intersection of two of the quadrics that form $M_0$ and $M_1$: \begin{equation}\label{13} \begin{cases} \zeta\eta=R^2_1 \\ (\zeta-1)(\eta-1)=R^2_2 \\ |\zeta|\le R_1 \\ |\zeta -1|\le R_2 \end{cases} \end{equation} after replacing $\zeta\eta$ in the second equation we have that $-\zeta-\eta =R^2_2-R^2_1-1$ which implies that $\zeta=t+iu$ and $\eta=s-iu$ have opposite imaginary parts. Replacing in the first equation, since $\zeta\eta$ has to be real, we have $(t-s)u=0$. We have either \begin{equation}\label{19} t-s=0 \text{ which implies } \eta=\bar{\zeta} \end{equation} or \begin{equation} \label{20} u=0 \text{ which forces } \zeta, \eta\in{\mathbb R}. \end{equation} If \eqref{19} holds then we get the points of $\triangle$ that we know belong to the boundary of $M_0$ and $M_1$. If $\zeta,\eta\in{\mathbb R}$ we have by \eqref{13} that $\zeta^2+(R^2_2-R^2_1-1)\zeta+R^2_1=0$. From this equation we see that for only one of its roots $|\zeta|\le R_1$. To have real solutions, it must be $(R^2_2-R^2_1-1)^2\ge 4R^2_1$. If $R^2_2-R^2_1-1\ge0$ we find $R^2_2-R^2_1-1-2R_1\ge 0$ from which follows \begin{equation}\label{14} R_2\ge R_1+1. \end{equation} If instead $R^2_2-R^2_1-1<0$ it must be $(R_1-1)^2>R^2_2$ from which follows \begin{equation}\label{15} R_1-1>R_2 \text{ or } R_1+R_2<1 \end{equation} If \eqref{14} holds, then we find that \begin{equation} \zeta= \frac{R_1^2+1-R^2_2+\sqrt{(R_1^2+1-R^2_2)^2-4R_1^2}}{2},\eta=\frac{R_1^2+1-R^2_2-\sqrt{(R_1^2+1-R^2_2)^2-4R_1^2}}{2} \end{equation} And this solution also satisfies the last inequality in \eqref{13}. Collecting all solutions obtained in this way by varying $R_1,R_2$ we get the region $\zeta>\eta, \zeta<0$. If \eqref{15} holds, the solution to \eqref{13} is \begin{equation} \zeta= \frac{R_1^2+1-R^2_2-\sqrt{(R_1^2+1-R^2_2)^2-4R_1^2}}{2},\eta=\frac{R_1^2+1-R^2_2+\sqrt{(R_1^2+1-R^2_2)^2-4R_1^2}}{2} \end{equation} while for $R_1+R_2<1$ there are no solutions. 
Collecting all points of this kind, we have the region $\zeta>1, \zeta<\eta$. The interior of $M_0$ has equation $r_0:\frac{\zeta\eta-\overline{\zeta\eta}}i=0$ whose complex differential is $\partial r_0=\frac{\eta}i \,d\zeta+\frac{\zeta}i\,d\eta$ and similarly for $M_1$ we have $r_1:\frac{(\zeta-1)(\eta-1)-\overline{(\zeta-1)(\eta-1)}}i=0$ and $\partial r_1 =\frac{\eta-1}i \,d\zeta+\frac{\zeta-1}i\,d\eta$. At points of type $(\zeta,\bar\zeta)\in \triangle$ the tangent space of $M_0$ is $T_{(\zeta,\bar\zeta)}M_0 =T_{(\zeta,\bar\zeta)}\triangle\oplus \langle X\rangle $ where $X$ is a complex tangential vector to $M_0$ at $(\zeta,\bar\zeta)$. We have that $\text{Re} \partial r_0$ vanishes on $TM_0$ and $\partial r_0$ vanishes on $X$. Similarly, for $M_1$ we have $T_{(\zeta,\bar\zeta)}M_1=T_{(\zeta,\bar{\zeta})}\triangle\oplus\langle Y\rangle$ where $Y$ is a complex tangent vector to $M_1$ at $(\zeta,\bar{\zeta})$, so if $\partial r_0$ and $\partial r_1$ are independent at $(\zeta,\bar\zeta)$ we have that $TM_0$ and $TM_1$ are transverse. This happens if $\left|\begin{array}{cc} \eta & \zeta \\ \eta-1 &\zeta-1 \end{array}\right| =-\eta+\zeta\neq 0$. At points $(\zeta,\bar\zeta)$ where $\text{Im}\zeta \neq 0$, $M_0$ and $M_1$ are transverse. If $\zeta_0\in{\mathbb R}$ we have that in $(\zeta_0,\zeta_0)$ $M_0$ and $M_1$ have the same tangent space; more precisely, consider the two circles $\mathcal{C}_{0,|\zeta_0|}$ and $\mathcal{C}_{1,|\zeta_0 -1|} $. The complex quadric $\mathcal{C}_{0,|\zeta_0|}^\mathbb C$ has equation \begin{equation} \eta=\frac{\zeta_0^2}{\zeta},\ |\zeta|\le|\zeta_0| \end{equation} and we see that the complex tangent direction to $M_0$ is given by $(-\zeta_0,\zeta_0)$ and similarly for $M_1$ a tangent direction is given by $(-\zeta_0+1,\zeta_0-1)$. We notice that for $\zeta_0<0$ or $\zeta_0>1$ the two manifolds lie on the same side of the edge. If instead $0<\zeta_0<1$ then $M_0$ and $M_1$ lie on opposite sides. \end{proof} We have this proposition on $h$: \begin{proposition}\label{p4} Let $c_1,c_2\in \mathbb C$ with $c_1\neq c_2$, $h:\mathbb C\rightarrow \mathbb C$ be a continuous function such that: $h$ extends holomorphically on every circle centered at $c_1$ or $c_2$, and $ h(\zeta)=O_\infty(|\zeta| e^{-|\zeta|})$. Then $h(\zeta)=0$ for all $\zeta$. \end{proposition} \begin{proof} The proof of this proposition is divided into three steps, which aim to prove that $h$ is holomorphic on all of $\mathbb C$ and hence $0$ by Liouville's theorem. It is not restrictive to assume that $c_1=0$ and $c_2=1$. \subsubsection*{Step 1. Extension of $h$} Since $h$ extends holomorphically on all discs centered at $0$ and $1$, we have that $h$ defines an extension $\tilde{h}_0$ on $M_0$ and another one $\tilde{h}_1$ on $M_1$. Since $M_0$ and $M_1$ have nontrivial intersections, further analysis is needed for these points. Inside $M_0\cap M_1$ there is $\triangle \setminus \{(0,0),(1,1)\}$ and at these points the two extensions match because for both $\tilde{h}(\zeta,\bar{\zeta})=h(\zeta)$. The other points of the intersection, outside the diagonal, will be treated in Step 2 of the proof. Outside the intersection, a continuous function $\tilde{h} : \left((M_0\cup M_1)\setminus (M_0\cap M_1)\right)\cup\triangle \rightarrow \mathbb C $ is well-defined, and moreover is $CR$. We note that at points $(\zeta_0,\bar{\zeta}_0)$ with $\text{Im}\zeta_0\neq 0$ the function $\tilde{h}|_\triangle$ extends to be CR on the two transverse manifolds $M_0$ and $M_1$ by Proposition \ref{p5}.
Consider the complex curve $\overline{\D}^\mathbb C_{0,|\zeta_0|}\subset M_0$ which is the CR orbit of $(\zeta_0,\bar{\zeta}_0)$ inside $M_0$. We have that $\tilde{h}$ is a CR function on $M_0$ and that $\tilde{h}$ CR-extends at $(\zeta_0,\bar{\zeta}_0)$ in direction $Y(\zeta_0)=(-\zeta_0+1,\bar{\zeta}_0-1)$. By \cite{T97} this extension propagates to the points of $\overline{\D}^\mathbb C_{0,|\zeta_0|}$ in $M_0$. This means, since $M_0$ is a hypersurface in $\mathbb C^2$, that for all points $(\zeta,\eta)\in \overline{\D}^\mathbb C_{0,|\zeta_0|}$ there exists a neighborhood $U_{(\zeta,\eta)}$ and a holomorphic function $\tilde{H}$ defined on one of the two components of $U_{(\zeta,\eta)}\setminus M_0$, continuous up to the boundary and such that $\tilde{H}|_{U_{(\zeta,\eta)}\cap M_0}=\tilde{h}$. The side of extension depends continuously on the initial direction $Y(\zeta_0)$ at $(\zeta_0,\bar{\zeta}_0)$. Let $\gamma$ be a real curve inside $\overline{\D}^\mathbb C_{0,|\zeta_0|}$ connecting $(\zeta_0,\bar{\zeta}_0)$ to $(\zeta,\eta)$; by Remark \ref{remark2} and by \eqref{duality}, if $\partial r_0$ is a non-vanishing conormal of $M_0$, the sign of $\text{Re}\langle \partial r_0 (\zeta,\eta), \Pi_\gamma([Y(\zeta_0)])\rangle$ is the same as that of $\text{Re} \langle \partial r_0(\zeta_0,\bar{\zeta}_0),Y(\zeta_0)\rangle$. We can repeat the same procedure by choosing a different initial point on $\overline{\D}^\mathbb C_{0,|\zeta_0|} \cap \triangle$, in this case we choose $(\zeta_0 e^{i\theta},\bar{\zeta}_0e^{-i\theta})$, and we have $Y(\zeta_0 e^{i\theta}) = (-\zeta_0 e^{i\theta}+1,\bar{\zeta}_0 e^{-i\theta}-1) $ and $\partial r_0 (\zeta_0 e^{i\theta},\bar{\zeta}_0e^{-i\theta})=\frac{1}{i}(\bar{\zeta}_0e^{-i\theta},\zeta_0 e^{i\theta})$. Computing $\text{Re} \langle \partial r_0,Y \rangle = \frac{\bar{\zeta}_0e^{-i\theta} - \zeta_0 e^{i\theta}}{i}$, we see that for different $\theta$ it assumes values of both signs, which means that at every point $(\zeta,\eta)\in \overline{\D}^\mathbb C_{0,|\zeta_0|}$ the function $\tilde{h}$ extends to both sides of $M_0$; thus $\tilde{h}$ extends holomorphically to a neighborhood of $(\zeta,\eta)$. Since we can repeat the same argument for all circles of the family, we get that $\tilde{h}$ extends holomorphically to a neighborhood of $(M_0\cup M_1)\setminus (M_0\cap M_1)$. \subsubsection*{Step 2. Analytic continuation to $M_0\cap M_1$.} We notice that $\tilde{h}_0$ extends holomorphically in a neighborhood of $M_0\setminus (M_0\cap M_1)$; using propagation again, we have that $\tilde{h}_0$ extends holomorphically to a neighborhood of $M_0\setminus \triangle$. We note that near a point of $\triangle \setminus \{(0,0)\}$ the set $M_0$ is biholomorphically equivalent to a half hyper-plane in $\mathbb C^2$. By a change of coordinates \begin{equation*} \begin{cases} w_1=\zeta\eta \\ w_2=i\log\left(\frac{\eta}{\zeta}\right) \end{cases} \end{equation*} we have that $M_0$ is equivalent to $\text{Im}( w_1)=0, \text{Im}( w_2) \ge 0$. By the classical ``local Bochner tube'' theorem (see \cite{K}, \cite{T97}) we have that $\tilde{h}_0$ extends holomorphically to a wedge $\mathcal{W}_0$ with edge $\triangle\setminus \{(0,0)\}$. Note that $M_0\setminus \triangle$ is contained in the interior of $\mathcal{W}_0$. Similarly, the same holds for $\tilde{h}_1$ which extends holomorphically to a neighborhood of $M_1\setminus \triangle$ and near the boundary to a wedge $\mathcal{W}_1$ with edge $\triangle \setminus \{(1,1)\}$.
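We also note, for completeness, that the coordinates $(w_1,w_2)$ introduced above can be checked directly: writing $\zeta=\rho e^{i\theta}$ and $\eta=\frac{R^2}{\zeta}$ with $0<\rho\le R$, one finds (for a local branch of the logarithm)
\begin{equation*}
w_1=\zeta\eta=R^2\in{\mathbb R}, \qquad w_2=i\log\left(\frac{\eta}{\zeta}\right)=2\theta+i\log\frac{R^2}{\rho^2},
\end{equation*}
so that $\text{Im}(w_1)=0$ and $\text{Im}(w_2)=\log\frac{R^2}{\rho^2}\ge 0$, with equality exactly on the boundary $\triangle$.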
By Proposition \ref{p5} $M_0\cap M_1 =\triangle\cup T$ is a connected set, and $\tilde{h}_0=\tilde{h}_1$ on $\triangle$. The other points of the intersection $(\zeta,\eta)\in T\subset {\mathbb R}^2$ are in the interior of $M_0$ and $M_1$, and on these points we have to prove that these two extensions match. Let $(\zeta,\eta)\in T$ which means that either $\zeta>1$ and $\eta>\zeta$ or $\zeta<0$ and $\eta<\zeta$. Let us take a point of the first kind, and consider the curve $\gamma_{(\zeta,\eta)}(t)=(\eta,(1-t)\zeta +t\eta)$ for $0\le t\le 1$ which joins the point $(\zeta,\eta)$ to the edge $\triangle$. We note that in a neighborhood of the point $(\eta,\eta)\in\triangle$ the function $\tilde{h}$ extends to a holomorphic function in two wedges $\mathcal{W}_0$ and $\mathcal{W}_1$. Since $\gamma_{(\zeta,\eta)}(t)\in \mathcal{W}_0 \cap \mathcal{W}_1$ for $t$ close to $1$ this shows that there is a wedge $\mathcal{V}$ with edge a small neighborhood of $(\eta,\eta)$ in $\triangle$ and contained in $\mathcal{W}_0 \cap \mathcal{W}_1$. It follows that on $\mathcal{V}$ the functions $\tilde{h}_0$ and $\tilde{h}_1$ are holomorphic and have the same value at the edge, hence they must coincide on $\mathcal{V}$. By analytic continuation $\tilde{h}_0 =\tilde{h}_1$ on the whole connected component of $T$ that contains $(\zeta,\eta)$. This proves that $\tilde{h}_0$ and $\tilde{h}_1$ coincide on $T$. We call again $\tilde{h}$ the so defined function on $M_0\cup M_1$. \subsubsection*{Step 3. Extension of $\tilde{h}$ outside $M_0\cup M_1$} In this part of the proof we aim to prove that $\tilde{h}$ really depends only on $\zeta$ and hence that $h$ is holomorphic. To this end, we follow the idea of \cite{T07} and exploit the Hans Lewy's technique: For every $\zeta$ and $j=0,1$ let $$ E^\zeta_j=\{\eta\in \mathbb C | (\zeta,\eta)\in M_j \} $$ and $E^\zeta:= E^\zeta_0\cup E^\zeta_1$. It is easy to see that $E_\zeta$ is made of two half-lines issued from $\bar{\zeta}$. We have \begin{equation} (\zeta,\eta)\in M_0 \text{ if and only if } \eta =\lambda \bar{\zeta} \text{ for some $\lambda\ge 1$} \end{equation} and similarly \begin{equation} (\zeta,\eta)\in M_1 \text{ if and only if } \eta =1+\lambda (\bar{\zeta}-1) \text{ for some $\lambda\ge 1$}. \end{equation} $E^\zeta$ divides the plane in two regions, and we put the orientation on $E^\zeta$ as the boundary of the region containing the real line. With this choice of orientation we define for $(\zeta,\eta)\notin M_0\cup M_1$ \begin{equation}\label{22} F(\zeta,\eta):= \frac{1}{2\pi i} \int_{E^\zeta} \frac{\tilde{h}(\zeta,w)}{w-\eta}\, dw . \end{equation} We note that the integral converges in spite of the fact that $E^\zeta$ is not bounded; in fact for instance we have that on $E^\zeta_0$ $$ \tilde{h}(\zeta,\lambda\bar{\zeta})=\frac{1}{2\pi i}\int_{|\tau|=\sqrt{\lambda}|\zeta|} \frac{h(\tau)}{\tau-\zeta}\,d\tau =O(e^{-\sqrt{\lambda}}) $$ and the last equality follows from \eqref{21bis}.This is enough to ensure that the integral converges. Clearly $F$ is holomorphic in $\eta$, it remains to prove that it is holomorphic in $\zeta$. To this end, we adopt the technique of Hans-Lewy to prove the holomorphic regularity of $F$ by the Morera theorem. For $(\zeta_0,\eta_0)\notin M_0\cup M_1$, keeping $\eta$ fixed, let $\gamma(t)=c+\varepsilon e^{it}$ be a small circle in the $\zeta$-plane in a neighborhood of $\zeta_0$ and let $\Gamma$ be the disc with boundary $\gamma$. 
We consider \begin{equation}\label{21} \int_\gamma F(\zeta,\eta_0)\,d\zeta =\int_\gamma d\zeta\int_{E^\zeta} \frac{\tilde{h}(\zeta,w)}{w-\eta_0}\,dw \end{equation} and prove that this integral is $0$. We decompose $E^\zeta=E^\zeta_0\cup E^\zeta_1$ and treat each part separately. Let $$ V^{c,\varepsilon}_0:=\bigcup_{0\le t\le 2\pi} E^{\gamma(t)}_0 $$ be the set parametrized by $\varphi_0:[0,2\pi]\times [1,+\infty) \rightarrow V^{c,\varepsilon}_0$, $ \varphi_0(t,s)=(c+\varepsilon e^{it},s(\bar{c}+\varepsilon e^{-it}))$, and let us put on $V^{c,\varepsilon}_0$ the orientation induced by $\varphi_0$. Let $\iota(\Gamma)$ be the disc in $\triangle$ charted by $(\zeta,\bar{\zeta})$, $\zeta\in \Gamma$. We have that $V_0^{c,\varepsilon}\cup \iota(\Gamma)$ is a boundary in $M_0$, which means that there is $W_0\subset M_0$ such that $\partial W_0= \iota(\Gamma)\cup V_0^{c,\varepsilon}$. This set $W_0$ is foliated by complex leaves in the following way: for every $R$ let $\zeta\eta = R^2$ be the equation of one of the complex leaves forming $M_0$. We study the intersection of this leaf with $\iota(\Gamma)\cup V^{c,\varepsilon}_0$: we have $(\zeta,\eta)\in V^{c,\varepsilon}_0$ if and only if $$ \zeta=c+\varepsilon e^{it}, \text{ }\eta=\frac{R^2}{c+\varepsilon e^{it}}=\frac{R^2(\bar{c}+\varepsilon e^{-it})}{|c+\varepsilon e^{it}|^2} \text{ with } |c+\varepsilon e^{it}|^2\le R^2. $$ We see that if $R^2\ge |c|^2+\varepsilon^2$ we get a curve which bounds the disc $|\zeta -c|\le \varepsilon$, $\eta=\frac{R^2}{\zeta}$, and this disc is contained in $M_0$. If $ |c|^2 - \varepsilon^2 \le R^2\le |c|^2+ \varepsilon^2$, the intersection of the complex curve $\zeta\eta=R^2$ with $V^{c,\varepsilon}_0$ is an open curve whose endpoints are in $\triangle$. We close this curve with an arc of the circle $\zeta\eta=R^2$, $\eta=\bar{\zeta}$, which is inside $\iota(\Gamma)$. By Stokes' formula we have \begin{equation} \int_\gamma d\zeta\int_{E^\zeta_0} \frac{\tilde{h}(\zeta,w)}{w-\eta_0}\,dw -\iint_{\iota(\Gamma)} \frac{\tilde{h}(\zeta,w)}{w-\eta_0}\,d\zeta dw=\int_{W_0} \bar\partial \left(\frac{\tilde{h}(\zeta,w)}{w-\eta_0}\right) \,d\zeta dw =0 \end{equation} where the last equality follows because $\tilde{h}$ is CR on $M_0$. For the same reason we have a similar formula for $E^\zeta_1$, and since the integral over $\iota(\Gamma)$ appears in both formulas, we have \begin{equation} \int_\gamma\int_{E^\zeta}\frac{\tilde{h}(\zeta,w)}{w-\eta_0} \, d\zeta dw =\int_{V^{c,\varepsilon}_0}\frac{\tilde{h}(\zeta,w)}{w-\eta_0} \, d\zeta dw -\int_{V^{c,\varepsilon}_1}\frac{\tilde{h}(\zeta,w)}{w-\eta_0} \, d\zeta dw=0, \end{equation} which proves that $F$ is holomorphic in $\zeta$ and $\eta$. We note that in \eqref{22}, when $\zeta$ tends to the real axis, for $\zeta>1$ or $\zeta<0$, the two half-lines $E^\zeta_0$ and $E^\zeta_1$ overlap, hence $F$ tends to $0$. Since $F$ tends to $0$ on a generic subset, we have that $F$ is identically $0$. By the Plemelj-Sokhotski formula, we have that $\tilde{h}$ is $0$ on $M_0\cup M_1$ and hence $h(\zeta)=\tilde{h}(\zeta,\bar{\zeta})=0$ for all $\zeta$. \end{proof} \section{End of the proof of Theorem \ref{t1}}\label{S5} We need the following \begin{lemma}[Phragm\'en-Lindel{\"o}f on a sector]\label{l5} Let $U$ be the half-plane $\text{Re}( w)>0$. Let $h$ be a continuous function on the closure of $U$ and holomorphic on $U$. Assume there exist two constants $C>0$ and $\alpha<1$ such that \begin{equation*} |h(w)|\le Ce^{|w|^\alpha} \text{ for all } w\in U. \end{equation*} If $|h|\le 1$ on the imaginary axis then $|h|\le 1$ on $U$. 
\end{lemma} For a proof we refer to \cite{L99}. \begin{proof}[Proof of Theorem \ref{t1}] By Proposition \ref{p4} we have that for all $(z_1,z_2)\in \mathbb{S}^3$: \begin{equation} \int_{\mathbb R} gf\left( \frac{z_1}{iy(z_2-1)+1},1+\frac{(z_2-1)}{iy(z_2-1)+1}\right) dy =0. \end{equation} We can rewrite this integral using the change of variable $\tau=\frac{1}{\zeta(z_2-1)+1}$, from which $\zeta=\frac{1}{z_2-1}\left( \frac{1}{\tau} -1\right) $ and $d\zeta =-\frac{1}{\tau^2 (z_2-1)} d\tau$. The imaginary line in the $\zeta$-variable is mapped to a circle $\mathcal{C}$ through $0$ in the $\tau$-variable, and we have \begin{equation*} \int_\mathcal{C} gf(\tau z_1,1+(z_2-1)\tau)\left( \frac{1}{\tau^2(z_2-1)}\right)\, d\tau =0. \end{equation*} If we apply the same reasoning to $(z_2-1)^kf(z_1,z_2)$ we have that \begin{equation} \label{23} \int_\mathcal{C} gf(\tau z_1,1+(z_2-1)\tau)(z_2-1)^{k-1}\tau^{k-2}\, d\tau =0, \end{equation} and since \eqref{23} holds for all $k$ we have that $gf$ extends holomorphically on the disc through $(z_1,z_2)$ and $(0,1)$ for all $(z_1,z_2)\in \mathbb{S}^3$. To prove that $f$ extends holomorphically we have to divide by $g$. Let $A_z:\D \rightarrow \mathbb B^2$ be the straight analytic disc through $(0,1)$ and $z=(z_1,z_2)$ such that $A_z(1)=(0,1)$ (we assume that $A_z(\tau)$ is linear in $\tau$). Let $gf(A_z(\tau))=F(\tau)$ for $\tau\in \partial\D$, where $F\in \text{Hol}(\D)\cap C^0(\overline{\D})$. We consider the function on $\D$ defined by ${\bold f}(\tau):=\frac{F(\tau)}{g(A_z(\tau))}.$ We have that ${\bold f}$ is continuous on $\overline{\D}\setminus\{ 1\}$ and that ${\bold f}(\tau)=f(A_z(\tau))$ for $\tau \in \partial\D\setminus\{ 1\}$. It remains to prove that ${\bold f}$ extends continuously at $1$. To this end we consider the change of variable $\tau=\varphi(w)=\frac{w-1}{w+1}$, which maps the half-plane $\text{Re}(w)>0$ onto the unit disc $\D$ and sends $w=\infty$ to $\tau=1$. We note that since $\frac{1}{|g(A_z(\varphi (w)))|}\le C_1e^{C_2|w|^\frac{1}{2}}$, it follows that \begin{equation*} |{\bold f}(\varphi(w))|\le C_3 e^{C_2|w|^\frac{1}{2}}, \end{equation*} where $C_1$, $C_2$ and $C_3$ are constants. Since ${\bold f}$ is bounded on $\partial \D\setminus\{ 1\}$ by the supremum of $f$, we have by Lemma \ref{l5} that ${\bold f}(\varphi(\cdot))$ is uniformly bounded on the closed half-plane; hence ${\bold f}$ is uniformly bounded on $\D$. It follows that ${\bold f}$ has a boundary value on $\partial\D$ which coincides with the continuous function $f(A_z(\tau))$, and we have the conclusion. \end{proof} Thanks to Theorem \ref{t1} we are now in a position to apply \cite{G12} and prove Corollary \ref{C1}. \begin{proof}[Proof of Corollary \ref{C1}] After a rotation, assume that $p=(0,1)$. By Theorem \ref{t1} we have that $f$ extends holomorphically on the lines through $p$. The line joining $p$ and $c$ intersects the ball $\mathbb B$, so we can apply Corollary 1.3 of \cite{G12}, and $f$ is the boundary value of a holomorphic function $F\in \text{Hol}(\mathbb B^2)$. \end{proof} \bibliographystyle{alpha}
\section{Idea Management} Idea Management (IM) is the process by which organizations approach their communities of clients, employees, suppliers, and interested stakeholders to (1) request ideas, (2) collect and (3) evaluate them, and (4) select the most promising ones to source their innovation needs or to address a defined organization's problem \cite{baumgartner2008introduction}. The execution of IM processes can be supported by specially-designed software tools known as IM systems. In such systems, organizations can describe an innovation problem they want to solve (e.g., innovate the public transportation system) and set up campaigns through which proposed solutions are collected. At the same time, IM systems let users suggest ideas, as well as evaluate and express opinions on other users' ideas. \subsection{IdeaScale} We specifically consider the case of IdeaScale\footnote{\url{https://ideascale.com}} as an idea management platform. IdeaScale is one of today's leading technologies for supporting the execution of IM processes and is used by big companies like Microsoft and Xerox, and by emblematic institutions such as NASA and the White House. In IdeaScale, ideation initiatives are created by setting up a community website in which organizers describe the goals of the initiatives and define campaigns through which ideas are collected. To submit an idea, users, previously registered as members of the community, have to provide a title and a description, and associate the idea with a campaign. Optionally, they can label the idea with tags and attach an image or file to enrich the description. \begin{figure} \centering \includegraphics[width=0.55\textwidth]{figures/ideascale_ui-with_ideation-letters} \caption{(a) Snapshot of a community's website; (b) Idea submission features; (c) Detailed view of an idea, commenting and voting functions} \label{fig_ideascale} \end{figure} Members can also comment on and assign positive/negative valuations (votes) to others' ideas and comments. They can also reply to existing comments. These functionalities enable them not only to state their positions regarding the ideas and comments but also to help refine the content of the proposals. Figure \ref{fig_ideascale} illustrates the described features of IdeaScale. \section*{Acknowledgment} This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 690962. This work was also supported by the project ``Evaluation and enhancement of social, economic and emotional wellbeing of older adults'' under the agreement no. 14.Z50.310029, Tomsk Polytechnic University. \bibliographystyle{IEEEtran} \section{Introduction} The increasing competitiveness of the markets forces organizations to sustain a continuous process of innovation fueled with ideas originating from managers, employees and, for some time now, even from outside the organization. Idea Management (IM) is the process of requesting, collecting, selecting and evaluating ideas to develop new, innovative products, services or regulations, or to improve existing ones \cite{baumgartner2008introduction}. The goal of IM is to capture ideas that can deliver benefits to the organization by leading to innovations or by solving specific problems \cite{westerski2011road}. 
The emergence of social and collaborative web-based technologies has transformed the physical suggestion boxes ---the former preferred method to listen to customers--- into dedicated IM platforms, which let people propose ideas, as well as rate and place comments on other users' suggestions \cite{hrastinski2010review}. Examples of popular IM platforms are IdeaScale (\url{http://ideascale.com}), Crowdicity (\url{http://crowdicity.com}), Spigit (\url{http://www.spigit.com}). The offerings and market are growing. The adoption of IM practices and platforms has been empowering various innovation initiatives around the world. Almost 200,000 people have been participating in My Starbucks Idea, the worldwide IM initiative conducted by Starbucks to collect ideas from its customers about future products and services \cite{schoultz2012starbucks}. Similar participation rates can be found when analyzing IdeaStorm, the IM initiative sponsored by the giant computer company Dell \cite{di2009steal}. But its application has not been limited to commercial domains. In the political and civic domain, the Icelandic participatory constitution-writing process represents an emblematic case. Here, the population at large has been invited to contribute to the constitution draft with suggestions, proposals, and ideas \cite{landemore2015inclusive}. Much less is known, however, about the communities behind these initiatives: which communities take part, who uses the platforms, and how they are used, both collectively and at the level of individual users. Although the adoption of IM is a growing trend in practice, the literature offers little evidence on whether clearly defined collective behaviors exist, and on how the characteristics of the communities relate to the collective behavior of the community and of its users. \section{Related Works} IM has been playing a key role in efficiently managing grassroots innovation initiatives \cite{bakker2010idea}. In this context, IM platforms have proven able to properly instrument campaigns for soliciting ideas from large-scale crowds, both in the business and the public sector \cite{bailey2010s}. Discussions on how to extend and improve online IM platforms and similar systems have taken different directions among industrial and academic researchers, from methods to define suggestions, to mechanisms to display streams of ideas, to features to assess proposals, to solutions to find promising contributions \cite{westerski2013semantic}. Deliberation maps have been presented in \cite{klein2008supporting} to structure participants' contributions as problem trees containing the problem to solve, potential solutions, and arguments for and against proposed solutions. The use of semantic technologies has been proposed by Westerski et al. to organize, link and classify the proposed ideas using metadata annotations \cite{westerski2010model}. Improving the scoring methods used to rate the ideas has been the goal of Xu et al., who have proposed a reference-based scoring model as an alternative to the traditional thumbs up/down voting systems \cite{xu2012reference}. Faridani et al. have introduced a two-dimensional visualization plane as an approach to address the filter-bubble effect ---narrowing the exposure to recent, popular, or controversial information--- of linear listings used to display opinions in online sites \cite{faridani2010opinion}. Convertino et al. have targeted information overload in the evaluation phase by employing natural language processing methods to automatically identify the core of the proposals \cite{convertino2013idea}. Along this line, Bothos et al. 
have introduced the application of information aggregation markets to facilitate the evaluation of the ideas \cite{bothos2008collaborative}. From an analytical perspective, \cite{bjork2009good} has employed social network analysis techniques to study the relationship between the quality of the ideas and the connectivity (degree centrality) of the contributors. Social sharing features, e.g., share and tweet buttons, have been the preferred approach to integrate IM platforms and social networking sites. These solutions have been proposed to quickly and easily export content of IM discussions into general purpose social networks (e.g., Facebook, Twitter) for creating awareness, gaining visibility, and attracting new participants. Although pervasively used across the Internet in general and in IM platforms in particular, their effectiveness in actually increasing participation and productivity in IM initiatives has lately been put in doubt \cite{saldivar2016effectiveness}. Alternatively, IdeaScale and Spigit ---two of the big players in the field--- have proposed solutions that extend Facebook's native features by providing IM-specific features, e.g., voting mechanisms, filtering, tagging, and searching functionalities \cite{ideascale2013facebook, spigit2013facebook}. Through an application that enhanced Facebook with deliberation functionalities (e.g., survey features, polling tools, moderation capabilities), Bendor et al. have examined the suitability of Facebook discussion groups to engage the public in conversations about the innovation of Vancouver's public transportation \cite{bendor2012s}. Their promising results provide further support to the idea of using Facebook to carry out product/service innovation initiatives. \subsubsection{Coach Application} A web-based application allows training experts to run a \emph{virtual gym}, providing support for the training and community aspects with the following features: \begin{easylist} \ListProperties(Hide=100, Hang=true,Style*=--~) & \textit{Community building}. To start a virtual gym from scratch, defining and managing the members of the community (trainees and other coaches). Public (bulletin board) and private channels (messages) are in place to help the coach engage the trainees and build a sense of community. & \textit{Definition of training activities}. To organise the training activities (video exercises augmented with metadata) around fitness classes, targeting groups of users with similar needs but different abilities. The coach can associate different intensity levels and performance indicators (e.g. measured with questionnaires or sensors) with these training classes. & \textit{Initial assessment}. The coach can define pre-assessment exercises (and require specific aptitude information) before accepting the trainee, and then use this information to set a starting intensity level. Special requirements can be logged in the online diary and used to tailor the exercises. & \textit{Monitoring}. To continuously monitor the progress of trainees. The monitoring covers, as a minimum, the participation of trainees and the completeness of the exercises, and can incorporate the performance indicators defined by the coach (self-reported data and measures from sensors). & \textit{Personalisation and safety in the training program.} To increase the intensity level of individual trainees based on their performance (according to the pre-defined intensity profiles). 
In addition, the coach can tune the program to stop (and possibly resume) individual exercises, for example in case of injuries. & \textit{Personalised feedback.} Reports facilitate the process of reviewing the performance of trainees and providing personalised feedback in context. Feedback is sent to trainees using the private message feature. \end{easylist} \section{A VIRTUAL GYM FOR OLDER ADULTS} Designing fitness applications to \emph{enable} and \emph{motivate} independent-living older adults - of potentially different abilities - to follow group exercises from home poses many design challenges. In this section we follow the design process of Gymcentral - an application that addressed the aforementioned scenario - to motivate the usage scenario and provide design recommendations. \begin{figure*} \includegraphics[width=\textwidth]{figures/evolution} \caption{Training applications. a) Active Lifestyle app, exploring the use of individual and social persuasion strategies; b) Virtual Social Gym, exploring the use of activity monitors in home-based interventions; c) Gymcentral early design alternatives; d) Gymcentral application in its current form. }~\label{fig:figure1} \end{figure*} \subsection{Design space and rationale} The design of Gymcentral as a tool for online group exercising is informed by evidence in the literature (see Section \ref{sec:relwork}) as well as by previous experiences that progressively shaped the current implementation of the application. \emph{Active Lifestyle} (Figure \ref{fig:figure1}a) explored the feasibility of providing a home-based strength and balance exercise program by means of video exercises on a tablet device \cite{silveira2013motivating}. In addition, it studied the effects of using individual (e.g., positive and negative reinforcement) and social persuasion strategies (e.g., collaboration and competition) on the adherence to the training programs \cite{silveira2013tablet}. This experience suggests that i) tablet-based physical interventions for independent-living older adults are feasible, ii) persuasion strategies have a significant positive effect on adherence, and iii) social persuasion strategies are more effective than individual strategies in motivating older adults to exercise. The \emph{Virtual Social Gym} (Figure \ref{fig:figure1}b) application added domain knowledge from training experts to provide tailored home-based exercise programs to independent-living older adults. This application allowed the training expert to define training programs, along with training profiles corresponding to different levels of intensity, and to monitor the progress of users along the training program. Sensors collected user activity data, which was presented to the expert in a web-based dashboard. Results from this project i) stressed the importance of tailoring exercise programs, ii) reinforced previous studies suggesting the importance of a human coach, and iii) confirmed the feasibility of performing remote monitoring by employing an activity monitor in the context of a home-based physical intervention (full study protocol in \cite{geraedts2014adherence}). From these previous experiences and the literature we derive the following main dimensions and related recommendations: \begin{easylist} \ListProperties(Hide=100, Hang=true,Style*=--~) & \emph{Tailored training and feedback}. 
Tailoring a training program is an essential part of the coaching process \cite{Chi-Wai2011}, and as such should be incorporated into the design. It involves assessing the abilities of the trainee and constantly tuning the program so that it remains both safe and effective \cite{geraedts2014adherence}. & \emph{Human expert in the loop}. Coaching, either by real or virtual coaches, can be more effective and motivating for trainees at home than no coaching \cite{ijsselsteijn2004virtual}. However, when dealing with the older population, studies emphasise the need for a real coach \cite{Hanneton2009,geraedts2014adherence}. & \emph{(Social) Persuasion Strategies}. Self-efficacy (i.e., perceived capability and confidence), a strong predictor of adherence to physical exercises, is less exhibited in older adults compared to other age groups \cite{phillips2004motivating}. Studies have shown that the use of persuasive features (especially social persuasion strategies) increases the adherence to training programs \cite{silveira2013tablet}. & \emph{Social Interactions}. Engaging in activities with others can help stimulate social interactions \cite{leonardi2008supporting}. This is particularly beneficial for older adults with limited opportunities to interact - in most cases for the same reasons they need home training. Training together could then potentially help older adults to stay physically and socially active. \end{easylist} In this paper we address the additional challenge of enabling older adults of different abilities to engage, despite this difference, in \emph{group exercises} from home. Providing this experience poses extra design requirements that were not addressed in the aforementioned works. Thus, we explored different design alternatives to realise the group exercising (see Figure \ref{fig:figure1}c), from simply indicating that another trainee was also training (online status) to real-time motion capture and visualisation (3D and motion), each alternative with a different level of immersion, feedback and technological requirements. The design alternative materialised in the current version of the tool (Figure \ref{fig:figure1}d) relies on the following design aspects: \begin{easylist} \ListProperties(Hide=100, Hang=true,Style*=--~) & \emph{Virtual environments}. Virtual environments have been shown to increase the sense of presence, or psychological immersion \cite{grinberg2014social}. & \emph{Social presence and privacy}. Social presence, along with user embodiment (avatars), helps to reduce physical barriers and get users more engaged in the activities while preserving their privacy \cite{siriaraya2014exploring}. & \emph{Keeping disparities invisible to the group}. Avatars do not follow the actual trainee's movements but predefined movements (a minimal sketch of this idea is given at the end of this subsection). This was both a practical constraint (i.e., to keep the technological requirements to a minimum) and a design constraint (i.e., to keep the specifics of the exercise performed hidden from others) to avoid the negative effects of face-to-face group exercising \cite{de2011older}. \end{easylist} Gymcentral thus relies on the metaphor of a \emph{virtual gym} for the added benefits explained before, as well as to compensate for the added complexity. The specifics of this version are discussed below. 
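As a minimal illustration of the last design aspect (keeping disparities invisible), the sketch below shows one possible way to decouple the avatar animation shown to the group from the trainee's actual performance. The snippet is purely hypothetical: Gymcentral's implementation is not described at this level of detail, and all class, field and function names below are invented for illustration.
\begin{verbatim}
from dataclasses import dataclass

# Hypothetical data model: what each trainee actually does (private)
# versus what is broadcast to the group (public avatar state).
@dataclass
class TraineeSession:
    user_id: str
    intensity_level: int        # personal level assigned by the coach, kept private
    completed_repetitions: int  # measured locally, never shown to peers

@dataclass
class AvatarState:
    user_id: str
    animation: str   # predefined clip name, shared with the group
    present: bool    # whether the avatar is shown in the classroom

def avatar_for(session: TraineeSession, class_exercise: str, online: bool) -> AvatarState:
    """Derive the public avatar state from the class schedule only.

    The avatar plays the predefined animation of the exercise currently
    scheduled for the class; the trainee's own intensity level and
    performance are deliberately not used, so differences in ability
    remain invisible to the other participants.
    """
    return AvatarState(
        user_id=session.user_id,
        animation=f"clip_{class_exercise}",  # predefined clip, same for everyone
        present=online,
    )

if __name__ == "__main__":
    weaker = TraineeSession("anna", intensity_level=2, completed_repetitions=4)
    stronger = TraineeSession("bruno", intensity_level=9, completed_repetitions=12)
    # Both avatars render the same scheduled movement in the virtual classroom.
    print(avatar_for(weaker, "sit_to_stand", online=True))
    print(avatar_for(stronger, "sit_to_stand", online=True))
\end{verbatim}
The point of the sketch is that the avatar state is derived from the class schedule and the online status only, so trainees with different intensity levels appear identical to their peers.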
\begin{figure*} \centering \includegraphics[width=\textwidth]{figures/application.pdf} \caption{Overview of the Gymcentral service.}~\label{fig:figure2} \end{figure*} \subsection{Gymcentral Applications} The Gymcentral platform is organised into two main applications that serve the needs of both trainees and the coach. Together, these applications can support a typical workflow as illustrated in Figure \ref{fig:figure2}. \input{./sections/application-trainee.tex} \input{./sections/application-coach.tex} \subsubsection{Trainee's Application} It allows the trainees to follow tailored training programs from home, unassisted, using sensors and a tablet device. The design of this application relies on the metaphor of a \emph{gym}, providing similar spaces and services (Figure \ref{fig:features}): \begin{easylist} \ListProperties(Hide=100, Hang=true,Style*=--~) & \textit{Reception.} The entry point of the Gym, where the user has access to all the services. A virtual receptionist helps the user get oriented, e.g., by informing them of new messages and upcoming sessions. & \textit{Locker Room.} A space where trainees usually meet each other and get ready for the training classes. In the locker room, users can see each other (as avatars), interact by means of predefined messages (e.g. ``Hi, let's go to the classroom''), and invite members who are not online to join. & \textit{Classroom.} A space where users have access to the exercise instructions (video blended with the gym environment). Users in the classroom can see the coach as well as the other trainees (as avatars). & \textit{Agenda.} It displays the training schedule of the current week, highlighting user participation in the sessions. & \textit{Messages}. The bulletin board is a community feature where trainees can exchange public messages. Performance and exercise achievements of the trainees are also automatically published on the bulletin board. Private messaging allows users to interact one-to-one with other trainees and the coach. & \textit{Progress report.} It displays the progress of the trainee in the training program by means of a growing garden metaphor. \end{easylist} Through the above spaces, the Trainee app implements \emph{persuasive strategies} (e.g., self-monitoring via progress reports), \emph{social interactions} (e.g., messages and real-time interactions), and allows for \emph{Coach feedback} (e.g., via private messages). \section{Discussion} \subsection{Usability and technology acceptance} The design proposed in this paper entailed a set of features that added to the complexity of the tool with respect to a more traditional home-based training application. Given the target population, we set out to investigate how older adults would react to this added complexity and how their impressions would change over time. Two different versions of the Trainee application were tested: one supporting a traditional home-based exercise program (Control group) and the other representing the proposed virtual group exercising (Social group). Not surprisingly, the usability of the application was lower for the Social group at the beginning of the study, reflecting participants' initial difficulties in dealing with a more complex user interface. However, over time, usability improved more in the Social group than in the Control group, suggesting that, although the social application was perceived as more complicated at the beginning, perceived usability increased after the training program, approaching the top end of the scale. 
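The paper does not report which statistical test was used for this usability comparison; purely as an illustrative sketch, one simple way to examine the pre/post change in SUS scores within and between groups is shown below, on made-up scores (the actual study data are not reproduced here).
\begin{verbatim}
import numpy as np
from scipy import stats

# Hypothetical SUS scores (0-100) at the start and end of the study.
social_pre   = np.array([55, 60, 62, 58, 65, 70, 52, 63])
social_post  = np.array([80, 85, 78, 82, 90, 88, 75, 84])
control_pre  = np.array([68, 72, 70, 66, 74, 69, 71, 73])
control_post = np.array([75, 78, 74, 70, 80, 76, 77, 79])

# Within-group change over the training program (paired t-test).
t_soc, p_soc = stats.ttest_rel(social_post, social_pre)
t_con, p_con = stats.ttest_rel(control_post, control_pre)

# Between-group comparison of the improvement (independent t-test on change scores).
t_diff, p_diff = stats.ttest_ind(social_post - social_pre,
                                 control_post - control_pre)

print(f"Social group change:  t = {t_soc:.2f}, p = {p_soc:.3f}")
print(f"Control group change: t = {t_con:.2f}, p = {p_con:.3f}")
print(f"Between-group change comparison: t = {t_diff:.2f}, p = {p_diff:.3f}")
\end{verbatim}
A mixed-design ANOVA or a non-parametric alternative would be an equally reasonable choice, depending on the distribution of the scores.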
The results on technology acceptance were more controversial. The analyses suggested a general improvement of the dimensions related to technology acceptance, especially for the Social group. However, one of the limitations of the study was the small sample size and the consequent limited statistical power. To overcome this constraint, we used matched random assignment to allocate participants to conditions, combined with a closer exploration of the data, analysing both the studentized residuals, to examine the error variance of specific cases, and Cook's distance, to measure the overall influence of single cases on the model \cite{field2013discovering}. Overall, while the Internet connection was an intermittent issue, both usability and technology acceptance of the Trainee application (both versions) generally improved. For the full application, these results mean that users could handle the extra complexity and learn to use this type of tool. \subsection{Design dimensions and user feedback} A set of design dimensions was derived from previous experiences and the literature on home-based training. These dimensions were materialised in Gymcentral and tested in the eight-week intervention study. The user feedback on the features of each dimension \emph{reinforces the value of group exercising, social interactions and persuasion} features. The results also give us more insights into the socialisation aspects. Participants regarded as highly useful the interactions with the Coach, followed by the interactions with the community as a whole, and to a lesser extent the private messages with other trainees. These results, along with the analysis of the individual interactions, point to the need for more effective mechanisms to motivate social interactions among community members. From the virtual environments we learned that participants enjoyed seeing each other, and this was particularly true for the Classroom, where the participants were engaging in an activity together (i.e., training). Seeing each other and interacting in the Locker room was not perceived as very useful, potential reasons being the lack of activities that would motivate users to stay in the room, contextual messages not being expressive enough, or users not being used to real-time messages. Based on these results, we are currently extending the Virtual Gym to include \emph{environments} that would offer social activities (e.g., listening to music or watching videos) that would motivate users to interact and build stronger relationships. \subsection{Nature of social interactions} Qualitative analysis of the social features showed that both participants and the Coach used the bulletin board and private messages differently. Community building activities were of primary importance, especially on the bulletin board, where participants played a central role. This suggests that, besides the training, they were also involved in strengthening social interactions with each other, using humour to enhance their relationships, and that the bulletin board was the preferred channel for this. When talking about physical activity, participants tended to publicly support and congratulate each other on the bulletin board, while rarely doing so in private. Private conversations were directed mostly to the Coach to discuss personal experiences with the training. Not surprisingly, the Coach was the user who most engaged in these discussions. 
Regarding the application, putting aside the messages about problems, the bulletin board was used to express satisfaction, whereas private messages were used for information or questions. These results highlight the need for having both types of channels, since they serve very different purposes: public channels allow for community building, while private channels support more personal conversations. We should also note that, compared to other technology-based interventions, where social features (e.g., forums or social networks) were rarely used \cite{aalbers2011characteristics}, in this study the social features have been widely used by the participants. Further research is required to investigate whether this result is significant and related to the design of the tool. \subsection{Feasibility of online group-exercising} Although not in the scope of this paper, it is important to note that the design presented here has been effective in enabling and engaging users to exercise from home, producing significant benefits in muscle strength and gait speed\footnote{Summary of adherence and physical outcomes at \url{http://gymcentral.net/trento.php}}. \subsection{Limitations} \textbf{Different channels for Coach support}. The interactions of the Coach with the participants were scheduled to give the same type of support. However, in the absence of social features in the version of the app used by the Control group, the support was given by phone calls. This difference in the communication channel might have introduced a potential bias in the motivation to participate. \textbf{Sample size and gender imbalance}. Random variability, probably due to the small sample size, might have influenced the initial difference between groups in some of the measures. We also acknowledge the gender imbalance as a potential limitation to the generalisation of the results. \section{CONCLUSION} In this paper we have described the design of a virtual fitness environment to facilitate and motivate older adults of different abilities to follow virtual group exercises from home. The application, Gymcentral, was evaluated in an intervention study where older adults were effectively engaged in the training \cite{far2015interplay}. We have derived a set of useful design dimensions and recommendations for home-based training and presented an approach to group exercising that could accommodate a heterogeneous group of older adults - a very common setting in this population. The feedback from users provided insights into the usefulness of the features as well as areas for future work. In particular, we have seen a higher number of social interactions than in previous home-based training interventions, as well as high learnability despite the complexity of the application. We attribute these positive results to the use of the research-derived design recommendations, but further research is needed for more conclusive results. Real-time interactions, however, were not as successful. Reasons for this may be the lack of activities that would motivate users to stay in the virtual room, contextual messages not being expressive enough, or users not being used to real-time messages. We should note that, while related to fitness applications, the above results can also inform researchers and practitioners working on social applications for older adults, including collaborative applications, about the aspects to consider when designing social interaction mechanisms and deciding on interface metaphors. 
Indeed, as an ongoing work, we are expanding the virtual environment to include \emph{leisure spaces} and \emph{productive spaces} (volunteering), as a way to explore social interactions during purposeful activities and crowd-sourcing in virtual environments, and, more importantly, to build on the opportunity of providing tools that enable online contributions by older adults \cite{ibarratools}. \section{Introduction} Engaging in physical activity can bring multiple benefits to the health and well-being of older adults \cite{spirduso2001exercise}. It reduces the risk of falls \cite{thibaud2012impact}, slows the progression of degenerative diseases \cite{stuart2008community}, and even improves cognitive performance and mood \cite{landi2010moving}. Sedentary behaviour, on the other hand, is associated with negative effects such as increased risk of diabetes, cardiovascular disease, and cardiovascular and all-cause mortality \cite{wilmot2012sedentary}. However, a variety of barriers make engagement in regular physical activity difficult for older adults: lack of adequate facilities and infrastructures, reduced functional abilities, lack of motivation \cite{schutzer2004barriers}, and, more generally, the fact that it is no longer easy for them to leave their homes and participate in physical activities on a regular basis. Thus, in spite of the growing evidence of the benefits of physical activity, as well as of the adverse effects of sedentary behaviour, physical inactivity is still prevalent in older adults \cite{harvey2013prevalence}. For similar reasons, older adults - especially those with decreased mobility - have more limited opportunities to engage in social activities. This reduced participation in social activities, along with changes in social roles, puts them at risk of social isolation, which has been associated with negative effects on physical and mental health \cite{bower1997social}. Engaging in social interactions is, ironically, one important factor in helping to counter these effects \cite{fratiglioni2000influence}. Home-based interventions for physical training have the potential to overcome these limitations and enable the homebound to engage in physical and social activities \cite{stuck2002home}. Current solutions provide effective support for the general population, especially for outdoor activities and for people who do not require expert coaching \cite{farFitness2016}. However, as we will see in the paper, sensitive groups such as older adults find less support, with solutions that do not cope with their specific needs, motivational drives, and social context. In this paper we present a tablet-based fitness environment, namely Gymcentral, designed to keep independent-living older adults physically and socially active. We do this by providing trainees with a virtual environment that is both \emph{personal}, i.e., the training program and feedback are personalised, and \emph{social}, i.e., members can interact and participate in group exercise sessions even if they have different physical abilities. The application is built on years of research on home-based training \cite{silveira2013motivating, silveira2013tablet, far2014virtual}, and has been shown to enable and motivate older adults to exercise \cite{far2015interplay}. 
In this paper, we focus on the design aspects, with the following contributions: \begin{itemize} \item identification of relevant \emph{design dimensions} for technology seeking to provide online group exercising to older adults, including groups with different physical abilities \item implementation and evaluation of a \emph{virtual fitness environment} that builds on the design dimensions and principles identified, in a physical home-based intervention study \item a qualitative study of \emph{online social interactions} resulting from a training context, which provides insights into improving social features in online fitness environments \end{itemize} In what follows we describe the related literature, the design rationale and the results from the intervention study. \section{METHODS} In this section, we explain the objectives of the study, its participants and the intervention design. \subsection{Objectives} The study reported in this paper was performed in the context of a larger intervention study aimed at evaluating the feasibility of the technology and its effects on physical wellbeing. In this study we focus on the former, investigating the following design aspects related to the virtual group exercising: \begin{easylist} \ListProperties(Hide=100, Hang=true,Style*=--~) & \emph{usability and learnability of the application by older adults}, aiming to understand the usability at the beginning and at the end of the intervention, especially in relation to its complexity. & \emph{acceptance of the technology by older adults}, looking at different subjective aspects of the experience with technology as a whole. & \emph{perceived value of main design dimensions}, aiming to understand the usefulness of the proposed features as perceived by the users. & \emph{nature of social interactions} originating within the system, looking especially at the emerging themes in the conversations in different scenarios. \end{easylist} In the following subsections we describe the study design and the measures used to elaborate these aspects. \subsection{Participants} Participants aged 65+, self-sufficient and with a non-frail, transitionally frail, or mild frailty level were considered eligible for the study. Frailty level was measured using the Groningen Frailty Indicator \cite{steverink2001measuring}, a validated questionnaire that screens for self-reported limitations in older adults. A total of 40 participants between 65 and 87 years old were recruited through two local volunteering organizations (29 females and 11 males, mean age = 71, s.d. = 5.7). All participants obtained formal written approval from their family doctor to participate in the study. Both doctors and participants received a written outline and explanation of the study before participating. Five participants withdrew at different times during the course of the study due to unpredictable health or family problems. One participant was substituted because the withdrawal occurred before the beginning of the study, while the others could not be replaced since they withdrew during the course of the study. Results are therefore based on the data from 36 participants (27 females and 9 males, mean age = 71.2, s.d. = 5.8, between 65 and 87 years old). 
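Purely as an illustrative aid (the study did not rely on any screening software described here), the inclusion rule above can be summarised in a short sketch; the field names and the frailty category labels, assumed to come from the Groningen Frailty Indicator screening, are hypothetical.
\begin{verbatim}
from dataclasses import dataclass

# Frailty categories as screened with the Groningen Frailty Indicator (GFI);
# only the category label is used here, not the underlying GFI score.
ELIGIBLE_FRAILTY = {"non-frail", "transitionally frail", "mild"}

@dataclass
class Candidate:
    age: int
    self_sufficient: bool
    frailty_category: str   # outcome of the GFI screening
    doctor_approval: bool   # written approval from the family doctor

def is_eligible(c: Candidate) -> bool:
    """Inclusion criteria as described in the Participants section."""
    return (c.age >= 65
            and c.self_sufficient
            and c.frailty_category in ELIGIBLE_FRAILTY
            and c.doctor_approval)

if __name__ == "__main__":
    print(is_eligible(Candidate(71, True, "mild", True)))       # True
    print(is_eligible(Candidate(63, True, "non-frail", True)))  # False: below 65
\end{verbatim}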
\subsection{Study design} The study followed a framework for the design and evaluation of complex interventions in health settings \cite{campbell2000framework} and lasted for a total of 10 weeks, including one week at the beginning for technical deployment, application testing and the collection of initial questionnaires, and one week at the end for the administration of the final questionnaires. Using a matched random assignment procedure based on age and frailty level, participants were assigned to either an experimental (social) or a control condition. Participants of the social group were assigned the full version of the Trainee App, including the personalised program along with the social and persuasion features (the condition with the more complex set of features). The control group, instead, had a basic version of the application, which included the personalised exercise program but no persuasive, social or self-monitoring features. In the social condition, participants could communicate with each other and with the coach using the messaging features of the application (bulletin board and private messages), while participants in the control group could do so by telephone. In order to mirror the time and attention provided to the social group (able to communicate with the coach through the application), periodic telephone contact was maintained with the control group by a community manager \cite{michaelinterventions}. Prior to the beginning of the intervention, participants took part in a workshop to learn how to use the tablet and the Gymcentral application, and were provided with handouts containing information about the study, the use of the tablet and of the application. Additionally, each participant received a necklace sensor that included a 3D accelerometer and a barometric pressure sensor in order to monitor their physical activity. They also participated in individual sessions with the personal trainer, who assessed their physical health and ability, and assigned them an initial training level. The intervention consisted of 8 weeks of physical training based on the Otago Exercise Program, specifically tailored for older adults \cite{gardner2001practical}. The training program consisted of 10 levels of increasing intensity, which included simple exercises based on functional everyday movements. During the exercise program, participants were asked to perform at least two training sessions per week. In both the social and control groups, a level-up was suggested every week by the application. If participants agreed to level up, the following level was unlocked, requiring confirmation from the personal trainer through the Gymcentral coach application in the case of the social group. The study received ethical approval from the CREATE-NET Ethics Committee on ICT Research Involving Human Beings (Application N. 2014-001). \subsection{Measures} \subsubsection{Usability} The usability was assessed using the System Usability Scale \cite{brooke1996sus}, a 10-item questionnaire with five response options (1 = completely disagree, 5 = completely agree), at two time points: at the beginning of the study (after the tutorial on the application, when participants used the application for the first time) and at the end of the study. These measures were obtained for both groups in order to compare the usability of the different interface complexity levels. 
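For reference, the SUS has a standard scoring procedure \cite{brooke1996sus} that maps the ten item responses to a 0--100 score; the sketch below implements it under the usual assumption that items are in the original questionnaire order, with odd items positively worded and even items negatively worded.
\begin{verbatim}
def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996).

    `responses` is a list of the ten item answers on a 1-5 scale, in the
    original questionnaire order (odd items positively worded, even items
    negatively worded). Returns a score in the 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten answers on a 1-5 scale")
    odd  = sum(r - 1 for r in responses[0::2])  # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

if __name__ == "__main__":
    # Example: a fairly positive set of answers.
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
\end{verbatim}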
\subsubsection{Technology acceptance} To evaluate technology acceptance, we developed a questionnaire on the basis of previous literature \cite{phang2006senior} investigating the following dimensions: \emph{anxiety} towards Gymcentral, \emph{attractiveness} and acceptance of the application, \emph{satisfaction} with the service provided and perceived \emph{usefulness} of the application. Participants expressed their preferences on a 5-point Likert scale (1 = completely disagree, 5 = completely agree). We expected these dimensions to improve after the training program. \subsubsection{Usefulness by feature} To analyse usefulness by feature we provided a short questionnaire\footnote{\url{https://goo.gl/zl7daL}}, asking participants to report on a 5-point Likert scale how useful they thought each feature to be. \subsubsection{Nature of social interactions} To investigate the nature of social interactions within the application, we performed a qualitative analysis of both the messages posted to the bulletin board and the private messages. We developed a coding scheme in two steps: first, we categorized the messages without using pre-existing categories; then, we compared our classification to those provided in the relevant literature on online behaviour and communities (e.g., \cite{pfeil2007patterns}), developing a final coding scheme composed of 5 top-categories and 12 sub-categories. \section{Related Work} \label{sec:relwork} \subsection{Home-based interventions} Physical intervention programs in the form of group-exercise sessions or home-based training have shown equivalent physical outcomes \cite{freene2013physiotherapist}. However, group-based interventions have been shown to achieve higher levels of participation in the long term \cite{van2002effectiveness}, while in the short term the results are comparable or still not conclusive \cite{van2002effectiveness,freene2013physiotherapist}. The evidence in favour of group-based exercising can be explained by the importance of socialising as a motivating factor in physical training \cite{phillips2004motivating,de2011older}. A study by de Groot \cite{de2011older} reported that older adults do indeed prefer training with others rather than individually. However, group exercising might be a challenging (or infeasible) setting for older adults due to their heterogeneity. In particular, different levels of skill between participants might result in motivation problems and, consequently, affect the effectiveness of the exercises \cite{de2011older}. This and other obstacles that older adults experience, such as reduced mobility, make home-based individual interventions the only option for some older adults to take part in group exercises. \subsection{Technology for home-based training} DVDs \cite{Wojcicki2014} and tablets \cite{Silveira2013a} have been used to facilitate home-based training for older adults, and, increasingly, gaming technology. For example, the Wii has been used both in customised \cite{Carmichael2010} and off-the-shelf \cite{Agmon2011} solutions to train balance or physical activity in general. Nonetheless, none of these focus on virtual group-exercising, and moreover, only a few have been tested by older adults at home \cite{Agmon2011,Silveira2013a,Wojcicki2014}. Virtual environments (VE) are also commonly used in these systems. Older adults prefer technologies that are familiar from their everyday life \cite{Ijsselsteijn2007a,ThengYin-leng;ChuaPuay-Hoe;Pham2012} and show a preference for VE over video tutorials \cite{Waller1998}. 
In this sense virtual environments can better represent the real-life experience of a gym. Nonetheless, most solutions are focused on the physical training experience and overlook other aspects such as the feeling of training together with others. \subsection{Persuasion technology} Persuasion strategies \cite{Oinas-Kukkonen2008} have been used in home-based training applications in order to motivate older adults to increase training duration and adherence. Strategies can be grouped into two categories: individual strategies, which do not need a social community (i.e. reminders and suggestions, positive and negative reinforcement, self-monitoring and rewards); and social strategies, including motivation instruments that are leveraged by social interaction. While individual strategies have been tried \cite{Rodriguez2012}, older adults seem more inclined towards applications that enable social interactions \cite{Brox2011,Ijsselsteijn2007a}. Studies reveal that older adults are more interested in exercises that provide healthy competition and collaboration \cite{Ganesan2012}, and prefer to socialise with their friends while performing similar activities \cite{Vargheese2013}. Successful implementations include a tablet-based exercise intervention \cite{Silveira2013a}, which found that older adults adhere longer to a training plan that leverages social strategies such as family support, competition and collaboration. \subsection{Coaching and tailoring} Coaching can be an essential part of training: \emph{before} training, to identify trainees' needs, abilities and goals, and to prescribe tailored training plans; as well as \emph{during} and \emph{after}, to provide support, monitor progress and adjust training plans accordingly \cite{Chi-Wai2011}. Technology in the form of virtual coaching and sensors (e.g. Fitbit, Nike\texttt{+}) can help to monitor performance and, as a result, provide feedback and establish new training goals. Virtual coaching can provide the support that older adults require when exercising \cite{Chi-Wai2011}. Solutions that include coaching have achieved longer adherence times than those that do not \cite{Watson2012}; personal feedback has led to improved accuracy of exercising and performance by giving trainees a better understanding of the instructions \cite{Qian2010}. Nonetheless, however useful these tools might be, they cannot replace a human coach yet. Human coaching has been shown to provide better emotional and psychological support during training, and it is still required for risk assessment and tailoring of training \cite{Chi-Wai2011,Hanneton2009}. \subsection{Usage analysis and social interactions} \begin{figure*} \centering \includegraphics[width=1.8\columnwidth]{figures/behavoir-all} \caption{a) Time spent by each user in training, social and persuasion features; b) contribution and productivity in the bulletin board and private messaging features; c) Trainee's application feature usage}~\label{fig:figureAll} \end{figure*} Participants' interactions with the Trainee's application were recorded during the training program. The results inferred from the interaction logs reveal the actual usage of the different features of the virtual gym. We categorize this usage into three groups: i) \textit{training}, referring to the classroom designed to provide the training exercises; ii) \textit{social}, referring to the social interaction tools (e.g., locker room, bulletin board, and private messages); and iii) \textit{persuasion}, referring to the views and usage of the garden. 
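Purely as an illustration of this categorization and of the usage trends reported below (the actual log format and analysis scripts of the study are not described in the paper, so all names here are hypothetical), a minimal analysis sketch could look as follows.
\begin{verbatim}
from collections import defaultdict
from scipy import stats

# Hypothetical mapping from logged screen names to usage groups.
GROUP_OF = {
    "classroom": "training", "agenda": "training", "performance": "training",
    "locker_room": "social", "bulletin_board": "social", "private_messages": "social",
    "garden": "persuasion",
}

def time_per_group(events):
    """Aggregate seconds spent per usage group.

    `events` is an iterable of (screen, seconds) tuples taken from the
    interaction logs (format assumed for illustration only).
    """
    totals = defaultdict(float)
    for screen, seconds in events:
        totals[GROUP_OF.get(screen, "other")] += seconds
    return dict(totals)

def usage_trend(daily_counts):
    """Linear trend of daily usage over the training period; returns (slope, r^2)."""
    days = list(range(1, len(daily_counts) + 1))
    res = stats.linregress(days, daily_counts)
    return res.slope, res.rvalue ** 2

if __name__ == "__main__":
    log = [("classroom", 1200.0), ("locker_room", 40.0),
           ("bulletin_board", 90.0), ("garden", 15.0)]
    print(time_per_group(log))
    # 53 days of made-up bulletin board views:
    print(usage_trend([3, 4, 2, 5, 4] * 10 + [3, 4, 5]))
\end{verbatim}
The linear trends computed in this way correspond to the $r^2$ values over the 53 days of training that are reported in the following paragraphs.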
Figure \ref{fig:figureAll}a depicts the time spent by participants in each of the three mentioned categories. Participants spent most of their time exercising in the classroom (Training = 92\%, Social = 6\%, Persuasion = 1\%). The social features, such as the locker room, bulletin board, and private messaging, did not require a lot of time, yet their frequency of visits is higher (Figure \ref{fig:figureAll}c) than that of the training features. The performance views, however, show lower usage in both time spent and frequency of views. Given the high usage of the social features, we analysed the contribution (how much each participant used the feature as a passive viewer) and the productivity (how much each participant posted messages) of each participant. Figure \ref{fig:figureAll}b depicts the usage of both the bulletin board and the private messaging feature. Notably, the productivity in private messaging is higher, but the deviation in bulletin board posts is lower, indicating a more uniform usage. \subsubsection{Feature Analysis} Interaction logs obtained from the Trainee's application demonstrate the usage of all features by participants. From these, the usage of \textit{training features} (agenda, locker room, classroom and performance view) along with the usage of \textit{social features} (bulletin board, private messaging, invitation tool, and real-time interaction) was analysed. \textit{Training Features.} Participants were given the option to enter the virtual classroom for exercising from the agenda or through the virtual locker room. The agenda also shows the weekly training plan. Participants viewed the Agenda per user (mean = 21.09 views, S.D. = 21.25) and per day (mean = 8.06 views, S.D. = 6.29). The usage over time shows a slight decline in the use of the agenda. However, most of the time participants started the exercising through the virtual locker room. The usage of the locker room per user (mean = 104.28 views, S.D. = 39.40) and per day (mean = 40.20 views, S.D. = 17.75) shows that the locker room was the entry point for most of the users, even though they could use the Agenda to enter the classroom. In addition, participants used the performance monitoring per user (mean = 21.14 views, S.D. = 19.80) and per day (mean = 8.26 views, S.D. = 4.51). The deviation between participants is relatively high, showing that some participants were interested in following their progress while others used it less. However, the daily deviation is not significant and the usage over time did not show a significant rise or decline. \textit{Social Features.} The bulletin board was viewed per user (mean = 51.04 views, S.D. = 40.14) and per day (mean = 19.00 views, S.D. = 11.26). The usage over the 53 days of training shows a rise in usage ($r^2 = 3.458\times 10^{-3}$). Participants posted public posts to the bulletin board per user (mean = 3.61 posts, S.D. = 5.21) and per day (mean = 1.41 posts, S.D. = 1.68). Posting to the bulletin board over the 53 days of training had a decline ($r^2 = 0.042$) over time. These results depict consistent usage of the bulletin board feature; despite the rise in viewing the bulletin board, however, participants tended to post less. The private messaging feature was viewed per user (mean = 35.33 views, S.D. = 26.59) and per day (mean = 13.04 views, S.D. = 8.21). The usage over the 53 days of training shows a decline in usage ($r^2 = 0.042$). 
Participants wrote private messages to the coach and other club members per user (mean = 10.33 messages, S.D. = 12.68) and per day (mean = 4.01 messages, S.D. = 2.91). Writing private messages also shows a decline ($r^2 = 0.011$) over time. The deviation between participants' usage shows that some participants were considerably more active than others. In addition, the decline in usage may reflect a decreasing need to use private messages to contact the coach and the technician for help, since the feature was mainly used to communicate about questions and challenges of the training; this interpretation, however, is based on the qualitative analysis of the message content rather than on a longitudinal analysis. Participants sent invitations to their co-trainees per user (mean = 24.14 invitations, S.D. = 74.26) and per day (mean = 9.37 invitations, S.D. = 6.58). The high deviation in the number of invitations sent by each participant shows that, while some of them actively used the invitation tool to invite their co-trainees, others used it rarely or not at all. However, the usage over time did not change significantly. Besides this, only one participant tried the real-time interaction tool provided in the locker room. The main reason is that participants spent very little time in the locker room and the chances of meeting other people there were very low. The overall analysis shows that the social group mainly used the training features. However, all participants were consistently using the social features, and in particular the bulletin board (Figure \ref{fig:figure4}). \subsection{Trainee's feedback on the features} \subsubsection{Usage and perceived usefulness} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/perceived-usefulness.pdf} \caption{Perceived usefulness of each of the features. The number of users that experienced the feature is indicated in parentheses.}~\label{fig:usefulness} \end{figure} In order to understand the value of the design dimensions and recommendations that we have identified, we asked participants of the Social group - who were assigned the full-featured version of the app - to report on the usage and perceived usefulness of the features of Gymcentral. The results are illustrated in Figure \ref{fig:usefulness}. The features that are instrumental to the training were naturally experienced by most of the trainees, and this includes \emph{exercising in the classroom} and \emph{checking out the schedule}. What is interesting is that \emph{training in company} was also experienced by most trainees. Together, these features enabling the group training were highly regarded by trainees. Persuasion features were also among the most experienced and valued. This includes \emph{following the progress} and visualising their own \emph{progress in the garden}, and, still very positive but to a lesser extent, \emph{inviting others to join} a training session. Social interaction features received mixed results. The most useful and experienced feature was \emph{private messaging with the Coach}, followed by the public messages in the \emph{bulletin board}. Interestingly, \emph{private messages with other trainees} were perceived as less useful, indicating a higher preference of trainees for interaction with the entire group rather than individually. We expand on the nature of these interactions in the next subsection. 
While the \emph{social presence in the Classroom} was highly rated, participants regarded the features present in the \emph{Locker room} as among the least useful. The Locker room was designed as a place for trainees to meet and socialise before starting a training session. They would invite others, wait for them before starting the session, and in the meantime interact via predefined real-time messages. In practice, the user behaviour was different. The application logs show that users were not waiting for others in the Locker room after sending their invitations; instead, they would go directly to the Classroom and wait there for others to join. \subsubsection{Positive and negative aspects} Participants from both groups were asked to provide feedback about the positive and negative aspects of the experience. \textbf{What aspect was the most fun and motivating?} In the Control group, the topics that dominated the feedback were the possibility of training from home (``Being able to exercise at any time, and from my living room"), personal satisfaction (``Satisfaction of performing the exercises every day") and discipline (``The personal commitment to perform the exercises"). Interestingly, one participant reported the physical meetings as the most fun part (``The meetings with the project organisers"). In the Social group the dominant aspect was the social features, with participants citing the possibility of exercising with others (``Feeling that you're training with others and followed by the Coach"), being invited to join (``Being invited to exercise in the virtual gym"), and messaging with other participants and the Coach (``Very nice to find messages in the bulletin board"). As in the previous group, one participant also reported the initial meetings as one of the highlights. \textbf{What aspect did you like the least?} Both groups reported the same negative aspects regarding the experience. The dominant aspect was the problems with the application, which were due to Internet connectivity issues in some areas of the city (``When [the training] was not loading"). We highlight the feedback from one participant of the Social group, who reported having gone to a friend's house to use the Internet (``The tablet was not working at home, so I had to go to a friend's house to exercise"). Another issue reported was the monotony of the exercises (``I've found the exercises repetitive"), which is probably related to the long-term nature of the training. \subsubsection{Discussion} The user feedback \emph{reinforces the value of group exercising, social interactions and persuasion} features of the application, and of the design recommendations on which it is built. The results also point to i) the need for more effective mechanisms to motivate social interactions among community members, as shown by the perception and usage of the messaging features, and ii) the need for more effective environments for motivating real-time social interactions, as shown by the user feedback on the Locker room feature. \subsection{Analysis of online social interactions} \begin{figure*}[tb] \centering \includegraphics[width=\textwidth]{figures/messages} \caption{Nature of social interaction in private and public messages.}~\label{fig:figure5} \end{figure*} Private messages were preferred (411 messages) over the bulletin board (133). To better understand the type of messages exchanged, a manual classification was done. 
A 20\% random sample of all messages was coded manually, initially without using pre-existing categories, and later coded with the scheme detailed in Table \ref{table:coding}. The coding scheme was developed based on relevant behaviour and communities literature (e.g., \cite{pfeil2007patterns}). \begin{table}[!ht] \caption{Messages coding scheme} \small \begin{center} \begin{tabular}{ |p{0.99\columnwidth}| } \hline \textbf{Community building} \\ \hline \textit{Togetherness.} Interacting with the community or particular members, including invitations and messages of welcome, to stimulate participation, and to stress the value of the community. \\ \textit{Thank you.} Thanking the community or a member for the help, support or for their understanding. \\ \textit{Sorry.} Apologising for an action.\\ \textit{Entertainment.} Sharing jokes, quotes or aphorisms. \\ \hline \textbf{Gymcentral Application} \\ \hline \textit{Satisfaction.} Sharing a positive experience with the application or the study. \\ \textit{Problem.} Reporting problems with the application (e.g. internet connection or technical issues). \\ \textit{Information.} Providing information or announcements about the application. Giving advice, recommendations, and suggestions. \\ \textit{Question.} Asking for information about the app and technical issues. \\ \hline \textbf{Physical Activity} \\ \hline \textit{Support.} Offering advice, support or sympathy to the community or a particular member. Encouraging others to participate. \\ \textit{Congratulations.} Congratulating the community or a particular member for participating in the program or completing an exercise. \\ \textit{Me.} Sharing personal experience on the training (e.g. level of commitment, participation, level-up intention, problems). \\ \textit{Question.} Asking for information about the training or exercise performance. Requesting a level-up.\\ \hline \textbf{Self-disclosure} \\ \hline Sharing personal experience or information not related to the training (e.g. personal stories, daily activities) \\ \hline \textbf{Other} \\ \hline Messages that did not fit in any of the other categories\\ \hline \end{tabular} \label{table:coding} \end{center} \end{table} Two independent coders classified the sampled messages. Cohen's kappa coefficients for the bulletin board were .85 for top-level categories and .84 for sub-categories; for the private messages, the coefficients were .87 for top-level categories and .85 for sub-categories, indicating generally high agreement. After the independent coding, a single coder classified all messages and combined the results. These are shown in Figure \ref{fig:figure5}. \textbf{Bulletin board.} Used mainly to promote \emph{community building}, in particular togetherness. Participants had an active role: they posted greeting messages (e.g. ``Good morning everybody!") and used a humorous tone in the conversation (e.g. ``You are a little crazy"). The bulletin board was also used to publicly thank the Coach and other participants for their help or invitations to train together. To a lesser extent, the Coach contributed to community building by welcoming and greeting participants (e.g. ``Have a nice start of the week everybody!"). The talk about \emph{physical activity} was centred on congratulating and offering support for the training. In particular, the Coach was very active, encouraging participants to attend the training sessions, and congratulating them for their performance and the level-up requests (e.g. ``Well done everybody... many of you wrote me... 
to level-up"). The messages regarding the \emph{Gymcentral} application were mostly about technical issues. The technician used the bulletin board to broadcast advice and information on these issues. At certain points during the study, participants experienced slow connection problems that compromised the proper functioning of the application, especially the streaming of exercise videos. However, there were also positive comments about the application and the garden metaphor (e.g. ``Oh! A bright butterfly appeared in the garden, wonderful, thank you!"). \textbf{Private messages.} Most messages were about the \emph{Gymcentral} application. As in the bulletin board, almost half of the messages about the application were directed to the technician and the Coach to report technical issues or the inability to train because of connection problems. Participants also exchanged some positive notes about the application. Considering the messages of \emph{community building}, we can observe that, similarly to the bulletin board, participants promoted a sense of togetherness, but the messages were more personal than the ones in the bulletin board (e.g. ``How are you?", " ...we missed you"). The more intimate nature of this channel was also used for self-disclosure. Participants talked about their lives outside of the virtual gym, even engaging in conversation with the Coach and the community manager. In contrast to the bulletin board, when discussing \emph{physical activity}, participants did not use the private messages to congratulate and support each other. Instead, participants talked about their personal experience with the exercise, and in particular talked to the coach and asked for advice. \textbf{Role of the coach.} The coach was the most active user in both communication channels. In the bulletin board, 24 messages were posted, and regarding private messages, 120 were sent and 117 received. Interestingly, the use of the bulletin board and the private messages was different. The coach used the bulletin board to congratulate participants publicly, but private messages were sent to encourage participants to train and follow the training program. \textbf{Discussion}. These results highlight the need for having both types of channels, since they serve very different purposes. On one hand allowing for community building in public channels, and for more personal conversations in private. We should also note that compared to other technology-based interventions, where social features (e.g., forums or social networks) were rarely used \cite{aalbers2011characteristics}, in this study the social features have been largely used by the participants. Further research is required to investigate whether this result is significant and related to the design of the tool. \subsection{Usability and technology acceptance} Pre- and post- scores of usability and technology acceptance for the social and control group are shown in Table~\ref{table:usaacc}. \subsubsection{Usability} A mixed between-within subjects analysis of variance was conducted to compare pre- and post- scores of the System Usability Scale between participants in the experimental and in the control group. The analysis showed a significant interaction between group and time (F(1, 34)= 8.286, p = .007), and a significant main effect for time (F(1, 34)= 37.113, p $<$ .001), and for group (F(1, 34)= 14.614, p = .001). 
This suggests that, while at the beginning there was a noticeable difference in the usability of the two Gymcentral versions, with the basic version performing better, the usability of the full application improved significantly more over time in the Social group than in the Control group. Although initially the full application was reported as more difficult to use, at the end of the study its perceived usability had increased to reach a level comparable to that of the basic version. \begin{table} \caption{Usability and technology acceptance pre- and post-measures (range: 1 to 5)} \small \begin{tabular}{| l | l | l | l | l | } \hline \multirow{2}{*}{} & \multicolumn{2}{c|}{Control} & \multicolumn{2}{c|}{Social} \\ & Pre (Err) & Post (Err) & Pre (Err) & Post (Err) \\ \hline Usability & 4.25 (.16) & 4.62 (.11) & 3.33 (.13) & 4.36 (.13) \\ \hline Anxiety & 1.44 (.24) & 1.66 (.25) & 2.40 (.25) & 1.78 (.16) \\ \hline Attractiveness & 4.06 (.24) & 4.53 (.18) & 3.85 (.24) & 4.50 (.16) \\ \hline Satisfaction & 4.44 (.20) & 4.60 (.14) & 4.13 (.23) & 4.65 (.09) \\ \hline Usefulness & 4.50 (.15) & 4.56 (.18) & 3.93 (.24) & 4.65 (.13) \\ \hline \end{tabular} \label{table:usaacc} \end{table} \subsubsection{Technology acceptance} A mixed between-within subjects analysis of variance was conducted to compare each of the following dimensions of technology acceptance at the beginning and at the end of the study. \textbf{Anxiety.} The analysis showed no significant interaction between group and time (p = .069) and no significant main effect for time (p = .372), but showed a significant main effect for group (F(1, 34) = 5.543, p = .024). A closer analysis of the studentized residuals allowed us to detect two outliers in the data. After removing those observations, the analysis showed a significant interaction between group and time (F(1, 32) = 4.713, p = .037), suggesting that anxiety towards the application \emph{significantly decreased over time for the Social but not for the Control group}. \textbf{Attractiveness.} The analysis did not reveal a significant interaction between time and group (p = .661), nor a significant main effect for group (p = .576), but it showed a significant main effect for time (F(1, 34) = 7.448, p = .01). Multiple comparison tests with Bonferroni correction showed that the attractiveness of the application significantly increased for the Social group (p = .023) but not for the Control group (p = .134). While the results of the analysis of variance suggest that, taken together, both groups reported liking the trainee application significantly more at the end of the study, post-hoc comparisons suggest that this \emph{difference was significant for the Social group but not for the Control group}. \textbf{Satisfaction.} The analysis did not reveal a significant interaction between time and group (p = .308), nor a significant main effect for group (p = .49) or for time (p = .051). However, multiple comparison tests with Bonferroni correction showed that \emph{satisfaction significantly increased from pre- to post- for the Social} (p = .028) \emph{but not for the Control group} (p = .513). Furthermore, an exploration of the studentized residuals revealed the presence of one outlier. The analysis of variance repeated after excluding the outlier showed a significant main effect for time (F(1, 33) = 6.561, p = .015). 
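The same mixed between-within design underlies each of the comparisons reported in this subsection. As a purely schematic illustration of how such an analysis can be carried out, the following Python sketch runs a mixed ANOVA and Bonferroni-corrected post-hoc tests with the \texttt{pingouin} package on synthetic data; the column names and the generated scores are hypothetical and do not reproduce the study data.
\begin{verbatim}
# Schematic mixed between-within ANOVA on synthetic data.
# Column names (participant, group, time, score) are hypothetical.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for p in range(12):                      # 12 synthetic participants
    g = "social" if p < 6 else "control"
    pre = rng.normal(3.6 if g == "social" else 4.2, 0.3)
    post = pre + (0.9 if g == "social" else 0.2) + rng.normal(0.0, 0.2)
    rows += [(p, g, "pre", pre), (p, g, "post", post)]
df = pd.DataFrame(rows, columns=["participant", "group", "time", "score"])

# Mixed ANOVA: 'time' is the within-subject factor, 'group' the between-subject one.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="participant", between="group")
print(aov[["Source", "F", "p-unc"]])

# Post-hoc pairwise comparisons with Bonferroni correction.
posthoc = pg.pairwise_tests(data=df, dv="score", within="time",
                            between="group", subject="participant",
                            padjust="bonf")
print(posthoc)
\end{verbatim}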
\textbf{Usefulness.} The analysis did not reveal a significant interaction between time and group (p = .09), nor a significant main effect for group (p = .192), but it showed a significant main effect for time (F(1, 34) = 4.291, p = .046). Multiple comparison tests with Bonferroni correction suggest that there was a significant increase in the perceived usefulness of the application for the Social (p = .007) but not for the Control group (p = .827). Consistent with the previous analyses, this suggests that, overall, participants perceived Gymcentral as more useful after the training program, and that \emph{perceived usefulness in the Social group may have improved more with respect to the Control group}. \subsubsection{Discussion} Not surprisingly, the usability of the application was lower for the Social group at the beginning of the study, reflecting participants' initial difficulties in dealing with a more complex user interface. However, by the end of the training program usability had increased significantly, approaching the top end of the scale. We should further investigate the role played by the use of the metaphor of the virtual environment in the learnability of the application. Overall, while the Internet connection was an intermittent issue, both usability and technology acceptance of the Trainee application (both versions) generally improved. For the full application, these results suggest that users could handle the extra complexity and learn to use this type of tool. \subsection{Limitations} \textbf{Different channels for Coach support}. The interactions of the Coach with the participants were scheduled to give the same type of support. However, in the absence of social features in the version of the app used by the Control group, the support was given through phone calls. This difference in the communication channel might have introduced a potential bias in the motivation to participate. \textbf{Sample size and gender imbalance}. Random variability, probably due to the small sample size, might have influenced the initial difference between groups in some of the measures. We also acknowledge the gender imbalance as a potential limitation to the generalisation of the results.
\section{Introduction} Since the discovery of halo nuclei \cite{THH85}, the physics of exotic nuclei has attracted much interest in the nuclear physics community \cite{TSK13,SLY03}. Halo nuclei are characterized by a low separation energy of the last nucleon(s), and therefore by an unusually large radius. More generally, exotic nuclei, which are at the limit of stability, present short lifetimes, and are essentially studied through reactions. The recent advances of radioactive beams opened many new perspectives in the physics of exotic nuclei. In parallel, it becomes more and more necessary to develop theoretical models which can help in the interpretation of the data. Among the various theories, the Continuum Discretized Coupled Channel (CDCC) method \cite{Ra74,AIK87} is well suited for reactions involving exotic nuclei. In the CDCC method, the continuum of the projectile is taken into account. This effect was first shown in $d+$ nucleus data, owing to the low binding energy of the deuteron \cite{Ra74}. The CDCC formalism was then successfully applied to reactions involving weakly bound nuclei such as $^{11}$Be \cite{DSM12} or $^6$He \cite{MHO04}. The first variant of the CDCC method considered a two-body projectile (typically $d=p+n$) on a structureless target. More recently, it was extended to three-body projectiles \cite{MHO04}, and even to two-body projectile and target \cite{De18}. The present work addresses the $\lip$ and $\lid$ reactions, which have been experimentally investigated recently \cite{TKA17,KST15}. One of the main conclusions drawn in Refs.\ \cite{TKA17,KST15} is the presence of a dipole resonance in $\oli$. The authors measured the elastic cross section, together with an inelastic cross section to a broad $1^-$ resonance of $\oli$. The main goal of the present work is to analyze both systems in a common framework, i.e.\ with the same $\oli$ wave functions and with the same $^9$Li+nucleon interaction. For this purpose, I use a three-body $\linn$ model to describe $\oli$, and apply the CDCC theory for the scattering cross sections. Using a common approach for $\lip$ and $\lid$, however, requires a generalization of the CDCC method to three-body + two-body systems. This extension raises significant numerical difficulties, but can be performed with modern computing facilities. The $\lip$ reaction was recently investigated by Matsumoto {\sl et al}.\ \cite{MTO19} in the CDCC formalism. These authors, however, do not consider $\lid$ scattering and essentially focus on a possible Feshbach resonance in the $\linn$ system. The text is organized as follows. In Sec.\ II, I discuss the $\oli$ three-body model and, in particular, E1 transitions to the continuum. Section III is devoted to the CDCC formalism, which is presented in a way that is valid for any number of constituents. In Sections IV and V, I show the $\lip$ and $\lid$ cross sections, respectively. I also discuss equivalent potentials. The conclusion and outlook are presented in Sec.\ VI. \section{Three-body description of $\oli$} \subsection{Outline of the hyperspherical method} The hyperspherical method is well adapted to three-body systems (see, for example, Ref.\ \cite{ZDF93}), even for scattering states \cite{DTB06}. I consider a three-body nucleus, made of a core (whose spin is neglected) and of two neutrons. The nucleon number and the charge of the core are denoted as $A_1$ and $Z_1 e$, respectively. 
There are three possible sets of scaled Jacobi coordinates ($\pmb{x},\pmb{y} $) (see Refs.\ \cite{RR70,ZDF93} for more information). I choose \begin{eqnarray} \pmb{x}=\frac{1}{\sqrt{2}}\left(\pmb{r}_3-\pmb{r}_2\right), \pmb{y}=\sqrt{\frac{2A_1}{A_1+2}}\left(\pmb{r}_1-\frac{\pmb{r}_2+\pmb{r}_3}{2}\right), \label{eq1} \end{eqnarray} where $\pmb{r}_1,\pmb{r}_2 $ and $\pmb{r}_3$ are the coordinates of the core and of the neutrons. This choice permits a natural symmetry of the wave functions regarding the exchange of the two neutrons. In these coordinates, the $\oli$ Hamiltonian is given by \begin{eqnarray} H_{0}=-\frac{\hbar^2}{2m_N}\left(\Delta_x+\Delta_y\right)+\sum_{i<j}V_{ij}, \label{eq2} \end{eqnarray} where $m_N$ is the nucleon mass, and $V_{ij}$ are two-body potentials ($n+n$ and $\linb$). The hyperadius $\rho$ and the hyperangle $\alpha$ are defined as \begin{eqnarray} \rho^2=x^2+y^2,\qquad \alpha=\arctan\frac{y}{x}, \label{eq3} \end{eqnarray} and the wave function in angular momentum $j$ and parity $\pi$ is expanded as \begin{eqnarray} \Psi^{jm\pi}=\rho^{-5/2}\sum_{K=0}^{\infty}\sum_{\gamma} \chi^{j\pi}_{\gamma K}(\rho) {\cal Y}^{jm}_{\gamma K}(\Omega_{5\rho}). \label{eq4} \end{eqnarray} In this definition, $K$ is the hypermoment, and $\gamma=(\ell_x,\ell_y,\ell,S)$ represents a set of quantum numbers \cite{DDB03}. The summation over $K$ is truncated at a maximum value $\kmax$. The hyperspherical harmonics ${\cal Y}^{jm}_{\gamma K}$ depend on five angles $\Omega_{5\rho}=(\Omega_x,\Omega_y,\alpha)$; they are defined in Ref.\ \cite{DDB03}. In Eq.\ (\ref{eq4}), the hyperradial functions $\chi^{j\pi}_{\gamma K}(\rho)$ are obtained from a set of coupled differential equations \begin{eqnarray} &&\biggl( -\frac{\hbar^2}{2m_N} \biggl[\dfrac{d^2}{d\rho^2}-\dfrac{{\cal L}_K({\cal L}_K+1)}{\rho^2}\biggr]-E \biggr) \chi^{j\pi}_{\gamma K}(\rho) \nonumber\\ &&\hspace{1cm} +\sum_{K' \gamma'}V^{j\pi}_{\gamma K ,\gamma' K'}(\rho)\, \chi^{j\pi}_{\gamma' K'}(\rho)=0, \label{eq5} \end{eqnarray} where ${\cal L}_K=K+3/2$ and where $V^{j\pi}_{\gamma K ,\gamma' K'}(\rho)$ are the coupling potentials, determined from the matrix elements of the two-body potentials in (\ref{eq2}) between hyperspherical harmonics. In the present work, I am looking for square-integrable solutions of Eq.\ (\ref{eq5}). I expand the hyperradial functions over a set of $N$ basis functions $u_i(\rho)$ as \begin{eqnarray} \chi^{j\pi}_{\gamma K}(\rho)=\sum_{i=1}^N c^{j\pi}_{\gamma K i}u_i(\rho). \label{eq6} \end{eqnarray} In practice, I choose Lagrange functions \cite{Ba15} which allow a simple and accurate calculation of matrix elements, and which have been used in previous works \cite{PDB12,DDC15}. \subsection{Description and properties of $\oli$} The $n+n$ potential is the central part of the Minnesota potential with the exchange parameter $u=1$ \cite{TLT77}. The $\linb$ potential is chosen as in Ref.\ \cite{EBH97}, and reproduces various properties of $^{10}$Li, such as the scattering length. Notice that this potential contains a forbidden state for the $s$ and $p_{3/2}$ partial waves. To avoid spurious three-body states in the solution of (\ref{eq5}), a supersymmetric transformation \cite{Ba87} is applied. As in Ref.\ \cite{PDB12}, I scale the $\linb$ potential by a factor $1.0051$ to reproduce the $\oli$ two-neutron binding energy $S_{2n}=0.378$ MeV \cite{BAG08}. For the basis functions $u_i(\rho)$, I use a Gauss-Laguerre mesh with $N=20$ and a scale parameter $h=0.3$ fm. 
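As a simple numerical illustration of the coordinates (\ref{eq1}) and (\ref{eq3}), the short Python sketch below evaluates the scaled Jacobi coordinates, the hyperradius and the hyperangle for given positions of the core and of the two neutrons; the positions used are arbitrary placeholder values.
\begin{verbatim}
# Scaled Jacobi and hyperspherical coordinates, Eqs. (1) and (3).
# The positions r1 (core) and r2, r3 (neutrons) are placeholders (fm).
import numpy as np

A1 = 9                                   # nucleon number of the 9Li core
r1 = np.array([0.0, 0.0, 0.0])
r2 = np.array([1.5, 0.0, 1.0])
r3 = np.array([-0.5, 2.0, 0.0])

x = (r3 - r2) / np.sqrt(2.0)                                  # Eq. (1)
y = np.sqrt(2.0 * A1 / (A1 + 2.0)) * (r1 - 0.5 * (r2 + r3))

rho = np.sqrt(np.dot(x, x) + np.dot(y, y))                    # Eq. (3)
alpha = np.arctan2(np.linalg.norm(y), np.linalg.norm(x))
print(rho, alpha)
\end{verbatim}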
I adopt $\kmax=20$ for the ground state, which guarantees the convergence of the energy and of the r.m.s.\ radius. Using a $^9$Li radius of 2.43 fm \cite{EAA02}, I find a $\oli$ r.m.s.\ radius of 3.12 fm, in fair agreement with the experimental value $3.16 \pm 0.11$ fm \cite{TKY88}. The structure and the E1 distribution of $\oli$ have been previously discussed \cite{PDB12}. In the present work, I want to address the E1 distribution more precisely. A recent experimental work on $\lip$ scattering \cite{TKA17} suggests the existence of a low-energy $1^-$ resonance in $\oli$, and that the E1 transition probability to the ground state should have an isoscalar character. According to the authors, this property arises from the halo structure of $\oli$. Let me start with a microscopic interpretation of the E1 transitions. At the long-wavelength approximation, the E1 operator is given, in an $A$-nucleon model, by \begin{eqnarray} {\cal M}^{E1}_{\mu}=e\sum_{i=1}^A \bigl(\frac{1}{2}-t_{iz}\bigr)r'_i Y_1^{\mu}(\Omega'_i ), \label{eq7} \end{eqnarray} where $\pmb{r}'_i=\pmb{r}_i-\pmb{R}_{c.m.}$, $\pmb{r}_i$ being the space coordinate of nucleon $i$, and $\pmb{R}_{c.m.}$ the center-of-mass coordinate. Subtracting the c.m. coordinate ensures the Galilean invariance of the operator. In this microscopic description, $t_{iz}$ is the isospin projection ($t_{iz}=+1/2$ for neutrons and $t_{iz}=-1/2$ for protons). The first term, associated with the constant $1/2$, is called isoscalar (it does not depend on isospin) and exactly vanishes for E1 transitions, since $\sum_i \pmb{r}'_i=0$ by definition of the center of mass. The second term is the isovector contribution, which is essentially responsible for E1 transitions. If the isospin of the initial and final states is $T=0$, the isovector term also vanishes. A typical example is $^{16}$O and the $^{12}{\rm C}(\alpha,\gamma)^{16}$O reaction. Using the long wavelength approximation with $T=0$ wave functions provides exactly zero for E1 transitions. These transitions, however, play an important role in the capture reaction, and are due to small $T=1$ components \cite{DB87c}. When I adapt definition (\ref{eq7}) to a non-microscopic model involving $N_C$ clusters with charges $Z_k e$, I have \begin{eqnarray} {\cal M}^{E1}_{\mu}=e\sum_{k=1}^{N_C} Z_k r'_k Y_1^{\mu}(\Omega'_k ), \label{eq8} \end{eqnarray} where $\pmb{r}'_k$ are now defined from the space coordinates of the clusters. In a two-cluster model with nucleon numbers ($A_1,A_2$) and charges ($Z_1e,Z_2e$), this leads to the well-known definition \begin{eqnarray} {\cal M}^{E1}_{\mu}=e \biggl( \frac{Z_1}{A_1}-\frac{Z_2}{A_2}\biggr) r Y_1^{\mu}(\Omega_{r}), \label{eq9} \end{eqnarray} where $\pmb{r}$ is the relative coordinate between the clusters. In the present three-body model, involving a core and two neutrons, the dipole operator is, at the long wavelength approximation \cite{DDB03}, \begin{eqnarray} {\cal M}^{E1}_{\mu}=e Z_1 \sqrt{ \frac{2}{A_1(A_1+2)}} y Y_1^{\mu}(\Omega_y). \label{eq10} \end{eqnarray} This form does not explicitly include isoscalar and isovector terms, as this wording is specific to the microscopic definition (\ref{eq7}). However, the operator (\ref{eq10}) is directly deduced from (\ref{eq7}) and, therefore, is associated with an isovector contribution, since the isoscalar term vanishes. The E1 transition probabilities between an initial state $J_i\pi_i$ and a final state $J_f\pi_f n$ are defined as \begin{eqnarray} &&B(E1,J_i\pi_i \rightarrow J_f\pi_f n)=\nonumber \\ &&\hspace{1 cm}\frac{2J_f+1}{2J_i+1} \vert \langle \Psi^{J_f\pi_f n}\Vert {\cal M}^{E1} \Vert \Psi^{J_i\pi_i}\rangle \vert^2. 
\label{eq10b} \end{eqnarray} As in Ref.\ \cite{PDB12}, a smooth E1 distribution $dB(E1)/dE$ is obtained by folding the discrete $B(E1)$ with a Gaussian function centered at $\ecm$ (the width is $\sigma=0.3$ MeV, close to the experimental resolution). Here $J_i\pi_i$ corresponds to the ground state, and $J_f\pi_f n$ to the final pseudostates. The E1 distribution computed with (\ref{eq10}) has been presented in Ref.\ \cite{PDB12}. It presents a peak around $\ecm=0.5$ MeV, associated with a dipole resonance, and is qualitatively in agreement with the experimental data \cite{NVS06} ($\ecm$ is the energy with respect to the $\linn$ threshold). Recently, Tanaka {\sl et al}.\ \cite{TKA17} have measured the elastic and inelastic $\lip$ cross sections at $\elab=66$ MeV. This work suggests a dipole resonance at $E_x=0.8$ MeV, in good agreement with the theoretical prediction. According to Tanaka {\sl et al.}, the dipole transition should have an isoscalar component, due to the halo nature of $\oli$. I have tested this hypothesis by extending the definition of the E1 operator beyond the long wavelength approximation. The E1 operator then reads, in a microscopic model \cite{LC82}, \begin{eqnarray} {\cal M}^{E1}_{\mu}&=&e\sum_{i=1}^A \bigl(\frac{1}{2}-t_{iz}\bigr)r'_i Y_1^{\mu}(\Omega'_i ) \biggl(1-\frac{(k_{\gamma}r'_i)^2}{10}\biggr) \nonumber \\ &&+i \frac{ek_{\gamma}}{4m_N c} \sum_{i=1}^A \bigl(\frac{1}{2}-t_{iz}\bigr)r'_i Y_1^{\mu}(\Omega'_i ) \pmb{r}'_i \cdot \pmb{p}'_i, \label{eq11} \end{eqnarray} where $\pmb{p}'_i$ is the momentum of nucleon $i$, and $k_{\gamma}$ is the photon momentum (a spin-dependent term is neglected). With this generalization, isoscalar transitions are possible. However, they are not expected to be important since $(k_{\gamma}r_i)^2$ is small (the photon energies are of the order of a few MeV). I have estimated the isoscalar component in the present three-body model, with the first term of definition (\ref{eq11}). In hyperspherical coordinates, the dipole operator (\ref{eq10}) is therefore complemented by an additional term \begin{eqnarray} {\cal M}^{E1,{\rm add.}}_{\mu}=-\frac{1}{10} \biggl(\sqrt{ \frac{2}{A_1(A_1+2)}} k_{\gamma} y\biggr)^2 {\cal M}^{E1}_{\mu}. \label{eq12} \end{eqnarray} Figure \ref{fig_e1}(a) presents the $1^-$ $\linn$ three-body phase shift, as calculated in Ref.\ \cite{PDB12}. The E1 distribution is shown in Fig.\ \ref{fig_e1}(b) with the first-order term (dotted line) and with the full operator (solid line). The presence of a dipole resonance around $\ecm=0.5$ MeV is consistent in both figures, and seems well established from experiment (breakup \cite{NVS06} and inelastic scattering \cite{TKA17}). However, Fig.\ \ref{fig_e1}(b) shows that the contribution of higher-order terms (dashed line) is negligible. The present three-body calculation therefore confirms the existence of a dipole resonance at low energies (let me emphasize that the only parameter, a scaling factor of the $\linb$ potential, is adjusted on the ground-state energy), but does not support the interpretation of an isoscalar character. This is not surprising since the next-order term is proportional to $(k_{\gamma}r)^2$. Even if typical radii of halo nuclei are larger than in stable nuclei, the factor $k_{\gamma}=E_{\gamma}/\hbar c$ is of the order of 0.01 fm$^{-1}$, which makes the correction quite small. Of course, this argument does not hold for $T=0$ nuclei, since the leading term of the E1 operator exactly vanishes. 
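The folding procedure used to obtain the smooth distribution in Fig.~\ref{fig_e1}(b) is straightforward. The Python sketch below gives a minimal implementation; the discrete pseudostate energies and $B(E1)$ values are placeholders standing in for the output of the three-body calculation.
\begin{verbatim}
# Folding of discrete B(E1) values with normalized Gaussians (sigma = 0.3 MeV)
# to obtain a smooth dB(E1)/dE distribution; the input values are placeholders.
import numpy as np

def fold_e1(E_n, B_n, E_grid, sigma=0.3):
    E_n = np.asarray(E_n)[:, None]
    B_n = np.asarray(B_n)[:, None]
    g = np.exp(-(E_grid[None, :] - E_n) ** 2 / (2.0 * sigma ** 2))
    g /= sigma * np.sqrt(2.0 * np.pi)
    return (B_n * g).sum(axis=0)           # dB(E1)/dE on E_grid

E_n = [0.3, 0.6, 1.1, 2.0, 3.4]            # pseudostate energies (MeV)
B_n = [0.15, 0.40, 0.25, 0.10, 0.05]       # discrete B(E1) values
E_grid = np.linspace(0.0, 5.0, 501)
dBdE = fold_e1(E_n, B_n, E_grid)
\end{verbatim}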
\begin{figure}[htb] \begin{center} \epsfig{file=li11_e1.eps,width=6.5cm} \caption{$\oli$ $1^-$ phase shift (a) (see Ref.\ \cite{PDB12}) and E1 distribution (b). In (b), the dashed line represents the contribution of Eq.\ (\ref{eq12}) only. The solid and dotted lines are obtained with the full E1 operator, and with the long-wavelength approximation (\ref{eq10}), respectively. $\ecm$ is the energy with respect to the $\linn$ threshold.} \label{fig_e1} \end{center} \end{figure} \section{Brief overview of the CDCC theory} Originally, the CDCC method has been developed to describe $d+$nucleus scattering \cite{Ra74}. Owing to the low breakup threshold of the deuteron, elastic scattering cannot be satisfactorily described if breakup effects are neglected. The basic idea of the CDCC method is to simulate breakup effects by approximations of the deuteron continuum, referred to as pseudostates (PS). These PS correspond to positive eigenvalues of the Schr\"odinger equation associated with the projectile. They do not have a specific physical meaning but represent an approximation of the continuum. The CDCC formalism was very successful to reproduce various $d+$nucleus data. With the advent of radioactive beams, the CDCC method turned out to be a useful tool to analyse reactions involving exotic nuclei \cite{YOM12}. As for the deuteron, the neutron or proton separation energy of exotic nuclei is low, and breakup effects are expected to play an important role in reactions. The original CDCC formalism was developed for two-body projectiles on structureless targets \cite{KYI86,AIK87}. This is well adapted to the scattering of typical two-body nuclei, such as $d$, $^7$Li, $^{11}$Be on heavy targets. The formalism was then extended to three-body projectiles such as $^6$He \cite{MHO04} or $^9$Be \cite{DDC15}, and to systems involving two-body projectile and target, such as $^{11}{\rm Be}+d$ \cite{De18}. The goal of the present work is to analyse recent $\lip$ and $\lid$ data \cite{TKA17,KST15} in the CDCC framework. A realistic description of $\oli$ requires a three-body $\linn$ model, as discussed in Sect. II. On the other hand, the $\lid$ reaction also involves the deuteron, which should be described by a $p+n$ structure. Previous calculations on $^{11}{\rm Be}+d$ in a four-body CDCC model \cite{De17,De18} have shown that these calculations lead to a large number of channels (up to several thousands), but provide an excellent description of elastic scattering. Let me consider a system of two nuclei described by a set of internal coordinates $\pmb{\xi}_i$ (see Fig.\ \ref{fig_conf}), and by an internal Hamiltonian $H_i$. For a two-body system, I have \begin{eqnarray} &&\pmb{\xi}_i=\pmb{r}_i, \nonumber \\ &&H_i=-\frac{\hbar^2}{2\mu_i}\Delta_i+v_{12}(r_i), \end{eqnarray} where $\mu_i$ is the reduced mass and $v_{12}(r_i)$ a (real) nucleus-nucleus potential. In a three-body system \begin{eqnarray} \pmb{\xi}_i&=&(\pmb{x},\pmb{y}),\nonumber \\ H_{i}&=&-\frac{\hbar^2}{2m_N}\left(\bigtriangleup_x+\bigtriangleup_y\right)+v_{12}(x)+v_{13}(x,y)\nonumber \\ &&+v_{23}(x,y). \label{eq14} \end{eqnarray} \begin{figure}[htb] \begin{center} \epsfig{file=config.eps,width=6.5cm} \caption{Cluster configurations and coordinates for $3+1$ (a) and $3+2$ (b) systems.} \label{fig_conf} \end{center} \end{figure} The starting point of all CDCC calculations is to solve the Schr\"{o}dinger equation associated with the colliding nuclei, i.e. 
\begin{eqnarray} H_i\, \Phi^{jm\pi}_{k}=E^{j\pi}_{k} \, \Phi^{jm\pi}_{k}, \label{eq15} \end{eqnarray} where \begin{eqnarray} \Phi^{jm\pi}_{k}&=&r^{-1}g^{j\pi}_{\ell k}(r)\, \bigl[ Y_{\ell}(\Omega)\otimes \chi^s\bigr]^{jm} \ {\rm for \ a \ 2-body \ system} \nonumber \\ &=&\rho^{-5/2}\sum_{\gamma K}\chi^{j\pi}_{\gamma K k}(\rho) {\cal Y}^{jm}_{\gamma K}(\Omega_{\rho}) \ {\rm for \ a \ 3-body \ system} \nonumber \\ \label{eq16} \end{eqnarray} In (\ref{eq15}), index $k$ refers to the excitation level. Energies with $E^{j\pi}_{k}<0$ correspond to physical states, and $E^{j\pi}_{k}>0$ correspond to PS. Let me now consider the Hamiltonian of the projectile + target system, which reads \begin{eqnarray} H=H_1+H_2+T_R+\sum_{ij}V_{ij}(\pmb{R},\pmb{\xi}_1,\pmb{\xi}_2), \label{eq17} \end{eqnarray} where $\pmb{R}$ is the relative coordinate (see Fig. 2) and $V_{ij}$ are optical potentials between the fragments. In $\lip$, I need $\lipb$ and $n+p$ optical potentials, whereas $\lid$ require the additional $^9{\rm Li}+n$ and $n+n$ potentials. The total wave function is expanded over a set of PS as \begin{eqnarray} \Psi^{JM\pi}=\sum_{c LI} u^{J\pi}_{c LI}(R)\, \varphi^{JM\pi}_{c LI} (\Omega_R,\pmb{\xi}_1,\pmb{\xi}_2), \label{eq18} \end{eqnarray} where index $c$ stands for $c=(j_1,k_1,j_2,k_2)$, $L$ is the relative angular momentum and $I$ the channel spin. The channel functions $\varphi^{JM\pi}_{c LI}$ are defined from the internal wave functions of the projectile and target as \begin{eqnarray} &&\varphi^{JM\pi}_{c LI} (\Omega_R,\pmb{\xi}_1,\pmb{\xi}_2)= \nonumber \\ && \hspace*{1cm} \biggl[ \bigl[ \Phi^{j_1}_{k_1}(\pmb{\xi}_1)\otimes \Phi^{j_2}_{k_2}(\pmb{\xi}_2)\bigr]^I \otimes Y_L(\Omega_R)\biggr]^{JM}. \label{eq19} \end{eqnarray} The summations over the spins $j_1,j_2$ and over the excitation levels $k_1,k_2$ are controlled by truncation parameters $\jmax$ and $\emax$ (which can be different for the target and for the projectile). The radial functions $u^{J\pi}_{c LI}(R)$ in (\ref{eq18}) are obtained from a set of coupled equations \begin{eqnarray} &&\biggl[-\frac{\hbar^2}{2\mu}\biggl(\frac{d^2}{dR^2}-\frac{L(L+1)}{R^2} \biggr) +E^{j_1}_{k_1}+E^{j_2}_{k_2}-E \biggr]u^{J\pi}_{c LI}(R)\nonumber \\ &&\hspace{1cm}+\sum_{c'L'I'}V^{J\pi}_{cLI,c'L'I'}(R)\, u^{J\pi}_{c' L'I'}(R)=0, \label{eq20} \end{eqnarray} where $\mu$ is the reduced mass, and where the coupling potentials $V^{J\pi}_{cLI,c'L'I'}(R)$ are defined from the matrix elements \begin{eqnarray} V^{J\pi}_{cLI,c'L'I'}(R)=\langle \varphi^{JM\pi}_{c LI} \vert \sum_{ij}V_{ij} \vert \varphi^{JM\pi}_{c' L'I'}\rangle, \label{eq21} \end{eqnarray} and involve integration over $\pmb{\xi}_1,\pmb{\xi}_2$ and $\Omega_R$. The calculation of these coupling potentials is simple for two-body projectiles on structureless targets (see, for example, Ref.\ \cite{DBD10}). For 3-body projectiles, the calculations are far more complicated \cite{MHO04}. Here, I still go beyond this situation, since I consider a three-body projectile $\oli=\linn$ on a two-body target $d=p+n$. Some technical information is given in the Appendix. The most challenging part of CDCC calculations, however, is not the calculation of the coupling potentials. The main problem is that system (\ref{eq20}) may involve several thousands of coupled equations, and must be solved for each $J\pi$. In practice I use the $R$-matrix method with a Lagrange mesh \cite{DB10,De16a}. This approach provides the scattering matrices, and therefore the scattering cross sections. 
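To give an idea of how quickly the size of the system (\ref{eq20}) grows, the following Python sketch counts the channel functions (\ref{eq19}) obtained for a given total angular momentum and parity from lists of projectile and target pseudostates. The pseudostate lists are hypothetical, and only integer spins are considered; the sketch is purely schematic and not part of the actual CDCC code.
\begin{verbatim}
# Schematic counting of the channel functions (19) for a given J and parity.
# Pseudostate lists are hypothetical; each state is (integer spin j, parity).
from itertools import product

def count_channels(proj_states, targ_states, J, parity):
    n = 0
    for (j1, p1), (j2, p2) in product(proj_states, targ_states):
        for I in range(abs(j1 - j2), j1 + j2 + 1):       # channel spin I
            for L in range(abs(J - I), J + I + 1):       # relative momentum L
                if p1 * p2 * (-1) ** L == parity:        # parity selection
                    n += 1
    return n

li11 = [(0, +1)] * 10 + [(1, -1)] * 12 + [(2, +1)] * 15  # 11Li pseudostates
deut = [(0, +1)] * 5 + [(1, +1)] * 8 + [(2, +1)] * 8     # deuteron pseudostates
print(count_channels(li11, deut, J=10, parity=+1))
\end{verbatim}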
\section{The $\lip$ scattering} \subsection{Conditions of the calculation} The coupled-channel system (\ref{eq20}) is solved with the $R$-matrix method and Lagrange functions \cite{DB10,De16a}. Typically I use a channel radius $a=25$ fm with 50 basis functions. Small variations of these conditions do not bring any significant change in the cross sections. I use the Koning-Delaroche potential \cite{KD03} (referred to as KD) for $^9{\rm Li}+p$, and the Minnesota interaction \cite{TLT77} for $n+p$. Of course, the KD global potential is not fitted on $^9{\rm Li}+p$ data, which do not exist, and is therefore not expected to provide an excellent description of $\lip$. To assess the sensitivity of the cross sections, I also use the Chapel Hill \cite{VTM91} parametrization (referred to as CH) for $^9{\rm Li}+p$. The Coulomb potential is treated exactly. In contrast with Ref.\ \cite{MTO19}, where the JLM potential is used, I do not introduce any renormalization factor in the coupling potentials. The $\oli$ wave functions are described in Sec.\ II. I include pseudostates for $j=0^+,1^-,2^+,3^-$ up to a maximum energy $\emax=10$ MeV, which provides an excellent convergence. The $\kmax$ values are $20,17,14,13$, respectively (the number of components in (\ref{eq4}) rapidly increases with $j$ and $\kmax$). These states are illustrated in Fig.\ \ref{fig_spec}. As in all CDCC calculations, only bound states (and narrow resonances) can be associated with physical states. Other states are used to simulate the $\linn$ continuum, and depend on the choice of the basis. Converged calculations, however, should not depend on the $\oli$ basis. \begin{figure}[htb] \begin{center} \epsfig{file=spectre.eps,width=7.5cm} \caption{Pseudostate energies of $\oli$ for $j=0^+,1^-,2^+,3^-$ with the Lagrange basis defined in Sec. II.B.} \label{fig_spec} \end{center} \end{figure} \subsection{$\lip$ cross sections} The convergence of the elastic cross section at $E_{\rm Li}=66$ MeV ($\ecm=5.5$ MeV) is illustrated in Fig.\ \ref{fig_sig_lip}(a). A linear scale is used to highlight the differences between the calculations. For $\theta \lesssim 90^{\circ}$, the cross section is weakly sensitive to breakup effects. At large angles, however, the single-channel calculation, involving the $\oli$ ground state only, provides large cross sections, in contradiction with experiment (see Fig.\ \ref{fig_sig_lip}(b)). Including more $\oli$ partial waves reduces the cross section at large angles. The $2^+$ pseudostates play the dominant role, whereas $j=1^-$ is less important. The cross sections for $\jmax=2$ and $\jmax=3$ are almost superimposed, which shows that the calculation is converged. The dashed line in Fig.\ \ref{fig_sig_lip}(a) is obtained with $\jmax=3$, but limiting the pseudostates to $\emax=5.5$ MeV. In other words, only open channels are included in the expansion (\ref{eq18}). At large angles, the role of closed channels is therefore not negligible, as shown by the solid and dashed black curves (both with $\jmax=3$). This confirms the conclusion of Ref.\ \cite{OY16}, i.e. that closed channels cannot be neglected to obtain converged cross sections. \begin{figure}[htb] \begin{center} \epsfig{file=lip_sig.eps,width=6.5cm} \caption{$\lip$ cross sections divided by the Rutherford cross sections at $\elab=66$ MeV ($\ecm=5.5$ MeV). (a) Convergence with respect to $\jmax$. The dashed line corresponds to the truncation energy $\emax=5.5$ MeV, where closed channels are neglected. 
(b) Comparison of the CDCC cross sections with optical model (OM) calculations using the KD and CH $\lip$ potentials. The dashed line is obtained with a smaller $\oli$ basis (see text). The experimental data are taken from Ref.\ \cite{TKA17}.} \label{fig_sig_lip} \end{center} \end{figure} I compare the CDCC results with experiment in Fig.\ \ref{fig_sig_lip}(b). The solid black curve is the same as in Fig.\ \ref{fig_sig_lip}(a). With the same conditions ($\jmax=3,\emax=10$ MeV), I test the influence of two other inputs of the calculation. The dashed line is obtained with the $\lipb$ CH interaction \cite{VTM91}. None of the available global parametrizations provides precise $\lipb$ potentials and, strictly speaking, they should not be used for light nuclei such as $^9$Li. However, since no scattering data exist, the tradition in the literature is to use these compilations. The comparison between KD and CH illustrates the precision that I may expect from the choice of the $\lipb$ interaction. The other input of the calculation is the $\oli$ basis. In order to reduce the computing time, I have used a smaller basis for $\oli$: $N=15$, $h=0.25$ fm. This does not affect the $\oli$ ground state, but changes the continuum spectrum shown in Fig.\ \ref{fig_spec} (the density is lower). Keeping $\jmax=3$ and $\emax=10$ MeV provides the dashed line of Fig.\ \ref{fig_sig_lip}(b). Again the effect is weak (a few percent at most) and shows up for $\theta > 100^{\circ}$ only. This smaller basis will be used for the five-body $\lid$ calculations, where reducing the number of PS is a critical issue. In Ref.\ \cite{TKA17}, the existence of a dipole resonance in $\oli$ was suggested from an inelastic measurement $\oli (p,p')$. Excitation functions in different angular ranges show a peak around $E_x\approx 0.8$ MeV. From a theoretical point of view, however, since the $1^-$ resonance discussed in Sec.\ II is quite broad, an inelastic cross section cannot be defined rigorously. Consequently, in order to provide a link between the maximum in the $E1$ distribution (see Fig.\ \ref{fig_e1}) and the scattering process, I have computed the integrated breakup cross section. The breakup cross section to a pseudostate $n$ is defined from the scattering matrices as \beq \sigma_{\rm BU}^n(E,E_n)=\frac{\pi}{k^2}\sum_{J\pi}(2J+1)\sum_L \vert U^{J\pi}_{\omega,nL}\vert^2, \label{eq22} \eeq where $\omega$ is the entrance channel, and $L$ is the relative angular momentum, which may take several values for pseudostates with $j>0$. Equation (\ref{eq22}) gives the breakup cross section to a specific pseudostate at the breakup energy $E_n$. To derive a smooth curve, I use a standard folding method \cite{PDB12} with a Gaussian factor $f(E_x,E_n)$ (the width $\sigma$ is chosen as $\sigma=0.3$ MeV \cite{PDB12}). This leads to the total cross section \beq \sigma_{\rm BU}(E,E_x)=\sum_n f(E_x,E_n) \sigma_{\rm BU}^n(E,E_n), \label{eq23} \eeq where $E_x$ is the $\linn$ three-body energy. The breakup cross section (\ref{eq23}) is shown in Fig.\ \ref{fig_bu}, where I separate the contributions of the different partial waves in $\oli$. As expected from the $E1$ distribution of Fig.\ \ref{fig_e1}, the breakup cross section presents a maximum near $E_x=0.8$ MeV for $j=1^-$. This is supported by the experimental inelastic cross sections of Ref.\ \cite{TKA17}. The $j=0^+$ contribution is small but the $j=2^+$ component is dominant for $E_x\gtrsim 2$ MeV. The $2^+$ three-body phase shift presents a broad structure between 1 and 4 MeV \cite{PDB12}. 
In the discretized continuum approximation, this structure shows up as broad peaks, which are visible in Fig.\ \ref{fig_bu} but which do not correspond to physical states. \begin{figure}[htb] \begin{center} \epsfig{file=lip_bu.eps,width=6.5cm} \caption{$\lip$ breakup cross sections at $\elab=66$ MeV ($\ecm=5.5$ MeV) for different $j$ values.} \label{fig_bu} \end{center} \end{figure} \subsection{$\lip$ and $\lin$ equivalent potentials} As mentioned before, CDCC calculations involve a large number of channels. It is, however, possible to simulate these large-scale calculations by equivalent optical potentials. The procedure follows Refs.\ \cite{TNL89,De18}, and provides potentials which approximately reproduce the multichannel CDCC calculations. Having this equivalent optical potential, it is important to assess its accuracy to reproduce the CDCC elastic cross section. The $\lip$ equivalent potential $V_{\rm eq}(R)$ is shown in Fig.\ \ref{fig_pot_lip}(a), where I also plot the KD potential for the sake of comparison. The general shapes of the real and imaginary terms are similar for both potentials. The real component is a typical volume term, and the imaginary part corresponds to a surface absorption. The inset of Fig.\ \ref{fig_pot_lip}(a) focuses on radial distances near the barrier, where the sensitivity of the cross section is the largest. As expected, the role of breakup channels in CDCC is to reduce the barrier, and to increase the absorption at the surface. This effect can be seen in Fig.\ \ref{fig_pot_lip}(b), where the cross sections are presented. Although this global parametrization is not fitted on exotic nuclei such as $\oli$, the calculation with the $\lip$ KD global potential reproduces fairly well the data up to $\theta\approx 100^{\circ}$, but strongly deviates at large angles. The cross section obtained with the equivalent CDCC potential (solid curve) is in excellent agreement with the full CDCC calculation (dotted curve), which shows that the potential of Fig.\ \ref{fig_pot_lip}(a) provides a good approximation of the CDCC model. \begin{figure}[htb] \begin{center} \epsfig{file=pot_lip.eps,width=6.5cm} \caption{(a) Real and imaginary equivalent $\lip$ potentials with the CDCC (black curves). The KD potentials are shown in red. (b) Corresponding cross sections obtained with the potentials (solid lines) and with the full CDCC calculations (dotted line). The experimental data are taken from Ref.\ \cite{TKA17}.} \label{fig_pot_lip} \end{center} \end{figure} I complement this study with the $\lin$ scattering at the same energy. The goal is twofold: (i) to test the KD global potential for neutrons; (ii) to define a $\lin$ equivalent potential which, together with the $\lip$ potential discussed before, will be used to investigate $\lid$ scattering. Figure \ref{fig_pot_lin} contains the potentials (a) and cross sections (b) at $\ecm=5.5$ MeV. The conclusions are similar to those of Fig.\ \ref{fig_pot_lip}. Breakup effects have an important role around $R\sim 4-6$ fm. The KD global potential provides a similar cross section, but only predicts one minimum. The equivalent potential gives a cross section very close to the original CDCC calculation. 
\begin{figure}[htb] \begin{center} \epsfig{file=pot_lin.eps,width=6.5cm} \caption{See caption to Fig.\ \ref{fig_pot_lip} for $\lin$.} \label{fig_pot_lin} \end{center} \end{figure} \section{The $\lid$ scattering} The dipole resonance observed in the $\lip$ inelastic cross sections \cite{TKA17} was first suggested in a $\lid$ experiment \cite{KST15}. In addition to the broad nature of this dipole resonance, which makes theoretical models rather complicated, the low breakup threshold of the deuteron requires a $3+2$ model. The principle of the CDCC formalism remains unchanged with respect to $\lip$ ($3+1$ model) but the calculations are much longer since $(i)$ the coupling potentials involve multidimension integrals (see Appendix), $(ii)$ the number of channels in the coupled system (\ref{eq20}) is the product of the numbers of pseudostates in $\oli$ and in $d$. Having a full convergence of the cross sections, in a wide angular range, is therefore a challenge. I have started this exploratory study by using a conventional CDCC approach, where the breakup of $\oli$ is simulated by the equivalent $\lip$ and $\lin$ potentials defined in Sec.\ IV.C. In this way, I deal with a standard CDCC calculation which only includes deuteron pseudostates. The cross section is presented in Fig.\ \ref{fig_lid1}, with the experimental data of Ref.\ \cite{KST15}. As for $\lip$, I adopt a linear scale to highlight the convergence of the calculation. I include deuteron partial waves up to $\jmax=6$, and the maximum energy is $ \emax=20$ MeV. At small angles ($\theta \lesssim 30^{\circ}$), the calculation converges rapidly. The data are consistent with a maximum around $\theta \approx 60^{\circ}$, which is supported by theory, but the experimental amplitude is lower by a factor 5. Above $\theta \approx 60^{\circ}$, the experimental oscillation is reproduced by the calculation, but the convergence with respect to the deuteron angular momentum is slow. \begin{figure}[htb] \begin{center} \epsfig{file=lid1.eps,width=7.5cm} \caption{$\lid$ elastic cross section at $\elab=55.3$ MeV with the $\lip$ and $\lin$ equivalent potentials. The labels indicate the $\jmax$ value in the deuteron. The experimental data are taken from Ref. \cite{KST15}.} \label{fig_lid1} \end{center} \end{figure} The full $3+2$ cross sections are displayed in Fig.\ \ref{fig_lid2}. As these calculations, involving $\oli$ and $d$ breakup simultaneously, are extremely time consuming, I first consider single breakup. Figure \ref{fig_lid2}(a) includes $\oli$ breakup only, the deuteron remaining in the ground state. Again, the calculation predicts a maximum near $\theta \approx 60^{\circ}$, but the amplitude is overestimated. As for $\lip$, the role of $j=1^-$ is minor, but $j=2^+$ PS significantly modify the cross section. In Fig.\ \ref{fig_lid2}(b), the $\oli$ breakup is neglected, and partial waves up to $\jmax=4$ are included in the deuteron. Clearly, increasing $\jmax$ reduces the amplitude of the peak but the calculation still overestimates the data. Figure \ref{fig_lid2}(c) illustrates the various possibilities. When breakup effects are included in $\oli$ and in $d$, the amplitude is reduced, but the small experimental values around $\theta \approx 60^{\circ}$ cannot be reproduced. Of course, the convergence is not fully achieved. To keep calculations within reasonable limits, I have set $\jmax=2$ for $\oli$, and $\jmax=4$ for the deuteron. 
With these conditions, the number of $\lid$ states is 1100, and the size of the coupled-channel system (\ref{eq20}) is close to 9000 when the channel spin $I$ and the orbital momentum $L$ are taken into account. At the moment, it is virtually impossible to go beyond these values, but increasing the CDCC basis might slightly reduce the amplitude of the peak. \onecolumngrid \begin{figure}[htb] \begin{center} \epsfig{file=lid2.eps,width=13cm} \caption{$\lid$ elastic cross sections at $\elab=55.3$ MeV. The experimental data are taken from Ref.\ \cite{KST15}. (a) Only $\oli$ breakup is included. (b) Only deuteron breakup is included. (c) Convergence of the full five-body calculation. (d) Comparison of the CDCC calculation with the Rutherford cross section and with the optical potential of Ref.\ \cite{KST15}. } \label{fig_lid2} \end{center} \end{figure} \twocolumngrid In order to analyze the CDCC results, I show in Fig.\ \ref{fig_lid2}(d) (logarithmic scale) the Rutherford cross section, and the cross section computed with the optical potential of Ref.\ \cite{KST15}. Surprisingly, the experimental data are close to pure Rutherford scattering. The optical potential of Ref.\ \cite{KST15} nicely reproduces the data. This potential is compared to the CDCC equivalent potential in Fig.\ \ref{fig_lid_pot}. The main difference between them is that the CDCC predicts a larger range for the real and imaginary parts. In the single-channel approximation (grey lines), the range of the imaginary part is slightly smaller than in the full calculation. In a reaction involving two fragile nuclei, it seems natural that their interaction extends to large distances. However, the optical potential that fits the data is characterized by a fairly short range. This apparent contradiction certainly deserves more experimental studies, in particular at other scattering energies. \begin{figure}[htb] \begin{center} \epsfig{file=li11d_pot.eps,width=7.5cm} \caption{Real (solid line) and imaginary (dashed line) $\lid$ equivalent potential compared to the optical potential of Ref.\ \cite{KST15}. The grey lines correspond to the single-channel calculation. The inset presents a zoom on the imaginary potential at large distances.} \label{fig_lid_pot} \end{center} \end{figure} \section{Conclusion} The main goal of this work is the simultaneous investigation of $\lip$ and $\lid$ scattering at low energies. $\oli$ and the deuteron have a low breakup threshold, which makes the continuum quite important. In both reactions, I have used the same three-body model for $\oli$. I paid special attention to E1 transitions, and extended a previous calculation of the E1 distribution \cite{PDB12} by considering a correction to the long-wavelength approximation. This correction could give rise to isoscalar E1 transitions, as suggested in Refs.\ \cite{TKA17,KST15} on the basis of a large $\oli$ radius. However, while I confirm the existence of a dipole resonance at low energies, I find that the isoscalar part of the E1 matrix element is negligible, owing to the low photon energies. For the $\lip$ elastic scattering, the CDCC calculation reproduces the experiment fairly well. The role of the $\linn$ continuum can be seen at large angles $(\theta > 90^{\circ})$, small angles being weakly sensitive. Notice that the calculations depend on a $\lipb$ potential, which is available from global parametrizations only. Experimental data on $\lipb$ elastic scattering would be helpful to derive a more accurate optical potential. 
From the CDCC formalism, I have determined $\lip$ and $\lin$ equivalent potentials. The goal is to use them in $\lid$ scattering with the additional deuteron breakup. The comparison of these equivalent potentials with the global potential of Ref.\ \cite{KD03} shows that the main difference is in the range. This result is not surprising since the large radius of $\oli$, as well as its low binding energy, are not considered in global parametrizations. The theoretical description of $\lid$ is a difficult challenge. The incident energy of $\oli$ is almost the same as in $\lip$ (55.3 MeV) and the c.m. energy is therefore almost double. In spite of this, the $\lid$ data are compatible with a pure Rutherford scattering, as shown in Fig.\ \ref{fig_lid2}(d). I have investigated the $\lid$ system in two ways: in the former, I use a standard three-body model using $\lip$ and $\lin$ interactions, and in the latter I extend the CDCC formalism to five bodies, with $\oli$ described as $\linn$. Both approaches provide qualitatively similar cross sections. At small angles $(\theta \lesssim 60^{\circ})$, the apparent peak in the data is present in the calculation, but its amplitude is much larger. Increasing the number of pseudostates reduces the amplitude, but it remains overestimated. The five-body calculation is a numerical challenge, owing to the very large number of channels. It is difficult to get a perfect convergence although my calculation should not be far from convergence. Of course such calculations have shortcomings: (i) the $\lipb$ and $\linb$ optical potentials are not experimentally known, (ii) antisymmetrization effects between the neutrons of $\oli$ and of the deuteron are neglected, (iii) at these low energies, the $\lipb$ and $\linb$ Pauli forbidden states may play a role. On the experimental side, data so close to Rutherford scattering are unexpected. The authors of Ref.\ \cite{KST15} fit these data with an optical potential presenting a short range. This is illustrated in Fig.\ \ref{fig_lid_pot}, where I compare the CDCC equivalent potential with the optical model of Ref.\ \cite{KST15}. As for $\lipb$ scattering, more data on $\lid$, especially at small angles, would be welcome to confirm the short-range of the $\lid$ potentials. \section*{Acknowledgments} I am grateful to R. Kanungo for useful discussions about the experimental data. This work was supported by the Fonds de la Recherche Scientifique - FNRS under Grant Numbers 4.45.10.08 and J.0049.19. It benefited from computational resources made available on the Tier-1 supercomputer of the F\'ed\'eration Wallonie-Bruxelles, infrastructure funded by the Walloon Region under the grant agreement No. 1117545. \onecolumngrid
1,116,691,497,490
arxiv
\section{Introduction} We consider the problem \begin{equation} \label{eq:LowRankOpti} \min_{X \in \mathbb{R}_{\le r}^{m \times n}} f(X) \end{equation} of minimizing a differentiable function $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ with locally Lipschitz continuous gradient on the \emph{determinantal variety} \cite[Lecture~9]{Harris} \begin{equation} \label{eq:RealDeterminantalVariety} \mathbb{R}_{\le r}^{m \times n} := \{X \in \mathbb{R}^{m \times n} \mid \rank X \le r\}, \end{equation} $m$, $n$, and $r$ being positive integers such that $r < \min\{m,n\}$. This problem has been shown to appear in several applications such as matrix equations, model reduction, matrix sensing, and matrix completion; see, e.g., \cite{SchneiderUschmajew2015,HaLiuFoygel2020} and the references therein. As problem \eqref{eq:LowRankOpti} is in general intractable \cite{GillisGlineur2011}, our goal is just to find a \emph{stationary point} of this problem, i.e., a zero of the \emph{stationarity measure} \begin{equation} \label{eq:StationarityMeasure} \mathrm{s}_f : \mathbb{R}_{\le r}^{m \times n} \to \mathbb{R} : X \mapsto \norm{\proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}}{-\nabla f(X)}} \end{equation} that returns the norm of any projection of $-\nabla f(X)$ onto the tangent cone to $\mathbb{R}_{\le r}^{m \times n}$ at $X$; the notation is introduced in Section~\ref{sec:Preliminaries}. To the best of our knowledge, no first-order algorithm in the literature (see Proposition~\ref{prop:NoRankRelatedRetractionLocallyRadiallyLipschitzContinuouslyDifferentiable}) has been proved to produce, when given an arbitrary initial iterate, a feasible sequence $(X_i)_{i \in \mathbb{N}}$ such that $\mathrm{s}_f$ goes to zero along every convergent subsequence. Furthermore, this property would not be sufficient to guarantee that every accumulation point of $(X_i)_{i \in \mathbb{N}}$ is a stationary point. Indeed, even if $(X_i)_{i \in \mathbb{N}}$ is convergent and $\lim_{i \to \infty} \mathrm{s}_f(X_i) = 0$, because of the discontinuity of $\tancone{\mathbb{R}_{\le r}^{m \times n}}{\cdot}$ \cite[Theorem~4.1]{OlikierAbsil2021}, $\mathrm{s}_f$ may fail to be lower semicontinuous at the limit $X$ of $(X_i)_{i \in \mathbb{N}}$, i.e., it may happen that $\mathrm{s}_f(X) > 0$, the triplet $(X, (X_i)_{i \in \mathbb{N}}, f)$ being then called an \emph{apocalypse} according to \cite[Definition~2.7]{LevinKileelBoumal2021}. As a matter of fact, in \cite[\S 2.2]{LevinKileelBoumal2021} is presented an example of a polynomial function $f : \mathbb{R}_{\le 2}^{3 \times 3} \to \mathbb{R}$ and of an initial iterate $X_0 \in \mathbb{R}_{\le 2}^{3 \times 3}$ such that the sequence $(X_i)_{i \in \mathbb{N}}$ produced by \cite[Algorithm~3]{SchneiderUschmajew2015}---dubbed $\mathrm{P}^2\mathrm{GD}$---converges to $X$, and $(X, (X_i)_{i \in \mathbb{N}}, f)$ is an apocalypse. In \cite[\S 3]{LevinKileelBoumal2021}, a second-order algorithm is proposed that, under some assumptions on $f$, produces sequences the accumulation points of which are stationary. 
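For concreteness, the following is a minimal NumPy sketch of one instance of \eqref{eq:LowRankOpti}, namely low-rank matrix completion; it is only meant as an illustration, and the function names (\texttt{completion\_cost\_and\_grad}, \texttt{f}, \texttt{grad\_f}) as well as the mask-based formulation are assumptions of the sketch rather than notation used in this paper. Since the gradient is linear in $X$, it is in particular locally Lipschitz continuous, so the standing assumptions on $f$ are satisfied.
\begin{verbatim}
import numpy as np

def completion_cost_and_grad(M, mask):
    """Low-rank matrix completion: f(X) = 0.5 * ||mask * (X - M)||_F^2.

    M    : (m, n) data matrix (only the entries where mask is True matter).
    mask : (m, n) boolean array of observed entries.
    The gradient mask * (X - M) is linear in X, hence Lipschitz continuous.
    """
    def f(X):
        return 0.5 * np.linalg.norm(mask * (X - M)) ** 2

    def grad_f(X):
        return mask * (X - M)

    return f, grad_f
\end{verbatim}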
In \cite[\S 4]{LevinKileelBoumal2021}, the following question is raised: ``Is there an algorithm running directly on $\mathbb{R}_{\le r}^{m \times n}$ that only uses first-order information about the cost function and which is guaranteed to converge to a stationary point?'' In this paper, we introduce a first-order algorithm on $\mathbb{R}_{\le r}^{m \times n}$ (Algorithm~\ref{algo:P2GDRankReduction}) that is apocalypse-free, in the sense that every accumulation point of the generated sequence is a stationary point (Theorem~\ref{thm:P2GDRankReductionPolakConvergence}), which also implies that $\mathrm{s}_f$ goes to zero along every convergent subsequence (Corollary~\ref{coro:P2GDRankReductionPolakConvergence}). This algorithm applies the main step of $\mathrm{P}^2\mathrm{GD}$ but by taking the numerical rank into account to perform suitable rank reductions. As mentioned in \cite[Remark~2.11]{LevinKileelBoumal2021}, low-rank optimization algorithms using rank reductions can already be found in the literature; see, e.g., \cite[Algorithm~3]{ZhouEtAl2016} and \cite[\S 3.3]{Levin2020}. However, the proposed algorithm uses rank reductions to exploit the continuity of both the singular values and the restriction of $\mathrm{s}_f$ to the smooth manifold \cite[Proposition~4.1]{HelmkeShayman1995} \begin{equation} \label{eq:RealFixedRankManifold} \mathbb{R}_{\ushort{r}}^{m \times n} := \{X \in \mathbb{R}^{m \times n} \mid \rank X = \ushort{r}\} \end{equation} for every $\ushort{r} \in \{1, \dots, r\}$; see Corollary~\ref{coro:ContinuityStationarityMeasureLowRankOptiConstantRank} for the latter. This allows us to prove Theorem~\ref{thm:P2GDRankReductionPolakConvergence}. An overview of Algorithm~\ref{algo:P2GDRankReduction}'s design and analysis strategy can be found in Section~\ref{sec:ProposedAlgorithm}. This paper is organized as follows. After preliminaries in Section~\ref{sec:Preliminaries}, we analyze $\mathrm{P}^2\mathrm{GD}$ in Section~\ref{sec:ProjectedSteepestDescentBacktrackingLineSearchDeterminantalVariety} under the assumption of local Lipschitz continuity of $\nabla f$. Then, we introduce the proposed algorithm in Section~\ref{sec:ProposedAlgorithm}, and conduct a convergence analysis in Section~\ref{sec:ConvergenceAnalysis} based on the results collected in the preceding sections. Finally, Section~\ref{sec:Conclusion} gathers concluding remarks, and Appendix~\ref{sec:Appendix} complementary results. \section{Preliminaries} \label{sec:Preliminaries} This section gathers notation and preliminary results that will be used throughout the paper. Section~\ref{subsec:ProjectionOntoClosedCones} recalls basic facts about the projection onto closed cones in $\mathbb{R}^{m \times n}$, and Section~\ref{subsec:NumericalRankRankReduction} focuses in particular on the projection onto $\mathbb{R}_{\le \ushort{r}}^{m \times n}$ for every nonnegative integer $\ushort{r} < \min\{m,n\}$, with the convention $\mathbb{R}_{\le 0}^{m \times n} := \mathbb{R}_0^{m \times n} := \{0_{m \times n}\}$. Based on Section~\ref{subsec:ProjectionOntoClosedCones}, we review in Section~\ref{subsec:TangentConeStationarityMeasureDeterminantalVariety} the notion of tangent cone and the stationarity measure $\mathrm{s}_f$ defined in \eqref{eq:StationarityMeasure}. Then, in Section~\ref{subsec:UpperBoundDistanceToDeterminantalVarietyFromTangentLine}, we derive an upper bound on the distance to $\mathbb{R}_{\le r}^{m \times n}$ from any of its tangent lines. 
Next, in Section~\ref{subsec:ContinuityRestrictionStationarityMeasureLowRankOptiToFixedRank}, we prove based on \cite[Theorem~4.1]{OlikierAbsil2021} that, for every $\ushort{r} \in \{1, \dots, r\}$, the restriction of $\mathrm{s}_f$ to the smooth manifold $\mathbb{R}_{\ushort{r}}^{m \times n}$ defined in \eqref{eq:RealFixedRankManifold} is continuous. Finally, in Section~\ref{subsec:DeterminantalVarietyNoSerendipitousPoint}, we prove that $\mathbb{R}_{\le r}^{m \times n}$ has no serendipitous point in the sense of \cite[Definition~2.8]{LevinKileelBoumal2021}. \subsection{Projection onto closed cones} \label{subsec:ProjectionOntoClosedCones} In this paper, $\mathbb{R}^{m \times n}$ is endowed with the Frobenius inner product $\ip{\cdot}{\cdot}$, $\norm{\cdot}$ denotes the Frobenius norm, and, for every $X \in \mathbb{R}^{m \times n}$ and every $\rho \in (0,\infty)$, $B[X,\rho] := \{Y \in \mathbb{R}^{m \times n} \mid \norm{X-Y} \le \rho\}$ is the closed ball of center $X$ and radius $\rho$ in $\mathbb{R}^{m \times n}$. For every nonempty subset $\mathcal{S}$ of $\mathbb{R}^{m \times n}$ and every $X \in \mathbb{R}^{m \times n}$, $d(X,\mathcal{S}) := \inf_{Y \in \mathcal{S}} \norm{X-Y}$ is the distance from $X$ to $\mathcal{S}$, and $\proj{\mathcal{S}}{X} := \argmin_{Y \in \mathcal{S}} \norm{X-Y}$ is the projection of $X$ onto $\mathcal{S}$; $\proj{\mathcal{S}}{X}$ can be empty in general but not if $\mathcal{S}$ is closed, as formulated in the following proposition. \begin{proposition}[{\cite[Example~1.20]{RockafellarWets}}] \label{eq:ProjectionOntoClosedSet} For every nonempty closed subset $\mathcal{S}$ of $\mathbb{R}^{m \times n}$ and every $X \in \mathbb{R}^{m \times n}$, $\proj{\mathcal{S}}{X}$ is nonempty and compact. \end{proposition} A nonempty subset $\mathcal{C}$ of $\mathbb{R}^{m \times n}$ is said to be a \emph{cone} if, for every $X \in \mathcal{C}$ and every $\lambda \in [0,\infty)$, $\lambda X \in \mathcal{C}$. In this paper, we will mostly project onto closed cones and, in that case, the preceding proposition can be completed as follows. \begin{proposition}[{\cite[Proposition~A.6]{LevinKileelBoumal2021}}] \label{prop:NormProjectionOntoClosedCone} Let $\mathcal{C} \subseteq \mathbb{R}^{m \times n}$ be a closed cone. For every $X \in \mathbb{R}^{m \times n}$ and every $Y \in \proj{\mathcal{C}}{X}$, \begin{equation*} \norm{Y}^2 = \norm{X}^2 - d(X,\mathcal{C})^2. \end{equation*} \end{proposition} \subsection{Numerical rank and rank reduction} \label{subsec:NumericalRankRankReduction} The goal of this section is to prove Proposition~\ref{prop:LocalDeltaRank} on which the proposed algorithm crucially relies. To this end, we first need to review basic facts about singular values, numerical rank, and rank reduction. In what follows, the singular values of $X \in \mathbb{R}^{m \times n}$ are denoted by $\sigma_1(X) \ge \dots \ge \sigma_{\min\{m,n\}}(X) \ge 0$, as in \cite[\S 2.4.1]{GolubVanLoan}. Moreover, if $X \ne 0_{m \times n}$, then $\sigma_1(X)$ and $\sigma_{\rank X}(X)$ are respectively denoted by $\sigma_{\max}(X)$ and $\sigma_{\min}(X)$. The following proposition shows that the singular values are Lipschitz continuous with Lipschitz constant $1$, a property that will be used in the proof of Proposition~\ref{prop:LocalDeltaRank} and in the convergence analysis in Section~\ref{sec:ConvergenceAnalysis}. 
\begin{proposition}[{\cite[Corollary~8.6.2]{GolubVanLoan}}] \label{prop:SingularValuesLipschitz} For every $X, Y \in \mathbb{R}^{m \times n}$ and every $j \in \{1, \dots, \min\{m,n\}\}$, \begin{equation*} |\sigma_j(X)-\sigma_j(Y)| \le \sigma_1(X-Y) \le \norm{X-Y}. \end{equation*} \end{proposition} The numerical rank can be defined as follows. \begin{definition}[{\cite[p. 276]{GolubVanLoan}}] \label{defi:DeltaRank} Given $\Delta \in [0,\infty)$ and $X \in \mathbb{R}^{m \times n}$, the \emph{$\Delta$-rank} of $X$ is \begin{equation*} \rank_\Delta X := \left\{\begin{array}{ll} 0 & \text{if } X = 0_{m \times n},\\ \max\{j \in \{1, \dots, \rank X\} \mid \sigma_j(X) > \Delta\} & \text{otherwise}. \end{array}\right. \end{equation*} \end{definition} By reducing the rank of $X \in \mathbb{R}^{m \times n}$, we mean computing an element of $\proj{\mathbb{R}_{\le \ushort{r}}^{m \times n}}{X}$ for some nonnegative integer $\ushort{r} < \rank X$. According to the Eckart--Young theorem \cite{EckartYoung1936}, this can be achieved by truncating the SVD of $X$. In particular, for every nonnegative integer $\ushort{r} < \min\{m,n\}$ and every $X \in \mathbb{R}^{m \times n}$: \begin{enumerate} \item if $\rank X \le \ushort{r}$, then $\proj{\mathbb{R}_{\le \ushort{r}}^{m \times n}}{X} = \{X\}$; \item if $\rank X > \ushort{r}$, then $d(X,\mathbb{R}_{\le \ushort{r}}^{m \times n}) = d(X,\mathbb{R}_{\ushort{r}}^{m \times n}) = \sqrt{\sum_{j=\ushort{r}+1}^{\rank X} \sigma_j^2(X)}$ and $\proj{\mathbb{R}_{\le \ushort{r}}^{m \times n}}{X} = \proj{\mathbb{R}_{\ushort{r}}^{m \times n}}{X}$. \end{enumerate} We can now introduce Proposition~\ref{prop:LocalDeltaRank}; it will be invoked in the convergence analysis conducted in Section~\ref{sec:ConvergenceAnalysis}. For convenience, we introduce the notation $\mathbb{R}_{< \ushort{r}}^{m \times n} := \mathbb{R}_{\le \ushort{r}}^{m \times n} \setminus \mathbb{R}_{\ushort{r}}^{m \times n}$ and $\mathbb{R}_{\ge \ushort{r}}^{m \times n} := \mathbb{R}^{m \times n} \setminus \mathbb{R}_{< \ushort{r}}^{m \times n}$ for every positive integer $\ushort{r} \le \min\{m,n\}$. \begin{proposition} \label{prop:LocalDeltaRank} Let $\ushort{r} \in \{1, \dots, \min\{m,n\}\}$ and $X \in \mathbb{R}_{\ushort{r}}^{m \times n}$. For every $\varepsilon \in (0,\infty)$ and every $Y \in B[X,\varepsilon]$, $\proj{\mathbb{R}_{\le \ushort{r}}^{m \times n}}{Y} \subset B[X,2\varepsilon]$. Moreover, for every $\Delta \in (0,\infty)$, every $\varepsilon \in (0,\sigma_{\ushort{r}}(X)) \cap (0,\Delta]$, and every $Y \in B[X,\varepsilon]$, it holds that $\rank_\Delta Y \le \ushort{r} \le \rank Y$. \end{proposition} \begin{proof} The inclusion holds because, for every $\hat{Y} \in \proj{\mathbb{R}_{\le \ushort{r}}^{m \times n}}{Y}$, \begin{equation*} \norm{X-\hat{Y}} \le \norm{X-Y} + \norm{Y-\hat{Y}} = \norm{X-Y} + d(Y,\mathbb{R}_{\ushort{r}}^{m \times n}) \le \norm{X-Y} + \norm{X-Y} \le 2 \varepsilon. \end{equation*} The second inequality holds because $B[X,\varepsilon] \subset \mathbb{R}_{\ge \ushort{r}}^{m \times n}$ since $\varepsilon < \sigma_{\ushort{r}}(X) = d(X,\mathbb{R}_{< \ushort{r}}^{m \times n})$. Besides, the first inequality is trivial if the second one is an equality. Let us therefore consider $Y \in B[X,\varepsilon]$ such that $\ushort{r} < \rank Y$. 
Then, the first inequality holds because \begin{equation*} \sigma_{\ushort{r}+1}(Y) = |\sigma_{\ushort{r}+1}(Y)-\sigma_{\ushort{r}+1}(X)| \le \norm{X-Y} \le \varepsilon \le \Delta, \end{equation*} where the first inequality follows from Proposition~\ref{prop:SingularValuesLipschitz}. \end{proof} \subsection{Tangent cone and stationarity measure on the determinantal variety} \label{subsec:TangentConeStationarityMeasureDeterminantalVariety} The concept of tangent cone plays a fundamental role in constrained optimization. \begin{proposition} \label{prop:TangentCone} For every nonempty subset $\mathcal{S}$ of $\mathbb{R}^{m \times n}$ and every $X \in \mathcal{S}$, the set \begin{equation*} \tancone{\mathcal{S}}{X} := \Big\{V \in \mathbb{R}^{m \times n} \mid \exists \begin{array}{l} (t_i)_{i \in \mathbb{N}} \text{ in } (0,\infty) \text{ converging to } 0 \\ (V_i)_{i \in \mathbb{N}} \text{ in } \mathbb{R}^{m \times n} \text{ converging to } V \end{array} : X+t_iV_i \in \mathcal{S} \; \forall i \in \mathbb{N}\Big\} \end{equation*} is a closed cone, not necessarily convex however, called the \emph{tangent cone} to $\mathcal{S}$ at $X$. \end{proposition} \begin{proof} We refer to \cite[\S 2.3]{OlikierAbsil2021} and the references therein for a more complete review of the concept of tangent cone. \end{proof} We review in the forthcoming Proposition~\ref{prop:ProjectionOntoTangentConeDeterminantalVariety} formulas describing $\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}$ and $\proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}}{G}$ for every $X \in \mathbb{R}_{\le r}^{m \times n}$ and every $G \in \mathbb{R}^{m \times n}$ based on the SVD of $X$. Before that, we can already observe that Propositions~\ref{prop:NormProjectionOntoClosedCone} and \ref{prop:TangentCone} together imply that the stationarity measure $\mathrm{s}_f$ introduced in \eqref{eq:StationarityMeasure} is well defined with, for every $X \in \mathbb{R}_{\le r}^{m \times n}$, \begin{equation} \label{eq:StationarityMeasureExplicitFormula} \mathrm{s}_f(X) = \sqrt{\norm{\nabla f(X)}^2 - d(-\nabla f(X),\tancone{\mathbb{R}_{\le r}^{m \times n}}{X})^2}. \end{equation} The formulas given in the following proposition can be obtained from the definition in Proposition~\ref{prop:TangentCone}. \begin{proposition}[{\cite[Theorem~3.2 and Corollary~3.3]{SchneiderUschmajew2015}}] \label{prop:ProjectionOntoTangentConeDeterminantalVariety} Let $\ushort{r} \in \{1, \dots, r\}$, $X \in \mathbb{R}_{\ushort{r}}^{m \times n}$, and \begin{equation*} X = [U \; U_\perp] \begin{bmatrix} \Sigma & \\ & 0_{m-\ushort{r} \times n-\ushort{r}} \end{bmatrix} [V \; V_\perp]^\top \end{equation*} be an SVD. Then, \begin{equation*} \tancone{\mathbb{R}_{\le r}^{m \times n}}{X} = [U \; U_\perp] \begin{bmatrix} \mathbb{R}^{\ushort{r} \times \ushort{r}} & \mathbb{R}^{\ushort{r} \times n-\ushort{r}} \\ \mathbb{R}^{m-\ushort{r} \times \ushort{r}} & \mathbb{R}_{\le r-\ushort{r}}^{m-\ushort{r} \times n-\ushort{r}} \end{bmatrix} [V \; V_\perp]^\top. \end{equation*} Moreover, for every $G \in \mathbb{R}^{m \times n}$, \begin{equation*} G = [U \; U_\perp] \begin{bmatrix} A & B \\ C & D \end{bmatrix} [V \; V_\perp]^\top \end{equation*} with $A = U^\top G V$, $B = U^\top G V_\perp$, $C = U_\perp^\top G V$, and $D = U_\perp^\top G V_\perp$, and \begin{equation*} \proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}}{G} = [U \; U_\perp] \begin{bmatrix} A & B \\ C & \proj{\mathbb{R}_{\le r-\ushort{r}}^{m-\ushort{r} \times n-\ushort{r}}}{D} \end{bmatrix} [V \; V_\perp]^\top. 
\end{equation*} Finally, \begin{equation} \label{eq:StationarityMeasureGradientNorm} \norm{\nabla f(X)} \ge \mathrm{s}_f(X) \ge \sqrt{\frac{r-\ushort{r}}{\min\{m,n\}-\ushort{r}}} \norm{\nabla f(X)}. \end{equation} \end{proposition} We are now going to review a fundamental result concerning $\mathrm{s}_f$. To this end, we need to recall a basic definition from set-valued analysis. A \emph{correspondence}, or a \emph{set-valued mapping}, is a triple $F := (A, B, G)$ where $A$ and $B$ are sets respectively called the \emph{set of departure} and the \emph{set of destination} of $F$, and $G$ is a subset of $A \times B$ called the \emph{graph} of $F$. If $F := (A, B, G)$ is a correspondence, written $F : A \multimap B$, then the \emph{image} of $x \in A$ by $F$ is $F(x) := \{y \in B \mid (x,y) \in G\}$, and the \emph{domain} of $F$ is $\dom F := \{x \in A \mid F(x) \ne \emptyset\}$. \begin{proposition} For every differentiable function $\phi : \mathbb{R}^{m \times n} \to \mathbb{R}$: \begin{enumerate} \item the correspondence \begin{equation*} \mathbb{R}_{\le r}^{m \times n} \multimap \mathbb{R}^{m \times n} : X \mapsto \proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}}{-\nabla \phi(X)} \end{equation*} depends on $\phi$ only through its restriction $\phi|_{\mathbb{R}_{\le r}^{m \times n}}$; \item if $X \in \mathbb{R}_{\le r}^{m \times n}$ is a local minimizer of $\phi|_{\mathbb{R}_{\le r}^{m \times n}}$, then $\mathrm{s}_\phi(X) = 0$. \end{enumerate} \end{proposition} \begin{proof} The first part corresponds to \cite[Theorem~A.9(a)]{LevinKileelBoumal2021}, and the second follows from \cite[Theorem~6.12]{RockafellarWets} and \cite[Proposition~A.6]{LevinKileelBoumal2021}. \end{proof} \subsection{An upper bound on the distance to the determinantal variety from a tangent line} \label{subsec:UpperBoundDistanceToDeterminantalVarietyFromTangentLine} In this section, we look for an upper bound on $d(X+G, \mathbb{R}_{\le r}^{m \times n})$ holding for every $X \in \mathbb{R}_{\le r}^{m \times n} \setminus \{0_{m \times n}\}$ and every $G \in \tancone{\mathbb{R}_{\le r}^{m \times n}}{X}$. A trivial bound is $\norm{G}$, and \cite[Proposition~3.6]{SchneiderUschmajew2015} tightens it to $\frac{1}{\sqrt{2}} \norm{G}$. However, we will need an upper bound proportional to $\norm{G}^2$ in the proof of Proposition~\ref{prop:P2GDstepUpperBoundCost}; such a bound is given in Proposition~\ref{prop:UpperBoundDistanceToDeterminantalVarietyFromTangentLine} and shown to be tight in Proposition~\ref{prop:TightUpperBoundDistanceToDeterminantalVarietyFromTangentLine}. \begin{proposition} \label{prop:UpperBoundDistanceToDeterminantalVarietyFromTangentLine} For every $X \in \mathbb{R}_{\le r}^{m \times n} \setminus \{0_{m \times n}\}$ and every $G \in \tancone{\mathbb{R}_{\le r}^{m \times n}}{X}$, \begin{equation*} d(X+G, \mathbb{R}_{\le r}^{m \times n}) \le \frac{\sqrt{\rank X}}{2\sigma_{\min}(X)}\norm{G}^2. \end{equation*} \end{proposition} \begin{proof} Let $\ushort{r} := \rank X$. Let \begin{equation*} X = [U \; U_\perp] \begin{bmatrix} \Sigma & \\ & 0_{m-\ushort{r} \times n-\ushort{r}} \end{bmatrix} [V \; V_\perp]^\top \end{equation*} be an SVD. 
By Proposition~\ref{prop:ProjectionOntoTangentConeDeterminantalVariety}, there are $A \in \mathbb{R}^{\ushort{r} \times \ushort{r}}$, $B \in \mathbb{R}^{\ushort{r} \times n-\ushort{r}}$, $C \in \mathbb{R}^{m-\ushort{r} \times \ushort{r}}$, and $D \in \mathbb{R}_{\le r-\ushort{r}}^{m-\ushort{r} \times n-\ushort{r}}$ such that \begin{equation*} G = [U \; U_\perp] \begin{bmatrix} A & B \\ C & D \end{bmatrix} [V \; V_\perp]^\top. \end{equation*} Let us define $\gamma : [0,\infty) \to \mathbb{R}_{\le r}^{m \times n}$ by \begin{equation*} \gamma(t) := \big(U+t(U_\perp C + {\textstyle\frac{1}{2}}UA)\Sigma^{-1}\big) \Sigma \big(V+t(V_\perp B^\top + {\textstyle\frac{1}{2}}VA^\top)\Sigma^{-1}\big)^\top + tU_\perp D V_\perp^\top, \end{equation*} where the first term is inspired from \cite[(13)]{ZhouEtAl2016}; $\gamma$ is well defined since the ranks of the two terms are respectively upper bounded by $\ushort{r}$ and $r-\ushort{r}$. For every $t \in [0,\infty)$, \begin{equation*} \gamma(t) = X + t G + \frac{t^2}{4} [U \; U_\perp] \begin{bmatrix} A \\ 2C \end{bmatrix} \Sigma^{-1} \begin{bmatrix} A & 2B \end{bmatrix} [V \; V_\perp]^\top \end{equation*} thus \begin{equation*} d(X+tG, \mathbb{R}_{\le r}^{m \times n}) \le \norm{(X+tG)-\gamma(t)} = \frac{t^2}{4} \left\|\begin{bmatrix} A\Sigma^{-1}A & 2A\Sigma^{-1}B \\ 2C\Sigma^{-1}A & 4C\Sigma^{-1}B\end{bmatrix}\right\|. \end{equation*} Observe that \begin{align*} &\hspace*{-1cm}\left\|\begin{bmatrix} A\Sigma^{-1}A & 2A\Sigma^{-1}B \\ 2C\Sigma^{-1}A & 4C\Sigma^{-1}B\end{bmatrix}\right\|^2\\ &= \norm{A\Sigma^{-1}A}^2 + 4 \norm{A\Sigma^{-1}B}^2 + 4 \norm{C\Sigma^{-1}A}^2 + 16 \norm{C\Sigma^{-1}B}^2\\ &\le \norm{\Sigma^{-1}}^2 \left(\norm{A}^4 + 4 \norm{A}^2 \norm{B}^2 + 4 \norm{A}^2 \norm{C}^2 + 16 \norm{B}^2 \norm{C}^2\right)\\ &\le \norm{\Sigma^{-1}}^2 \norm{G}^4 \max_{\substack{x, y, z \in \mathbb{R} \\ x^2+y^2+z^2 = 1}} x^4 + 4 x^2 y^2 + 4 x^2 z^2 + 16 y^2 z^2\\ &= 4 \norm{\Sigma^{-1}}^2 \norm{G}^4. \end{align*} Furthermore, \begin{equation*} \norm{\Sigma^{-1}} \le \frac{\sqrt{\ushort{r}}}{\sigma_{\ushort{r}}(X)}. \end{equation*} Therefore, for every $t \in [0,\infty)$, \begin{equation*} d(X+tG, \mathbb{R}_{\le r}^{m \times n}) \le t^2 \frac{\sqrt{\ushort{r}}}{2 \sigma_{\ushort{r}}(X)} \norm{G}^2. \end{equation*} Choosing $t = 1$ yields the result. \end{proof} \subsection{Continuity of the restriction of the stationarity measure to any fixed-rank manifold} \label{subsec:ContinuityRestrictionStationarityMeasureLowRankOptiToFixedRank} In this section, we prove that, for every $\ushort{r} \in \{1, \dots, r\}$, the restriction of $\mathrm{s}_f$ to $\mathbb{R}_{\ushort{r}}^{m \times n}$ is continuous; this will play a fundamental role in the convergence analysis conducted in Section~\ref{sec:ConvergenceAnalysis}. We refer to \cite[\S 2.2]{OlikierAbsil2021} and the references therein for the definition of continuity of correspondences between metric spaces that appears in this paper only in the two following results. \begin{proposition} \label{prop:DistanceToSetJointlyContinuous} Let $G : \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$ be continuous, $\mathcal{T} : \mathbb{R}^{m \times n} \multimap \mathbb{R}^{m \times n}$ be closed-valued, and $\mathcal{S}$ be a nonempty subset of $\dom \mathcal{T}$. If $\mathcal{T}$ is continuous at $X \in \mathcal{S}$ relative to $\mathcal{S}$, then the function \begin{equation*} \dom \mathcal{T} \to \mathbb{R} : Y \mapsto d(G(Y),\mathcal{T}(Y)) \end{equation*} is continuous at $X$ relative to $\mathcal{S}$. 
\end{proposition} \begin{proof} Let $X \in \mathcal{S}$. For every $Y \in \mathcal{S}$, \begin{align*} |d(G(X),\mathcal{T}(X))-d(G(Y),\mathcal{T}(Y))| \le\;& |d(G(X),\mathcal{T}(X))-d(G(X),\mathcal{T}(Y))|\\ &+ |d(G(X),\mathcal{T}(Y))-d(G(Y),\mathcal{T}(Y))| \end{align*} and, by \cite[Proposition~1.3.17]{Willem}, \begin{equation*} |d(G(X),\mathcal{T}(Y))-d(G(Y),\mathcal{T}(Y))| \le \norm{G(X)-G(Y)}. \end{equation*} Let $\varepsilon \in (0,\infty)$. First, by \cite[Proposition~5.11(c)]{RockafellarWets}, the function \begin{equation*} \dom \mathcal{T} \to \mathbb{R} : Y \mapsto d(G(X),\mathcal{T}(Y)) \end{equation*} is continuous at $X$ relative to $\mathcal{S}$. Thus, there exists $\delta_1 \in (0,\infty)$ such that, for every $Y \in B[X,\delta_1] \cap \mathcal{S}$, \begin{equation*} |d(G(X),\mathcal{T}(X))-d(G(X),\mathcal{T}(Y))| \le \frac{\varepsilon}{2}. \end{equation*} Second, since $G$ is continuous at $X$, there exists $\delta_2 \in (0,\infty)$ such that, for every $Y \in B[X,\delta_2]$, \begin{equation*} \norm{G(X)-G(Y)} \le \frac{\varepsilon}{2}. \end{equation*} Therefore, if $\delta := \min\{\delta_1,\delta_2\}$, then, for every $Y \in B[X,\delta] \cap \mathcal{S}$, \begin{equation*} |d(G(X),\mathcal{T}(X))-d(G(Y),\mathcal{T}(Y))| \le \varepsilon, \end{equation*} which completes the proof. \end{proof} \begin{corollary} \label{coro:ContinuityStationarityMeasureLowRankOptiConstantRank} For every $\ushort{r} \in \{1, \dots, r\}$ and every continuously differentiable function $\phi : \mathbb{R}^{m \times n} \to \mathbb{R}$, the restriction $\mathrm{s}_\phi|_{\mathbb{R}_{\ushort{r}}^{m \times n}}$ is continuous. \end{corollary} \begin{proof} Let $X \in \mathbb{R}_{\ushort{r}}^{m \times n}$. We have to prove that $\mathrm{s}_\phi$ is continuous at $X$ relative to $\mathbb{R}_{\ushort{r}}^{m \times n}$. By Proposition~\ref{prop:TangentCone} and \cite[Theorem~4.1]{OlikierAbsil2021}, $\tancone{\mathbb{R}_{\le r}^{m \times n}}{\cdot}$ is closed-valued and continuous at $X$ relative to $\mathbb{R}_{\ushort{r}}^{m \times n}$. Thus, by the preceding proposition, the function \begin{equation*} \mathbb{R}_{\le r}^{m \times n} \to \mathbb{R} : Y \mapsto d(-\nabla \phi(Y),\tancone{\mathbb{R}_{\le r}^{m \times n}}{Y}) \end{equation*} is continuous at $X$ relative to $\mathbb{R}_{\ushort{r}}^{m \times n}$. The result then follows from \eqref{eq:StationarityMeasureExplicitFormula}. \end{proof} \subsection{The determinantal variety has no serendipitous point} \label{subsec:DeterminantalVarietyNoSerendipitousPoint} According to \cite[Definition~2.8]{LevinKileelBoumal2021}, $X \in \mathbb{R}_{\le r}^{m \times n}$ is said to be \emph{serendipitous} if there exist a sequence $(X_i)_{i \in \mathbb{N}}$ in $\mathbb{R}_{\le r}^{m \times n}$ converging to $X$, a continuously differentiable function $\phi : \mathbb{R}^{m \times n} \to \mathbb{R}$, and $\varepsilon \in (0,\infty)$ such that $\mathrm{s}_\phi(X_i) > \varepsilon$ for every $i \in \mathbb{N}$, yet $\mathrm{s}_\phi(X) = 0$. The next result will be invoked in the proof of Corollary~\ref{coro:P2GDRankReductionPolakConvergence}. \begin{proposition} \label{prop:DeterminantalVarietyNoSerendipitousPoint} No point of $\mathbb{R}_{\le r}^{m \times n}$ is serendipitous. \end{proposition} \begin{proof} Let $X \in \mathbb{R}_{\le r}^{m \times n}$. Let us prove that $X$ is not serendipitous using \cite[Theorem~2.17]{LevinKileelBoumal2021}. 
To this end, we need to recall two definitions for which we refer to \cite[\S 2]{OlikierAbsil2021} and the references therein: \begin{itemize} \item for every nonempty subset $\mathcal{S}$ of $\mathbb{R}^{m \times n}$, \begin{equation*} \mathcal{S}^- := \{Y \in \mathbb{R}^{m \times n} \mid \ip{Y}{Z} \le 0 \; \forall Z \in \mathcal{S}\} \end{equation*} is a closed convex cone called the \emph{(negative) polar} of $\mathcal{S}$; \item for every sequence $(\mathcal{S}_i)_{i \in \mathbb{N}}$ of nonempty subsets of $\mathbb{R}^{m \times n}$, \begin{equation*} \outlim_{i \to \infty} \mathcal{S}_i := \big\{Y \in \mathbb{R}^{m \times n} \mid \liminf_{i \to \infty} d(Y,\mathcal{S}_i) = 0\big\} \end{equation*} is a closed set called the \emph{outer limit} of $(\mathcal{S}_i)_{i \in \mathbb{N}}$. \end{itemize} Let $(X_i)_{i \in \mathbb{N}}$ be a sequence in $\mathbb{R}_{\le r}^{m \times n}$ converging to $X$. If $\rank X = r$, then \cite[Proposition~4.3]{OlikierAbsil2021} yields $\outlim_{i \to \infty} \tancone{\mathbb{R}_{\le r}^{m \times n}}{X_i} = \tancone{\mathbb{R}_{\le r}^{m \times n}}{X}$, and thus $\big(\outlim_{i \to \infty} \tancone{\mathbb{R}_{\le r}^{m \times n}}{X_i}\big)^- = \big(\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}\big)^-$. If $\rank X < r$, then \begin{equation*} \big(\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}\big)^- = \{0_{m \times n}\} \subseteq \Big(\outlim_{i \to \infty} \tancone{\mathbb{R}_{\le r}^{m \times n}}{X_i}\Big)^-, \end{equation*} where the equality follows from \cite[(25)]{OlikierAbsil2021}. \end{proof} \section{Projected steepest descent with backtracking line search on $\mathbb{R}_{\le r}^{m \times n}$} \label{sec:ProjectedSteepestDescentBacktrackingLineSearchDeterminantalVariety} In this section, we analyze Algorithm~\ref{algo:P2GDstep}---which corresponds to the main step of $\mathrm{P}^2\mathrm{GD}$ \cite[Algorithm~3]{SchneiderUschmajew2015} except that the initial step size for the backtracking procedure is chosen in a given bounded interval---under the assumption that $f$ is differentiable with $\nabla f$ locally Lipschitz continuous. This will serve as a basis for the convergence analysis conducted in Section~\ref{sec:ConvergenceAnalysis} since Algorithm~\ref{algo:P2GDstep} is used as a subroutine in Algorithm~\ref{algo:P2GDRankReduction}. \begin{algorithm}[H] \caption{One step of $\mathrm{P}^2\mathrm{GD}$ (projected steepest descent with backtracking line search on $\mathbb{R}_{\le r}^{m \times n}$)} \label{algo:P2GDstep} \begin{algorithmic}[1] \Require $(f, X, \ushort{\alpha}, \bar{\alpha}, \beta, c)$ where $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ is differentiable with locally Lipschitz continuous gradient, $X \in \mathbb{R}_{\le r}^{m \times n}$ is such that $\mathrm{s}_f(X) > 0$, $0 < \ushort{\alpha} < \bar{\alpha} < \infty$, and $\beta, c \in (0,1)$. \State Choose $G \in \proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}}{-\nabla f(X)}$, $\alpha \in [\ushort{\alpha},\bar{\alpha}]$, and $Y \in \proj{\mathbb{R}_{\le r}^{m \times n}}{X + \alpha G}$; \While {$f(Y) > f(X) - c \alpha \mathrm{s}_f(X)^2$} \State $\alpha \gets \alpha \beta$; \State \label{algo:P2GDstep:LineSearch} Choose $Y \in \proj{\mathbb{R}_{\le r}^{m \times n}}{X + \alpha G}$; \EndWhile \State Return $Y$. \end{algorithmic} \end{algorithm} Following \cite{LevinKileelBoumal2021}, the set of all possible outputs of Algorithm~\ref{algo:P2GDstep} is denoted by $\mathrm{P}^2\mathrm{GD}(f, X, \ushort{\alpha}, \bar{\alpha}, \beta, c)$ in the rest of the paper.
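As an illustration only, the following NumPy sketch mimics Algorithm~\ref{algo:P2GDstep}: the two projections are computed with the SVD-based formula of Proposition~\ref{prop:ProjectionOntoTangentConeDeterminantalVariety} and with a truncated SVD (Eckart--Young), the initial step size is simply taken equal to $\bar{\alpha}$, and the function names, the numerical rank tolerance, and the choice of a particular element of each projection are assumptions of the sketch, not part of the formal development.
\begin{verbatim}
import numpy as np

def project_variety(Y, r):
    """Projection onto the variety of rank at most r by truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def project_tangent_cone(X, G, r, tol=1e-12):
    """Projection of G onto the tangent cone to the rank-<=r variety at X,
    using the block formula: keep A, B, C and truncate the D block."""
    U, s, Vt = np.linalg.svd(X, full_matrices=True)
    k = min(int(np.sum(s > tol)), r)               # numerical rank of X (<= r)
    M = U.T @ G @ Vt.T                             # blocks of G in the SVD bases
    M[k:, k:] = project_variety(M[k:, k:], r - k)  # rank-(r-k) truncation of D
    return U @ M @ Vt

def p2gd_step(f, grad_f, X, r, alpha_bar, beta, c):
    """One step of projected steepest descent with Armijo backtracking."""
    G = project_tangent_cone(X, -grad_f(X), r)
    s2 = np.linalg.norm(G) ** 2                    # s_f(X)^2
    alpha = alpha_bar                              # initial step size
    Y = project_variety(X + alpha * G, r)
    while f(Y) > f(X) - c * alpha * s2:
        alpha *= beta
        Y = project_variety(X + alpha * G, r)
    return Y
\end{verbatim}
For instance, \texttt{p2gd\_step(f, grad\_f, X, r, 1.0, 0.5, 1e-4)} performs one such monotone step from a feasible, non-stationary $X$, for any differentiable cost \texttt{f} with gradient \texttt{grad\_f}.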
Let us recall that, since $\nabla f$ is locally Lipschitz continuous, for every closed ball $\mathcal{B} \subset \mathbb{R}^{m \times n}$, \begin{equation*} \lip_{\mathcal{B}}(\nabla f) := \sup_{\substack{X, Y \in \mathcal{B} \\ X \ne Y}} \frac{\norm{\nabla f(X) - \nabla f(Y)}}{\norm{X-Y}} < \infty, \end{equation*} which implies, by \cite[Lemma~1.2.3]{Nesterov2018}, that, for every $X, Y \in \mathcal{B}$, \begin{equation} \label{eq:InequalityLipschitzContinuousGradient} |f(Y) - f(X) - \ip{\nabla f(X)}{Y-X}| \le \frac{\lip_{\mathcal{B}}(\nabla f)}{2} \norm{Y-X}^2. \end{equation} \begin{proposition} \label{prop:P2GDstepUpperBoundCost} Let $X \in \mathbb{R}_{\le r}^{m \times n}$ and $\bar{\alpha} \in (0,\infty)$. Let $\mathcal{B} \subset \mathbb{R}^{m \times n}$ be a closed ball such that, for every $G \in \proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}}{-\nabla f(X)}$ and every $\alpha \in [0,\bar{\alpha}]$, $\proj{\mathbb{R}_{\le r}^{m \times n}}{X + \alpha G} \subset \mathcal{B}$; an example of such a ball is $B[X,\ushort{\rho}(X)]$ with \begin{equation*} \ushort{\rho}(X) := \left\{\begin{array}{ll} \bar{\alpha}\mathrm{s}_f(X) & \text{if } X = 0_{m \times n},\\ (1+\frac{1}{\sqrt{2}})\bar{\alpha}\mathrm{s}_f(X) & \text{otherwise}. \end{array}\right. \end{equation*} Then, for every $G \in \proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{X}}{-\nabla f(X)}$ and every $\alpha \in [0,\bar{\alpha}]$, \begin{equation} \label{eq:P2GDstepUpperBoundCost} \sup f(\proj{\mathbb{R}_{\le r}^{m \times n}}{X + \alpha G}) \le f(X) + \mathrm{s}_f(X)^2 \alpha \left(-1+\kappa_\mathcal{B}(f, X, \bar{\alpha})\alpha\right), \end{equation} where \begin{equation*} \kappa_\mathcal{B}(f, X, \bar{\alpha}) := \left\{\begin{array}{l} \frac{1}{2} \lip\limits_\mathcal{B}(\nabla f) \quad \text{if } X = 0_{m \times n},\\ \frac{\sqrt{\rank X}}{2\sigma_{\min}(X)} \norm{\nabla f(X)} + \frac{1}{2} \lip\limits_\mathcal{B}(\nabla f) \left(\frac{\sqrt{\rank X}}{2\sigma_{\min}(X)}\bar{\alpha} \mathrm{s}_f(X)+1\right)^2 \text{otherwise}. \end{array}\right. \end{equation*} \end{proposition} \begin{proof} The example $B[X,\ushort{\rho}(X)]$ is correct by \cite[Proposition~3.6]{SchneiderUschmajew2015}. The proof of \eqref{eq:P2GDstepUpperBoundCost} is based on \eqref{eq:InequalityLipschitzContinuousGradient} and the equality $\ip{\nabla f(X)}{G} = -\mathrm{s}_f(X)^2$ given in the proof of \cite[Proposition~2.6]{SchneiderUschmajew2015}. The result follows readily from \eqref{eq:InequalityLipschitzContinuousGradient} if $X = 0_{m \times n}$ since $\tancone{\mathbb{R}_{\le r}^{m \times n}}{0_{m \times n}} = \mathbb{R}_{\le r}^{m \times n}$. Let us therefore consider the case where $X \ne 0_{m \times n}$. Let $L := \lip_\mathcal{B}(\nabla f)$. 
For every $Y \in \proj{\mathbb{R}_{\le r}^{m \times n}}{X + \alpha G}$, \begin{align*} &f(Y)-f(X)\\ &\le \ip{\nabla f(X)}{Y-X} + \frac{L}{2} \norm{Y-X}^2\\ &= \ip{\nabla f(X)}{Y-(X+\alpha G)+\alpha G} + \frac{L}{2} \norm{Y-(X+\alpha G)+\alpha G}^2\\ &= - \alpha \mathrm{s}_f(X)^2 + \ip{\nabla f(X)}{Y-(X+\alpha G)} + \frac{L}{2} \norm{Y-(X+\alpha G)+\alpha G}^2\\ &\le - \alpha \mathrm{s}_f(X)^2 + \norm{\nabla f(X)}d(X+\alpha G, \mathbb{R}_{\le r}^{m \times n}) + \frac{L}{2} \left(d(X+\alpha G, \mathbb{R}_{\le r}^{m \times n})+\alpha \mathrm{s}_f(X)\right)^2\\ &\le - \alpha \mathrm{s}_f(X)^2 + \norm{\nabla f(X)} \frac{\sqrt{\rank X}}{2\sigma_{\min}(X)} \alpha^2 \mathrm{s}_f(X)^2 + \frac{L}{2} \left(\frac{\sqrt{\rank X}}{2\sigma_{\min}(X)} \alpha^2 \mathrm{s}_f(X)^2+\alpha \mathrm{s}_f(X)\right)^2\\ &= \alpha \mathrm{s}_f(X)^2 \left(-1+\alpha\left(\norm{\nabla f(X)} \frac{\sqrt{\rank X}}{2\sigma_{\min}(X)} + \frac{L}{2} \left(\frac{\sqrt{\rank X}}{2\sigma_{\min}(X)}\alpha \mathrm{s}_f(X)+1\right)^2\right)\right)\\ &\le \alpha \mathrm{s}_f(X)^2 \left(-1+\alpha\kappa_\mathcal{B}(f, X, \bar{\alpha})\right), \end{align*} where the third inequality follows from Proposition~\ref{prop:UpperBoundDistanceToDeterminantalVarietyFromTangentLine}. \end{proof} \begin{corollary} \label{coro:P2GDstepCostDecrease} Each point $Y$ produced by Algorithm~\ref{algo:P2GDstep} satisfies the Armijo condition \begin{equation*} f(Y) \le f(X) - c \alpha \mathrm{s}_f(X)^2 \end{equation*} for some $\alpha \in \big[\min\{\ushort{\alpha}, \beta\frac{1-c}{\kappa_\mathcal{B}(f, X, \bar{\alpha})}\}, \bar{\alpha}\big]$, where $\mathcal{B}$ is any closed ball as in Proposition~\ref{prop:P2GDstepUpperBoundCost}. \end{corollary} \begin{proof} For all $\alpha \in (0,\infty)$, \begin{equation*} f(X) + \mathrm{s}_f(X)^2 \alpha \big(-1+\kappa_\mathcal{B}(f, X, \bar{\alpha})\alpha\big) \le f(X) - c \mathrm{s}_f(X)^2 \alpha \iff \alpha \le \frac{1-c}{\kappa_\mathcal{B}(f, X, \bar{\alpha})}. \end{equation*} Since the left-hand side is an upper bound on $f(\proj{\mathbb{R}_{\le r}^{m \times n}}{X + \alpha G})$ for every $\alpha \in (0,\bar{\alpha}]$, the Armijo condition is necessarily satisfied if $\alpha \in (0,\min\{\bar{\alpha}, \frac{1-c}{\kappa_\mathcal{B}(f, X, \bar{\alpha})}\}]$. Therefore, either the initial step size chosen in $[\ushort{\alpha},\bar{\alpha}]$ satisfies the Armijo condition or the while loop ends with $\alpha$ such that $\frac{\alpha}{\beta} > \frac{1-c}{\kappa_\mathcal{B}(f, X, \bar{\alpha})}$. \end{proof} \section{The proposed algorithm} \label{sec:ProposedAlgorithm} We now introduce a first-order algorithm for low-rank optimization as Algorithm~\ref{algo:P2GDRankReduction}. At each iteration, this algorithm applies Algorithm~\ref{algo:P2GDstep} (i.e., one step of $\mathrm{P}^2\mathrm{GD}$) to the current iterate $Y$ but also, if $\rank_\Delta Y < \rank Y$, to $\hat{Y}^j \in \proj{\mathbb{R}_{\rank Y - j}^{m \times n}}{Y}$ for every $j \in \{1, \dots, \rank Y - \rank_\Delta Y\}$. Then, it forms the next iterate by choosing the result that decreases $f$ the most; in particular, it produces sequences along which $f$ is strictly decreasing. 
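For concreteness (and with the caveat that this is only an illustrative sketch reusing the hypothetical routines \texttt{p2gd\_step} and \texttt{project\_variety} given after Algorithm~\ref{algo:P2GDstep}), one outer iteration of this procedure can be written as follows; the rank thresholds mirror Definition~\ref{defi:DeltaRank}.
\begin{verbatim}
import numpy as np

def p2gd_rank_reduction_iteration(f, grad_f, X, r, Delta, alpha_bar, beta, c):
    """One outer iteration: run a P2GD step from X and from its projections
    onto every rank between rank_Delta(X) and rank(X); keep the best result."""
    s = np.linalg.svd(X, compute_uv=False)
    rank_X = int(np.sum(s > 0.0))            # rank of X (up to round-off)
    rank_Delta_X = int(np.sum(s > Delta))    # Delta-rank of X
    candidates = []
    for j in range(rank_X - rank_Delta_X + 1):
        X_hat = project_variety(X, rank_X - j)   # rank-reduced starting point
        candidates.append(p2gd_step(f, grad_f, X_hat, r, alpha_bar, beta, c))
    return min(candidates, key=f)            # candidate decreasing f the most
\end{verbatim}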
This will allow us to prove in the next section that it is apocalypse-free based on the following idea: whenever the current iterate $Y$ is sufficiently close to a non-stationary point $X \in \mathbb{R}_{\le r}^{m \times n}$, Algorithm~\ref{algo:P2GDRankReduction} will apply Algorithm~\ref{algo:P2GDstep} in particular to a projection of $Y$ onto $\mathbb{R}_{\rank X}^{m \times n}$ which, by continuity of $\mathrm{s}_f|_{\mathbb{R}_{\rank X}^{m \times n}}$ and $\sigma_{\rank X}$, will produce a sufficient decrease in the cost function $f$ in view of Corollary~\ref{coro:P2GDstepCostDecrease}. \begin{algorithm}[H] \caption{$\mathrm{P}^2\mathrm{GD}$ with rank reduction} \label{algo:P2GDRankReduction} \begin{algorithmic}[1] \Require $(f, X_0, \ushort{\alpha}, \bar{\alpha}, \beta, c, \Delta)$ where $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ is differentiable with locally Lipschitz continuous gradient, $X_0 \in \mathbb{R}_{\le r}^{m \times n}$, $0 < \ushort{\alpha} < \bar{\alpha} < \infty$, $\beta, c \in (0,1)$, and $\Delta \in (0,\infty)$. \State $i \gets 0$; \While {$\mathrm{s}_f(X_i) > 0$} \For {$j \in \{0, \dots, \rank X_i - \rank_\Delta X_i\}$} \Comment{See Definition~\ref{defi:DeltaRank}.} \State Choose $\hat{X}_i^j \in \proj{\mathbb{R}_{\rank X_i - j}^{m \times n}}{X_i}$; \State Choose $\tilde{X}_i^j \in \hyperref[algo:P2GDstep]{\mathrm{P}^2\mathrm{GD}}(f, \hat{X}_i^j, \ushort{\alpha}, \bar{\alpha}, \beta, c)$; \Comment{See Algorithm~\ref{algo:P2GDstep}.} \EndFor \State Choose $X_{i+1} \in \argmin_{\{\tilde{X}_i^j \mid j \in \{0, \dots, \rank X_i - \rank_\Delta X_i\}\}} f$; \State $i \gets i+1$; \EndWhile \end{algorithmic} \end{algorithm} \section{Convergence analysis} \label{sec:ConvergenceAnalysis} The purpose of this section is to prove Theorem~\ref{thm:P2GDRankReductionPolakConvergence}. To this end, we will use the abstract framework proposed in \cite[\S 1.3]{Polak1971}. Indeed, the problem considered in this paper can be formulated as follows: find $X \in \mathbb{R}_{\le r}^{m \times n}$ such that $\mathrm{s}_f(X) = 0$. It is thus a particular instance of \cite[Abstract Problem~1]{Polak1971} where the Banach space is $\mathbb{R}^{m \times n}$, its closed subset is $\mathbb{R}_{\le r}^{m \times n}$ and ``desirable'' means ``stationary''. Moreover, Algorithm~\ref{algo:P2GDRankReduction} is a particular instance of \cite[Algorithm Model~9]{Polak1971} where the stop rule is $f$ and the search function is Algorithm~\ref{algo:P2GDRankReductionSearchFunction}. \begin{algorithm}[H] \caption{Search function of Algorithm~\ref{algo:P2GDRankReduction}} \label{algo:P2GDRankReductionSearchFunction} \begin{algorithmic}[1] \Require $(f, X, \ushort{\alpha}, \bar{\alpha}, \beta, c, \Delta)$ where $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ is differentiable with locally Lipschitz continuous gradient, $X \in \mathbb{R}_{\le r}^{m \times n}$ is such that $\mathrm{s}_f(X) > 0$, $0 < \ushort{\alpha} < \bar{\alpha} < \infty$, $\beta, c \in (0,1)$, and $\Delta \in (0,\infty)$. \For {$j \in \{0, \dots, \rank X - \rank_\Delta X\}$} \State Choose $\hat{X}^j \in \proj{\mathbb{R}_{\rank X - j}^{m \times n}}{X}$; \State Choose $\tilde{X}^j \in \hyperref[algo:P2GDstep]{\mathrm{P}^2\mathrm{GD}}(f, \hat{X}^j, \ushort{\alpha}, \bar{\alpha}, \beta, c)$; \EndFor \State Return $Y \in \argmin_{\{\tilde{X}^j \mid j \in \{0, \dots, \rank X - \rank_\Delta X\}\}} f$. 
\end{algorithmic} \end{algorithm} Thus, to prove Theorem~\ref{thm:P2GDRankReductionPolakConvergence}, it suffices to verify that Algorithm~\ref{algo:P2GDRankReduction} satisfies the two assumptions of \cite[Theorem~10]{Polak1971}, which we do below. The first assumption is that the objective function $f$ is continuous at each nondesirable point or bounded from below on $\mathbb{R}_{\le r}^{m \times n}$. It is thus satisfied since $f$ is continuous. The following proposition exactly states that the second assumption is satisfied. \begin{proposition} \label{prop:P2GDRankReductionPolak} For every $X \in \mathbb{R}_{\le r}^{m \times n}$ such that $\mathrm{s}_f(X) > 0$, there exist real numbers $\varepsilon(X), \delta(X) > 0$ such that, for every $Y \in B[X,\varepsilon(X)] \cap \mathbb{R}_{\le r}^{m \times n}$ and every $Z$ produced by Algorithm~\ref{algo:P2GDRankReductionSearchFunction} applied to $(f, Y, \ushort{\alpha}, \bar{\alpha}, \beta, c, \Delta)$, \begin{equation*} f(Z) - f(Y) \le - \delta(X). \end{equation*} \end{proposition} \begin{proof} Let $X \in \mathbb{R}_{\le r}^{m \times n}$ be such that $\mathrm{s}_f(X) > 0$. Let us first consider the case where $X = 0_{m \times n}$. Let $\varepsilon(0_{m \times n}) := \Delta$. Then, for every $Y \in B[0_{m \times n},\varepsilon(0_{m \times n})] \cap \mathbb{R}_{\le r}^{m \times n}$, $\rank_\Delta Y = 0$ since $\sigma_1(Y) \le \norm{Y} \le \varepsilon(0_{m \times n}) = \Delta$. Therefore, Algorithm~\ref{algo:P2GDRankReductionSearchFunction} will consider $\hat{Y}^{\rank Y} = 0_{m \times n}$ and $\tilde{Y}^{\rank Y} \in \hyperref[algo:P2GDstep]{\mathrm{P}^2\mathrm{GD}}(f, 0_{m \times n}, \ushort{\alpha}, \bar{\alpha}, \beta, c)$. Thus, by Corollary~\ref{coro:P2GDstepCostDecrease}, $\delta(0_{m \times n}) := c \mathrm{s}_f(0_{m \times n})^2 \min\{\ushort{\alpha}, \frac{\beta(1-c)}{\kappa_{B[0_{m \times n},\ushort{\rho}(0_{m \times n})]}(f, 0_{m \times n}, \bar{\alpha})}\}$ is a valid choice. Let us now consider the case where $X \ne 0_{m \times n}$. Let $\ushort{r} := \rank X$. By Corollary~\ref{coro:ContinuityStationarityMeasureLowRankOptiConstantRank}, $\mathrm{s}_f|_{\mathbb{R}_{\ushort{r}}^{m \times n}}$ is continuous at $X$ and thus there exists $\rho(X) \in (0,\infty)$ such that $\mathrm{s}_f(B[X,\rho(X)] \cap \mathbb{R}_{\ushort{r}}^{m \times n}) \subseteq [\frac{1}{2}\mathrm{s}_f(X), \frac{3}{2}\mathrm{s}_f(X)]$. Let $\varepsilon(X) := \min\{\Delta, \frac{1}{3}\sigma_{\ushort{r}}(X), \frac{1}{2}\rho(X)\}$. By Proposition~\ref{prop:LocalDeltaRank}, for every $Y \in B[X,\varepsilon(X)] \cap \mathbb{R}_{\le r}^{m \times n}$, $\rank_\Delta Y \le \ushort{r} \le \rank Y$ and $\proj{\mathbb{R}_{\ushort{r}}^{m \times n}}{Y} \subset B[X,2\varepsilon(X)]$. Thus, $0 \le \rank Y - \ushort{r} \le \rank Y - \rank_\Delta Y$ and Algorithm~\ref{algo:P2GDRankReductionSearchFunction} will therefore consider $\hat{Y}^{\rank Y-\ushort{r}} \in \proj{\mathbb{R}_{\ushort{r}}^{m \times n}}{Y} \subset B[X,2\varepsilon(X)] \cap \mathbb{R}_{\ushort{r}}^{m \times n}$ and $\tilde{Y}^{\rank Y-\ushort{r}} \in \hyperref[algo:P2GDstep]{\mathrm{P}^2\mathrm{GD}}(f, \hat{Y}^{\rank Y-\ushort{r}}, \ushort{\alpha}, \bar{\alpha}, \beta, c)$. Since $2\varepsilon(X) \le \rho(X)$, $\frac{1}{2} \mathrm{s}_f(X) \le \mathrm{s}_f(\hat{Y}^{\rank Y-\ushort{r}}) \le \frac{3}{2} \mathrm{s}_f(X)$. Let $\bar{\rho}(X) := \frac{3}{2} \ushort{\rho}(X) + 2 \varepsilon(X)$. 
For every $\alpha \in [0,\bar{\alpha}]$ and every $G \in \proj{\tancone{\mathbb{R}_{\le r}^{m \times n}}{\hat{Y}^{\rank Y-\ushort{r}}}}{-\nabla f(\hat{Y}^{\rank Y-\ushort{r}})}$, $\proj{\mathbb{R}_{\le r}^{m \times n}}{\hat{Y}^{\rank Y-\ushort{r}} + \alpha G} \subset B[X,\bar{\rho}(X)]$ since, for every $Z \in \proj{\mathbb{R}_{\le r}^{m \times n}}{\hat{Y}^{\rank Y-\ushort{r}} + \alpha G}$, $\norm{Z-X} \le \norm{Z-\hat{Y}^{\rank Y-\ushort{r}}} + \norm{\hat{Y}^{\rank Y-\ushort{r}}-X} \le \ushort{\rho}(\hat{Y}^{\rank Y-\ushort{r}}) + 2\varepsilon(X)$ and $\ushort{\rho}(\hat{Y}^{\rank Y-\ushort{r}}) \le \frac{3}{2} \ushort{\rho}(X)$. Thus, by Corollary~\ref{coro:P2GDstepCostDecrease}, $\tilde{Y}^{\rank Y-\ushort{r}}$ satisfies the Armijo condition with a step size at least $\min\{\ushort{\alpha}, \frac{\beta(1-c)}{\kappa_{B[X,\bar{\rho}(X)]}(f, \hat{Y}^{\rank Y-\ushort{r}}, \bar{\alpha})}\}$. Let $L := \lip_{B[X,\bar{\rho}(X)]}(\nabla f)$. Observe that \begin{align*} \norm{\nabla f(\hat{Y}^{\rank Y-\ushort{r}})} &\le \norm{\nabla f(X)} + L \norm{X-\hat{Y}^{\rank Y-\ushort{r}}}\\ &\le \norm{\nabla f(X)} + 2 L \varepsilon(X)\\ &\le \norm{\nabla f(X)} + {\textstyle\frac{2}{3}} L \sigma_{\ushort{r}}(X), \end{align*} and that, by Proposition~\ref{prop:SingularValuesLipschitz}, \begin{equation*} \sigma_{\ushort{r}}(\hat{Y}^{\rank Y-\ushort{r}}) \ge \sigma_{\ushort{r}}(X) - \norm{X-\hat{Y}^{\rank Y-\ushort{r}}} \ge \sigma_{\ushort{r}}(X) - 2 \varepsilon(X) \ge {\textstyle\frac{1}{3}}\sigma_{\ushort{r}}(X). \end{equation*} Thus, \begin{align*} \kappa_{B[X,\bar{\rho}(X)]}(f, \hat{Y}^{\rank Y-\ushort{r}}, \bar{\alpha}) &\le \bar{\kappa}(f,X,\bar{\alpha})\\ &:= \sqrt{\ushort{r}} \left(\frac{3\norm{\nabla f(X)}}{2\sigma_{\ushort{r}}(X)}+L\right) + \frac{L}{2} \left(\frac{9\bar{\alpha}\sqrt{\ushort{r}}\mathrm{s}_f(X)}{4\sigma_{\ushort{r}}(X)}+1\right)^2. \end{align*} It follows that \begin{equation*} \delta(X) := c \min\big\{\ushort{\alpha}, \frac{\beta(1-c)}{\bar{\kappa}(f,X,\bar{\alpha})}\big\} \frac{\mathrm{s}_f(X)^2}{4} \end{equation*} is a valid choice. \end{proof} We have thus proved the following. \begin{theorem} \label{thm:P2GDRankReductionPolakConvergence} Consider the sequence constructed by Algorithm~\ref{algo:P2GDRankReduction}. If this sequence is finite, then its last element is stationary. If it is infinite, then each of its accumulation points is stationary, i.e., is a zero of the stationarity measure $\mathrm{s}_f$ defined in \eqref{eq:StationarityMeasure}. \end{theorem} \begin{corollary} \label{coro:P2GDRankReductionPolakConvergence} Assume that Algorithm~\ref{algo:P2GDRankReduction} produces a sequence $(X_i)_{i \in \mathbb{N}}$. The sequence has at least one accumulation point if and only if $\liminf_{i \to \infty} \norm{X_i} < \infty$. For every convergent subsequence $(X_{i_k})_{k \in \mathbb{N}}$, $\lim_{k \to \infty} \mathrm{s}_f(X_{i_k}) = 0$. If $(X_i)_{i \in \mathbb{N}}$ is bounded, which is the case notably if the sublevel set $\{X \in \mathbb{R}_{\le r}^{m \times n} \mid f(X) \le f(X_0)\}$ is bounded, then $\lim_{i \to \infty} \mathrm{s}_f(X_i) = 0$, and all accumulation points have the same image by $f$. \end{corollary} \begin{proof} The ``if and only if'' statement is a classical result. The two limits follow from Proposition~\ref{prop:DeterminantalVarietyNoSerendipitousPoint}. The final claim follows from the argument given in the proof of \cite[Theorem~65]{Polak1971}. \end{proof} \section{Conclusion} \label{sec:Conclusion} We close this work with four concluding remarks. 
\begin{enumerate} \item As in \cite{LevinKileelBoumal2021}, everything said in this paper remains true if $f$ is only defined on an open subset of $\mathbb{R}^{m \times n}$ containing $\mathbb{R}_{\le r}^{m \times n}$. \item It is possible to prove Proposition~\ref{prop:P2GDRankReductionPolak} without relying on Corollary~\ref{coro:ContinuityStationarityMeasureLowRankOptiConstantRank}. Indeed, using the same notation, if $\rank X = r$, then $\mathrm{s}_f$ is continuous at $X$ since $\mathbb{R}_{\le r}^{m \times n}$ is identical to the smooth manifold $\mathbb{R}_r^{m \times n}$ around $X$, and $\mathrm{s}_f$ therefore coincides with the norm of the Riemannian gradient of $f|_{\mathbb{R}_r^{m \times n}}$, which is continuous. If $\rank X < r$, then, in view of \eqref{eq:StationarityMeasureGradientNorm} and the continuity of $\nabla f$, $\mathrm{s}_f$ is bounded away from zero on the intersection of $\mathbb{R}_{< r}^{m \times n}$ and a sufficiently small ball centered at $X$. However, we chose to keep the proof of Proposition~\ref{prop:P2GDRankReductionPolak} as it stands because it allows us to treat the cases where $\rank X = r$ and $\rank X < r$ together. \item In many practical situations, when $\Delta$ is chosen reasonably small, all the iterates of Algorithm~\ref{algo:P2GDRankReduction} satisfy $\rank_\Delta X_i = \rank X_i$. In this case, the range of values of $j$ in the for-loop of Algorithm~\ref{algo:P2GDRankReduction} always reduces to $\{0\}$, and Algorithm~\ref{algo:P2GDRankReduction} generates the same sequence $(X_i)_{i \in \mathbb{N}}$ as $\mathrm{P}^2\mathrm{GD}$. In this scenario, the only computational overhead in Algorithm~\ref{algo:P2GDRankReduction} is the computation of $\rank X_i$ and $\rank_\Delta X_i$. However, for all $i \ge 1$, in view of line~\ref{algo:P2GDstep:LineSearch} of Algorithm~\ref{algo:P2GDstep}, it is reasonable to assume that $X_i$ has been obtained by a truncated SVD, in which case $\rank X_i$ and $\rank_\Delta X_i$ are immediately available, making the overhead insignificant. In summary, Algorithm~\ref{algo:P2GDRankReduction} offers stronger convergence properties than $\mathrm{P}^2\mathrm{GD}$ while incurring an insignificant overhead in many practical situations. \item However, the range of values of $j$ in the for-loop of Algorithm~\ref{algo:P2GDRankReduction} can also be as large as $\{0, \dots, r\}$, and there are situations where this occurs each time the for-loop is reached (e.g., in the case of a bounded sublevel set, when $\Delta$ is chosen so large that $\rank_\Delta X = 0$ for all $X$ in the sublevel set). One can thus wonder whether it is possible to restrict (conditionally or not) the range of values of $j$ while preserving the apocalypse-free property of Algorithm~\ref{algo:P2GDRankReduction}. This is an open question. We can just recall that the bound given in Proposition~\ref{prop:UpperBoundDistanceToDeterminantalVarietyFromTangentLine} is tight (Proposition~\ref{prop:TightUpperBoundDistanceToDeterminantalVarietyFromTangentLine}), and observe that, in the neighborhood of any $X \in \mathbb{R}_{< r}^{m \times n}$, $\sigma_j$ can be arbitrarily small for every $j \in \{\rank X + 1, \dots, r\}$. Hence it seems unlikely that Algorithm~\ref{algo:P2GDRankReduction} with a restricted for-loop can be analyzed along the lines of Section~\ref{sec:ConvergenceAnalysis}.
On the other hand, should the answer to the open question be negative, another counterexample akin to the one of~\cite[\S 2.2]{LevinKileelBoumal2021} would be required in view of~\cite[Remark~2.11]{LevinKileelBoumal2021}. \end{enumerate}
1,116,691,497,491
arxiv
\section{Introduction} The bilateral and nonlocal means filters \cite{tomasi1998bilateral,buades2005non} are widely used for edge-preserving smoothing and denoising of images \cite{paris2009bilateral,milanfar2013tour}. These are instances of kernel filters, where the similarity (affinity) between pixels is measured using a symmetric kernel. We refer the reader to \cite{milanfar2013tour} for an excellent review of kernel filters. While they have proven to be useful in practice, a flip side of kernel filtering, including bilateral filtering (BLF) and nonlocal means (NLM), is their computational complexity \cite{paris2009bilateral}. Nevertheless, several fast algorithms have been proposed, e.g. \cite{durand2002fast,paris2006fast,chen2007real,Porikli2008,Yang2009,Chaudhury2011,Kamata2015,Yang2015,Chaudhury2016,Ghosh2016,Sugimoto2016,ipol2017184,papari2017fast,adams2009gaussian,adams2010fast,gastal2012adaptive,mozerov2015global,pravin2017filtering,mahmoudi2005,wang2006,darbon2008,ghosh2016nlm}, which can speed up BLF and NLM, without compromising their filtering quality. See \cite{mozerov2015global,Kamata2015,ghosh2016nlm} for a survey of these algorithms. Unfortunately, most algorithms only work with grayscale images, and cannot be extended to color, multispectral, and hyperspectral images. Algorithms for fast BLF of color images have been proposed in \cite{Yang2015,mozerov2015global,ghosh2016fast,tu2016constant,sugimoto2016fast}. However, to the best of our knowledge, these methods have not been extended for multispectral and hyperspectral images. Fast algorithms for generic high-dimensional BLF and NLM have been proposed in \cite{adams2009gaussian,adams2010fast,gastal2012adaptive,karam2018monte}. A common feature of these algorithms is that they use data clustering or tessellation in high-dimensions. The state-of-the-art fast algorithms for color BLF are \cite{adams2010fast,mozerov2015global}, and for color NLM is \cite{gastal2012adaptive}. More recently, it was shown in \cite{Sugimoto2016,papari2017fast} that fast BLF of grayscale images can be performed using the partial eigendecomposition of the kernel matrix. In fact, the interpretation of BLF (and NLM) as kernel filters goes back to \cite{talebi2014global,talebi2014nonlocal,talebi2016asymptotic}. While the Nystr$\ddot{\text{o}}$m method has widely been used in machine learning \cite{williams01usingthe,fowlkes2004spectral,zhang2008improved}, it appears that \cite{talebi2014global} is the first to apply this for image filtering. Note that, unlike \cite{Sugimoto2016,papari2017fast}, the spatial and range kernel are treated as a single kernel in \cite{talebi2014global,talebi2014nonlocal,talebi2016asymptotic}. The differences between our and related approaches are: \indent $\bullet$ As explained in detail in \S \ref{Background}, it is difficult to scale \cite{Sugimoto2016,papari2017fast} for filtering high-dimensional (even color) images, since one needs to populate a huge kernel matrix and compute its eigendecomposition. We propose to use the Nystr$\ddot{\text{o}}$m method to solve this problem. As a result, we are able to perform BLF and NLM of color and hyperspectral images. \indent $\bullet$ The first difference with \cite{talebi2014global,talebi2014nonlocal,talebi2016asymptotic} is that we use clustering instead of uniform sampling for the Nystr$\ddot{\text{o}}$m approximation. A significant improvement in filtering accuracy is achieved as a result. 
The other difference is that if a spatial kernel has to be incorporated in \cite{talebi2014global,talebi2014nonlocal,talebi2016asymptotic}, then the Nystr$\ddot{\text{o}}$m approximation needs to be performed in the spatio-range space. However, we handle the spatial and range components differently---fast convolutions are used for the spatial component and the Nystr$\ddot{\text{o}}$m approximation is used for the range component. As a result, we require fewer samples for the Nystr$\ddot{\text{o}}$m approximation. \indent $\bullet$ In \cite{tu2016constant,sugimoto2016fast}, clustering is used to compute ``intermediate'' images, which are interpolated to get the final output. On the other hand, clustering is used in our method just to obtain the ``landmark points'' for the Nystr$\ddot{\text{o}}$m approximation. \indent $\bullet$ Compared to \cite{adams2009gaussian,adams2010fast,gastal2012adaptive,mozerov2015global}, our algorithm is conceptually simple and easy to implement. Moreover, we are able to derive a bound on the filtering error incurred by the approximation. Such a guarantee is not offered by \cite{adams2009gaussian,adams2010fast,gastal2012adaptive,mozerov2015global}. The rest of the paper is organized as follows. In \S \ref{Background}, we introduce the notion of kernel filtering, and explain the core problem in relation to the spectral approximations in \cite{Sugimoto2016,papari2017fast}. We use the Nystr$\ddot{\text{o}}$m method in \S \ref{Proposed} to overcome this problem. Numerical results are reported in \S \ref{Numerical} and we conclude in \S \ref{Conc}. \section{Background} \label{Background} We begin by formulating BLF and NLM as kernel filters \cite{milanfar2013tour}. Suppose the input image is $\boldsymbol{f}: \Omega \to [0,R]^n$, where $\Omega \subset \mathbb{Z}^{d}$ is the spatial domain, $[0,R]^n$ is the range space, and $d$ (resp. $n$) is the dimension of the domain (resp. range). Let $\boldsymbol{p} : \Omega \to [0,R]^{\rho}$ be the \textit{guide} image, which is used to control the filtering. For standard BLF, $\boldsymbol{f}$ and $\boldsymbol{p}$ are identical, and $n=\rho=1$ for grayscale images and $n=\rho=3$ for color images. However, $\boldsymbol{f}$ and $\boldsymbol{p}$ (also $n$ and $\rho$) can be different for joint BLF \cite{paris2009bilateral}. For NLM, $\rho$ is generally larger than $n$, where $\rho$ is the number of pixels in a \textit{patch} \cite{buades2005non}. Let $\kappa : \mathbb{R}^{\rho} \times \mathbb{R}^{\rho} \to \mathbb{R}$ be the \textit{range} kernel. The filtered output $\boldsymbol{g}: \Omega \to [0,R]^n$ is given by \begin{equation} \label{num} \boldsymbol{g}(\boldsymbol{x})=\frac{\sum_{\boldsymbol{y} \in W_{\boldsymbol{x}}} \omega(\boldsymbol{x}-\boldsymbol{y}) \kappa\big(\boldsymbol{p}(\boldsymbol{x}),\boldsymbol{p}(\boldsymbol{y})\big) \boldsymbol{f}(\boldsymbol{y})}{\sum_{\boldsymbol{y} \in W_{\boldsymbol{x}}} \omega(\boldsymbol{x}-\boldsymbol{y}) \kappa\big(\boldsymbol{p}(\boldsymbol{x}),\boldsymbol{p}(\boldsymbol{y})\big)}, \end{equation} where $W_{\boldsymbol{x}}$ is a square window around $\boldsymbol{x} \in \Omega$ consisting of $(2S+1)^d$ pixels, with $S$ being the window radius. The \textit{spatial} kernel $\omega: \mathbb{Z}^d \to \mathbb{R}$ controls the weighting of the neighboring pixels involved in the averaging. At this point, we just assume that $\kappa$ is symmetric, i.e., $\kappa(\boldsymbol{t},\boldsymbol{s})=\kappa(\boldsymbol{s},\boldsymbol{t})$ for $\boldsymbol{t},\boldsymbol{s} \in \mathbb{R}^{\rho}$.
For example, $\kappa(\boldsymbol{t},\boldsymbol{s}) = \exp(- \theta \lVert \boldsymbol{s}- \boldsymbol{t}\rVert^2), \theta > 0$, for standard BLF and NLM, where $\lVert \cdot \rVert$ is the Euclidean norm. It was shown in \cite{Sugimoto2016,papari2017fast} that the non-linear operations in \eqref{num} can be computed using convolutions by approximating $\kappa$. For convenience, we will describe this using our notations. Let the actual range of $\boldsymbol{p}$ be \begin{equation} \label{range} \mathfrak{R} = \big\{\boldsymbol{p}(\boldsymbol{x}) : \boldsymbol{x} \in \Omega \big\}. \end{equation} We emphasize that $\mathfrak{R}$ is a list and not a set, i.e., we allow repetition of elements in $\mathfrak{R}$. In particular, let $\mathfrak{R} =\{\boldsymbol{r}_1,\boldsymbol{r}_2,....,\boldsymbol{r}_m\}$ be some ordering of the elements in $\mathfrak{R}$, where $m$ is the number of elements. This means that, given $\ell \in [1,m]$, $\boldsymbol{r}_{\ell}=\boldsymbol{p}(\boldsymbol{x})$ for some $\boldsymbol{x} \in \Omega$. We track this correspondence using the \textit{index map} $\iota: \Omega \to [1,m]$, where \begin{equation} \label{indexdef} \iota(\boldsymbol{x})=\ell \qquad \text{if} \ r_{\ell}=p(\boldsymbol{x}). \end{equation} We next define the kernel matrix $\mathbf{K} \in \mathbb{R}^{m \times m}$ given by \begin{equation} \label{kernel} \mathbf{K}(i,j) = \kappa(\boldsymbol{r}_i, \boldsymbol{r}_j). \end{equation} In terms of \eqref{kernel}, we can write \eqref{num} as \begin{equation} \label{num1} \boldsymbol{g}(\boldsymbol{x})=\frac{\sum_{\boldsymbol{y} \in W_{\boldsymbol{x}}} \omega(\boldsymbol{x}-\boldsymbol{y}) \mathbf{K}\big(\iota(\boldsymbol{x}),\iota(\boldsymbol{y})\big) \boldsymbol{f}(\boldsymbol{y})}{\sum_{\boldsymbol{y} \in W_{\boldsymbol{x}}} \omega(\boldsymbol{x}-\boldsymbol{y}) \mathbf{K}\big(\iota(\boldsymbol{x}),\iota(\boldsymbol{y})\big)} \end{equation} It is clear from \eqref{kernel} that $\mathbf{K}$ is symmetric. In particular, let the eigendecomposition of $\mathbf{K}$ be \begin{equation} \label{eigdecom} \mathbf{K} = \sum_{k=1}^{m} \lambda_k \boldsymbol{u}_k \boldsymbol{u}_k^{\top}, \end{equation} where $\lambda_1,\ldots, \lambda_m \in \mathbb{R}$ are its eigenvalues, and $\boldsymbol{u}_1,\ldots,\boldsymbol{u}_m \in \mathbb{R}^m$ are the corresponding eigenvectors. Substituting \eqref{eigdecom} in \eqref{num1}, we can write its numerator as \begin{equation*} \sum_{\boldsymbol{y} \in W_{\boldsymbol{x}}} \omega(\boldsymbol{x}-\boldsymbol{y}) \left\{ \sum_{k=1}^{m} \lambda_k \boldsymbol{u}_k \big(\iota(\boldsymbol{x}) \big) \boldsymbol{u}_k \big(\iota(\boldsymbol{y}) \big)\right\} \boldsymbol{f}(\boldsymbol{y}). \end{equation*} On switching the sums, this becomes \begin{equation} \label{switch} \sum_{k=1}^{m} \lambda_k \boldsymbol{u}_k \big(\iota(\boldsymbol{x}) \big) (\omega \ast \boldsymbol{h}_k)(\boldsymbol{x}), \end{equation} where $\omega \ast \boldsymbol{h}_k$ denotes the convolution of the image $\boldsymbol{h}_k(\boldsymbol{x}) = \boldsymbol{u}_k \big(\iota(\boldsymbol{x}) \big) \boldsymbol{f}(\boldsymbol{x})$ with $\omega$. An identical argument applies for the denominator. In summary, we can compute \eqref{num1} using convolutions, for which several efficient algorithms are available \cite{young1995recursive,sugimoto2013fast}. Moreover, by considering just the largest eigenvalues, fast and accurate approximations can be obtained \cite{Sugimoto2016,papari2017fast}. 
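To make this recipe concrete, the following Python sketch applies it to a grayscale image ($n=\rho=1$). It assumes a Gaussian range kernel of standard deviation $\theta$ and, for simplicity, builds the kernel matrix over all $256$ gray levels rather than over the list $\mathfrak{R}$; the function name, the default parameters, and the use of an off-the-shelf Gaussian spatial filter are ours and are not part of \cite{Sugimoto2016,papari2017fast}.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_blf_gray(f, sigma_s=3.0, theta=30.0, n_eig=10):
    # Grayscale BLF via a truncated eigendecomposition of the range kernel.
    # f is a 2D array with integer gray levels in 0..255.
    f = np.asarray(f, dtype=float)
    r = np.arange(256.0)
    K = np.exp(-(r[:, None] - r[None, :])**2 / (2.0 * theta**2))
    lam, U = np.linalg.eigh(K)            # eigendecomposition of K
    top = np.argsort(lam)[::-1][:n_eig]   # keep the largest eigenvalues
    iota = f.astype(int)                  # index map: gray level -> row of K
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    for k in top:
        d = U[iota, k]                    # u_k evaluated at each pixel
        num += lam[k] * d * gaussian_filter(d * f, sigma_s)
        den += lam[k] * d * gaussian_filter(d, sigma_s)
    return num / den
\end{verbatim}
The loop accumulates the truncated form of \eqref{switch} for the numerator and the analogous sum for the denominator, at the cost of one spatial convolution per term.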
Unfortunately, computing the full kernel and its eigendecomposition becomes prohibitively expensive when $\rho$ is large. Just as an example, consider an $8$-bit color image for which $R=255$ and $\rho=3$. Even if we assume that $m$ is just $10\%$ of the maximum range cardinality ($=256^3$), we will still need to populate a $1.7 \text{ million} \times 1.7 \text{ million}$ matrix, and compute its eigenvalues. The situation is worse for hyperspectral images, where $\rho$ is of the order of tens, or even hundreds. \section{Proposed Method} \label{Proposed} Originally, the Nystr$\ddot{\text{o}}$m method was used for approximating the solution of functional eigenvalue problems \cite{nystrom1930praktische,baker1977numerical}. The method has found useful applications in machine learning and computer vision for approximating the eigendecomposition of large matrices \cite{williams01usingthe,fowlkes2004spectral,talebi2014global}. In the present context, the goal is to approximate \eqref{eigdecom} using a decomposition of the form \begin{equation} \label{hatK} \widehat{\mathbf{K}}=\sum_{k=1}^{m_0} \alpha_k \boldsymbol{v}_k \boldsymbol{v}_k^\top, \end{equation} where $\alpha_k \in \mathbb{R}$ and $\boldsymbol{v}_k \in \mathbb{R}^{m}$. Clearly, the rank of $\widehat{\mathbf{K}}$ is at most $m_0$. Thus, for small $m_0$, $\widehat{\mathbf{K}}$ is a low-rank approximation of $\mathbf{K}$. A large $m_0$ results in a better approximation, but at a higher computational cost. In practice, a good tradeoff is required. The original kernel $\mathbf{K}$ is defined on $\mathfrak{R}$. In the Nystr$\ddot{\text{o}}$m method \cite{nystrom1930praktische,baker1977numerical}, we first construct a smaller kernel $\mathbf{A}$, compute its eigendecomposition, and then ``extrapolate'' the eigenvectors of $\mathbf{A}$ to approximate those of $\mathbf{K}$. More precisely, we pick a few \textit{landmark} points from $\mathfrak{R}$, say, $\mathfrak{R}_0 = \{\amu_1,\ldots,\amu_{m_0}\}$, and define a kernel $\mathbf{A} \in \mathbb{R}^{m_0 \times m_0}$ on $\mathfrak{R}_0$: \begin{equation} \label{defA} \mathbf{A}(i,j) = \kappa(\amu_i, \amu_j) \qquad \big(i,j \in [1,m_0] \big). \end{equation} Clearly, $\mathbf{A}$ is symmetric, and its size is much smaller than that of $\mathbf{K}$. Thus, we can efficiently compute its eigendecomposition: \begin{equation} \label{eigA} \mathbf{A}=\sum_{k=1}^{m_0} \alpha_k \boldsymbol{w}_k \boldsymbol{w}_k^\top, \end{equation} where $\alpha_k \in \mathbb{R}$ and $\boldsymbol{w}_k \in \mathbb{R}^{m_0}$. We next construct $\mathbf{B} \in \mathbb{R}^{m_0 \times m}$ on $\mathfrak{R}_0 \times \mathfrak{R}$ given by \begin{equation} \label{defB} \mathbf{B}(i,j) = \kappa(\amu_i, \boldsymbol{r}_j), \end{equation} where $i \in [1,m_0]$ and $j \in [1,m]$. This captures the kernel values between the points in $\mathfrak{R}$ and the landmark points. This matrix is used to extrapolate $\boldsymbol{w}_k$ as follows: \begin{equation} \label{vk} \boldsymbol{v}_k = \frac{1}{\alpha_k} \mathbf{B}^\top \!\boldsymbol{w}_k \qquad \big(k \in [1,m_0] \big). \end{equation} This completes the specification of $\alpha_k$ and $\boldsymbol{v}_k$ in \eqref{hatK}. We refer the reader to \cite{fowlkes2004spectral} for the intuition behind the approximation. The effective speedup of replacing \eqref{eigdecom} by \eqref{hatK} is $\mathcal{O}\big((m/m_0)^3\big)$. This is because the complexity of the eigendecomposition of a $k\times k$ matrix is $\mathcal{O}(k^3)$ \cite{pan1999complexity}. In particular, the speedup is significant since $m_0\ll m$.
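As an illustration, a minimal Python sketch of this construction is given below. It assumes a Gaussian range kernel, takes the landmark points as an input (their selection is discussed next), and uses function and variable names of our own choosing.
\begin{verbatim}
import numpy as np

def gaussian_kernel(X, Y, theta):
    # Gaussian range kernel between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * theta**2))

def nystrom_range_kernel(P, landmarks, theta):
    # Nystrom factors (alpha_k, v_k): P is the (m, rho) array holding the
    # list R, landmarks is the (m0, rho) array R_0.  Returns alpha (m0,)
    # and V (m, m0) whose columns are v_k, so that K ~ V diag(alpha) V^T.
    A = gaussian_kernel(landmarks, landmarks, theta)   # small kernel A
    alpha, W = np.linalg.eigh(A)                       # eigenpairs of A
    B = gaussian_kernel(landmarks, P, theta)           # kernel on R_0 x R
    V = B.T @ (W / alpha)                              # v_k = B^T w_k / alpha_k
    return alpha, V
\end{verbatim}
In practice, eigenvalues of $\mathbf{A}$ that are close to zero can be discarded before the division, and $\mathbf{B}$ can be evaluated in chunks when $m$ is large.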
As will be evident shortly, we just need to compute $(\alpha_k)$ and $(\boldsymbol{v}_k)$; we will not use $\widehat{\mathbf{K}}$ explicitly. Following \cite{zhang2008improved}, we select the landmark points by clustering $\mathfrak{R}$. More specifically, we partition $\mathfrak{R}$ into $m_0$ disjoint sets using $k$-means clustering, and take the centroids to be the landmarks. Note that, though $\mathfrak{R}_0$ is not guaranteed to be a subset of $\mathfrak{R}$, we can still apply the above approximation. It was shown in \cite{zhang2008improved} that the \textit{kernel error} can be bounded by the \textit{quantization error}. More specifically, let $\lVert \mathbf{K} - \widehat\mathbf{K} \rVert_{\text{F}}$ be the kernel error ($\lVert \cdot \rVert_{\text{F}}$ is the Frobenius norm), and let \begin{equation*} e = \sum_{i=1}^m \lVert \boldsymbol{r}_i - \amu_{c(i)} \rVert^2 \end{equation*} be the quantization error, where $c(i)$ is the minimizer of $\lVert \boldsymbol{r}_i - \amu_j \rVert$ over $j \in [1,m_0]$. Then the following bound holds \cite{zhang2008improved}. \begin{proposition} \label{proposition1} \textit{Suppose there exists some $L>0$ such that, for $\boldsymbol{w},\boldsymbol{x},\boldsymbol{y},\boldsymbol{z} \in \mathfrak{R}$, \begin{equation*} {\big(\kappa(\boldsymbol{x},\boldsymbol{y}) - \kappa(\boldsymbol{w},\boldsymbol{z})\big)}^2 \leq L \big({\lVert(\boldsymbol{x}-\boldsymbol{w})\rVert}^2 + {\lVert(\boldsymbol{y}-\boldsymbol{z})\rVert}^2\big). \end{equation*} Then the approximation error can be bounded as \begin{equation} \label{Nystrombound} \lVert \mathbf{K} - \widehat\mathbf{K} \rVert_{\mathrm{F}} \leq c_1 \sqrt{e} + c_2 e, \end{equation} where the positive constants $c_1$ and $c_2$ do not depend on $e$. In particular, \eqref{Nystrombound} holds when $\kappa$ is a Gaussian.} \end{proposition} Proposition \ref{proposition1} suggests that we can reduce the kernel error by making $e$ small. Now, $e$ measures how well $\mathfrak{R}$ is represented by the landmark points. Following this observation, $k$-means clustering was used in \cite{zhang2008improved} for determining the landmarks. It was empirically shown in \cite{zhang2008improved} that clustering indeed results in a smaller error than uniform sampling \cite{fowlkes2004spectral,talebi2014global}. We will see that this is also true for our algorithm. We arrive at a fast algorithm by replacing $\mathbf{K}$ by $\widehat{\mathbf{K}}$. It is clear from \eqref{switch} that the resulting approximation is given by \begin{equation} \label{finalnum} \hat\boldsymbol{g}(\boldsymbol{x}) = \frac{1}{\hat\eta(\boldsymbol{x}) }\sum_{k=1}^{m_0} \alpha_k \boldsymbol{v}_k \big(\iota(\boldsymbol{x}) \big) (\omega \ast \boldsymbol{h}_k)(\boldsymbol{x}), \end{equation} \begin{equation} \label{finalden} \hat\eta(\boldsymbol{x}) = \sum_{k=1}^{m_0} \alpha_k \boldsymbol{v}_k \big(\iota(\boldsymbol{x}) \big) (\omega \ast d_k)(\boldsymbol{x}), \end{equation} where $d_k: \Omega \to \mathbb{R}$ and $\boldsymbol{h}_k: \Omega \to \mathbb{R}^n$ are defined as $d_k(\boldsymbol{x}) = \boldsymbol{v}_k (\iota(\boldsymbol{x}) )$ and $\boldsymbol{h}_k(\boldsymbol{x}) = d_k(\boldsymbol{x}) \boldsymbol{f}(\boldsymbol{x})$. The computation of \eqref{finalnum} and \eqref{finalden} involves $(n+1)m_0$ convolutions, since for each $k \in [1,m_0]$, there are $n$ convolutions in \eqref{finalnum} and one in \eqref{finalden}. The main point is that we have been able to express the non-linear kernel filter using convolutions, for which efficient algorithms are available.
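A minimal sketch of the resulting filtering loop is given below; it assumes a Gaussian spatial kernel implemented with a generic separable Gaussian filter (our experiments instead use the $\mathcal{O}(1)$ recursive filter of \cite{young1995recursive}), and the helper names follow the sketches above and are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def fast_kernel_filter(f, iota, alpha, V, sigma_s):
    # Accumulates the Nystrom-based numerator and denominator sums, one
    # spatial convolution per term.  f: (H, W, n) input image; iota: (H, W)
    # integer index map into the rows of V; sigma_s: spatial Gaussian std.
    f = np.asarray(f, dtype=float)
    num = np.zeros_like(f)
    den = np.zeros(f.shape[:2])
    for k in range(V.shape[1]):
        d = V[iota, k]                 # d_k(x) = v_k(iota(x))
        h = d[..., None] * f           # h_k(x) = d_k(x) f(x)
        num += alpha[k] * d[..., None] * gaussian_filter(h, (sigma_s, sigma_s, 0))
        den += alpha[k] * d * gaussian_filter(d, sigma_s)
    return num / den[..., None]
\end{verbatim}
For standard color BLF, one would take $\mathfrak{R}$ to be the list of RGB triplets in the image, obtain the landmarks from $k$-means, and feed the resulting $(\alpha_k)$ and $(\boldsymbol{v}_k)$ to this loop, which is precisely the structure of Algorithm~\ref{fastalgo} below.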
In particular, \eqref{finalnum} and \eqref{finalden} can be performed using $\mathcal{O}(1)$ operations (w.r.t. the size of the spatial kernel), when $\omega$ is a box or Gaussian \cite{deriche1993recursively,young1995recursive,sugimoto2013fast}. The overall algorithm is described in Algorithm \ref{fastalgo} (source code in \cite{Sourcecode}), where the symbols $\oplus,\otimes$ and $\oslash$ are used to denote pixelwise addition, multiplication, and division. The complexity of $k$-means clustering and the eigendecomposition of $\mathbf{A}$ are $\mathcal{O}(|\Omega| m_0 \rho)$ \cite{Tan2005} and $\mathcal{O}({m_0}^3)$ \cite{pan1999complexity}. On the other hand, the complexity of the convolutions in \eqref{finalnum} and \eqref{finalden} is $\mathcal{O}(|\Omega| m_0(n+\rho))$, where $|\Omega|$ is the number of pixels. Since the complexity of the brute-force implementation is $\mathcal{O}\big(|\Omega| (2S+1)^d(n+\rho)\big)$ \cite{paris2009bilateral}, and convolutions are the dominant operations in our algorithm, we obtain an effective speedup of $(2S+1)^d/m_0$. This is significant as $S$ is typically large \cite{paris2009bilateral}. \begin{algorithm} \textbf{Input}: $\boldsymbol{f} : \Omega \to \mathbb{R}^n$ and $\boldsymbol{p} : \Omega \to \mathbb{R}^\rho$, kernels $\omega$ and $\kappa$\; \textbf{Parameter}: Number of landmarks $m_0$\; \textbf{Output}: Approximation in \eqref{finalnum}\; Form $\mathfrak{R}$ in \eqref{range} and index map $\iota$ in \eqref{indexdef}\; $\{\amu_k\} \leftarrow$ partition $\mathfrak{R}$ into $m_0$ clusters using $k$-means\; \label{cluster} Construct ${\mathbf{A}}$ and $\mathbf{B}$ in \eqref{defA} and \eqref{defB} using $\kappa$ and $\boldsymbol{p}$\; Compute the eigendecomposition of $\mathbf{A}$ in \eqref{eigA}\; Initialize $ \boldsymbol{\zeta}: \Omega \to \mathbb{R}^n$ and $\eta: \Omega \to \mathbb{R}$ with zeros\; \For{$k=1,\ldots,m_0$}{ $\boldsymbol{v}_k = (1/\alpha_k) \mathbf{B}^{\top}\! \boldsymbol{w}_k$\; \For{$\boldsymbol{x} \in \Omega$}{ $d_k(\boldsymbol{x}) = \boldsymbol{v}_k(\iota(\boldsymbol{x}))$\; $\boldsymbol{h}_k(\boldsymbol{x}) =d_k(\boldsymbol{x}) \boldsymbol{f}(\boldsymbol{x})$\; } $ \boldsymbol{\zeta} \leftarrow \boldsymbol{\zeta} \oplus \big(\alpha_k \cdot d_k \otimes(\omega \ast \boldsymbol{h}_k)\big)$\; \label{conv1} $\eta \leftarrow \eta \oplus \big(\alpha_k \cdot d_k \otimes (\omega \ast d_k)\big)$\; \label{conv2} } ${\hat\boldsymbol{g}} \leftarrow \boldsymbol{\zeta} \oslash \eta$. \caption{Fast Kernel Filtering.} \label{fastalgo} \end{algorithm} We now comment on the filtering accuracy, namely, how well is \eqref{num} approximated by \eqref{finalnum}. Intuitively, we expect the approximation to be accurate if $\widehat{\mathbf{K}} \approx \mathbf{K}$. In fact, since the difference $\Vert \mathbf{K} - \widehat\mathbf{K} \rVert_{\mathrm{F}}$ is controlled by the quantization error (Proposition \ref{proposition1}), we have the following result. \begin{theorem} \label{theorem1} \textit{Suppose $\omega$ and $\kappa$ are positive, and $\kappa$ satisfies the property in Proposition \ref{proposition1}. Then \begin{equation} \label{bound} \|\hat\boldsymbol{g}-\boldsymbol{g}\|_\infty = \max_{\boldsymbol{x} \in \Omega} \ \lVert \hat\boldsymbol{g}(\boldsymbol{x}) - \boldsymbol{g}(\boldsymbol{x}) \rVert \leq C_1 \sqrt{e} + C_2 e, \end{equation} where $C_1,C_2 >0$ do not depend on $e$.} \end{theorem} The main steps of the derivation are given in the supplement. Theorem \ref{theorem1} is true for BLF and NLM, where $\kappa$ is a Gaussian. 
A practical implication of this result is that the filtering accuracy is guaranteed to increase with $m_0$ (Figure $4$ in the supplement). Deriving a similar bound is difficult for \cite{adams2009gaussian,adams2010fast,gastal2012adaptive,mozerov2015global}. \section{Results} \label{Numerical} We demonstrate the effectiveness of our algorithm for BLF and NLM of high-dimensional images by comparing it with state-of-the-art algorithms. Instead of standard NLM \cite{buades2005non}, we have used PCA-NLM \cite{tasdizen2009principal}, where the denoising performance of the former is improved by applying PCA on the collection of patches. As for the dataset, we have used the color images from \cite{ImageSource2} and the hyperspectral images from \cite{ImageSource4}. Experiments were performed using Matlab on a $3.4$ GHz quad-core machine with $32$ GB memory. The spatial kernel $\omega$ for BLF is a Gaussian (covariance $\sigma^2 \mathbf{I}$ and $S=3\sigma$), while it is a box in PCA-NLM. The range kernel $\kappa$ is Gaussian (covariance $\theta^2 \mathbf{I}$) for both BLF and PCA-NLM. We have used the fast $\mathcal{O}(1)$ algorithm in \cite{young1995recursive} when $\omega$ is a Gaussian, and the Matlab routine ``imfilter'' when $\omega$ is a box. Note that we can also use other fast Gaussian filters \cite{deriche1993recursively,sugimoto2013fast} if higher accuracy is desired. \begin{figure*} \centering \subfloat[Clean/Noisy ($20$ dB).]{\includegraphics[width=0.19\linewidth,height=0.18\linewidth]{Lenacleannoisy-eps-converted-to.pdf}} \hspace{0.5mm} \subfloat[ ($420$, $30.2$, $0.88$).]{\includegraphics[width=0.19\linewidth,height=0.18\linewidth]{LenadenoisedAM-eps-converted-to.pdf}} \hspace{0.5mm} \subfloat[\textbf{Ours} ($2.6$, $30.1$, $0.88$).]{\includegraphics[width=0.19\linewidth,height=0.18\linewidth]{LenadenoisedPCANLM-eps-converted-to.pdf}} \hspace{0.5mm} \subfloat[AM ($5.8$, $29.4$, $0.87$).]{\includegraphics[width=0.19\linewidth,height=0.18\linewidth]{Lenadenoisedproposed-eps-converted-to.pdf}} \hspace{0.5mm} \subfloat[BM3D ($5$, $33.01$, $0.93$).]{\includegraphics[width=0.19\linewidth,height=0.18\linewidth]{Lenadenoisedproposed-eps-converted-to.pdf}} \caption{Gaussian denoising (noise level $25/255$) of a color image using (b) PCA-NLM, its fast approximations (c) and (d), and (e) BM3D. The respective (Timing (sec), PSNR (dB), SSIM) is shown in the caption.} \label{DenoiseNLM} \end{figure*} \begin{figure} \centering \subfloat[Input ($256 \times 256$).]{\includegraphics[width=0.31\linewidth,height=0.28\linewidth]{peppersinput-eps-converted-to.pdf}} \hspace{0.8mm} \subfloat[\textbf{Ours ($108, 48.4$)}.]{\includegraphics[width=0.31\linewidth,height=0.28\linewidth]{peppersproposed-eps-converted-to.pdf}}\hspace{0.8mm} \subfloat[\cite{mozerov2015global} ($107$, $46.5$).]{\includegraphics[width=0.31\linewidth,height=0.28\linewidth]{peppersGCS-eps-converted-to.pdf}} \subfloat[Brute-force, $4$ min.]{\includegraphics[width=0.31\linewidth,height=0.28\linewidth]{peppersdirect-eps-converted-to.pdf}} \hspace{0.8mm} \subfloat[\cite{gastal2012adaptive}, ($212$, $38.7$).]{\includegraphics[width=0.31\linewidth,height=0.28\linewidth]{peppersAM-eps-converted-to.pdf}}\hspace{0.8mm} \subfloat[\cite{adams2010fast}, ($44.5$).]{\includegraphics[width=0.30\linewidth,height=0.28\linewidth]{outpeppers.png}} \caption{Visual comparison for fast BLF at $\sigma=5$, $\theta=50$, and $|{\Omega}_0|=(6\sigma+1)^2$. The (timing, PSNR) are mentioned, where timing is in milliseconds and PSNR is in dB. 
Timing is not mentioned for \cite{adams2010fast} which is implemented in C++. The breakup of timing for the proposed method is as follows: clustering ($11$ ms), eigendecomposition ($1$ ms), and convolutions ($96$ ms). Note that the overall timing is dominated by the convolutions. } \label{Colorbil} \end{figure} \begin{figure} \centering \subfloat[Clean/Noisy.]{\includegraphics[width=0.45\linewidth,height=0.57\linewidth]{PaviaUcleannoisy-eps-converted-to.pdf}} \hspace{1mm} \subfloat[\textbf{Ours} ($37$ sec, $31.2$ dB, $0.87$). ]{\includegraphics[width=0.45\linewidth,height=0.57\linewidth]{Paviaproposed-eps-converted-to.pdf}} \subfloat[\cite{fan2017hyperspectral} ($3$ min, $30.54$ dB, $0.89$).]{\includegraphics[width=0.45\linewidth,height=0.57\linewidth]{Pavia_SSLLR-eps-converted-to.pdf}} \hspace{1mm} \subfloat[\cite{zhao2015hyperspectral} ($20$ min, $30.09$ dB, $0.74$).]{\includegraphics[width=0.45\linewidth,height=0.57\linewidth]{Pavia_SPRLR-eps-converted-to.pdf}} \caption{Hyperspectral denoising of a natural image corrupted with Gaussian noise of level $25/255$. (Timing, PSNR, SSIM) are shown for all methods.}\label{Hyperspectral1} \label{Hyperspectral2} \end{figure} \textbf{Color BLF}. The state-of-the-art fast algorithms for color BLF are Adaptive Manifolds (AM) \cite{gastal2012adaptive}, Permutohedral Lattice (PL) \cite{adams2010fast}, and Global Color Sparseness (GCS) \cite{mozerov2015global}. We have compared with them in Figure \ref{Colorbil}. The number of manifolds is set automatically in AM, whereas we have used $15$ clusters in GCS and for the Nystr$\ddot{\text{o}}$m approximation. Following \cite{gastal2012adaptive,mozerov2015global}, we used PSNR to measure the error between the brute-force and fast implementations. In Figure \ref{Colorbil}, notice that while our PSNR marginally exceeds that of GCS, it is however much better than PL and AM. Also notice the significant acceleration over the brute-force implementation obtained using our algorithm. We have also provided a table comparing the different methods on the Kodak dataset \cite{ImageSource2} in the supplement. The table shows that our method is better than GCS and PL when $\theta > 40$. As claimed in the introduction, we can see from the table that clustering provides a significant boost in filtering accuracy ($10\mbox{-}20$ dB) over uniform sampling. \textbf{Color NLM}. AM is the state-of-the-art fast algorithm for color NLM (and PCA-NLM). In NLM, $\rho=3(2r+1)^2$, where $r$ is the patch radius \cite{buades2005non}. On the other hand, $\rho$ is reduced to a smaller value in PCA-NLM using PCA. Following \cite{tasdizen2009principal}, we set $\theta$ to be three times the noise level for all experiments. Denoising results are shown in Figure \ref{DenoiseNLM}, where $S=10$ and $r=3$. For (b), (c), and (d), PCA was used to reduce the range dimension from $3 \times 7^2$ to $25$. We used $31$ clusters (resp. manifolds) for the Nystr$\ddot{\text{o}}$m approximation (resp. AM). Following \cite{wang2004}, we measured the denoising performance using PSNR and SSIM (between the clean and denoised images). Note that we are superior to AM both in terms of accuracy and timing. Importantly, our PSNR is close to PCA-NLM (the method being approximated), but we are about $160\times$ faster. In comparison with BM3D \cite{dabov2006image}, our PSNR is $3$ dB less. However, our timing is about half that of BM3D, since our complexity is much less than that of BM3D. 
Additional visual comparisons and accuracy analysis are provided in the supplement. \textbf{Hyperspectral BLF}. Finally, we present a denoising result for a hyperspectral image of $610 \times 340$ pixels and $103$ bands using BLF ($\sigma=3, \theta=100$). We have also compared with state-of-the-art methods for hyperspectral denoising \cite{fan2017hyperspectral,zhao2015hyperspectral}, whose parameters have been tuned accordingly. The results are shown in Figure \ref{Hyperspectral2}. We have used $m_0=32$ landmarks for the Nystr$\ddot{\text{o}}$m approximation. As is standard practice, the $\mathrm{PSNR}$ and SSIM values are averaged over the spectral bands. Notice that our method can restore details better, which results in higher PSNR/SSIM values. In particular, the color is not satisfactorily restored in \cite{fan2017hyperspectral} and graininess can be seen in \cite{zhao2015hyperspectral}. Being a one-shot method, we are much faster than \cite{fan2017hyperspectral,zhao2015hyperspectral}. \section{Conclusion} \label{Conc} We showed that fast bilateral and nonlocal means filtering of high-dimensional images can be performed using the Nystr$\ddot{\text{o}}$m approximation. The proposed algorithm can significantly accelerate the brute-force implementation of these filters, without compromising the visual quality. In particular, our algorithm is often competitive with state-of-the-art fast algorithms, and comes with a provable guarantee on the filtering accuracy. \bibliographystyle{IEEEtran}
\section{Introduction} The transport of radiation through dust plays a central role in many astrophysical objects as dust is efficient in absorbing and scattering ultraviolet (UV) through near-infrared photons and re-radiating the absorbed energy in the infrared and submillimeter (submm). The interpretation of observations from the UV to the submm is critically linked to an accurate calculation of the radiative transfer (RT) through dust. The dust RT problem is difficult to solve as the time-independent version is six dimensional (space, angle, and wavelength) with strongly anisotropic scattering and non-linear coupling between different spatial regions. The complexity of the solution is especially evident when the object of interest is intrinsically three-dimensional (3D). Solutions using Ray-Tracing and Monte Carlo techniques (and mixtures of the two) are used in modern RT codes to solve this problem. \citet{Steinacker13} gives an in-depth review of the 3D RT problem, current solution techniques, and an overview of existing codes, the number of which has grown significantly in the last 15 years. While it is common to provide benchmark solutions for specific objects to ensure that different codes produce the same answer within some tolerance, there are no existing intrinsically 3D RT benchmarks; existing benchmarks focus on one-dimensional (1D) or two-dimensional (2D) objects \citep{Ivezic97, Pascucci04, Pinte09}. In addition to being 1D or 2D, existing RT benchmarks do not include the full dust radiative transfer solution, using approximations in either dust scattering (e.g., isotropic only) or dust emission (e.g., equilibrium only emission) processes. This has motivated a group of 3D dust RT coders to come together and propose a suite of 3D benchmarks that will test the many aspects of the dust RT solution in a range of geometries. This suite of benchmarks is named TRUST (benchmarks for the Transport of Radiation through a dUSTy medium)\footnote{http://ipag.osug.fr/RT13/RTTRUST/}. In general, code benchmarks are motivated by and designed to test for coding errors, ensure accurate calculations, compare how differences between codes impact the results, and test the relative speed of different codes. The TRUST effort is focusing on the first three goals as testing the speed of codes is of much lower priority than ensuring that the codes are accurate, error free, and produce consistent results. Ideally a full analytic solution would be adopted as the target solution for the TRUST benchmarks as this would allow for all benchmarking goals to be achieved. Unfortunately, no such analytic solutions exist for the dust radiative transfer equation for any geometries that are intrinsically 3D. Thus for 3D dust RT we are left with using a converged solution that has been validated by multiple codes, ideally with different solution techniques. This should ensure that the benchmark solution is not affected by coding errors and is likely to be correct. This paper presents the results for a geometrically simple, yet still 3D, benchmark. This first simple benchmark is an externally illuminated slab of dust and is presented specifically to test the components of dust RT at the basic level. Future benchmarks will test specific capabilities of codes in more complex geometries including shells, filaments, shadowed regions, and galaxy disks with spiral structures. 
\section{Setup} \begin{table} \caption{Slab setup details} \label{tab_slab_setup} \begin{tabular}{lc} \hline\hline Name & Values \\ \hline \multicolumn{2}{c}{Dust Geometry} \\ \hline system size & 10 $\times$ 10 $\times$ 10 pc \\ system coordinates & -5 to 5 pc \\ slab z extent & -2 to -5 pc \\ slab xy extent & -5 to 5 pc \\ \mbox{$\tau_z(1~\micron$)} & 0.01, 0.1, 1, 10 \\ \hline \multicolumn{2}{c}{Blackbody Source} \\ \hline location (x,y,z) & (0 pc, 0 pc, 4 pc) \\ temperature & 10\,000 K \\ luminosity & 100\,000 L$_\sun$ \\ & $3.839 \times 10^{38}$~ergs~s$^{-1}$ \\ \hline \multicolumn{2}{c}{Observer} \\ \hline distance & 10\,000 pc \\ \hline \end{tabular} \end{table} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{model_geometry.pdf}} \caption{The setup of the slab benchmark is graphically illustrated. The external views for the output SEDs and images are shown.} \label{fig_setup_schematic} \end{figure} The overall geometry of this benchmark is a rectilinear slab that is externally illuminated by a single blackbody source. The values for the slab and point source are given in Table~\ref{tab_slab_setup} and the geometry illustrated in Figure~\ref{fig_setup_schematic}. The luminosity of the source is set by integrating the source spectral energy distribution (SED) between 0.09 and 2100~\mbox{$\mu$m}. All the dust in the system is uniformly distributed through the slab with the rest of the model space being completely empty except for the blackbody source. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{dirty_emis_type}} \caption{The global SEDs are shown with the three different dust emission choices. The example shown is for $\mbox{$\tau_z(1~\micron$)} = 1$ and $\theta = 90^\circ$.} \label{fig_example_emission_type} \end{figure} The dust grains have the BARE-GR-S properties from \citet{Zubko04} and are discussed in detail by \citet{Camps15b}. Given the computational complexity of including the full treatment of dust emission in the RT solution, this benchmark provides results including stochastic heating (sto, the full solution) as well as the equilibrium only heating (equ) and the single effective grain (eff) approximations. In the effective grain approximation, the grain properties are integrated over the grain size distribution and summed over the grain components to produce a single grain with effective properties. This effective grain is an extreme approximation \citep{Steinacker13} that has the benefit of allowing fast calculation of the dust emission spectrum from the radiation field. The equilibrium-only dust emission approximation assumes all grains are in equilibrium with the radiation field, even for those smaller grains that, physically, should be stochastically heated. The differences in the global SED from a model with the three different methods for calculating the emission from the dust are illustrated in Fig.~\ref{fig_example_emission_type}. This figure clearly illustrates the importance of including the full solution to achieve accurate mid-IR and, to a lesser extent, far-IR results for cases like those in this benchmark. Data files that contain the full grain properties are available online\footnote{http://www.shg.ugent.be} \citep{Camps15b}. The density of the slab is set by the optical depth along the z-direction, defined as \mbox{$\tau_z(1~\micron$)}. The \mbox{$\tau_z(1~\micron$)}\ values chosen here (Table~\ref{tab_slab_setup}) provide a full sampling of optical depths from optically thin to thick. 
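For reference, the uniform dust density corresponding to a given \mbox{$\tau_z(1~\micron$)}\ follows directly from the definition of the extinction optical depth through the 3~pc thick slab, $\tau_z = \kappa_{\rm ext}(1~\mbox{$\mu$m})\,\rho\,\Delta z$. The short Python sketch below illustrates the conversion; the $1~\mbox{$\mu$m}$ mass extinction coefficient of the BARE-GR-S mixture is left as an input to be taken from the dust property data files, and the function name and the use of astropy units are ours.
\begin{verbatim}
import astropy.units as u

def slab_dust_density(tau_z, kappa_ext_1um, dz=3.0 * u.pc):
    # Uniform dust density giving an extinction optical depth tau_z at
    # 1 micron through the slab thickness dz (z = -5 pc to -2 pc).
    # kappa_ext_1um: mass extinction coefficient at 1 micron of the adopted
    # dust model, as an astropy Quantity (e.g., in cm^2 / g).
    return (tau_z / (kappa_ext_1um * dz)).to(u.g / u.cm**3)
\end{verbatim}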
For $\mbox{$\tau_z(1~\micron$)}=0.01$, the model is optically thin at all wavelengths with a maximum of $\tau_z(0.09~\mbox{$\mu$m}) \sim 0.18$. The next $\mbox{$\tau_z(1~\micron$)}$ value of 0.1 is optically thick in the UV with $\tau_z(0.09~\mbox{$\mu$m}) \sim 1.8$. For $\mbox{$\tau_z(1~\micron$)} = 1$, the slab is optically thick to all UV and optical photons. Finally, the $\mbox{$\tau_z(1~\micron$)} = 10$ case is optically thick for all $\lambda \lesssim 4~\mbox{$\mu$m}$ and very optically thick in the UV with $\tau_z > 100$. Each RT code generates global SEDs and spatially resolved, multi-wavelength images. These outputs from each code are compared against all the other codes to estimate the fidelity of the RT solution for each configuration. We choose to use outputs for comparison as these quantities are what is generally compared with observations, and the internal representations of quantities in RT codes (e.g., radiation field density) are often stored on quite different spatial grids. Previous benchmarks have also compared the internal dust temperature structure. We do not, as a single dust temperature is only defined for the eff grain approximation. For the other two cases (equ and sto), there is a range of dust temperatures in each grid cell, given the range of grain sizes and compositions. Global SEDs are computed for the total as well as decomposed into the different RT components (direct stellar, scattered stellar, direct dust emission, and scattered dust emission). \begin{figure} \resizebox{\hsize}{!}{\includegraphics{wavelengthgrid.png}} \caption{The BASIC and FULL wavelength grids are shown. The BASIC grid is used for the models using the effective grain and equilibrium-only emission approximations. The FULL grid is used for the models computing the full dust emission including stochastically heated grains. Vertical blue lines between $\sim$3-23\mbox{$\mu$m}\ represent the dense sampling of wavelength points to resolve the PAH emission.} \label{fig_wavegrid} \end{figure} \begin{table} \caption{Physical constants} \label{tab_constants} \begin{tabular}{ll} \hline\hline Constant & Description \\ \hline $c = 2.99792458 \times 10^8$ m s$^{-1}$ & speed of light \\ $h = 6.62606957 \times 10^{-34}$ m$^2$ kg s$^{-1}$ & Planck constant\\ $k = 1.3806488 \times 10^{-23}$ m$^{2}$ kg s$^{-2}$ K$^{-1}$ & Boltzmann constant \\ \hline \end{tabular} \end{table} The SEDs are output in units of Jy and images are output in units of MJy/sr. SEDs and images are generated at seven viewing angles (0\degr, 30\degr, 60\degr, 90\degr, 120\degr, 150\degr, and 180\degr; see Figure~\ref{fig_setup_schematic}), probing the full range of scattering angles from back-scattering ($\theta = 0\degr$) to forward scattering ($\theta = 180\degr$). At lower optical depths ($\mbox{$\tau_z(1~\micron$)} = 0.01, 0.1, 1.0$), the resolution of the images is 300$\times$300~pixels while at $\mbox{$\tau_z(1~\micron$)} = 10$, the image resolution is 600$\times$600~pixels. The resolution is set by the need to resolve RT effects in the front surface of the dust slab in the infrared. In all cases, the physical size covered by the images is 15$\times$15~pc to account for all possible rotations of the system. The two wavelength grids used are shown in Fig.~\ref{fig_wavegrid} and can be obtained from the TRUST website\footnote{http://ipag.osug.fr/RT13/RTTRUST/}.
The BASIC wavelength grid is used for the effective grain and equilibrium-only emission approximations as the equilibrium grain temperatures only depend on the total absorbed energy. The FULL wavelength grid is used for the full emission solution as the calculation of the stochastically heated grains requires a finer wavelength sampling to calculate the temperature probability distribution. The FULL wavelength grid also includes a very fine spacing in the mid-IR to resolve the aromatic/PAH emission features. Finally, we give the adopted values of relevant physical constants in Table~\ref{tab_constants} as their exact values affect the output SEDs and images. \subsection{Example outputs\label{sec:ex_outputs}} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_sed_comp_example} \includegraphics{dirty_sed_angles}} \caption{Example SEDs are shown from models run with the full dust emission solution. On the left, the total and components of the global SED are shown for the $\mbox{$\tau_z(1~\micron$)} = 1$ and $\theta = 150^\circ$ case. On the right, the total SEDs are shown for all \mbox{$\tau_z(1~\micron$)}\ values and $\theta$ values of $0^\circ$, $90^\circ$, and $180^\circ$.} \label{fig_example_sed_components} \end{figure*} Fig.~\ref{fig_example_sed_components} gives an example of the global SED outputs. On the left, the total SED (Total) is shown along with the different components that comprise the total. Decomposing the total SED into components is important for testing how the different parts of the dust RT solution differ between models. The components include the stellar flux attenuated by any line-of-sight dust (Direct Stellar), the stellar flux scattered by the dust (Scattered Stellar), the thermal emission from the dust (Direct Dust Emission), and the scattered dust emission (Scattered Dust Emission). In addition, the stellar flux from the dust-free model (Transparent Stellar) is also shown as it is diagnostic of how each code treats the input stellar photons. The particular example shown illustrates the importance of the dust scattered stellar component in the ultraviolet and optical. The right panel shows the total SEDs covering the full \mbox{$\tau_z(1~\micron$)}\ and $\theta$ range. The $\theta = 180^\circ$ SEDs at shorter wavelengths clearly illustrate the impact of observing the star through the dust slab. The impact of dust self-absorption can be seen at $\mbox{$\tau_z(1~\micron$)} = 10$ most easily from the increasing depth of the silicate absorption feature at $\sim$$10~\mbox{$\mu$m}$ with increasing viewing angle $\theta$. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{skirt_images_example}} \caption{The output images from the SKIRT model are shown for the $\mbox{$\tau_z(1~\micron$)} = 0.1$ case with the full dust emission solution for two representative wavelengths.} \label{fig_example_images} \end{figure*} Examples of the output images are shown in Fig.~\ref{fig_example_images} for the full range of viewing angles $\theta$. The output SED at $\lambda = 0.15~\mbox{$\mu$m}$ is dominated by scattering, and the corresponding images illustrate the strongly forward-scattering nature of dust grains, with the brightness of the scattered light increasing from $\theta = 0^\circ$ (back scattering) to $\theta = 180^\circ$ (forward scattering).
The $\lambda = 151.99~\mbox{$\mu$m}$ images show that the infrared emission has a different dependence on viewing angle compared to the $\lambda = 0.15~\mbox{$\mu$m}$ images, symmetrically peaking at $\theta = 90^\circ$ and reflecting the isotropic nature of the dust emission. All the model outputs for all the codes are available online\footnote{https://doi.org/10.5281/zenodo.163632}. \section{Models} Seven 3D dust RT codes participated in this benchmark. Six of them are based on Monte Carlo techniques (CRT, DIRTY, Hyperion, SKIRT, TRADING, and SOC) and one is based on Ray-Tracing (DART-Ray). Here we provide a very brief description of each code. The reader is encouraged to read the references provided for the details of how each code implements the solution to the RT problem. \subsection{CRT} CRT is a 3D Monte Carlo radiative transfer code \citep{Juvela2003, Lunttila2012}. It uses nested Cartesian grids for representing the dust distribution. The program can use several methods for accelerating calculations, for example packet weighting \citep{Juvela2005}, polychromatic photon packets \citep{Jonsson2006}, and subgrid iteration \citep{Lunttila2012}. CRT includes a high-performance library for computing emission from arbitrary dust models, including stochastically heated grains \citep[see][]{Camps15b}; however, it also allows using an external program for dust emission calculations \citep[see, e.g.,][]{Ysard2011}. The main application of CRT has been studies of molecular clouds and Galactic star formation. In particular, it has often been used for creating synthetic scattered light and dust thermal continuum observations of cloud models from magnetohydrodynamic simulations to help quantify the uncertainty in observationally derived cloud properties such as column density and core mass distribution \citep[e.g.,][]{Juvela2006, Malinen2011}. Other applications have included stability analysis of non-isothermal Bonnor-Ebert spheres \citep{Sipila2011} and galaxy mergers \citep{Karl2015}. \subsection{DART-Ray} DART-Ray is a purely ray-tracing 3D dust radiative transfer code. Its core algorithm has been presented by \citet{Natale14} and further developed by Natale et al. (2017, in preparation). This algorithm reduces the amount of calculations that would be required in a brute-force ray-tracing approach by limiting the propagation of rays from each radiation source throughout the RT model to within the source influence volume. The latter is the volume around a radiation source within which the source contributes significantly to the radiation field energy density. This code utilizes adaptive Cartesian grids to define the distributions of stars and dust and an optimization technique to set the angular density of rays cast from each source. The dust emission can be calculated both for dust in equilibrium with the radiation field and dust that is stochastically heated \citep{Natale15}. The current version of DART-Ray does not include dust self-heating or dust emission scattering. In the context of this benchmark, the results are expected to be accurate in the infrared only for the cases where the dust emission is optically thin. We note that, in contrast with the other participating codes, the maximum number of scattering iterations is not set by the user in DART-Ray. Instead, scattering iterations are stopped when the remaining scattered radiation luminosity to be processed is less than 1\% of the initial scattered radiation luminosity.
\subsection{DIRTY} The DIRTY radiative transfer code is a 3D Monte Carlo radiative transfer code \citep{Gordon01, Misselt01}. It can handle arbitrary geometries efficiently using nested Cartesian grids. It includes a full dust grain model description allowing for arbitrary dust grain models to be used. The dust emission calculation includes the full solution including both equilibrium and non-equilibrium (stochastically heated) emission. This code has been used to study the general behavior of radiative transfer through clumpy dust \citep{Witt96}, Milky Way reflection nebulae \citep{Gordon94, Calzetti95, Lewis09}, regions of galaxies \citep{Witt00, Misselt01}, starburst galaxies locally \citep{Gordon97, Gordon00, Law11} and at high redshift \citep{Vijh03}, and disk galaxies \citep{Pierini04}. For this benchmark, the spacing in the z direction was log and linear in the x and y directions. Motivated by this benchmark, the composite biasing technique \citep{Baes16} was added to better sample scattering at high optical depths and the dust emission from radiation fields with strong spatial variations. \subsection{Hyperion} Hyperion \citep{Robitaille11} is an open-source\footnote{http://www.hyperion-rt.org/} dust continuum 3D Monte Carlo radiative transfer code. It is designed to be modular, and can be used to compute radiative transfer for arbitrary 3D geometries and has been applied to, for example, analytical models of forming stars \citep{Robitaille14,Koepferl15,Johnston15}, galaxy formation simulations \citep{Narayanan15}, and large-scale Galactic emission \citep{Robitaille12}. A number of grid geometries are supported, including Cartesian, spherical and cylindrical polar, nested Cartesian, octree, and Voronoi grids. Grid cells are never required to be regularly spaced, so for the models presented here, the cells in the $z$ direction are spaced logarithmically (with the first cell below the surface of the slab located at $-2.001$). Multi-process parallelization is implemented using MPI and used for this benchmark. Hyperion supports computing the radiative transfer for one or more dust populations, but does not include full support for stochastic heating (instead, computing models with small grains and PAHs can be done using precomputed templates, as done in \citealt{Robitaille12}). Since the original implementation of the code presented in \citet{Robitaille11}, forced first scattering using the \citet{Baes16} algorithm has been added (and the results here assume $\xi=0.2$). In addition, the process for monochromatic radiative transfer (described in \S2.6.4 of \citealt{Robitaille11}) has now changed - when computing the contribution to the scattered light emission, photon packets are scattered and lose energy until their energy goes below a certain threshold (set to $10^{-120}$ for this paper), as opposed to randomly sampling whether to scatter or absorb at each interaction (which caused many scattering scenarios to be under-sampled and therefore required large numbers of photons in order to attain a good signal-to-noise in certain situations). \subsection{SKIRT} SKIRT is a public\footnote{http://www.skirt.ugent.be} multi-purpose 3D Monte Carlo dust radiative transfer code \citep{Baes11, Camps15a} for simulating the effect of dust on radiation in astrophysical systems. 
It offers full treatment of absorption and multiple anisotropic scattering by the dust, computes the temperature distribution of the dust and the thermal dust re-emission self-consistently, and supports stochastic heating of dust grains \citep{Camps15b}. The code handles multiple dust mixtures and arbitrary 3D geometries for radiation sources and dust populations, including grid- or particle-based representations generated by hydrodynamical simulations. The dust density distribution is discretized using one of the built-in dust grids, including state-of-the art octree \citep{Saftly13}, $k$-d tree \citep{Saftly14}, and Voronoi \citep{Camps13} grids. The wide range of built-in components can be configured to construct complex models in a parameter file or through a user-friendly interface \citep{Camps15a, Baes15}. SKIRT implements hybrid parallelization, allowing an arbitrary mix of multiple threads and processes possibly across multiple computing nodes (Verstocken et al., in preparation). While SKIRT is predominantly used to study dusty galaxies \citep[e.g.,][]{Baes02, Baes10, Baes11, DeLooze12, DeGeyter14,Saftly15}, it has also been applied to active galactic nuclei \citep{Stalevski12}, molecular clouds \citep{Hendrix15}, and stellar systems \citep{Deschamps15}. \subsection{TRADING} TRADING \citep{Bianchi08} is a 3D Monte Carlo radiative transfer code originally designed to study the effects of clumping in simulations of dust extinction and emission in spiral galaxies. Developed from an earlier regular-grid, thermal-emission-only code \citep{Bianchi96,Bianchi00}, TRADING includes an octree grid for the dust distribution, stochastic heating, and dust self-absorption. It neglects scattering of dust-emitted radiation. It has been used to model dust extinction \citep{Bianchi07} and emission \citep{Bianchi11,Holwerda12} in edge-on galaxies, and to study the dust heating in the host galaxy of high-z Quasi-Stellar Objects \citep{Schneider15}. A few adaptations were made to TRADING for this benchmark: while each photon packet wavelength is drawn from a (tabulated) probability distribution function (PDF), at odds with other RT codes, the weight of the photons was adjusted so that a similar number of packets falls in each bin of a logarithm-spaced grid (i.e.,\ a weight $1/\lambda$ was used). Forced scattering was used for the first scattering event and along all paths with $\tau > 5$, applying the composite biasing scheme described by \citet{Baes16} for half the events along these paths. The models presented here were all run using an octree grid of $\approx 7.5\times 10^{5}$ cells, having a higher resolution in the part of the slab facing the source. \subsection{SOC} SOC is a new Monte Carlo radiative transfer code that is parallelized using OpenCL libraries (Juvela et al., in preparation). The models can be defined on regular Cartesian, modified Cartesian, or octree grids. In this paper, calculations employ modified Cartesian grids that are defined by cell boundaries located at arbitrary positions along the three main axes. In dust-scattering calculations, SOC uses photon packages that each contain photons from four consecutive frequency bins. The differences between the optical depths and scattering functions at those frequencies are compensated by weighting \citep[see][]{Juvela2005, Jonsson2006}. Calculations for dust emission proceed one frequency at a time. 
The dust temperature distributions and the dust emission (per cell) of stochastically heated grains are calculated with an external program that is common with CRT. For dust models assuming an equilibrium temperature, calculations are done within SOC itself. \section{Output comparisons} The comparison between the results from the different codes is based on measuring the average absolute fractional deviation from the median result of all the codes. Explicitly, for each code ($j$) we calculate \begin{equation} \bar{\Delta}_j = \frac{1}{n} \sum_i \frac{|x_{ij} - \mu_i|}{\mu_i} ,\end{equation} where $n$ is the number of wavelength/spatial points, $x_{ij}$ is the value at the $i$th wavelength/spatial point of the global SED/image for code $j$, and \begin{equation} \mu_i = \mbox{med}_j(x_{ij}). \end{equation} We use the median for comparison as there is no analytic solution and use of a median is more robust to outliers. Given this metric, the deviation of one code from the reference being smaller than the deviation of other codes is not significant per se; it simply means that it is nearer the median. The goal of these quantitative comparisons is to determine the overall agreement between codes and to determine whether or not the differences between code results are random or systematic. \begin{table*} \caption{Code parameter values} \label{tab_model_params} \begin{tabular}{llccccccc} \hline\hline Name & Description & CRT & DART-Ray & DIRTY & Hyperion & SKIRT & TRADING & SOC \\ \hline $N$ & \# of photons/rays per wavelength & $10^9$ & $10^9$--$10^{10}$ & $10^8$ & $10^9$ & $10^8$ & $2\times 10^8$ & $5\times 10^8$\\ $n_{xy}$ & \# of bins in $x$ or $y$ & 100 & 90 -- 992 & 100 & 100 & 100 & 40 -- 160 & 150 \\ $n_z$ & \# of bins in $z$ & 150 -- 300 & 29 -- 31 & 100 & 200 & 100 & 144 & 150 \\ $m_\mathrm{scat}$ & max \# of scatterings & $\sim$60 & 8 & 500 & $\sim$270 & $\sim$250 & $\sim$390 & 20 \\ $m_\mathrm{iter}$ & max \# of dust heating iterations & 5 & 0 & 4 & 5 & 50 & 4 & 4 \\ \hline \end{tabular} \end{table*} As certain parameters are particularly important for the precision of the results, as revealed by the convergence tests (\S\ref{sec_convergence}), we give the values for such parameters for all the codes in Table~\ref{tab_model_params}. The number of rays/photons per wavelength ($N$) is only roughly comparable between different codes as, for example, the emission from the stellar source may be biased towards the slab or not. In addition, rays or photons may be split, multiplying the impact of a single ray or photon. The value of $m_\mathrm{scat}$ is also only roughly comparable between codes as the scattering may be done by forcing no scattering, forcing the first scattering, or forcing all scatterings. The maximum number of heating iterations $m_\mathrm{iter}$ controls whether or not dust self-heating is taken into account and how many iterations are allowed to achieve a specified convergence level. If $m_\mathrm{iter} = 0$, then dust self-heating is not accounted for and the dust emission is only determined by the absorbed stellar photons. While these parameters are only roughly comparable between codes, they do provide a qualitative comparison between codes of the precision of different radiative transfer solution components. While most of the codes include scattering of the dust emission, DART-Ray and TRADING do not. This can affect the accuracy of the infrared images from these codes, but it is unlikely to affect the global SEDs as the total scattered dust emission is negligible compared to the total direct dust emission.
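For concreteness, the comparison metric defined above can be evaluated with a few lines of Python; the sketch below assumes the code results have been collected into a single array on a common grid and uses names of our own choosing.
\begin{verbatim}
import numpy as np

def average_deviation(x):
    # Average fractional deviation of each code from the median of all
    # codes.  x: (n_codes, n_points) array of global SED fluxes or image
    # Y-slice surface brightnesses on a common wavelength/spatial grid.
    # Returns one value per code (quoted as percentages in the tables).
    mu = np.median(x, axis=0)          # median over codes at each point
    return np.mean(np.abs(x - mu) / mu, axis=1)
\end{verbatim}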
The adopted distance was not fully used by all the different codes in creating the images. Some codes took into account the projection effects due to the finite distance while others assumed an infinite distance, resulting in no projection effects. The resulting differences are small in the images, except for the pixels sampling the edge of the slab, and are noticeable for a viewing angle of $\theta = 90^\circ$. These edge pixels were not used for the image comparisons to avoid injecting differences that are purely geometrical into the comparisons. \subsection{Example comparisons} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{slab_t1e-1_i150_decomposed_sed_comp.pdf}} \caption{An example of the model global SED outputs is shown for the $\mbox{$\tau_z(1~\micron$)} = 0.1$, $\theta = 150^\circ$, and effective grain case. The plot in the upper left corner gives the median SED from all the models, both as a total and decomposed into components. The other plots give the percentage differences from the median for each of the components.} \label{fig_example_sed} \end{figure*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{slab_t1e0_i090_w000_53_image_comp.pdf}} \caption{An example of the model image outputs is shown for the $\mbox{$\tau_z(1~\micron$)} = 1$ and effective grain case. In addition to the total images for each model, Y and X slices are shown along with the differences for each model from the median slice. The X and Y slices refer to the output image dimensions, not the axes in the 3D model space. The locations of the slices are shown over-plotted on the 1st (Y-slice) and 2nd (X-slice) model images, where each slice is computed as an average over the slice width.} \label{fig_example_image} \end{figure*} The comparisons between the results of the different codes were done both with the global SEDs and images as shown in Figs.~\ref{fig_example_sed} and \ref{fig_example_image}. The global SED comparisons were done for both the total SED and the different components (\S\ref{sec:ex_outputs}) of the RT solution. The image comparisons included both side-by-side display of the images as well as quantitative comparisons using two slices, one in the X and one in the Y direction. The slices were averaged in the direction perpendicular to the slice. The comparison between models focused on the behavior of these averaged slices. The global SED comparisons were diagnostic of issues with the different components of the solution (e.g., dust scattering and emission). The image comparisons were diagnostic of general issues in creating the images as well as cases seen only for a limited range of parameters (e.g., UV images at high optical depths). Quantitative analysis of the images focused on the Y slices as these were diagnostic of systematic issues at all $\theta$ values while being more robust to noise associated with the number of photons/rays. The comparisons for all cases are given in the Slab section of the TRUST website\footnote{http://ipag.osug.fr/RT13/RTTRUST/BM1.php}. \subsection{Precision goal} In contrast with problems that have an analytic solution, for problems such as 3D dust radiative transfer that require a numerical approach, the solution will never converge fully to infinite precision. Thus, we need to define a robust metric to judge the convergence of the solutions presented here. One criterion could be that the convergence is well below the expected accuracy of the observations that the models are attempting to reproduce.
Many observations are limited to accuracies of 1\% or larger based on uncertainties in absolute flux or surface brightness calibration \citep[e.g.,][]{Bohlin11, Balog13}. Another approach is to use the differences seen in previous 2D dust RT benchmarks as an upper limit. The differences between the global SEDs in the \citet{Pascucci04} 2D disk geometry benchmark were $< 10\%$ for all optical depths ($\tau(V) = 0.1 - 100$) and $< 3\%$ for the lowest optical depth. The \cite{Pinte09} circumstellar disk geometry benchmark for high optical depths ($\tau(I) \approx 10^3 - 10^6$) found differences between codes on the order of 10\%. Finally, available computational power imposes a limit: it should be possible to carry out the computations in a reasonable time. This allows for more codes to participate in this benchmark and makes it reasonable for new codes to use these benchmarks to test their accuracy. Combining all these points, we adopt a precision goal of 1\%\ on the global SED at lower optical depths and a more relaxed 10\%\ for higher optical depths. For the Y slice image-based comparisons, we have chosen a 10\%\ precision goal. \subsection{Resulting precisions} \begin{table} \caption{SED average deviations} \label{tab_sed_dev} \begin{tabular}{lcc} \hline\hline Component & $\mbox{$\tau_z(1~\micron$)} \leq 1$ & $\mbox{$\tau_z(1~\micron$)} = 10$ \\ \hline Direct Stellar & 0.3\% & 1.7\% \\ Scattered Stellar & 0.9\% & 3--58\% \\ \hline \multicolumn{3}{c}{eff} \\ \hline Direct Dust Emission & 0.7\% & 4.0\% \\ Scattered Dust Emission & 0.7\% & 3.7\%\\ \hline \multicolumn{3}{c}{equ} \\ \hline Direct Dust Emission & 0.3\% & 3.1\% \\ Scattered Dust Emission & 1.3\% & 3.3\% \\ \hline \multicolumn{3}{c}{sto} \\ \hline Direct Dust Emission & 2.9\% & 3.2\% \\ Scattered Dust Emission & 2.8\% & 2.8\% \\ \hline \end{tabular} \end{table} \begin{table} \caption{Y Slice average deviations} \label{tab_yslice_dev} \begin{tabular}{rcc} \hline\hline $\lambda$ (\mbox{$\mu$m}) & $\mbox{$\tau_z(1~\micron$)} \leq 1$ & $\mbox{$\tau_z(1~\micron$)} = 10$ \\ \hline 0.15 & 5.3\% & $\gg$1000\% \\ 0.53 & 3.5\% & 119\% \\ \hline \multicolumn{3}{c}{eff} \\ \hline 35.11 & 8.3\% & 3.9\%\\ 151.99 & 3.4\% & 2.1\% \\ \hline \multicolumn{3}{c}{equ} \\ \hline 35.11 & 3.8\% & 5.0\%\\ 151.99 & 4.7\% & 0.9\% \\ \hline \multicolumn{3}{c}{sto} \\ \hline 8.11 & 10.7\% & 8.7\%\\ 23.10 & 6.7\% & 19.9\% \\ 151.99 & 6.5\% & 4.1\% \\ \hline \end{tabular} \\ \end{table} The percentage differences between codes for the global SED components are summarized in Table~\ref{tab_sed_dev} and for the Y slices at specific wavelengths in Table~\ref{tab_yslice_dev}. In general, we find the results to be within the goal precisions with the notable exceptions of the stellar scattered and dust emission components for $\mbox{$\tau_z(1~\micron$)} = 10$. The details of these comparisons are discussed below. \subsubsection{Stellar radiative transfer} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{slab_qcomp_image_all_eff_uvopt.pdf}} \caption{The average deviations of the global SEDs from the median results are shown versus $\theta$ for the direct and scattered stellar photons in the top row. The Y slice average deviations from the median results are shown versus $\theta$ for two UV/optical wavelengths probing the scattered stellar photons in the bottom row.
The results for $\mbox{$\tau_z(1~\micron$)} = 0.1$ and $0.01$ are similar to those for $\mbox{$\tau_z(1~\micron$)} = 1.0$.} \label{fig_eff_stell_comp} \end{figure*} While the properties of the calculated dust emission are sensitive to whether an approximation is used (eff or equ) or the full solution is calculated (sto), as seen in Fig.~\ref{fig_example_emission_type}, the radiative transfer problem itself is not. In fact, the radiative transfer of photons through dust is mathematically equivalent regardless of whether the photon interaction is computed separately for each size and composition in a distribution, or computed for an effective grain generated by integrating the grain properties over the size distributions and chemical compositions \citep{Steinacker13}. Hence, comparisons of the direct and scattered stellar light provide a comparison of each code's treatment of the radiative transfer problem, free from the additional computational complications arising from computing the dust emission. The direct and scattered stellar comparisons for the global SEDs are shown in Fig.~\ref{fig_eff_stell_comp} and summarized in Table~\ref{tab_sed_dev}. The direct stellar component shows the largest differences at high $\theta$ values where the slab occults the illuminating star. They are largest at $\theta = 150^\circ$, grow with increasing \mbox{$\tau_z(1~\micron$)}, and are just above the goal of 1\% at all optical depths. The $\theta = 150^\circ$ case provides the maximum optical depth from the star to the observer. The differences for $\theta < 150^\circ$ are due to different numerical representations of the intrinsic SED between codes and are $\ll 0.1$\%. For the scattered stellar component, the difference is $<$1\% for $\mbox{$\tau_z(1~\micron$)} \leq 1$ and 3\% for $\mbox{$\tau_z(1~\micron$)} = 10$, with the exception of $\theta = 180^\circ$, where the difference is 50\%. Larger differences are not unexpected at $\theta = 180^\circ$ as there are no paths for scattered photons that do not require them to penetrate the entire slab, making the results very sensitive to the number of photon packets run for a given model. Thus, the $\mbox{$\tau_z(1~\micron$)} = 10$, $\theta = 180^\circ$ case is a very sensitive test of the dust scattering at high optical depths. The comparisons for the Y slices of the images at two representative wavelengths (one in the UV and one in the optical) are shown in the bottom row of Fig.~\ref{fig_eff_stell_comp} and summarized in Table~\ref{tab_yslice_dev}. For $\mbox{$\tau_z(1~\micron$)} \leq 1$, the precisions are well within the goal of 10\%. At $\mbox{$\tau_z(1~\micron$)} = 10$, the discrepancies are far larger than this goal. The plot shows a bifurcated behavior for $\theta > 0^\circ$ that is the signature of very large variations between the median and all of the code results. The results that are below the median by a large value give $\bar{\Delta} \sim 100\%$ as they are effectively zero. The results that are above the median by a large value give $\bar{\Delta} \gg 10^4\%$. For $\theta = 0^\circ$, all the codes have $\bar{\Delta} < 10\%$ as the scattered stellar images are dominated by backscattering off the surface of the slab. Physically, one would expect that, even for strongly forward-scattering grains at very high optical depth, there would be very little scattered stellar light in the models for viewing angles near $\theta = 180^\circ$. Indeed, that is what is observed for this benchmark.
However, given that the amount of scattered stellar light is very small, small differences in how different codes handle the scattering (and the RT problem in general) can lead to very large relative discrepancies, as observed. These results illustrate that 3D dust RT at high optical depth is still a challenging numerical problem for RT codes and can provide a sensitive probe of the efficacy of the numerical solution implemented. \subsubsection{Dust emission} The dust emission changes significantly depending on the assumption used, as shown in Fig.~\ref{fig_example_emission_type}. The results for each of the dust emission approximations (eff = effective grain, equ = equilibrium only, and sto = full solution including stochastically heated grains) are given. Almost all of the codes provide results for all three dust emission cases. The exceptions are Hyperion, which only provided results for the eff case, and SOC, which did not provide results for the equ case. In both cases, the codes could have computed the missing cases, but time limitations meant they were not computed. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_converge_emissscat.pdf}} \caption{The contribution the scattered dust emission at $\lambda = 35.11~\mbox{$\mu$m}$ makes to the Y slice for $\mbox{$\tau_z(1~\micron$)} = 10$, $\theta = 90^\circ$ is shown. The comparison between all the code results is shown on the left, where the DART-Ray and TRADING results are significantly below the results from the other five codes. On the right, the results from DIRTY are shown decomposed into the three contributing components. For the back three quarters of the slab, the scattered dust emission dominates over the direct dust emission and scattered stellar emission. The differences between DART-Ray and TRADING and the rest of the codes are due to their not calculating the scattered dust emission component.} \label{fig_dscat_importance} \end{figure*} DART-Ray and TRADING do not compute the dust-emission scattered component. The importance of the dust-scattered emission is illustrated in Fig.~\ref{fig_dscat_importance}. In addition, DART-Ray does not allow for the heating of dust due to its own emission. The importance of dust self-heating is discussed in \S\ref{sec_self_heating}. The lack of the full dust emission radiative transfer calculation in these two codes means that their results are expected to be less accurate at high optical depths. High optical depths are where the dust-emission scattering and self-heating are particularly important. For these reasons, and to be conservative, we do not include DART-Ray in determining the precisions of the solution for any optical depth for the global SED dust-emission components. In addition, we do not include DART-Ray or TRADING for the $\mbox{$\tau_z(1~\micron$)} = 10$ results for determining the precision of the dust emission Y slices. We do include all the results in the figures, with those results not used for precision calculations shown as faint lines. For longer IR wavelengths the results for TRADING do not seem to be significantly affected by not including the dust-emission scattering calculation, but for the purposes of establishing the precision of the benchmark, we conservatively do not include them. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{slab_qcomp_image_all_eff_ir.pdf}} \caption{The average deviation from the median results is shown versus $\theta$ for the direct and scattered dust emission for the effective grain approximation in the top row.
The Y slice average deviation from the median results is shown versus $\theta$ for two IR wavelengths probing the dust emission in the bottom row. The results for $\mbox{$\tau_z(1~\micron$)} = 0.1$ and $0.01$ are similar to those for $\mbox{$\tau_z(1~\micron$)} = 1.0$. Models not used in the precision calculation are shown as faint lines.} \label{fig_eff_demis_comp} \end{figure*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{slab_qcomp_image_all_equ_ir.pdf}} \caption{The average deviation from the median results is shown versus $\theta$ for the direct and scattered dust emission for the equilibrium-only grain approximation in the top row. The Y slice average deviation from the median results is shown versus $\theta$ for two IR wavelengths probing the dust emission in the bottom row. The results for $\mbox{$\tau_z(1~\micron$)} = 0.1$ and $0.01$ are similar to those for $\mbox{$\tau_z(1~\micron$)} = 1.0$. Models not used in the precision calculation are shown as faint lines.} \label{fig_equ_demis_comp} \end{figure*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{slab_qcomp_image_all_sto_ir.pdf}} \caption{The average deviation from the median results is shown versus $\theta$ for the direct and scattered dust emission for the full grain solution (equilibrium and non-equilibrium dust emission) in the top row. The Y slice average deviation from the median results is shown versus $\theta$ for three IR wavelengths probing the dust emission in the bottom row. The results for $\mbox{$\tau_z(1~\micron$)} = 0.1$ and $0.01$ are similar to those for $\mbox{$\tau_z(1~\micron$)} = 1.0$. Models not used in the precision calculation are shown as faint lines.} \label{fig_sto_demis_comp} \end{figure*} The direct and scattered dust-emission comparisons for the global SED components are shown in Figs.~\ref{fig_eff_demis_comp}, \ref{fig_equ_demis_comp}, and \ref{fig_sto_demis_comp} for the eff, equ, and sto cases, respectively, and summarized in Table~\ref{tab_sed_dev}. The comparisons for the Y slices of the images at two representative wavelengths (one in the mid-IR and one in the far-IR) are shown in the bottom row of the same figures and summarized in Table~\ref{tab_yslice_dev}. For the eff case, the precisions achieved are at or better than the goals. For the equ case, the precisions achieved meet the goals, except for the global dust-emission scattered component, which has a precision of 2\%. For the sto case, the precision was 3\% for the global dust-emission components, well above the goal of 1\%. The Y slice precisions were better than the goal of 10\%, except for the $\lambda = 23.10~\mbox{$\mu$m}$ slice, where the differences were at the 20\% level. While the goal precisions were achieved for the eff case, it is worth noting that the differences between the Hyperion results and most of the codes are likely caused by differences in the sampling of the photons' wavelengths. Specifically, most of the codes sample the photon wavelengths directly from the wavelength grid, while Hyperion samples the photon wavelengths from a continuous spectrum and the wavelength grid is only used for output quantities. With a higher-resolution wavelength grid, it is likely that most of the codes would cluster around the Hyperion results.
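For readers who wish to reproduce these comparisons, the following minimal Python sketch (our illustration, not code from any of the participating models; the function and variable names are ours) shows one way to compute the percentage differences from the code-to-code median, assuming that the average deviation $\bar{\Delta}$ is taken to be the mean absolute percentage deviation from the median over wavelength (for SED components) or position (for averaged image slices).
\begin{verbatim}
import numpy as np

def percent_diff_from_median(results):
    # results: dict mapping code name -> 1D array on a common grid
    # (an SED component versus wavelength, or an averaged image slice).
    stack = np.vstack(list(results.values()))
    median = np.median(stack, axis=0)
    diffs = {name: 100.0 * (vals - median) / median
             for name, vals in results.items()}
    return median, diffs

def average_deviation(diffs):
    # One plausible definition of the average deviation (bar-Delta):
    # the mean of the absolute percentage differences from the median.
    return {name: np.mean(np.abs(d)) for name, d in diffs.items()}

# Hypothetical usage with placeholder arrays standing in for code outputs:
# seds = {"codeA": sed_a, "codeB": sed_b, "codeC": sed_c}
# median, diffs = percent_diff_from_median(seds)
# print(average_deviation(diffs))
\end{verbatim}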
\section{Convergence tests} \label{sec_convergence} \begin{table} \caption{DIRTY base parameter values} \label{tab_model_params_converge} \begin{tabular}{lcl} \hline\hline Name & Value & Description \\ \hline $N$ & $10^8$ & \# of photons per wavelength \\ $n_{xy}$ & 100 & \# of bins in $x$ or $y$ \\ $n_z$ & 100 & \# of bins in $z$ \\ $m_\mathrm{scat}$ & 500 & max \# of scatterings \\ $m_\mathrm{iter}$ & 4 & \# of dust heating iterations \\ $\xi_\mathrm{scat}$ & 0.5 & scattering composite bias \\ $\xi_\mathrm{emis}$ & 0.5 & emission composite bias \\ \hline \end{tabular} \end{table} In the absence of an analytic solution, another method for building confidence in the numerical solution is to perform convergence tests. Such tests also provide insight into the effects different limits place on the solution (e.g., the importance of scattered photons or dust self-heating). These tests involve changing numerical tolerances and quantifying how the solution changes. Convergence testing is most often guided by an experience-based understanding of the relative importance of the model parameters in influencing the solution. We have performed a number of quantitative convergence tests for this benchmark using the DIRTY code. Similar results would be expected for the other Monte Carlo codes and, to a lesser extent, for Ray-Tracing codes. For the convergence tests, the parameters not being tested were set to the values given in Table~\ref{tab_model_params_converge}. We have not exhaustively searched the possible parameter space in our convergence tests, due to the significant computational resources necessary for each set of parameter tests. Instead, we have fixed all the parameters, except the one being varied, to values that are expected to provide reasonable precision based on previous runs. We have performed the convergence tests for $\mbox{$\tau_z(1~\micron$)} = 1$ and 10. Practically, we found that testing the precision for $\mbox{$\tau_z(1~\micron$)} = 1$ could be done in a reasonable amount of computer time, and the results for lower optical depths will have comparable or better precision. The $\mbox{$\tau_z(1~\micron$)} = 10$ case was challenging for all codes, and the precision remains limited by computer time (i.e., runs of less than about a month single-threaded were considered reasonable within the scope of this study). \subsection{Number of photons/rays} The number of photons or rays that are computed at each wavelength ($N$) is clearly a parameter that will strongly influence the precision of the model results. This model parameter controls the precision of the scattered light calculation and of the dust emission from each grid cell. A significant portion of the computations scales directly with $N$. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_converge_nphot.pdf}} \caption{The average deviations ($\bar{\Delta}$) versus the number of photons or rays that are computed at each wavelength ($N$) are shown for the total global SEDs and Y image slices. The images are for the most challenging case of $\mbox{$\tau_z(1~\micron$)} = 10$ and illustrate the qualitative impact of increasing $N$. Plots and images are shown for $\theta = 0^\circ$, $90^\circ$, and $180^\circ$. The dotted horizontal lines give the 1\% (global SED) and 10\% (Y slices) levels. The images are plotted with the same log scaling for each wavelength and angle combination.
The results for $N = 10^8$ are not shown as $\bar{\Delta}$ is computed relative to this case where $\bar{\Delta} = 0$ by definition.} \label{fig_converge_nphot} \end{figure*} The $N$ convergence tests were computed for three angles to illustrate the impact of $N$ on backscattered photons ($0^\circ$), penetration depth into the slab ($90^\circ$), and penetration through the entire slab ($180^\circ$). Fig.~\ref{fig_converge_nphot} plots the average deviation as a function of $N$ for the total global SED and the Y slices. The average deviation is computed relative to the model run with the largest $N$ (i.e., $10^8$). For $\mbox{$\tau_z(1~\micron$)} = 1$, convergence of the global SED and component SEDs (not shown) to 1\% is achieved with $N = 10^6$ and $N = 10^7$, respectively. For $\mbox{$\tau_z(1~\micron$)} = 10$, a similar behavior is seen, except for the scattered stellar component, where convergence is not reached and the average deviation remains high ($\sim$20--80\%) for all values of $N$ tested. The convergence for the Y slices is more complicated. Convergence to 10\% is reached by $N = 2 \times 10^7$ for small angles (e.g., $0^\circ$) at all wavelengths. For $\mbox{$\tau_z(1~\micron$)} = 1$, the goal convergence is seen for the IR wavelengths around $10^6$ and for the UV/optical wavelengths at $10^7$. For $\mbox{$\tau_z(1~\micron$)} = 10$, the goal convergence is again seen for the IR wavelengths around $10^6$, but the convergence for the UV/optical wavelengths lies well beyond the number of photons tested, with an extrapolated prediction of convergence only for $N > 10^9$ photons. From these calculations, we can see that $N = 10^8$ will provide good precision for all but the UV/optical wavelengths for the $\mbox{$\tau_z(1~\micron$)} = 10$ case. \subsection{Spatial grid} Another obvious model configuration parameter to test for convergence is the number of spatial bins that describe the slab. The computation time required for a model scales with the number of bins through the need to compute the dust emission from each bin as well as to compute the radiative transfer through all the bins. The more finely the slab is divided, the more exact the solution becomes, but also the more time is required for the computations. It is useful to note that using the Monte Carlo solution technique with a uniform dust density results in the scattering component of the solution being independent of the number of spatial bins used. This is because the scattering location is computed exactly for each photon packet independently of the grid. The slab geometry naturally lends itself to simple division into bins in the $x$, $y$, and $z$ dimensions. The convergence tests were done using linear $x$ and $y$ spacing and logarithmic $z$ spacing starting from the slab face nearest the star. We found that logarithmic bin spacing in $z$ provided equivalent results to linear $z$ bin spacing but required fewer bins for the same precision. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_converge_nxy.pdf}} \caption{The average deviations ($\bar{\Delta}$) versus $n_{xy}$ are shown for the total global SEDs and Y image slices for $\mbox{$\tau_z(1~\micron$)} = 1$ and $10$, and $\theta = 0^\circ$ and $180^\circ$. The images are for the $\mbox{$\tau_z(1~\micron$)} = 1$ and $\theta = 0^\circ$ case and illustrate the qualitative impact of increasing $n_{xy}$. The images are log scaled over the same range. The image Y slice results are shown only for $\theta = 0^\circ$ as the results for $\theta = 180^\circ$ are very similar.
Only the image slices at two diagnostic infrared wavelengths probing the dust emission are shown as the ultraviolet and optical scattered light images are not sensitive to $n_{xy}$ for Monte Carlo codes. The dashed and dotted horizontal lines give the 1\% (global SED) and 10\% (Y slice) levels. The results for $n_{xy} = 200$ ($\mbox{$\tau_z(1~\micron$)} \leq 1$) and $n_{xy} = 100$ ($\mbox{$\tau_z(1~\micron$)} = 10$) are not shown as $\bar{\Delta}$ is computed relative to these cases where $\bar{\Delta} = 0$ by definition.} \label{fig_converge_nxy} \end{figure*} The convergence tests for the number of $x$ and $y$ bins ($n_x = n_y \equiv n_{xy}$) were computed for the $\theta = 0^\circ$ and $180^\circ$ cases as these two viewing angles are the most sensitive to $n_{xy}$. Fig.~\ref{fig_converge_nxy} plots the average deviation as a function of $n_{xy}$ for the total global SEDs and Y slices. The average deviation is computed relative to the model run with the largest $n_{xy}$. Convergence of the global SEDs to 1\% is achieved with $n_{xy} = 2$ for the total and all the components. Similarly, 10\% convergence for the image slices is achieved by $n_{xy} = 10$. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_converge_nz.pdf}} \caption{The average deviations ($\bar{\Delta}$) versus $n_{z}$ are shown for the total global SEDs and Y image slices for $\mbox{$\tau_z(1~\micron$)} = 1$ and $10$ and $\theta = 90^\circ$. The images are for the $\mbox{$\tau_z(1~\micron$)} = 1$ and $\theta = 90^\circ$ case and illustrate the qualitative impact of increasing $n_{z}$. The images are log scaled over the same range. The DIRTY spatial grid uses a log spacing along the $z$ axis and this is reflected in the images shown. Only the image slices at two diagnostic infrared wavelengths probing the dust emission are shown as the ultraviolet and optical scattered light images are not sensitive to $n_{z}$ for Monte Carlo codes. The dashed and dotted horizontal lines give the 1\% (global SED) and 10\% (Y slice) levels. The results for $n_z = 200$ are not shown as $\bar{\Delta}$ is computed relative to this case where $\bar{\Delta} = 0$ by definition.} \label{fig_converge_nz} \end{figure*} The $n_z$ convergence tests were computed for $\theta = 90^\circ$ as this viewing angle is the most sensitive to $n_z$. Fig.~\ref{fig_converge_nz} plots the average deviation as a function of $n_{z}$ for the total global SEDs and Y slices, where the average deviation is computed relative to the model run with the largest $n_{z}$. For $\mbox{$\tau_z(1~\micron$)} = 1$, convergence of the global SEDs to 1\% is achieved with $n_{z} \sim 4$. Convergence to 1\% for the components of the global SEDs is achieved for $\mbox{$\tau_z(1~\micron$)} = 10$ at $n_z \sim 30$. The image slice convergence to 10\% is achieved at $n_z \sim 20$ for $\mbox{$\tau_z(1~\micron$)} = 1$ and $n_z \sim 35$ for $\mbox{$\tau_z(1~\micron$)} = 10$. \subsection{Number of scatterings} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_converge_mscat.pdf}} \caption{The $\lambda = 0.15$ (left), $35.11$ (center) and $151.99~\mbox{$\mu$m}$ (right) Y slices for $\theta = 90^\circ$ and $\mbox{$\tau_z(1~\micron$)} = 1$ and 10 are plotted for a range of maximum allowed number of scatterings ($m_\mathrm{scat}$).} \label{fig_converge_mscat} \end{figure*} Calculation of the scattered photons is one of the challenging parts of the radiative transfer solution, especially in light of the importance of multiply scattered photons at higher optical depths.
In general, RT codes set a ``maximum number of scatterings'' to avoid photon packets getting ``stuck'', especially in very high optical depth environments. Of course, if that maximum value is set below the typical number of scatterings a photon might undergo, it can lead to erroneous results for the scattered signal. To quantify the importance of multiple scattering, we carried out convergence tests at $\theta$ values of $0^\circ$, $90^\circ$, and $180^\circ$. For $\mbox{$\tau_z(1~\micron$)} \leq 1$, the number of scatterings needed to achieve the goal precisions is on the order of 5. For $\mbox{$\tau_z(1~\micron$)} = 10$, approximately 20 scatterings are needed for the goal precisions, with this requirement driven mainly by the convergence at the shortest wavelengths. We illustrate the importance of multiple scattering for the $\mbox{$\tau_z(1~\micron$)} = 1$ and 10 cases in Fig.~\ref{fig_converge_mscat}. This figure gives the Y slices and percentage deviations from the $m_\mathrm{scat} = 500$ case for $\lambda = 0.15$, $35.11$, and $151.99$~\mbox{$\mu$m}. We only show the $90^\circ$ case as it clearly illustrates the changing importance of multiple scattering as a function of penetration depth in the slab. Multiple scattering is important for both the direct scattering at $\lambda = 0.15$~\mbox{$\mu$m}\ and the dust emission at both IR wavelengths. The dependence of the IR emission on the number of scatterings clearly shows the importance of dust heating due to scattered photons. As expected, the dependence on multiple scattering is largest at higher optical depths. \subsection{Self-heating iterations} \label{sec_self_heating} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_converge_miter.pdf}} \caption{The $\lambda = 35.11$ (left) and $151.99~\mbox{$\mu$m}$ (right) Y slices for $\mbox{$\tau_z(1~\micron$)} = 10$ and $\theta = 90^\circ$ are plotted for a range of dust self-heating iterations ($m_\mathrm{iter}$).} \label{fig_converge_miter} \end{figure*} The thermalized radiation emitted by dust that is heated by the primary radiation source can in turn be absorbed by other dust grains in the model space, leading to dust self-heating. This self-heating increases in importance as the optical depth increases, since, depending on the location, the dominant radiation source may be the re-emitted dust emission. Most RT codes account for dust self-heating by iterating between the dust emission and the dust absorption and scattering, stopping when a preset energy convergence is achieved. We carried out convergence tests to quantify the importance of dust self-heating. For $\mbox{$\tau_z(1~\micron$)} \leq 1$, both the global SED and Y-slice comparisons show that no iterations are necessary to meet our precision goals; the effect of dust self-heating is small enough that neglecting it does not change the precision of the resultant model. This is not the case for $\mbox{$\tau_z(1~\micron$)} = 10$, where neglecting dust self-heating results in errors larger than the goal precisions in the global SEDs for all cases and in the Y slices for the $\theta = 90^\circ$ case. The $\theta = 90^\circ$ case clearly shows the impact of dust self-heating, and we illustrate this in Fig.~\ref{fig_converge_miter}. This figure gives the Y slices for a range of dust self-heating iterations starting with no self-heating ($m_\mathrm{iter} = 0$).
The impact of dust self-heating is to raise the emission in the front of the slab at shorter wavelengths (e.g., $\lambda = 35.11~\mbox{$\mu$m}$) and, more dramatically, over most of the slab at longer wavelengths (e.g., $\lambda = 151.99~\mbox{$\mu$m}$). Fortunately, only a single dust self-heating iteration is needed, with additional self-heating iterations providing only small gains. \subsection{Scattering at high optical depths} \label{sec_high_tau_scat} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{dirty_newforcebiasxi_converge_t1e1_w000_15.pdf}} \resizebox{\hsize}{!}{\includegraphics{dirty_newforcebiasxi_converge_t1e1_w151_99.pdf}} \caption{Images in the UV and IR are shown for $\mbox{$\tau_z(1~\micron$)} = 10$, $\theta = 0^\circ$, $90^\circ$, and $180^\circ$, and a range of $\xi_\mathrm{scat}$ values. The images are log scaled and share the same scaling at each unique combination of $\lambda$ and $\theta$.} \label{fig_xiscat} \end{figure*} Radiative transfer at high optical depths through dust is challenging for most numerical solution techniques. Approximations are possible, such as the diffusion approximation \citep{Kuiper2010}, but they impose real limitations on the accuracy of the resulting calculations. Motivated by the work on this benchmark, a new technique based on composite biasing was introduced to the Monte Carlo 3D dust RT community by \citet{Baes16}. The composite bias technique provides a way to sample two probability distributions while controlling the amplification of the resulting photon weight. Basically, the site of the next scattering is chosen from one of two different distributions, with the frequency with which each distribution is used controlled by the parameter $\xi_\mathrm{scat}$, which varies between 0 and 1. The first distribution is the standard $e^{-\tau}$ and the second is a much flatter distribution (e.g., a uniform distribution in $\tau$). For example, if $\xi_\mathrm{scat} = 0.5$, then half of the time the first distribution is used and the other half the second is used. The weight of the photon is modified to account for the difference of the composite of the two distributions from the standard $e^{-\tau}$ distribution. The $\xi_\mathrm{scat} = 0$ case corresponds to the standard $e^{-\tau}$ distribution. The need for such a composite technique can be illustrated by considering the $\mbox{$\tau_z(1~\micron$)} = 10$ and $\theta = 90^\circ$ case. For the standard scattering distribution ($\xi_\mathrm{scat} = 0.0$), the back of the slab was approximately a factor of $10^{-100}$ fainter than the front of the slab for the $0.15~\mbox{$\mu$m}$ image. The expected difference in the scattered light component between the front and back of the slab can be estimated with the simple approximation that the intensity from singly scattered photons should be the albedo multiplied by $e^{-\tau}$. Assuming the scattered light at the front of the slab has $\tau = 0$ and the back of the slab has $\tau \sim 79$ (at $0.15~\mbox{$\mu$m}$ for $\mbox{$\tau_z(1~\micron$)} = 10$), then the ratio should be $e^{-79} = 1.8 \times 10^{-35}$. This is many orders of magnitude higher than that seen using the standard scattering prescription. Fig.~\ref{fig_xiscat} illustrates the results for a range of $\xi_\mathrm{scat}$ at two representative wavelengths for the $\mbox{$\tau_z(1~\micron$)} = 10$ model. For $\xi_\mathrm{scat} = 0.0$, it shows that most of the scattered photons are missing at the back of the slab.
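To make the sampling rule concrete, the following minimal Python sketch (our illustration, not code from any of the participating models) shows one way to implement the composite biasing of the interaction optical depth along a ray. It assumes forced interaction within the path optical depth and a uniform distribution in $\tau$ as the flat component; the function name and parameters are ours.
\begin{verbatim}
import numpy as np

def sample_interaction_tau(tau_max, xi_scat=0.5, rng=None):
    # Composite biasing in the spirit of Baes et al. (2016): mix the
    # physical (truncated exponential) distribution with a flat
    # distribution in tau, then correct the photon packet weight.
    rng = np.random.default_rng() if rng is None else rng
    norm = 1.0 - np.exp(-tau_max)            # truncation normalization
    if rng.random() > xi_scat:
        # sample from the truncated exponential, p(tau) = exp(-tau)/norm
        tau = -np.log(1.0 - rng.random() * norm)
    else:
        # sample from the flat component, q(tau) = 1/tau_max
        tau = rng.random() * tau_max
    p_true = np.exp(-tau) / norm
    p_flat = 1.0 / tau_max
    p_comp = (1.0 - xi_scat) * p_true + xi_scat * p_flat
    weight = p_true / p_comp                 # bounded by 1/(1 - xi_scat)
    return tau, weight
\end{verbatim}
With $\xi_\mathrm{scat} = 0$ the sketch reduces to the standard exponential sampling, while nonzero values sample interactions deep in the slab with correspondingly small weights; the weight amplification is bounded by $1/(1-\xi_\mathrm{scat})$, i.e. by a factor of two for $\xi_\mathrm{scat} = 0.5$.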
These scattered photons are recovered even for small values of $\xi_\mathrm{scat}$, as the interaction site for scattering is sampled from the composite function, providing reasonable sampling at low {\em and} high optical depths. At $\xi_\mathrm{scat} = 1.0$, a significant amount of non-Gaussian noise is seen at $\lambda = 151.99~\mbox{$\mu$m}$ due to the extreme amplification of a small number of photons in the calculation. Setting $\xi_\mathrm{scat} = 0.5$ provides a good compromise between sampling the low probability scattering events and keeping the amplification of the photon weight always below a factor of two \citep{Baes16}. \section{Discussion} For the $\mbox{$\tau_z(1~\micron$)} \leq 1$ cases, the results from the different codes agree within 0.3--2.9\% for the global SEDs and within 3--11\% for the Y slices. These are near or below the goal precisions for this benchmark. For the $\mbox{$\tau_z(1~\micron$)} = 10$ case, the results from the different codes agree within 1.7--4.0\% for the global SEDs, except for the scattered stellar component, where the disagreement is up to 58\%. The infrared Y slices agree within the goal precisions except for the sto case at 23.10~\mbox{$\mu$m}, where the disagreement is 20\%. The optical and UV Y slice deviations are very large, well beyond the goal precisions. These disagreements in the scattered flux are due to the continued challenge of performing accurate calculations at high optical depths. The most diagnostic viewing angle for these calculations is $\theta = 180^\circ$, and it is striking that none of the seven codes produces equivalent results, with systematic differences of up to a factor of $10^4$. While the disagreements are large for the scattered component at high optical depths, the scattered flux at high optical depths is not important as a heating term for the dust emission, as evidenced by the agreement between codes for the IR emission SED and images (Fig.~\ref{fig_eff_demis_comp}). The large differences for the mid-IR wavelengths for DART-Ray and TRADING are due to those codes not calculating the scattered component of the dust emission radiative transfer. In addition, some of the differences in the IR emission for DART-Ray are due to the lack of inclusion of dust self-heating. The lack of agreement or convergence for the scattered component at high optical depths for dust RT codes is shown quantitatively for the first time in this work. While it has been known for some time that high optical depths are challenging for dust RT codes, we have shown here that the main issue is with the scattered component and not the directly absorbed component. Previous benchmarks \citep{Pascucci04, Pinte09} provided tests for high optical depths, but due to the 2D disk geometry used, they do not explicitly test the scattered component at such optical depths. Further, the 2D disk geometry in previous benchmarks had very high optical depths along the mid-plane of the disk, but very low optical depths along the rotation axis of the system. Thus, the global scattered flux from these benchmarks is dominated by scattering at low optical depths, not at the high disk-plane optical depths. The slab benchmark $\mbox{$\tau_z(1~\micron$)} = 10$ case with $\theta = 180^\circ$ explicitly tests the scattered flux at very high optical depths as there are no paths for the scattered photons to the observer that do not go through at least the $\tau_z$ optical depth.
Many of the codes in this benchmark have been run for the previous benchmarks and have given results that are consistent with the literature \citep[e.g.,][]{Bianchi08, Robitaille11, Peest17}. For the reasons already stated, there is no conflict between reproducing the previous benchmarks at high optical depths and the same codes not agreeing for the scattered component at high optical depths for the benchmark presented in this paper. For the dust emission approximations, the codes agree best for the effective grain approximation (eff), with somewhat lower precision for the equilibrium-only approximation (equ), and worst for the full solution including stochastic emission (sto). The larger disagreement for the equ results may be due to the number of dust grain size bins adopted by each model and how the dust properties were averaged or interpolated for these bins. The larger disagreement for the sto results was expected given the different solution techniques used in the codes for the stochastically heated grains \citep{Camps15b}. The convergence tests provide insight into the potential origin of the differences between codes when combined with Table~\ref{tab_model_params}. The number of photons each model was run with ($N$; Table~\ref{tab_model_params}) proved sufficient to reach the desired precision in all cases except for stellar scattering for the $\mbox{$\tau_z(1~\micron$)} = 10$ case. Convergence testing indicates that, to reach the desired precision, the number of photon packets would need to exceed $10^{9}$ for $\mbox{$\tau_z(1~\micron$)} = 10$. This is a potential cause of some of the large differences between models at UV/optical wavelengths at $\mbox{$\tau_z(1~\micron$)} = 10$, since some models were run with $10^{8}$ photon packets. But this is not the only cause, as targeted additional tests of just the UV scattering for the $\mbox{$\tau_z(1~\micron$)} = 10$ case with many more than $10^9$ photons do not show convergence. The number of $x$ and $y$ bins ($n_{xy}$) is sufficient in all the model runs. The number of $z$ bins ($n_z$) is sufficient for all but the DART-Ray results for the $\mbox{$\tau_z(1~\micron$)} = 10$ case. The maximum number of scatterings needed ($m_\mathrm{scat}$) is of the order of 20, and most codes computed many more scatterings, with the exceptions of DART-Ray and (marginally) SOC. The maximum number of dust heating iterations ($m_\mathrm{iter}$) is sufficient for all the codes except for DART-Ray for the $\mbox{$\tau_z(1~\micron$)} = 10$ case. The inclusion of six Monte Carlo-based radiative transfer codes provided a wealth of comparisons between codes based on the same technique. In addition, the inclusion of the DART-Ray code, which is based on the alternative technique of Ray-Tracing, allowed for comparisons to be made between the two techniques. This provided ample opportunity to find and remove bugs in all the codes. The solutions based on Monte Carlo and Ray-Tracing techniques were consistent, with the notable exception of the scattered component in the $\mbox{$\tau_z(1~\micron$)} = 10$ case. \subsection{Lessons learned} It was found to be critical to set up, specify, and define the parameters as carefully as possible in order to ensure that all the codes perform the same calculation. One area that was found to be important was to clearly specify the wavelength grid and ensure that each code performed calculations at the defined wavelengths.
Additionally, the normalization of the slab optical depth was initially at 0.55~\mbox{$\mu$m}, but was changed to be exactly at one of the specified grid points (1~\mbox{$\mu$m}) to avoid, as much as possible, interpolation errors in the normalization. It was also critical to get all the results of the models into the same format and orientations, something that took a surprising amount of time due to the different assumptions made in the different codes. We also spent significant time establishing a common terminology for different parts of the models and benchmark. Another lesson learned was that the initial comparisons revealed minor bugs and/or different conventions. Most of the participating codes made improvements as a result. These improvements included removing minor bugs that did not significantly change the results but did improve the ability to compare different codes. Some of these bugs were revealed due to the large parameter space explored by this benchmark, often beyond the range that had been tested in the codes previously. A major lesson learned was that the codes had significant difficulty with the scattered component for the $\mbox{$\tau_z(1~\micron$)} = 10$ case. This case pushed the codes into an area where dust scattering is critical, for both the stellar and the dust emission photons. The initial results revealed differences of approximately a factor of $10^{60}$ at the back of the slab in the stellar scattered light for the $\lambda = 0.15~\mbox{$\mu$m}$ Y slice (\S\ref{sec_high_tau_scat}). In particular, the Ray-Tracing (DART-Ray) results were much higher than many of the Monte Carlo results. As Monte Carlo codes have a known limitation in not fully probing the high optical depths, this was not particularly surprising. Test cases with some of the Monte Carlo codes at this wavelength with more photons did show smaller differences, but these were still very large. Additional test cases were run allowing for larger numbers of scatterings and these also showed smaller differences, but again these were still relatively large. These differences motivated the inclusion of the composite biasing technique as part of all of the Monte Carlo codes to better probe the high optical depth scatterings \citep{Baes16}. While the codes still have significant differences for this case and wavelength, they are much smaller, at only a factor of $\sim 10^4$. Clearly more work is needed to understand the origin of these differences. This work has started and has indicated that the origin may be related to very low probability multiple scattering events that dominate the scattered light images at these high optical depths. While under active investigation, the solution to this issue is beyond the scope of this paper; however, this problem did reveal that there may be other very low probability parts of parameter space that are not probed well. An example of this is the dust emission when it varies strongly over the model grid. This is the case for mid-IR wavelengths for $\mbox{$\tau_z(1~\micron$)} = 10$, where the dust emission at the front of the slab is many orders of magnitude larger than that from the back of the slab. Using the dust emission as the probability for emitting a mid-IR photon can lead to very few photons being emitted in the back of the slab. Of course, increasing the number of photons in a run will alleviate this issue, but at the expense of longer run times.
The composite biasing technique can be used to provide better sampling of the spatial dust emission by, for example, emitting photons one half of the time sampled from the dust emission and one half of the time sampled uniformly in the slab. This was implemented in the DIRTY code, producing much better sampled IR images for the same number of photons emitted. Another solution may be to use the \citet{Bjorkman01} instantaneous emission technique, where the spatial locations of the dust emission photons are based on the locations of dust absorptions, but possibly at the expense of longer run times \citep{Baes05, Chakrabarti09}. One minor lesson learned was that it would be useful to have the convergence tests run prior to having all the codes run the benchmark. The convergence tests for this benchmark were generally run after the comparison of results from all the codes was well underway. While none of the convergence tests revealed that the codes needed to rerun the benchmarks, we did discover that the BASIC wavelength grid for the eff case was close to being too coarse for our goal precision. The impact of the wavelength grid resolution appeared in the dust emission spectrum, with higher-resolution grids showing slightly hotter dust. Convergence tests on the wavelength grid resolution were done with multiple codes for the eff case, showing that the global SED convergence was near 1\% for the adopted BASIC grid for the $\mbox{$\tau_z(1~\micron$)} = 0.01$ case. The global SED convergence precision improved for higher \mbox{$\tau_z(1~\micron$)}\ values. While this is within the goal precision, in hindsight we likely would have adopted a finer resolution grid to ensure that this issue was well below the goal precision of 1\%. We all learned that there is no ``easy'' benchmark. The expectation by many of us was that the slab benchmark would be straightforward and take relatively little time to complete. This was not the case, and many of us found this geometrically simple benchmark to be deceptively complex. This benchmark was useful not only for debugging and refining our codes, but also for deepening our understanding of dust radiative transfer in general. Many of the lessons learned during the work on this benchmark should be applicable to other 3D dust radiative transfer geometries. The lack of agreement between the codes for the scattered light at high optical depths ($\tau > 10$) calls for continued caution in interpreting results obtained from any code. The role of the convergence tests in illuminating the importance of different parameters for the precision of the solution for this benchmark illustrates that such tests should be done for all geometries. Convergence tests can be done by anyone using a dust radiative transfer code (not just the coders), and will provide confidence in the results as well as a deeper understanding of the complex dust radiative transfer problem. \section{Summary} We present the first 3D dust radiative transfer benchmark. This benchmark is composed of a rectangular slab of constant-density dust externally illuminated by a hot, UV-bright star. The cases in this benchmark include optical depths from $\mbox{$\tau_z(1~\micron$)} = 0.01$--$10$ and three different dust emission assumptions (effective grain, equilibrium-only grains, and the full solution including stochastically heated grains). Results from seven codes are presented, six based on Monte Carlo techniques and one based on Ray-Tracing.
The results are given as global SEDs and images at selected wavelengths for a range of viewing angles. The results are in good agreement for $\mbox{$\tau_z(1~\micron$)} \leq 1$ with precisions of $\sim$1\% for the global SEDs and $\leq 10\%$ for slices through the images. The results are in good agreement for the $\mbox{$\tau_z(1~\micron$)} = 10$ case, except for the stellar scattered component. The setup of this benchmark to explicitly probe the components of the dust radiative transfer problem allowed us to quantify the lack of agreement for the scattered component at high optical depths. This work provides a benchmark for future 3D RT codes and illustrates remaining challenges for 3D dust RT in the very optically thick regime. \begin{acknowledgements} We thank the referee for insightful comments that improved the paper. This research made use of matplotlib, a Python library for publication quality graphics \citep{Hunter07}. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{Astropy13}. RK acknowledges financial support within the Emmy Noether research group on ``Accretion Flows and Feedback in Realistic Models of Massive Star-formation'' funded by the German Research Foundation under grant no.~KU 2849/3-1. MJ acknowledges the support of the Academy of Finland Grant No. 285769. MB and SB acknowledge support from the European Research Council (ERC) in the form of the FP7 project DustPedia (P.I. Jonathan Davies, proposal 606824). TL acknowledges the support from the Swedish National Space Board (SNSB) as well as the Chalmers Centre for Computational Science and Engineering (C3SE) and the Swedish National Infrastructure for Computing (SNIC) for providing computational resources. G.N. acknowledges support by Leverhulme Trust research project grant RPG-2013-418. The development of DART-Ray was supported by the UK Science and Technology Facilities Council (STFC; grant ST/G002681/1). KG acknowledges the support of Space Telescope Science Institute. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} In recent times, our knowledge of non-perturbative string theory has greatly progressed. A key element in this progress has been our ability to identify certain quantum mechanical (or field-theoretical) systems which describe non-perturbative states in string theory. More precisely, BPS states in a given string model are often put in one-to-one correspondence with the ground states of some associated supersymmetric system. Two examples are paramount: the quantum mechanical reduction of a 6-d, N=1 supersymmetric (8 real supercharges) field theory with one Abelian vector multiplet and a charged hypermultiplet, and the quantum mechanical reduction of a d=10, N=1 supersymmetric (16 real supercharges) $SU(N)$ gauge theory. The first system describes essentially an H-monopole~\cite{W1} of the SO(32) superstring, while the second describes $N$ D0 branes. In both cases string duality arguments predict the existence of exactly one bound state at threshold for these systems, since string dualities map the BPS states they describe into Kaluza-Klein modes of the graviton. Verifying the existence of these ground states at threshold directly in the QM model is thus a powerful check of the correctness of the duality hypothesis; the existence of a normalizable bound state for any $N$ in the D0 brane system is also essential for the consistency of the M(atrix) theory proposal of ref.~\cite{BFSS}. The problem in counting the ground states arises from the fact that they are at threshold~\footnotemark \footnotetext{For the D0 brane, this has been proven rigorously in~\cite{dWN1,dWN2}.}. This is because in the QM models described above, the bosonic coordinates take values in a non-compact manifold, and the potential energy has zero-energy valleys extending to infinity. A possible way of counting the number of normalizable ground states (better, the difference between the multiplicity of bosonic and fermionic ground states), without explicitly solving the Schr\"odinger equations, is to use the Witten index $W[\beta]={\rm Tr}\, (-)^F \exp(-\beta H)$~\cite{W2}, and to notice that when the low-energy spectrum is not too pathological, and ${\rm tr}\,(-)^F\exp(-\beta H)$ is trace-class, the following identity holds: $n_B-n_F=\lim_{\beta\rightarrow \infty}W[\beta]$. Here, ${\rm Tr}\,$ denotes the trace over the whole Hilbert space while ${\rm tr}\,$ is the trace over the (finite-dimensional) fermionic Hilbert space. Moreover, when the Hamiltonian of a supersymmetric QM has a continuous spectrum without a gap at zero energy, the usual arguments that ensure the invariance of the Witten index~\cite{W2} have to be carefully re-examined. In refs.~\cite{SS1,SS2}, such an analysis has been performed for the H-monopole, and for the D0-brane Hamiltonian with $N=2$ (see also~\cite{Y}). The result of refs.~\cite{SS1,SS2,Y} is that in both cases $\lim_{\beta\rightarrow\infty}W[\beta]=1$. In this paper, using a different method, we find that $n_B-n_F=1$ in the H-monopole system. We also find that the same result holds for D0 branes in 9 dimensions, provided that three-dimensional D0 branes have no bound states\footnotemark. \footnotetext{In ref.~\cite{FH} it was proven rigorously that no bound states exist for D0 branes in two dimensions. An argument due to S. Shenker shows that a similar result should also hold in three dimensions; see also~\cite{dW}.} Our method complements the arguments of refs.~\cite{SS1,SS2} since it proves the existence of D0 brane bound states for $N$ any prime number.
Moreover, it appears to be straightforwardly generalizable to other, more complicated QM systems relevant to non-perturbative string theory. One chief example is the QM model of ref.~\cite{KT}, which is the reduction of a 4-d N=1 supersymmetric theory, and describes a configuration of four three-branes corresponding to a BPS black hole with nonzero entropy. To completely check the prediction of duality, one should find a vanishing theorem proving the uniqueness of the bound states. Unfortunately, such a theorem does not exist yet in the literature. On the other hand, progress towards proving the {\em absence} of D0 brane bound states in three dimensions was recently made in~\cite{H1}. The main idea of this paper is the following. In a supersymmetric QM with at least two (real) supercharges, there exists a {\em real} superpotential function $W$~\cite{F}. A change in the superpotential, $w$, induces a change in the supercharges as follows: \begin{equation} Q\rightarrow e^{-w}Qe^{w}\equiv Q_{w}, \;\;\; \bar{Q}\rightarrow e^{w}\bar{Q}e^{-w}\equiv \bar{Q}_{w}. \eeq{1} If the bosonic fields take values in a compact space, or if $w /W$ goes to zero at infinity in field space, the $L_2$ cohomologies of $Q$ and $Q_{w}$ coincide, and therefore the number of normalizable ground states of the QM with superpotential $W+w$ is the same as for superpotential $W$~\cite{W2}. When the field space is non-compact, and the potential obtained from $W$\footnotemark has zero-energy valleys, \footnotetext{Together with appropriate D-terms~\cite{F}.} the argument does not work, generically: the local cohomologies of $Q$ and $Q_{w}$ coincide, but the $L_2$ ones do not. Typically, one of the two cohomologies is normalizable while the other is not. In this paper, we show that, nevertheless, in the case of the H-monopole and D0 brane, it is possible to find an appropriate $w$ which acts as a ``dam'' for the valleys, lifting the flat directions in the potential, while maintaining the normalizability properties of the cohomology. This special perturbation allows us to reduce the computation of the index $n_B-n_F$ to computing the Witten index of a system with a mass gap and isolated minima in the scalar potential. Such an index can then be computed in the semiclassical approximation. The paper is organized as follows. In Section 2, we describe more precisely the cohomology argument explained above. In Section 3, we find the cohomology-preserving perturbation $w$ for the H-monopole QM, while the perturbation for the D0 brane QM is discussed in Section 4, together with an argument showing that no bound states exist in $d<5$. Section 5 contains our conclusions and a discussion of possible generalizations. Some technical results are collected in the Appendices. Appendix A contains a discussion about the validity of the adiabatic (Born-Oppenheimer) approximation used in this paper, and an argument to justify it. Appendix B discusses some aspects of the semiclassical approximation in supersymmetric QM, which are used in Section 4. The eigenvalues of the fermion mass matrix of the perturbed D0 brane at the stationary point are computed in Appendix C, together with some relevant properties of the Fock vacuum associated to that mass matrix. \section{The Cohomology Argument} In this section we provide some criteria for the existence of normalizable supersymmetric ground states of Hamiltonians with flat directions.
These criteria are based on the properties of the ground states of the perturbed Hamiltonian $H_w\approx\ha\{Q_w,\bar{Q}_w\}$. As mentioned above, despite the fact that the local cohomologies of the operators $Q$ and $Q_w$ are equivalent, the mere existence of normalizable ground states of the Hamiltonian $H_w$ does not prove that they exist also in the original Hamiltonian $H_0$. However, they do exist if additional properties are fulfilled. We summarize our method in the following \newtheorem{guess}{Lemma} \begin{guess} There exists a normalizable ground state of the Hamiltonian $H_0$ if: \begin{description} \item[i)] The Witten index of the Hamiltonian $H_w$ is not zero. \item[ii)] $\psi_w^{\pm}\equiv e^{\pm w}\phi_w$ are normalizable, where $\phi_w$ is a normalizable ground state of the perturbed Hamiltonian $H_w$. \end{description} \end{guess} Condition (i) ensures the existence of a normalizable solution of the perturbed Hamiltonian $H_w$. The functions $\psi^\pm_w$ are cohomology representatives of the operators $Q, \bar{Q}$, since they obey: \begin{equation} Q\psi_w^+=\bar{Q}\psi_w^-=0. \eeq{madd1} On cohomological grounds we may write: \begin{eqnarray} \psi_w^+&=&\al+\gamma,\;\;\; \psi_w^-=\bar{\al}+\bar{\gamma},\nonumber \\ \gamma &\in& \overline{{\cal I}\,Q}, \;\;\;\alpha\notin\overline{{\cal I}\,Q}, \nonumber \\ \bar{\gamma}&\in& \overline{{\cal I}\,\bar{Q}},\;\;\; \bar{\alpha}\notin \overline{{\cal I}\,\bar{Q}}. \eea{madd8} Here $\overline{{\cal I}\,Q}$ is the closure in $L_2$ of the image of $Q$ etc. If both $\psi^+_w$ and $\psi^-_w$ are $L_2$, then so are $\alpha$ and $\bar{\alpha}$. By the definition of $\gamma$ and $\bar{\gamma}$, there exist two sequences of $L_2$ states, $\beta_n$, $\bar{\beta}_n$, such that \begin{equation} \gamma =\lim_{n\rightarrow \infty}Q\beta_n, \;\;\; \bar{\gamma}=\lim_{n\rightarrow \infty}\bar{Q}\bar{\beta}_n . \eeq{clos} By computing the scalar product $(\psi^+_w,\psi^-_w)$ we find \begin{equation} 1=(\psi_w^+,\psi_w^-)=\lim_{n\rightarrow \infty}(\alpha + Q\beta_n,\psi^-_w)= (\alpha,\bar{\alpha}) + \lim_{n\rightarrow \infty} (\beta_n, \bar{Q}\psi^-_w)=(\alpha,\bar{\alpha}). \eeq{co1} This equation, which is well defined when $\alpha$, $\beta_n$ etc. are in $L_2$, implies that $\alpha$ and $\bar\alpha$ cannot be zero. Notice that this result follows only if {\em both} $\psi^\pm_w$ are $L_2$. The non-triviality of the cohomologies ensures the existence of a normalizable ground state of $H_0$. This is a well-known fact in mathematics; for completeness, we give here below a short proof of this result. Define two orthogonal projectors, $P_n$, $R_n$, such that $P_n + R_n =1$. $P_n$ is the spectral projector associated to $H_0$, projecting over eigenstates with energy $E\geq 1/n$, while $R_n$ projects over states with energy less than $1/n$. For all $n > 0$, $P_n\alpha$ can be written as $Q\beta_n$ by defining $\beta_n=\bar{Q}H^{-1}P_n \alpha$ (see also ref.~\cite{W2}); $\beta_n$ is $L_2$ since the operator $\bar{Q}H^{-1}P_n$ is bounded. Analogously, $P_n\bar{\alpha}$ is $\bar{Q}$ exact. Moreover, since $\alpha$ is $L_2$, and ${\cal I}R_n \subset {\cal I}R_m$ for $n>m$, the sequence $\alpha_n\equiv R_n\alpha$ is Cauchy; therefore, it converges to a state $\tilde{\alpha}=\lim_{n\rightarrow \infty}\alpha_n$ which, by construction, is normalizable and obeys $H_0\tilde{\alpha}=0$. The state $\tilde{\alpha}$ cannot be zero because of eq.~(\ref{co1}) and the definition of $\alpha$ given in eq.~(\ref{madd8}). \noindent Q.E.D.
As we just noticed, if only one of the $\psi_w^\pm$ is normalizable, it may happen that $(Q\beta_n, \psi^-_w)\neq (\beta_n,\bar{Q}\psi^-_w)=0$, so that the cohomology may still be empty. To illustrate this phenomenon, let us consider a simple example: a free supersymmetric particle in one dimension, which is known to have no normalizable ground states. Take a perturbing superpotential $w=\ha x^2$. The perturbed problem has a zero energy normalizable solution $\phi_w=\exp(-\ha x^2)$, so that $\psi_w^-$ is normalizable, while $\psi_w^+$ is not. What actually happens in this example is that $\psi_w^-$ is cohomologically trivial, i.e. there exists $\chi$ s.t. $\psi_w^-=Q\chi$. Note that the argument in the lemma cannot be reversed: an infinite norm of the $e^{\pm w}\phi_w$ does not imply non-normalizability for $\phi_0$. The Hamiltonian $H_w$ should be chosen in such a way as to avoid the typical problems associated with flat directions. Particularly simple for the analysis is the case when the potential $V_w$ of the supersymmetric Hamiltonian $H_w$ has only one isolated minimum. Then, since the Witten index is nonzero, this system is guaranteed to have (at least) a normalizable ground state of zero energy, i.e. there exists (at least) a state $\phi_w$ s.t. $H_w\phi_w=0$, and $|\!|\phi_w|\!|<\infty$. Given that the perturbing potential is a non-singular function, and that the function $\phi_w$ is square integrable, the normalizability of the $\psi^{\pm}_w$ has to be checked only at infinity, since the integral over a ball of radius $R_0$ yields: \begin{equation} |\!|\psi^{\pm}_w|\!|^2_{\downarrow R_0}\equiv \int_{|{\bf R}|<R_0}\,d{\bf R} \lf|\psi^{\pm}_w\rt|^2\leq\exp\lf(2\,\mbox{sup}_{|\bf R|<R_0}\,w\rt)\, |\!|\phi_w|\!|^2_{\downarrow R_0}<\infty. \eeq{madd2} \section{The H-Monopole QM} The quantum mechanical system that describes the H-monopoles (better, the ``missing states'' in the problem~\cite{W1}) can be thought of as the dimensional reduction of an N=2 field theory in four dimensions, whose degrees of freedom are an Abelian vector multiplet $V$, a neutral scalar multiplet $X$, and two charged chiral multiplets, $\Phi^\pm$, of charge $\pm 1$ with respect to the $U(1)$ gauge group. The kinetic terms are canonical, while the superpotential is: \begin{equation} W=\Phi^+\Phi^- X. \eeq{m1} The scalar potential $V$ is given by the quantum-mechanical reduction of the standard N=1 formula, namely: \begin{equation} V=W_i W^*_i + {1\over 4}D^2 + (|\phi^+|^2 + |\phi^-|^2)A^2_\mu. \eeq{m2} Here, $\phi^\pm$ etc. denote the scalars of the chiral multiplets; $W_i$ denotes the derivative of the superpotential with respect to the $i$-th scalar, and $A_{\mu}$, $\mu=1,2,3$ is the component of the vector in the multiplet $V$ along the direction $\mu$. The D-terms are given by \begin{equation} D= |\phi^+|^2-|\phi^-|^2. \eeq{m3} Following our general strategy, we add to this superpotential a perturbation $w$, and we proceed to show that a) with the new superpotential $W+w$, there exists a unique point where the (non-negative) scalar potential vanishes, with non-vanishing Hessian, and thus its index $n_B-n_F=1$; b) the perturbation lifts the moduli (valleys) $dW=0$ and does not introduce new valleys; c) given a normalizable ground state of the perturbed problem, $\phi_w$, the states $\psi^\pm=e^{\pm \Re w}\phi_w$ are {\em normalizable} representatives of the cohomology of the original supersymmetric quantum mechanics.
The perturbation we choose is \begin{equation} w= k X, \eeq{m4} with $k$ an arbitrary nonzero constant. This perturbation, being linear in the coordinate $X$, does not change the fermionic part of the Hamiltonian. To prove point a), we have to solve the F-term equations $(W+w)_i=0$, and set to zero the D-terms. These equations read: \begin{equation} \phi^\pm x =0, \;\;\; \phi^+\phi^- + k =0,\;\;\; |\phi^+|^2-|\phi^-|^2=0. \eeq{m5} Here $x$ is the complex scalar in the supermultiplet $X$. The D-term equation implies $|\phi^+|=|\phi^-|$, while the F-term equations give (modulo a gauge transformation, obviously) \begin{equation} x=0,\;\;\; |\phi^\pm|=|k|^{1/2}; \;\;\; A_\mu=0,\; \mu=1,2,3. \eeq{m6} We do not need to compute the Hessian to see that it is nonzero. Indeed, since $\phi^\pm\neq 0$, the 4-d N=1 model is in the Higgs phase: all fields are massive. Upon dimensional reduction, this means that the Hessian of the scalar potential $V$ is nonzero. To prove point b) it is convenient to rewrite the scalar potential as \begin{equation} V={1\over 4}(|\phi^+|^2 + |\phi^-|^2)^2 +|k|^2 + 2\Re k\phi^+\phi^- + (|x|^2+A^2_\mu)(|\phi^+|^2 + |\phi^-|^2) . \eeq{m7} We want to prove that for $R^2\equiv (|\phi^+|^2 + |\phi^-|^2 + |x|^2 + A^2_\mu)\rightarrow \infty$, the potential attains its minimum value, $|k|^2$, along the valleys at $\phi^\pm=0$. The minimization in $\phi^\pm$, keeping $ (|\phi^+|^2 + |\phi^-|^2)$ fixed, gives $|\phi^+|=|\phi^-|$, and allows us to rewrite the potential as \begin{eqnarray} V&=&{1\over 4}R^4\cos^4 \theta -|k|R^2\cos^2 \theta +R^4\cos^2 \theta \sin^2 \theta + |k|^2, \nonumber \\ && R^2\cos^2 \theta \equiv |\phi^+|^2 + |\phi^-|^2,\;\;\; R^2\sin^2 \theta\equiv |x|^2+A^2_\mu. \eea{m8} For $R\rightarrow \infty$, the minimization in $\theta$ gives a minimum at $\cos \theta =0$, i.e. along the flat directions of the unperturbed model. Point c) is crucial in showing that the perturbed problem has the same cohomology as the unperturbed one. To prove it, we need only study the asymptotic form of the ground state in the perturbed problem, since $w$ is regular (and bounded) on any compact set. To apply Lemma 1, one has to determine the asymptotic form of the zero energy solution of the perturbed problem at large distances. While we are unable to solve the problem exactly, we may use the Born-Oppenheimer approximation in the asymptotic region. Indeed, as follows from the form of the scalar potential, the frequency of the oscillations in the directions transverse to the moduli subspace $\phi^\pm=0$ is $\om_{trans.}\geq \ha R^2$, while the characteristic frequency of the oscillations on the moduli is $\om_{moduli}\leq |k|$. For $R$ large enough, the separation into ``slow'' and ``fast'' modes holds. We can label the 5-d moduli space with the vector \begin{equation} {\bf x}\equiv(\Re x, \Im x, A_1, A_2, A_3), \eeq{m9} and define: \begin{equation} {\bf k}\equiv(\Re k, \Im k, 0,0,0). \eeq{m10} Along the flat directions, the wave function, $\Psi({\bf x})$, satisfies: \begin{equation}\label{equ} \lf(-\ha\nabla_{\bf x}\nabla_{\bf x} +\ha \lf|{\bf k}\rt|^2\rt) \Psi({\bf x})=0. \end{equation} Re-writing this equation in spherical coordinates, and denoting by $\om$ the angular variables, and by $r$ the radius $|{\bf x}|$, one obtains: \begin{equation} \label{equ'} \lf({d-1\ov r}{\pa\ov\pa r}+{\pa^2\ov\pa r^2}+{1\ov r^2}{\pa^2\ov\pa \om^2}-k^2\rt)\Psi({\bf x})=0.
\end{equation} After separation of variables, eq.~(\ref{equ'}) gives, for the radial part of the wave function, $\Psi({\bf x})=R(r)\Om(\om)$: \begin{equation} \label{bes1} \lf({d-1\ov r}{\pa\ov\pa r}+{\pa^2\ov\pa r^2}-\lf({{\bf L}^2\ov r^2}+k^2\rt)\rt)R(r)=0, \end{equation} where ${\bf L}^2$ is the square of the total angular momentum. The asymptotic behavior of the solutions to eq. (\ref{bes1}) at large $r$ is: \begin{equation} \label{sol2} R(r)\stackrel{r\rightarrow\infty}{\longrightarrow}{\rm const}\, {e^{-kr}\ov r^{{d-1\ov 2}}}\lf(1+{\cal O}\lf({1\ov r^2}\rt)\rt). \end{equation} The wave function in the transverse directions decays at least as fast as that of a harmonic oscillator, and the perturbation $w$, being linear in the coordinates, cannot spoil the normalizability. The normalizability of the ``slow''-mode functions $\psi^{\pm}({\bf x})=e^{\pm\Re w}\Psi({\bf x})$ can be checked as follows: \begin{eqnarray}\label{int} & &\int_{|{\bf x}|>r_0}d{\bf x}\,\lf|\psi^\pm({\bf x})\rt|^2={\rm const}\int_{r>r_0}d{\bf x}{e^{-2|k|r\pm 2{\bf k}{\bf x}}\ov r^{d-1}}\nonumber\\ &=&{\rm const}\, \Om_{d-2}\int_{r=r_0}^{\infty}dr\, e^{-2|k|r} \int_{0}^{\pi}d\th \sin^{d-2}\th \,e^{\pm 2|k|r\cos\th}= {\rm const}\,\int_{r=r_0}^{\infty}{dr\ov r^{{d-1\ov 2}}}. \end{eqnarray} The integral in (\ref{int}) converges if $d>3$. Since in the case of the H-monopole the dimension of the moduli space is $d=5$, the criteria for the existence of the normalizable supersymmetric ground state are satisfied. Our argument proves that there always exists at least one supersymmetric ground state in the H-monopole problem. Barring accidental degeneracies, this ground state should also be unique, in agreement with the predictions of S-duality. \section{The D0 Brane QM} The conjectured duality between the type IIA string theory and M theory~\cite{W3}, as well as the M(atrix) model conjecture~\cite{BFSS}, requires that a system of $N$ D0 branes in Type IIA string theory has a unique bound state at threshold for any $N$. In this section, we show that such a state exists for any prime $N$. For $N=2$, the existence of a zero-energy bound state has been proven in ref.~\cite{SS2}. It is convenient to rewrite the D0 supersymmetric QM in a formalism that makes manifest just four of its 16 supersymmetries, as we did for the H-monopole. This is achieved by first reducing the 10-d $SU(N)$ super Yang-Mills theory~\cite{BSS} --which describes $N$ D0 branes-- to four dimensions, and then performing another dimensional reduction to the 1-d QM~\cite{CH}. In the 4-d, N=1 language, the theory is made of a vector superfield, $V$, and three chiral superfields, which we shall denote by $\Phi_i$, $i=1,2,3$. The bosonic degrees of freedom of the vector multiplet are a 4-d vector $A_\mu$ in the adjoint of the gauge group, and some auxiliary fields. The bosonic degrees of freedom of the scalar multiplets are some auxiliary fields and the complex scalars $\phi_i$, also in the adjoint of the gauge group. The model has canonical kinetic terms, and it is completely determined by its superpotential, which is a holomorphic, gauge-invariant function: \begin{equation} W={1\over 6} \epsilon_{ijk}{\rm tr}\, \Phi_i[\Phi_j,\Phi_k]. \eeq{m11} Here ${\rm tr}\,$ denotes a trace over the gauge indices. This choice of superpotential ensures that the model is invariant under 16 supersymmetries, and not just the four which are manifest in this formalism.
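Before writing down the scalar potential, it may be useful to record the F-term that follows from eq.~(\ref{m11}); the short computation below is only a check of normalizations under the conventions stated above. Since $\epsilon_{ijk}{\rm tr}\,\Phi_i[\Phi_j,\Phi_k]=2\,\epsilon_{ijk}{\rm tr}\,(\Phi_i\Phi_j\Phi_k)$ and the trace is cyclic, one finds
$$ {\partial W\over \partial \Phi_i}={1\over 2}\,\epsilon_{ijk}[\Phi_j,\Phi_k]. $$
It is the square of this quantity that appears in the scalar potential below.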
The scalar potential of the quantum mechanical model reads, in the $A_0=0$ gauge: \begin{equation} V = \left|{\partial W\over \partial \phi_i}\right|^2 + {\rm tr}\, [\bar{\phi}_{\bar{\imath}},\phi_i][\bar{\phi}_{\bar{\jmath}},\phi_j] +{\rm tr}\, [A_\mu,\phi_i][A_\mu,\bar{\phi}_{\bar{\imath}}] + {1\over 4}{\rm tr}\, [A_\mu,A_\nu]^2 . \eeq{m12} Here, $\mu, \nu= 1,2,3$. \subsection{The Deformation} To exploit the technique described in Section 2, we must first deform the model by adding to it an appropriate superpotential $w$. Our choice for $w$ is \begin{equation} w=-{1\over 2}m{\rm tr}\,\Phi_i^2, \eeq{m13} with $m$ a nonzero constant. This perturbation has been introduced in ref.~\cite{VW}; in 4-d it gives a mass $|m|$ to the chiral multiplets. The zeroes of the perturbed potential are given by the solutions of the following equations, modded out by the action of the gauge group: \begin{eqnarray} [A_\mu,A_\nu]&=&0,\;\;\; [A_\mu,\phi_i]=0,\;\;\; \mu=1,2,3, \label{m14} \\ D&=&[\bar{\phi}_{\bar{\imath}},\phi_i]=0, \label{m15} \\ W_k + w_k=0 &\Rightarrow & [\phi_i,\phi_j]= m\epsilon_{ijk}\phi_k. \eea{m16} Eqs.~(\ref{m15},\ref{m16}) set to zero the D- and F-terms. This is equivalent to solving eq.~(\ref{m16}) and modding out by the complexified gauge group. The total Witten index is obtained by summing over the contributions to the index of each such solution. These equations have been studied in ref.~\cite{VW}, with the result that their solutions are in one-to-one correspondence with the complex conjugacy classes of $SU(2)$ into the gauge group $G$, that is, with the inequivalent representations of $SU(2)$ into the fundamental of $G$. There are three types of representations to be considered, generically: the trivial one, the irreducible one, and some reducible ones. The trivial representation is given by $\phi_i=0$. At this point the chiral multiplets have a 4-d ``mass'' $|m|$ that, upon dimensional reduction, implies that these modes have a nonzero frequency, and thus do not contribute to the index. This means that at $\phi_i=0$ the D0 brane system is effectively three dimensional. If, as suggested in ref.~\cite{dW}, the 3-d D0 brane has no bound states, then the $\phi_i=0$ minimum gives a null contribution to the index. In the rest of the paper we will assume that this result holds; later in this section we will report an argument supporting this assumption. When the gauge group is $SU(N)$, and $N$ is prime, the reducible representations break the gauge group to $U(1)^{N-1}$. The light degrees of freedom parametrize an Abelian theory, which again gives a null contribution to the index. Finally, the only nonzero contribution to the index comes from the unique irreducible representation of dimension $N$. It breaks $SU(N)$ completely, implying that all degrees of freedom are massive, i.e. that the minimum is at $A_\mu=0$ and that the Hessian determinant of the potential is nonzero. To sum up, the irreducible representation gives an isolated minimum contributing 1 to the index, while all other minima give no contribution. \subsection{The $L_2$ Cohomology} The argument above implies that $n_B-n_F=1$\footnotemark, \footnotetext{Obviously, the definition of bosons and fermions in supersymmetric QM is conventional, and it can be changed with a unitary redefinition of the fermionic Fock vacuum.} i.e. that in the perturbed problem with superpotential $W+w$ there exists at least one normalizable ground state.
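As an elementary illustration of the isolated minimum described above (this explicit example is given only for orientation and uses the notation of Appendix C), for $G=SU(2)$ the irreducible solution of eq.~(\ref{m16}) can be written in terms of the Pauli matrices as
$$ \phi_i=-{im\over 2}\,\sigma_i, \;\;\;\;\; [\phi_i,\phi_j]=-{m^2\over 4}\,[\sigma_i,\sigma_j]= m\,\epsilon_{ijk}\phi_k , $$
i.e. eq.~(\ref{c2}) with $L_i=\sigma_i/2$; it also satisfies the D-term condition~(\ref{m15}), breaks $SU(2)$ completely, and leaves no flat direction at the minimum.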
Next, following our general strategy outlined in Section 2, we must show that it is possible to associate to the perturbed ground state, $\phi_w$, two {\em normalizable} representatives, $\psi^\pm_w$, of the unperturbed cohomologies $Q$ and $\bar{Q}$. As in the H-monopole case, we set \begin{equation} \psi^\pm= (\exp\pm \Re w )\phi_w. \eeq{m19} To study the normalizability of $\psi^\pm_w$ we can limit ourselves to the asymptotic, large $\phi_i$ region, since for finite $\phi_i$ both $\phi_w$ and $\psi_w^\pm$ are smooth and obviously normalizable. We want, first of all, to find the new ``valleys'' of the perturbed potential, $V_w$, for large $\phi_i$, i.e. to find its minima for fixed, large radius $R=({\rm tr}\, \phi_i \bar{\phi}_{\bar{\imath}})^{1/2}$. This can be done by adding to the potential a Lagrange multiplier, and looking for the stationary points of \begin{eqnarray} V_w+ \lambda({\rm tr}\, \phi_i \bar{\phi}_{\bar{\imath}}-R^2) &=& \left|{\partial (W+w)\over \partial \phi_i}\right|^2 + {\rm tr}\, [\bar{\phi}_{\bar{\imath}},\phi_i][\bar{\phi}_{\bar{\jmath}},\phi_j] +{\rm tr}\, [A_\mu,\phi_i][A_\mu,\bar{\phi}_{\bar{\imath}}] + {1\over 4}{\rm tr}\, [A_\mu,A_\nu]^2 +\nonumber \\ && +\lambda({\rm tr}\, \phi_i \bar{\phi}_{\bar{\imath}}-R^2), \eea{m20} in the limit $R\rightarrow \infty$. Minimization of eq.~(\ref{m20}) for large $R$ shows that the valleys of the perturbed potential are the same as those of the unperturbed one: \begin{equation} [\phi_i,\phi_j]=[\phi_i,\bar{\phi}_{\bar{\imath}}]=0, \eeq{m21} and that along the valleys $V_w=|m|^2 R^2$. Our aim is to prove that the following integral converges: \begin{equation} \int_{{\rm tr}\, \phi_i \bar{\phi}_{\bar{\imath}}\geq R^2}d\mu \psi^{\pm\, *}_w \psi^\pm_w . \eeq{m23a} Here $d\mu$ denotes the (flat) integration measure of all bosonic variables. For $R \gg 1$, we can use again the adiabatic approximation, and reduce the integral to the estimate of the behavior of the adiabatic wave function on the $9(N-1)$-dimensional\footnotemark moduli space given by eq.~(\ref{m21}). \footnotetext{Here we write all formulae for the gauge group $SU(N)$, of course.} The asymptotic behavior of the wave function $\phi_w$ may be estimated as follows. On the moduli space, the Born-Oppenheimer wave function must satisfy the equation \begin{equation} H\phi_w=0, \;\;\; H=H_1+H_2,\;\;\; H_1= -\triangle_x,\;\;\; H_2= - \nabla_{y}\nabla_{\bar{y}} + |m|^2|y|^2 +|m|[\psi_\alpha^\dagger \psi_\alpha-3(N-1)]. \eeq{m24} The effective Hamiltonian $H$ is just the reduction of the perturbed D0 brane Hamiltonian to the moduli space. Here, $\triangle_x$ is the $3(N-1)$-dimensional Laplacian acting on the bosonic moduli of the vector multiplet: $x\equiv (A_1, A_2, A_3)$. We parametrized the moduli space of the chiral multiplets $\Phi_i$ by $3(N-1)$ superfields $Y=y +\theta^\alpha \psi_\alpha$, $\alpha=1,2$. The asymptotic behavior of $\phi_w$ is {\em generically} the same as that of the Green function $G=H^{-1}$, even though it may be better in some exceptional cases. By diagonalizing $H$ and denoting by $K=0,\ldots,6(N-1)$ the eigenvalue of the fermion number operator $\psi_\alpha^\dagger \psi_\alpha$ we find: \begin{equation} G(x,y,K|x',y', K')=\delta_{KK'}\sum_{n=0}^\infty \int {d^{3(N-1)} p\over (2\pi)^{3(N-1)}} {1\over p^2 + |m|(n+K)} e^{ip(x-x')} \Phi_n(y)^*\Phi_n(y'), \eeq{m25} where the $\Phi_n(y)$ are the usual eigenstates of the $6(N-1)$-dimensional harmonic oscillator with frequency $|m|$.
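For the reader's convenience, we recall here the two elementary identities used in the next step; they are written schematically, without keeping track of the precise normalization of $H_2$:
$$ {1\over p^2+a}=\int_0^\infty dt\; e^{-t(p^2+a)}, \;\;\;\;\; \int {d^{n}p\over (2\pi)^{n}}\; e^{-t\,p^2+ip\cdot(x-x')}={1\over (4\pi t)^{n/2}}\; e^{-|x-x'|^2/4t}, $$
with $n=3(N-1)$. Applying them to eq.~(\ref{m25}) and resumming the oscillator eigenfunctions into the heat kernel of $H_2$ leads to the representation below.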
By using the Schwinger representation for the free propagator and integrating in $p$ we find: \begin{equation} G(x,y,K|x',y', K')={1\over 2}(2\pi)^{-3(N-1)/2}\int_{t=0}^\infty dt t^{-3(N-1)/2} \exp(-|x-x'|^2/2t) \langle y,K | \exp(-t H_2) |y',K'\rangle. \eeq{m26} Eq.~(\ref{m26}) can be computed with a standard Euclidean functional integral, which gives \begin{eqnarray} G(x,y,K|x',y',K')&=&\delta_{KK'} {1\over 2}(2\pi)^{-3(N-1)/2}\int_0^\infty dt t^{-3(N-1)/2} \exp\{-|x-x'|^2/2t+|m|t [3(N+\nonumber \\ && -1)- K]\} \left(|m|\over \sinh |m|t\right)^{3(N-1)} \exp\Big\{-{|m|\over \sinh |m|t} [(|y|^2 +\nonumber \\ && + |y'|^2)\cosh |m|t -2\Re (y^*y')]\Big\}. \eea{m27} The ground-state wave function $\phi_w$, and thus $\psi^\pm_w$, is not an eigenstate of the fermion number operator $\psi^\dagger_\alpha\psi_\alpha$, as explained in more detail in Appendix B. Rather, $\psi^\pm_w$ is a superposition of wave functions with different fermion numbers. Asymptotically in $|y|$ one has: \begin{equation} \psi^\pm_w\approx \sum_{K=0}^{6(N-1)}c_KG(x,y,K|x',y',K)\exp(\pm \Re m y^2), \eeq{m27aaa} for some constant coefficients $c_K$. The asymptotic decay rate of the Green function~(\ref{m27}) increases with $K$, which implies that, for large $|y|$: \begin{equation} \psi^\pm_w\approx c_{K^o}G(x,y,K^o|x',y',K^o)\exp(\pm \Re m y^2), \eeq{m27aa} where $K^o$ is the smallest $K$ appearing in equation~(\ref{m27aaa}) with a nonzero coefficient $c_{K}$. To prove that the $\psi_w^\pm$ are normalizable we need only study the large $\phi_i$ region, i.e. $|y| \gg |y'|$. We first integrate in $x$ and obtain \begin{eqnarray} \int d^{3(N-1)}x |\psi_w^\pm(x,y)|^2 &\approx & {|c_{K^o}|^2\over 2} \int_0^\infty ds\int_0^\infty dt (t+s)^{-3(N-1)/2} \exp\{|m|(t+s) [3(N-1)+ \nonumber \\ && -K^o]\} \left(|m|^2\over \sinh |m|s\sinh |m|t \right)^{3(N-1)} \exp[-|m|(\coth |m|s+ \nonumber \\ && +\coth |m| t)|y|^2 \pm 2\Re m y^2]. \eea{m27a} We can restrict the region of integration to large values of $y$ by restricting $y$ to lie outside of the hypercube ${\cal C}$, defined by $ \Re y_i, \Im y_i\leq R \;\forall i$, with $R \gg |m|^{1/2}$. We find: \begin{eqnarray} \int_{R^{6(N-1)}-{\cal C}} d^{6(N-1)}y |\psi_w^\pm|^2 &\approx & A\int_0^\infty ds\int_0^\infty dt (t+s)^{-3(N-1)/2} \exp\{|m|(t+s) [3(N -1)- K^o]\}\times \nonumber \\ && \{\sinh^2 |m|s\sinh^2 |m|t[(\coth |m|s + \coth |m|t)^2 -4]\}^{-3(N-1)/2} \times \nonumber \\ && \exp[-R^2|m|(\coth |m|s +\coth |m|t -2)], \eea{m27b} where $A$ is a finite, positive constant. The dangerous region of integration for the norm of $\psi_w^\pm$ is where $s$ and $t$ are both large. There, eq.~(\ref{m27b}) simplifies considerably, and one finds \begin{eqnarray} \int_{R^{6(N-1)}-{\cal C}} d^{6(N-1)}y |\psi_w^\pm|^2 &\approx & A\int_M^\infty ds\int_M^\infty dt (t+s)^{-3(N-1)/2}\exp\{|m|[3(N-1)/2-K^o](s+t)\} \nonumber \\ && \times[\cosh |m|(t-s)]^{-3(N-1)/2}+B = \int_{2M}^\infty du \int_{-\infty}^{\infty}dv u^{-3(N-1)/2}\times \nonumber \\ && \exp \{|m|[3(N-1)/2-K^o]u\}(\cosh|m|v)^{-3(N-1)/2} +B. \eea{m27c} Here $M\gg 1/|m|$, $u=s+t$, $v=s-t$, and $B$ is another finite, positive constant. As explained in Appendix C, the ground-state wave function is the completely filled state in the Fock space generated by a subset of the oscillators $\psi_\alpha$, $\psi^\dagger_\alpha$. Using this result, it is easy to compute the fermion number $K^o$. 
Indeed, for each $2\leq l\leq 2j$ there exist $2\times 3$ such oscillators, while for $l=1$ there are only two such oscillators\footnotemark; therefore, \begin{equation} K^o=\sum_{l=2}^{2j}6 +2= 6(2j-1) +2 = 6N -10. \eeq{m27d} The integral in eq.~(\ref{m27b}) is finite if and only if $K^o>3(N-1)/2$. This happens for $N>17/9$, and therefore the cohomology representatives $\psi^\pm_w$ are $L_2$ for all $SU(N)$ with $N$ prime. \footnotetext{See Appendix C for notations. The factor 2 comes from the index $\alpha=1,2$; the factor 3 comes from the index $s_3=0,\pm 1$. For $l=1$ only $s_3=0$ is allowed.} By applying the method of Section 2, we conclude that a system of $N$ D0 branes in 9 dimensions has at least one zero-energy bound state for any prime $N$. This conclusion, in agreement with expectations from string duality and Matrix theory, holds if no bound states exist in the 3-d D0 brane system. An argument which shows that this is the case has been formulated by S. Shenker. It goes as follows. The asymptotic moduli space can be partitioned into $N-1$ subregions in which the VEVs of the scalars break $SU(N)$ to $SU(N-n)\times U(1)^n$, $n=1,..,N-1$. Geometrically, this corresponds to having $N-n$ D0 branes close to each other and $n$ far away. Along this flat direction, the ground-state wave function approximately obeys a free $dn$-dimensional Laplace equation, whose long-distance behavior is generically given by the Green function $G(x,0)=|x|^{2-dn}$, $x\in R^{dn}$. This function is square summable at large $x$ only if $nd>4$ (indeed, $\int d^{dn}x\, |x|^{2(2-dn)}\sim \int dr\, r^{3-dn}$ at large $|x|$); for $n=1$ one gets $d>4$. This argument is incomplete (in some cases the asymptotic behavior of the wave function can be better than that of the Green function), but still rather compelling, especially when coupled with the rigorous results of~\cite{FH}. \section{Conclusions} In this paper, we proposed a novel way of studying the existence of bound states at threshold in supersymmetric QM systems. We applied our method to the case of H-monopoles, and re-derived a known result, namely that there is (at least) one normalizable bound state at threshold in the $U(1)$ theory describing the ``missing'' H-monopoles~\cite{W1}. We also studied a QM with gauge group $SU(N)$ and 16 supercharges, which describes a system of $N$ D0 branes in nine dimensions. Applying our method we found that this system possesses (at least) one bound state at threshold for any prime $N$, assuming that no bound states of D0 branes exist in three dimensions. This result complements the one of ref.~\cite{SS2}, in which the existence of a bound state was proven for $N=2$. We believe that the restriction on $N$ is purely technical and not due to any fundamental limitation of our method. Our method uses the stability of the supercharge cohomology under appropriate deformations of the superpotential. Compared with the technique of refs.~\cite{SS1,SS2}, it has the advantage that it does not require the computation of a massive multi-dimensional ``bulk'' integral, or the delicate estimate of a boundary term. Besides these technical points, we think that our method may be of interest since it seems possible to extend it to the computation of the index of more complicated (and less supersymmetric) QM models, such as the QM describing the 4-d BPS black holes of ref.~\cite{KT}. Finally, let us emphasize that our method always determines wave functions which are in the same cohomology class as the true ground state, rather than the ground state itself.
This means, in particular, that our method does not determine the true asymptotic behavior of the ground-state wave function, which may be very different from that of the cohomology representatives. \vskip .2in \noindent {\bf Acknowledgments}\vskip .1in \noindent We would like to thank J. Cheeger for useful discussions, and M. Stern for useful comments on the manuscript. M.P. is supported in part by NSF grants no. PHY-9318781, PHY-9722083; A.R. is supported in part by a Margaret and Herman Sokol Research Fellowship. A.R. would like to thank the LPTHE at the University of Paris VI for its kind hospitality and support. \section*{Appendix A} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} The deformed Hamiltonian $H_w$ may still turn out to be too complicated to be solved exactly. However, we are interested only in the asymptotic behavior of the solutions, which can be found in the Born-Oppenheimer approximation. Typically, since the perturbation is small compared to the original Hamiltonian away from the subspace of flat directions, and the valleys in the original problem become increasingly narrow, the frequency of the oscillations transverse to the original valleys is very large compared to the frequency along the valleys. Given that, one separates the variables into ``slow'' $\xi$, and ``fast'' $\eta$, and introduces the corresponding Hamiltonians: \begin{equation}\label{hh} H(\xi,\eta)=H_1(\xi,\eta)+H_2(\xi). \end{equation} The $H_1$ piece depends on the slow variables $\xi$ only parametrically, i.e. it contains no derivatives with respect to $\xi$. The eigenfunctions of the full Hamiltonian satisfy \begin{equation}\label{anz2} (H_1+H_2)\Phi(\xi,\eta)=E(\xi,\eta)\Phi(\xi,\eta), \end{equation} and in the adiabatic approximation are factorized as: \begin{equation} \label{anz} \Phi(\xi,\eta)=\sum \Xi(\xi)\Psi(\xi,\eta). \end{equation} The $\Psi(\xi,\eta)$ are the eigenfunctions of $H_1$: \begin{equation}\label{add4} H_1(\xi,\eta)\Psi(\xi,\eta)=E'(\xi)\Psi(\xi,\eta). \end{equation} Both Hamiltonians $H_{1,2}$ are supersymmetric and hence non-negative; for that reason, in order to find the ground state of (\ref{anz2}), we have to look for zero energy solutions of the corresponding Hamiltonians. The existence of a solution to (\ref{anz2}) is implied by the non-vanishing of the Witten index, and we are only concerned with the asymptotic form of the solutions. We want to show that the ground state $\Psi^{\lf(0\rt)}(\xi,\eta)$ of eq.~(\ref{add4}), and the ground state $\Xi^{\lf(0\rt)}(\xi)$, solution of \begin{equation}\label{add5} H_2\Xi^{\lf(0\rt)}(\xi)=0, \end{equation} give an adequate asymptotic approximation of the ground state wave function $\Phi^{\lf(0\rt)}(\xi,\eta)$. Recall that the asymptotic form of a solution of our Fredholm differential operators is not affected by terms of order $(\mbox{distance})^{-2}$. The zeroth order terms give the leading exponent, while the terms of order $(\mbox{distance})^{-1}$ provide the subleading power correction. Therefore, when solving eq.~(\ref{anz2}), one may neglect corrections of order $(\mbox{distance})^{-2}$ in the equation for $\Phi(\xi,\eta)$. Since $H_2(\xi)=\ha\{Q_\xi,\bar{Q}_\xi\}$ is a supersymmetric Hamiltonian, one has $Q_\xi\Xi^{\lf(0\rt)}(\xi)=\bar{Q}_\xi\Xi^{\lf(0\rt)}(\xi)=0$. Therefore: \begin{equation} \label{boe1} \lf(H_1(\xi,\eta)+H_2(\xi)\rt)\Xi^{\lf(0\rt)}(\xi)\Psi^{\lf(0\rt)}(\xi,\eta) =-\Xi^{\lf(0\rt)}(\xi)\lf(\bigtriangleup_{\xi}\Psi^{\lf(0\rt)}(\xi,\eta)\rt).
\end{equation} Multiplying the r.h.s. of eq.~(\ref{boe1}) by $\Psi^{\lf(0\rt)^*}(\xi,\eta)$, and integrating over $\eta$, one obtains: \begin{equation}\label{boe2} \Xi^{\lf(0\rt)}(\xi) N^2(\xi)\int \, d\eta \,\Psi^{\lf(0\rt)^*}(\xi,\eta)\lf({\pa^2\ov\pa\xi^2}+ {d_{\xi}-1\ov\xi}{\pa\ov\pa\xi}\rt) \Psi^{\lf(0\rt)}(\xi,\eta). \end{equation} The variable $\xi$ is treated as a parameter in the equation $H_1(\xi,\eta)\Psi^{\lf(0\rt)}(\xi,\eta)=0$. The function $\Psi^{\lf(0\rt)}(\xi,\eta)$ is of the form \begin{equation} \Psi^{\lf(0\rt)}(\xi,\eta)=N(\xi)\,{\cal F}(\xi^\al\eta), \eeq{madd4} where $N$ is a normalization factor, and $\al(\eta)\geq\al_0>0$ depends on the particular potential. For a harmonic oscillator, $\al$ is of course $\ha$. After a change of variables to $\mu\equiv\xi^\al\eta$, eq.~(\ref{boe2}) takes the form: \begin{equation}\label{boe3} \Xi^{\lf(0\rt)}(\xi) N'^2\int \, d\mu \,{\cal F}^*(\mu)\lf({\pa^2\ov\pa\xi^2}+{d_{\xi}-1\ov\xi}{\pa\ov\pa\xi}\rt){\cal F}(\mu), \end{equation} where $N'$ is just a number: \begin{equation} {1\ov N'^2}=\int \, d\mu \lf|{\cal F}(\mu)\rt|^2. \eeq{madd5} Due to the definition of $\mu$, each differentiation in eq.~(\ref{boe3}) brings down one inverse power of $\xi$, so that eq.~(\ref{boe2}) becomes: \begin{equation}\label{add6} {\Xi^{\lf(0\rt)}(\xi)\ov\xi^2}\times\lf(\mbox{integral independent of}\,\xi\rt). \end{equation} Consequently, $\Phi^{\lf(0\rt)}(\xi,\eta)=\Xi^{\lf(0\rt)}(\xi) \Psi^{\lf(0\rt)}(\xi,\eta)$, obtained from eqs.~(\ref{add4},\ref{add5}), gives the ground state of eq.~(\ref{hh}) up to order $\xi^{-2}$. \section*{Appendix B} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} In this appendix, we want to study the semiclassical approximation for a supersymmetric quantum mechanics. In this approximation, the zero-energy wave function takes the form \begin{equation} \Psi(x)=\exp[-S(x)/\hbar] F(x) + O(\hbar). \eeq{mb1} Here $x$ denotes all bosonic variables, while $F(x)$ is a map from $x$ into the fermionic Fock space. The zero-energy ground state of a supersymmetric system obeys a Schr\"odinger equation which reads, schematically: \begin{equation} \left[-{\hbar^2\over 2}\nabla^2_x + V(x) + \hbar M(x) \right] \Psi(x)=0. \eeq{mb2} $V(x)\geq 0$ is the scalar potential, which by assumption has an isolated, non-degenerate zero at $x=0$, while the ``fermion mass'' term $M(x)$ is a map from $x$ into the linear operators of the Fock space, i.e. it is an $x$-dependent finite-dimensional matrix. By substituting the ansatz eq.~(\ref{mb1}) into the Schr\"odinger equation and expanding in powers of $\hbar$, we find the equations: \begin{equation} {1\over 2}\nabla_x S\nabla_x S - V=0, \;\;\; \left[(\nabla_x S)\nabla_x + M(x) + {1\over 2}(\nabla^2_xS)\right]F(x)=0. \eeq{mb3} The first equation is the familiar Hamilton-Jacobi equation of a bosonic system with potential $-V(x)$, while the second determines the vector $F$ in the Fock space. By denoting with $N(x)$ the matrix $M(x) + I(\nabla_x^2S)/2$ ($I$ is the identity in the Fock space), we can easily write down the solution for $F(x)$ as \begin{equation} F(x)=\lim_{\tau\rightarrow -\infty}T\exp[-\int_{\tau}^{t(x)} ds\, N(x(s))] F(0). \eeq{mb4} Here, $T$ denotes the ordering of the matrices $N$ in the parameter $s$. $x(s)$ is the zero-energy classical trajectory approaching $x=0$ at $s\rightarrow -\infty$, and reaching $x$ at $s=t(x)$. A general property of a supersymmetric system is that the eigenvalues of $N(x)$ are non-negative near $x=0$.
In particular, $N(0)$ has a unique zero eigenvalue, whose eigenvector $|0\rangle$ may be used to define the Fock vacuum. Notice also that the trajectory $x(s)$ in eq.~(\ref{mb4}) spends an infinite amount of time near $x=0$. This fact, together with the positive semi-definiteness of $N$, implies that all components of $F(0)$ are projected out by the operator $\lim_{\tau\rightarrow -\infty} T\exp[-\int_{\tau}^{t(x)}dsN(x(s))]$, except the one parallel to the Fock vacuum $|0\rangle$. Therefore, we can choose as initial condition in eq.~(\ref{mb4}) $F(0)={\rm const}\, |0\rangle$. Since $N(x(s))$ has fermion number zero, $F(x)$ has the same fermion number as $F(0)$, i.e. it is proportional to the Fock vacuum {\em defined by} $N(0)$. This last remark is important, since by using another $N(x)$ one may define another Fock vacuum, which may not coincide with the previous one. In terms of the new Fock vacuum, the old one (generically) is not even an eigenstate of the new fermion number operator. For the case of the D0 brane, this was explicitly shown in ref.~\cite{dWN1}. \section*{Appendix C} \renewcommand{\theequation}{C.\arabic{equation}} \setcounter{equation}{0} The second derivative of the superpotential of the perturbed D0 brane system is: \begin{equation} (W+w)''_{ij}(\phi)=\epsilon_{ilj}\phi_l-m\delta_{ij}I, \eeq{c1} where $I$ is the identity matrix acting (as $\phi_l$ does) on the adjoint of $SU(N)$. It determines the frequencies of the fermionic oscillators in the chiral multiplets, since the fermionic Hamiltonian is \begin{equation} H_F=\epsilon_{\alpha\beta}\psi_\alpha^i (W+w)''_{ij}(\phi)\psi_\beta^j+ {\rm h.c.} +{\rm gaugino \; terms}, \;\;\; \alpha,\beta=1,2. \eeq{c1a} The superpotential $W+w$, given by eqs.~(\ref{m11},\ref{m13}), has an isolated stationary point determined by eqs.~(\ref{m14}-\ref{m16}). Thanks to eq.~(\ref{m16}) the $\phi_i$ obey, up to a rescaling, the commutation relations of the generators of $SU(2)$. Moreover, as discussed in the text, they act on the fundamental of $SU(N)$ as the irreducible representation of dimension $N$. This means that one can define \begin{equation} \phi_i=-imL_i. \eeq{c2} $L_i$ acts on the adjoint of $SU(N)$, that is on the traceless product of two fundamentals. In terms of representations of $SU(2)$, $L_i$ acts on the reducible representation \begin{equation} j\times j - 0= 2j + (2j-1) +... + 1, \;\;\; j=(N-1)/2. \eeq{c3} Noticing that $i\epsilon_{ilj}$ acts on the index $i$ as the $j=1$ representation of $SU(2)$, we can re-write eq.~(\ref{c1}) in a remarkably simple way: \begin{equation} (W+w)''= -m (S_iL_i +1). \eeq{c4} In this equation, $S_i$ are the generators of $SU(2)$ in the $j=1$ representation. Eq.~(\ref{c4}) is easily diagonalized in terms of the eigenstates of the ``total angular momentum'' $\vec{J}^2=(\vec{S}+ \vec{L})^2$. The eigenvalues and their multiplicity are given in Table 1. \begin{table}[h]\centering \caption{Eigenvalues of $(W+w)''$} \begin{tabular}{|c|c|c|c|} \hline $\vec{L}$ irrep & $\vec{J}$ irrep & Eigenvalue & Multiplicity \\ \hline\hline $1\leq l \leq 2j$ & $l+1 $ & $-m(l+1)$ & $2l + 3$ \\ \hline $l$ & $l$ & $0$ & $2l +1$ \\ \hline $l$ & $l-1$ & $ml$ & $2l -1$ \\ \hline \end{tabular} \end{table} The zero eigenvalues of $(W+w)''$ are due to gauge invariance, while the corresponding eigenvectors mix with the gauginos to give states with ``fermion mass'' $\pm m$.
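As a check of Table 1 (this verification is elementary and is included only for convenience), on states of total angular momentum $J$ one has $S_iL_i=\ha[J(J+1)-l(l+1)-2]$, so that
$$ S_iL_i\big|_{J=l+1}=l,\;\;\; S_iL_i\big|_{J=l}=-1,\;\;\; S_iL_i\big|_{J=l-1}=-(l+1), $$
and eq.~(\ref{c4}) then gives the eigenvalues $-m(l+1)$, $0$ and $ml$, with multiplicities $2J+1=2l+3$, $2l+1$ and $2l-1$, respectively.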
For our purpose, it is useful to choose $m$ to be real and positive, and use gauge invariance to put $L_3$ in eq.~(\ref{c3}) in the form \begin{equation} L_3={\rm diag}\,(j,j-1,...,-j). \eeq{c4a} With this choice, the fields in the Cartan subalgebra of $SU(N)$ are diagonal, up to an $SU(N)$ transformation. Also, at a generic point in moduli space, all and only the fields in the Cartan subalgebra are light. This suggests introducing another basis for the chiral multiplets, labeled by the eigenvalues of $L_3$ and $S_3$, $l_3,s_3$. In this basis the ``light'' fermions (i.e. the slow modes in the adiabatic expansion) read $U^\dagger \psi_{l_3=0,s_3,\alpha}U$, $U\in SU(N)$, while the heavy fermions (fast modes) read $U^\dagger \psi_{l_3\neq 0,s_3,\alpha}U$. Now, we have two bases with which to label the fermions of the chiral multiplet. One, labeled by the eigenvalues of $\vec{J}^2, J_3$, diagonalizes the fermion mass matrix around the stationary point given by eqs.~(\ref{m14}-\ref{m16}). The other, labeled by the eigenvalues of $L_3,S_3$, diagonalizes the fermion mass matrix along the moduli space. As shown in Appendix B, the fermionic wave function, $F(\phi)$, is proportional to the Fock vacuum determined by $N(0)$. In our case, this means, among other things, that $F(\phi)$ obeys the following equation: \begin{equation} \alpha^{(l-1)}_{j_3}F(\phi)= 0 \Rightarrow U^\dagger\alpha^{(l-1)}_{j_3}UF(\phi)= 0, \;\;\; -1\leq j_3 \leq 1. \eeq{c5} Here, we have used the gauge invariance of $F(\phi)$, and denoted by $\alpha^{(l-1)}_{j_3}$ the fermion annihilation operator in the basis $\vec{J}^2,J_3$, belonging to the $\vec{J}$ representation $(l-1)$. When expressed in terms of the creators and annihilators $\beta^\dagger_{l_3s_3}$, $\beta_{l_3,s_3}$, in the basis $L_3,S_3$, this operator obeys: \begin{equation} U^\dagger\alpha^{(l-1)}_{j_3}U=\langle l-1,j_3|l,1,0,j_3\rangle \beta^\dagger_{0,j_3} + \sum_{m\neq 0}\langle l-1,j_3|l,1,m,j_3-m\rangle\beta^\#_{m,j_3-m}. \eeq{c6} Here $\beta^\#$ stands for either $\beta$ or $\beta^\dagger$, while $\langle l-1,j_3|l,1,l_3,s_3\rangle$ are Clebsch-Gordan coefficients. Notice the presence of the creation operator $\beta^\dagger_{0,j_3}$ on the r.h.s. of eq.~(\ref{c6}). This is a crucial point that needs some explaining. The matrix $(W+w)''_{ij}$ in eq.~(\ref{c1}), computed at the stationary point, is real, since it is given by eq.~(\ref{c3}). This implies that if $\psi_1$ and $\psi_2$ are two eigenvectors of $(W+w)''$, with eigenvalue $\omega$, then the eigenstates of the ``fermion mass'' are $\psi_\pm=\psi_1\pm \psi^\dagger_2$, with eigenvalues $\pm \omega$. Along the valley given by eq.~(\ref{m21}), $(W+w)''$ can also be diagonalized. Let us label by $\tilde{\psi}^a_1$, $\tilde{\psi}^a_2$ a basis of eigenstates with eigenvalues $\omega^a$. Expanding $\psi_\alpha$ in this new basis we find $U^\dagger\psi_\alpha U=\sum_a c_a\tilde{\psi}^a_\alpha $, $U\in SU(N)$. We also have: $U^\dagger\psi_\pm U=\sum_a (c_a\tilde{\psi}^a_1 \pm c^*_a \tilde{\psi}^{a\,\dagger}_2)$. When, say, $\omega^1$ and $c_1$ are real, one may rewrite this equation in the form $U^\dagger\psi_\pm U=c_1\tilde{\psi}^1_\pm + ... $. In this last equation, $\psi_+$ and $\tilde{\psi}^1_+$ are both creators or both annihilators if $\omega$ and $\omega^1$ have the same sign. When $\omega$ and $\omega^1$ have opposite sign, instead, $\psi_+$ is a creator when $\tilde{\psi}^1_+$ is an annihilator, and vice versa.
In our case, $c_1$ is a Clebsch-Gordan coefficient, which is indeed real; the eigenvalue $\omega$ is $ml$, while $\omega^1$ is $-m$, which explains eq.~(\ref{c6}). By writing eq.~(\ref{c5}) in terms of the oscillators $\beta$, we get an equation that can be simplified by noticing that on the moduli space, away from the origin $\phi_i=0$, the charged fermions created by $\beta^\dagger_{m,s_3}$, $m\neq 0$ have a very large frequency $\propto |\phi_i|$. This implies that all components of $F(\phi)$, except the one proportional to the ground state of these oscillators, decay as fast as $\exp(-{\rm const}\,|\phi_i|^4)$. Eq.~(\ref{c5}) can thus be rewritten as \begin{equation} (\langle l-1,j_3|l,1,0,j_3\rangle\beta^\dagger_{0,j_3} + \sum_{m\neq 0} c^m_{j_3}\beta^\dagger_{m,j_3-m})F(\phi)=0. \eeq{c7} Here $c^m_{j_3}=\langle l-1,j_3|l,1,m,j_3-m\rangle$ if $\beta^\#_{m,j_3-m}= \beta^\dagger_{m,j_3-m}$, and $c^m_{j_3}=0$ otherwise. Notice that the sum in eq.~(\ref{c7}) contains at most 2 non-vanishing terms. Since $F(\phi)$ has zero charged-fermion number, in the large-$\phi_i$ region of moduli space all terms in the sum have to vanish separately. Since $\langle l-1,j_3|l,1,0,j_3\rangle$ vanishes only for $j_3=\pm l$, this may only happen when all $c^m_{j_3}$ vanish, and $\beta^\dagger_{0,j_3}F(\phi)=0$ for all $|j_3|\leq \min[1,l-1]$. This equation means that $F(\phi)$ is the completely filled state in the Fock space of the oscillators $\beta^\#_{0,j_3}$, $|j_3|\leq \min[1,l-1]$. \newpage
\section{\bf Introduction} The Jacobi structure on a manifold $M$ (Jacobi manifold) was introduced by A. Lichnerowicz \cite{Lich2} and, as a local Lie algebra structure on $C^{\infty}(M,R)$, by A. Kirillov \cite{Ki}. Jacobi structures have all the properties of Poisson structures, except that the associated bracket is not necessarily a derivation. The generalization of Poisson-Lie groups (Lie groups whose Poisson structure is compatible with the group structure \cite{Drin},\cite{Drin2}) to Jacobi-Lie groups was carried out in \cite{Iglesias}. This work was first done at the level of Lie bialgebroids: the relation between Jacobi structures and Lie bialgebroids was studied and the generalized Lie bialgebroids \cite{IM} (Jacobi bialgebroids \cite{GM}) were defined; then the generalized Lie bialgebras and their group structure (i.e., Jacobi-Lie groups) were defined \cite{Iglesias}. Generalized Lie bialgebras, introduced by D. Iglesias and J. C. Marrero, are the algebraic structures underlying Jacobi-Lie groups, in the same way that Lie bialgebras are the algebraic structures underlying Poisson-Lie groups \cite{Drin},\cite{Drin2}. In \cite{Iglesias}, a generalized Yang-Baxter equation method was proposed for obtaining Jacobi-Lie bialgebras, and some examples of Jacobi-Lie bialgebras were given. Here, we describe the definition of the Jacobi-Lie bialgebras $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ in terms of the structure constants of the Lie algebras ${\bf{g}}$ and ${\bf{g}}^{*}$ and the components of their 1-cocycles $X_{0}\in {\bf{g}}$ and $\phi_{0}\in {\bf{g}}^{*}$ in the basis of the Lie algebras. Then, using adjoint representations and the automorphism Lie groups of these Lie algebras, we obtain a method for classifying Jacobi-Lie bialgebras, and we classify real two and three dimensional Jacobi-Lie bialgebras. The outline of the paper is as follows. In section two, we describe the definition of the Jacobi-Lie bialgebras $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ in terms of the structure constants of the Lie algebras ${\bf{g}}$ and ${\bf{g}}^{*}$ and the components of the 1-cocycles $X_{0}\in {\bf{g}}$ and $\phi_{0}\in {\bf{g}}^{*}$ in the basis of the Lie algebras; at the end of this section we give a proposition about the equivalence of Jacobi-Lie bialgebras using automorphisms of Lie algebras. Then, in section three, we give the matrix representation of the relations obtained in section two, using adjoint representations. We also give three steps for obtaining and classifying real low dimensional Jacobi-Lie bialgebras. In section four, in order to clarify our method, we give a detailed example; we then obtain and classify real two and three dimensional Jacobi-Lie bialgebras by this method. Some remarks are addressed in the conclusion. \section{\bf Jacobi-Lie bialgebra } In this section, we review the basic definitions of Jacobi (generalized)-Lie bialgebras $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$. Then, we describe these definitions in terms of the structure constants of the Lie algebras ${\bf g}$ and ${{\bf{g}}^{*}}$ and the components of the 1-cocycles $X_{0}\in {\bf{g}}$ and $\phi_{0}\in {\bf{g}}^{*}$ in the basis of the Lie algebras.
\begin{definition}\cite{Iglesias}: A Jacobi-Lie bialgebra is a pair $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$, where $({\bf{g}},[,]^{{\bf{g}}})$ is a real Lie algebra of finite dimension with Lie bracket $[,]^{{\bf{g}}}$, so that the dual space ${\bf{g}}^{*}$ is also a Lie algebra with bracket $[,]^{{\bf{g^{*}}}}$, $X_{0}\in {\bf{g}}$ and $\phi_{0}\in {\bf{g}}^{*}$ are 1-cocycles on ${\bf{g}}^{*}$ and ${\bf{g}}$, respectively, and {\small $\forall X,Y \in {\bf{g}}$} we have \begin{equation}\label{1} d_{*X_{0}}[X,Y]^{{\bf{g}}}=[X,d_{*X_{0}}Y]^{{\bf{g}}}_{\phi_{0}}-[Y,d_{*X_{0}}X]^{{\bf{g}}}_{\phi_{0}}, \end{equation} \begin{equation}\label{2} \phi_{0}(X_{0})=0, \end{equation} \begin{equation}\label{3} i_{\phi_{0}}(d_{*}X)+[X_{0},X]=0, \end{equation} \end{definition} where $i_{\phi_{0}}P$ is contraction of a $P\in\wedge^{k}{\bf g}$ to a tensor $\wedge^{k-1}{\bf g}$; furthermore $d_{*}$ being the Chevalley-Eilenberg differential of ${\bf{g}}^{*}$ acting on ${\bf{g}}$ and $d_{*X_{0}}$ is its generalization such that we have \begin{equation}\label{4} d_{*X_{0}}Y= d_{*}Y+X_{0}\wedge Y, \end{equation} meanwhile $[,]^{{\bf{g}}}_{\phi_{0}}$ is $\phi_{0}$-Schouten-Nijenhuis bracket with the following properties $$ \hspace{-11cm}\forall P\in\wedge^{k}{\bf{g}}, P{'}\in\wedge^{k{'}}{\bf{g}}, P^{''}\in\wedge^{k^{''}}{\bf{g}},~~~~~~ $$ \vspace{-2mm} \begin{equation}\label{5} [P,P{'}]_{\phi_{0}}=[P,P{'}]+(-1)^{k+1}(k-1)P\wedge i_{\phi_{0}}P{'}-(k{'}-1)i_{\phi_{0}}P\wedge P{'}, \end{equation} \begin{equation}\label{6} [P,P{'}]_{\phi_{0}}=(-1)^{kk{'}}[P{'},P]_{\phi_{0}}, \end{equation} \begin{equation}\label{7} [P,P{'}\wedge P^{''}]_{\phi_{0}}=[P,P{'}]_{\phi_{0}}\wedge P^{''} + (-1)^{k{'}(k+1)}P{'}\wedge[P,P^{''}]_{\phi_{0}}-(i_{\phi_{0}}P)\wedge P^{'}\wedge P^{''}, \end{equation} \begin{equation}\label{8} (-1)^{kk^{''}}[[P,P{'}]_{\phi_{0}},P^{''}]_{\phi_{0}}+(-1)^{k{'}k^{''}}[[P^{''},P]_{\phi_{0}},P{'}]_{\phi_{0}}+ (-1)^{kk{'}}[[P{'},P^{''}]_{\phi_{0}},P]_{\phi_{0}}=0. \end{equation} Moreover, in the above definition, the $\phi_{0}(X_{0})$ means the natural inner product of the dual spaces ${\bf{g}}$ and ${\bf{g}}^{*}$. Note that, the above definition is symmetric with respect to $({\bf{g}},\phi_{0})$ and $({\bf{g}}^{*},X_{0})$ i.e., if $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ is a Jacobi-Lie bialgebra then $(({\bf{g}}^{*},X_{0}),({\bf{g}},\phi_{0}))$ is also a Jacobi-Lie bialgebra, for this case, we have $d$ as the Chevalley-Eilenberg differential of ${\bf{g}}$ acting on ${\bf{g}}^{*}$ and it has the following $\phi_{0}\in{\bf{g}}^{*}$ generalization \begin{equation}\label{9} \forall w\in \wedge^{k} {\bf g^{*}} ~~~~~~~~~~~~~~~ d_{\phi_{0}}w=dw+\phi_{0}\wedge w, \end{equation} with the Schouten-Nijenhuis bracket replaced by \begin{equation}\label{10} [Q,Q{'}]^{\bf g^{*}}_{X_{0}}=[Q,Q{'}]^{\bf g^{*}}+(-1)^{k+1}(k-1)Q\wedge i_{X_{0}}Q{'}-(k{'}-1)i_{X_{0}}Q\wedge Q{'}, \end{equation} $\forall~Q\in\wedge^{k}{\bf g^{*}}$,$Q'\in\wedge^{k'}{\bf g^{*}}$; with the properties \eqref{6}-\eqref{8} similar to $[ , ]^{{\bf{g}}}_{\phi_{0}}$. \begin{remark}\cite{Iglesias}: In the above definition, $X_{0}$ and $\phi_{0}$ are 1-cocycles on ${\bf{g}}^{*}$ and ${\bf{g}}$, respectively, i.e., we must have \begin{equation}\label{11} d_{*}X_{0}=0, \end{equation} \begin{equation}\label{12} d\phi_{0}=0. 
\end{equation} \end{remark} \begin{remark}: In the case of $\phi_{0}=0$ and $X_{0}=0$, Definition 1 recovers the concept of a Lie bialgebra \cite{Drin}, that is, a pair of dual Lie algebras $({\bf{g}},{\bf{g}}^{*})$ such that relation \eqref{1} reduces to the following one \begin{equation}\label{13} d_{*}[X,Y]^{{\bf{g}}}=[X,d_{*}Y]^{{\bf{g}}}-[Y,d_{*}X]^{{\bf{g}}}. \end{equation} \end{remark} For the above case, there is a correspondence between the Lie bialgebra $({\bf{g}},{\bf{g}}^{*})$ and the Manin triple $({\bf{g}}\oplus {\bf{g}}^{*}, {\bf{g}},{\bf{g}}^{*})$ such that the direct sum ${\bf{g}}\oplus {\bf{g}}^{*}$ is a Lie algebra when ${\bf{g}}$ and ${\bf{g}}^{*}$ are isotropic subspaces of ${\bf{g}}\oplus {\bf{g}}^{*}$ with respect to the $ad$-invariant symmetric pairing \cite{Drin2}. But, for the Jacobi-Lie bialgebra $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ in the sense of Tan and Liu \cite{TL}, we have the following bilinear skew-symmetric bracket on the space ${\bf{g}}\oplus {\bf{g}}^{*}$\\ ~~~~$[X\oplus \zeta,Y\oplus \eta]^{{\bf{g}}\oplus {\bf{g}}^{*}}=([X,Y]^{{\bf{g}}}+({\cal{L}}_{*X_{0}})_{\zeta}Y-({\cal{L}}_{*X_{0}})_{\eta}X-\frac{1}{2}(\zeta(Y)-\eta(X))X_{0})$\\ \begin{equation}\label{14} \oplus ([\zeta,\eta]^{{\bf{g}}^{*}}+({\cal{L}}_{\phi_{0}})_{X}\eta-({\cal{L}}_{\phi_{0}})_{Y}\zeta+\frac{1}{2}(\zeta(Y)-\eta(X))\phi_{0}), \end{equation} \vspace{2mm} $\forall X,Y \in {\bf{g}}$ and $\zeta,\eta \in {\bf{g}}^{*}$; such that the Lie derivatives ${\cal{L}}_{*X_{0}}$ (resp. ${\cal{L}}_{\phi_{0}}$) of ${\bf{g}^{*}}$ (resp. ${\bf{g}}$) on ${\bf{g}}$ (resp. ${\bf{g^{*}}}$) are defined as follows\footnote{For the general definition of the differential and the Lie derivative associated with a 1-cocycle, one can see \cite{Iglesias,IM,LLMP}.} \begin{equation}\label{15} \hspace{-1.4cm}\forall w\in \wedge^{k} {\bf g^{*}} , X\in{\bf g} ~~~~~~~ ({\cal{L}}_{\phi_{0}})_{X} w = (d_{\phi_{0}} \circ i_{X}+i_{X} \circ d_{\phi_{0}}) w, \end{equation} and \begin{equation}\label{17} \forall~\xi\in{\bf g^{*}} ~~~~~~~~~~ ({\cal{L}}_{*X_{0}})_\xi {P} = (d_{*X_{0}} \circ i_{\xi}+i_{\xi} \circ d_{*X_{0}}) P, \end{equation} where, for $P\in \wedge^{k} {\bf g}$ (resp. $w\in \wedge^{k} {\bf g^{*}}$), the $\phi_{0}$ (resp. $X_{0}$)-Lie derivatives are defined by the $\phi_{0}$ (resp. $X_{0}$)-Schouten-Nijenhuis brackets in the following forms \begin{equation}\label{16} ({\cal{L}}_{\phi_{0}})_{X} P = [X,P]^{{\bf{g}}}_{\phi_{0}}, \end{equation} and \begin{equation}\label{18} ({\cal{L}}_{*X_{0}})_{\xi} w = [\xi,w]^{{\bf{g}}^{*}}_{X_{0}}. \end{equation} In general, $({\bf{g}}\oplus {\bf{g}}^{*}, [,]^{{\bf{g}}\oplus {\bf{g}}^{*}})$ is not a Lie algebra, i.e., the Jacobi identity is not satisfied on ${\bf{g}}\oplus {\bf{g}}^{*}$ \cite{Iglesias,TL}. Now, choosing the bases of the Lie algebras ${\bf{g}}$ and ${\bf{g}}^{*}$ as $\{X_{i}\}$ and $\{\tilde{X}^{i}\}$, respectively, we try to express the above definitions in terms of structure constants.
We have \begin{equation}\label{19} [X_{i},X_{j}]={f_{ij}\hspace{0cm}}^{k} X_{k}\hspace{1mm} , \hspace{1mm}[\tilde{X}^{i},\tilde{X}^{j}]={{\tilde{f}}^{ij}\hspace{0cm}}_{k} {\tilde{X}}^{k}, \end{equation} where ${f_{ij}\hspace{0cm}}^{k}$ and ${{\tilde{f}}^{ij}\hspace{0cm}}_{k}$ are the structure constants of the Lie algebras ${\bf{g}}$ and ${\bf{g}}^{*}$, respectively, such that they satisfy the following Jacobi identities \begin{equation}\label{20} {f}_{ij}\hspace{0cm}^k{{f}_{km}}\hspace{0cm}^{n}+ {f}_{ik}\hspace{0cm}^n{{f}_{mj}}\hspace{0cm}^{k} + {f}_{jk}\hspace{0cm}^n{{f}_{im}}\hspace{0cm}^{k}=0, \end{equation} \begin{equation}\label{21} {\tilde{f}}^{ij}\hspace{0cm}_k{\tilde{f}^{km}}\hspace{0cm}_{n}+ {\tilde{f}}^{im}\hspace{0cm}_k{\tilde{f}^{jk}}\hspace{0cm}_{n} + {\tilde{f}}^{jm}\hspace{0cm}_k{\tilde{f}^{ki}}\hspace{0cm}_{n}=0. \end{equation} Furthermore, according to duality between ${\bf{g}}$ and ${\bf{g}}^{*}$ we have \begin{equation}\label{22} <X_{i},\tilde{X}^{j}> = {\delta_{i}}\hspace{0cm}^{j}. \end{equation} On the other hand, we know that for the Lie bialgebras by choosing \cite{RHR} \begin{equation}\label{23} d_{*}X_{i}=-\frac{1}{2} {\tilde{f}^{jk}}\hspace{0cm}_{i} X_{j}\wedge X_{k}, \end{equation} the relation \eqref{13} can be rewritten in terms of ${f_{ij}\hspace{0cm}}^{k}$ and ${{\tilde{f}}^{ij}\hspace{0cm}}_{k}$ as the following mixed-Jacobi identities \begin{equation}\label{24} {f}_{ij}\hspace{0cm}^k{\tilde{f}^{mn}}\hspace{0cm}_{k}= {f}_{ik}\hspace{0cm}^m{\tilde{f}^{kn}}\hspace{0cm}_{j} + {f}_{ik}\hspace{0cm}^n{\tilde{f}^{mk}}\hspace{0cm}_{j}+ {f}_{kj}\hspace{0cm}^m{\tilde{f}^{kn}}\hspace{0cm}_{i}+ {f}_{kj}\hspace{0cm}^n{\tilde{f}^{mk}}\hspace{0cm}_{i}, \end{equation} and using the $ad$-invariant symmetric bilinear form on the Manin triple of Lie bialgebras ${\bf{g}}\oplus {\bf{g}}^{*}$, one can find the following commutation relation \cite{Drin} \begin{equation}\label{25} [X_i , \tilde{X}^j] ={\tilde{f}^{jk}}\hspace{0cm}_{i} X_k +{f}_{ki}\hspace{0cm}^{j} \tilde{X}^k, \end{equation} where the relation \eqref{24} together with \eqref{20} and \eqref{21} are the Jacobi identities on the Lie algebra ${\bf{g}} \oplus {\bf{g}}^{*}$. Now, for Jacobi-Lie bialgebra \eqref{1}-\eqref{3} and \eqref{11}-\eqref{12} one can also apply the relation \eqref{23} as Chevalley-Eilenberg differential. 
In this way, expanding $X_{0}\in {\bf{g}}$ and $\phi_{0}\in {\bf{g}}^{*}$ in terms of the basis of the Lie algebras ${\bf{g}}$ and ${\bf{g}}^{*}$ \begin{equation}\label{26} X_{0}={\alpha}^{i} X_{i}~~~~~~~,~~~~~~~~\phi_{0}={\beta}_{j}{\tilde{X}}^{j}, \end{equation} and using \eqref{4}, \eqref{19} and \eqref{23}, after some calculations, the relations \eqref{1}-\eqref{3} and \eqref{11}-\eqref{12} can be rewritten as follows, respectively, $$ \hspace{-.3cm}{f}_{ij}\hspace{0cm}^k{\tilde{f}^{mn}}\hspace{0cm}_{k}-{f}_{ik}\hspace{0cm}^m{\tilde{f}^{kn}}\hspace{0cm}_{j} -\\ {f}_{ik}\hspace{0cm}^n{\tilde{f}^{mk}}\hspace{0cm}_{j}-{f}_{kj}\hspace{0cm}^m{\tilde{f}^{kn}}\hspace{0cm}_{i}-\\ {f}_{kj}\hspace{0cm}^n{\tilde{f}^{mk}}\hspace{0cm}_{i}+\beta_{i}{\tilde{f}^{mn}}\hspace{0cm}_{j}-\beta_{j}{\tilde{f}^{mn}}\hspace{0cm}_{i}+\alpha^{m}{f}_{ij}\hspace{0cm}^n-\alpha^{n}{f}_{ij}\hspace{0cm}^m\\ $$ \begin{equation}\label{27} +(\alpha^{k}{f}_{ik}\hspace{0cm}^m-\alpha^{m}\beta_{i})\delta_{j}\hspace{0cm}^{n} -(\alpha^{k}{f}_{jk}\hspace{0cm}^m-\alpha^{m}\beta_{j})\delta_{i}\hspace{0cm}^{n}-(\alpha^{k}{f}_{ik}\hspace{0cm}^n-\alpha^{n}\beta_{i})\delta_{j}\hspace{0cm}^{m} +(\alpha^{k}{f}_{jk}\hspace{0cm}^n-\alpha^{n}\beta_{j})\delta_{i}\hspace{0cm}^{m}=0, \end{equation} \begin{equation}\label{28} \alpha^{i}\beta_{i}=0, \end{equation} \begin{equation}\label{29} \alpha^{n}{f}_{ni}\hspace{0cm}^{m}-\beta_{n}{\tilde{f}^{nm}}\hspace{0cm}_{i}=0, \end{equation} \begin{equation}\label{30} \alpha^{i}{\tilde{f}^{mn}}\hspace{0cm}_{i}=0, \end{equation} \begin{equation}\label{31} \beta_{i}{f}_{mn}\hspace{0cm}^{i}=0. \end{equation} Furthermore, from \eqref{14}-\eqref{17} and \eqref{19},\eqref{22} after some calculation, one can find the commutation relations between $\{X_{i}\}$ and $\{\tilde{X}^{j}\}$ as follows \begin{equation}\label{32} [X_i , \tilde{X}^j] =({\tilde{f}^{jk}}\hspace{0cm}_{i}+\frac{1}{2}\alpha^{k}\delta_{i}\hspace{0cm}^{j}-\alpha^{j}\delta_{i}\hspace{0cm}^{k})X_k +({f}_{ki}\hspace{0cm}^{j}-\frac{1}{2}\beta_{k}\delta_{i}\hspace{0cm}^{j}+\beta_{i}\delta_{k}\hspace{0cm}^{j}) \tilde{X}^k. \end{equation} Note that the relations \eqref{20}-\eqref{21} and \eqref{27}-\eqref{31} and \eqref{32} are the algebraic definitions for the Jacobi-Lie bialgebras in terms of basis $\{X_{i}\}$ and $\{\tilde{X}^{j}\}$ and in this sense, these are a generalization of the ordinary Lie bialgebras \eqref{19}-\eqref{21} and \eqref{24}-\eqref{25}\footnote{Note that relations \eqref{27}-\eqref{32} for $\alpha^{i}=\beta_{i}=0$, reduce to \eqref{24} and \eqref{25}.}. 
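As a quick check of how the 1-cocycle conditions take the component forms \eqref{30} and \eqref{31}, note that (assuming the convention $d\tilde{X}^{i}=-\frac{1}{2}{f}_{jk}\hspace{0cm}^{i}\,\tilde{X}^{j}\wedge \tilde{X}^{k}$, dual to \eqref{23}, which is not written explicitly above)
$$ d_{*}X_{0}=\alpha^{i}\,d_{*}X_{i}=-\frac{1}{2}\,\alpha^{i}{\tilde{f}^{mn}}\hspace{0cm}_{i}\,X_{m}\wedge X_{n}=0 \;\;\Longleftrightarrow\;\; \alpha^{i}{\tilde{f}^{mn}}\hspace{0cm}_{i}=0, $$
$$ d\phi_{0}=\beta_{i}\,d\tilde{X}^{i}=-\frac{1}{2}\,\beta_{i}{f}_{mn}\hspace{0cm}^{i}\,\tilde{X}^{m}\wedge \tilde{X}^{n}=0 \;\;\Longleftrightarrow\;\; \beta_{i}{f}_{mn}\hspace{0cm}^{i}=0, $$
which are precisely \eqref{30} and \eqref{31}.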
\begin{definition} A Jacobi-Lie bialgebra is a pair $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ where $({\bf{g}},[,]^{{\bf{g}}})$ is a real Lie algebra of finite dimension with the Lie bracket $[,]^{\bf g}$ and the basis $\{X_{i}\}$ and Lie algebra ${\bf{g}}^{*}$ (where it is dual space of ${\bf{g}}$) with Lie bracket $[,]^{{\bf{g}^{*}}}$ and basis $\{\tilde{X}^{i}\}$, such that $X_{0}={\alpha}^{i} X_{i}\in{\bf{g}}$ and $\phi_{0}={\beta}_{j}{\tilde{X}}^{j}\in{\bf{g}}^{*}$ are 1-cocycles on ${\bf{g}}^{*}$ and ${\bf{g}}$, respectively, i.e., \begin{equation*}\label{19'} [X_{i},X_{j}]={f_{ij}\hspace{0cm}}^{k} X_{k}\hspace{1mm} , \hspace{1mm}[\tilde{X}^{i},\tilde{X}^{j}]={{\tilde{f}}^{ij}\hspace{0cm}}_{k} {\tilde{X}}^{k}, \end{equation*} \begin{equation*}\label{30'} \alpha^{i}{\tilde{f}^{mn}}\hspace{0cm}_{i}=0~,~\beta_{i}{f}_{mn}\hspace{0cm}^{i}=0, \end{equation*} and we have $$ \hspace{-.3cm}{f}_{ij}\hspace{0cm}^k{\tilde{f}^{mn}}\hspace{0cm}_{k}-{f}_{ik}\hspace{0cm}^m{\tilde{f}^{kn}}\hspace{0cm}_{j} -\\ {f}_{ik}\hspace{0cm}^n{\tilde{f}^{mk}}\hspace{0cm}_{j}-{f}_{kj}\hspace{0cm}^m{\tilde{f}^{kn}}\hspace{0cm}_{i}-\\ {f}_{kj}\hspace{0cm}^n{\tilde{f}^{mk}}\hspace{0cm}_{i}+\beta_{i}{\tilde{f}^{mn}}\hspace{0cm}_{j}-\beta_{j}{\tilde{f}^{mn}}\hspace{0cm}_{i}+\alpha^{m}{f}_{ij}\hspace{0cm}^n-\alpha^{n}{f}_{ij}\hspace{0cm}^m\\ $$ \begin{equation*}\label{27'} +(\alpha^{k}{f}_{ik}\hspace{0cm}^m-\alpha^{m}\beta_{i})\delta_{j}\hspace{0cm}^{n} -(\alpha^{k}{f}_{jk}\hspace{0cm}^m-\alpha^{m}\beta_{j})\delta_{i}\hspace{0cm}^{n}-(\alpha^{k}{f}_{ik}\hspace{0cm}^n-\alpha^{n}\beta_{i})\delta_{j}\hspace{0cm}^{m} +(\alpha^{k}{f}_{jk}\hspace{0cm}^n-\alpha^{n}\beta_{j})\delta_{i}\hspace{0cm}^{m}=0, \end{equation*} \begin{equation*}\label{28'} \alpha^{i}\beta_{i}=0, \end{equation*} \begin{equation*}\label{29'} \alpha^{n}{f}_{ni}\hspace{0cm}^{m}-\beta_{n}{\tilde{f}^{nm}}\hspace{0cm}_{i}=0, \end{equation*}\\ where the commutation relations between $\{X_{i}\}$ and $\{\tilde{X}^{j}\}$ are as follows \begin{equation*}\label{32'} [X_i , \tilde{X}^j] =({\tilde{f}^{jk}}\hspace{0cm}_{i}+\frac{1}{2}\alpha^{k}\delta_{i}\hspace{0cm}^{j}-\alpha^{j}\delta_{i}\hspace{0cm}^{k})X_k +({f}_{ki}\hspace{0cm}^{j}-\frac{1}{2}\beta_{k}\delta_{i}\hspace{0cm}^{j}+\beta_{i}\delta_{k}\hspace{0cm}^{j}) \tilde{X}^k. \end{equation*} \end{definition} These relations can be applied for finding and classifying the Jacobi-Lie bialgebras in low dimensions similar to the Lie bialgebras and Lie super bialgebras in low dimensions \cite{JR,ER}. To this aim, we prove the following proposition. \begin{proposition} Two Jacobi-Lie bialgebras $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ and $(({\bf{g}},{\phi}'_{0}),({{\bf{g}}^{*}}',{X}'_{0}))$ are equivalent, if there exist $A \in Aut({\bf{g}})$ (automorphism group of the Lie algebra {\bf g}) such that \begin{equation}\label{33} {{{\tilde{f}^{ij}}\hspace{0cm}_n}_{(\bf { {{\bf{g}}^{*}}'})}} =(A^{-t})^i\hspace{0cm}_{k}{{{\tilde{f}^{kl}}\hspace{0cm}_m}_{(\bf { {\bf{g}}^{*}})}} (A^{-t})^j\hspace{0cm}_{l} (A^{t})^m\hspace{0cm}_{n}, \end{equation} \begin{equation}\label{34} {\alpha}'^{i}=(A^{-t})^i\hspace{0cm}_{m}{\alpha^{m}}, \end{equation} \begin{equation}\label{35} {\beta}'_{i}=A_{i}\hspace{0cm}^{m}{\beta_{m}}, \end{equation} where $A_{m}\hspace{0cm}^{n}$s are the elements of the automorphism matrix $A$ for the Lie algebra $\bf{g}$ and $X'_{0}={\alpha'^{i}}X_{i}$ , $\phi'_{0}=\beta'_{j}\tilde{X}'^{j}$ so that $\{\tilde{X}'^{j}\}$ is the basis of ${{\bf{g}}^{*}}'$. 
\end{proposition} Proof: From the definition of an automorphism of the Lie algebra, $A:\bf{g}\rightarrow\bf{g}$, in terms of the basis $X_{j}$ we have \begin{equation}\label{36} A{X}_{i}=A_{i}\hspace{0cm}^{j}{X_{j}}, \end{equation} where the $A_{i}\hspace{0cm}^{j}$s satisfy the following relation \begin{equation}\label{37} A_{i}\hspace{0cm}^{m} f_{mn}\hspace{0cm}^{k} A_{j}\hspace{0cm}^{n}= f_{ij}\hspace{0cm}^{l} A_{l}\hspace{0cm}^{k}. \end{equation} Now, applying \eqref{37} in \eqref{27}, one can obtain the relations \eqref{33}-\eqref{35}, where ${\alpha}'^{i},{\beta}'_{i}$ and ${{{\tilde{f}^{ij}}\hspace{0cm}_n}_{(\bf {{{\bf{g}}^{*}}'})}}$ satisfy the relations \eqref{27}-\eqref{31}; this shows that $(({\bf{g}},{\phi}'_{0}),({{\bf{g}}^{*}}',{X}'_{0}))$ is also a Jacobi-Lie bialgebra and is equivalent to the Jacobi-Lie bialgebra $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$.\\ The above proposition, in the case of $\phi_{0}=0$ and $X_{0}=0$, recovers the equivalence between the Lie bialgebras $({\bf{g}},\delta)$ and $({\bf{g}},\delta^{'})$ \cite{RHR}, i.e., the relation \begin{equation}\label{38} \delta^{'}=(A\otimes A)\circ\delta\circ A^{-1}. \end{equation} In this way, one can apply Definition 4 and Proposition 5 to obtain and classify Jacobi-Lie bialgebras directly. \section{\bf Calculation of Jacobi-Lie bialgebras using adjoint representation } In this section, using the adjoint representations of the Lie algebras ${\bf g}$ and ${\bf{g^{*}}}$ in \eqref{21} and \eqref{27}-\eqref{31}, we provide a procedure similar to that of \cite{ER} for calculating and classifying low dimensional Jacobi-Lie bialgebras. Because of the tensorial form of the mentioned relations, working with them is very difficult; therefore, we suggest writing these equations in matrix form using the following adjoint representations for the Lie algebras ${\bf g}$ and ${\bf{g^{*}}}$ \begin{equation}\label{39} ({\cal{X}}_{i})_{j}\hspace{0cm}^{k}=-f_{ij}\hspace{0cm}^{k}~~~~~,~~~~~({\cal{Y}}^{k})_{ij}=-f_{ij}\hspace{0cm}^{k}, \end{equation} \begin{equation}\label{40} ({\tilde{\cal{X}}}^{i})^{j}\hspace{0cm}_{k}=-{\tilde{f}}^{ij}\hspace{0cm}_{k}~~~~~,~~~~~({\tilde{\cal{Y}}}_{k})^{ij}=-{\tilde{f}}^{ij}\hspace{0cm}_{k}. \end{equation} Then, the matrix forms of the relations \eqref{21} and \eqref{27}-\eqref{31} become as follows, respectively, \begin{equation}\label{41} ({\tilde{\cal X}}^i)^j_{\; \;k}{\tilde{\cal X}}^k + {\tilde{\cal X}}^i {\tilde{\cal X}}^j - {\tilde{\cal X}}^j {\tilde{\cal X}}^i =0, \end{equation} \begin{equation}\label{42} ({\cal{D}}^{mn})_{ij} + {\cal{C}}_{i}\hspace{0cm}^{m}\delta_{j}\hspace{0cm}^{n} - {\cal{C}}_{j}\hspace{0cm}^{m}\delta_{i}\hspace{0cm}^{n}- {\cal{C}}_{i}\hspace{0cm}^{n}\delta_{j}\hspace{0cm}^{m}+ {\cal{C}}_{j}\hspace{0cm}^{n}\delta_{i}\hspace{0cm}^{m}=0, \end{equation} \begin{equation}\label{43} Tr({\cal{A}}{\cal{B}}^{t})=0, \end{equation} \begin{equation}\label{44} {\alpha}^{i}({\cal{X}}_{i})^{t}-{\beta}_{i} {\tilde{\cal{X}}}^{i}=0, \end{equation} \begin{equation}\label{45} {\alpha}^{i}{\tilde{\cal{Y}}}_{i}=0, \end{equation} \begin{equation}\label{46} {\beta}_{i}{\cal{Y}}^{i}=0, \end{equation} where the matrices ${\cal {C}}$ and ${\cal{D}}^{mn}$ have the following forms $$ {\cal{C}}=\alpha^{k}{\cal{X}}_{k}-{\cal{B}}{\cal{A}}^{t}, $$ \begin{equation}\label{47} {\cal{D}}^{mn}=({\tilde{\cal{X}}}^{m})^{n}\hspace{0cm}_{k} {\cal{Y}}^{k}+{\cal{Y}}^{m}{\tilde{\cal{X}}}^{n}-{\cal{Y}}^{n}{\tilde{\cal{X}}}^{m} +({\tilde{\cal{X}}}^{n})^{t}{\cal{Y}}^{m}-({{\tilde{\cal{X}}}^{m}})^{t}{\cal{Y}}^{n}+{\cal{B}}({\cal{\tilde F}}^{mn})^{t}-{\cal{\tilde F}}^{mn}{\cal{B}}^{t} +\alpha^{n}{\cal{Y}}^{m}-\alpha^{m}{\cal{Y}}^{n}, \end{equation} and $\cal{A,B}$ and ${\tilde F}$ represent the following column matrices (where $d$ is the dimension of the Lie algebras ${\bf g}$ and ${\bf g}^{*}$): \begin{equation}\label{48} {\cal{A}}=\left(\begin{array}{c} \alpha_{1}\\ \alpha_{2}\\ .\\ .\\ .\\ \alpha_{d}\\ \end{array} \right),\hspace{1cm} {\cal{B}}=\left(\begin{array}{c} \beta_{1}\\ \beta_{2}\\ .\\ .\\ .\\ \beta_{d}\\ \end{array} \right),\hspace{1cm} {\cal{\tilde F}}^{mn}=\left(\begin{array}{c} \tilde{f}^{mn}\hspace{0cm}_{1}\\ \tilde{f}^{mn}\hspace{0cm}_{2}\\ .\\ .\\ .\\ \tilde{f}^{mn}\hspace{0cm}_{d}\\ \end{array} \right). \end{equation} Now, by substituting the structure constants of the Lie algebra ${\bf{g}}$ into the matrix equations \eqref{42}-\eqref{46} and solving these equations simultaneously using \eqref{41}, we obtain the structure constants of the dual Lie algebras ${\bf{g^{*}}}$ and the matrices ${\cal {A,B}}$ such that $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ is a Jacobi-Lie bialgebra. By this method, we will classify two and three dimensional Jacobi-Lie bialgebras. We perform this work in the following three steps. \bigskip {\bf {\small Step 1:}}~~{\it Solving the equations \eqref{41}-\eqref{46} and determining the Lie algebras $\bf g'$ which are isomorphic to the dual solutions ${\bf {g^{*}}}$} \smallskip In solving the matrix equations \eqref{41}-\eqref{46} for the matrices ${\tilde{\cal X}}^i$, ${\cal{A}}$ and ${\cal{B}}$, some structure constants of ${\bf {g^{*}}}$ and some coefficients $\alpha^{i}$ and $\beta_{i}$ turn out to be zero, some remain unknown, and some are obtained in terms of each other. In order to know whether ${\bf {g^{*}}}$ is one of the known Lie algebras of the classification table, or is isomorphic to one of them, we must use the following isomorphism relation between the obtained Lie algebras ${\bf {g^{*}}}$ and one of the known Lie algebras of the classification table, e.g. $\bf g'$. Applying the following transformation for a change of basis of ${\bf {g^{*}}}$, we have \begin{equation}\label{49} \tilde{X}^{'\;i}=C^i\hspace{0cm}_{j}\tilde{X}^j,\hspace{20mm} [\tilde{X}^{'\;i} ,\tilde{X}^{'\;j}] ={\tilde{f}^{'\;ij}}\hspace{0cm}_{k} \tilde{X}^{'\;k}. \end{equation} Then, we obtain the following matrix equations for the isomorphism \begin{equation}\label{50} C\;(C^i\hspace{0cm}_{k}\;\tilde{\cal X}^k_{\bf {(g^{*})}})={\cal X}^{i}_{(\bf g')}\;C, \end{equation} where ${\cal X}^{i}_{(\bf g')}$ are the adjoint matrices of the known Lie algebra $\bf g'$ of the classification table. Solving equation \eqref{50} with the condition $det C\neq 0$, we obtain some extra conditions on the ${{\tilde{f}^{kl}}_{(\bf {g^{*}})}}\hspace{1mm}_m$s which were obtained from \eqref{41}-\eqref{46}. \bigskip {\bf {\small Step 2:}}~~{\it Obtaining the general form of the transformation matrices $B:{\bf g}'\longrightarrow {\bf g}'.i$ such that $(({\bf g},\phi'_{0}), ({\bf g}'.i,X'_{0}))$ is a Jacobi-Lie bialgebra} \smallskip As the second step, we transform the Jacobi-Lie bialgebra $(({\bf g},\phi_{0}), ({\bf g}^{*},X_{0}))$ (where in the Lie algebra $\bf{g}^{*}$ we impose the extra conditions obtained in step one) to the Jacobi-Lie bialgebra $(({\bf g},\phi'_{0}), ({\bf g}'.i,X'_{0}))$ (where ${\bf g}'.i$ is isomorphic, as a Lie algebra, to ${\bf g}'$) using an automorphism $A$ of the Lie algebra ${\bf g}$.
As the inner product \eqref{22} is invariant, we have $A^{-t}:{\bf {g}^{*}}\longrightarrow {\bf g}'.i$, \begin{equation}\label{51} X'_i=A_i\hspace{0cm}^{k} X_k,\hspace{10mm}\tilde{X}^{'\;j}=(A^{-t})^j\hspace{0cm}_{l}\tilde{X}^l,\hspace{10mm}<X'_i , \tilde{X}^{'\;j}> = \delta_i\hspace{0cm}^{j}, \end{equation} where $A^{-t}$ denotes the inverse transpose of a matrix $A\in Aut(\bf g)$. Thus, we have the following relation \begin{equation}\label{52} (A^{-t})^i\hspace{0cm}_{k}{{\tilde{f}^{kl}}_{(\bf {g^{*}})}}\hspace{1mm}_m (A^{-t})^j\hspace{0cm}_{l} = {{{{f}}^{ij}}_{(\bf {g}'.i)}}\hspace{0.5mm}_n (A^{-t})^n\hspace{0cm}_{m}.\\ \end{equation} Now, in order to obtain the Jacobi-Lie bialgebras $(({\bf g},\phi'_{0}), ({\bf g}'.i,X'_{0}))$, we must obtain the Lie algebras ${\bf g}'.i$ or the transformations \\ $B:{\bf g}'\longrightarrow {\bf g}'.i$ such that \begin{equation}\label{53} B^i\hspace{0cm}_{k}{{{f}^{kl}}_{(\bf { g}')}}\hspace{0.5mm}_m B^j\hspace{0cm}_{l} = {{{{f}}^{ij}}_ {(\bf {g}'.i)}}\hspace{0.5mm}_n B^n\hspace{0cm}_{m}.\\ \end{equation} To this end, it is enough to eliminate ${{{{f}}^{ij}}_ {(\bf {g}'.i)}}\hspace{1mm}_n $ between \eqref{52} and \eqref{53}. Then, we obtain the following matrix equation for $B$ \begin{equation}\label{54} (A^{-t})^i\hspace{0cm}_{m}\tilde{\cal X}^{t\;m}_{\bf {(g^{*})}}A^{-1} =(B^{t} A)^{-1}(B^i\hspace{0cm}_{k}{{\cal X}^{t\;k}}_{({\bf g}')}) B^{t}.\\ \end{equation} By solving \eqref{54}, we obtain the general form of the matrix $B$ with the condition $\det B \neq 0$. In solving equation \eqref{54}, one may also obtain conditions on the elements of the matrix $A$; only those conditions for which $\det A\neq 0$ should be retained. \bigskip {\bf {\small Step 3:}}~~{\it Obtaining the non-equivalent Jacobi-Lie bialgebras} \smallskip Having solved \eqref{54}, we obtain the general form of the matrix $B$, whose elements are written in terms of the elements of the matrices $A$ and $C$, the structure constants ${{\tilde{f}^{ij}}_{(\bf {g^{*}})}}\hspace{0.5mm}_k $, and some of the $\alpha^{i}$ and $\beta_{i}$. Now, by substituting $B$ in \eqref{53}, we obtain the structure constants ${{{{f}}^{ij}}_ {(\bf {g}'.i)}}\hspace{0.5mm}_n $ of the Lie algebra ${\bf g}'.i$ in terms of the elements of the matrices $A$ and $C$ and some of the ${{\tilde{f}^{ij}}_{(\bf {g^{*}})}}\hspace{1mm}_k$; using \eqref{34} and \eqref{35}, we also obtain $\alpha'^{i}$ and $\beta'_{i}$, i.e., the column matrices ${\cal A'}$ and ${\cal B'}$. Then, we check whether it is possible to set some of the structure constants ${{{{f}}^{ij}}_ {(\bf {g}'.i)}}\hspace{0.5mm}_n$ equal to each other or to $\pm1$ such that $\det A\neq0$, $\det B\neq0$ and $\det C\neq0$. In this way, we obtain matrices $B_1$, $B_2$,... and also ${\cal A''}$ and ${\cal B''}$,... . Note that in obtaining the $B_i$ we impose the condition $B{B_i}^{-1}\in Aut^{t}(\bf g)$ (where $Aut^{t}(\bf g)$ is the transpose of $Aut(\bf g)$); if this condition is not satisfied, we cannot impose the corresponding values on the structure constants, because $B$ and $B_i$ are then not equivalent. \smallskip Now, using the isomorphism matrices $B_1$, $B_2$, ... together with ${\cal A'}$, ${\cal A''}$ and ${\cal B'}$, ${\cal B''}$, ..., we can obtain the Jacobi-Lie bialgebras $(({\bf g},\phi'_{0}), ({\bf g}'.i,X'_{0}))$, $(({\bf g},\phi''_{0}), ({\bf g}'.ii,X''_{0}))$,... . But there remains a question: which of these Jacobi-Lie bialgebras are equivalent? To answer this question, we use the matrix form of the relation \eqref{33}.
Consider the two Jacobi-Lie bialgebras $(({\bf g},\phi'_{0}), ({\bf g}'.i,X'_{0}))$ and $(({\bf g},\phi''_{0}), ({\bf g}'.ii,X''_{0}))$. Using \begin{equation}\label{55} A(X_i)=A_i\hspace{0cm}^{j} X_j, \end{equation} the relation \eqref{33} takes the following matrix form \begin{equation}\label{56} A^{t}((A^{t})^i\hspace{0cm}_{k}{{\cal X}_{({\bf g}'.i)}}^k) = {{{\cal X}}_{({\bf g}'.ii)}}^i A^{t}.\\ \end{equation} On the other hand, the transformation matrix between ${\bf g}'.i$ and ${\bf g}'.ii$ is $B_2B_1^{-1}$ if $B_1:{\bf g}'\longrightarrow {\bf g}'.i$ and $B_2:{\bf g}'\longrightarrow {\bf g}'.ii$; then we have \begin{equation}\label{57} (B_2B_1^{-1}) ((B_2B_1^{-1})^i\hspace{0cm}_{k}{{\cal X}_{({\bf g}'.i)}}^k) = {{{\cal X}}_{({\bf g}'.ii)}}^i (B_2B_1^{-1}).\\ \end{equation} A comparison of \eqref{57} with \eqref{56} reveals that if $B_2B_1^{-1}=A^{t}$ for some $A\in Aut({\bf g})$ such that, in addition, ${\cal A''}=A^{-t}{\cal A'}$ and ${\cal B''}=A{\cal B'}$, then the Jacobi-Lie bialgebras $(({\bf g},\phi'_{0}), ({\bf g}'.i,X'_{0}))$ and $(({\bf g},\phi''_{0}), ({\bf g}'.ii,X''_{0}))$ are equivalent. In this way, we obtain nonequivalent classes of the $B_i$, ${\cal A}$ and ${\cal B}$, and we keep only one representative of each class. Note that, in order to obtain and fix $\alpha^{i}$ and $\beta_{i}$, we must impose the conditions obtained for the elements ${{{{f}}^{ij}}_ {(\bf {g}'.i)}}$ and for the elements of the automorphism group in relations \eqref{34} and \eqref{35}, and then fix the elements of $\alpha'^{i}$ and $\beta'_{i}$ to the constant values ($0,1,\dots$) used in those relations. In this manner, we obtain $\phi'_{0}$ and $X'_{0}$, and hence the Jacobi-Lie bialgebras $(({\bf g},\phi'_{0}), ({\bf g}'.i,X'_{0}))$, so that all Jacobi-Lie bialgebras can be classified. In the next section, we apply this formulation to classify real two and three dimensional Jacobi-Lie bialgebras. \vspace{5mm} \section{\bf Classification of real two and three dimensional Jacobi-Lie bialgebras } In this section, we use the classification of real two and three dimensional Lie algebras and their automorphism groups. For the real two dimensional Lie algebras we use the classification of Ref.~\cite{Patera} (table 1), and for the real three dimensional case the Bianchi classification \cite{LL} (table 2).
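In the computations reported below, the commutation relations listed in tables 1 and 2 are encoded as structure-constant arrays. As a simple illustration of how the adjoint matrices \eqref{39} are formed (the following numpy sketch and its 0-based array convention are ours and serve only as an illustration), one may build them for the Bianchi type $IX$ algebra of table 2 and check that they reproduce the Jacobi identity of ${\bf g}$, written in the same matrix form as \eqref{41} is for ${\bf g^{*}}$:
\begin{verbatim}
import numpy as np

n = 3
# structure constants fc[i, j, k] = f_ij^k of the Bianchi type IX algebra
# (0-based indices): [X_1, X_2] = X_3, [X_1, X_3] = -X_2, [X_2, X_3] = X_1
fc = np.zeros((n, n, n))
fc[0, 1, 2], fc[1, 0, 2] = 1.0, -1.0
fc[0, 2, 1], fc[2, 0, 1] = -1.0, 1.0
fc[1, 2, 0], fc[2, 1, 0] = 1.0, -1.0

# adjoint matrices of Eq. (39): (X_i)_j^k = -f_ij^k and (Y^k)_ij = -f_ij^k
X = [-fc[i, :, :] for i in range(n)]
Y = [-fc[:, :, k] for k in range(n)]

# Jacobi identity of g in the matrix form analogous to Eq. (41):
# (X_i)_j^k X_k + X_i X_j - X_j X_i = 0
for i in range(n):
    for j in range(n):
        jac = sum(X[i][j, k] * X[k] for k in range(n)) + X[i] @ X[j] - X[j] @ X[i]
        assert np.allclose(jac, 0.0)
print("adjoint matrices of type IX satisfy the Jacobi identity")
\end{verbatim}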
\begin{center} \begin{tabular}{l l l p{15mm} } \multicolumn{3}{c}{Table 1: \small Real two dimensional Lie algebras}\\ \hline \hline Lie Algebra & Commutation relations \\ \hline \hline \vspace{2mm} {\footnotesize$A_{1}$}& {\footnotesize $[X_{i},X_{j}]=0$}\\ \vspace{2mm} {\footnotesize $A_{2}$} & {\footnotesize $[X_{1},X_{2}]=X_{1}$}\\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{l l l p{15mm} } \multicolumn{3}{c}{Table 2: \small Real three dimensional Lie algebras}\\ \hline \hline Lie Algebra & Commutation relations & Comments\\ \hline \hline \vspace{2mm} {\footnotesize$I$} &{\footnotesize $[X_{i},X_{j}]=0$}&\\ \vspace{2mm} {\footnotesize$II$} &\footnotesize $[X_{2},X_{3}]=X_{1}$ &\\ \vspace{2mm} {\footnotesize$III$} &\footnotesize $[X_{1},X_{2}]=-(X_{2}+X_{3})$,\footnotesize$[X_{1},X_{3}]=-(X_{2}+X_{3})$&\\ \vspace{2mm} {\footnotesize$IV$} &\footnotesize $[X_{1},X_{2}]=-(X_{2}-X_{3})$,\footnotesize$[X_{1},X_{3}]=-X_{3}$&\\ \vspace{2mm} {\footnotesize$V$} &\footnotesize $[X_{1},X_{2}]=-X_{2}$,\footnotesize$[X_{1},X_{3}]=-X_{3}$&\\ \vspace{2mm} {\footnotesize$VI_{0}$}&\footnotesize $[X_{1},X_{3}]=X_{2}$,\footnotesize$[X_{2},X_{3}]=X_{1}$&\\ \vspace{2mm} {\footnotesize$VI_{a}$}&\footnotesize $[X_{1},X_{2}]=-(aX_{2}+X_{3})$,\footnotesize$[X_{1},X_{3}]=-(X_{2}+aX_{3})$&{ \footnotesize $ a\in\Re-\{1\},\;\;a>0$ } \\ \vspace{2mm} {\footnotesize$VII_{0}$}&\footnotesize $[X_{1},X_{3}]=-X_{2}$,\footnotesize$[X_{2},X_{3}]=X_{1}$&\\ \vspace{2mm} {\footnotesize$VII_{a}$}&\footnotesize $[X_{1},X_{2}]=-(aX_{2}-X_{3})$,\footnotesize$[X_{1},X_{3}]=-(X_{2}+aX_{3})$&{ \footnotesize $a\in\Re,\;\;a>0 $ } \\ \vspace{2mm} {\footnotesize$VIII$}&\footnotesize $[X_{1},X_{2}]=-X_{3}$,\footnotesize$[X_{1},X_{3}]=-X_{2}$,\footnotesize$[X_{2},X_{3}]=X_{1}$&\\ \vspace{2mm} {\footnotesize$IX$}&\footnotesize $[X_{1},X_{2}]=X_{3}$,\footnotesize$[X_{1},X_{3}]=-X_{2}$,\footnotesize$[X_{2},X_{3}]=X_{1}$&\\ \hline \end{tabular} \end{center} As mentioned in the previous section, for obtaining Jacobi-Lie bialgebras, automorphism groups are necessary. These automorphism groups have been calculated using the transformation \eqref{37}, or in the matrix form (with condition $det A\neq 0$)\cite{RHR}(see also \cite{F}) using the following relation \begin{equation}\label{58} A {\cal Y}^k A^{t} = {\cal Y}^{i} A_{i}\hspace{0cm}^{k}. \end{equation} The results are given in table 3. 
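The entries of table 3 can also be verified numerically against relation \eqref{37}. For instance, the following minimal sketch (again with our own 0-based array convention, intended only as an illustration) checks the type $II$ entry:
\begin{verbatim}
import numpy as np

n = 3
# structure constants of the Bianchi type II algebra, [X_2, X_3] = X_1
fc = np.zeros((n, n, n))
fc[1, 2, 0], fc[2, 1, 0] = 1.0, -1.0

def satisfies_37(A, fc):
    # relation (37): A_i^m f_mn^k A_j^n = f_ij^l A_l^k
    lhs = np.einsum('im,mnk,jn->ijk', A, fc, A)
    rhs = np.einsum('ijl,lk->ijk', fc, A)
    return np.allclose(lhs, rhs)

# candidate of the form listed in table 3 for type II, with generic a, b, c, d, e, f
a, b, c, d, e, f = np.random.randn(6)
A = np.array([[b * f - c * e, 0.0, 0.0],
              [a, b, c],
              [d, e, f]])
# relation (37) holds for this form; bf != ce additionally guarantees det A != 0
print(satisfies_37(A, fc))
\end{verbatim}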
\begin{center} \begin{tabular}{l l l} \multicolumn{3}{l}{Table 3: \small Automorphism groups of the real two and three dimensional Lie algebras}\\ \hline \hline Lie Algebra & Automorphism groups & Comments \\ \hline \vspace{2mm} {\footnotesize $ A_{1}$}& {\footnotesize $GL(2,R)$} &\\ \vspace{2mm} {\footnotesize $ A_{2}$}&{\footnotesize $\left( \begin{array}{cc} a & 0 \\ b & 1 \\ \end{array} \right)$} &~~~~~~~{\footnotesize $a\in\Re-\{0\}$} \\ \vspace{2mm} {\footnotesize $ I$}& {\footnotesize $GL(3,R)$} & \\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $\left( \begin{array}{ccc} bf-ce & 0 & 0 \\ a & b & c \\ d & e & f \end{array} \right)$} &~~~~~~~{\footnotesize $a,b,c,d,e,f\in\Re$, $bf\neq ce$} \\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $\left( \begin{array}{ccc} 1 & a & b \\ 0 & c & d \\ 0 & d & c \end{array} \right)$} &~~~~~~~{\footnotesize $a,b,c,d\in\Re$, $c\neq\pm d$} \\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $\left( \begin{array}{ccc} 1 & a & b \\ 0 & c & d \\ 0 & 0 & c \end{array} \right)$} &~~~~~~~{\footnotesize $a,b,d\in\Re$, $c\in\Re-\{0\}$} \\ \vspace{2mm} {\footnotesize $V$}&{\footnotesize $\left( \begin{array}{ccc} 1 & a & b \\ 0 & c & d \\ 0 & e & f \end{array} \right)$} &~~~~~~~{\footnotesize $a,b,c,d,e,f\in\Re$, $cf\neq ed$} \\ \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $\left( \begin{array}{ccc} a & b & 0 \\ b & a & 0 \\ c & d & 1 \end{array} \right)$},{\footnotesize $\left( \begin{array}{ccc} a & b & 0 \\ -b & -a & 0 \\ c & d & -1 \end{array} \right)$}&~~~~~~~{\footnotesize $a,b,c,d\in\Re$, $a\neq\pm b$} \\ \vspace{2mm} {\footnotesize $VI_{a}$}&{\footnotesize $\left( \begin{array}{ccc} 1 & b & c \\ 0 & d & e \\ 0 & e & d \end{array} \right)$} &~~~~~~~{\footnotesize $b,c,d,e\in\Re$, $d\neq\pm e$} \\ \vspace{2mm} {\footnotesize $VII_{0}$}&{\footnotesize $\left( \begin{array}{ccc} a & b & 0 \\ -b & a & 0 \\ c & d & 1 \end{array} \right)$},{\footnotesize $\left( \begin{array}{ccc} a & b & 0 \\ b & -a & 0 \\ c & d & -1 \end{array} \right)$} &~~~~~~~{\footnotesize $a,b,c,d\in\Re$, $a^{2}+b^{2}\neq 0$} \\ \vspace{2mm} {\footnotesize $VII_{a}$}&{\footnotesize $\left( \begin{array}{ccc} 1 & b & c \\ 0 & d & -e \\ 0 & e & d \end{array} \right)$} &~~~~~~~{\footnotesize $b,c,d,e\in\Re$, $d^{2}+e^{2}\neq 0$} \\ \vspace{2mm} {\footnotesize $VIII$}& {\footnotesize $SL(2,R)$} &\\ \vspace{2mm} {\footnotesize $IX$}& {\footnotesize $SO(3)$} &\\ \hline \end{tabular} \end{center} Now, using the method considered in section 3 and applying {\small MAPLE} program for solving our equations \eqref{41}-\eqref{46}, we classify real two and three dimensional Jacobi-Lie bialgebras. Let us investigate an example for explaining the method and steps mentioned in the previous section.\\ \subsection{\bf An example} In the following, we explain our method for this classification by describing the details of the calculations for obtaining the Jacobi-Lie bialgebra $((III,-2\tilde{X}^{1}),(V.i,-[X_{2}+X_{3}]))$. By substituting the structure constants of Lie algebra $III$ in the matrix equations \eqref{41}-\eqref{46}, we obtain the following form for the structure constants of ${\bf{g}}^{*}$ and matrices ${\cal A,B}$ \begin{equation}\label{59} {\tilde{f}}^{12}\hspace{0cm}_{1}={\tilde{f}}^{13}\hspace{0cm}_{1}=\alpha~,~{\tilde{f}}^{23}\hspace{0cm}_{1}=\beta~,~ {\tilde{f}}^{23}\hspace{0cm}_{2}=-{\tilde{f}}^{23}\hspace{0cm}_{3}=\gamma~,~ {\cal{A}}=\left(\begin{array}{c} 0\\ -\alpha\\ -\alpha\\ \end{array} \right)~,~{\cal{B}}=\left(\begin{array}{c} -2\\ 0\\ 0\\ \end{array} \right). 
\end{equation} Using \eqref{50} and assuming the conditions $\alpha=\gamma$ and $\beta=0$, the obtained Lie algebra ${\bf{g}}^{*}$ is isomorphic to the Lie algebra $V$ via the following isomorphism matrix \begin{equation}\label{60} C=\left( \begin{array}{ccc} c_{11} & -\frac{\gamma c_{31}-1}{\gamma} & c_{13} \\ c_{21} & -c_{23} & c_{23} \\ c_{31} & -c_{33} & c_{33} \end{array} \right). \end{equation} Now, by substituting the above results and the automorphism group of the Lie algebra $III$ in \eqref{54}, one can obtain the following form for the matrix $B$ \begin{equation}\label{61} B=\left( \begin{array}{ccc} 0 & b_{12} & b_{13} \\ \frac{\gamma}{c+d} & b_{22} & b_{23} \\ \frac{\gamma}{c+d} & b_{32} & b_{33} \end{array} \right), \end{equation} where the condition $\det B\neq0$ requires $\gamma\neq 0$. Then, using \eqref{53} and \eqref{34}-\eqref{35}, we have the following commutation relations for the algebra ${\bf{g'}}.i$ and the following forms for ${\cal A',B'}$ \begin{equation}\label{62} [{\tilde X}^1,{\tilde X}^2]=\alpha'{\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=\alpha'{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\alpha'({\tilde X}^2-{\tilde X}^3) ,{\cal{A'}}=\left(\begin{array}{c} 0\\ -\alpha'\\ -\alpha'\\ \end{array} \right)~,~{\cal{B'}}=\left(\begin{array}{c} -2\\ 0\\ 0\\ \end{array} \right), \end{equation} where $\alpha'=\frac{\gamma}{c+d}$ such that $\alpha'\neq0$ {\footnote{Here, in the above relations, $a,b,c,d$ are elements of the automorphism group of the Lie algebra $III$ (see table 3).}}. Now, if $\alpha'=1$, i.e., $\gamma=c+d$, we have \begin{equation}\label{63} B'=\left( \begin{array}{ccc} 0 & b'_{12} & b'_{13} \\ 1 & b'_{22} & b'_{23} \\ 1 & b'_{32} & b'_{33} \end{array} \right)~,~{\cal{A''}}=\left(\begin{array}{c} 0\\ -1\\ -1\\ \end{array} \right)~,~{\cal{B''}}=\left(\begin{array}{c} -2\\ 0\\ 0\\ \end{array} \right). \end{equation} Since $B'B^{-1}\in Aut^{t}(III)$, $B'$ is equivalent to $B$, and according to relations \eqref{34} and \eqref{35}, ${\cal A'}$ and ${\cal B'}$ are equivalent to ${\cal A''}$ and ${\cal B''}$, respectively, where $A$ belongs to the automorphism group of the Lie algebra $III$. This equivalence indicates that one can choose $\alpha'=1$. In this way, we obtain the Jacobi-Lie bialgebra $((III,\phi_{0}),(V.i,X_{0}))$ with $X_{0}=-(X_{2}+X_{3})$ and $\phi_{0}=-2\tilde{X}^{1}$, where the commutation relations of $V.i$ are \begin{equation}\label{64} [{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^2-{\tilde X}^3. \end{equation} Note that we have two classes of Jacobi-Lie bialgebras: the first class consists of the Jacobi-Lie bialgebras with $X_{0},\phi_{0}\neq 0$, and the second class of the Jacobi-Lie bialgebras with $X_{0}$ or $\phi_{0}=0$ {\footnote {If $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ is a Jacobi-Lie bialgebra, then $(({\bf{g}}^{*},X_{0}),({\bf{g}},\phi_{0}))$ is also a Jacobi-Lie bialgebra; therefore, for the case with $X_{0}$ or $\phi_{0}=0$, we present only the Jacobi-Lie bialgebras with $\phi_{0}=0$.}}. We classify the real two and three dimensional Jacobi-Lie bialgebras in tables 4, 5 and 6, 7, respectively, as follows.
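Any entry of the tables below can be checked directly against the relations of Definition 4. As an illustration, the following numpy sketch (our own script, not part of the classification procedure) evaluates the defining conditions for the two dimensional entry ${\bf g}=A_{2}$, ${\bf g^{*}}=A_{2}.i$ of table 4 with $\alpha=1$, i.e., $X_{0}=-X_{1}$ and $\phi_{0}=\tilde{X}^{2}$; all residuals vanish:
\begin{verbatim}
import numpy as np

# g = A_2 with [X_1, X_2] = X_1;  g* = A_2.i with [~X^1, ~X^2] = ~X^2;
# X_0 = -X_1  ->  alpha = (-1, 0);  phi_0 = ~X^2  ->  beta = (0, 1)
n = 2
fc = np.zeros((n, n, n)); fc[0, 1, 0], fc[1, 0, 0] = 1.0, -1.0   # f_ij^k
fd = np.zeros((n, n, n)); fd[0, 1, 1], fd[1, 0, 1] = 1.0, -1.0   # ~f^ij_k
alpha, beta, I = np.array([-1.0, 0.0]), np.array([0.0, 1.0]), np.eye(n)

# 1-cocycle conditions: alpha^i ~f^mn_i = 0 and beta_i f_mn^i = 0
r_coc = max(np.abs(np.einsum('i,mni->mn', alpha, fd)).max(),
            np.abs(np.einsum('i,mni->mn', beta, fc)).max())
# alpha^i beta_i = 0 and alpha^n f_ni^m - beta_n ~f^nm_i = 0
r_ab = abs(alpha @ beta)
r_mix = np.abs(np.einsum('n,nim->im', alpha, fc)
               - np.einsum('n,nmi->im', beta, fd)).max()
# main compatibility condition of the Definition
P = np.einsum('k,ikm->im', alpha, fc) - np.outer(beta, alpha)
T = (np.einsum('ijk,mnk->ijmn', fc, fd)
     - np.einsum('ikm,knj->ijmn', fc, fd) - np.einsum('ikn,mkj->ijmn', fc, fd)
     - np.einsum('kjm,kni->ijmn', fc, fd) - np.einsum('kjn,mki->ijmn', fc, fd)
     + np.einsum('i,mnj->ijmn', beta, fd) - np.einsum('j,mni->ijmn', beta, fd)
     + np.einsum('m,ijn->ijmn', alpha, fc) - np.einsum('n,ijm->ijmn', alpha, fc)
     + np.einsum('im,jn->ijmn', P, I) - np.einsum('jm,in->ijmn', P, I)
     - np.einsum('in,jm->ijmn', P, I) + np.einsum('jn,im->ijmn', P, I))

print(r_coc, r_ab, r_mix, np.abs(T).max())   # 0.0 0.0 0.0 0.0
\end{verbatim}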
\vspace{.7cm} \begin{center} \begin{tabular}{l l l l l l p{0.15mm} } \multicolumn{6}{l}{Table 4: \small Real two dimensional Jacobi-Lie bialgebras with {\footnotesize $X_{0}$},$ \phi_{0}\neq 0$}\\ \hline \hline {\footnotesize ${\bf g}$ }& {\footnotesize ${\bf g^{*}}$} &{\footnotesize Commutation relations of ${\bf g^{*}}$} &{$X_{0}$}& $\phi_{0}$&{\footnotesize Comments} \\ \hline \vspace{2mm} {\footnotesize $A_{1}$}&{\footnotesize $A_{1}$}&{\footnotesize $[{\tilde X}^i,{\tilde X}^j]=0$}&{\footnotesize $X_{2}$}&{\footnotesize $ {\tilde X}^{1}$}& \\ \vspace{2mm} {\footnotesize $A_{2}$}&{\footnotesize $A_{2}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^2$}&{\footnotesize $-\alpha X_{1}$}&{\footnotesize $\alpha {\tilde X}^{2}$}& {\footnotesize $\alpha \in \Re-\{0\}$}\\ \hline \end{tabular} \end{center} \vspace{3mm} \begin{center} \begin{tabular}{l l l l l p{0.15mm} } \multicolumn{5}{l}{Table 5: \small Real two dimensional Jacobi-Lie bialgebras with {\footnotesize $\phi_{0}=0$}}\\ \hline \hline {\footnotesize ${\bf g}$ }& {\footnotesize ${\bf g^{*}}$} &{\footnotesize Commutation relations of ${\bf g^{*}}$} &{$X_{0}$}&{\footnotesize Comments} \\ \hline \vspace{2mm} {\footnotesize $A_{1}$}&{\footnotesize $A_{1}$}&{\footnotesize $[{\tilde X}^i,{\tilde X}^j]=0$}&{\footnotesize $X_{1}+X_{2}$}&\\ \vspace{2mm} {\footnotesize $A_{1}$}&{\footnotesize $A_{2}$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1$}&{\footnotesize $\alpha X_{2}$}& {\footnotesize $\alpha \in \Re-\{0\}$}\\ \hline \end{tabular} \end{center} ~\\ ~\\ ~\\ ~\\ \vspace{7mm} \hspace{-1.2cm}\begin{tabular}{ l l l l l l} \multicolumn{6}{l}{Table 6: \small Real three dimensional Jacobi-Lie bialgebras with {\footnotesize $X_{0}$},$ \phi_{0}\neq 0$}\\ \hline \hline {\footnotesize ${\bf g}$ }& {\footnotesize ${\bf g^{*}}$} &{\footnotesize Commutation relations of ${\bf g^{*}}$} &{$X_{0}$}& $\phi_{0}$&{\footnotesize Comments} \\ \hline \vspace{2mm} {\footnotesize $I$}&{\footnotesize $III$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-({\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-2X_{1}$}&{\footnotesize $-({\tilde X}^2-{\tilde X}^3)$}&\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $III$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-({\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-2X_{1}$}&{\footnotesize $-({\tilde X}^2-{\tilde X}^3)$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $III.i$}&{\footnotesize $[{\tilde X}^2,{\tilde X}^3]=-\frac{\alpha+2}{\alpha}({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-(X_{2}-X_{3})$}&{\footnotesize $\alpha{\tilde X}^1$}&{\footnotesize $\alpha \in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $III.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $-2X_{2}$}&{\footnotesize $-2{\tilde X}^1$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $III.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $-(X_{2}+X_{3})$}&{\footnotesize $-2{\tilde X}^1$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $III.iii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-{\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-2{\tilde X}^1$}&{\footnotesize $X_{2}+X_{3}$}&{\footnotesize $-({\tilde X}^2-{\tilde X}^3)$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $III.iv$}&{\footnotesize $[{\tilde 
X}^1,{\tilde X}^2]=\frac{\alpha}{2}({\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=\frac{\alpha}{2}({\tilde X}^2+{\tilde X}^3),[{\tilde X}^2,{\tilde X}^3]={\tilde X}^2+{\tilde X}^3$}&{\footnotesize $\alpha X_{1}$}&{\footnotesize $-\alpha({\tilde X}^2-{\tilde X}^3)$}&{\footnotesize $\alpha \in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $V.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^2-{\tilde X}^3$}&{\footnotesize $-(X_{2}+X_{3})$}&{\footnotesize $-2{\tilde X}^1$}&\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $III.v$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $-X_{3}$}&{\footnotesize $-{\tilde X}^1$}&\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $III.vi$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $-(X_{2}+X_{3})$}&{\footnotesize $-{\tilde X}^1$}&\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $IV.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]=\alpha{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1+\alpha{\tilde X}^2$}&{\footnotesize $-\epsilon\alpha X_{3}$}&{\footnotesize $-\epsilon{\tilde X}^1$}&{\footnotesize $\epsilon=1,2~,~\alpha>0$}\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $IV.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]=\alpha{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-{\tilde X}^1+\alpha{\tilde X}^2$}&{\footnotesize $-\epsilon\alpha X_{3}$}&{\footnotesize $-\epsilon{\tilde X}^1$}&{\footnotesize $\epsilon=1,2~,~\alpha>0$}\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $V.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^2$}&{\footnotesize $-\epsilon X_{3}$}&{\footnotesize $-\epsilon{\tilde X}^1$}&{\footnotesize $\epsilon=1,2$}\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $VI_{0}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-{\tilde X}^2$}&{\footnotesize $-X_{3}$}&{\footnotesize $-{\tilde X}^1$}&\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $VI_{a}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a+1}{a-1}{\tilde X}^2$}&{\footnotesize $-X_{3}$}&{\footnotesize $-{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $VI_{a}.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a-1}{a+1}{\tilde X}^2$}&{\footnotesize $-X_{3}$}&{\footnotesize $-{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $VI_{a}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a+1}{a-1}{\tilde X}^2$}&{\footnotesize $-\frac{2a}{a-1}X_{3}$}&{\footnotesize $-\frac{2a}{a-1}{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $IV$}&{\footnotesize $VI_{a}.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a-1}{a+1}{\tilde X}^2$}&{\footnotesize $-\frac{2a}{a+1}X_{3}$}&{\footnotesize $-\frac{2a}{a+1}{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $V$}&{\footnotesize $V.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^2-{\tilde X}^3$}&{\footnotesize $-\epsilon(X_{2}+X_{3})$}&{\footnotesize 
$-\epsilon{\tilde X}^1$}&{\footnotesize $\epsilon=1,2$}\\ \vspace{2mm} {\footnotesize $V$}&{\footnotesize $VI_{0}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-{\tilde X}^2$}&{\footnotesize $-X_{3}$}&{\footnotesize $-{\tilde X}^1$}&\\ \vspace{2mm} {\footnotesize $V$}&{\footnotesize $VI_{a}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a+1}{a-1}{\tilde X}^2$}&{\footnotesize $-X_{3}$}&{\footnotesize $-{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $V$}&{\footnotesize $VI_{a}.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a-1}{a+1}{\tilde X}^2$}&{\footnotesize $-X_{3}$}&{\footnotesize $-{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $V$}&{\footnotesize $VI_{a}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a+1}{a-1}{\tilde X}^2$}&{\footnotesize $-\frac{2a}{a-1}X_{3}$}&{\footnotesize $-\frac{2a}{a-1}{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $V$}&{\footnotesize $VI_{a}.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{a-1}{a+1}{\tilde X}^2$}&{\footnotesize $-\frac{2a}{a+1}X_{3}$}&{\footnotesize $-\frac{2a}{a+1}{\tilde X}^1$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $III.vii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $-(X_{1}+X_{2})$}&{\footnotesize ${\tilde X}^3$}&\\ \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $III.viii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $-(X_{1}-X_{2})$}&{\footnotesize ${\tilde X}^3$}&\\ \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $III.ix$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $-X_{1}$}&{\footnotesize ${\tilde X}^3$}&\\ \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $VI_{0}.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1+{\tilde X}^2,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $-\epsilon(X_{1}-X_{2})$}&{\footnotesize $\epsilon{\tilde X}^3$}&{\footnotesize $\epsilon=1,-2$}\\ \hline \end{tabular} \newpage \hspace{-1.5cm}\begin{tabular}{ l l l l l l} \multicolumn{6}{l}{Table 6: Real three dimensional Jacobi-Lie bialgebras with {\footnotesize $X_{0}$},$ \phi_{0}\neq 0$ \small(Continued.)}\\ \hline \hline {\footnotesize ${\bf g}$ }& {\footnotesize ${\bf g^{*}}$} &{\footnotesize Commutation relations of ${\bf g^{*}}$} &{$X_{0}$}& $\phi_{0}$&{\footnotesize Comments} \\ \hline \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $VI_{a}.iii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-\frac{a+1}{a-1}({\tilde X}^1+{\tilde X}^2),[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $-(X_{1}-X_{2})$}&{\footnotesize ${\tilde X}^3$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $VI_{a}.iv$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-\frac{a-1}{a+1}({\tilde X}^1+{\tilde X}^2),[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $-(X_{1}-X_{2})$}&{\footnotesize ${\tilde X}^3$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize 
$VI_{0}$}&{\footnotesize $VI_{a}.iii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-\frac{a+1}{a-1}({\tilde X}^1+{\tilde X}^2),[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $-\frac{2}{a-1}(X_{1}-X_{2})$}&{\footnotesize $\frac{2}{a-1}{\tilde X}^3$}& {\footnotesize $a > 0~,~a\neq1,3$}\\ \vspace{2mm} {\footnotesize $VI_{0}$}&{\footnotesize $VI_{a}.iv$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-\frac{a-1}{a+1}({\tilde X}^1+{\tilde X}^2),[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $\frac{2}{a+1}(X_{1}-X_{2})$}&{\footnotesize $-\frac{2}{a+1}{\tilde X}^3$}& {\footnotesize $a > 0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{a}$}&{\footnotesize $III.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $-(X_{2}+X_{3})$}&{\footnotesize $-(a+1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{a}$}&{\footnotesize $III.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $-\frac{a-1}{a+1}(X_{2}+X_{3})$}&{\footnotesize $-(a-1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{a}$}&{\footnotesize $III.v$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $\frac{1}{a-1}(X_{2}-aX_{3})$}&{\footnotesize $-(a+1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{a}$}&{\footnotesize $III.v$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $\frac{1}{a+1}(X_{2}-aX_{3})$}&{\footnotesize $-(a-1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{a}$}&{\footnotesize $III.x$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1$}&{\footnotesize $-(X_{2}-X_{3})$}&{\footnotesize $-(a-1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{2mm} {\footnotesize $VI_{a}$}&{\footnotesize $III.x$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1$}&{\footnotesize $-\frac{a+1}{a-1}(X_{2}-X_{3})$}&{\footnotesize $-(a+1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ {\footnotesize $VI_{a}$}&{\footnotesize $VI_{b}.v$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{b+1}{b-1}({\tilde X}^2-{\tilde X}^3)$}&{\footnotesize $-(X_{2}+X_{3})$}&{\footnotesize $-(a+1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{1mm} &&&&& {\footnotesize $b >0~,~b\neq1$}\\ {\footnotesize $VI_{a}$}&{\footnotesize $VI_{b}.vi$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{b-1}{b+1}({\tilde X}^2-{\tilde X}^3)$}&{\footnotesize $-(X_{2}+X_{3})$}&{\footnotesize $-(a+1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{1mm} &&&&& {\footnotesize $b >0~,~b\neq1$}\\ {\footnotesize $VI_{a}$}&{\footnotesize $VI_{b}.vii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-\frac{b+1}{b-1}({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-(X_{2}-X_{3})$}&{\footnotesize $-(a-1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{1mm} &&&&& {\footnotesize $b >0~,~b\neq1$}\\ {\footnotesize $VI_{a}$}&{\footnotesize 
$VI_{b}.viii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-\frac{b-1}{b+1}({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-(X_{2}-X_{3})$}&{\footnotesize $-(a-1){\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ \vspace{1mm} &&&&& {\footnotesize $b >0~,~b\neq1$}\\ {\footnotesize $VI_{a}$}&{\footnotesize $VI_{b}.v$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{b+1}{b-1}({\tilde X}^2-{\tilde X}^3)$}&{\footnotesize $-\frac{2(ab+1)}{(a+1)(b-1)}(X_{2}+X_{3})$}&{\footnotesize $-\frac{2(ab+1)}{b-1}{\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ &&&&& {\footnotesize $b >0~,~b\neq1$}\\ \vspace{1mm} &&&&& {\footnotesize $b \neq -\frac{a+3}{a-1}$}\\ {\footnotesize $VI_{a}$}&{\footnotesize $VI_{b}.vi$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=\frac{b-1}{b+1}({\tilde X}^2-{\tilde X}^3)$}&{\footnotesize $-\frac{2(ab-1)}{(a+1)(b+1)}(X_{2}+X_{3})$}&{\footnotesize $-\frac{2(ab-1)}{b+1}{\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ &&&&& {\footnotesize $b >0~,~b\neq1$}\\ \vspace{1mm} &&&&& {\footnotesize $b \neq \frac{a+3}{a-1}$}\\ {\footnotesize $VI_{a}$}&{\footnotesize $VI_{b}.vii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-\frac{b+1}{b-1}({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-\frac{2(ab-1)}{(a-1)(b-1)}(X_{2}-X_{3})$}&{\footnotesize $-\frac{2(ab-1)}{b-1}{\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ &&&&& {\footnotesize $b >0~,~b\neq1$}\\ \vspace{1mm} &&&&& {\footnotesize $b \neq -\frac{a-3}{a+1}$}\\ {\footnotesize $VI_{a}$}&{\footnotesize $VI_{b}.viii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-\frac{b-1}{b+1}({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-\frac{2(ab+1)}{(a-1)(b+1)}(X_{2}-X_{3})$}&{\footnotesize $-\frac{2(ab+1)}{b+1}{\tilde X}^1$}& {\footnotesize $a>0~,~a\neq1$}\\ &&&&& {\footnotesize $b >0~,~b\neq1$}\\ &&&&& {\footnotesize $b \neq \frac{a-3}{a+1}$}\\ \hline \end{tabular} \newpage \begin{center} \begin{tabular}{l l l l lp{0.15mm} } \multicolumn{5}{l}{Table 7: \small Real three dimensional Jacobi-Lie bialgebras with {\footnotesize $\phi_{0}=0$}}\\ \hline \hline {\footnotesize ${\bf g}$ }& {\footnotesize ${\bf g^{*}}$} &{\footnotesize Commutation relations of ${\bf g^{*}}$}&{$X_{0}$}&{\footnotesize Comments} \\ \hline \vspace{2mm} {\footnotesize $I$}&{\footnotesize $I$}&{\footnotesize $[{\tilde X}^i,{\tilde X}^j]=0$}&{\footnotesize $X_{1}$}&\\ \vspace{2mm} {\footnotesize $I$}&{\footnotesize $II$}&{\footnotesize $[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $X_{3}$}&\\ \vspace{2mm} {\footnotesize $I$}&{\footnotesize $III$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-({\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+{\tilde X}^3)$}& {\footnotesize $bX_{1}$}&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $I$}&{\footnotesize $III$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-({\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+{\tilde X}^3)$}& {\footnotesize $-(X_{2}-X_{3})$}&\\ \vspace{2mm} {\footnotesize $I$}&{\footnotesize $IV$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-({\tilde X}^2-{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3$}& {\footnotesize $bX_{1}$}&{\footnotesize $b\in 
\Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $I$}&{\footnotesize $V$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-{\tilde X}^2,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3$}& {\footnotesize $bX_{1}$}&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $I$}&{\footnotesize $VI_{0}$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^2,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}& {\footnotesize $bX_{3}$}&{\footnotesize $b>0$}\\ {\footnotesize $I$}&{\footnotesize $VI_{a}$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-(a{\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+a{\tilde X}^3)$}& {\footnotesize $bX_{1}$}&{\footnotesize $a>0,a\neq1$}\\ &&&&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $I$}&{\footnotesize $VII_{0}$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^2,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}& {\footnotesize $bX_{3}$}&{\footnotesize $b>0$}\\ {\footnotesize $I$}&{\footnotesize $VII_{a}$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-(a{\tilde X}^2-{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+a{\tilde X}^3)$}& {\footnotesize $bX_{1}$}&{\footnotesize $a>0$}\\ &&&&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $I$}&{\footnotesize $[{\tilde X}^i,{\tilde X}^j]=0$}&{\footnotesize $X_{1}$}&\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $II.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^2$}&{\footnotesize $X_{1}$}&\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $II.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^2$}&{\footnotesize $X_{1}$}&\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $III$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-({\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $b X_{1}$}&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $IV$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-({\tilde X}^2-{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3$}&{\footnotesize $b X_{1}$}&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $IV.iii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^2-{\tilde X}^3,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^3$}&{\footnotesize $b X_{1}$}&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $V$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-{\tilde X}^2,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^3$}&{\footnotesize $b X_{1}$}&{\footnotesize $b\in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $VI_{0}.iii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^3,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^2$}&{\footnotesize $b X_{1}$}&{\footnotesize $b >0$}\\ {\footnotesize $II$}&{\footnotesize $VI_{a}$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-(a{\tilde X}^2+{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+a{\tilde X}^3)$}&{\footnotesize $b {X}_1$}& {\footnotesize $a>0,a\neq1$}\\ &&&&{\footnotesize $b \in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $VII_{0}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-{\tilde X}^3,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^2$}& {\footnotesize $bX_{1}$}&{\footnotesize $b>0$}\\ \vspace{2mm} {\footnotesize $II$}&{\footnotesize $VII_{0}.ii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^3,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^2$}& {\footnotesize $bX_{1}$}&{\footnotesize $b>0$}\\ {\footnotesize $II$}&{\footnotesize $VII_{a}$}&{\footnotesize $[{\tilde X}^1,{\tilde 
X}^2]=-(a{\tilde X}^2-{\tilde X}^3),[{\tilde X}^1,{\tilde X}^3]=-({\tilde X}^2+a{\tilde X}^3)$}&{\footnotesize $b {X}_1$}& {\footnotesize $a>0$}\\ &&&&{\footnotesize $b \in \Re-\{0\}$}\\ {\footnotesize $II$}&{\footnotesize $VII_{a}.i$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=a{\tilde X}^2-{\tilde X}^3,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^2+a{\tilde X}^3$}&{\footnotesize $b {X}_1$}& {\footnotesize $a>0$}\\ &&&&{\footnotesize $b \in \Re-\{0\}$}\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $III.v$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1$}&{\footnotesize $\frac{1}{2}(X_{2}-X_{3})$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $III.x$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1$}&{\footnotesize $-(X_{2}-X_{3})$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $IV.iv$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-{\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^1+{\tilde X}^2+{\tilde X}^3$}&{\footnotesize $X_{2}-X_{3}$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $V.iii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-{\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]={\tilde X}^2+{\tilde X}^3$}&{\footnotesize $X_{2}-X_{3}$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $VI_{0}.iv$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]=-{\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]={\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $X_{2}-X_{3}$}&\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $VI_{a}.vii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-\frac{a+1}{a-1}({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-(X_{2}-X_{3})$}& {\footnotesize $a>0,a\neq1$}\\ \vspace{2mm} {\footnotesize $III$}&{\footnotesize $VI_{a}.viii$}&{\footnotesize $[{\tilde X}^1,{\tilde X}^2]={\tilde X}^1,[{\tilde X}^1,{\tilde X}^3]=-{\tilde X}^1,[{\tilde X}^2,{\tilde X}^3]=-\frac{a-1}{a+1}({\tilde X}^2+{\tilde X}^3)$}&{\footnotesize $-(X_{2}-X_{3})$}& {\footnotesize $a>0,a\neq1$}\\ \hline \end{tabular} \end{center} \section{\bf Conclusion} In this paper, we have described the definition of the Jacobi (generalized)-Lie bialgebras $(({\bf{g}},\phi_{0}),({\bf{g}}^{*},X_{0}))$ in terms of structure constants of the Lie algebras ${\bf g}$ and ${\bf g^{*}}$ and components of their 1-cocycles $X_{0}\in {\bf{g}}$ and $\phi_{0}\in {\bf{g}}^{*}$. In this way, we have obtained a method to classify real low dimensional Jacobi-Lie bialgebras. By this method, we have classified real two and three dimensional Jacobi-Lie bialgebras. Now, using generalized coboundary equation presented in \cite{Iglesias}, one can obtain the classical $r$-matrices of these Jacobi-Lie bialgebras and Jacobi brackets for their Jacobi-Lie groups. There are some physical applications in this direction; such as constructing integrable models, quantizing these Jacobi-Lie bialgebras, generalizing Poisson-Lie symmetry \cite{KL} to Jacobi-Lie symmetry and so on. Some of these problems are under investigation \cite{RS2},\cite{RS1}.\\ {\bf Acknowledgments}\\ This research was supported by a research fund No. 217/D/1639 from Azarbaijan Shahid Madani University. The authors are grateful to A. Basaki for his useful aids. Also, the authors would like to thank F. Darabi, A. 
Eghbali and R. Gholizadeh-Roshanagh for their valuable comments and for carefully reading the manuscript. \vspace{-4mm}
\section{Introduction} \label{sec:introduction} \begin{figure}[tb] \centering \includegraphics[width=0.85\linewidth]{Teaser} \caption{Results of face swapping and additional visual attribute editing with the proposed system. The face regions of the images in the first column are embedded into the images in the second column. The face-swapped results are illustrated in the third column, and their appearances are further manipulated by adding visual attributes such as ``blond hair'' and ``eyeglasses''. RSGAN obtains these results by passing the two input images and the visual attributes through the network only once.} \label{fig:teaser} \end{figure} The human face has been an important symbol for recognizing individuals from ancient times to the present. Drawings of human faces have traditionally been used by authorities to record people's identities. Nowadays, many people enjoy sharing their daily photographs, which usually include human faces, on social networking websites. In these situations, there has been a potential demand for making such drawings or photographs more attractive. As a result of this demand, a large number of studies on face image analysis~\cite{blanz02,cao14,liu15,zhang16_mtcnn} and manipulation~\cite{blanz04,bitouk08,yang11,chai12,kemelmacher16,shu17_tog,fiser17} have been introduced in the research communities of computer graphics and vision. Face swapping is one of the most important face image editing techniques and has a wide range of practical applications such as photomontage~\cite{blanz04}, virtual hairstyle fitting~\cite{kemelmacher16}, privacy protection~\cite{bitouk08,mosaddegh14,korshunova16}, and data augmentation for machine learning~\cite{hassner13,mclaughlin15,masi16}. Traditional face swapping methods first detect the face regions in the source and target images. The face region of the source image is then embedded into the target image by digital image stitching. To clarify the motivation of our study, we briefly review previous face-swapping methods in the following paragraphs. One of the most popular approaches to face swapping is to use 3D morphable models (3DMM)~\cite{blanz04,nirkin17}. In this class of methods, the face geometries and their corresponding texture maps are first obtained by fitting a 3DMM~\cite{blanz02,cao14}. The texture maps of the source and target images are then swapped using the estimated UV coordinates. Finally, the replaced face textures are re-rendered using the lighting condition estimated from the target image. These 3DMM-based approaches can replace faces even when they have different orientations or are in different lighting conditions. However, in practice these methods are prone to failures in estimating the face geometries or lighting conditions. Such incorrect estimations are problematic because people readily notice even slight mismatches of geometry and lighting. In specific applications of face swapping, such as privacy protection and virtual hairstyle fitting, either the source image or the target image can be selected arbitrarily. For instance, the privacy of the target image can be protected even though the new face region is extracted from a random image. This has suggested the idea of selecting one of the source and target images from a large-scale image database~\cite{bitouk08,kemelmacher16}. The approaches in this class can choose one of the two input images such that the selected image is similar to its counterpart.
These approaches can consequently avoid replacing faces in difficult situations involving different face orientations or different lighting conditions. However, these methods cannot be used for the more general purpose of face swapping between arbitrary input face images. A vast body of recent deep learning research has facilitated face swapping with large-scale image databases. Bao et al.\ introduced a face swapping demo in their paper on conditional image generation with the proposed neural network CVAE-GAN~\cite{bao17}. Their method uses hundreds of images for each person in the training dataset, and the face identities are learned as the image conditions. A similar technique is used in a desktop software tool ``FakeApp''~\cite{fakeapp}, which has recently attracted much attention due to its easy-to-use pipeline for face swapping with deep neural networks (DNNs). This tool requires hundreds of images of the two target people to swap their faces. However, preparing such a large number of portrait images of non-celebrities is rather impractical. In contrast to these techniques, Korshunova et al.~\cite{korshunova16} applied neural style transfer~\cite{gatys16} to face swapping by fine-tuning a pre-trained network with several tens of images of the single person appearing in the source image. Unfortunately, it is still impractical for most people to collect many images and to fine-tune the network just to generate a single face-swapped image. In this paper, we address the above problems using a generative neural network that we refer to as the ``region-separative generative adversarial network (RSGAN).'' While a rich body of studies on such deep generative models has already been introduced, applying them to face swapping is still challenging. In ordinary generative models, the images or data that a network synthesizes are available as training data. However, it is difficult or even impossible to prepare a dataset that includes face images both before and after face swapping, because the faces of real people can hardly be swapped without special surgical operations. We tackle this problem by designing the network to variationally learn different latent spaces for the face and hair regions. The generator network used in the proposed method is trained to synthesize a natural face image from two random vectors that correspond to latent-space representations of the face and hair regions. Consequently, the generator network can synthesize a face-swapped image from two latent-space representations calculated from real image samples. The architecture of RSGAN is illustrated in \figref{fig:network}. This architecture consists of two variational autoencoders (VAEs) and one generative adversarial network (GAN). The two VAE parts encode the face and hair appearances into latent-space representations, and the GAN part generates a natural face image from the latent-space representations of the face and hair. A detailed description of the network and its training method is given in \secref{sec:rsgan}. In addition to face swapping, this variational learning enables other editing applications, such as visual attribute editing and random face parts synthesis. To evaluate the face swapping results of the proposed method, we leverage two metrics: identity preservation and swap consistency. Identity preservation is evaluated using OpenFace~\cite{amos16_openface}, an open-source face feature extractor.
The consistency of face swapping is evaluated by measuring the absolute difference and the multi-scale structural similarity (MS-SSIM)~\cite{wang_msssim} between an input image and the image obtained by swapping faces twice between the two input images. The results of the applications of RSGAN and their evaluations are presented in \secref{sec:results}. \miniparagraph{Contributions} As a face swapping and editing system, the proposed method has the following advantages over previous methods: \begin{enumerate} \item it provides an integrated system for face swapping and additional face appearance editing; \item its applications are achieved by training a single DNN, and it does not require any additional runtime computation such as fine-tuning; \item it robustly performs high-quality face swapping even for faces with different orientations or in different lighting conditions. \end{enumerate} \section{Related Work} \label{sec:related-work} \subsection{Face swapping} \label{ssec:face-swapping} Face swapping has been studied in a number of works for different purposes, such as photomontage~\cite{blanz04}, virtual hairstyle fitting~\cite{kemelmacher16}, privacy protection~\cite{bitouk08,mosaddegh14,korshunova16} and data augmentation for large-scale machine learning~\cite{masi16}. Several studies~\cite{yang11,mosaddegh14} replace only parts of the face, such as the eyes, nose, and mouth, rather than swapping the whole face between images. As described in the previous section, one of the traditional approaches to face swapping is based on 3DMM~\cite{blanz04,nirkin17}. Fitting a 3DMM to a target face yields approximate estimates of the face geometry, texture map, and lighting condition~\cite{blanz02,cao14}. Using the 3DMM, face swapping is achieved by replacing the texture maps and re-rendering the face appearance with the estimated lighting condition. The main drawback of these 3DMM-based methods is that they require manual alignment of the 3DMM to obtain an accurate fit. To alleviate this problem, Bitouk et al.~\cite{bitouk08} proposed automatic face swapping with a large-scale face image database. Their method first searches for a face image with a layout similar to that of the input image, and then replaces the face regions with boundary-aware image composition. A more sophisticated approach was recently proposed by Kemelmacher-Shlizerman~\cite{kemelmacher16}. She carefully designed a handcrafted feature vector for face image appearances and achieved high-quality face swapping. However, these similarity-search-based methods cannot freely select the input images and are not applicable to arbitrary face image pairs. Recently, Bao et al.\ introduced a face swapping demo in their paper on CVAE-GAN~\cite{bao17}, which is a DNN for conditional image generation. In their method, the CVAE-GAN is trained to generate face images of specific people in a training dataset by handling face identities as conditions for the generated images. The CVAE-GAN achieves face swapping by changing the face-identity conditions of the target images. Korshunova et al.\ applied neural style transfer, another deep learning technique, to face swapping~\cite{korshunova16}. Their approach is similar to the original neural style transfer~\cite{gatys16} in the sense that a face identity is handled similarly to an artistic style. The face identity of the target face is substituted with that of the source face. The common drawback of these DNN-based models is that users must collect at least dozens of images to obtain a face-swapped image.
While collecting such a number of images is possible, it is nevertheless impractical for most people to collect them just for their personal photo editing. \subsection{Face image editing} \label{ssec:face-editing} To enhance the visual attractiveness of face images, various techniques, such as facial expression transfer~\cite{liu01,yang11}, attractiveness enhancement~\cite{leyvand08}, and face image relighting~\cite{chai15,shu17}, have been proposed over the last decades. In traditional face image editing, the underlying 3D face geometries and face part arrangements are estimated using face analysis tools, such as active appearance models~\cite{cootes01} and 3D morphable models~\cite{blanz02,cao14}. This underlying information is manipulated by editing algorithms to improve the attractiveness of the output images. On the other hand, recent approaches based on DNNs do not explicitly analyze such information. Typically, an input image and a user's edit intention are fed to an end-to-end DNN, and the edited result is then directly output by the network. For example, several DNN models~\cite{kingma14,odena16,bao17,choi17} based on autoencoders are used to manipulate visual attributes of faces, in which visual attributes, such as facial expressions and hair colors, are modified to change the face image appearances. In contrast, Brock et al.~\cite{brock16} proposed an image editing system with a paint-based interface in which a DNN synthesizes a natural image output following the input image and the paint strokes specified by the user. Several studies on DNN-based image completion~\cite{iizuka17,chen18} have presented demos of manipulating face appearances by filling in parts of an input image with the DNN. However, anticipating the results of these approaches is rather difficult because they only fill the regions painted by the user such that the completed results plausibly exist in the training data. \section{Region-Separative GAN} \label{sec:rsgan} The main challenge of face swapping with DNNs is the difficulty of preparing face images before and after face swapping, because the face of a real person cannot be replaced by that of another person without a special surgical operation. Another possible way to collect such face images is to digitally synthesize them. However, this is a chicken-and-egg problem because the synthesis of such face-swapped images is our primary purpose. To overcome this challenge, we leverage a variational method to represent the appearances of faces and hairs. In face swapping, a face region and a hair region are handled separately in the image space. The face swapping problem is thus generalized as the problem of composing any pair of face and hair images. The purpose of the proposed RSGAN is to achieve this image composition using latent-space representations of face and hair appearances. In the proposed method, this purpose is achieved by the DNN shown in \figref{fig:network}. As shown in this figure, the architecture of RSGAN consists of two VAEs, which we refer to as \textit{separator networks}, and one GAN, which we refer to as the \textit{composer network}. In this network, the appearances of the face and hair regions are first encoded into different latent-space representations with the separator networks. Then, the composer network generates a face image from the obtained latent-space representations so that the original appearances in the input image are reconstructed. However, training with only latent-space representations from real image samples incurs over-fitting.
We found that an RSGAN trained in this way ignores the face representation in the latent space and synthesizes an image similar to the target image during face swapping. Thus, we also feed random latent-space representations to the composer network so that the networks are trained to synthesize natural face images rather than over-fit the training data. \begin{figure*}[tb] \centering \includegraphics[width=\linewidth]{Network.pdf} \caption{The network architecture of the proposed RSGAN, which comprises three partial networks, i.e., two separator networks and a composer network. The separator networks extract latent-space representations $z_f$ and $z_h$ respectively for the face and hair regions of an input image $x$. The composer network reconstructs the input face image from the two latent-space representations. The reconstructed image $x'$ and the input image $x$ are evaluated by two discriminator networks. The global discriminator $D_g$ distinguishes whether the images are real or fake, and the patch discriminator $D_p$ distinguishes whether local patches of the images are real or fake.} \label{fig:network} \end{figure*} Let $x$ be a training image, and $c$ be its corresponding visual attribute vector. Latent-space representations $z_{x_{f}}$ and $z_{x_{h}}$ of the face and hair appearances of $x$ are obtained by a face encoder $F_{E \text{-} x_{f}}$ and a hair encoder $F_{E \text{-} x_{h}}$. Similarly, the visual attribute $c$ is embedded into latent spaces of the attributes. Latent-space representations $z_{c_{f}}$ and $z_{c_{h}}$ of the face and hair attribute vectors are obtained by encoders $F_{E \text{-} c_{f}}$ and $F_{E \text{-} c_{h}}$. As in standard VAEs, these latent-space representations are sampled from multivariate normal distributions whose means and variances are inferred by the encoder networks: \begin{equation} z_\ell \sim \mathcal{N} \left( \mu_\ell, \sigma^2_\ell \right), \quad \left( \mu_\ell, \sigma^2_\ell \right) = F_{E \text{-} \ell}(x, c), \quad \ell \in \{ x_{f}, x_{h}, c_{f}, c_{h} \}, \label{eq:latent-space-sampling} \end{equation} where $\mu_\ell$ and $\sigma^2_\ell$ are the mean and variance of $z_\ell$ obtained with the encoders. Decoder networks $F_{D \text{-} f}$ and $F_{D \text{-} h}$ for the face and hair regions reconstruct the appearances $x_f'$ and $x_h'$, respectively, from the corresponding latent-space representations. The composer network $G$ generates the reconstructed appearance $x'$ from the latent-space representations produced by the encoders. These reconstruction processes are formulated as: \begin{equation} x_f' = F_{D \text{-} f}(z_{x_{f}}, z_{c_{f}}), \quad x_h' = F_{D \text{-} h}(z_{x_{h}}, z_{c_{h}}), \quad x' = G(z_{x_{f}}, z_{c_{f}}, z_{x_{h}}, z_{c_{h}}). \end{equation} In addition, random variables sampled from a multivariate standard normal distribution $\mathcal{N}(0, 1)$ are used together in the training. Let $\hat{z}_{x_{f}}$, $\hat{z}_{x_{h}}$, $\hat{z}_{c_{f}}$, and $\hat{z}_{c_{h}}$ be the random variables corresponding to $z_{x_{f}}$, $z_{x_{h}}$, $z_{c_{f}}$, and $z_{c_{h}}$, respectively. We also compute a random face image $\hat{x}'$ from these samples: \begin{equation} \hat{x}' = G(\hat{z}_{x_{f}}, \hat{z}_{c_{f}}, \hat{z}_{x_{h}}, \hat{z}_{c_{h}}). \end{equation}
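To make the sampling and reconstruction equations above concrete, the following minimal PyTorch-style sketch shows one way to implement the encode, sample, and compose steps. The callable interfaces (\texttt{enc}, \texttt{dec\_f}, \texttt{dec\_h}, \texttt{G}) and the log-variance parameterization are our assumptions for illustration only; the authors' actual TensorFlow implementation is not reproduced here.
\begin{verbatim}
import torch

def sample_latent(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # a differentiable realization of z ~ N(mu, sigma^2).
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

def forward_pass(x, c, enc, dec_f, dec_h, G):
    # enc maps each key to a callable returning (mu, log_var);
    # dec_f, dec_h, and G return images.
    z = {k: sample_latent(*enc[k](x, c)) for k in ("xf", "xh", "cf", "ch")}
    x_f = dec_f(z["xf"], z["cf"])                  # face-region reconstruction
    x_h = dec_h(z["xh"], z["ch"])                  # hair-region reconstruction
    x_rec = G(z["xf"], z["cf"], z["xh"], z["ch"])  # full reconstruction x'
    # Random image: feed standard-normal samples to the composer.
    x_rand = G(*[torch.randn_like(z[k]) for k in ("xf", "cf", "xh", "ch")])
    return x_f, x_h, x_rec, x_rand
\end{verbatim}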
The input image $x$ and the two generated images $x'$ and $\hat{x}'$ are evaluated by two discriminator networks $D_g$ and $D_p$. The global discriminator $D_g$ distinguishes whether those images are real or fake, as in standard GANs~\cite{goodfellow14}. On the other hand, the patch discriminator $D_p$, which was originally used in an image-to-image translation network~\cite{isola17}, distinguishes whether local patches from those images are real or fake. In addition, we train a classifier network $C$ to estimate the visual attribute vector $c^{*}$ from the input image $x$. The classifier network is needed to edit an image for which visual attributes are not provided. In addition, the classifier network produces a visual attribute vector whose entries lie between $0$ and $1$, whereas the visual attribute vectors prepared in many public datasets take discrete values of $0$ or $1$. Such intermediate values are advantageous, for example, when we represent dark brown hair with the two visual attribute items ``black hair'' and ``brown hair''. Accordingly, we use the estimated attributes $c^{*}$ rather than $c$ even when visual attributes are prepared for $x$. \subsection{Training} \label{ssec:training} In the proposed RSGAN architecture, three autoencoding processes are performed, which reproduce $x_f'$, $x_h'$, and $x'$, respectively, from an input image $x$. Following standard VAEs, we define three reconstruction loss functions: \begin{align} \mathcal{L}_{rec \text{-} f} &= \mathbb{E}_{x, x_f \sim P_{data}} \big[ \| x_f - x_f' \|_1 \big], \label{eq:face-rec-loss} \\ \mathcal{L}_{rec \text{-} h} &= \mathbb{E}_{x, x_h \sim P_{data}} \big[ \left\| \left( 1 - \beta M_{BG} \right) \odot \left( x_h - x_h' \right) \right\|_1 \big], \label{eq:hair-rec-loss} \\ \mathcal{L}_{rec} &= \mathbb{E}_{x \sim P_{data}} \big[ \left\| \left(1 - \beta M_{BG} \right) \odot (x - x') \right\|_1 \big], \label{eq:rec-loss} \end{align} where $M_{BG}$ is a background mask that takes 0 for foreground pixels and 1 for background pixels, and the operator $\odot$ denotes per-pixel multiplication. The background mask $M_{BG}$ is used to train the network to synthesize more detailed appearances in the foreground regions. In our implementation, we used a parameter $\beta = 0.5$ to halve the reconstruction errors in the background. The Kullback--Leibler (KL) loss is also defined, as in standard VAEs, for each of the four encoders: \begin{equation} \mathcal{L}_{KL \text{-} \ell} = \frac{1}{2} \left( \mu_\ell^T \mu_\ell + \sum \left( \sigma^2_\ell - \log (\sigma^2_\ell) - 1 \right) \right), \qquad \ell \in \{ x_{f}, x_{h}, c_{f}, c_{h} \}. \label{eq:kl-losses} \end{equation} The set of separator and composer networks and the two discriminator networks are trained adversarially, as in standard GANs. The adversarial losses are defined as: \begin{align} \mathcal{L}_{adv \text{-} \ell} =& - \mathbb{E}_{x \sim P_{data}} \left[ \log D_\ell (x) \right] \nonumber \\ & - \mathbb{E}_{z \sim P_z} \left[ \log (1\! - \! D_\ell (x')) \right] \nonumber \\ & - \mathbb{E}_{z \sim P_z} \left[ \log (1\! - \! D_\ell (\hat{x}')) \right], \quad \ell \in \{ g, p \}. \label{eq:adv-loss} \end{align} In addition, the classifier network $C$ is trained to estimate a visual attribute vector $c^{*}$ that is close to $c$. We define a binary cross-entropy loss $L_{BCE}$ and use it to define the classifier loss $\mathcal{L}_C$: \begin{equation} \mathcal{L}_{C} = L_{BCE}(c, c^{*}) \mathrel{\raisebox{0.034em}{$\mathop{:}$}}= - \sum_{i} \left( c_i \log c_i^{*} + (1 - c_i) \log (1 - c_i^{*}) \right), \label{eq:classifier-loss} \end{equation} where $c_i$ denotes the $i$-th entry of the visual attribute vector $c$. 
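As a reference for the loss terms defined above, the following minimal sketch (our own PyTorch-style illustration for tensor inputs, not the authors' code) evaluates the background-weighted reconstruction loss, the KL term, and the binary cross-entropy used by the classifier for a single sample; averaging versus summing over pixels is a scale choice made here for brevity.
\begin{verbatim}
import torch

def masked_l1(x, x_rec, m_bg, beta=0.5):
    # Background pixels (m_bg == 1) are down-weighted by the factor (1 - beta);
    # the per-pixel errors are averaged here instead of summed.
    return ((1.0 - beta * m_bg) * (x - x_rec).abs()).mean()

def kl_loss(mu, sigma2):
    # 0.5 * (mu^T mu + sum(sigma^2 - log sigma^2 - 1)), cf. the KL loss above.
    return 0.5 * ((mu ** 2).sum() + (sigma2 - torch.log(sigma2) - 1.0).sum())

def bce(c, c_hat, eps=1e-7):
    # Binary cross-entropy summed over attribute entries, cf. L_BCE above.
    c_hat = c_hat.clamp(eps, 1.0 - eps)
    return -(c * torch.log(c_hat) + (1 - c) * torch.log(1 - c_hat)).sum()
\end{verbatim}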
To preserve the visual attributes in the generated images $x'$ and $\hat{x}'$, we add the following loss function to train the composer network: \begin{equation} \mathcal{L}_{GC} = L_{BCE}(c, c') + L_{BCE}(c, \hat{c}'), \label{eq:gen-classifier-loss} \end{equation} where $c'$ and $\hat{c}'$ are the estimated visual attributes of $x'$ and $\hat{x}'$, respectively. The total loss function for training RSGAN is defined as a weighted sum of the above loss functions: \begin{align} \mathcal{L} &= \lambda_{rec} (\mathcal{L}_{rec \text{-} f} + \mathcal{L}_{rec \text{-} h} + \mathcal{L}_{rec}) \nonumber \\ & + \lambda_{KL} (\mathcal{L}_{KL \text{-} x_{f}} + \mathcal{L}_{KL \text{-} x_{h}} + \mathcal{L}_{KL \text{-} c_{f}} + \mathcal{L}_{KL \text{-} c_{h}}) \nonumber \\ & + \lambda_{adv \text{-} g} \mathcal{L}_{adv \text{-} g} + \lambda_{adv \text{-} p} \mathcal{L}_{adv \text{-} p} \nonumber \\ & + \lambda_C \mathcal{L}_C + \lambda_{GC} \mathcal{L}_{GC}. \end{align} We empirically determined the weighting factors as $\lambda_{rec} = 4000$, $\lambda_{KL} = 1$, $\lambda_{adv \text{-} g} = 20$, $\lambda_{adv \text{-} p} = 30, \lambda_{C} = 1$, and $\lambda_{GC} = 50$. In our experiments, the loss functions were minimized using the Adam optimizer~\cite{kingma14_adam} with an initial learning rate of 0.0002, $\beta_1 = 0.5$, and $\beta_2 = 0.999$. The mini-batch size was 50. The detailed training algorithm is provided in the supplementary materials. \begin{figure}[tb] \centering \includegraphics[width=0.8\linewidth]{Datasets} \caption{The process of generating face and hair region images from portraits in CelebA~\cite{liu15}. The background mask in (b) is computed with PSPNet~\cite{zhao16_pspnet}, a state-of-the-art DNN-based semantic segmentation method. The clipping rectangles in (d) for the face and hair regions are computed using the blue facial landmarks in (c). To improve the reconstruction quality of face identities, face regions are magnified at a larger scale than hair regions.} \label{fig:dataset} \end{figure} \subsection{Dataset} \label{ssec:dataset} The training of RSGAN requires sampling a face image $x$, a face region image $x_f$, a hair region image $x_h$, and a background mask $M_{BG}$ from real data. For this purpose, we computationally generate face and hair region images from a large-scale face image dataset, i.e., CelebA~\cite{liu15}. Figure\,\ref{fig:dataset} illustrates the process of dataset generation. The size of the original images in CelebA is $178 \times 218$ (\figref{fig:dataset}(a)). We first estimate the foreground mask using PSPNet~\cite{zhao16_pspnet}, a state-of-the-art semantic segmentation method, with the ``person'' label. The background mask is obtained by inverting masked and non-masked pixels in the foreground mask (\figref{fig:dataset}(b)). Second, we extract 68 facial landmarks (\figref{fig:dataset}(c)) with a common machine learning library, i.e., Dlib~\cite{king09}. The face region is defined with the 41 landmarks that correspond to the eyes, nose, and mouth, which are indicated with blue circles in \figref{fig:dataset}(c). We calculate the convex hull of these landmarks and stretch the hull by 1.3 times and 1.4 times along the horizontal and vertical directions, respectively. The resulting hull is used as a face mask, as depicted in \figref{fig:dataset}(d). The face and hair regions are extracted with the mask and cropped to be square (\figref{fig:dataset}(e) and (f)). The face region has its top-left corner at $(30, 70)$ and its size is $118 \times 118$. 
The hair region has its top-left corner at $(0, 20)$ and its size is $178 \times 178$. Finally, we resize these cropped images to the same size; in our experiments, we resize them to $128 \times 128$. Since the face region is more important for identifying the person in the image, we use a relatively higher resolution for the face region. While processing the images in the dataset, we could properly extract the facial landmarks for 195,361 out of the 202,599 images included in CelebA. Among these 195,361 images, we used 180,000 images for training and the remaining 15,361 images for testing. \subsection{Face swapping with RSGAN} \label{ssec:how-to-face-swap} A face-swapped image is computed from two images $x_1$ and $x_2$ with RSGAN. For each of these images, the visual attributes $c_1^{*}$ and $c_2^{*}$ are first estimated by the classifier. Then, the latent-space representations $z_{1, x_{f}}$, $z_{1, c_{f}}$, $z_{2, x_{h}}$, and $z_{2, c_{h}}$ are computed by the encoders. Finally, the face-swapped image is generated by the composer network as $x' = G(z_{1, x_{f}}, z_{1, c_{f}}, z_{2, x_{h}}, z_{2, c_{h}})$. Simply feeding two input images to RSGAN in this way usually performs face swapping appropriately. However, the hair and background regions of an input image are sometimes not recovered properly by RSGAN. To alleviate this problem, we optionally perform gradient-domain image stitching on a face-swapped image. In this operation, the face region of the face-swapped image is extracted with a face mask, which is obtained in the same manner as in the dataset generation. Then, the face region of the face-swapped image is composited onto the target image by gradient-domain image composition~\cite{levin04}. In order to distinguish these two approaches, we denote them as ``RSGAN'' and ``RSGAN-GD'', respectively. Unless otherwise specified, the results shown in this paper are computed using RSGAN alone, without the gradient-domain stitching. \section{Results and Discussion} \label{sec:results} \begin{figure*}[tb] \centering \includegraphics[width=0.9\linewidth]{ResultsSwap} \caption{Face swapping results for different face and hair appearances. The two top rows in this figure show the original inputs and their reconstructed appearances obtained by RSGAN. In these results, the face regions of the images in each row are replaced by a face from the image in each column.} \label{fig:results-swapping} \end{figure*} \begin{figure}[tb] \centering \includegraphics[width=0.55\linewidth]{Comparison} \caption{Comparisons to the state-of-the-art face swapping methods~\cite{kemelmacher16,nirkin17}.} \label{fig:comparison} \end{figure} This section presents the results of applications using the pre-trained RSGAN. For these results, we implemented a program using TensorFlow~\cite{abadi2016_tensorflow} in Python and executed it on a computer with an Intel Xeon 3.6 GHz E5-1650 v4 CPU, an NVIDIA GTX TITAN~X GPU, and 64 GB RAM. We used 180,000 training images and trained the proposed RSGAN network for 120,000 global steps. The training took about 50 hours on a single GPU. All the results in this paper are generated using test images that are not included in the training images. 
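Before presenting the results, we recap the swapping procedure described in the previous subsection with a short, hypothetical sketch; the callable names follow the notation of the method section and are placeholders rather than the actual implementation.
\begin{verbatim}
def swap_faces(x1, x2, C, enc_xf, enc_cf, enc_xh, enc_ch, G, sample):
    # C is the attribute classifier; enc_* return (mu, sigma^2); sample() draws z.
    c1, c2 = C(x1), C(x2)
    z1_xf = sample(*enc_xf(x1, c1))   # face appearance of x1
    z1_cf = sample(*enc_cf(x1, c1))   # face attributes of x1
    z2_xh = sample(*enc_xh(x2, c2))   # hair appearance of x2
    z2_ch = sample(*enc_ch(x2, c2))   # hair attributes of x2
    # Face of x1 composed with the hair/background of x2.
    x_swapped = G(z1_xf, z1_cf, z2_xh, z2_ch)
    # "RSGAN-GD" additionally extracts the face region of x_swapped with a
    # face mask and blends it into x2 by gradient-domain (Poisson) composition.
    return x_swapped
\end{verbatim}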
\miniparagraph{Face swapping} Face-swapping results of the proposed system are illustrated in \figref{fig:results-swapping}. In this figure, the first row shows the source images, the second row shows the reproduced appearances of the source images, and the bottom three rows show the face-swapped results for the different target images in the leftmost column. In each result, we observe that face identities, expressions, shapes of facial parts, and shading are naturally presented in the face-swapped results. Among these input image pairs, there is a large difference in the facial expression between Face \raisebox{.4ex}{\scriptsize\#} 7 and Hair \raisebox{.4ex}{\scriptsize\#} 1, a difference in the face orientation between Face \raisebox{.4ex}{\scriptsize\#} 4 and Hair \raisebox{.4ex}{\scriptsize\#} 2, and a difference in the lighting condition between Face \raisebox{.4ex}{\scriptsize\#} 3 and Hair \raisebox{.4ex}{\scriptsize\#} 1. Even for such input pairs, the proposed method achieves natural face swapping. In addition, we compared our face-swapping results with the state-of-the-art methods~\cite{kemelmacher16,nirkin17} in \figref{fig:comparison}. The results are compared on the input images used in~\cite{kemelmacher16}, which were searched from a large-scale database such that their layouts are similar to those of the source images. While these input images are more favorable for~\cite{kemelmacher16}, the results of our RSGAN-GD are comparable to those of~\cite{kemelmacher16}. Compared to the other state-of-the-art method~\cite{nirkin17}, the sizes of facial parts in our results look more natural in the sense that the proportions of the facial parts to the entire face sizes are more similar to those in the source images. As reported in the paper~\cite{nirkin17}, these performance losses are due to their sensitivity to the quality of landmark detection and 3DMM fitting, even though their proposed semantic segmentation is powerful. \begin{figure}[tb] \vspace*{2em} \centering \includegraphics[width=0.95\linewidth]{ResultsAttribs} \caption{Results of visual attribute editing using RSGAN. In this figure, the results in the rows marked with ``Total'' are obtained by adding a new visual attribute to both the face and hair regions. The visual attributes used for these results are indicated at the top. On the other hand, the results in the rows marked with ``Face'' are obtained by adding a new visual attribute only to the face region, while the original visual attributes are used for the hair region. The results in the rows marked with ``Hair'' are generated in the same way.} \label{fig:results-attrs} \end{figure} \miniparagraph{Visual attribute editing} To perform face swapping together with visual attribute editing, the proposed RSGAN embeds visual attribute vectors into the latent spaces of the face and hair visual attributes. As a result, the proposed editing system can manipulate the attributes in only the face or only the hair region. The results of visual attribute editing are illustrated in \figref{fig:results-attrs}. This figure includes two image groups, each of which has three rows. In the first row, the visual attribute indicated at the top is added to both the face and hair regions. In the second and third rows, the visual attribute is added to either the face or the hair region. As shown in this figure, adding a visual attribute to only one of the face and hair regions does not affect the other region. For example, the hair colors are not changed when the attribute ``Blond hair'' is added to the face regions. 
In addition, attributes such as ``Male'' and ``Aged'', which can affect both regions, change the appearance of only one region when they are added to either of the two regions. For example, when the attribute ``Male'' is added to the face regions, only the face appearances become masculine, while the hair appearances are not changed. In addition, visual attribute editing can be applied to the face-swapped images produced with RSGAN. The results for this application are shown in \figref{fig:teaser}. Note that RSGAN can achieve both face swapping and visual attribute editing by feeding two input images and modified visual attribute vectors to the network at the same time. \begin{figure}[tb] \centering \includegraphics[width=0.8\linewidth]{ResultsSampling} \caption{Results of random face and hair parts generation and composition. We can sample independent latent-space representations for face and hair appearances and combine them with the proposed RSGAN. In the top group, random hair appearances are combined with the face region of the input image on the left. In the bottom group, random face appearances are combined with the hair region of the input image.} \label{fig:results-sampling} \end{figure} \miniparagraph{Random face parts synthesis} With the proposed RSGAN, we can generate a new face image that takes the appearance of the face or hair from a real image sample and the appearance of the counterpart region from a random latent-space sample. Such random image synthesis can be used for privacy protection by changing face regions randomly, and for data augmentation for face recognition by changing hair regions randomly. The results of the random image synthesis are shown in \figref{fig:results-sampling}. This figure consists of two groups of images, at the top and bottom. In each group, an input image is shown on the left. Its face or hair region is combined with random hair or face regions in the images on the right. The top group illustrates the images with random hairs, and the bottom group illustrates those with random faces. Even though the random face and hair regions cover a significant range of appearances, the appearances of the face and hair in the real inputs are preserved appropriately in the results. \begin{figure}[tb] \centering \includegraphics[width=0.8\linewidth]{Poisson} \caption{Face swapping results of the proposed RSGAN and the other methods compared in \tabref{tab:experiments}. In the two image groups at the top and bottom, the face regions of the two input images in the leftmost column are replaced.} \label{fig:swap-other-gans} \end{figure} \begin{table}[tb] \scriptsize \centering \caption{Performance evaluation in identity preservation and swap consistency.} \label{tab:experiments} \begin{tabular*}{\linewidth}{l@{\extracolsep{\fill}}llllll} \toprule & & OpenFace & \multicolumn{2}{c}{Abs. Errors} & \multicolumn{2}{c}{MS-SSIM} \\ \cmidrule{3-3} \cmidrule{4-5} \cmidrule{6-7} & & Swap & Recon. & Swap $\times 2$ & Recon. & Swap $\times 2$ \\ \midrule \multirow{2}{*}{VAE-GAN~\cite{larsen_vaegan}} & Avg. & 1.598 & 0.082 & 0.112 & 0.694 & 0.563 \\ & Std. & 0.528 & 0.018 & 0.024 & 0.089 & 0.099 \\ \midrule \multirow{2}{*}{ALI~\cite{dumoulin16}} & Avg. & 1.687 & 0.230 & 0.270 & 0.338 & 0.254 \\ & Std. & 0.489 & 0.065 & 0.068 & 0.133 & 0.108 \\ \midrule \multirow{2}{*}{$\alpha$-GAN~\cite{rosca17}} & Avg. & 1.321 & \textbf{0.058} & 0.099 & \textbf{0.823} & 0.638 \\ & Std. & 0.465 & 0.013 & 0.026 & 0.057 & 0.102 \\ \midrule \multirow{2}{*}{Nirkin et al.~\cite{nirkin17}} & Avg. 
& \textbf{0.829} & --- & \textbf{0.027} & --- & \textbf{0.961} \\ & Std. & 0.395 & --- & 0.010 & --- & 0.022 \\ \midrule \multirow{2}{*}{\textbf{RSGAN}} & Avg. & \textit{1.127} & \textit{0.069} & \textit{0.093} & \textit{0.760} & \textit{0.673} \\ & Std. & 0.415 & 0.016 & 0.020 & 0.074 & 0.087 \\ \bottomrule \end{tabular*} \end{table} \subsection{Experiments} \label{ssec:experiments} We evaluated the face swapping results of the proposed and previous methods using two metrics, i.e., identity preservation and swap consistency. In this experiment, we compared these two metrics for the previous self-reproducing generative networks VAE-GAN~\cite{larsen_vaegan}, ALI~\cite{dumoulin16}, and $\alpha$-GAN~\cite{rosca17}, and for the state-of-the-art face swapping method by Nirkin et al.~\cite{nirkin17}. With the generative networks, we computed face-swapped results in three steps. First, we compute a face mask in the same manner as in our dataset synthesis. Second, the face region of the source image within the mask is copied and pasted onto the target image such that the two eye locations are aligned. Finally, the entire image appearance after copy-and-pasting is repaired by feeding it to the self-reproducing networks. Examples of the face-swapped images produced by these algorithms are illustrated in \figref{fig:swap-other-gans}. We computed the results of the different algorithms for 1,000 random image pairs selected from the 15,361 test images. The averages and standard deviations of the two metrics are provided in \tabref{tab:experiments}. In this table, the best score in each column is indicated with bold characters, and the second-best score is indicated with italic characters. The identity preservation in face swapping is evaluated by the squared Euclidean distance between the feature vectors of the input and face-swapped images. The feature vectors are computed with OpenFace~\cite{amos16_openface}, which is an open-source face feature extractor. The measured distances in the third column indicate that RSGAN outperforms the other generative neural networks but performs worse than Nirkin et al.'s method. However, the method of Nirkin et al. could perform face swapping for only 81.7\% of the 1,000 test image pairs used in this experiment because it often fails to fit the 3DMM to at least one of the two input images. In contrast, the proposed RSGAN and the other methods based on generative neural networks perform face swapping for all the test images. Therefore, we consider that face swapping by RSGAN is practically useful even though its identity preservation is slightly worse than that of the state-of-the-art method of Nirkin et al. The swap consistency is evaluated with the absolute difference and MS-SSIM~\cite{wang_msssim} between an input image and the resulting image obtained after swapping the face region twice between two input images. For the previous generative neural networks and RSGAN, we computed these values also for images reconstructed by the networks. As shown in \tabref{tab:experiments}, the evaluation results with absolute errors and MS-SSIM indicate that the method of Nirkin et al. outperforms the generative neural networks, including RSGAN. We consider that this is because Nirkin et al.'s method generates only the face region during face swapping, whereas the generative neural networks synthesize both the face and hair regions. Therefore, the absolute differences become relatively lower (and the MS-SSIM higher) for the method of Nirkin et al., in which differences in pixel values occur only in the face regions. 
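For reference, the two evaluation protocols described above can be sketched as follows. The helpers \texttt{face\_embedding} (e.g., OpenFace features), \texttt{swap\_face}, and \texttt{ms\_ssim} are placeholders for the tooling used in the experiments, not the exact implementation, and the images are assumed to be NumPy arrays.
\begin{verbatim}
import numpy as np

def identity_distance(x_src, x_swapped, face_embedding):
    # Squared Euclidean distance between face feature vectors.
    f1, f2 = face_embedding(x_src), face_embedding(x_swapped)
    return float(np.sum((f1 - f2) ** 2))

def swap_consistency(x1, x2, swap_face, ms_ssim):
    # swap_face(face_src, hair_src) keeps the face of its first argument and
    # the hair/background of its second argument.
    y1 = swap_face(x1, x2)        # face of x1 on hair of x2
    y2 = swap_face(x2, x1)        # face of x2 on hair of x1
    x1_twice = swap_face(y1, y2)  # swapping back should recover x1
    abs_err = float(np.mean(np.abs(x1.astype(float) - x1_twice.astype(float))))
    return abs_err, ms_ssim(x1, x1_twice)
\end{verbatim}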
In addition, Nirkin et al.'s method is rather unstable to use in practice, as mentioned in the previous paragraph. Consequently, the proposed method with RSGAN is worth using in practice because it achieves the best swap consistency among the generative neural networks, as can be seen in the ``Swap $\times$2'' columns. \subsection{Discussion} \label{ssec:discussion} \miniparagraph{Variational vs non-variational} For the purpose of visual feature extraction, many variants of autoencoders have been used~\cite{kingma14,gregor15_draw,yang17,li18_agegan}. Among these approaches, non-variational ones are often preferred when a real image appearance needs to be reproduced in their applications. For example, recent studies~\cite{yang17,li18_agegan}, which introduced ideas similar to ours, used non-variational approaches for image parts extraction~\cite{yang17} and for manipulating people's ages in portraits~\cite{li18_agegan}. We also experimented with a non-variational variant of the proposed RSGAN in a prototype implementation and found that its self-reproducibility is slightly better than that of the variational variant introduced in this paper. However, considering the wide applicability of the variational RSGAN, such as random face parts sampling, we determined that the variational variant is practically more useful than the non-variational one. \miniparagraph{Region memorization with RNN} In image parts synthesis, some previous studies have applied recurrent neural networks to keep track of which parts have already been synthesized~\cite{gregor15_draw,kwak16,yang17}. Following these studies, we experimentally inserted a long short-term memory (LSTM)~\cite{hochreiter97_lstm} such that the outputs from the two image encoder networks are fed to it. However, in our experiments, we found that this application of the LSTM makes the training more difficult and slows its convergence. The visual qualities of the results in face swapping and the other applications are not significantly better than those of RSGAN without the LSTM. We illustrate the RSGAN architecture with the LSTM and the results of this network in the supplementary materials. \miniparagraph{Limitation} The main drawback of the proposed system is its limited image resolution. In our implementation, the image size in the training dataset is $128 \times 128$. Therefore, image editing can be performed only at this resolution. To improve the image resolution, we need to train the network with higher-resolution images, as in CelebA-HQ~\cite{karras17}. In recent studies~\cite{karras17,chen18}, training with such a high-resolution image dataset is performed robustly by progressively increasing the resolution of the input images. This approach can be straightforwardly applied to the proposed RSGAN as well. Therefore, we expect that the limited image resolution of the proposed system can be resolved. \section{Conclusion} \label{sec:Conclusion} This paper proposed an integrated editing system for face images using a novel generative neural network that we refer to as RSGAN. The proposed system achieves high-quality face swapping, which is the main scope of this study, even for faces with different orientations and under different lighting conditions. Since the proposed system encodes the appearances of faces and hairs into underlying latent-space representations, the image appearances can be modified by manipulating the representations in the latent spaces. 
As a deep learning technique, the success of the RSGAN architecture and our training method implies that deep generative models can learn to synthesize even a class of images that is not prepared in the training dataset. We believe that our experimental results provide a key insight for generating images that can hardly be prepared in a training dataset. \ifreview \else \section*{Acknowledgments} This study was supported in part by the Strategic Basic Research Program ACCEL of the Japan Science and Technology Agency (JPMJAC1602). Tatsuya Yatagawa was supported by a Research Fellowship for Young Researchers of the Japan Society for the Promotion of Science (16J02280). Shigeo Morishima was supported by a Grant-in-Aid from the Waseda Institute of Advanced Science and Engineering. The authors would also like to acknowledge NVIDIA Corporation for providing GPUs through the academic GPU Grant Program. \fi \bibliographystyle{splncs}
1,116,691,497,496
arxiv
\section{Introduction}\label{sec:introduction} Smart contracts are programs stored on blockchains to execute transactions. In recent years, smart contracts have been widely used for various purposes, such as offering financial services~\cite{smartcontractusecase}. Ethereum~\cite{ethereum_yellow_paper} is the largest decentralized platform for smart contracts, with the second biggest blockchain market capitalization~\cite{coinmarketcap}. There are over one million transactions executed on Ethereum daily~\cite{dailytransaction}. As smart contracts are often used to manage valuable user assets, their security is of paramount importance. Anomalous transactions caused by various runtime errors should be detected and reverted promptly to prevent undesirable consequences such as financial losses. In Solidity~\cite{soliditydocumentation}, the most popular programming language for Ethereum smart contracts, there are three statements that can help detect runtime errors and revert transactions, namely, \texttt{require}, \texttt{if...revert}, and \texttt{if...throw}. Figure~\ref{transaction-reverting_statements} shows example uses of these \textit{transaction-reverting statements} to revert transactions submitted by unauthorized senders. While all three statements can revert transactions when anomalous conditions occur, the first two also refund the unused gas to transaction senders. \begin{figure}[tbp] \centering \includegraphics[scale=0.96, trim=2.5cm 16.5cm 4.3cm 0.7cm]{1-Introduction/example_statement.pdf} \caption{Examples of transaction-reverting statements} \label{transaction-reverting_statements} \end{figure} Transaction-reverting statements are frequently used in smart contracts. Our analysis reveals that over 94$\%$ of smart contracts use transaction-reverting statements in some way. Surprisingly, this figure is even higher than that of general-purpose \texttt{if} statements. These statements are also frequently discussed in the Solidity developer community. We searched Stack Overflow~\cite{stackoverflow}, the most popular Q\&A website for programmers, using the keywords ``require()'', ``revert()'', and ``if throw'' under the tag ``solidity''. As of August~2021, there were already 1,280 questions related to the three transaction-reverting statements, many of which have been viewed thousands of times. Transaction-reverting statements can effectively help prevent smart contracts from exhibiting abnormal behaviors or suffering malicious attacks. 
For example, the SWC Registry~\cite{swcregistry}, which indexes common smart contract weaknesses, lists a weakness called ``Unchecked Call Return Value'' (SWC-104~\cite{swc104}). This weakness occurs when the return value of a message call is not properly checked in a smart contract. To ease understanding, we give an illustrative example in Figure~\ref{swc104}. In the code snippet, the \texttt{callNotChecked()} function does not check the return value of \texttt{callee.call()} (Line~2). When the execution of \texttt{callee.call()} fails, the \texttt{callNotChecked()} function would not do anything. This may cause serious and irreversible consequences, e.g., the contract may report to the transaction sender that the call was executed successfully when it actually failed. To fix the weakness, developers are advised to add a \texttt{require} statement to check the execution status of \texttt{callee.call()} (as in Line~6 of \texttt{callChecked()}) so that the anomalous transactions can be reverted and the unused gas can be returned to the transaction sender upon unsuccessful execution of \texttt{callee.call()}. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.95, trim=2.2cm 15.2cm 4.3cm 0cm]{1-Introduction/example_unchecked_call} \caption{An example of the \textit{Unchecked Call Return Value} weakness} \label{swc104} \end{center} \end{figure} As we can see from the above example, appropriate uses of transaction-reverting statements can help improve the reliability and security of smart contracts. However, there is little research on transaction-reverting statements. Without a comprehensive understanding of how these statements are used in practice, one cannot design tools to effectively identify the inappropriate uses of such statements or formulate good practices to help smart contract developers. To bridge the gap, we conducted the first empirical study to characterize transaction-reverting statements in Ethereum smart contracts. Specifically, we investigated the following four research questions: \begin{itemize}[leftmargin=1em] \item \textbf{RQ1 (Prevalence):} \textit{Are transaction-reverting statements commonly used in Ethereum smart contracts?} \item \textbf{RQ2 (Purpose):} \textit{What are the major purposes of using transaction-reverting statements in smart contracts?} \item \textbf{RQ3 (Developer Customization):} \textit{Are there differences between template contracts and custom contracts in terms of using transaction-reverting statements?} \item \textbf{RQ4 (Security Impact):} \textit{Are there any security consequences if transaction-reverting statements are missing in smart contracts?} \end{itemize} For the study, we constructed a dataset of 270 template contracts and 3,866 dapp contracts, which were collected from popular template code repositories~\cite{OpenzeppelinContracts, AragonOS, ConsenSys, EthereumImprovementProposals} and real-world dapps with millions of transactions. 
To answer RQ1, we measured the code density of transaction-reverting statements and compared it with that of general-purpose \texttt{if} statements in smart contracts. To answer RQ2, we built a taxonomy of the purposes of transaction-reverting statements via an inductive coding process~\cite{seaman1999qualitative}. To answer RQ3, we leveraged a code clone detector to identify contracts that developers customized from popular contract templates and studied, at the fine granularity of condition clauses, how developers customize transaction-reverting statements based on template contracts. To answer RQ4, we analyzed the security impact of transaction-reverting statements by removing them from smart contracts and comparing the mutated contracts against the original ones. Our major findings include: \begin{itemize}[leftmargin=1em] \item Over 94$\%$ of our analyzed smart contracts use transaction-reverting statements. Comparatively, only 87.9\% of them use general-purpose \texttt{if} statements. This shows that transaction-reverting statements are pervasively used in real-world smart contracts and may play important roles in assuring the correct execution of transactions. \item Transaction-reverting statements are commonly used to perform seven types of security-critical checks, such as verifying user authorities. \item Developers are most likely to strengthen transaction-reverting statements by adding clauses, variables, or new transaction-reverting statements. The customized transaction-reverting statements are commonly used for range checks and logic checks. \item The lack of transaction-reverting statements may introduce security issues to smart contracts. Existing smart contract security analyzers show weak support for handling transaction-reverting statements when detecting security vulnerabilities. \end{itemize} To summarize, the main contribution of this work is a characterization study of transaction-reverting statements in Ethereum smart contracts. To the best of our knowledge, this study is the first of its kind. The findings can facilitate further research in smart contract quality assurance and provide practical guidance to smart contract developers on the appropriate use of transaction-reverting statements. Our data are released on GitHub for public use~\cite{dataset}. The organization of the remaining sections is as follows. In Section~\ref{sec:background}, we introduce the related background knowledge. Section~\ref{sec:data_collection} presents how we constructed four datasets of smart contracts for empirical analysis. Then, in Section~\ref{sec:empirical-study}, we present the design of the empirical study to answer the four research questions and introduce our data analysis methodologies and empirical findings. We discuss threats to the validity of our study in Section~\ref{sec:threats}. After that, we discuss related work in Section~\ref{sec:related_work} and conclude our work in Section~\ref{sec:conclusion}. \section{Background}\label{sec:background} This section presents the background and explains the terminology used in the paper. \subsection{Smart Contracts \& Dapps} Smart contracts are autonomous programs running on blockchains like Ethereum~\cite{ethereum_yellow_paper}. The execution of smart contracts does not rely on a trusted third party and is fully decentralized. Dapps are decentralized applications that offer end-users various functionalities. The core logic of dapps is backed by smart contracts to meet the requirements of the applications. 
Solidity~\cite{soliditydocumentation} is the most popular high-level language for programming Ethereum smart contracts. In this paper, we focus on smart contracts written in Solidity. \subsection{Error-Handling Statements}\label{ssec:bg-error-handling-statements} Solidity uses state-reverting exceptions to handle errors. It provides four statements to deal with errors, namely, \texttt{require}, \texttt{if...revert}, \texttt{assert}, and \texttt{if...throw}. If these statements identify the occurrence of erroneous conditions, they throw an exception and revert the blockchain and contract state to the state before the execution of the transaction. The four error-handling statements can be further divided into the following two categories~\cite{soliditydocumentation}: \begin{itemize}[leftmargin=1em] \item \textbf{Transaction-reverting statements} refer to the \texttt{require}, \texttt{if...revert}, and \texttt{if...throw} statements, which are used to check for erroneous conditions. Before version~0.4.10, Solidity provided the \texttt{if...throw} statement for reverting transactions. As the language evolved, two alternatives, namely \texttt{require} and \texttt{if...revert}, were introduced in Solidity~0.4.10 to replace \texttt{if...throw}. The \texttt{if...throw} statement was officially deprecated in Solidity~0.4.13. These statements can all trigger state reversion when erroneous conditions occur. The only difference between \texttt{if...throw} and the two replacements is that \texttt{if...throw} uses up all remaining gas when errors occur, while the two replacements refund the remaining gas to the transaction sender. \item \textbf{The assertion statement} \texttt{assert} should only be used for debugging purposes and is not supposed to appear in production code. If a specified assertion is violated, it means that the contract has a bug, which needs to be fixed. \end{itemize} In our study, we focus on transaction-reverting statements. Since the \texttt{if...throw} statement is already deprecated, we mainly investigate the use of \texttt{require} and \texttt{if...revert} statements in real-world smart contracts. In the remainder of this paper, transaction-reverting statements refer to \texttt{require} and \texttt{if...revert} statements if not otherwise specified. \subsection{Template Contracts \& Custom Contracts}\label{ssec:bg-template-contracts} Writing a smart contract is non-trivial for developers, especially when there is a high demand for security~\cite{contract_development}. To facilitate contract development and prevent vulnerabilities, \textit{template contracts} are provided by industrial institutions and organizations for different use cases. These template contracts are usually well maintained and provide many high-quality or fully functional components for reuse. In practice, many developers copy or reuse components of template contracts in their own contracts, which we call \textit{custom contracts}, to save effort and ensure security. Developers' customizations may add, delete, or modify existing transaction-reverting statements for various purposes. \subsection{Solidity Components} Smart contracts written in Solidity are put in \texttt{.sol} files, each of which may contain one or more components of three kinds: \textit{contracts}, \textit{libraries}, and \textit{interfaces}. Template contract codebases often provide a set of such Solidity components that developers can reuse. 
\section{Dataset Construction} \label{sec:data_collection} To investigate our research questions, we constructed four datasets of smart contracts for empirical analysis. This section explains how these datasets were constructed. \subsection{Crawling Dapp Contracts} As of April 2021, over 40 million smart contracts have been deployed on Ethereum~\cite{bigquery_ethereum}. Despite the large volume, many Ethereum smart contracts are deprecated or rarely used (with few transactions). In our empirical study, we aim to analyze representative smart contracts that are often used in real life. For this purpose, we chose to collect smart contracts from popular dapps. Contracts collected this way are of higher quality, more frequently used, and better maintained. Specifically, we collected smart contracts from all 1,699 dapps indexed by Dapp.com~\cite{Dappcom}, a popular dapp collection website, in February 2021 by referring to the contract addresses listed in the descriptions of the dapps. We found that most dapps have fewer than 200 contracts, but the dapp Uniswap is an exception. Uniswap~\cite{uniswap} is a decentralized exchange that allows users to exchange one kind of token for another. It has 3,964 smart contracts because a contract factory creates a contract for every directly exchangeable token pair on Uniswap, and most such created contracts share the same code. To reduce the impact of data imbalance, we randomly selected 200 contracts for Uniswap (i.e., downsampling). For the other dapps, we collected all the addresses of their smart contracts listed on Dapp.com. Then, we leveraged the APIs provided by Etherscan~\cite{etherscan}, an Ethereum block explorer, to collect contract source code. In total, we collected 6,016 smart contracts, and 3,866 of them are verified ones with source code available, which are used in the subsequent studies. Table~\ref{dapp_dataset} provides the demographic information of the 3,866 verified contracts. As we can see, they are from different categories, contain hundreds of lines of code (on average), and have a large number of transactions. \input{tables/Background/dapp_dataset.tex} \subsection{Collecting Template Contracts} Template contracts play an important role in the Ethereum ecosystem. When reusing them, developers may customize the transaction-reverting statements. To study such customizations, we built a dataset of custom contracts and the corresponding template contracts. We collected template contracts from four data sources, which contain smart contracts that are widely used on Ethereum. For custom contracts, we explain how we identified them in the next subsection. Table~\ref{template_codebase_info} shows the popularity of the data sources of template contracts, and we introduce each of them in the following. \input{tables/Data_Collection/template_codebase_info.tex} \begin{itemize}[leftmargin=1em] \item OpenZeppelin~\cite{OpenzeppelinContracts} is a library for secure smart contract development, which provides reusable contract templates, such as implementations of token standards, to help build custom contracts. We collected 115 contracts from OpenZeppelin. \item aragonOS~\cite{AragonOS} is a smart contract framework for building decentralized organizations, dapps, and protocols. We collected 107 contracts from aragonOS. \item ConsenSys~\cite{ConsenSys} provides Solidity smart contract code for simple, standards-compliant tokens on Ethereum. We collected 34 contracts from ConsenSys. 
\item Besides the above data sources, we also collected 14 final EIPs (\underline{E}thereum \underline{I}mprovement \underline{P}roposals) with 10 reusable template contracts from the ERC website~\cite{EthereumImprovementProposals}. \end{itemize} In total, we collected 270 template contracts. \subsection{Identifying Custom Contracts}\label{subsec:custom-contracts} It is not easy to associate template contracts with custom contracts because developers rarely explicitly specify the templates they reuse when writing smart contracts. To identify custom contracts, we leveraged a code clone detection tool, SmartEmbed~\cite{gao2020checking}, to calculate the code similarity between the 270 template contracts and our collected 3,866 dapp contracts. If the code similarity between a dapp contract and a template contract is higher than or equal to 85\%, we consider the dapp contract to be a custom contract based on the template contract. Via this process, we identified a set of 227 custom contracts based on 74 template contracts. We give more details of the custom contract dataset in Section~\ref{subsec:rq3}. \subsection{Creating Mutated Contracts} To investigate the security impact of transaction-reverting statements in smart contracts, we constructed a dataset of mutated contracts by removing all transaction-reverting statements from the 3,866 contracts. The mutated contracts were later analyzed by existing smart contract vulnerability detection tools to assess their security. A detailed description of the mutated contracts is given in Section~\ref{subsec:rq4}. \section{Empirical Study}\label{sec:empirical-study} With the four datasets, we conducted a large-scale empirical study, aiming to 1) understand the use of transaction-reverting statements in smart contracts, 2) identify good/bad practices and provide suggestions to help developers appropriately use transaction-reverting statements, and 3) inspire future research. In this section, we present our data analysis methodology and empirical findings for each of the four research questions listed in Section~\ref{sec:introduction}. \input{4-Empirical_study/RQ1.tex} \input{4-Empirical_study/RQ2.tex} \input{4-Empirical_study/RQ3.tex} \input{4-Empirical_study/RQ4.tex} \subsection{RQ4 (Security Impact)} \label{subsec:rq4} \textbf{Study Methodology:} To answer RQ4, we mutated the dapp contracts by removing the transaction-reverting statements. We then leveraged smart contract security analyzers to detect the vulnerabilities in the original and the mutated contracts and compared the detection results. Specifically, we adopted a state-of-the-art framework, SmartBugs~\cite{durieux2020empirical}, to conduct the study. It integrates nine smart contract security analyzers, including HoneyBadger~\cite{torres2019art}, Slither~\cite{Feist_2019}, Manticore~\cite{mossberg2019manticore}, etc. Collectively, these nine analyzers can detect 141 types of vulnerabilities, although many of these types refer to the same vulnerabilities under different names. To unify the vulnerability types, we followed existing practice~\cite{durieux2020empirical} and used DASP~\cite{DASP}, a smart contract vulnerability taxonomy, to categorize the reported vulnerabilities. Another problem with these analyzers is that they may generate many false alarms due to imprecise static analysis~\cite{durieux2020empirical}. To mitigate this problem, we followed existing practice~\cite{durieux2020empirical} and only counted the vulnerabilities that are reported by at least two of the nine analyzers. 
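For clarity, the following sketch (our own reconstruction, not the SmartBugs code) illustrates this filtering step: analyzer-specific findings are first mapped to DASP categories, and a finding is kept only if at least two distinct analyzers report the same category for the same contract. The names of the inputs are hypothetical.
\begin{verbatim}
from collections import defaultdict

def confirmed_findings(reports, to_dasp):
    # reports: iterable of (analyzer, contract, raw_vulnerability_name) tuples.
    # to_dasp: mapping from analyzer-specific names to DASP categories (assumed).
    votes = defaultdict(set)
    for analyzer, contract, raw_name in reports:
        category = to_dasp.get(raw_name)
        if category is not None:
            votes[(contract, category)].add(analyzer)
    # Keep only findings confirmed by at least two distinct analyzers.
    return {key for key, analyzers in votes.items() if len(analyzers) >= 2}
\end{verbatim}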
\vspace{0.5em} \textbf{Finding 6: }\textit{Missing transaction-reverting statements can introduce security vulnerabilities to smart contracts. } \vspace{0.5em} \input{tables/RQ4/voting_result.tex} Table~\ref{voting_result} presents the numbers of original dapp contracts and mutated contracts that are reported to contain vulnerabilities. The number of vulnerable contracts increases after removing the transaction-reverting statements. In particular, the numbers of contracts containing \textit{Time Manipulation} and \textit{Front Running} vulnerabilities increase significantly, by 16.98\% and 12.90\%, respectively. This shows that transaction-reverting statements are useful for improving the security of smart contracts. To ease understanding, we provide an example. \begin{figure}[tb] \begin{center} \includegraphics[width=\linewidth]{4-Empirical_study/example_vul_increase.pdf} \caption{The number of vulnerabilities increases after mutation in contract 0x9F91b5Aa41b9fbDae6877593910586484d291F05.} \label{fig:vul-increase} \end{center} \end{figure} Figure~\ref{fig:vul-increase} shows a code snippet from a real smart contract. After removing the transaction-reverting statement in Line~3, the contract is reported to have an \textit{Underflow/Overflow} vulnerability~\cite{swc101}. In this example, both \texttt{leafHeaderByte} and \texttt{offset} are unsigned integers. If Line~3 is removed, the value of \texttt{leafHeaderByte} in Line~4 can be smaller than \texttt{0xf7}, which may lead to an underflow. \vspace{0.5em} \textbf{Finding 7: }\textit{Smart contract security analyzers can fail to analyze transaction-reverting statements properly and induce false negatives in security vulnerability detection. } \vspace{0.5em} 
When inspecting the results reported by the nine analyzers, we found that there are also cases where vulnerabilities in the original smart contracts disappear after removing transaction-reverting statements. This is counter-intuitive, as we have found that transaction-reverting statements are commonly used for security checks. We found that eight out of the nine contract analyzers (all except Maian~\cite{maian}) used in our study suffered from this problem. We further inspected such cases identified in our dataset. \begin{figure}[tb] \begin{center} \includegraphics[width=\linewidth]{4-Empirical_study/example_vul_decrease.pdf} \caption{The number of vulnerabilities decreases after mutation in contract 0x0AbdAce70D3790235af448C88547603b945604ea.} \label{fig:vul-decrease} \end{center} \end{figure} Figure~\ref{fig:vul-decrease} shows a code snippet from a real smart contract. The function \texttt{contributeWithAddress()} is reported to have a \textit{Timestamp Dependence} vulnerability~\cite{swc116}. Due to the direct use of \texttt{now} (Line~16), which is an alias of \texttt{block.timestamp}, a malicious block miner can manipulate the block's timestamp to gain profits from the contract. In this case, the transaction-reverting statement in Line~2 is not related to the vulnerability, as it does not check against the block timestamp. In other words, after removing it, the \textit{Timestamp Dependence} vulnerability should still exist. However, the tool Osiris~\cite{torres2018osiris} does not report the vulnerability after removing this transaction-reverting statement. This shows that Osiris can be fooled by the removal of transaction-reverting statements and induce false negatives. We observed 5,404 such cases in our dataset where the originally detected vulnerabilities disappeared after removing transaction-reverting statements. In our future work, we plan to take a deeper look into this problem and investigate why removing transaction-reverting statements can fool smart contract security analyzers. \vspace{0.5em} \noindent \setlength{\fboxsep}{0.5em} \fbox{\parbox{0.95\linewidth}{ \textbf{Answer to RQ4:} \textit{Missing transaction-reverting statements can induce security vulnerabilities in smart contracts. In other words, transaction-reverting statements can be used to effectively avoid vulnerabilities. However, there are also cases where removing irrelevant transaction-reverting statements can fool smart contract analyzers and induce false negatives in security vulnerability detection.} \vspace{0.5em} \textbf{Implication:} \textit{Researchers need to further improve the effectiveness of smart contract security analyzers. In particular, properly dealing with transaction-reverting statements is a basic and critical requirement for such tools. } }} \vspace{0.5em} \subsection{RQ3 (Developer Customization)} \label{subsec:rq3} \textbf{Study Methodology:} As discussed earlier, many developers customize template contracts to develop their own smart contracts. In RQ3, we aim to understand how developers customize the transaction-reverting statements in template contracts. \textbf{Step 1: Mapping Template \& Custom Contracts:} To answer RQ3, the first step is to build a dataset containing template contracts and their corresponding custom contracts. However, real-world smart contracts rarely explicitly specify whether they are customized from a certain template or not. 
To address this problem, we leveraged code clone detection techniques to compute the similarities between each of our collected template contracts and the dapp contracts. We consider a dapp contract to be customized from a template contract if the two contracts have a high similarity. A smart contract can contain multiple components, including contracts, libraries, and interfaces. In practice, different components are usually put in one file in dapp contracts, while in template contracts, a file usually contains a single component. To normalize the two kinds of contracts, we first broke down the dapp contracts into components and then compared the contracts at the component level. Table~\ref{data pre-processing} presents the result after this pre-processing step, where \#~Contracts, \#~Interfaces, and \#~Libraries represent the numbers of individual contracts, interfaces, and libraries, respectively. \input{tables/RQ3/data_preprocessing.tex} We then adopted a code clone detector, SmartEmbed~\cite{gao2020checking}, to compare the dapp contracts with the template contracts. SmartEmbed computes similarities between two contracts based on word embeddings, and it has been shown to be effective in code clone detection for smart contracts. Following the original experimental setting of SmartEmbed, a dapp contract is considered a custom contract of a template contract if the similarity between these two contracts is higher than 85\%. We chose this threshold because it achieves the highest recall in the evaluation of SmartEmbed. As interfaces do not contain any statements, we excluded them from our dataset. In total, we obtained 175 contracts and 52 libraries from the dapp contracts that are similar to template contracts. These contracts and libraries form the custom contract dataset for our subsequent analysis. \input{tables/RQ3/require_frequent_change_patterns} \textbf{Step 2: Detecting Customization Patterns:} We leveraged the custom contracts to investigate how developers customized transaction-reverting statements. Inspired by an existing study that characterizes changes to \texttt{if} statements~\cite{pan2009toward}, we derived a taxonomy of possible customization patterns for transaction-reverting statements, as shown in Table~\ref{require_fre_patterns}. Since transaction-reverting statements are also conditional statements, the change patterns of \texttt{if} conditional statements can also be applied to transaction-reverting statements. However, we found that the patterns identified in the existing work (marked with * in Table~\ref{require_fre_patterns}) are not sufficient to cover all customizations of transaction-reverting statements. To identify more patterns, we manually analyzed 30$\%$ of the customized transaction-reverting statements. Specifically, we compared the transaction-reverting statements in the custom contracts with those in the corresponding template contracts and identified common customizations by checking the conditions of the transaction-reverting statements. Via this sampling and manual analysis, we identified five more patterns. To investigate the prevalence of the customization patterns and identify commonly used ones, we implemented a static analyzer based on a Solidity parser~\cite{PythonSolidityparser} to automatically identify the occurrences of each customization pattern. For each pair of a template contract and a corresponding custom contract, we matched their functions by function names and input parameters. Functions with the same name and input parameters are treated as matched function pairs. For each matched function pair, the analyzer 1) parses the source code into ASTs, 2) extracts all transaction-reverting statements, and 3) recognizes the customization patterns by comparing the syntactic differences of the transaction-reverting statements in the two functions. More details about the analyzer can be found on our project website~\cite{dataset}. 
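To illustrate the comparison step, the following simplified sketch extracts \texttt{require} conditions per function with regular expressions and diffs them between a template contract and a custom contract. It is only an illustration under simplifying assumptions; the actual analyzer works on ASTs produced by the Solidity parser and also handles \texttt{if...revert} statements and clause-level changes.
\begin{verbatim}
import re

FUNC_RE = re.compile(r"function\s+(\w+)\s*\(([^)]*)\)")
REQ_RE = re.compile(r"require\s*\((.*?)\)\s*;", re.S)

def reverting_conditions(source):
    # Map (function name, parameter list) -> set of require(...) conditions.
    heads = [(m.start(), m.group(1), m.group(2).strip())
             for m in FUNC_RE.finditer(source)]
    conds = {}
    for i, (start, name, params) in enumerate(heads):
        end = heads[i + 1][0] if i + 1 < len(heads) else len(source)
        conds[(name, params)] = {c.strip()
                                 for c in REQ_RE.findall(source[start:end])}
    return conds

def diff_customizations(template_src, custom_src):
    t, c = reverting_conditions(template_src), reverting_conditions(custom_src)
    changes = {}
    for sig in t.keys() & c.keys():  # matched function pairs
        added, deleted = c[sig] - t[sig], t[sig] - c[sig]
        if added or deleted:
            changes[sig] = {"added": added, "deleted": deleted}
    return changes
\end{verbatim}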
Functions with the same name and input parameters are seen as matched function pairs. For each matched function pair, the analyzer 1) parses the source code into AST trees, 2) extracts all transaction-reverting statements, and 3) recognizes the customization patterns by comparing the syntactic differences of the transaction-reverting statements in the two functions. More details about the analyzer can be found on our project website~\cite{dataset}. \vspace{0.5em} \textbf{Finding 4: } \textit{Transaction-reverting statements in template contracts are commonly customized. Developers are most likely to strengthen transaction-reverting statements by adding clauses, variables, or new transaction-reverting statements.} \vspace{0.5em} Table~\ref{require_fre_patterns} shows the frequency of each customization pattern. In the 175 custom contracts and 52 custom libraries, our analyzer identified 529 occurrences of customization patterns. This indicates that developers commonly customize transaction-reverting statements in template contracts. 50.1\% of the customizations are statement-level changes involving the addition or removal of a transaction-reverting statement. 27.8\% lie at the clause granularity, including adding, deleting, and modifying a clause within the condition of a transaction-reverting statement. It is infrequent for developers to change transaction-reverting statements to other kinds of statements while keeping the condition unchanged (0.8\%). In our dataset, all four cases in this category change transaction-reverting statements into general-purpose \texttt{if} statements. 44.4\% of the customizations fall into the ``add'' category, 31.9\% fall into the ``delete'' category, and 11.9\% fall into the ``change'' category. Besides, 11.7\% of the customizations are cosmetic changes, such as changing the order of clauses, adding or removing a string message, etc. These changes do not alter the semantics of the original transaction-reverting statements. These results show that developers are more likely to strengthen the transaction-reverting statements in template contracts. \vspace{0.5em} \textbf{Finding 5: }\textit{The customized transaction-reverting statements are commonly used for range checks and logic checks.} \vspace{0.5em} We further investigated the purposes of customizing transaction-reverting statements. We randomly sampled 100 customized transaction-reverting statements and manually analyzed their purposes according to the taxonomy in Table~\ref{purpose of use}. \input{tables/RQ3/change_purpose.tex} Table~\ref{change_purpose} shows the analysis results. For the 100 statements, we identified 112 customized clauses and categorized them accordingly. The results indicate that the most frequent purposes of the customized transaction-reverting statements are \textit{range check} (48.2\%), \textit{logic check} (30.4\%), and \textit{address validity check} (6.3\%). This is reasonable since custom contracts may need to deal with use cases different from those encountered by template contracts. They would naturally have different definitions of the validity of runtime values. Figure~\ref{range-check-example} shows an example of a transaction-reverting statement in a custom contract. It is a stake contract that allows EIP20 tokens to be staked. Staking is the process of investing tokens into the network and getting a reward for doing it.
Compared with the EIP20 token template contract, the custom contract adds a transaction-reverting statement to do \textit{Range Check} to ensure that the staked amount provided by a staker is greater than 0, which intends to prevent the \textit{Integer Underflow} vulnerability~\cite{swc101}. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.96, trim=2.5cm 16.3cm 4.3cm 0cm]{4-Empirical_study/example_range_check.pdf} \caption{An example customization with \textit{Range Check} purpose} \label{range-check-example} \end{center} \end{figure} The other 17 (15.2\%) customizations are related to \textit{address authority check}, among which, ten added statements to perform authorization check on addresses and four deleted statements for address authority check. This is also understandable since custom contracts can have customized permission settings for different account types. For example, if there are multiple authorized users with different identities, the custom contract should add new transaction-reverting statements to verify the identity of the transaction sender to prevent unauthorized operations, as shown in Figure~\ref{address-check-example}. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.96, trim=2.5cm 16.6cm 4.3cm 0cm]{4-Empirical_study/example_address_authority_check.pdf} \caption{An example customization with \textit{Address Authority Check} purpose} \label{address-check-example} \end{center} \end{figure} \noindent \setlength{\fboxsep}{0.5em} \fbox{\parbox{0.95\linewidth}{ \textbf{Answer to RQ3:} \textit{Transaction-reverting statements in template contracts are commonly customized when developing smart contracts. Developers tend to strengthen transaction-reverting statements, mainly for logic and range checks.} \vspace{0.5em} \textbf{Implication:} \textit{The customizations of transaction-reverting statements often serve security purposes. Future research may also focus on investigating the security impact of the customizations of transaction-reverting statements.} }} \vspace{0.5em} \subsection{RQ2 (Purpose)}\label{subsec:purpose taxonomy} \textbf{Study Methodology:} To understand the purposes of using transaction-reverting statements, we manually analyzed our collected smart contracts with the following two steps: \textbf{Step 1: Statement selection.} Since there are 67,770 transaction-reverting statements in the 3,866 dapp contracts, it is infeasible to analyze all of them manually. For our study, we randomly selected 382 of these statements, representing the whole set with a confidence level of 95$\%$ and a confidence interval of 5$\%$. For the 270 template contracts, we analyzed all 175 transaction-reverting statements in them. \textbf{Step 2: Constructing the purpose taxonomy.} To understand and categorize the purposes, we first sampled 100 of the 557~(=~382~+~175) transaction-reverting statements for a pilot construction of the taxonomy. Similar to many existing empirical studies, we followed an open coding procedure~\cite{seaman1999qualitative} to inductively create the categories of our taxonomy in a bottom-up manner. Two authors read all the sampled transaction-reverting statements and the corresponding contracts to understand their purposes. The two authors also considered the string arguments of the transaction-reverting statements provided by the contract owners and the comments around the transaction-reverting statements when comprehending the contract code. 
They categorized the 100 statements independently and marked those unclear or insufficient categories. They then discussed and adjusted their category tags during meetings with the help of a third author to resolve conflicts. In this way, we successfully constructed the pilot taxonomy. Based on the coding schema in the pilot taxonomy, the two authors continued to label the remaining 457 transaction-reverting statements for two more iterations. In these two iterations, the two authors went back and forth between categories and transaction-reverting statements to refine the taxonomy. The conflicts of labeling were again discussed during meetings and resolved by the third author. In this way, we adjusted the pilot taxonomy and obtained the final results. We used the Cohen’s Kappa score~\cite{cohen1960coefficient} to measure the agreement between the two authors. The overall score is 0.73, indicating that the two authors had a high agreement on the taxonomy. As shown in Table~\ref{purpose of use}, the final taxonomy is organized into two categories, each of which is further divided into sub-categories. There is no overlap between these sub-categories, i.e., a clause in a transaction-reverting statement can only be classified into one of them. The table also provides illustrative examples collected from our datasets to ease understanding. \input{tables/RQ2/purpose_of_use_table.tex} \vspace{0.5em} \textbf{Finding 3: }\textit{Transaction-reverting statements are commonly used to perform seven types of authority verifications or validity checks.} \vspace{0.5em} \textbf{Authority Verification.} 76 of the 435 clauses in the 382 transaction-reverting statements in the dapp contracts are for \textit{Authority Verification}. The figure for the template contracts is 31 of 175. Authority Verification aims to check whether a given contract address or token ID is authorized by the contract owner for the sake of security: \begin{itemize}[nosep, wide] \item \textit{\textbf{Address Authority Check}} is to check whether a given address, mostly the address of the transaction sender, is authorized by the contract owner. We observed two types of address checks. One is to check whether the given address equals to a specified address. The other is to check whether the given address is within a list of authorized addresses. The proportions of transaction-reverting statements that perform address authority checks in dapp contracts and template contracts are 14.9\% and 16.0\%, respectively. \item \textit{\textbf{Token Verification}}. Tokens are value counters stored in a contract, which are mappings of addresses to account balances. Token verification checks whether a given token ID is authorized, i.e., within the mappings of addresses. The proportions of transaction-reverting statements that perform token verification in dapp contracts and template contracts are 2.5\% and 1.7\%, respectively. \end{itemize} \textbf{Validity Check.} 359 of the 435 clauses in the 382 transaction-reverting statements in the dapp contracts are for validity checks. The figure for the template contracts is 144 of 175. Generally, validity checks are performed to check if certain runtime values are valid, i.e., satisfying pre-defined conditions. We observed five sub-categories of validity checks: \begin{itemize}[nosep, wide] \item \textit{\textbf{Logic Check}} refers to the use of logical operators to check the validity of certain runtime values. 
Such checks are commonly seen in the conditions of transaction-reverting statements, such as checking the return value of a low-level function call, checking the value of a boolean flag, and so on. 37.7\% of dapp contracts and 29.1\% of template contracts contain transaction-reverting statements for logic checks. \item \textit{\textbf{Range Check}} is to check whether a runtime value (e.g., an input) is within a specific range. 29.4\% of dapp contracts and 26.9\% of template contracts contain transaction-reverting statements for range checks. \item \textit{\textbf{Overflow/Underflow Check}} is to check whether an input value exceeds the limit of the prescribed size for a data type. 14.9\% of template contracts contain transaction-reverting statements for overflow/underflow checks, while the ratio is only 3.9\% for dapp contracts. We further investigated the corresponding template contracts and found that many of them adopt the \texttt{SafeMath} library, which provides safe number operations to protect contracts from overflow/underflow vulnerabilities. This also shows that template contracts place more emphasis on security than ordinary dapp contracts. \item \textit{\textbf{Arithmetic Check}} is to check whether the value of a variable violates common constraints in arithmetic operations, such as dividing by 0, taking modulo 0, etc. These checks are performed less frequently compared with the above categories. Only 0.7\% of dapp contracts and 2.3\% of template contracts contain transaction-reverting statements for \textit{arithmetic checks}. \item \textit{\textbf{Address Validity Check}} is to check whether a contract address is valid. Note that this is different from the address authority check discussed above, which is to check whether an address is an authorized one (a valid address may not be authorized). For example, a common address validity check is to check whether a contract address is equal to \texttt{address(0)} in an ether transfer function. When the address is zero, a new contract will be created instead of transferring ether. To avoid such cases, address validity checks should be performed. 10.8\% of dapp contracts and 7.4\% of template contracts contain transaction-reverting statements for address validity checks. \end{itemize} During our manual analysis, three clauses could not be categorized into the above sub-categories. Since they are not common, we do not further discuss them in the paper. From the above analysis, we can see that dapp contracts and template contracts show differences in using transaction-reverting statements. 14.9$\%$ of template contracts contain transaction-reverting statements for overflow/underflow checks, while the percentage in dapp contracts is only 3.9$\%$. Besides, dapp contracts show higher percentages in using transaction-reverting statements for logic checks, range checks, and address validity checks. One possible reason is that developers consider more specific factors when applying smart contracts in the real Ethereum environment, which can be complicated since a smart contract may need to interact with other contracts and user accounts. In contrast, developers of template contracts can only consider general factors and cannot anticipate the specific conditions that may arise in real environments.
\vspace{0.5em} \noindent \setlength{\fboxsep}{0.5em} \fbox{\parbox{0.95\linewidth}{ \textbf{Answer to RQ2:} \textit{Transaction-reverting statements are commonly used to perform authority verifications and validity checks, many of which involve security-critical constraints. Template contracts and dapp contracts have different purposes for using transaction-reverting statements.} \vspace{0.5em} \textbf{Implication:} \textit{Since transaction-reverting statements often check the runtime status of smart contracts against security-critical constraints, it is crucial to ensure the proper use of such statements. Future research can study the vulnerabilities induced by various misuses of transaction-reverting statements and propose detection or repairing techniques to combat such vulnerabilities. } }} \input{tables/RQ2/purpose_of_use_classification_table.tex} \subsection{RQ1 (Prevalence)} \textbf{Study Methodology:} To answer RQ1, we measured the prevalence of transaction-reverting statements in smart contracts. Specifically, we first identified all the transaction-reverting statements in the 3,866 dapp contracts and then computed the code density of these statements. Following existing practices~\cite{yuan2012characterizing,harty2021logging}, we computed code density for transaction-reverting statements as LOC/LOT, where LOC is the lines of code of a contract and LOT is the lines of transaction-reverting statements. Similarly, we also computed the code density for general-purpose \texttt{if} statements and \texttt{if...throw} statements for comparison. Note that we separately analyzed general-purpose \texttt{if}, \texttt{if..throw}, and \texttt{if...revert} statements. When an \texttt{if} statement is used with \texttt{throw} or \texttt{revert}, we will not consider it as a general-purpose \texttt{if} statement since it is used to revert transactions. In addition, we counted the number of transaction-reverting statements within a \texttt{if...throw} or \texttt{if...revert} code block as one. \vspace{0.5em} \textbf{Finding 1: }\textit{In our analyzed smart contracts, transaction-reverting statements are more frequently used than general-purpose \texttt{if} statements.} \vspace{0.5em} Among all the 3,866 contracts, 3,647 (94.3\%) contracts contain transaction-reverting statements, while only 3,399 (87.9\%) contracts contain general-purpose \texttt{if} statements. Table~\ref{code_density} gives the detailed results, where the column \textit{``Total Lines of Statements''} lists the total number of the concerned statements in the whole dataset and the \textit{``Code Density''} column shows the average code density per contract for each type of statement. As shown in the table, transaction-reverting statements are more frequently used than general-purpose \texttt{if} statements in terms of both metrics. On average, there is one transaction-reverting statement per 49.76 lines of code, while general-purpose \texttt{if} statements are used once per 86.92 lines. \input{tables/RQ1/code_density.tex} \vspace{0.5em} \textbf{Finding 2: }\textit{8.6\% of our analyzed smart contracts are still using the deprecated \texttt{if...throw} statements, which may cause unnecessary financial loss to users.} \vspace{0.5em} As explained in Section~\ref{ssec:bg-error-handling-statements}, \texttt{if..throw} statements can also help revert transactions but using them would incur additional costs of gas and induce unnecessary financial loss to the contract users. 
As a result, \texttt{require} and \texttt{if...revert} statements were introduced in Solidity~0.4.10 as replacements, and \texttt{if...throw} was officially deprecated in Solidity~0.4.13 in 2017. However, we found that 332 (8.6\%) of our analyzed smart contracts are still using \texttt{if...throw} statements. Besides, in 252 smart contracts (6.5\%), there exists a mixed use of \texttt{if...throw} and \texttt{require} statements. We further collected the Solidity versions used in these 3,866 contracts. Our results showed that 43 contracts (1.1$\%$) still use Solidity versions before 0.4.10. Such contracts can only use the deprecated \texttt{if...throw} statements to revert transactions. The users who submit transactions to these contracts may suffer from unnecessary gas costs. \vspace{0.5em} \noindent \setlength{\fboxsep}{0.5em} \fbox{\parbox{0.95\linewidth}{ \textbf{Answer to RQ1:} \textit{Transaction-reverting statements are more frequently used in smart contracts than general-purpose \texttt{if} statements. A non-negligible proportion of contracts still use the deprecated \texttt{if...throw} statements, which may incur unnecessary gas consumption when transactions revert.}~ \vspace{0.5em} \textbf{Implication:} \textit{Transaction-reverting statements may play an essential role in assuring the correct execution of transactions. Researchers working on smart contract quality assurance and security analysis should pay more attention to such statements as inappropriately using them may lead to abnormal contract behaviors or financial losses. } }} \vspace{0.5em} \section{Threats to Validity}\label{sec:threats} The validity of our study results may be subject to several threats. First, our selected template contracts may not be sufficiently diverse or representative. To mitigate this threat, we considered the popularity of the templates in the selection process. The four template contract repositories are all widely used by developers on GitHub~\cite{github}. Second, we proposed a taxonomy to categorize the purposes of using transaction-reverting statements in Table~\ref{purpose of use}. There could be other ways to categorize the purposes. To address this threat, we followed the widely-used open coding procedure to derive the results. Third, our study results may be affected by human subjectivity, which is a common problem in qualitative coding~\cite{chandra2019qualitative}. To reduce this threat, we followed the common research practices on manual labeling by involving multiple people. Three authors iterated the labeling process three times to obtain the final taxonomy. This helped improve the reliability and generality of our taxonomy. Our data is also released for public scrutiny~\cite{dataset}. Fourth, we used a code clone detection technique, SmartEmbed~\cite{gao2019smartembed}, to identify custom contracts of template contracts and set the code similarity threshold as 85\% following the experiments in the original paper to reduce false negatives. The chosen code clone technique and threshold may affect the mapping results. Also, the subjects used when investigating RQ3 are limited. We will keep expanding our dataset and try other clone detectors in the future to see if more reliable results can be obtained. Lastly, we used a framework supporting nine smart contract security analyzers to detect vulnerabilities in RQ4. False positives and false negatives can both exist in the results. To reduce the threat, we only kept results for items detected as vulnerable by more than one analyzer.
Besides, the quality of transaction-reverting statements used in our constructed contract dataset may affect the accuracy of the results since our analysis is based on the assumption that the transaction-reverting statements analyzed are correct. We plan to conduct more experiments and analyses in future studies to validate our findings further. \section{Related Work}\label{sec:related_work} \textbf{Error-handling Statements.} Various studies have been conducted to characterize error-handling statements in other areas. Filho et al.~\cite{castor2007extracting} studied the impacts of factors that affect the exception handling code in aspect-oriented programming (AOP) techniques. Tian et al.~\cite{tian2017automatically} conducted a comprehensive study of error-handling bugs and their fixes and implemented \textit{ErrDoc}, a tool to diagnose and repair error-handling bugs in C programs automatically. Some other studies~\cite{weimer2004finding, susskraut2006automatically, lawall2010finding, jana2016automatically, jia2019detecting} automatically detected and patched error-handling bugs using a variety of techniques. Different from the previous studies, our work conducts the first empirical study on transaction-reverting statements (a type of error-handling statements) for Ethereum smart contracts. It reveals the security impact of transaction-reverting statements, which is specific to smart contracts. In terms of smart contracts, several previous studies have discussed the usefulness of transaction-reverting statements for providing defenses for vulnerabilities. Xue et al.~\cite{xue2020cross} showed that the \texttt{require} statement could be used to prevent reentrancy vulnerability. Zhou et al.~\cite{zhou2020ever} observed that most smart contracts implemented defenses via transaction-reverting statements to abort a transaction when noticing an attack. However, these studies only reported the use cases of transaction-reverting statements for specific purposes and did not regard them as their major focuses. In comparison, our work is the first empirical study that systematically characterizes the use of transaction-reverting statements in real-world smart contracts. \textbf{Smart Contract Vulnerability Detection.} In recent years, there have been many studies targeting smart contract vulnerability detection. Static analysis methods inspected the code of smart contracts without executing them. Examples are \textit{Oyente}~\cite{luu2016making}, \textit{Zeus}~\cite{kalra2018zeus}, \textit{Vandal}~\cite{brent2018vandal}, \textit{Securify}~\cite{tsankov2018securify}, \textit{F* Framework}~\cite{bhargavan2016formal}, and \textit{Fether}~\cite{yang2019fether}. Dynamic analysis methods check the runtime behavior of smart contracts to detect vulnerabilities. Nikolic et al.~\cite{nikolic2018finding} employed inter-procedural symbolic analysis and concrete validators for detecting real security vulnerabilities. Ting et al.~\cite{chen2020understanding} constructed three kinds of graphs to characterize major activities on Ethereum and proposed graph-based techniques to detect security issues. While these studies proposed different techniques to detect vulnerabilities in smart contracts, none discussed the security impact of transaction-reverting statements. Our work showed the prevalence of transaction-reverting statements and concluded the security impact of such statements. Our findings can help improve security vulnerability detection techniques for smart contracts. 
\section{Conclusion and Future Work}\label{sec:conclusion} In this work, we present the first empirical study on transaction-reverting statements in Ethereum smart contracts. Through intensive analyses of 3,866 real-world smart contracts and 270 popular template contracts, we showed that transaction-reverting statements are prevalent in smart contracts. They are often used to check the runtime status of smart contracts against security-critical constraints. Our study characterizes the usage of transaction-reverting statements in practice and may shed light on future research in areas such as smart contract security and quality assurance. In the future, we plan to extend our study by investigating the challenges in properly using transaction-reverting statements and identifying security issues induced by the misuse of transaction-reverting statements. We also plan to leverage our findings to improve the security vulnerability detection techniques for smart contracts. \section*{Acknowledgment} This work was supported by the National Natural Science Foundation of China (Grant No. 61932021 and No. 62002125), Hong Kong RGC/GRF (Grant No. 16207120), Hong Kong RGC/RIF (Grant No. R5034-18) and Guangdong Provincial Key Laboratory (Grant No. 2020B121201001). Lili Wei was supported by the Postdoctoral Fellowship Scheme of the Hong Kong Research Grant Council. \bibliographystyle{IEEEtran} \balance
\section{Introduction}\label{sec:intro} \input{content/introduction.tex} \section{Preliminaries and Problem Formulation}\label{sec:prelim} \input{content/prelim.tex} \section{Convergence Analysis}\label{sec:analysis} \input{content/convergence.tex} \section{Algorithm Design}\label{sec:alg} \input{content/algorithm.tex} \section{Experimentation and Evaluation}\label{sec:results} \input{content/evaluation.tex} \section{Related Work}\label{sec:related} \input{content/related.tex} \section{Conclusion}\label{sec:conclusion} \input{content/conclusion.tex} \section{Acknowledgement}\label{sec:Acknowledgement} \input{content/ack.tex} \balance \bibliographystyle{IEEEtran} \subsection{Consensus Distance Estimation}\label{subsec_distance_estimation} We first analyze how the network topology and local updating frequency affect the consensus distance between the model of worker $i$ and the average of all workers' models. According to the update rule in Eq. (\ref{Eq:Update rule 1}) and the definition in Eq. (\ref{Eq:Local Consenus Distance}), the consensus distance $\| \overline{x}^{h+1} - x_i^{h+1}\|_2$ at round $h+1$ can be formulated as: \begin{align} &D^{h+1}_{i} = \| \overline{x}^{h+1} - x_i^{h+1}\|_2 \notag\\ & = \left\| \frac{1}{N}\sum_{j=1}^N x_j^{h, \tau_j^h} - ( x_i^{h, \tau_i^h} + w_{i,j}^{h} \sum_{j=1}^N a_{i,j}^{h} (x_j^{h, \tau_j^h} - x_i^{h, \tau_i^h} ) ) \right\|_2 \notag\\ & = \left\| \sum_{j=1}^N (\frac{x_j^{h, \tau_j^h} - x_i^{h, \tau_i^h}}{N} - w_{i,j}^{h} a_{i,j}^{h} (x_j^{h, \tau_j^h} - x_i^{h, \tau_i^h} ) ) \right\|_2. \end{align} According to $w_{i,j}^{h} = \frac{1}{u_{max}^{h} + 1}$ in Eq. \eqref{Eq:Step size}, we set $u_{max}^{h} = N-1$ for simplicity, which is the maximum possible value \cite{xiao2004fast}. Thus, it follows that: \vspace{-0.5em} \begin{align}\label{Eq:node-pair-dist} \mathbb{E} D^{h+1}_{i} &= \left\| \sum_{j=1}^N \frac{(1 - a_{i, j}^{h})(x_j^{h, \tau_j^h} - x_i^{h, \tau_i^h})}{N} \right\|_2 \notag\\ & \le \frac{1}{N}\sum_{j=1}^N (1 - a_{i, j}^{h}) D_{i, j}^{h}\mbox{,} \end{align} where $D_{i, j}^{h} = \| x_i^{h, \tau_i^h} - x_j^{h, \tau_j^h}\|_2$ ($\forall i, j \in [N] $) is the consensus distance between the models of worker $i$ and worker $j$. The last step of Eq. \eqref{Eq:node-pair-dist} follows from the triangle inequality. After receiving the local models of its neighbors, worker $i$ can locally calculate the consensus distance $D_{i, j}^{h}$, $\forall j \in \mathcal{N}_i^{h}$. As a result, the upper bound of the average consensus distance in Eq. \eqref{Eq:Avg Consenus Distance} can be expressed as: \vspace{-0.6em} \begin{equation} \label{Eq:Distance upper bound} \mathbb{E} D^{h+1} \le \frac{1}{N^2} \sum_{i=1}^N \sum_{j=1}^N (1 - a_{i, j}^{h}) D_{i, j}^{h}. \end{equation} Note that when we set $a_{i, j}^{h} = 1$, $\forall i, j \in [N]$, the upper bound of the average consensus distance $D^{h+1}$ is 0, \textit{i.e.}\xspace, if each worker receives local models from all others, the updated models among workers are identical. To solve the problem in Eq. \eqref{problem} with Eq. \eqref{Eq:Distance upper bound}, we still need to know the consensus distances among the models of all workers. However, if worker $i$ and worker $j$ are not connected at round $h$, it is infeasible to obtain their consensus distance directly since each worker only receives local models from its neighbors. Thus, we need to estimate the consensus distance between unconnected workers with the help of the distances among connected workers.
Firstly, when the coordinator has collected the consensus distances $D_{i, p}^{h}$ and $D_{p, j}^{h}$, $\forall p \in [N] \setminus \{i, j\}$, $D_{i, j}^{h}$ can be estimated as: \begin{align} \label{Eq:consensus estimation 2} D_{i, j}^{h} &= \left \| x_i^{h, \tau_i^h} - x_p^{h, \tau_p^h} + x_p^{h, \tau_p^h} - x_j^{h, \tau_j^h} \right \|_2 \notag\\ &\le \left \| x_i^{h, \tau_i^h} - x_p^{h, \tau_p^h} \right \|_2 + \left \| x_p^{h, \tau_p^h} - x_j^{h, \tau_j^h} \right \|_2 \notag\\ &= D_{i, p}^{h} + D_{p, j}^{h}\mbox{,} \end{align} where the second step follows the triangle inequality. Thus, we can estimate $D_{i, j}^{h}$ as $\hat{D}_{i, j}^{h}$: \begin{equation} \label{Eq:minimum dist} \hat{D}_{i, j}^{h} = \min_{p \in [N] \setminus \{i, j\}} (D_{i, p}^{h} + D_{p, j}^{h}). \end{equation} Secondly, if there is no common neighbor between worker $i$ and worker $j$ at round $h$ (\textit{i.e.}\xspace, $\mathcal{N}_i^{h} \cap \mathcal{N}_j^{h} = \emptyset$), we can use Eq. (\ref{Eq:consensus estimation 2}) and Eq. (\ref{Eq:minimum dist}) iteratively to obtain $\hat{D}_{i, j}^{h}$. Since the network topology is a connected graph, the above problem is equivalent to the shortest path problem, which can be solved efficiently by the Floyd-Warshall algorithm \cite{black1998dictionary} at the coordinator. As the triangle inequality may amplify the consensus distance among workers, the historical consensus distance is used to make our estimation more stable and accurate. Specifically, we use the exponential moving average to smooth the consensus distance, with $\beta_1 \in [0, 1]$, as follows: \begin{equation} \label{Eq:consensus estimation 3} D_{i,j}^{h} = (1 - \beta_1)D_{i,j}^{h-1} + \beta_1 \hat{D}_{i,j}^{h},\ \text{if}\ a_{i, j}^{h} = 0. \end{equation} \subsection{Algorithm Description} \label{sec:alg1} Firstly, to minimize the average waiting time of all workers, we let the $t_i^h$ among workers be approximately equal. Then we have the following formulation: \begin{equation}\label{tau_max} \lfloor \frac{\tau_{l}^{h} \cdot \mu_l^h+\max \{\beta_{l,j}^h\}}{\tau_{i}^{h} \cdot\mu_i^h+\max \{\beta_{i,j}^h\}} \rfloor = 1\mbox{,} \end{equation} where $l$ denotes the index of the fastest worker with the largest local updating frequency at round $h$. Thus, $\tau = \tau_{l}^{h}$. Then the total training time can be formulated as follows: \begin{equation}\label{eq:round_time} T(H,\tau)=\sum_{h=1}^H (\tau \cdot \mu_l^h+ \max \{\beta_{l,j}^h\}). \end{equation} Secondly, the problem in Eq. \eqref{problem} is a non-linear mixed integer programming problem, which is hard to solve \cite{karp1972reducibility, papadimitriou1982complexity}. However, given a specific network topology, we can take the upper bound of $D^{h+1}$ in Eq. (\ref{Eq:Distance upper bound}) as its estimate and transform Eq. \eqref{problem} into a linear programming problem as: \vspace{0.1cm} \centerline{$\min T(H,\tau)$} \vspace{-0.3cm} \begin{equation}\label{eq:DFLproblem-refor} s.t. \begin{cases} \frac{1}{N^2} \sum_{i=1}^N \sum_{j=1}^N (1 - a_{i, j}^{h}) D_{i, j}^{h} \le D_{max}^h \vspace{2mm}\\ \lfloor \frac{\tau_{l}^{h} \cdot \mu_l^h+\max \{\beta_{l,j}^h\}}{\tau_{i}^{h} \cdot\mu_i^h+\max \{\beta_{i,j}^h\}} \rfloor = 1 \end{cases} \end{equation} Based on Eq. \eqref{eq:DFLproblem-refor}, we propose an efficient algorithm that adaptively determines the local updating frequency for each worker and constructs the network topology. The coordinator is responsible for monitoring the network conditions and recording the model training statuses.
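Before detailing the procedures, we illustrate the consensus distance estimation in Sec.~\ref{subsec_distance_estimation} with a minimal NumPy sketch: it fills in the missing pairwise distances via the Floyd-Warshall recursion behind Eq.~(\ref{Eq:minimum dist}), smooths them with the exponential moving average in Eq.~(\ref{Eq:consensus estimation 3}), and evaluates the bound in Eq.~(\ref{Eq:Distance upper bound}). The function and variable names are illustrative, and $\beta_1 = 0.5$ is an assumed smoothing factor rather than a recommended setting.
\begin{verbatim}
import numpy as np

def estimate_consensus_distances(D_obs, A, D_prev=None, beta1=0.5):
    """Coordinator-side estimation of the pairwise consensus distances.

    D_obs : (N, N) array holding the measured D_{i,j}^h for connected
            pairs (a_{i,j}^h = 1); entries of unconnected pairs are ignored.
    A     : (N, N) 0/1 adjacency matrix of the current topology.
    D_prev: previous-round estimates, used for the EMA smoothing.
    beta1 : assumed smoothing factor in [0, 1].
    """
    N = D_obs.shape[0]
    # Start from the observed distances on existing links; unknown pairs
    # are initialized to +inf.
    D = np.where(A == 1, D_obs, np.inf)
    np.fill_diagonal(D, 0.0)
    # Floyd-Warshall: repeatedly apply the triangle-inequality estimate
    # D_{i,j} <= min_p (D_{i,p} + D_{p,j}) until all pairs are covered.
    for p in range(N):
        D = np.minimum(D, D[:, [p]] + D[[p], :])
    # Smooth the estimates of unconnected pairs with the previous round.
    if D_prev is not None:
        mask = (A == 0)
        D[mask] = (1 - beta1) * D_prev[mask] + beta1 * D[mask]
    return D

def average_distance_bound(D, A):
    """Upper bound on the average consensus distance of the next round."""
    N = A.shape[0]
    return ((1 - A) * D).sum() / (N ** 2)
\end{verbatim}
Given the matrix returned by such a routine, the coordinator can check the constraint $D^{h+1} \le D_{max}^h$ for any candidate topology before committing to it.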
We present the procedure for workers (Alg. \ref{alg:clients}) and the coordinator (Alg. \ref{alg:coordinator}) while the proposed algorithm is formally described in Alg. \ref{alg:algorithm}. In Alg. 1, at the beginning of round $h$, each worker $i$ requests the information about its neighbor set $\mathcal{N}_i^{h}$ and local updating frequency $\tau_i^h$ from the coordinator. Then worker $i$ performs local updating of $\tau_i^h$ times by Eq. \eqref{Eq:Update rule 2} and estimates the parameters $L_{i}$ and $\sigma_i$. After local updating is finished, worker $i$ sends the local model to its neighbors and waits for receiving the models from its neighbors for aggregation. The local updating frequency of each worker is associated with its computing and communicating capabilities. For instance, the workers with high performance are assigned with larger local updating frequencies, so that each worker does not need to waste too much waiting time. After receiving models from the neighbors, worker $i$ computes consensus distance $D_{i, j}^{h}$, $\forall j \in \mathcal{N}_i^{h}$. Finally, worker $i$ sends network conditions, model training statuses, and other parameters to the coordinator and starts the next communication round. In Alg. \ref{alg:coordinator}, the coordinator waits for receiving the parameters (\textit{i.e.}\xspace, $L_i$ and $\sigma_i$), consensus distance (\textit{i.e.}\xspace, $D_{i, j}^{h}$), computing time (\textit{i.e.}\xspace, $\mu_i^{h}$) and communication time (\textit{i.e.}\xspace, $\beta_{i,j}^{h}$) from workers, and takes average of parameters $L_i$ and $\sigma_i$ to get $L$ and $\sigma$. Then the coordinator calls Alg. \ref{alg:algorithm} to get local updating frequencies and network topology of different workers for the next communication round. \begin{algorithm}[!t] \caption{Procedure at worker $i$}\label{alg:clients} \begin{algorithmic}[1] \For {$h=1$ to $H$} \State Receive $\mathcal{N}_i^{h}$ and $\tau_i^h$ from the coordinator; \State Perform local updating of $\tau_i^h$ times by Eq. \eqref{Eq:Update rule 2}; \State {Estimate $L_{i} \leftarrow \frac{\|\nabla f_{i}(x_i^{h+1})-\nabla f_{i}(x_i^{h})\|}{\|x_i^{h+1}-x_i^{h}\|}$}; \State {Estimate $\sigma_{i} \leftarrow \mathbb{E}\left[\|\nabla F_{i}(x_i^h, \xi_{i}^{h})-\nabla f_{i}(x_i^h)\|^{2}\right]$}; \State Send local model to workers in $\mathcal{N}_i^{h}$; \State Receive models from workers in $\mathcal{N}_i^{h}$; \State Aggregate models by Eq. (\ref{Eq:Update rule 1}) and obtain $x_i^{h+1}$; \State Record computing time $\mu_i^{h}$ and communication time $\beta_{i,j}^{h}$, $\forall j \in \mathcal{N}_i^{h}$; \State Compute consensus distance $D_{i, j}^{h}$, $\forall j \in \mathcal{N}_i^{h}$; \State Send $\mu_i^{h}$, $\beta_{i,j}^{h}$, $D_{i, j}^{h}$, $L_i$, $\sigma_i$ to the coordinator; \EndFor \end{algorithmic} \begin{flushleft} {\bf Output:} $x_i^{H}$. \end{flushleft} \end{algorithm} \begin{algorithm}[!t] \caption{Procedure at coordinator}\label{alg:coordinator} \begin{algorithmic}[1] \For {$h=1$ to $H$} \State Send $\mathcal{N}_i^{h}$ and $\tau_i^h$ to worker $i$, $\forall i \in [N]$; \State Receive $\mu_i^{h}$, $\beta_{i,j}^{h}$, $D_{i, j}^{h}$, $L_i$, $\sigma_i$ from worker $i$, $\forall i \in [N]$; \State $L \leftarrow \frac{1}{N} \sum_i^N L_i$; \State $\sigma \leftarrow \frac{1}{N} \sum_i^N \sigma_i$; \State Determine the local updating frequency and network topology for each worker by the proposed algorithm in Alg. 
\ref{alg:algorithm}; \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{Adaptive control algorithm of FedHP}\label{alg:algorithm} \begin{flushleft} {\bf Input:} $\mu_i^h$, $D_{i,j}^h$, $\beta_{i,j}^h$, $\forall i, j \in [N]$; $L$, $\sigma$; $D_{max}^{h}$; $\mathbf{A}_{b}$. \end{flushleft} \begin{algorithmic}[1] \State Initialize the adjacency matrix $\mathbf{A}^{h} = \mathbf{A}_{b}$, search step $s=N$ and $Flag = True$; \State Minimize $T_i(H, \tau_i^h) = \sum_{h=1}^H (\tau_{i}^{h} \cdot \mu_i^h+ \max \{\beta_{i,j}^h\})$ and obtain $T_i$ and $\tau_i^h$ of worker $i$, $\forall i\in [N]$; \State $l \leftarrow \arg \min_{i} (T_i)$, $T \leftarrow T_l$ and $\tau \leftarrow \tau_l$; \While{$True$} \If{$Flag$} \State $s = \lfloor \sqrt{\sum_{i,j}a_{i,j}^{h}} \rfloor$; \Else \State $s = \lfloor s / 2 \rfloor$; \EndIf \State Select the $s$ slowest links under the threshold of Eq. \eqref{eq:DFLproblem-refor} into $E$; \State Initialize $\mathbf{A}^{\prime} \leftarrow \mathbf{A}^{h}$; \For{each link $e_{i,j} \in E$} \State Set $a_{i, j} \in \mathbf{A}^{\prime}$ as $0$; \If{$\mathbf{A}^{\prime}$ is not connected} \State Set $a_{i, j} \in \mathbf{A}^{\prime}$ as $1$; \EndIf \EndFor \State Minimize $T_i(H, \tau_i^h) = \sum_{h=1}^H (\tau_{i}^{h} \cdot \mu_i^h+ \max \{\beta_{i,j}^h\})$ and obtain $T_i$ and $\tau_i^h$ of worker $i$, $\forall i\in [N]$; \State $l^{\prime} \leftarrow \arg \min_{i} (T_i)$, $T^{\prime} \leftarrow T_{l^{\prime}}$ and $\tau \leftarrow \tau_{l^{\prime}}$; \If{$T^{\prime} < T$} \State $l$, $T$, $\tau$, $\mathbf{A}^{h}$, $Flag$ $\leftarrow$ $l^{\prime}$, $T^{\prime}$, $\tau_{l^{\prime}}$, $\mathbf{A}^{\prime}$, $True$; \Else \State $Flag \leftarrow False$; \EndIf \If{not $Flag$ and $s==1$} \State Break; \EndIf \EndWhile \State Calculate $\tau_i^h$ for each worker by Eq. \eqref{tau_max}, where $\tau_l^h=\tau$; \end{algorithmic} \begin{flushleft} {\bf Output:} $\tau_i^h$, $\forall i \in [N]$, $\mathbf{A}^{h}$. \end{flushleft} \end{algorithm} As indicated in Eq. \eqref{eq:round_time}, the completion time of model training depends on the slowest link and the slowest worker. Thus, we mainly use a greedy algorithm to remove the slow links in the current network topology to reduce the completion time under the consensus distance threshold in Eq. \eqref{eq:DFLproblem-refor}. The procedure executes iteratively until the completion time cannot be reduced by removing any more slow links. Specifically, we take the network conditions, model training statuses of workers, and other parameters as the algorithm input. Firstly, we start from the base topology (\textit{i.e.}\xspace, $\mathbf{A}_{b}$), which includes all available links for P2P communication. Then we set $\tau_i^h=\sqrt{\frac{N f(\overline{x}^{1})}{L H \eta^2 \sigma^2}}$ and minimize $T_i(H,\tau_i^h)$ by using an LP solver to obtain $T_i$ and $\tau_i^h$ for worker $i$, $\forall i \in [N]$. We obtain the minimum completion time $T_l$ in the base topology and get the local updating frequency $\tau_l^h$ of worker $l$ at round $h$ (Line 1-3), where $l=\arg \min_{i} (T_i)$. In order to search for the optimal topology and local updating frequencies efficiently, we first take a large search step. Concretely, we set the search step $s$ as the square root of the number of links in the current topology (Line 5-6). At round $h$, since the slow links may become the system bottleneck in terms of time, we use a greedy algorithm to remove the $s$ slowest links and obtain the new network topology $A^{\prime}$ (Line 10-14).
Then we minimize $T_i(H,\tau_i^h)$ again to obtain the new minimum of completion time $T_{l^{\prime}}$ in the new topology and get the new local updating frequency $\tau_{l^{\prime}}$ (Line 15-16). If a better solution (\textit{i.e.}\xspace, shorter completion time) is found, the current network topology and local updating frequency are updated (Line 17-18). If we cannot find a better solution at the current search step, the search step is reduced by half. If the completion time $T$ cannot be further reduced by removing any link, we stop searching and obtain the final network topology as well as local updating frequency of worker $l$. It is worth noting that we only remove the links that will not affect the connectivity of the network topology and exceed the constraint of consensus distance $D_{max}^{h}$ in Eq. \eqref{eq:DFLproblem-refor}. In our algorithm, we follow \cite{lin2021on} to set the threshold of $D_{max}^{h}$ adaptively. Specifically, $D_{max}^{h}$ is the exponential moving average of the gradient norm: \begin{equation}\label{eq:ema-norm} D_{max}^{h} = (1-\beta_2) D_{max}^{h-1} + \frac{\beta_2}{N }\sum_{i=1}^N \left\| g_i^{h}\right\|_2\mbox{,} \end{equation} where $\frac{1}{N}\sum_{i=1}^{N} \left\| g_i^{h}\right\|_2$ denotes the average norm of local updates at round $h$ among all workers and $\beta_2 \in [0, 1]$. Herein, we analyze the time complexity of Alg. \ref{alg:algorithm}. As described above, the proposed algorithm reduces the search step $s$ by half if a better solution cannot be found at the current search step. As a result, there are at most $\lceil \log N \rceil$ iterations, where $N$ is the number of workers. In each iteration, the linear programming can be solved in polynomial time according to \cite{spielman2004smoothed}. Actually, since the base topology in real world is usually sparse, the practical time cost for Alg. \ref{alg:algorithm} will be further reduced at the coordinator, which is usually deployed in cloud or cloudlet with high computing power. Therefore, the time for solving the joint optimization problem can be negligible, compared with that for model training and transmission. \subsection{Datasets and Models}\label{dataset} \textbf{Datasets:} We conduct extensive experiments on three real-world datasets: (\romannumeral1) EMNIST, (\romannumeral2) CIFAR-10, and (\romannumeral3) ImageNet. Specifically, EMNIST \cite{cohen2017emnist} is a handwritten character dataset that contains 731,668 training samples and 82,587 test samples from 62 categories (10 digits, 52 characters with lowercase and uppercase). CIFAR-10 is an image dataset composed of 60,000 32$\times$32 colour images (50,000 for training and 10,000 for test) in 10 categories. ImageNet \cite{russakovsky2015imagenet} is a dataset for visual recognition which consists of 1,281,167 training images, 50,000 validation images and 100,000 test images from 1,000 categories. To cope with the constrained resource of edge devices, we create IMAGE-100, a subset of ImageNet that contains 100 out of 1,000 categories, and each sample is resized with the shape of 64$\times$64$\times$3. To simulate the non-IID setting, we propose to create synthesized non-IID datasets with different \textit{class distribution skews} as in \cite{zhao2018federated,wang2020optimizing}, \textit{e.g.}\xspace, a single user can possess more data for one class or a couple of classes than others. 
Concretely, a fraction $p$ (\textit{e.g.}\xspace, 0.1, 0.2, 0.4, 0.6 and 0.8) of each class is divided equally among every three workers, and the remaining samples of each class are partitioned uniformly to the other workers. Accordingly, the non-IID levels of the above datasets are denoted as 0.1, 0.2, 0.4, 0.6 and 0.8, respectively. Note that $p$ = 0.1 is a special case, where the training data distribution is IID across the 30 workers. For fair comparisons, the full test datasets are used across all workers. \textbf{Models:} Three models with different types and structures are implemented on the above three real-world datasets for performance evaluation: (\romannumeral1) CNN on EMNIST, (\romannumeral2) AlexNet on CIFAR-10, and (\romannumeral3) VGG-16 on IMAGE-100. Firstly, the plain CNN model \cite{mcmahan2017communication} specialized for the EMNIST dataset has two 5$\times$5 convolutional layers, a fully-connected layer with 512 units, and a softmax output layer with 62 units. Secondly, an 8-layer AlexNet \cite{krizhevsky2012imagenet}, which is composed of three 3$\times$3 convolutional layers, one 7$\times$7 convolutional layer, one 11$\times$11 convolutional layer, two fully-connected hidden layers, and one fully-connected output layer, is adopted for CIFAR-10. Thirdly, the well-known VGG-16 model \cite{simonyan2014very}, which consists of 13 convolutional layers with 3$\times$3 kernels, two dense layers, and a softmax output layer, is utilized to classify the images in IMAGE-100. \subsection{Baselines and Metrics}\label{baselines} \textbf{Baselines:} We choose four classical algorithms as baselines for performance comparison, which are summarized as follows. (\romannumeral1) D-PSGD \cite{lian2017can} is a synchronous DFL algorithm using a ring network topology and the same local updating frequency for all workers. (\romannumeral2) AD-PSGD \cite{lian2018asynchronous} is an asynchronous DFL algorithm, where workers randomly send local models to one of their neighbors immediately after performing local updating to speed up the training process. (\romannumeral3) LD-SGD \cite{li2019communication} alternates the frequencies of local updating and global updating for efficient decentralized communication. (\romannumeral4) PENS \cite{onoszko2021decentralized}, which adapts the network topology, allows workers with similar data distributions to communicate with each other to deal with statistical heterogeneity. \textbf{Metrics:} The following metrics are adopted to evaluate the performance of FedHP and the baselines. (\romannumeral1) \textit{Test accuracy} is measured as the proportion of test samples that are correctly predicted by the model. Specifically, at each communication round, we evaluate the average test accuracy of all workers' models trained with different algorithms on the test datasets. (\romannumeral2) \textit{Completion time} is defined as the total training time until the average model of all workers converges to the target accuracy. Concretely, we record the completion time of each communication round and sum them up to get the total training time. (\romannumeral3) \textit{Average waiting time} is introduced to reflect the training efficiency of different algorithms. Specifically, the waiting time of worker $i$ at round $h$ can be represented by $t^h-t_i^h$, and the average waiting time of all workers at round $h$ is expressed as $\frac{1}{N} \sum_{i=1}^{N}(t^h-t_i^h)$.
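As a concrete reading of these metrics, the following sketch shows how they could be computed from per-round logs. It assumes that $t_i^h$ denotes the time at which worker $i$ finishes its computation and communication in round $h$ and that $t^h$ is the round completion time (the maximum over all workers); this interpretation and all names in the sketch are illustrative assumptions rather than part of our implementation.
\begin{verbatim}
import numpy as np

def round_metrics(finish_times):
    """Per-round timing metrics from the workers' local finishing times.

    finish_times : length-N array, where finish_times[i] plays the role
                   of t_i^h in the text.
    """
    t_round = finish_times.max()                # round completion time t^h
    avg_wait = (t_round - finish_times).mean()  # (1/N) * sum_i (t^h - t_i^h)
    return t_round, avg_wait

def completion_time(per_round_acc, per_round_time, target_acc):
    """Total training time until the average accuracy reaches the target."""
    elapsed = 0.0
    for acc, t in zip(per_round_acc, per_round_time):
        elapsed += t
        if acc >= target_acc:
            return elapsed
    return None  # the target accuracy is not reached in the recorded rounds
\end{verbatim}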
\subsection{Experiments}\label{simulation} \subsubsection{Experimental Setup} We evaluate the performance of FedHP through extensive simulation experiments, which are conducted on an AMAX deep learning workstation equipped with an Intel(R) Xeon(R) Gold 5218R CPU, 8 NVIDIA GeForce RTX 3090 GPUs and 256 GB RAM. On the workstation, we simulate a heterogeneous EC system with 30 workers and one coordinator (each is implemented as a process in the system) for DFL. The implementation for model training on each worker is based on the PyTorch framework \cite{paszke2019pytorch}, and we use the socket library of Python to build up the communication among workers and between workers and the coordinator. We consider the common situation where each worker communicates with its neighbors and coordinator through either LANs or WANs. To reflect the heterogeneity and dynamics of networks in our simulations, we let the bandwidth of each worker fluctuate between 1Mb/s and 10Mb/s. In addition, for simulating the computing heterogeneity, we assume that the computing time of one local iteration on a certain simulated worker is subject to the Gaussian distribution. Different simulated workers are randomly assigned with a specific Gaussian function whose mean and variance are derived from the time records of performing one local iteration on a commercial device (\textit{e.g.}\xspace, laptop, Jetson TX, Xavier NX). Each experiment will by default run 200, 500, and 500 communication rounds for EMNIST, CIFAR-10 and IMAGE-100, respectively, which will guarantee the convergence of the models. For CNN on EMNIST, the learning rate is initialized as 0.1 and the corresponding decay rate is specified as 0.98, while for AlexNet on CIFAR-10 and VGG-16 on IMAGE-100, the learning rates and the corresponding decay rates of them are identical, separately initialized as 0.1 and 0.993 \cite{xu2022adaptive}. Besides, the batch size is set as 32 for all three models. 
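To make the simulated heterogeneity concrete, the following sketch mimics the setup described above: the computing time of one local iteration on each worker is drawn from a worker-specific Gaussian distribution, and the link bandwidth fluctuates between 1Mb/s and 10Mb/s. The Gaussian parameters, the model size, and all names are illustrative assumptions rather than the profiles actually measured on the commercial devices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 30                               # number of simulated workers
# Assumed per-worker Gaussian parameters (seconds per local iteration),
# standing in for the profiles measured on real devices.
mu = rng.uniform(0.05, 0.5, size=N)  # mean computing time per iteration
sd = 0.1 * mu                        # standard deviation

def simulate_round(tau, model_size_mbit=20.0):
    """Simulate one round's per-worker computing and communication times.

    tau            : length-N array of local updating frequencies tau_i^h.
    model_size_mbit: assumed size of the exchanged model in megabits.
    """
    # Computing time: tau_i local iterations, each Gaussian-distributed.
    comp = np.maximum(rng.normal(mu, sd, size=N), 1e-3) * tau
    # Communication time: bandwidth fluctuates between 1 and 10 Mb/s.
    bandwidth = rng.uniform(1.0, 10.0, size=N)
    comm = model_size_mbit / bandwidth
    return comp, comm

comp, comm = simulate_round(tau=np.full(N, 5))
print("slowest worker finishes after %.2f s" % (comp + comm).max())
\end{verbatim}
Re-sampling the bandwidth and computing times at every round reproduces the dynamics that FedHP reacts to when it reassigns local updating frequencies and rebuilds the topology.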
\begin{figure}[t] \centering \subfigure[EMNIST] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/EMNIST_time_acc_iid} \label{fig:EMNIST-IID} } \subfigure[CIFAR-10] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/Cifar10_time_acc_iid} \label{fig:CIFAR10-IID} } \subfigure[IMAGE-100] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/IMAGE100_time_acc_iid} \label{fig:IMAGE100-IID} } \caption{Test accuracy of five algorithms on the three IID datasets.} \label{fig:IID} \vspace{-0.6em} \end{figure} \begin{figure}[t] \centering \subfigure[EMNIST] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/EMNIST_time} \label{fig:EMNIST_time} } \subfigure[CIFAR-10] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/Cifar10_time} \label{fig:CIFAR10_time} } \subfigure[IMAGE-100] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/IMAGE100_time} \label{fig:IMAGE100_time} } \caption{Completion time of five algorithms when achieving different target accuracy} \label{fig:completion_time} \vspace{-0.9em} \end{figure} \begin{figure}[t] \centering \subfigure[EMNIST] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/EMNIST_time_acc_noniid06} \label{fig:EMNIST-non-IID0.6} } \subfigure[CIFAR-10] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/Cifar10_time_acc_noniid06} \label{fig:CIFAR10-non-IID0.6} } \subfigure[IMAGE-100] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/IMAGE100_time_acc_noniid06} \label{fig:IMAGE100-non-IID0.6} } \caption{Test accuracy of five algorithms on the three datasets with non-IID level $p$=0.6.} \label{fig:non-IID0.6} \vspace{-0.9em} \end{figure} \begin{figure}[t] \centering \subfigure[EMNIST] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/EMNIST_time_acc_noniid08} \label{fig:EMNIST-non-IID0.8} } \subfigure[CIFAR-10] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/Cifar10_time_acc_noniid08} \label{fig:CIFAR10-non-IID0.8} } \subfigure[IMAGE-100] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/IMAGE100_time_acc_noniid08} \label{fig:IMAGE100-non-IID0.8} } \caption{Test accuracy of five algorithms on the three datasets with non-IID level $p$=0.8.} \label{fig:non-IID0.8} \vspace{-0.9em} \end{figure} \begin{figure}[t] \centering \subfigure[EMNIST] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/EMNIST_noniid} \label{fig:EMNIST-non-IID_level} } \subfigure[CIFAR-10] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/Cifar10_noniid} } \subfigure[IMAGE-100] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/IMAGE100_noniid} \label{fig:IMAGE100-non-IID_level} } \caption{Test accuracy varies with different non-IID levels.} \label{fig:non-IID_level} \vspace{-0.9em} \end{figure} \begin{figure}[t] \centering \subfigure[EMNIST] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/EMNIST_waiting_time} } \subfigure[CIFAR-10] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/Cifar10_waiting_time} \label{fig:CIFAR10-waiting_time} } \subfigure[IMAGE-100] { \includegraphics[width=0.29\linewidth,height=2.3cm]{fig/IMAGE100_waiting_time} \label{fig:ImageNet-waiting_time} } \caption{Average waiting time of five algorithms on the three datasets.} \label{fig:waiting_time} \vspace{-0.9em} \end{figure} \subsubsection{Overall Effectiveness} Firstly, we implement a set of experiments of these algorithms on the IID datasets. The training processes of FedHP and the baselines are presented in Fig. \ref{fig:IID}. 
In addition, we show the completion time of different algorithms when they achieve different target accuracy in Fig. \ref{fig:completion_time}. The results demonstrate that all the algorithms achieve the similar test accuracy eventually. FedHP achieves the fastest convergence, followed by AD-PSGD on all the three datasets, and they are much faster than the other methods. For example, by Figs. \ref{fig:EMNIST-IID} and \ref{fig:EMNIST_time}, FedHP takes 1,064s to achieve 85\% accuracy for CNN on EMNIST, while PENS, LD-SGD, AD-PSGD, D-PSGD, takes 2,725s, 1,680s, 1,129s, 2,254s, respectively. Besides, by Figs. \ref{fig:CIFAR10-IID} and \ref{fig:CIFAR10_time}, FedHP reduces the completion time of training AlexNet by about 56\%, 41\%, 3\% and 51\%, compared with PENS, LD-SGD, AD-PSGD and D-PSGD. Moreover, for VGG-16 on IMAGE-100 as shown in Figs. \ref{fig:IMAGE100-IID} and \ref{fig:IMAGE100_time}, FedHP can separately speed up training by about 2.17$\times$, 1.65$\times$, 1.06$\times$ and 2.07$\times$, compared with PENS, LD-SGD, AD-PSGD and D-PSGD. These results demonstrate the advantage of FedHP in accelerating model training. Secondly, we implement two sets of experiments of these algorithms on non-IID datasets. The results of non-IID scenarios with $p$=0.6 and $p$=0.8 are presented in Fig. \ref{fig:non-IID0.6} and Fig. \ref{fig:non-IID0.8}, respectively. We observe that FedHP can achieve the same convergence rate as that in the IID scenario while achieving higher accuracy than the other methods. For example, by Fig. \ref{fig:CIFAR10-non-IID0.6}, FedHP takes 5,015s to achieve 76.77\% accuracy for AlexNet on CIFAR-10, while PENS, LD-SGD, AD-PSGD and D-PSGD takes 11,953s, 8,926s, 5,539s and 10,634s to achieve 73.52\%, 70.54\%, 69.29\% and 70.35\% accuracy, respectively. By Fig. \ref{fig:CIFAR10-non-IID0.8}, FedHP can improve the test accuracy by about 4.83\%, 13.37\%, 14.26\% and 13.52\% on CIFAR-10 with non-IID level of $p$=0.8, compared with PENS, LD-SGD, AD-PSGD and D-PSGD. The above results indicate the effectiveness of FedHP by adaptively assigning appropriate local updating frequencies and constructing network topology for heterogeneous workers. \subsubsection{Effect of Statistical Heterogeneity} To demonstrate the robustness of FedHP to non-IID data, we show the test accuracies of these algorithms at different non-IID levels in Fig. \ref{fig:non-IID_level}, where the horizontal axis denotes the non-IID level of the datasets. By Fig.\ref{fig:non-IID_level}, we observe that the test accuracies of models trained by the five algorithms on all datasets decrease with the increasing of non-IID level. However, FedHP can always achieve the highest model accuracy in comparison with the other algorithms. In addition, PENS with performance-based neighbor selection can achieve higher model accuracy than the algorithms without considering the challenge of statistical heterogeneity. For instance, by Fig. \ref{fig:IMAGE100-non-IID_level}, FedHP and PENS achieve 50.63\% and 47.81\% accuracy on IMAGE-100 with non-IID level of $p$=0.8, while LD-SGD, AD-PSGD and D-PSGD achieve 45.69\%, 45.12\% and 45.83\%, respectively. In AD-PSGD, each worker probably receives the stale models for aggregation, which amplifies the negative impact of non-IID data on model performance, leading to the lowest test accuracy. Both D-PSGD and LD-SGD adopt static network topologies without considering the challenge of statistical heterogeneity on model training, thus they suffer from severe loss of accuracy. 
Although PENS allows workers with similar data distributions to communicate with each other in order to deal with the statistical heterogeneity, it still achieves a lower test accuracy than FedHP. More specifically, by Fig. \ref{fig:IMAGE100-non-IID_level}, FedHP can achieve improvement of test accuracy by about 5.90\%, 10.81\%, 12.22\%, 10.47\% for VGG-16 on IMAGE-100 with non-IID level of $p$=0.8, compared with the baselines (\textit{i.e.}\xspace, AD-PSGD, LD-SGD, D-PSGD, PENS). Collectively, these results demonstrate the advantage of FedHP in addressing the challenge of statistical heterogeneity. \subsubsection{Effect of System Heterogeneity} To further illustrate the efficiency of FedHP, the average waiting time of five algorithms on the three datasets is illustrated in Fig. \ref{fig:waiting_time}, where we find that FedHP takes much less waiting time than both D-PSGD and PENS. For instance, by Fig. \ref{fig:CIFAR10-waiting_time}, the average waiting time of FedHP is 1.7s while PENS and D-PSGD incur average waiting time of 12.1s and 10.6s, respectively. That is because both D-PSGD and PENS assign identical local updating frequencies for workers without considering system heterogeneity, resulting in non-negligible waiting time. In addition, PENS always suffers from more computing time for neighbor selection and model training, incurring the highest average waiting time among five algorithms. As shown in Fig. \ref{fig:waiting_time}, the average waiting time of AD-PSGD is the lowest among these algorithms, because in the asynchronous scenario, workers update their local models as soon as they receive any models from their neighbors. Besides, LD-SGD, implemented to alternate the frequencies of local updating and global updating, reduces the variance of waiting time to some extent. Concretely, by Fig. \ref{fig:ImageNet-waiting_time}, FedHP and AD-PSGD only incur average waiting time of 3.2s and 2.9s, while LD-SGD, D-PSGD and PENS incur average waiting time of 19.2s, 21.5 and 24.7s, respectively. The above results explain why FedHP and AD-PSGD can achieve much faster converge rate than D-PSGD and PENS while LD-SGD takes less completion time than D-PSGD in Figs. \ref{fig:IID}, \ref{fig:non-IID0.6} and \ref{fig:non-IID0.8}. The results in Fig. \ref{fig:waiting_time} demonstrate that FedHP can well overcome the challenges of system heterogeneity compared with existing methods. \subsection{Network Model} An EC system includes a set of distributed workers (\textit{e.g.}\xspace, IoT devices or small base stations) $\mathcal{V} = \{v_1,v_2,\ldots,v_N\}$, with $|\mathcal{V}|=N>1$. In DFL, the workers collaboratively train deep learning models on their local datasets, and each worker needs to exchange models with its neighbors rather than sharing its original data. A control node (\textit{i.e.}\xspace, coordinator) is still needed to collect the global information about model training statuses and network conditions in DFL \cite{zhou2021communication,wang2019matcha,wang2022accelerating, xu2021decentralized}. However, unlike the parameter server in FL, the coordinator does not aggregate the models and hence will not become the bandwidth bottleneck. Furthermore, any worker can act as the coordinator. Since the size of these information (\textit{e.g.}\xspace, 100-300KB \cite{lyu2018multi}) is much smaller than that of model parameters, it is reasonable to ignore the cost (\textit{e.g.}\xspace, bandwidth consumption and time cost) for information collection \cite{lyu2019optimal}. 
The P2P network topology at the $h$-th communication round can be expressed as a connected undirected graph $\mathcal{G}^h=(\mathcal{V},E^h)$, where $\mathcal{V}$ denotes the worker set and $E^{h}$ denotes the set of links connecting workers at communication round $h$. Specifically, the P2P network topology at round $h$ can be expressed as a symmetric adjacency matrix $\mathbf{A}^{h} = \{a_{i,j}^{h} \in \{0, 1\}, 1 \leq i,j \leq N\}$, where $a_{i,j}^{h} = 1$ if $e_{i,j}^{h} \in E^{h}$, and $a_{i,j}^{h} = 0$ otherwise. The neighbor set of worker $i$ at round $h$ is represented as $\mathcal{N}^{h}_i$, whose cardinality is $|\mathcal{N}^{h}_i|=\sum_{j=1}^{N} a_{i,j}^{h}$. The degree matrix $\mathbf{D}^{h}=\{d^{h}_{i, j}, 1 \leq i,j \leq N\}$ is defined as a diagonal matrix, where $d^{h}_{i, i} = |\mathcal{N}^{h}_i|$. Combining the adjacency matrix and the degree matrix, the Laplacian matrix $\mathbf{L}^{h}$ can be expressed as follows: \begin{equation} \mathbf{L}^{h} = \mathbf{D}^{h} - \mathbf{A}^{h}. \end{equation} According to spectral graph theory \cite{chung1997spectral}, $\lambda_2(\mathbf{L}^{h}) > 0$ if and only if the topology is connected, where $\lambda_{m}(\mathbf{L}^{h})$ denotes the $m$-th smallest eigenvalue of matrix $\mathbf{L}^{h}$. \subsection{Model Training Process} In DFL, worker $i$ updates the local model parameter $x_i$ at the $h$-th communication round based on a mini-batch $\xi_i$ sampled from its local dataset $\mathcal{D}_i$. Let $f_i(x_i)$ and $F_i(x_i;\xi_i)$ (for ease of description, written as $F_i(x_i)$) denote the local loss function and the loss function over mini-batch $\xi_i$, respectively. Generally, model training can be formally described as optimizing the following objective function \cite{koloskova2019decentralized}: \begin{equation}\label{Eq:loss function} f^* := \min_{x \in \mathbb{R}^d}\ [\ f(x) := \frac{1}{N} \sum_{i=1}^{N} f_i(x_i)\ ]\mbox{,} \end{equation} where $f_i(x_i) := \mathbb{E}_{\xi_i \sim \mathcal{D}_i}\ F_i(x_i)$ and $x$ denotes the global model parameter. This setting covers the important cases of empirical risk minimization in DFL \cite{koloskova2019decentralized}. The model is updated by applying the decentralized stochastic gradient descent (DSGD) algorithm \cite{tsitsiklis1986distributed}, which provides an effective way to optimize the loss function in a decentralized manner. For mini-batch stochastic gradient descent, a gradient descent step over a mini-batch on each worker is regarded as a local iteration (or a local update). After performing one or multiple local iterations, each worker exchanges local models or gradients with its neighbors and aggregates these models. Such a training process is regarded as a communication round. Let $x_i^{h,k}$ denote the local model of worker $i$ at the $k$-th local iteration within communication round $h$. At the beginning of communication round $h$, by setting $x_i^{h, 0}=x_i^{h}$, worker $i$ updates its local model by gradient descent as follows \cite{wang2022accelerating,xu2022adaptive}: \begin{equation} \label{Eq:Update rule 2} x_i^{h, k+1} = x_i^{h, k} - \eta \nabla F_i(x_i^{h, k})\mbox{,} \ 0 \le k < \tau \mbox{,} \end{equation} where $\eta$ is the local learning rate, $\tau$ is the local updating frequency, and $\nabla F_i(x_i^{h, k})$ is the stochastic gradient. The accumulated local update of worker $i$ at round $h$ is denoted as $g_i^{h} = \sum_{k=0}^{\tau-1}\nabla F_i(x_i^{h, k})$.
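To illustrate the local updating rule above, the following minimal sketch (our own Python/NumPy illustration; the quadratic toy loss and the helper name are assumptions, not part of any referenced implementation) runs $\tau$ local iterations and accumulates $g_i^{h}$:
\begin{verbatim}
# Minimal illustration (not a real DFL implementation): tau local SGD
# steps of a single worker within one communication round, accumulating
# the local update g_i^h = sum_k grad F_i(x_i^{h,k}).
import numpy as np

def local_update(x, grad_fn, tau, eta):
    """Run tau local iterations x <- x - eta * grad(x) and return the
    updated model together with the accumulated update g."""
    g = np.zeros_like(x)
    for _ in range(tau):
        grad = grad_fn(x)          # stochastic gradient on a mini-batch
        g += grad
        x = x - eta * grad
    return x, g

# Toy example: quadratic loss 0.5 * ||x - c||^2, so grad(x) = x - c.
c = np.array([1.0, -2.0])
x0 = np.zeros(2)
x_new, g = local_update(x0, lambda x: x - c, tau=5, eta=0.1)
# Mathematically, x_new = x0 - eta * g, the compact form used in the text.
\end{verbatim}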
Then the local updating of worker $i$ can be rewritten as: \begin{equation} \label{Eq:Update rule} x_i^{h+1} = x_i^{h} - \eta \cdot g_i^h. \end{equation} After local updating, workers send their local models to their neighbors. Based on the received model parameters, worker $i$ aggregates the models from its neighbors: \begin{equation} \label{Eq:Update rule 1} x_i^{h+1} = x_i^{h} + \sum_{j \in \mathcal{N}_i^{h}} w^{h}_{i,j} (x_j^{h} - x_{i}^{h})\mbox{,} \end{equation} where $\mathcal{N}_i^{h}$ is the neighbor set of worker $i$ at round $h$ and $w_{i,j}^{h}, j \in \mathcal{N}_i^{h}$, is the mixing weight for aggregating the model of neighbor $j$. Defining $u_{max}^{h}$ as the maximum of $|\mathcal{N}_i^{h}|$ over workers at round $h$, a simple suboptimal choice of $w^{h}_{i,j}$ is \cite{xiao2004fast}: \begin{equation} \label{Eq:Step size} w^{h}_{i,j} = \frac{1}{u_{max}^{h} + 1}. \end{equation} \subsection{Consensus Distance} Unlike the traditional PS architecture, there is no global model in DFL, and the local models hosted by different workers are not always the same. We introduce the \emph{consensus distance} metric to measure the discrepancy among local models \cite{koloskova2019decentralized,lin2021on,wang2022accelerating}. First, the consensus distance between the models of worker $i$ and worker $j$ at the $h$-th communication round is defined as: \begin{equation} \label{Eq:Consenus Distance} D^{h}_{i,j} = \left \| x_i^h - x_j^h \right \|. \end{equation} Then the consensus distance between the local model of worker $i$ and the ``global model'' (\textit{i.e.}\xspace, the average of all workers' models) at round $h$ is defined as: \begin{equation} \label{Eq:Local Consenus Distance} D^{h}_i = \left \| \overline{x}^{h} - x_i^{h} \right \|\mbox{,} \end{equation} where $\overline{x}^{h} = \frac{1}{N} \sum_{i=1}^{N} x_i^{h}$ denotes the average of all workers' models at round $h$. It is worth noting that $\overline{x}^{h}$ is not available in practice because there is no PS to collect all workers' models in DFL. To this end, we estimate $D_i^{h}$ using the consensus distances between the local model of worker $i$ and the models of its neighbors (\textit{i.e.}\xspace, $D_{i,j}^{h}, j \in \mathcal{N}_i^h$), which will be elaborated in Sec. \ref{subsec_distance_estimation}. Accordingly, the average consensus distance of all workers' models is: \begin{equation} \label{Eq:Avg Consenus Distance} D^{h} = \frac{1}{N} \sum_{i=1}^{N} D^{h}_i. \end{equation} Similar to the weight divergence \cite{zhao2018federated, qian2020towards} in PS architectures, the consensus distance is correlated with the data distribution and is the key factor that captures the joint effect of decentralization \cite{lin2021on}, which motivates us to apply the consensus distance for topology construction to overcome the challenge introduced by non-IID data. \subsection{Relationship between Local Updating Frequency and Network Topology}\label{sec:relation} In this section, we explain the coupled relationship between local updating frequencies and network topologies. On the one hand, the computing time of one local iteration and the transmission time of one model differ significantly across workers due to system heterogeneity. However, in traditional synchronous schemes, the local updating frequencies of workers are usually identical or fixed at each communication round.
Accordingly, fast workers have to wait for slow ones, incurring non-negligible idle time and significantly reducing the training efficiency \cite{ma2021adaptive, zhang2018adaptive}. Considering the heterogeneous computing capabilities of workers, before aggregation, the workers with higher computing capabilities will perform more local iterations, while the workers with lower computing capabilities will perform fewer local iterations. On the other hand, the data samples across all workers may be non-IID, which seriously affects the convergence rate and even compromises the accuracy of the trained model \cite{zhao2018federated, wang2020optimizing}. To deal with statistical heterogeneity, the workers with significantly different data distributions (\textit{i.e.}\xspace, with large consensus distance) can be connected preferentially and frequently, so that the training performance over non-IID data is maintained while the waiting time and training time among workers are significantly reduced. Furthermore, the local models trained with different local updating frequencies are discrepant, which requires selecting suitable neighbors for model aggregation to achieve satisfactory model accuracy. Meanwhile, the completion time of each communication round (including computing time and communication time) varies with the dynamic network topology, which requires assigning appropriate local updating frequencies to heterogeneous workers to reduce the waiting time. Accordingly, we propose to jointly optimize the local updating frequency and network topology to address the system heterogeneity and statistical heterogeneity in DFL. \subsection{Problem Formulation} This section defines the problem of efficient DFL with adaptive local updating and network topology: \textit{minimizing the training time while requiring workers to achieve a satisfactory accuracy for their models}. Given a DFL task in the EC system, we need to determine the local updating frequencies and average consensus distance of all workers to minimize the training time. First, the local updating frequency and the computing time of one local iteration at the $h$-th communication round on worker $i$ are denoted as $\tau_i^h$ and $\mu_i^h$, respectively. Let $\mathbf{B}^{h} = \{\beta_{i,j}^{h}, 1 \leq i,j \leq N\}$ denote the communication time matrix at round $h$, where $\beta_{i,j}^h$ is the communication time between worker $i$ and worker $j$. Therefore, the local updating time (including computing time and communication time) of worker $i$ at round $h$ is formulated as: \begin{equation} t_i^h=\tau_i^h \cdot \mu_i^h + \max\{\beta_{i,j}^h\}\ \forall i \in [N], \forall j \in \mathcal{N}^{h}_i. \end{equation} In addition, the waiting time of worker $i$ can be expressed as $t^h-t_i^h$, where $t^h=\max\{t_i^h\}\ (\forall i \in [N])$ denotes the local updating time of the slowest worker at round $h$. $t^h$ also denotes the completion time of round $h$. Then the average waiting time of all workers at round $h$ can be formulated as: \begin{equation} \mathcal{W}^h =\frac{1}{N} \sum_{i=1}^{N}(t^h-t_i^h). \end{equation} Accordingly, we formulate the problem as follows: \vspace{0.1cm} \centerline{$\min \sum\limits_{h=1}^{H} t^h$} \vspace{-0.4cm} \begin{equation}\label{problem} s.t.
\begin{cases} D^{h+1} \le D_{max}^{h}, \\ \lambda_2(\mathbf{L}^{h}) > 0,\\ t_i^h=\tau_i^h \cdot \mu_i^h+ \max\{\beta_{i,j}^h\}, \forall i \in [N], \forall j \in \mathcal{N}^{h}_i\\ \mathcal{W}^h =\frac{1}{N} \sum_{i=1}^{N}(t^h-t_i^h) \le \varepsilon \end{cases} \end{equation} The first inequality expresses that the average consensus distance should not exceed the predefined threshold $D_{max}^{h}$. We set $D_{max}^{h}$ in the same way as in \cite{lin2021on}, and the details are described in Sec. \ref{sec:alg}. The second inequality ensures a connected topology in each communication round, which is essential to guarantee the training convergence \cite{pmlr-v119-koloskova20a}. The third set of equalities formulates the local updating time (computing time plus communication time) of worker $i$ at the $h$-th communication round, where $\beta_{i,j}^h$ denotes the communication time between worker $i$ and worker $j$. The fourth set of inequalities guarantees that the average waiting time of all workers at each communication round is sufficiently small, where $\varepsilon > 0$ is the time threshold, so as to mitigate the effects of the synchronization barrier. Our objective is to minimize the training time under the constraints.
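To make these constraints concrete, the following sketch (again a Python/NumPy illustration of our own; the function names and the toy inputs are assumptions, not part of FedHP) evaluates, for a single round, the connectivity condition $\lambda_2(\mathbf{L}^{h})>0$, the average consensus distance $D^{h}$, the per-worker local updating time $t_i^h$, the round completion time $t^h$, and the average waiting time $\mathcal{W}^h$:
\begin{verbatim}
# Illustrative sketch (ours): evaluating the per-round quantities that
# appear in the constraints of the formulation above.
import numpy as np

def algebraic_connectivity(A):
    """lambda_2 of L = D - A; the topology is connected iff it is > 0."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

def average_consensus_distance(X):
    """D^h = (1/N) * sum_i ||x_bar - x_i||, with x_bar the model average."""
    x_bar = X.mean(axis=0)
    return np.linalg.norm(X - x_bar, axis=1).mean()

def round_timing(tau, mu, B, A):
    """t_i^h = tau_i * mu_i + max_{j in N_i} beta_{i,j}; returns
    (per-worker times t_i, completion time t^h, average waiting time W^h)."""
    N = len(tau)
    t = np.empty(N)
    for i in range(N):
        nbrs = np.flatnonzero(A[i])
        t[i] = tau[i] * mu[i] + (B[i, nbrs].max() if nbrs.size else 0.0)
    t_round = t.max()
    waiting = (t_round - t).mean()
    return t, t_round, waiting

# Toy inputs for N = 4 workers on a ring topology.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
X = np.random.randn(4, 10)                 # stacked local models (N x d)
tau = np.array([8, 4, 6, 2])               # heterogeneous local frequencies
mu = np.array([0.5, 1.0, 0.7, 2.0])        # per-iteration computing times (s)
B = np.random.uniform(0.1, 0.5, (4, 4))    # pairwise communication times (s)
B = (B + B.T) / 2                          # make the matrix symmetric

print(algebraic_connectivity(A) > 0)       # connectivity constraint
print(average_consensus_distance(X))       # to be compared against D_max^h
t, t_round, W = round_timing(tau, mu, B, A)
print(t_round, W)                          # completion time and W^h <= eps
\end{verbatim}
In practice, since $\overline{x}^{h}$ is not directly observable in DFL, $D^{h}_i$ would be estimated from the neighbor distances $D^{h}_{i,j}$ as noted above; the sketch uses the exact average purely for illustration.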
\section{Introduction}\label{sec:intro} In its simplest realization, inflationary cosmology can be effectively described as a quasi-de Sitter space time. Early studies\cite{polyakov1,IR1,IR2,allen,folaci,dolgov} revealed that de Sitter space time features infrared instabilities and profuse particle production in interacting field theories. Infrared divergences in loop corrections to correlation functions hinder the reliability of the perturbative expansion\cite{weinberg,seery,branrecent}, have led to the suggestion of an infrared instability of the vacuum\cite{polyakov,kroto,akhmedov,higuchi,vidal}, and affect correlation functions during inflation\cite{weinberg,giddins,seery,bran,mazumdar,leblond2,woodard,marolf}, requiring a non-perturbative treatment. Back reaction from particle production in a de Sitter background has been argued to provide a dynamical ``screening'' mechanism that leads to relaxation of the cosmological constant\cite{emil,IR3,branmore}, a suggestion that rekindled the interest in infrared effects in de Sitter space time. A body of work established that infrared and secular divergences are manifest in super-Hubble fluctuations during de Sitter (or nearly de Sitter) inflation\cite{petri,enq,riotto,holman}; thus a consistent program that provides a resummation of the perturbative expansion is required. Non-perturbative methods of resummation of the secular divergences have been implemented in several studies in de Sitter space time\cite{boyan}, suggesting a dynamical generation of mass\cite{holman}, a result that was originally anticipated in the seminal work of ref.\cite{staroyoko}, and explored and extended in ref.\cite{richard}. More recently, a self-consistent mechanism of mass generation for scalar fields through infrared fluctuations has been suggested\cite{petri,holman,rigo,garb,arai,serreau,raja,prokossb,boywwds}. The lack of a global time-like Killing vector in de Sitter space time leads to remarkable physical effects, as it implies the lack of particle thresholds (a direct consequence of energy-momentum conservation) and the decay of fields even into their own quanta\cite{boyprem,boyan} with the concomitant particle production, a result that was confirmed in refs.\cite{moschella,akhmedov} and more recently investigated in refs.\cite{donmor,leblond} for the case of heavy fields. For light scalar fields in de Sitter space time with mass $M \ll H$, it was shown in refs.\cite{boyan} that the infrared enhancement of self-energy corrections is manifest as poles in $\Delta = M^2/3 H^2$ in correlation functions and that the most infrared singular contributions to the self-energy can be isolated systematically in an expansion in $\Delta$ akin to the $\epsilon$ expansion in critical phenomena. A similar expansion was noticed in refs.\cite{holman,leblond,rigo,smit,serreau}. Whereas infrared effects in de Sitter (or quasi de Sitter) cosmology are typically studied via correlation functions, recently the issue of the time evolution of the \emph{quantum states} has begun to be addressed. In ref.\cite{boyhol} the Wigner-Weisskopf method\cite{ww,boyaww}, ubiquitous in quantum optics\cite{qoptics}, has been adapted and extended as a non-perturbative quantum field theory method in inflationary cosmology to study the time evolution of quantum states.
This method reveals how quantum states \emph{decay} in time; it has been shown to be equivalent to the dynamical renormalization group in Minkowski space time\cite{drg,boyhol} and has recently been implemented to study the radiative generation of masses and decay widths of minimally coupled fields during inflation\cite{boywwds}. Early studies\cite{ford,ratra} suggested that infrared divergences during inflation can prevent spontaneous symmetry breaking; however, more recently the issue of spontaneous symmetry breaking during inflation has been revisited in view of the generation of masses by radiative corrections\cite{serreau,prokossb,arai}. In ref.\cite{serreau} the study of an $O(N)$ model in the large N limit reveals that there is no spontaneous symmetry breaking as a consequence of the infrared divergences: if the $O(N)$ symmetry were spontaneously broken there would be \emph{massless} Goldstone bosons, which lead to strong infrared divergences; the resolution, as per the results of this reference, is that the symmetry is restored by the strong infrared divergences and no symmetry breaking is possible. This result is in qualitative agreement with those of earlier refs.\cite{ford,ratra}. However, a separate study of the same model in ref.\cite{prokossb} reaches a different conclusion: the $O(N)$ symmetry is indeed spontaneously broken, but the Goldstone bosons acquire a radiatively induced mass. In ref.\cite{arai} a scalar model with $Z_2$ symmetry is studied, with the result that radiative corrections tend to restore the symmetry via the non-perturbative generation of mass. Both refs.\cite{prokossb,arai} suggest a discontinuous transition. \vspace{2mm} \textbf{Motivation, goals and results:} Spontaneous symmetry breaking is an important ingredient in the inflationary paradigm, and as such it is important to understand whether radiative corrections modify the familiar picture of slow roll inflation. If, as found in ref.\cite{serreau}, symmetry breaking is not possible in some models, these would be ruled out at least in the simple small field scenarios of slow roll, as inflation would not be successfully ended by the inflaton reaching the broken symmetry minimum. Furthermore, if the inflaton is part of a Higgs-type multiplet of fields, the question of whether the fields associated with the unbroken generators are massless is very important, as these could lead to entropy perturbations whose infrared divergences are more severe than those of adiabatic perturbations\cite{branrecent}. \vspace{2mm} In this article we study an $O(2)$ scalar field theory in de Sitter space time and extract implications for $O(N)$, with the following \textbf{goals}: i) to revisit at a deeper level the content of Goldstone's theorem in an \emph{expanding cosmology} in the absence of manifest time translational invariance, in particular whether spontaneous symmetry breaking of a continuous symmetry implies the existence of massless Goldstone modes in an inflationary setting; ii) to study, beyond the local mean field approximation, whether a continuous symmetry can be spontaneously broken in de Sitter space time; iii) to understand how the mechanism of self-consistent non-perturbative mass generation can be compatible with symmetry breaking and Goldstone modes.
Recently there has been renewed interest in a deeper understanding of Goldstone's theorem and spontaneous symmetry breaking both in relativistic and non-relativistic systems\cite{brauner,nicolis,wata}; thus our study provides a complementary investigation of symmetry breaking in a \emph{cosmological setting}, wherein the lack of a global time-like Killing vector leads to unexpected yet very physical consequences. \vspace{2mm} \textbf{Brief summary of results:} \vspace{2mm} \begin{itemize} \item{We argue that in the absence of time translational invariance Goldstone's theorem \emph{does not} imply the existence of massless excitations if a continuous symmetry is spontaneously broken. We revisit the implementation of Goldstone's theorem in a spontaneously broken $O(2)$ symmetry in Minkowski space time, highlight that the masslessness of Goldstone Bosons is a consequence of a cancellation between \emph{space time local and non-local terms} in the loop expansion, and discuss the implications for an $O(N)$ theory in the large N limit. } \item{We then study the same model in de Sitter space-time, and emphasize that whereas in Minkowski space-time the conservation of the Noether current associated with the continuous symmetry directly leads to Goldstone's theorem, in an expanding cosmology this current is \emph{covariantly conserved} and the consequences are, therefore, much less stringent. In conformal coordinates a \emph{conserved} Noether current is manifestly obtained, but the lack of time translational invariance renders the content of Goldstone's theorem much less stringent.} \item{ We implement a self-consistent non-perturbative approach based on the Wigner-Weisskopf method described in refs.\cite{boyhol,boywwds} that allows us to extract the mass of the single particle excitations and shows distinctly that the space-time local terms cannot be cancelled by non-local self-energy terms at leading order in a $\Delta$ expansion. As a result, Goldstone modes acquire a radiatively generated mass as a consequence of infrared divergences, in agreement with the results of refs.\cite{serreau,prokossb}. The lack of a time-like Killing vector entails that there are no kinematic thresholds, and as a consequence Goldstone modes acquire a \emph{width} from processes of absorption and emission of superhorizon quanta of both Goldstone and Higgs-like modes. } \item{ We show that for finite $N$ there is a first order symmetry breaking transition as a function of the Hawking temperature $T_H = H/2\pi$, that Goldstone modes acquire a self-consistent mass generated radiatively by infrared effects as well as a \emph{decay width}, and that the symmetry cannot be spontaneously broken in the strict $N\rightarrow \infty$ limit. We argue that a first order transition is a distinct and expected consequence of infrared effects, because a continuous transition would entail massless excitations at the critical point, which would lead to infrared divergences. Radiative corrections relieve the infrared singularities by generating a mass, but at the expense of turning the symmetry breaking transition into a first order one. } \end{itemize} \section{\label{sec:mass} Spontaneous symmetry breaking and Goldstone Bosons \\ in Minkowski space-time:}\label{sec:minkowski} \subsection{General aspects:}\label{subsec:general} We consider the $O(2)$ linear sigma model as a simple example of a scalar theory with spontaneous symmetry breaking (SSB) and extract consequences for the case of $O(N)$ in the large N limit.
The Lagrangian density for the $O(2)$ sigma model is \be \mathcal{L} = \frac{1}{2}(\partial_\mu \,\sigma)^2 +\frac{1}{2}(\partial_\mu \,\pi)^2- V(\sigma^2+\pi^2) \label{sigmamodel} \ee which is invariant under the infinitesimal transformations \be \pi \rightarrow \pi + \epsilon \sigma ~~;~~ \sigma \rightarrow \sigma - \epsilon \pi \label{trafo}\ee with $\epsilon$ a space-time constant infinitesimal angle. The canonical momenta conjugate to the $\pi,\sigma$ fields are, respectively, \be P_\pi(x) = \dot{\pi}(x)~~;~~P_\sigma(x) = \dot{\sigma}(x) \label{canmom}\ee with the equal time canonical commutation relations \be \Big[P_\pi(\vec{x},t),\pi(\vec{y},t) \Big] = -i\,\delta^{3}(\vec{x}-\vec{y})~~;~~\Big[P_\sigma(\vec{x},t),\sigma(\vec{y},t) \Big] = -i\,\delta^{3}(\vec{x}-\vec{y}) \,.\label{ccr}\ee The conserved Noether current associated with the global symmetry (\ref{trafo}) is \be J^\mu(x) = i \Big(\sigma(x)\,\partial^\mu \pi(x) - \pi(x)\,\partial^\mu \sigma(x)\Big) ~~;~~\partial_\mu J^\mu(x) =0 \label{conscur}\ee with the conserved charge \be Q = i\,\int d^3 x \Big(\sigma(\vx,t)\,P_\pi(\vx,t)- \pi(\vx,t)\,P_\sigma(\vx,t) \Big) \,.\label{charge}\ee Consider the following identity resulting from current conservation (\ref{conscur}), \be \int d^3x \langle 0|\big[\vec{\nabla}\cdot\vec{J}(\vx,t),\pi(\vec{y},t')\big]|0\rangle = \frac{\partial}{\partial t} \int d^3x \langle 0|\big[ {J^0}(\vx,t),\pi(\vec{y},t')\big]|0\rangle \label{conmut}\ee Assuming spatial translational invariance we introduce \be S(\vk;t,t') = \int d^3x ~ e^{-i\vk\cdot(\vec{x}-\vec{y})}\,\langle 0|\big[ {J^0}(\vx,t),\pi(\vec{y},t')\big]|0\rangle \label{Skoft} \ee \emph{If} the surface integral on the left hand side of eqn. (\ref{conmut}) vanishes, then it follows that \be {\mathrm{lim}}_{k\rightarrow 0} ~~ \frac{\partial}{\partial t} S(\vk;t,t') = 0 \label{lim1}\ee In general this result implies that \be {\mathrm{lim}}_{k\rightarrow 0}~~ S(\vk;t,t') = \langle 0|\big[ Q(t),\pi(\vec{y},t')\big]|0\rangle = \langle 0|\sigma(\vec{y},t')|0\rangle = v(t') \,.\label{res1}\ee namely, $Q$ is time independent. In the absence of time translational invariance, the results (\ref{lim1},\ref{res1}) are the only statements that can be extracted from the conservation of the current. However, if \emph{time translational invariance holds} then $S(\vk;t,t') = S(\vk;t-t')$ and introducing the spectral representation \be S(\vk,t-t') = \int \frac{d\omega}{2\pi} S(\vk,\omega) ~~e^{-i\omega(t-t')} \label{specrep}\ee it follows from (\ref{lim1}) that i) $v(t')=v$ in (\ref{res1}) is time independent and ii) \be {\mathrm{lim}}_{k\rightarrow 0}~~ S(\vk;\omega) = 2\pi \, v\,\delta(\omega)~~;~~ v = \langle 0|\sigma(\vec{0},0)|0\rangle \,, \label{finres}\ee where we have used eqns.(\ref{charge},\ref{ccr}). When space-time translational invariance is available further information is obtained by writing $S(\vk,\omega)$ in terms of a complete set of eigenstates of the momentum and Hamiltonian operators by inserting this complete set of states in the commutators \be e^{i\,\vec{P}\cdot\vec{x}}\,e^{-iHt}|n\rangle = e^{i\,\vec{p}_n\cdot\vec{x}}\,e^{-iE_nt}|n\rangle \,, \label{intstates} \ee from which we obtain \bea S(\vk,\omega) = 2\pi \sum_{n} &\Bigg\{&\langle 0|J^0(\vec{0},0)|n\rangle\langle n|\pi(\vec{0},0)|0\rangle ~ \delta^3(\vec{p}_n-\vk)\,\delta(E_n -\omega)- \nonumber \\ && \langle 0|\pi(\vec{0},0)|n\rangle\langle n|J^0(\vec{0},0)|0\rangle ~ \delta^3(\vec{p}_n+\vk)\,\delta(E_n +\omega)\Bigg\} \,.
\label{sums}\eea Then the result (\ref{finres}) implies an intermediate state with vanishing energy for vanishing momentum. This is the general form of Goldstone's theorem, valid even for non-relativistic systems\cite{lange,brauner,nicolis,wata}. The result has a clear interpretation: under the assumption that the current flow out of the integration boundaries vanishes, the total charge is a constant of motion. \emph{If the theory is manifestly time translational invariant} this automatically implies that $S(\vec{k},t-t')$ in (\ref{Skoft}) does not depend on $t-t'$ by charge conservation; therefore it follows directly that in the limit $k \rightarrow 0$ the spectral density $S(\vk,\omega)$ can \emph{only} have support at $\omega =0$. The standard intuitive explanation for gapless long wavelength excitations relies on the fact that the continuous symmetry entails that the manifold of minima away from the origin forms a continuum of degenerate states. A rigid rotation around the minimum of the potential does not cost any \emph{energy} because of the degeneracy; therefore the energy cost of making a long-wavelength spatial rotation vanishes in the long-wavelength limit precisely because of the degeneracy. Both this argument and the more formal proof (\ref{finres}) rely on the existence of a conserved energy and energy eigenstates, which is not available in the cosmological setting. The main reason for going through this textbook derivation of Goldstone's theorem is to highlight that \emph{time translational invariance} is an essential ingredient in the statement that the Goldstone theorem implies a \emph{gapless excitation} if the symmetry is spontaneously broken\footnote{Under the assumption that the current flow out of a boundary vanishes; see the discussion in \cite{lange}.}. Precisely this point will be at the heart of the discussion of symmetry breaking in inflationary cosmology. \subsection{Tree level, one-loop and large N:} \label{subsec:treeonelup} In order to compare the well known results in Minkowski space-time with the case of inflationary cosmology we now study how Goldstone's theorem is implemented at tree and one-loop levels in the $O(2)$ case, and in the large N limit in the case of $O(N)$ symmetry, as this study will highlight the main differences between Minkowski and de Sitter space times. To be specific, we now consider the $O(2)$ model with potential \be V(\sigma^2+\pi^2) = \frac{\lambda}{8}\Big(\sigma^2 + \pi^2 -\frac{\mu^2}{\lambda} \Big)^2 \label{potential} \ee Shifting the field \be \sigma = \sigma_0 + \chi \label{shiftsig}\ee the potential (\ref{potential}) becomes \be V(\chi,\pi) = \frac{M^2_\chi}{2}\,\chi^2 + \frac{M^2_\pi}{2}\,\pi^2 + \frac{\lambda}{2}\sigma_0 J\, \chi + \frac{\lambda}{2}\sigma_0 \, \chi^3 + \frac{\lambda}{2}\sigma_0 \,\pi^2 \chi + \frac{\lambda}{8} \chi^4 +\frac{\lambda}{8} \pi^4 +\frac{\lambda}{4} \chi^2 \pi^2 \label{potafter}\ee where \be J= \sigma^2_0 -\frac{\mu^2}{\lambda}~~;~~ M^2_\chi = {\lambda}\,\Big( \sigma^2_0 + \frac{J}{2} \Big)~~;~~ M^2_\pi = \frac{\lambda}{2}\, J \Rightarrow M^2_\chi - M^2_\pi = \lambda \sigma^2_0 \label{massesetc}\ee The value of $\sigma_0$ is found by requiring that the expectation value of $\chi$ vanishes in the correct vacuum state; thus it departs from the tree level value $\mu^2/\lambda$ by radiative corrections.
\vspace{2mm} \textbf{Tree level:} \vspace{2mm} At tree level $\sigma^2_0 = \mu^2/\lambda~~;~~M^2_\pi =0,M^2_\chi = \mu^2$, and the $\pi$ field obeys the equation of motion \be \ddot{\pi}(\vec{x},t) - \nabla^2 \pi(\vec{x},t) =0 \,.\label{eompi}\ee The $\pi$ field is quantized in a volume $V$ as usual \be \pi(\vec{x},t) = \sum_{\vec{k}}\frac{1}{\sqrt{2Vk}}\Big[a_{\vec{k}}\,e^{-i(kt-\vec{k}\cdot{\vec{x}})}+a^\dagger_{\vec{k}}\,e^{i(kt-\vec{k}\cdot{\vec{x}})} \Big]\,.\label{piquant}\ee The conserved current (\ref{conscur}) becomes \be J^\mu = i\, \sigma_0 \,\partial^\mu\pi + i \,\Big(\chi \,\partial^\mu\pi - \pi \partial^\mu \chi\Big) \label{shiftedcurr}\ee At tree level only the first term contributes to the spectral density (\ref{sums}), since at this level the $\pi$ field creates a single particle state out of the vacuum, which is the \emph{only} state that contributes to (\ref{sums}). We refer to the first term as $J^\mu_{tl}$; its conservation is a result of the equation of motion (\ref{eompi}) and of $\sigma_0$ being a space-time constant. It is straightforward to find \be \langle 0|J^0_{tl}(\vec{0},0)|1_{\vec{p}}\rangle\langle 1_{\vec{p}}|\pi(\vec{0},0)|0\rangle = -\langle 0|\pi(\vec{0},0)|1_{\vec{p}}\rangle\langle 1_{\vec{p}}|J^0_{tl}(\vec{0},0)|0\rangle = \frac{\sigma_0}{2V}\label{treesumrule} \ee where $V$ is the quantization volume. Therefore \be S(\vec{k},\omega) = 2\pi \sigma_0 \int \frac{d^3p}{(2\pi)^3} ~\frac{1}{2}\, \Big[\delta(p+\omega)\delta^{3}(\vec{p}+\vec{k})+\delta(p-\omega)\delta^{3}(\vec{p}-\vec{k})\Big] \label{sofkw}\ee and \be {\mathrm{lim}}_{k\rightarrow 0}~ S(\vec{k},\omega) = 2\pi \sigma_0 \, \delta(\omega) \,. \label{sumruletree}\ee \vspace{2mm} \textbf{One loop:} We now focus on understanding how the $\pi$ field remains massless with radiative corrections. We carry out the loop integrals in four dimensional Euclidean space time; the result is independent of this choice. The interaction vertices are depicted in fig. (\ref{fig:vertices}). \begin{figure}[ht!] \begin{center} \includegraphics[height=3in,width=4in,keepaspectratio=true]{vertices.eps} \caption{Vertices in broken symmetry. The broken line ending in the black dot refers to the \emph{linear} term in $\chi$ in eqn.(\ref{potafter}). } \label{fig:vertices} \end{center} \end{figure} The vacuum expectation value $\sigma_0$ is fixed by the requirement that \be \langle \chi \rangle = 0 \,,\label{tadcond}\ee to which we refer as the \emph{tadpole condition}; it is depicted in fig.(\ref{fig:expval}). We find \be \langle \chi \rangle =0 \Rightarrow \frac{\lambda\,\sigma_0}{2\,M^2_\chi}\Big[J + 3 I_\chi + I_\pi \Big] = 0 \label{tadpole}\ee where \be I_{\chi} = \int \frac{d^4k}{(2\pi)^4}\, \frac{1}{k^2+M^2_\chi} ~~;~~ I_{\pi} = \int \frac{d^4k}{(2\pi)^4}\, \frac{1}{k^2+M^2_{\pi}}\,. \label{tadints}\ee This condition ensures that the matrix element of the interaction Hamiltonian $H_I$ between the vacuum and single particle states vanishes, namely \be \langle 1_{\vec{k}}|H_I|0\rangle = 0 \,.\label{nomtxel}\ee \begin{figure}[ht!] \begin{center} \includegraphics[height=5in,width=4in,keepaspectratio=true]{expval.eps} \caption{Tadpole condition (\ref{tadcond}). } \label{fig:expval} \end{center} \end{figure} There are two solutions of the tadpole equation \bea \sigma_0 & = & 0 \,,\label{nossb} \\ J & = & - 3 I_\chi - I_\pi \Rightarrow \sigma^2_0 = \frac{\mu^2}{\lambda} - 3 I_\chi - I_\pi \, \neq 0\,. \label{ssbcon}\eea If available, the second solution (\ref{ssbcon}) leads to spontaneous symmetry breaking.
At finite temperature \be \int \frac{d^4k}{(2\pi)^4}\, \frac{1}{k^2+M^2_{\chi,\pi}} \Rightarrow T \sum_{\omega_n}\int \frac{d^3k}{(2\pi)^3}\, \frac{1}{\omega^2_n+\vec{k}^2+M^2_{\chi,\pi}}~~;~~ \omega_n = 2\pi\,n\,T \label{finiteT}\ee where $\omega_n$ are the Matsubara frequencies. For $T^2 \gg M^2_{\chi,\pi}$ both integrals are proportional to $T^2$ and the symmetry breaking solution becomes \be \sigma^2_0 = C\,\Big(T^2_c - T^2\Big) \label{ssbT}\ee with $C$ a positive numerical constant. This well known observation will become relevant below in the discussion of symmetry breaking in de Sitter space time, because the (physical) event horizon of de Sitter space-time $1/H$ determines the Hawking temperature $T_H = H/2\pi$. The $\pi$ propagator becomes \be G_{\pi}(k) = \frac{1}{k^2+M^2_\pi - \Sigma_\pi(k)} \label{pipropa}\ee where the Feynman diagrams for the self-energy are shown in fig. (\ref{fig:selfenergy}). \begin{figure}[h!] \begin{center} \includegraphics[height=4in,width=4in,keepaspectratio=true]{selfenergy.eps} \caption{One loop diagrams that contribute to the $\pi$ field self-energy $\Sigma_\pi(k)$. } \label{fig:selfenergy} \end{center} \end{figure} The contributions from diagrams (a),(b),(c) yield \be \Sigma_{\pi,a}(k)+\Sigma_{\pi,b}(k)+\Sigma_{\pi,c}(k)= \frac{\lambda^2 \,\sigma^2_0}{2\, M^2_\chi} \Big[ J + 3 I_\chi + I_\pi \Big]=0 \label{sigtad}\ee as a consequence of the tadpole condition (\ref{tadpole}). The remaining diagrams yield \be \Sigma_{\pi,d}(k)+\Sigma_{\pi,e}(k)+\Sigma_{\pi,f}(k)= -\frac{\lambda}{2} \Bigg[I_\chi + 3 I_\pi - 2\lambda \,\sigma^2_0 \int \frac{d^4q}{(2\pi)^4}\frac{1}{(q^2+M^2_\chi)((q+k)^2+M^2_\pi)} \Bigg] \label{siglups}\ee The pole in the $\pi$ propagator determines the physical mass of the $\pi$ field; we find \be k^2+ M^2_\pi - \Sigma_\pi(k) = k^2+\frac{\lambda}{2} \Bigg[J+I_\chi + 3 I_\pi - 2\lambda \,\sigma^2_0 \int \frac{d^4q}{(2\pi)^4}\frac{1}{(q^2+M^2_\chi)((q+k)^2+M^2_\pi)} \Bigg] \label{masaphys}\ee where we have used $M^2_\pi$ given by eqn. (\ref{massesetc}). If there is spontaneous symmetry breaking, $J = -3I_\chi-I_\pi$, leading to \be M^2_\pi - \Sigma_\pi(k) = \lambda\int\frac{d^4q}{(2\pi)^4}\Bigg[\frac{1}{q^2+M^2_{\pi}} - \frac{1}{q^2+M^2_{\chi}}- \frac{\lambda \, \sigma^2_0 }{((q+k)^2+M^2_\pi)(q^2+M^2_{\chi})}\Bigg] \,.\label{massren}\ee Therefore the inverse propagator is given by \be k^2+ M^2_\pi - \Sigma_\pi(k) = k^2+{\lambda\,\sigma^2_0} \, \int\frac{d^4q}{(2\pi)^4} \frac{1}{q^2+M^2_{\chi}} \Bigg[\frac{1}{q^2+M^2_{\pi}} - \frac{ 1 }{(q+k)^2+M^2_\pi} \Bigg] \label{pionmass}\ee where we used eqn. (\ref{massesetc}). Obviously (\ref{massren},\ref{pionmass}) vanish as $k^2 \rightarrow 0$ (and are proportional to $k^2$ in this limit by Lorentz invariance); therefore the propagator for the Goldstone mode $\pi$ features a pole at $k^2=0$. We emphasize that the vanishing of the mass is a consequence of a precise \emph{cancellation} between the local tadpole terms, fig.(\ref{fig:selfenergy}, (d),(e)), and the non-local (in space-time) contribution, fig.(\ref{fig:selfenergy}, (f)), in the $k\rightarrow 0$ limit. The propagator for $\chi$ (the Higgs-like mode) is obtained in a similar manner; the Feynman diagrams for the self energy $\Sigma_\chi(k)$ are similar to those for $\Sigma_\pi$ with $\chi$ external lines, the only differences being the combinatoric factors for diagrams (a)-(e) and two exchange diagrams of the (f)-type with intermediate states of two $\chi$ particles and two $\pi$ particles, respectively.
Again diagrams of the type (a)-(c) are cancelled by the tadpole condition (\ref{tadpole}) and we find \be k^2+M^2_\chi-\Sigma_\chi(k) = k^2+\frac{\lambda}{2}\Bigg[2 \sigma^2_0 + J + 3 I_\chi +I_\pi - \lambda \sigma^2_0 \,\tilde{I}_\pi(k) -9\,\lambda \sigma^2_0 \,\tilde{I}_\chi(k) \Bigg] \label{chiprop}\ee where \be \tilde{I}_{\chi,\pi}(k) = \int \frac{d^4q}{(2\pi)^4}\,\frac{1}{\Big(\big(q+k\big)^2+M^2_{\chi,\pi}\Big)^2}\,.\label{tildeI}\ee If the symmetry is spontaneously broken, using the condition (\ref{ssbcon}) we find \be k^2+M^2_\chi-\Sigma_\chi(k) = k^2 + \lambda \,\sigma^2_0\Big[1- \frac{\lambda}{2}\,\tilde{I}_\pi(k)- \frac{9\,\lambda}{2}\,\tilde{I}_\chi(k) \Big] \label{finchiprop}\ee \vspace{2mm} \textbf{Large N limit:} \vspace{2mm} If rather than an $O(2)$ symmetry we consider the $O(N)$ case, after symmetry breaking along the $\sigma$ direction the $\vec{\pi}$ fields belong to an $O(N-1)$ multiplet. In the large N limit the leading term in the tadpole condition $\langle \chi \rangle =0$ (\ref{tadcond}) is given by the last diagram (solid circle) in fig.(\ref{fig:expval}), \be \langle \chi \rangle =0 \Rightarrow \frac{\lambda\,\sigma_0}{2\,M^2_\chi}\Big[J + N\, I_\pi \Big] = 0 \label{tadlargeN}\ee where we have neglected terms of $\mathcal{O}(1/N)$ in the large N limit. In this limit the leading contribution to the $\pi$ self-energy is given by fig. (\ref{fig:selfenergy}-(e)), \be \Sigma_{\pi} = -\frac{\lambda}{2}\, N\,I_\pi \,,\label{sigmapilargeN}\ee where again we neglected terms of $\mathcal{O}(1/N)$. Therefore the inverse $\pi$ propagator in the large N limit is given by \be k^2+M^2_\pi-\Sigma_{\pi} = k^2 + \mathcal{M}^2_\pi \label{invproplargeN}\ee where \be \mathcal{M}^2_\pi = \frac{\lambda}{2}\Big[J+N\,I_\pi\Big] \label{ginvlargeNpi}\ee thus in the large N limit, the tadpole condition (\ref{tadlargeN}) can be written as \be \langle \chi \rangle = 0 \Rightarrow \sigma_0 \,\mathcal{M}^2_\pi = 0 \label{tadpolelarN}\ee therefore if this condition is fulfilled with $\sigma_0 \neq 0$, namely with spontaneous symmetry breaking, automatically the $\pi$ field becomes massless. \vspace{2mm} \subsection{Counterterm approach:} \label{subsec:counter} \vspace{2mm} An alternative approach that is particularly suited to the study of radiative corrections to masses in the cosmological setting is the familiar method of introducing a mass counterterm in the Lagrangian by writing the mass term in the Lagrangian density as \be {M^2_\pi}\pi^2 = \mathcal{M}^2_\pi \pi^2 + \delta M^2_\pi \pi^2~~;~~\delta M^2_\pi = M^2_\pi-\mathcal{M}^2_\pi \ee and requesting that the counterterm $\delta M^2$ subtracts the $\pi$ self-energy at zero four momentum \be -\delta M^2_\pi+\Sigma_\pi(0) =0 \Rightarrow \mathcal{M}^2_\pi = {M}^2_\pi-\Sigma_\pi(0) \label{counter}\ee and the inverse propagator becomes \be G^{-1}_\pi(k) = k^2+\mathcal{M}^2_\pi-\Big[\Sigma_\pi(k)- \Sigma_\pi(0)\Big] \label{subprop}\ee in the broken symmetry phase $\mathcal{M}^2_\pi = 0$ from eqns. (\ref{massren},\ref{pionmass}) and the propagator features a pole at zero four momentum. The main reason to go through this exercise is to highlight the following important points: \begin{itemize} \item \textbf{i)} the tadpole type diagrams (a),(b),(c) are cancelled by the tadpole condition (\ref{tadpole}) which is tantamount to the requirement that the interaction Hamiltonian has vanishing matrix element between the vacuum and a single $\chi$ particle state. 
\item \textbf{ii)} at one loop level the vanishing of the $\pi$ mass in the case of spontaneous symmetry breaking is a consequence of the cancellation between the \emph{local} tadpole diagrams (d), (e) and the \emph{non-local} one loop diagram (f) in the $k\rightarrow 0$ limit (the non-locality is in configuration space, not in Fourier space). This point will be at the heart of the discussion in inflationary space time below. \item \textbf{iii)} In the large N limit, only the local tadpole fig. (\ref{fig:selfenergy}-(e)) contributes to the $\pi$ self-energy and the tadpole condition (\ref{tadpole}), for which a symmetry breaking solution immediately yields a vanishing $\pi$ mass. The tadpole and non-local diagrams of fig. (\ref{fig:selfenergy}-(d,f)) are suppressed by a power of $1/N$ in this limit compared to the diagram (\ref{fig:selfenergy}-(e)). \item \textbf{iv)} The general, non-perturbative proof of the existence of gapless long wavelength excitations as a consequence of the results (\ref{finres},\ref{sums}) manifestly relies on \emph{time translational invariance} and energy eigenstates. In its most general form, without invoking time translational invariance, the result (\ref{res1}) is much less stringent on the long-wavelength spectrum of excitations, with no (obvious) statement on the mass spectrum of the theory. Such a situation, namely the lack of time translational invariance (of a global time-like Killing vector), is a hallmark of inflationary cosmology, and it is expected that, unlike in Minkowski space-time, Goldstone modes \emph{may acquire a mass radiatively}. \end{itemize} These points are relevant to the discussion of the fate of Goldstone bosons in de Sitter space-time below. \section{Goldstone Bosons in de Sitter space-time:}\label{sec:goldcosmo} We consider the $O(2)$ linear sigma model minimally coupled in a spatially flat de Sitter space time with metric given by \be ds^2 = dt^2-a^2(t)~ d\vec{x}^2 ~~;~~a(t)=e^{Ht} \label{frw}\ee defined by the action (the different notation for the fields as compared to the previous section will be explained below) \be L= \int d^4x \sqrt{|g|}\,\Bigg\{ \frac{1}{2} g^{\mu \nu} \partial_\mu \vec{\Phi}\cdot \partial_\nu \vec{\Phi} - V(\vec{\Phi}\cdot\vec{\Phi}) \Bigg\}~~;~~\vec{\Phi} = (\phi_1,\phi_2) \,.\label{actionds}\ee where \be V(\vec{\Phi}\cdot\vec{\Phi}) = \frac{\lambda}{8}\Bigg(\phi^2_1 + \phi^2_2 - \frac{\mu^2}{\lambda}\Bigg)^2\,.
\label{potds}\ee We follow the method of ref.\cite{coleman} to obtain the conservation law associated with the global $O(2)$ symmetry: consider a space-time dependent infinitesimal transformation that vanishes at the boundary of space-time \be \phi_1(\vec{x},t) \rightarrow \phi_1(\vec{x},t) - \epsilon(\vec{x},t)\phi_2(\vec{x},t) ~~;~~ \phi_2(\vec{x},t) \rightarrow \phi_2(\vec{x},t) + \epsilon(\vec{x},t)\phi_1(\vec{x},t) \label{trafods}\ee under which the change in the action is given by \be \delta L = \int d^4x \sqrt{|g|}\, \partial_\mu \epsilon(\vec{x},t)~J^\mu(\vec{x},t) \label{changeL}\ee where \be J^\mu(\vec{x},t) = i\,g^{\mu \nu}\Big[\phi_1 \partial_\nu \phi_2 - \phi_2 \partial_\nu \phi_1\Big] \label{current}\ee upon integration by parts assuming a vanishing boundary term, \be \delta L = -\int d^4x \sqrt{|g|}\, \epsilon(\vec{x},t)\,J^\mu_{;\,\mu}(\vec{x},t) \label{covacons}\ee from which upon using the variational principle\cite{coleman} we recognize that the current (\ref{current}) is \emph{covariantly conserved} \be J^\mu_{;\,\mu}(\vec{x},t) = \frac{1}{\sqrt{|g|}}~ \partial_\mu \Big( \sqrt{|g|} \,J^\mu\Big) = \dot{J}^0+3\,H \,J^0-\frac{1}{a^2(t)}\nabla\cdot\Big(\phi_1\,\nabla\,\phi_2-\phi_2\,\nabla\,\phi_1 \Big)=0 \label{covacons2}\ee where the dot stands for $d/d t$. This covariant conservation law can be seen to follow from the Heisenberg equations of motion for the fields, \be \ddot{\phi}_a + 3H \dot{\phi}_a - \frac{\nabla^2}{a^2(t)}\phi_a + 2~\Big(\frac{d V(\rho^2)}{d\rho^2}\Big)\,\phi_a = 0 ~~;~~ a= 1,2 ~~;~~ \rho^2 = \phi^2_1 + \phi^2_2 \,. \label{heiseqn}\ee It is the second term in (\ref{covacons2}) that prevents a straightforward generalization of the steps leading to Goldstone's theorem as described in the previous section. Fundamentally it is this difference that is at the heart of the major discrepancies in the corollary of Goldstone's theorem in the expanding cosmology as compared to Minkowski space time. It is convenient to pass to conformal time \be \eta = - \frac{e^{-Ht}}{H} ~~;~~ a( \eta ) = -\frac{1}{H\eta} \label{conftime}\ee and to rescale the fields \be \phi_1(\vec{x},t) = \frac{\sigma(\vec{x},\eta)}{a( \eta)}~~,~~\phi_2(\vec{x},t) = \frac{\pi(\vec{x},\eta)}{a(\eta)} \label{rescafield}\ee in terms of which the covariant conservation law (\ref{covacons2}) becomes \be \frac{\partial}{\partial \eta} \mathcal{J}^0(\vec{x},\eta) + \vec{\nabla}\cdot\vec{\mathcal{J}}(\vec{x},\eta) =0 \label{consconf} \ee where \bea \mathcal{J}^0(\vec{x},\eta) & = & i \Big[\sigma \, \pi^{'} -\pi \, \sigma^{'}\Big] \label{joconf}\\ \vec{\mathcal{J}}(\vec{x},\eta) & = & -i \Big[\sigma \, \vec{\nabla} \pi -\pi \, \vec{\nabla} \sigma \Big] \label{jvecconf}\eea where $'\equiv d/d\eta$. In terms of the rescaled fields the action becomes (after dropping a total surface term) \be L=\int d^3x d\eta \Bigg\{\frac{1}{2}\Big[\sigma^{'\,2}-(\nabla \sigma)^2 + \pi^{'\,2}-(\nabla \pi)^2+\frac{a''}{a}(\sigma^2+\pi^2) \Big]-\mathcal{V}\big(\sigma^2+\pi^2;\eta\big) \Bigg\} \label{lagconf} \ee where \be \mathcal{V}\big(\sigma^2+\pi^2;\eta\big) = \frac{\lambda}{8}\Big(\sigma^2+\pi^2-a^2(\eta) \frac{\mu^2}{\lambda}\Big)^2 \,. 
\label{potconf}\ee Therefore, although the Noether current (\ref{joconf},\ref{jvecconf}) is conserved and looks similar to that in Minkowski space time, the Hamiltonian is manifestly time dependent: there is no time translational invariance, no energy conservation, and no spectral representation, all of which are necessary ingredients for Goldstone's theorem to guarantee massless excitations. The Heisenberg equations of motion are \bea \sigma^{''}-\nabla^2 \sigma + \Big[2 \,\frac{d\mathcal{V}(r^2)}{dr^2}-\frac{a^{''}}{a} \Big]\,\sigma & = & 0 \label{sigdseqn}\\ \pi^{''}-\nabla^2 \pi + \Big[2 \,\frac{d\mathcal{V}(r^2)}{dr^2}-\frac{a^{''}}{a} \Big]\,\pi & = & 0 \label{pidseqn} \eea where $r^2 = \pi^2 + \sigma^2$. Using these Heisenberg equations of motion it is straightforward to confirm the conservation law (\ref{consconf}) with (\ref{joconf},\ref{jvecconf}). Now making an $\eta$ dependent shift of the field $\sigma$ \be \sigma(\vec{x},\eta) = \sigma_0\,a(\eta)+\chi(\vec{x},\eta) \label{shifteta}\ee the action (\ref{lagconf}) becomes \bea L & = & \int d^3x d\eta \Bigg\{\frac{1}{2}\Big[\chi^{'\,2}-(\nabla \chi)^2 + \pi^{'\,2}-(\nabla \pi)^2- \frac{1}{\eta^2}\Big(\frac{M^2_\chi }{H^2}-2\Big)\,\chi^2-\frac{1}{\eta^2}\Big(\frac{M^2_\pi }{H^2}-2\Big)\,\pi^2 \Big] \nonumber \\ & + & \frac{\lambda}{2\,\eta^3}\frac{\sigma_0 J}{H^3} \, \chi +\frac{ \lambda}{2\eta}\frac{\sigma_0}{H} \, \chi^3 + \frac{\lambda}{2\eta}\frac{\sigma_0}{H} \,\pi^2 \chi - \frac{\lambda}{8} \chi^4 -\frac{\lambda}{8} \pi^4 -\frac{\lambda}{4} \chi^2 \pi^2 \Bigg\} \label{lagrads} \eea where $M_{\chi,\pi}, J$ are the same as in the Minkowski space time case, given by eqn. (\ref{massesetc}). The Heisenberg equations of motion for the spatial Fourier modes of wavevector $k$ of the fields in the non-interacting ($\lambda=0$) theory are given by \bea \chi''_{\vk}(\eta)+ \Big[k^2-\frac{1}{\eta^2}\Big(\nu^2_\chi -\frac{1}{4} \Big) \Big]\chi_{\vk}(\eta) & = & 0 \label{chimodes} \\ \pi''_{\vk}(\eta)+ \Big[k^2-\frac{1}{\eta^2}\Big(\nu^2_\pi -\frac{1}{4} \Big) \Big]\pi_{\vk}(\eta) & = & 0 \label{pimodes} \eea \noindent where \be \nu^2_{\chi,\pi} = \frac{9}{4}- \frac{M^2_{\chi,\pi} }{H^2}\,. \label{nu}\ee We will focus on the case of ``light'' fields, namely $M^2_{\chi,\pi}\ll H^2$, and choose Bunch-Davies vacuum conditions for which the two linearly independent solutions are given by \bea g_{\chi,\pi}(k;\eta) & = & \frac{1}{2}\; i^{\nu_{\chi,\pi}+\frac{1}{2}} \sqrt{-\pi \eta}\,H^{(1)}_{\nu_{\chi,\pi}}(-k\eta)\label{gnu}\\ f_{\chi,\pi}(k;\eta) & = & \frac{1}{2}\; i^{-\nu_{\chi,\pi}-\frac{1}{2}} \sqrt{-\pi \eta}\,H^{(2)}_{\nu_{\chi,\pi}}(-k\eta)= g^*_{\chi,\pi}(k;\eta) \label{fnu} \; , \eea \noindent where $H^{(1,2)}_\nu(z)$ are Hankel functions. Expanding the field operators in this basis in a comoving volume $V$ \bea \chi(\vec{x},\eta) & = & \frac{1}{\sqrt{V}}\sum_{\vec{k}} \Big[a_{\vec{k}}\,g_\chi(k;\eta)\,e^{i\vec{k}\cdot\vec{x}}+ a^\dagger_{\vec{k}}\,\,g^*_\chi(k;\eta)\,e^{-i\vec{k}\cdot\vec{x}}\Big] \label{quantumfieldchi}\\ \pi(\vec{x},\eta) & = & \frac{1}{\sqrt{V}}\sum_{\vec{k}} \Big[b_{\vec{k}}\,g_\pi(k;\eta)\,e^{i\vec{k}\cdot\vec{x}}+ b^\dagger_{\vec{k}}\,\,g^*_\pi(k;\eta)\,e^{-i\vec{k}\cdot\vec{x}}\Big] \label{quantumfieldpi} \eea The Bunch-Davies vacuum is defined so that \be a_{\vec{k}}|0\rangle = 0 ~~;~~b_k|0\rangle = 0 \,,\label{dsvac}\ee and the Fock states are obtained by applying the creation operators $a_{\vec{k}}^{\dagger};b^\dagger_{\vec{k}}$ onto the vacuum.
After the shift (\ref{shifteta}), the current (\ref{joconf},\ref{jvecconf}) becomes \bea \mathcal{J}^0(\vec{x},\eta) & = & \mathcal{J}^0_{tl}(\vec{x},\eta)+i \Big[\chi \, \pi^{'} -\pi \, \chi^{'}\Big]~~;~~ \mathcal{J}^0_{tl}(\vec{x},\eta) = i \Big[\sigma_0\,a \, \pi^{'} -\pi \, \sigma_0\,a^{'}\Big] \label{joconfshift}\\ \vec{\mathcal{J}}(\vec{x},\eta) & = & \vec{\mathcal{J}}_{tl}(\vec{x},\eta) -i \Big[\chi \, \vec{\nabla} \pi -\pi \, \vec{\nabla} \chi \Big]~~;~~\vec{\mathcal{J}}_{tl}(\vec{x},\eta) = -i\sigma_0\,a\,\vec{\nabla} \pi \,.\label{jvecconfshift}\eea The terms $ \mathcal{J}^0_{tl}(\vec{x},\eta),\vec{\mathcal{J}}_{tl}(\vec{x},\eta) $ on the right hand sides of (\ref{joconfshift},\ref{jvecconfshift}) are the \emph{tree level} contributions to the conserved current, as these terms create single particle $\pi$ states out of the vacuum. The interaction vertices are the same as those for the Minkowski space-time case depicted in fig.(\ref{fig:vertices}) but with the replacements \be \sigma_0 \rightarrow -\frac{\sigma_0}{H\eta} ~~;~~J \rightarrow -\frac{J}{H\eta} \,. \label{repds}\ee In refs.\cite{boyan,rigo,boywwds} it is found that the tadpole contributions in figs.(\ref{fig:expval},\ref{fig:selfenergy}-(d,e)) are given by \bea \langle 0| \chi^2(\vx,\eta)|0 \rangle_{ren} & = & \frac{1}{8\pi^2 \,\eta^2}~\frac1{\Delta_\chi} ~\big[1+ \cdots \big] \label{poletadchi} \\ \langle 0| \pi^2(\vx,\eta)|0 \rangle_{ren} & = & \frac{1}{8\pi^2 \,\eta^2}~\frac1{\Delta_\pi} ~\big[1+ \cdots \big] \label{poletadpi}\eea where the renormalization regularizes ultraviolet divergences, and \be \Delta_\chi = \frac{M^2_\chi}{3H^2}~~;~~\Delta_\pi = \frac{M^2_\pi}{3H^2}\,,\label{deltas}\ee the dots in eqns. (\ref{poletadchi},\ref{poletadpi}) stand for terms subleading in powers of $\Delta_{\chi,\pi}\ll 1$. In order to maintain a notation consistent with the previous section we introduce \be \mathcal{I}_{\chi,\pi} \equiv \frac{1}{8\pi^2\,\Delta_{\chi,\pi}} \,.\label{Isdef}\ee The tadpole condition now becomes \be \langle \chi \rangle = 0 \Rightarrow \frac{\lambda\,a\,\sigma_0}{2\,\eta^2}\Big[\frac{J}{H^2}+3\mathcal{I}_{\chi}+\mathcal{I}_{\pi}\Big] =0 \,. \label{tadpoleds}\ee A symmetry breaking solution corresponds to $\sigma_0\neq 0 ~;~ J/H^2 = -3\mathcal{I}_{\chi}-\mathcal{I}_{\pi} $. At tree level \be \sigma^2_0 = \frac{\mu^2}{\lambda} \Rightarrow J=0 \Rightarrow M^2_\pi =0 \,,\label{treelevelssbds} \ee and using that $a^{''}/a = 2/\eta^2$ the tree-level conservation law becomes \be \frac{\partial}{\partial \eta}\mathcal{J}^0_{tl} +\vec{\nabla}\cdot\vec{\mathcal{J}}_{tl} = 0 \Rightarrow \sigma_0 a(\eta)\Big[\pi^{''} - \frac{2}{\eta^2}\,\pi-\nabla^2\pi\Big] =0 \label{tlcl}\ee which is fulfilled by the Heisenberg equation of motion for the $\pi$ field (\ref{pimodes}) with $M_\pi=0$, namely $\nu_\pi = 3/2$. It is illuminating to understand how the result (\ref{res1}) is fulfilled at tree level. With the expansion of the $\pi$ field given by (\ref{quantumfieldpi}) and $\nu_\pi = 3/2$ introduced in $\mathcal{J}^{0}_{tl}(\vec{x},\eta)$ we find \be S(\vec{k};\eta,\eta') = -2 \,\sigma_0\,a(\eta) \,\mathrm{Im}\Big[g^*_\pi(k;\eta') \Big(g^{'}_\pi(k;\eta)+ \frac{g_\pi(k;\eta)}{\eta} \Big)\Big] \label{skds}\ee and the long wavelength limit is given by \be \mathrm{lim}_{k \rightarrow 0} \, S(\vec{k};\eta,\eta') = \sigma_0 \,a(\eta')\,. \label{k0lim}\ee Again, we note that it is precisely the lack of time translational invariance that restricts the content of eqn.
(\ref{k0lim}): while this equation is satisfied with $M_\pi =0$ at tree level, there is no constraint on the mass of the single particle excitations from the general result (\ref{res1}). Thus whether the Goldstone fields acquire a mass via radiative corrections now becomes a dynamical question. There are two roadblocks to understanding radiative corrections to the mass, both stemming from the lack of time translational invariance: i) in general there is no simple manner to resum the series of one particle irreducible diagrams into a Dyson propagator, whose poles reveal the physical mass; ii) there is no Fourier transform in time that, when combined with a spatial Fourier transform, would allow one to glean a dispersion relation for single particle excitations. Obviously these two problems are related. In refs.\cite{serreau,prokossb,arai} only the local tadpoles were considered; this is a local mean field approximation, and the space-time local nature of the tadpole allows a mass to be extracted. However, while the mean field tadpole is the leading contribution in the large N limit as discussed in the previous section, for finite $N$ the non-local diagram equivalent to fig. (\ref{fig:selfenergy}-(f)) is of the same order, and in Minkowski space time it is \emph{this} diagram that cancels the tadpole (mean field) contribution to the $\pi$ mass. Thus for finite $N$ the question is whether the non-local self-energy contribution (\ref{fig:selfenergy}-(f)) can cancel the tadpole contributions of fig. (\ref{fig:selfenergy}-(d),(e)) even when these feature very different time dependence and (\ref{fig:selfenergy}-(f)) does not have a time Fourier transform that renders it local in frequency space. It is at this point that the Wigner-Weisskopf method introduced in refs.\cite{boyhol,boywwds} proves to be particularly useful. \subsection{Wigner-Weisskopf theory in de Sitter space time:} In order to make the discussion self-contained, we highlight the main aspects of the Wigner-Weisskopf non-perturbative approach to study the time evolution of quantum states, pertinent to the self-consistent description of mass generation discussed in the previous sections. For a more thorough discussion and comparison to results in Minkowski space time the reader is referred to refs.\cite{boyhol,boywwds}. Expanding the interaction picture state $|\Psi(\eta)\rangle_I$ in Fock states $|n\rangle$, obtained as usual by applying the creation operators onto the (bare) vacuum state (here taken to be the Bunch-Davies vacuum), as \be |\Psi(\eta)\rangle_I = \sum_n C_n(\eta) |n\rangle \label{expastate}\ee the evolution of the state in the interaction picture is given by \cite{boyhol} \be i \frac{d}{d\eta}|\Psi(\eta)\rangle_I = H_I(\eta)|\Psi(\eta)\rangle_I \label{eomip}\ee where $H_I(\eta)$ is the interaction Hamiltonian in the interaction picture. In terms of the coefficients $C_n(\eta)$ eqn. (\ref{eomip}) becomes \be \frac{d\,C_n(\eta)}{d\eta} = -i \sum_m C_m(\eta) \langle n|H_I(\eta)|m\rangle \,. \label{ecns}\ee It is convenient to separate the diagonal matrix elements, which represent \emph{local contributions}, from those that represent transitions and are associated with non-local self-energy corrections, writing \be \frac{d\,C_n(\eta)}{d\eta} = -i C_n(\eta)\langle n|H_I(\eta)|n\rangle -i \sum_{m\neq n} C_m(\eta) \langle n|H_I(\eta)|m\rangle \,. \label{ecnsoff}\ee Although this equation is exact, it yields an infinite hierarchy of simultaneous equations when the Hilbert space of states $|n\rangle$ is infinite dimensional.
However, progress is made by considering the transitions between states connected by the interaction Hamiltonian at a given order in $H_I$: consider the case when one state, say $|A\rangle$, couples to a set of states $|\kappa\rangle$, which couple back to $|A\rangle$ via $H_I$; to lowest order in the interaction the system of equations closes in the form \bea \frac{d\,C_A(\eta)}{d\eta} & = & -i \langle A|H_I(\eta)|A\rangle \, C_A(\eta)-i \sum_{\kappa \neq A} \langle A|H_I(\eta)|\kappa\rangle \,C_\kappa(\eta)\label{CA}\\ \frac{d\,C_\kappa(\eta)}{d\eta}& = & -i \, C_A(\eta) \langle \kappa|H_I(\eta) |A\rangle \label{Ckapas}\eea where the $\sum_{\kappa \neq A}$ is over all the intermediate states coupled to $|A\rangle$ via $H_I$, representing transitions. Consider the initial value problem in which at time $\eta=\eta_0$ the state of the system is given by $|\Psi(\eta=\eta_0)\rangle = |A\rangle$ so that \be C_A(\eta_0)= 1 ~~;~~ C_{\kappa\neq A}(\eta=\eta_0) =0 \,.\label{initial}\ee Solving (\ref{Ckapas}) and introducing the solution into (\ref{CA}) we find \bea C_{\kappa}(\eta) & = & -i \,\int_{\eta_0}^{\eta} \langle \kappa |H_I(\eta')|A\rangle \,C_A(\eta')\,d\eta' \label{Ckapasol}\\ \frac{d\,C_A(\eta)}{d\eta} & = & -i \langle A|H_I(\eta)|A\rangle \, C_A(\eta) - \int^{\eta}_{\eta_0} \Sigma_A(\eta,\eta') \, C_A(\eta')\,d\eta' \label{intdiff} \eea where\footnote{In ref.\cite{boyhol} it is proven that in Minkowski space-time the retarded self-energy in the single particle propagator is given by $i\Sigma$.} \be \Sigma_A(\eta,\eta') = \sum_{\kappa \neq A} \langle A|H_I(\eta)|\kappa\rangle \langle \kappa|H_I(\eta')|A\rangle \,. \label{sigma} \ee In eqn. (\ref{Ckapas}) we have not included the diagonal term as in (\ref{CA})\footnote{These diagonal terms represent local self-energy insertions in the propagators of the intermediate states, hence higher orders in the perturbative expansion.}; it is clear from (\ref{Ckapasol}) that with the initial condition (\ref{initial}) the amplitude of $C_{\kappa}$ is of $\mathcal{O}(H_I)$, and therefore a diagonal term would effectively lead to higher order contributions to (\ref{intdiff}). The integro-differential equation (\ref{intdiff}) with \emph{memory} yields a non-perturbative solution for the time evolution of the amplitudes and probabilities, which simplifies in the case of weak couplings. In perturbation theory the time evolution of $C_A(\eta)$ determined by eqn. (\ref{intdiff}) is \emph{slow} in the sense that the time scale is determined by a weak coupling kernel $\Sigma_A$; hence an approximation in terms of an expansion in derivatives of $C_A$ emerges as follows: introduce \be W (\eta,\eta') = \int^{\eta'}_{\eta_0} \Sigma_A(\eta,\eta'')d\eta'' \label{Wo}\ee so that \be \Sigma_A(\eta,\eta') = \frac{d}{d\eta'}\,W (\eta,\eta'),\quad W (\eta,\eta_0)=0. \label{rela} \ee Integrating by parts in eq.(\ref{intdiff}) we obtain \be \int_{\eta_0}^{\eta} \Sigma_A(\eta,\eta')\,C_A(\eta')\, d\eta' = W (\eta,\eta)\,C_A(\eta) - \int_{\eta_0}^{\eta} W (\eta,\eta')\, \frac{d}{d\eta'}C_A(\eta') \,d\eta'. \label{marko1}\ee The second term on the right hand side is formally of \emph{higher order} in $H_I$; integrating by parts successively yields a systematic approximation scheme, as discussed in ref.\cite{boyhol}. Therefore to leading order in the interaction we find \be C_A(\eta) = e^{-\int^{\eta}_{\eta_0}\widetilde{W} (\eta',\eta')\, d\eta'} , \quad \widetilde{W} (\eta',\eta')= i \langle A|H_I(\eta')|A\rangle + \int_{\eta_0}^{\eta'} \Sigma_A(\eta',\eta^{''}) d\eta^{''}\,.
\label{dssolu} \ee Following ref.\cite{boywwds} we introduce the \emph{real quantities} $\mathcal{E}_A(\eta)\,;\,\Gamma_A(\eta)$ as \be i \langle A|H_I(\eta')|A\rangle +\int^{\eta'}_{\eta_0} \Sigma_A(\eta',\eta'')d\eta'' \equiv i\,\mathcal{E}_A(\eta')+ \frac{1}{2}~\Gamma_A(\eta') \label{realima}\ee in terms of which \be C_A(\eta) = e^{-i\int^{\eta}_{\eta_0}\mathcal{E}_A(\eta') d\eta'}~ e^{-\frac{1}{2}\int^{\eta}_{\eta_0}\Gamma_A(\eta') d\eta'} \label{caofeta}\ee When the state $A$ is a single particle state, radiative corrections to the mass are extracted from $\mathcal{E}_A$ and \be \Gamma_A(\eta) = - \frac{d}{d\eta}\ln\Big[|C_A(\eta)|^2\Big] \label{decarate} \ee is identified as a (conformal) time dependent decay rate. \vspace{2mm} \textbf{Extracting the mass:} In Minkowski space-time for $|A\rangle = |1_{\vk}\rangle$ a single particle state of momentum $\vk$, $\mathcal{E}_{1_{\vk}}$ includes the self-energy correction to the mass of the particle\cite{boyhol,boywwds,qoptics}. Consider adding a mass counterterm to the Hamiltonian density, in terms of the spatial Fourier transform of the fields it is given by \be H_{ct} = \frac{\delta M^2}{2} \sum_{\vec{k}} \pi_{\vk}\,\pi_{-\vk} \label{hctex}\ee the matrix element \be \langle 1^{\pi}_{\vk} | H_{ct} |1^{\pi}_{\vk} \rangle = \delta M^2 \,|g_\pi(\eta)|^2 \,, \label{mtxelex}\ee hence it is clear that only the imaginary part of $\widetilde{W}$ can be interpreted as a mass term, thus only the imaginary part of $\Sigma_{1_{\vk}}$ contributes to the mass. However, the non-local nature of $\Sigma_{1_{\vk}}$ also includes transient behavior from the initial state preparation thus a mass term must be isolated in the asymptotic long time limit when transient phenomena has relaxed. Last but not least momentum dependence can mask a constant mass term, which can only be identified in the long wavelength limit. In particular in refs.\cite{boyhol,boywwds} it is shown that in Minkowski space time (see appendix) \be \mathrm{Im} \int^{t\rightarrow \infty}_0 \Sigma_{1_{\vk}}(t,t')dt' = \delta E_{1_{\vk}} \label{minki}\ee where $\delta E_{1_{\vk}}$ is the second order correction to the energy of a single particle state with momentum $\vk$ obtained in quantum mechanical perturbation theory (see also the appendix). The program of renormalized perturbation theory begins by writing the free field part of the Lagrangian in terms of the renormalized mass and introducing a counterterm in the interaction Lagrangian so that it cancels the radiative corrections to the mass from the self-energy. Namely the counterterm in the interaction Lagrangian is fixed by requiring that $\mathcal{E}_{1\vec{k}}(\eta') = 0 $, in the long time limit $\eta' \rightarrow 0^-$ and in the long-wavelength limit. Therefore as per the discussion above we extract the mass term from the condition \be \mathcal{E}_{1\vec{k}}(\eta') =\langle 1_{\vec{k}}|H_I(\eta')|1_{\vec{k}}\rangle + \int_{\eta_0}^{\eta'} \mathrm{Im}\Big[\Sigma_1(k;\eta',\eta^{''})\Big] d\eta^{''} = 0\, \label{renmass}\ee in the long wavelength limit. In Minkowski space time, the condition (\ref{renmass}) is tantamount to requiring that the (real part of the) pole in the propagator be at the physical mass\cite{boyhol} and is equivalent to the counterterm approach described in section (\ref{subsec:counter}). 
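As a simple illustration of how eqns. (\ref{realima})-(\ref{decarate}) encode a mass shift and a decay rate (a schematic limiting case, not needed for the de Sitter analysis that follows), suppose that after the transients have relaxed $\mathcal{E}_A(\eta)\rightarrow \mathcal{E}_A$ and $\Gamma_A(\eta)\rightarrow \Gamma_A$ become (nearly) constant over the integration range; then eqn. (\ref{caofeta}) reduces to \be C_A(\eta) \simeq e^{-i\,\mathcal{E}_A\,(\eta-\eta_0)}\, e^{-\frac{1}{2}\,\Gamma_A\,(\eta-\eta_0)}~~\Rightarrow~~ |C_A(\eta)|^2 \simeq e^{-\Gamma_A\,(\eta-\eta_0)}\,, \nonumber \ee so that $\mathcal{E}_A$ shifts the phase of the amplitude exactly as an energy (mass) term in the unperturbed evolution would, while $\Gamma_A$ is the decay rate of eqn. (\ref{decarate}). This makes explicit why only the imaginary part of $\widetilde{W}$, namely $\mathcal{E}_A$, can be absorbed into a mass counterterm.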
In the appendix we carry out this program and show explicitly how the Wigner-Weisskopf approach reproduces the results in Minkowski space time obtained in section (\ref{sec:minkowski}) and how the mass is reliably extracted in the long time, long wavelength limit. We implement the \emph{same strategy} to obtain the self-consistent radiatively generated mass in de Sitter space time where equation (\ref{renmass}) will determine the \emph{self-consistent condition} for the mass. In the mass terms in the Lagrangian (\ref{lagrads}) we implement the counterterm method by introducing the renormalized masses $\mathcal{M}^2_{\chi,\pi}$ that include the radiative corrections, and writing \be -\frac{M^2_\chi }{2\,H^2\,\eta^2}\,\chi^2- \frac{M^2_\pi }{2\,H^2\,\eta^2}\,\pi^2 \equiv -\frac{\mathcal{M}^2_\chi }{2\,H^2\,\eta^2}\,\chi^2- \frac{\mathcal{M}^2_\pi }{2\,H^2\,\eta^2}\,\pi^2 -\mathcal{L}_{ct} \label{ctdsdef}\ee leading to the counterterm \emph{Hamiltonian} \be H_{ct} = \frac{1}{2\,H^2\,\eta^2}~ \int d^3x \Bigg[ \big(M^2_\chi -\mathcal{M}^2_\chi \big) \,\chi^2 + \big(M^2_\pi - \mathcal{M}^2_\pi \big) \,\pi^2 \Bigg] \label{Hctds}\ee included in the interaction Hamiltonian $H_I(\eta)$, and redefining \be \Delta_\chi = \frac{\mathcal{M}^2_\chi}{3H^2}~~;~~\Delta_\pi = \frac{\mathcal{M}^2_\pi}{3H^2}\,.\label{newdeltas}\ee In what follows we assume that $\Delta_{\chi,\pi} \ll 1$, therefore the leading order contributions arise from poles in $\Delta_{\chi,\pi}$ as a result of the strong infrared divergences of minimally coupled light fields. The contributions from diagrams like those of fig. (\ref{fig:selfenergy}, (a),(b),(c)) are cancelled by the tadpole condition (\ref{tadpoleds}). For the $\pi - \chi$-fields respectively we find \be \langle 1^\pi_{\vk}|H_I(\eta)|1^\pi_{\vk} \rangle = \frac{|g_{\pi}(k,\eta)|^2 }{ H^2\,\eta^2}\, \, \Bigg[ \frac{\lambda}{2}~ \Big(\frac{J}{H^2}+3 \mathcal{I}_\pi + \mathcal{I}_\chi \Big)-\frac{\mathcal{M}^2_\pi}{H^2}\Bigg] \,.\label{diagMEpi}\ee where $\mathcal{I}_{\chi,\pi}$ are given by eqns.(\ref{Isdef}) with the redefined $\Delta_{\chi,\pi}$ given by (\ref{newdeltas}). The non-local contribution is given by (see \cite{boywwds}) \be \Sigma_{\pi}(k;\eta;\eta') = \frac{\lambda^2\,\sigma^2_0}{H^2\,\eta\,\eta'} \,g^*_{\pi}(k;\eta)g_{\pi}(k;\eta')\,\int \frac{d^3q}{(2\pi)^3}\,g_{\chi}(q;\eta)g^*_{\chi}(q;\eta')g_{\pi}(|\vec{q}-\vk|;\eta) g^*_{\pi}(|\vec{q}-\vk|;\eta')\,, \label{selfpids}\ee For $\Delta_{\pi,\chi} \sim 0$ the integral features infrared divergences in the regions $q\sim 0;|\vec{q}-\vk|\sim 0$ which are manifest as poles in $\Delta_{\pi,\chi}$\cite{boywwds}. These regions are isolated following the procedure of ref.\cite{boywwds} and the poles in $\Delta_{\pi,\chi}$ can be extracted unambiguously. To leading order in these poles we find \be \Sigma_{\pi}(k;\eta;\eta') = \frac{\lambda^2\,\sigma^2_0}{8\pi^2\,H^2\,(\eta\,\eta')^2} \,g^*_{\pi}(k;\eta)g_{\pi}(k;\eta')\Bigg[ \frac{g_{\pi}(k;\eta) g^*_{\pi}(k;\eta')}{\Delta_\chi}+\frac{g_{\chi}(k;\eta) g^*_{\chi}(k;\eta')}{\Delta_\pi} \Bigg] \label{sigpipoles} \ee As discussed in detail in ref.\cite{boywwds} the poles originate in the emission and absorption of superhorizon quanta and arise from the integration of a band of superhorizon wavevectors $0\leq q \leq \mu_{ir} \rightarrow 0$ (see ref.\cite{boywwds} for details). 
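The origin of these poles can be exhibited schematically (a sketch only; the careful isolation of the infrared band is carried out in ref.\cite{boywwds}): for superhorizon wavevectors the mode functions (\ref{gnu}) with $\nu_{\chi,\pi}\simeq 3/2-\Delta_{\chi,\pi}$ behave as $|g_{\chi,\pi}(q;\eta)|^2 \propto q^{-3+2\Delta_{\chi,\pi}}$, so that the region $q\sim 0$ of the loop integral contributes \be \int_0^{\mu_{ir}} dq\, q^{2}\, q^{-3+2\Delta} = \frac{\mu^{2\Delta}_{ir}}{2\Delta} = \frac{1}{2\Delta}\Big[1+2\Delta \ln \mu_{ir}+\cdots\Big]\,, \nonumber \ee which is finite for $\Delta>0$ but features a simple pole as $\Delta \rightarrow 0$; the region $|\vec{q}-\vk|\sim 0$ yields the second pole in (\ref{sigpipoles}) in the same manner.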
As per the discussion in Minkowski space-time, a vanishing mass for a Goldstone boson after radiative correction requires that the tadpole terms in (\ref{diagMEpi}) be \emph{exactly} cancelled by the non-local self-energy contribution in the long-time, long wavelength limit. In particular the poles in $\Delta_{\chi,\pi}$ in (\ref{diagMEpi}) must be exactly cancelled by similar poles in $\Sigma_\pi$ (\ref{sigpipoles}). Therefore, to leading order in $\Delta_{\pi,\chi}$ we can set $\Delta_\pi = \Delta_\chi =0$, namely $\nu_{\pi,\chi}= 3/2$ in the mode functions $g_{\pi,\chi}$ given by (\ref{gnu}), whence it follows that to leading order in $\Delta_{\pi,\chi}$ \be \Sigma_{\pi}(k;\eta;\eta') = \frac{\lambda^2\,\sigma^2_0}{8\pi^2\,H^2} \,\frac{|g(k;\eta)|^2|g (k;\eta')|^2}{(\eta\,\eta')^2}\Big[\frac{1}{\Delta_\pi}+ \frac{1}{\Delta_\chi}\Big]\Big[1+\mathcal{O}(\Delta_\pi,\Delta_\chi)+\cdots\Big]\,, \label{selfleadds}\ee where \be g(k;\eta) = -\frac{1}{2} \sqrt{-\pi \eta}\,H^{(1)}_{\frac{3}{2}}(-k\eta)\,. \label{g32}\ee Therefore, to leading order in poles in $\Delta_{\chi,\pi}$, $\Sigma_\pi(k;\eta;\eta')$ is \emph{real} and \emph{does not contribute to the radiatively generated $\pi$ mass }. Therefore, to leading order in the poles in $\Delta_{\pi,\chi}$ the self-consistent condition that determines the mass, eqn. (\ref{renmass}) becomes \be\langle 1^\pi_{\vk}|H_I(\eta)|1^\pi_{\vk} \rangle =0 \,.\label{finicondi}\ee This observation is important: unlike Minkowski space time where the diagram (\ref{fig:selfenergy}-(f)) \emph{cancels} the local tadpole contributions, in de Sitter space time the similar diagram \emph{cannot} cancel the local contributions because the leading infrared divergences yield a real contribution whereas the tadpoles yield a purely imaginary contribution as befits a mass insertion. Therefore, the self-consistent mass is obtained solely from the local tadpole terms which determine the mean-field contribution. This validates the results of \cite{serreau,prokossb} which rely solely on the mean field approximation (which is exact only in the strict $N\rightarrow \infty$ limit). \emph{Assuming} spontaneous symmetry breaking so that eqn. (\ref{tadpoleds}) is fulfilled with $\sigma_0 \neq 0$, namely \be \frac{J}{H^2} = -3\mathcal{I}_\chi-\mathcal{I}_\pi \, , \label{ssb22}\ee it follows that \be \frac{\mathcal{M}^2_\pi}{H^2} = \frac{\lambda }{8\pi^2}\Big[\frac{1}{\Delta_\pi}-\frac{1}{\Delta_\chi} \Big] \,.\label{masscondids}\ee For the $\chi$ field we find the following contributions, \be \langle 1^\chi_{\vk}|H_I(\eta)|1^\chi_{\vk} \rangle = \frac{|g_{\chi}(k,\eta)|^2 }{ H^2\,\eta^2}\, \, \Bigg[ \frac{\lambda}{2}~ \Big(\frac{J}{H^2}+2\,\frac{\sigma^2_0}{H^2}+3 \mathcal{I}_\chi + \mathcal{I}_\pi \Big)-\frac{\mathcal{M}^2_\chi}{H^2}\Bigg] \,.\label{diagMEchi}\ee where $\mathcal{I}_{\chi,\pi}$ are given by eqn. (\ref{Isdef}) and for $\Sigma_\chi(k;\eta,\eta')$ we find \bea \Sigma_{\chi}(k;\eta;\eta') & = & \frac{\lambda^2\,\sigma^2_0}{2\,H^2\,\eta\,\eta'} g^*_{\chi}(k;\eta)g_{\chi}(k;\eta') \int \frac{d^3q}{(2\pi)^3} \Bigg[9\, g_{\chi}(q;\eta)g^*_{\chi}(q;\eta')g_{\chi}(|\vec{q}-\vk|;\eta) g^*_{\chi}(|\vec{q}-\vk|;\eta') \nonumber \\ & + & g_{\pi}(q;\eta)g^*_{\pi}(q;\eta')g_{\pi}(|\vec{q}-\vk|;\eta) g^*_{\pi}(|\vec{q}-\vk|;\eta') \Bigg]\,. 
\label{selfchids}\eea Extracting the poles in $\Delta_{\pi,\chi}$ the leading order result is given by \be \Sigma_{\chi}(k;\eta;\eta') = \frac{\lambda^2\,\sigma^2_0}{8\pi^2\,H^2\,(\eta\,\eta')^2} \,g^*_{\chi}(k;\eta)g_{\chi}(k;\eta')\Bigg[ \frac{g_{\pi}(k;\eta) g^*_{\pi}(k;\eta')}{\Delta_\pi}+9\,\frac{g_{\chi}(k;\eta) g^*_{\chi}(k;\eta')}{\Delta_\chi} \Bigg] \label{sigchipoles} \ee Again, just as for the $\pi$ field above, to leading order in the poles in $\Delta_{\pi,\chi}$ we can set $\Delta_\pi = \Delta_\chi =0$, namely $\nu_{\pi,\chi}=3/2$ in the mode functions $g_{\pi,\chi}$, leading to \be \Sigma_{\chi}(k;\eta;\eta') = \frac{\lambda^2\,\sigma^2_0}{8\pi^2\,H^2\,} \,\frac{|g(k;\eta)|^2|g(k;\eta')|^2}{(\eta\,\eta')^2}\Bigg[ \frac{1}{\Delta_\pi}+ \frac{9}{\Delta_\chi} \Bigg] \label{sigchilead} \ee where $g(k;\eta)$ is given by eqn. (\ref{g32}). The result is that to leading order in the poles, \emph{both} $\Sigma_{\pi,\chi}$ are \emph{real} and \emph{do not contribute to the radiatively generated masses} but will contribute to the \emph{decay} of the single particle excitations discussed below (see section \ref{subsec:decay}). Therefore, \emph{assuming} spontaneous symmetry breaking so that the condition (\ref{ssb22}) holds we find that \be \frac{\mathcal{M}^2_\chi}{H^2} = \frac{\lambda \, \sigma^2_0}{H^2}\,. \label{masachids}\ee Now identifying self-consistently the masses in the definition (\ref{newdeltas}) with $\mathcal{M}_{\pi,\chi}$, and defining \be \varepsilon = \sqrt{\frac{\lambda}{24\pi^2}}~~;~~ \Delta_\pi = \varepsilon \delta_\pi~~;~~\Delta_\chi = \frac{\lambda}{3}\,\frac{\sigma^2_0}{H^2} \equiv \varepsilon \delta_\chi \label{defsds}\ee equation (\ref{masscondids}) becomes \be \delta_\pi = \frac{1}{\delta_\pi}- \frac{1}{\delta_\chi} \label{delpieq}\ee with the (positive) solution \be \delta_\pi = \frac{1}{2\delta_\chi}\Big[\sqrt{1+4\delta^2_\chi}-1 \Big]\label{delpisol}\ee the negative root would lead to an instability and an uncontrollable infrared divergence in the loop integrals which would not yield a self-consistent solution. Now we are in position to understand whether spontaneous symmetry breaking does occur. The condition (\ref{ssb22}) is \be \frac{\sigma^2_0}{H^2} = \frac{\mu^2}{\lambda H^2} - \frac{3}{8\pi^2\,\Delta_\chi}-\frac{1}{8\pi^2\,\Delta_\pi} \neq 0\label{condi2}\ee which when written in terms of the definitions (\ref{defsds}) and using (\ref{delpisol}) becomes \be F[\delta_\chi]\equiv \delta_\chi +\frac{1}{2\delta_\chi}\Big[7+ \sqrt{1+4\delta^2_\chi} ~ \Big]= \frac{\mu^2}{3\varepsilon H^2} \label{funcondi} \ee The function $F[\delta_\chi]$ and its intersection with $\mu^2/3\varepsilon H^2$ is displayed in fig. (\ref{fig:ssbsol}). \begin{figure}[ht!] \begin{center} \includegraphics[height=4in,width=4in,keepaspectratio=true]{ssbsol.eps} \caption{$F[\delta_\chi]$ vs. $\delta_\chi$ and its intersection with $\mu^2/3\varepsilon H^2$. The function features a minimum at $\delta_{\chi,min} = 1.906\cdots$ with $F[\delta_{\chi,min}]= 4.77614\cdots$. The value of $\delta_\pi(\delta_{\chi,min})= 0.772\cdots$. } \label{fig:ssbsol} \end{center} \end{figure} As shown in fig. 
(\ref{fig:ssbsol}), $F[\delta_\chi]$ features a minimum at $\delta_{\chi,min}= 1.906\cdots$, at which $F[\delta_{\chi,min}]= 4.77614\cdots$ (see the short numerical check below); therefore there are symmetry breaking solutions for \be \frac{\mu^2}{3\varepsilon H^2} > 4.77614\cdots \label{ssbsols}\ee This condition can be written in a more illuminating manner as \be T_H < T_c ~~;~~T_H = \frac{H}{2\pi} ~~;~~ T_c = \frac{\mu}{2.419\cdots\,\lambda^{1/4}} = \frac{\lambda^{1/4}\,v }{2.419\cdots} \label{crittemp}\ee where $T_H$ is the Hawking temperature of de Sitter space time\footnote{In comoving time $t$, the mode functions $g_\pi,g_\chi$ are functions of $\eta = -e^{-Ht}/H$ therefore \emph{periodic} in imaginary time $\tau=it$ with period $\beta=2\pi/H=1/T_H$. See \cite{boyprem}.} and $v=\mu/\sqrt{\lambda}$ is the tree level vacuum expectation value (minimum of the tree level potential). From eqn. (\ref{delpieq}) it follows that \be \frac{\delta_\chi}{\delta_\pi} = \frac{1+\sqrt{1+4\delta^2_\chi}}{2} \label{ratio}\ee and $\delta_\chi > 1.906\cdots$, therefore in the broken symmetry phase we find that \be \frac{\delta_\chi}{\delta_\pi} \simeq \delta_\chi +\frac{1}{2} ~~\mathrm{for}~~T_H < T_c \,. \label{aprossb}\ee At weak coupling, for $\mu^2 \gg 3 \varepsilon H^2$ (but $\mu^2 \ll H^2$ for consistency) we find that \be \mathcal{M}_\chi \simeq |\mu| + a \,\lambda^{1/4}\,H ~~;~~ \mathcal{M}_\pi = b \,\lambda^{1/4}\,H \, \label{wcmasses}\ee where $a,b$ are positive constants. For $T_H > T_c$ the unbroken symmetry solution $\sigma_0=0$ is the only solution of the tadpole condition (\ref{tadpoleds}). In this case we find \be \frac{\mathcal{M}^2_\pi}{H^2}= \frac{\lambda}{2}~ \Big(\frac{J}{H^2}+3 \mathcal{I}_\pi + \mathcal{I}_\chi \Big) \label{thgtcpi}\ee \be \frac{\mathcal{M}^2_\chi}{H^2} = \frac{\lambda}{2}~ \Big(\frac{J}{H^2}+3 \mathcal{I}_\chi + \mathcal{I}_\pi \Big) \label{thgtcchi}\ee Subtracting (\ref{thgtcchi}) from (\ref{thgtcpi}) we find \be \delta_\pi - \delta_\chi = \frac{1}{\delta_\pi} - \frac{1}{\delta_\chi} \,. \label{diffa}\ee If $\delta_\pi > (<) \, \delta_\chi$ the left-hand side is positive (negative) but the right-hand side is negative (positive); therefore the only solution is \be \delta_\pi = \delta_\chi = \frac{\mu^2}{12\varepsilon H^2}\Bigg[\sqrt{1+\Big(\frac{12\varepsilon H^2}{\mu^2} \Big)^2}-1 \Bigg] \,.\label{thgtcdels}\ee Inserting this result in (\ref{thgtcchi}) we find for $T_H > T_c$ \be \mathcal{M}^2_\pi = \mathcal{M}^2_\chi = \frac{\mu^2}{4}\Bigg[\sqrt{1+0.701\Big(\frac{ T^2_H}{T^2_c} \Big)^2}-1 \Bigg] \,,\label{thgtcmass}\ee as expected $ \mathcal{M}_{\chi}=\mathcal{M}_{\pi}$ if the symmetry is unbroken. \subsection{A first order phase transition:} Fig. (\ref{fig:ssbsol}) shows that for $T_H < T_c$ there are \emph{two solutions} of the equation that determines symmetry breaking and the question arises: which of the two solutions describes the broken symmetry phase? The answer is gleaned by analyzing the weak coupling limit $\varepsilon \rightarrow 0$ ($\lambda \rightarrow 0$).
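Before turning to that limit, the numerical values quoted above for the minimum of $F[\delta_\chi]$ are easily cross-checked (a small self-contained sketch in Python/NumPy; the function names are ours, and $F$ and $\delta_\pi(\delta_\chi)$ are taken directly from eqns. (\ref{funcondi}) and (\ref{delpisol})):
\begin{verbatim}
import numpy as np

# F[delta_chi] from eq. (funcondi) and delta_pi(delta_chi) from eq. (delpisol)
F        = lambda d: d + (7.0 + np.sqrt(1.0 + 4.0*d**2))/(2.0*d)
delta_pi = lambda d: (np.sqrt(1.0 + 4.0*d**2) - 1.0)/(2.0*d)

d = np.linspace(0.5, 5.0, 1_000_001)   # fine grid around the expected minimum
i = int(np.argmin(F(d)))

print(F(d)[i])          # ~ 4.776  (minimum value of F)
print(d[i])             # ~ 1.9    (delta_chi at the minimum; F is very flat there)
print(delta_pi(d[i]))   # ~ 0.77   (corresponding delta_pi)
\end{verbatim}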
In this weak-coupling limit the left-most intersection in fig.(\ref{fig:ssbsol}) corresponds to the solution \be \delta^{(-)}_\chi \simeq 12\,\varepsilon \,\frac{H^2}{\mu^2} \Rightarrow \mathcal{M}^2_\chi \simeq \frac{\lambda}{2\pi^2} \frac{H^4}{\mu^2} \stackrel{\lambda \rightarrow 0}{\longrightarrow} 0 \label{solumin}\ee whereas the right-most intersection corresponds to the solution \be \delta^{(+)}_\chi \simeq \frac{\mu^2}{3\varepsilon H^2} \Rightarrow \mathcal{M}^2_\chi \simeq \mu^2 ~~;~~ \mathcal{M}^2_\pi \simeq \varepsilon\,H^2 \rightarrow 0 \label{soluplus}\ee Obviously the solution $\delta^{(+)}_\chi$ is the correct one: for $\lambda \rightarrow 0$ the expectation value obeys $\lambda \sigma^2_0 = \mu^2 $, the loop corrections vanish, and the masses of the $\chi,\pi$ fields should be the tree level ones, namely $\mathcal{M}^2_\chi = \mu^2,\,\mathcal{M}^2_\pi = 0$ respectively. However, as $\varepsilon H^2$ increases beyond the critical value at which $\mu^2/3\varepsilon H^2 = F[\delta_{\chi,min}]$ there is no available symmetry breaking solution, and this occurs for a non-vanishing value of $\sigma_0$, signaling a \emph{first order phase transition} at $T_H = T_c$ given by (\ref{crittemp}). The value of the order parameter at $T_H=T_c$ is given by \be \sigma_{0c} \simeq 0.61\, \frac{H}{\lambda^{1/4}} \,.\label{critop}\ee These results are in general agreement with those of ref.\cite{prokossb}. The first order nature of the phase transition can also be understood within the context of the infrared divergences: if the transition (as a function of coupling or $T_H$) were of second order, then at the critical point the masses of both $\chi,\pi$ fields would necessarily vanish, but the vanishing of the masses would lead to strong infrared divergences. Therefore a first order transition with a finite mass (correlation length) and a jump in the order parameter is a natural consequence of the strong infrared behavior of minimally coupled nearly massless fields in de Sitter space-time. The infrared singularities are self-consistently relieved by the radiative generation of a mass at the expense of turning the phase transition into first order. \subsection{Large N limit} The above results can be simply generalized to the $O(N)$ case where the $\pi$-fields form an $O(N-1)$ multiplet. Now the tadpole condition becomes \be \langle \chi \rangle = 0 \Rightarrow \frac{\lambda\,a\,\sigma_0}{2\,\eta^2}\Big[\frac{J}{H^2}+3\mathcal{I}_{\chi}+(N-1)\mathcal{I}_{\pi}\Big] =0 \label{tadpoledsN}\ee and the $\chi,\pi$ masses become \be \frac{\mathcal{M}^2_\pi}{H^2} = \frac{\lambda}{2}~ \Big[\frac{J}{H^2}+ (N+1) \mathcal{I}_\pi + \mathcal{I}_\chi \Big] \label{masspiN} \ee \be \frac{\mathcal{M}^2_\chi}{H^2} = \frac{\lambda}{2}~ \Big[2\frac{\sigma^2_0}{H^2}+\frac{J}{H^2}+ (N-1) \mathcal{I}_\pi + 3\mathcal{I}_\chi \Big]\,. \label{masschiN}\ee In the strict $N\rightarrow \infty$ limit these equations simplify to \bea && \sigma_0 \Big[\frac{J}{H^2} +N\mathcal{I}_{\pi}\Big] =0 \label{largeNtad} \\ && \frac{\mathcal{M}^2_\pi}{H^2} = \frac{\lambda}{2}~ \Big[\frac{J}{H^2}+ N \mathcal{I}_\pi \Big] \label{largeNmasspi}\\ && \frac{\mathcal{M}^2_\chi}{H^2} = \frac{\lambda}{2}~ \Big[2\frac{\sigma^2_0}{H^2}+\frac{J}{H^2}+ N \mathcal{I}_\pi \Big]\,, \label{largeNmasschi}\eea with $\mathcal{I}_{\pi,\chi}$ given by eqn.(\ref{deltas}) and self-consistently $\Delta_{\pi,\chi} = \mathcal{M}^2_{\pi,\chi}/3H^2$. Clearly, eqns.
(\ref{largeNtad},\ref{largeNmasspi}) lead to the conclusion that the only symmetry breaking solution corresponds to $\mathcal{M}^2_\pi =0$, but this is obviously in contradiction with the self-consistent solution because of the infrared singularity in $\mathcal{I}_\pi \propto 1/\mathcal{M}^2_\pi$. Therefore, the only available solution of (\ref{largeNtad}) that is also self-consistent and infrared finite must be the unbroken symmetry solution $\sigma_0=0$, which results in equal masses for $\chi,\pi$ fields. Thus in the strict $N\rightarrow \infty$ limit, neglecting the $1/N$ corrections, the $O(N)$ symmetry \emph{cannot} be spontaneously broken because of the strong infrared effects. This is the conclusion of ref.\cite{serreau}. However, the analysis presented above for finite $N$, and in particular for $N=2$, suggests that this conclusion holds \emph{only} in the strict $N\rightarrow \infty$ limit but for any finite $N$ \emph{there is spontaneous symmetry breaking}, along with infrared radiatively induced masses for the Goldstone fields without contradicting Goldstone's theorem, but the transition is first order as a consequence of infrared divergences. \vspace{2mm} \subsection{Decay of $\pi,\chi$ particles:} \label{subsec:decay} As discussed above the non-local self-energies $\Sigma_{\pi,\chi}(k;\eta,\eta')$ are real and do not contribute to the mass to leading order in $\Delta_{\pi,\chi}$; however, they determine the \emph{decay} of single particle states as described in ref.\cite{boywwds}. We now focus on obtaining the decay amplitudes arising from these contributions. Using the relations given by eqns. (\ref{masachids}-\ref{defsds}) to leading order in poles in $\Delta_{\pi,\chi}$ the one loop results (\ref{selfleadds},\ref{sigchilead}) can be written as \bea \Sigma^{(1)}_{\pi}(k;\eta;\eta') & = & \frac{3\,\lambda }{8\pi^2} \,\frac{|g(k;\eta)|^2|g (k;\eta')|^2}{(\eta\,\eta')^2}\Big[1+\frac{\delta_\chi}{\delta_\pi}\Big] \label{sigpifi}\\ \Sigma^{(1)}_{\chi}(k;\eta;\eta') & = & \frac{27\,\lambda }{8\pi^2} \,\frac{|g(k;\eta)|^2|g (k;\eta')|^2}{(\eta\,\eta')^2}\Big[1+\frac{\delta_\chi}{9\delta_\pi} \Big] \label{sigchifi}\,. \eea Thus formally the real parts of the single particle self-energies are of $\mathcal{O}(\lambda)$. In ref.\cite{boywwds} it was found that quartic self-interactions with strength $\lambda$ yield \emph{two-loop} self-energies that are \emph{also} of $\mathcal{O}(\lambda)$ as a consequence of infrared divergences that are manifest as \emph{second order} poles in $\Delta$. Implementing the ``infrared rules'' obtained in ref.\cite{boywwds} in the two-loop diagrams for $\Sigma_{\pi,\chi}$, figs.
(\ref{fig:twoloops}) (a,b) and (c,d) respectively we find the leading order two-loop contributions \bea \Sigma^{(2)}_{\pi}(k;\eta;\eta') & = & \frac{3\lambda}{16\pi^2}\,\frac{|g(k;\eta)|^2 |g(k;\eta')|^2}{(\eta\,\eta')^2}\,\Bigg[\frac{9}{\delta^2_\pi}+\frac{1}{\delta^2_\chi}+\frac{2}{\delta_\pi\delta_\chi} \Bigg] \label{sigpi2} \\\Sigma^{(2)}_{\chi} (k;\eta;\eta') & = & \frac{3\lambda}{16\pi^2}\,\frac{|g(k;\eta)|^2 |g(k;\eta')|^2}{(\eta\,\eta')^2}\,\Bigg[\frac{9}{\delta^2_\chi}+\frac{1}{\delta^2_\pi}+\frac{2}{\delta_\pi\delta_\chi} \Bigg]\label{sigchi2} \eea From (\ref{realima}) we obtain the conformal time dependent single particle decay rates (\ref{decarate}) \be \frac{1}{2}\Gamma_{\pi,\chi}(k;\eta) = \int^{\eta}_{\eta_0} \Sigma_{\pi,\chi}(k;\eta;\eta') d\eta' = \lambda ~ \mathcal{C}_{\pi,\chi}\, \,k\,\frac{\big|H^{(1)}_{ {3}/{2}}(z)\big|^2}{z}\int^{z_0}_z \frac{dz'}{z'}\, \big|H^{(1)}_{ {3}/{2}}(z')\big|^2 ~~;~~z=-k\eta \,, \label{gammas}\ee with \bea \mathcal{C}_\pi & = & \frac{3}{256}\Bigg[2\Big( 1+\frac{\delta_\chi}{\delta_\pi}\Big) + \Big( \frac{9}{\delta^2_\pi}+\frac{1}{\delta^2_\chi}+\frac{2}{\delta_\pi\delta_\chi} \Big) \Bigg]\label{Cpi}\\ \mathcal{C}_\chi & = & \frac{3 }{256}\Bigg[18\Big( 1+\frac{\delta_\chi}{9\delta_\pi}\Big) + \Big( \frac{9}{\delta^2_\pi}+\frac{1}{\delta^2_\chi}+\frac{2}{\delta_\pi\delta_\chi} \Big) \Bigg]\label{Cchi} \eea \begin{figure}[ht!] \begin{center} \includegraphics[height=4in,width=4in,keepaspectratio=true]{twoloops.eps} \caption{Two-loop contributions to $\Sigma_\pi$ (a,b) and $\Sigma_\chi$ (c,d). Solid lines $=\pi$, dashed lines $=\chi$. } \label{fig:twoloops} \end{center} \end{figure} As discussed in refs.\cite{boyhol,boywwds} the decay $\pi \rightarrow \pi + \chi$ is a consequence of emission and absorption of superhorizon quanta; in the superhorizon limit $z \ll 1~;~z_0 \sim 1$ the integrals can be done simply\cite{boywwds}, leading to the following results for the single particle amplitudes \be |C^{\pi,\chi}_{1_{\vk}}| \simeq e^{-\gamma_{\pi,\chi}(-k\eta)}~~;~~ \gamma_{\pi,\chi}(-k\eta) = \frac{2\lambda}{9\pi^2}\,\mathcal{C}_{\pi,\chi}\,\Bigg[ \frac{H}{k_{phys}(\eta)}\Bigg]^6 \,. \label{decaypichi}\ee \textbf{Possible caveats:} There are other two-loop diagrams that have not been accounted for above. The generic form of these diagrams is displayed in fig.(\ref{fig:othertwoloops}) (we do not display specific $\pi,\chi$ lines but just the generic form of the diagrams); they can be interpreted as a renormalization of the internal propagator and the vertex. Both of these diagrams are $\propto (\lambda \sigma_0/H)^4 \simeq \lambda^2 \Delta^2_\chi$; therefore, \emph{if} the ``infrared rules'' of ref.\cite{boywwds} apply to these diagrams, the two loop integrations imply an infrared factor $\propto 1/\Delta^2_\chi; 1/\Delta^2_\pi ; 1/\Delta_\chi \Delta_\pi$, in which case the overall coupling dependence of these diagrams is $\propto \lambda^2$ and would be subdominant as compared to the two-loop diagrams of fig. (\ref{fig:twoloops}). The possible caveat in this argument is that the rules to obtain the leading contributions in poles in $\Delta$ given in ref.\cite{boywwds} do not directly apply to the diagrams above because if the bubble that renormalizes the propagator in the first diagram dresses a line in which the wavevector is within an infrared band $0 < q < \mu_{ir}\rightarrow 0$, then both lines in this bubble are within this band.
This situation is not contemplated in the rules provided in ref.\cite{boywwds}, which apply to the case in which only one of the lines in a loop integral carries momenta within an infrared band whereas the other line carries a finite value of the momentum (even if superhorizon) (see the arguments in ref. \cite{boywwds}). Thus, in the absence of a sound proof that the diagrams in fig. (\ref{fig:othertwoloops}) are subleading, the result for the damping rate $\Gamma(k;\eta)$ given by eqn. (\ref{gammas}) should be taken as indicative. Nevertheless, the analysis of symmetry breaking and the emerging conclusions on the mass generation of Goldstone bosons and the order of the transition are not affected by this possible caveat on the damping rate. Further study on the infrared aspects of diagrams in fig. (\ref{fig:othertwoloops}) is certainly worthwhile but beyond the scope of this article. \begin{figure}[ht!] \begin{center} \includegraphics[height=4in,width=4in,keepaspectratio=true]{otherdiags.eps} \caption{Other two-loop contributions to $\Sigma_{\pi,\chi}$. } \label{fig:othertwoloops} \end{center} \end{figure} \section{Conclusions:} Spontaneous symmetry breaking is an important ingredient in the inflationary paradigm. In this article we have studied spontaneous symmetry breaking (SSB) of a continuous symmetry in an $O(2)$ model of scalar fields minimally coupled to gravity in de Sitter space time, focusing in particular on understanding whether Goldstone's theorem implies massless Goldstone bosons and trying to shed light on conflicting previous results\cite{serreau,prokossb} which implemented a local mean field approximation. We first revisited the general results of Goldstone's theorem in Minkowski space time highlighting the fact that it is through \emph{time translational invariance that the conservation law implied by Noether's theorem guarantees massless Goldstone bosons}. We emphasized that in the absence of time translational invariance Goldstone's theorem is much less stringent and does not rule out radiatively generated masses for Goldstone modes. We followed with an analysis of the implementation of Goldstone's theorem at one loop level in Minkowski space-time by studying the self-energies of Goldstone and Higgs-like modes; we showed that at one loop level the masslessness of the Goldstone boson is a consequence of a precise cancellation between local tadpole and non-local (in space-time) contributions, and we analyzed in detail the implementation of Goldstone's theorem in the large N limit of an $O(N)$ scalar theory. These results paved the way towards a deeper understanding of Goldstone's theorem and its consequences in de Sitter cosmology. Our conclusions are summarized as follows: \begin{itemize} \item In the absence of a global time-like Killing vector, Goldstone's theorem \emph{does not} imply massless Goldstone bosons when a continuous symmetry is spontaneously broken. \item We implemented a non-perturbative Wigner-Weisskopf method that allows us to obtain the masses and decay widths of single particle states in a cosmological setting. Strong infrared behavior associated with light particles minimally coupled to gravity is treated in a self-consistent manner. \item Whereas in Minkowski space time at one loop level the masslessness of Goldstone modes in the broken symmetry phase is a consequence of a precise cancellation between tadpole and non-local (absorptive) contributions to the self energy, we find that in de Sitter space time no such cancellation is possible.
Goldstone modes acquire a self-consistent radiatively generated mass resulting from the build-up of infrared singularities in self-energies. We find that at weak coupling the mass of the Goldstone modes is $\mathcal{M}_\pi \propto \lambda^{1/4} H$, where $\lambda$ is the quartic coupling of the $O(2)$ theory. \item We find a \emph{first order phase transition} between the broken and unbroken symmetry phases as a function of the Hawking temperature of de Sitter space-time, $T_H=H/2\pi$. For the $O(2)$ model we find SSB for $T_H < T_c = {\lambda^{1/4}\,v }/{2.419\cdots} $, where $v$ is the tree level vacuum expectation value. For $T_H > T_c$ the symmetry is restored. The value of the order parameter at $T_H=T_c$ is $\sigma_{0c} \simeq 0.61\, {H}/{\lambda^{1/4}}$. The first order nature of the transition and the concomitant jump in the order parameter are a consequence of the strong infrared behavior of correlation functions: if the transition were second order both fields would be massless at $T_c$, leading to strong infrared singularities. Thus radiatively induced masses relieve the infrared singularities at the expense of a first order transition and a jump in the order parameter. These results are in qualitative agreement with those of ref.\cite{prokossb} and also confirm the validity of the local mean field approximation, since the non-local radiative corrections do not contribute to the masses of either Goldstone or Higgs-like modes but only to their decay widths. \item In the strict $N \rightarrow \infty$ limit of an $O(N)$ scalar theory there is no possibility of SSB, in agreement with the result of ref.\cite{serreau}, but SSB is possible for any finite $N$. This result reconciles the conflicting conclusions of refs.\cite{serreau,prokossb}. \item The lack of a global time-like Killing vector prevents the existence of kinematic thresholds; as a result, we find that Goldstone modes \emph{decay} into Goldstone and Higgs modes via the emission and absorption of superhorizon quanta. We have obtained the decay width of Goldstone modes in the superhorizon limit: the amplitude of single particle Goldstone modes is $|C^{\pi}_{1_{\vk}}| \simeq e^{-\gamma_{\pi}(-k\eta)}$, where $\gamma_{\pi}(-k\eta) \propto \lambda \, \big(H/k_{phys}(\eta)\big)^6 $. \end{itemize} \vspace{2mm} \textbf{Further Questions:} The discussion in section (\ref{sec:goldcosmo}) on the applicability and corollary of Goldstone's theorem in an expanding cosmology highlights the consequences of a covariant conservation law in a time dependent background geometry as contrasted with the strict conservation law in Minkowski space time; this discussion is general for any cosmological background. Our study focused on de Sitter space time wherein infrared divergences associated with minimally coupled massless particles lead to the self-consistent generation of masses for Goldstone bosons as described above. There remains the very important question of whether Goldstone bosons acquire a mass in other cosmologies, for example during the radiation dominated stage, where the arguments on the time dependence of the background are valid but there may not be infrared divergences that lead to a self-consistent generation of mass as in de Sitter space time. A deeper understanding of this case certainly merits further study as it may lead to novel and unexpected phenomena in cosmology; it is relegated to future work. \acknowledgments The author acknowledges support by the NSF through award PHY-0852497.
\section{Introduction} \label{section:intro} The large intelligent surface (LIS) has been identified as one of the key technologies for beyond-5G systems \cite{husha_data,husha_data2,husha_asign,husha_pos}. In Fig. \ref{fig:LIS_concept} we show the concept of a LIS serving multiple users simultaneously. The LIS is a continuous radiating surface located in the proximity of the users. Each part of the surface is capable of receiving and transmitting electromagnetic (EM) waves with a certain degree of control, so the EM waves can be focused in 3D space with high resolution, opening the door to a new world of possibilities for power-efficient communication. Apart from LIS, another type of intelligent surface has been studied in the literature, which can be classified within the smart radio environment paradigm \cite{direnzo}, by which the wireless channel can be controlled to facilitate the transmission of information, as opposed to traditional wireless communication systems, where the channel is imposed by nature, and transmitter and receiver adapt to changes in it. One example of this new trend is reconfigurable surfaces, known as \textit{intelligent reflecting surfaces}, \textit{programmable metasurfaces}, \textit{reconfigurable intelligent surfaces (RIS)}, and \textit{passive intelligent mirrors} among others \footnote[1]{We refer to \cite{basar} and \cite{huang_hol} for a complete list of surfaces.}, which consist of electronically passive surfaces with the capability to control how the waves are reflected when hitting their surface. Furthermore, the term LIS has also been recently used for such passive surfaces \cite{taha,han,huang}, with the subsequent risk of confusion. While RIS can be seen as part of the radio channel, LIS acts as an active base station/access point. LIS contains full transmitter and receiver chains, together with baseband processing capabilities to transmit and receive. A list of the main differences between RIS and LIS is shown in Section \ref{sub:RIS}. \begin{figure} \centering \includegraphics[width=\linewidth]{LIS_concept.eps} \caption{A LIS serving multiple users simultaneously.} \label{fig:LIS_concept} \vspace*{-4mm} \end{figure} Most of the research on LIS has focused on concept exploration \cite{husha_data,husha_data2,husha_asign,husha_pos}, system performance \cite{jung,juan_CAMSAP}, and channel modeling \cite{dardari,williams}. However, questions from an implementation point of view have not yet been answered. This paper aims to cover this area by identifying and addressing implementation challenges, and by providing design guidelines for an efficient implementation of LIS. The first step to make LIS implementable is to make it discrete (based on discrete antennas). It is known \cite{husha_data} that a continuous LIS can be replaced by a discrete one with no practical difference in achieved capacity. However, an efficient implementation of a discrete LIS is still very challenging, as it is expected to be made up of a very large number of antennas with the corresponding receiver (and transmitter) chains, which translates into a tremendous amount of inter-connection data rate that needs to be routed to the Central Digital Signal Processor (CDSP) through the backplane network. This centralized approach has already been employed in the LuMaMi Massive MIMO testbed \cite{LuMaMi}, with a need for $100$ bidirectional links, and a total aggregated interconnection bandwidth of 5GB/s. In the case of LIS this number is much higher.
To illustrate, let us assume a $1.2m\times1.2m$ array containing $1,024$ antennas in the 4 GHz band (assuming spacing of half wavelength), with the corresponding radio frequency (RF) and analog-to-digital converter (ADC) blocks. Then, if each ADC uses 12 bits per I and Q, that makes a total rate of $\sim 48$Tb/s \footnote{Assuming 5G-NR standard, and sampling rate of $480,000 \cdot 4,096 \sim 2$Gs/s.}. This is $3$ orders of magnitude higher than the massive MIMO counterpart \cite{LuMaMi}, where this issue has been previously addressed \cite{cavallaro,puglielli,jesus_journal_MaMi,muris}. Therefore there is a need to come up with specific architectures and algorithms in order to overcome this bottleneck (a quick numerical sanity check of these data-rate figures is sketched at the end of this section). We propose to tackle those challenges by algorithm and architecture co-design. At the algorithm level, we explore the unique features of LIS (e.g., very large aperture) to develop distributed algorithms that enable the processing to be performed locally, near the antennas. This will significantly relax the requirement for interconnection bandwidth. At the hardware architecture design level, we propose to panelize the LIS in order to facilitate processing distribution, scalability, manufacturing, and installation. A hierarchical interconnection topology is developed accordingly to provide efficient and flexible data processing, and data exchange between panels and CDSP. Based on the proposed algorithm-architecture, extensive analysis has been performed to enable trade-offs between system capacity, interconnection bandwidth, computational complexity, and processing latency. This will provide high-level design guidelines for the real implementation of LIS systems. The contributions of this work originate from our previous work in \cite{juan_VTC19,jesus_icc20,jesus_iter}, which is considerably extended in the present paper. This article is organized as follows: Section \ref{section:LIS} introduces the LIS concept; the system model is then presented in Section \ref{section:sys_model}. Our proposed algorithms are described in Section \ref{section:algorithms}, and the architecture description in Section \ref{section:arch}. Analysis and design trade-offs are presented in Section \ref{section:analysis}, and finally conclusions in Section \ref{section:conclusions}. Notation: In this paper, lowercase, bold lowercase, and bold uppercase letters stand for scalars, column vectors, and matrices, respectively. The operations $(.)^T$, $(.)^*$ and $(.)^H$ denote transpose, conjugate and conjugate transpose respectively. $\IK$ represents the identity matrix of size $K \times K$. Operator $\diag(.)$ returns a block diagonal matrix built with the list of matrices in the argument.
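As a quick sanity check of the interconnection figures quoted in this introduction (a minimal sketch; the antenna count, resolution, and 5G-NR sampling rate are exactly the values assumed in the text and its footnote, and the variable names are ours):
\begin{verbatim}
# Aggregated front-end data rate for the example 1,024-antenna LIS at 4 GHz
n_ant    = 1024                  # antennas (32 x 32 at half-wavelength spacing,
                                 # i.e., 3.75 cm -> array side of ~1.2 m)
bits_iq  = 2 * 12                # 12 bits per I and per Q sample
f_sample = 480_000 * 4_096       # ~2 Gsamples/s (5G-NR numerology, see footnote)

rate_bps = n_ant * bits_iq * f_sample
print(rate_bps / 1e12)           # ~48.3 Tb/s, i.e., the ~48 Tb/s quoted above
\end{verbatim}
This is the aggregate that would have to traverse a fully centralized backplane, which motivates the distributed processing developed in the remainder of the paper.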
\section{Large Intelligent Surfaces} \label{section:LIS} \begin{figure*}[ht] \captionsetup[subfigure]{justification=centering} \centering \subfloat[LIS panel with 64 dual-port antennas]{ \includegraphics[width=0.24\linewidth]{LIS_processing_dist_a.eps} \label{fig:LIS_processing_dist_a} } \subfloat[Internal LIS panel architecture]{ \includegraphics[width=0.24\linewidth]{LIS_processing_dist_b.eps} \label{fig:LIS_processing_dist_b} } \subfloat[Fully connected LIS using 16 panels]{ \includegraphics[width=0.23\linewidth]{LIS_processing_dist_c.eps} \label{fig:LIS_processing_dist_c} } \subfloat[Partially connected LIS using 6 panels]{ \includegraphics[width=0.25\linewidth]{LIS_processing_dist_d.eps} \label{fig:LIS_processing_dist_d} } \caption{LIS architecture components in the form of a) panel, b) each with internal analog and digital processing resources, synchronization, and digital back-haul. Identical panels can be combined in arbitrary configurations, e.g., fully or partially connected. Each panel contributes with its own processing resources, making the available resources for distributed processing fixed per area unit.} \label{fig:LIS_processing_dist} \vspace*{-5mm} \end{figure*} This section describes the key features of LIS by comparing it with massive MIMO and RIS. We also present the general concept of panelized LIS, which is proposed to ensure scalability and implementation feasibility. \subsection{Differences with Massive MIMO} \label{subsection:difference_mami} Multi-antenna technology has evolved in recent years in the form of Massive MIMO, where the number of antennas in the Base-Station (BS) grows up to $\sim 100$, bringing many benefits from communication and energy consumption points of view \cite{rusek}. LIS goes further by increasing the number of antennas by one or two orders of magnitude, together with the physical size of the array, which brings gains beyond what Massive MIMO can provide. This results in fundamental differences between these two technologies, which are listed as follows: \begin{itemize} \item The LIS aperture is larger in comparison to Massive MIMO, which translates into higher directivity and spatial multiplexing capabilities. \item Users are close to the LIS in relation to its size, which places them in the near field, as opposed to Massive MIMO (and other cellular access technologies) where users are in the (Fraunhofer) far field region. Being in the near field requires the use of channel models based on spherical waveforms, rather than the planar wave approximation, which is widely used in Massive MIMO (and other cellular technologies). \item Due to the lower path loss (a consequence of the close proximity between users and LIS) and the large antenna gain, transmit power is expected to be relatively small for both sides of the communication, opening the door for extensive use of low-cost and low-power analog components. \item The received power distribution from users is not uniform throughout the surface as illustrated in Fig. \ref{fig:LIS_concept}. The same user is received with different signal intensity from different parts of the LIS. This can be exploited by the use of localized digital signal processing, leading to a more efficient use of computational resources and inter-connection bandwidth, without significantly sacrificing the system performance. This is in contrast with Massive MIMO (and other cellular technologies), where users are seen with the same power across the antenna array (which is in fact connected to the planar wave approximation).
\end{itemize} \subsection{Differences with RIS} \label{sub:RIS} As commented in the Introduction, LIS and RIS are fundamentally different technologies. In this section we summarize the main differences between these two: \begin{itemize} \item RIS acts as a programmable reflector between the radio access point and the users, forming part of the channel. Typically it is configured to improve a certain quality metric, such as capacity. LIS acts as a radio access point capable of communicating directly with users. \item LIS contains full receivers (in contrast to most RIS) and baseband processing capabilities to obtain CSI from pilots transmitted by users. This allows an accurate calculation of the corresponding equalization matrix, and further detection within LIS. \end{itemize} \subsection{Panelized Implementation of LIS} Given that LIS is large in physical size and there is a need for distributed processing close to the antennas, we propose to divide the LIS into square units, or panels. Panelization allows the LIS to adapt to a wide range of scenarios by adding, moving, or removing panels as desired, consequently varying the size and form of the LIS. Different shapes can be achieved by placing the panels in different ways: square, rectangular or distributed (panels not physically together, but covering a certain area). It also simplifies the system design, verification, and fabrication by focusing only on the panel as the building block, instead of covering all possible LIS sizes and forms. Additionally, the installation also becomes simpler as the panel weighs less, making it easy to lift and mount. A high level overview of the LIS architecture components, processing distribution, and interconnection is shown in Fig. \ref{fig:LIS_processing_dist}. Panels are composed of a group of antennas forming a square array as shown in Fig. \ref{fig:LIS_processing_dist_a}. Each panel contains internal processing resources in the analog and digital domains, and inter-connection capabilities to connect the panel to other panels (Fig. \ref{fig:LIS_processing_dist_b}). As said before, panels provide freedom to assemble the LIS. As an example, Fig. \ref{fig:LIS_processing_dist_c} shows 16 panels fully connected, forming a 1024-antenna LIS, while in Fig. \ref{fig:LIS_processing_dist_d}, 6 physically distant panels are connected in a distributed fashion (e.g., covering a certain volume in space, such as an office or a theater). \section{System Model} \label{section:sys_model} \begin{figure}[ht] \footnotesize \centering \psfrag{xh}{$\s$} \psfrag{N}{$N$} \psfrag{K}{$K$} \psfrag{P}{$P$} \psfrag{MP}{$M_{p}$} \psfrag{NP}{$N_{p}$} \psfrag{z}{$\z$} \psfrag{y}{$\y$} \psfrag{x}{$\x$} \psfrag{W1}{$\Wbf_{\mathrm{P},1}$} \psfrag{W2}{$\Wbf_{\mathrm{P},2}$} \psfrag{WP}{$\Wbf_{\mathrm{P},P}$} \psfrag{WB}{$\Wbf_{\mathrm{B}}$} \psfrag{H1}{$\Hbf_{1}$} \psfrag{H2}{$\Hbf_{2}$} \psfrag{HP}{$\Hbf_{P}$} \psfrag{Z1}{$\Z_{1}$} \psfrag{Z2}{$\Z_{2}$} \psfrag{ZP1}{$\Z_{P-1}$} \psfrag{ZP}{$\Z_{P}$} \psfrag{TP1}{$\text{To Panel 1 (optional)}$} \psfrag{FE}{$\text{Front-End}$} \psfrag{BP}{$\text{Backplane}$} \psfrag{local}{$\text{local}$} \includegraphics[width=0.95\linewidth]{LIS_system_model.eps} \caption{$K$ users transmitting to an $M$-element discrete LIS formed by $P$ panels.} \label{fig:system_model} \end{figure} A conceptual view of a discrete LIS system is presented in Fig. \ref{fig:system_model}. We consider $K$ users transmitting to the LIS, which is divided into three parts: \textit{front-end}, \textit{backplane}, and CDSP.
We will use the term \textit{front-end} to refer to the per-antenna processing which is performed locally at each panel, and \textit{backplane} to the related processing involving data aggregation, distribution, and processing for further dimensionality reduction. The backplane can be made of multiple levels and processing nodes, as we will present in Section \ref{section:arch}. The processing unit in the front-end is the Local DSP (LDSP), while the one in the backplane is the Backplane DSP (BDSP). The data is finally collected by the CDSP for detection. In the present section we also introduce a mathematical model for the communication and the LIS-baseband processing. We consider the transmission from $K$ single antenna users to the LIS containing $M$ active antenna elements (input dimensionality). The LIS is divided into $P$ square panels, each with $M_{\text{p}}$ elements, such that $M_{\text{p}} \cdot P = M$. Each panel has an output with $N_p$ dimensions, and the total number of them is $N$, such that $N = N_{\text{p}} \cdot P$. Panels are connected to the backplane, which collects their output data, processes it, and provides the CDSP with $K$ values to ensure proper detection. The data dimensionality is reduced from the antenna elements interface (vector $\y \in \mathbb{C}^{M}$ in the figure) to the backplane input ($\z \in \mathbb{C}^{N}$) due to the front-end, and from this to the CDSP interface ($\s \in \mathbb{C}^{K}$) due to backplane processing. We assume $M \gg K$ for the rest of the article. The $M\times 1$ received vector at the LIS is given by \begin{equation} \mathbf{y} = \sqrt{\rho}\Hbf\mathbf{x}+\mathbf{n}, \label{eq:received_signal} \end{equation} where $\x$ is the transmitted $K\times 1$ user data vector, and $\E\{\x \x^{H}\}=\IK$. $\Hbf$ is the channel matrix, and $\mathbf{n} \sim \mathcal{CN}(0,\I)$ is an $M \times 1$ noise vector, which we assume to have identity covariance for simplicity without loss of generality. This convention leaves $\rho$ as the ``transmit'' SNR and therefore it is dimensionless. Assume the location of user $k$ is $(x_{k},y_{k},z_{k})$, with the LIS located at $z=0$. The channel between this user and a LIS antenna at location $(x,y,0)$ is given by the complex value \cite{husha_data} \begin{equation} h_{k}(x,y)=\frac{\sqrt{z_{k}}}{2\sqrt{\pi} d_{k}^{3/2}}\exp{\left( -\frac{2\pi j d_{k}}{\lambda} \right)}, \label{eq:channel} \end{equation} where $d_{k}=\sqrt{z_{k}^{2}+(x_{k}-x)^2+(y_{k}-y)^2}$ is the distance between the user and the antenna, and Line of Sight (LoS) propagation between them is assumed. $\lambda$ is the wavelength. The channel matrix can be expressed as \begin{equation} \Hbf=[\Hbf_{1}^{T},\Hbf_{2}^{T},\cdots \Hbf_{P}^{T}]^{T}, \label{eq:H_structure} \end{equation} where $\Hbf_{i}$ is the $\Mp \times K$ channel matrix of the $i$-th panel. We assume each panel has perfect knowledge of its local channel. \subsection{Dimensionality reduction: A lossless or lossy process} \label{sub:lossles_vs_lossy} As commented previously, our LIS architecture can be seen as a system to reduce the dimensionality of the very large incoming signal ($M \times 1$) down to a value required for detection at the CDSP ($K \times 1$). We can classify this process, according to the criterion of preserving information, as lossless or lossy.
A lossless process maintains the mutual information between the CDSP input and the users' data, formally \begin{equation} I(\s;\x) = I(\y;\x), \nonumber \end{equation} so the system can achieve channel capacity performance if optimal processing is done in the CDSP. Initial progress on the trade-offs of distributed processing for MIMO systems in the lossless approach can be seen in \cite{juan_icc20}, and more recently in \cite{juan_journal}. In this regime $\Np \geq \min\{\Mp,K\}$. Despite the attractiveness of achieving optimal performance, the lossless approach presents a high cost from an implementation point of view, as it requires larger panel output dimensionality, which translates into higher interconnection bandwidth throughout the backplane. In this article we look for a good compromise between implementation cost and performance, which leads us to explore the case $\Np \leq \Mp$ \footnote{We note that this is equivalent to: $N \leq M$.}, and especially $\Np \ll \Mp$. By selecting this regime we expect to significantly reduce the interconnection bandwidth at the cost of a loss in performance, which can be expressed formally as \begin{equation} I(\s;\x) \leq I(\y;\x). \nonumber \end{equation} Our approach is to include enough flexibility in the system to obtain enough working points to establish a rich trade-off between implementation cost and performance, which, in fact, allows the system to adapt to a large variety of scenarios during the deployment phase. As we will see in Section \ref{section:analysis}, it is possible to achieve performance close to channel capacity with a significant reduction in implementation cost. \subsubsection{Filtering} \label{sub:filtering} In order to achieve dimensionality reduction, we employ linear filtering on the incoming data, while to achieve enough flexibility we consider separate filters for front-end and backplane. Let us consider the panelized architecture shown in Fig. \ref{fig:system_model}, where each panel performs local per-antenna processing on the received signal and delivers the result to the backplane. There is no cooperation among panels during front-end filtering; therefore, the filter matrix $\Wp$ has the following structure \begin{equation} \Wp = \diag(\Wpi{1},\Wpi{2},\cdots,\Wpi{P}) \label{eq:W_structure} \end{equation} where $\Wpi{i}$ is the $\Mp \times \Np$ filter matrix of the $i$-th panel. Then the front-end output is given by \begin{equation} \z= \Wph \y = \sqrt{\rho}\Wph\Hbf \mathbf{x} + \hatn, \label{eq:filtering_fe} \end{equation} where $\hatn=\Wph\mathbf{n}$ is the filtered noise. Note that the size of $\z$ is $N$, and $N \leq M$ according to the reasoning in this section. Finally, the backplane filters $\z$ in order to obtain $\s$ as \begin{equation} \s= \Wbf_{\mathrm{B}}^{H} \z, \label{eq:filtering_bp} \end{equation} which is used by the CDSP for detection. \subsection{Sum-Rate Capacity} \label{sub:sum-rate} The mutual information between $\z$ and $\x$ is $I(\x;\z)=H(\z)-H(\z|\x)$. Assuming white Gaussian signaling transmitted by users, the mutual information for a given $\Hbf$ and $\Wp$ can be further expanded as \begin{equation} \begin{split} I(\x;\z) &= \log_{2}|\Sbf_{\z\z}| - \log_{2}|\Sbf_{\hatn\hatn}|\\ &= \log_{2} |\rho\Wph\Hbf\Hbfh\Wp+\Wph\Wp|\\ &-\log_{2} |\Wph\Wp|, \end{split} \label{eq:MI} \end{equation} where $\Sbf_{\z\z}$ and $\Sbf_{\hatn\hatn}$ are the covariance matrices of the multivariate complex Gaussian vectors $\z$ and $\hatn$, respectively.
If $\Wp$ is a full-rank matrix, and taking into account that $M \geq N$, then $(\Wph \Wp)^{-1}$ exists and we can rewrite \eqref{eq:MI} as \begin{equation} \begin{split} I(\x;\z) &= \log_{2} |\IK + \rho\Hbfh\Wp(\Wph\Wp)^{-1}\Wph\Hbf|.\\ \end{split} \label{eq:MI2} \end{equation} We are interested in maximizing the sum-rate capacity for this front-end architecture, which is the maximum of \eqref{eq:MI2} over all possible $\Wp$ for a given $\Hbf$. If we take into account the block structure of $\Hbf$ and $\Wp$ presented in \eqref{eq:H_structure} and \eqref{eq:W_structure} respectively, the sum-rate capacity at the $\z$ interface is given by \begin{equation} \begin{split} C_{\z} &= \max_{\{\Wpi{i}\}} \log_{2} | \IK + \rho \sum_{i=1}^{P} \Hbf_{i}^{H} \Wpi{i} (\Wphi{i} \Wpi{i})^{-1} \Wphi{i} \Hbf_{i} |\\ &= \max_{\{\Q_{i}: \Q_{i}^{H}\Q_{i}=\I_{\Np}\}} \log_{2} | \IK + \rho \sum_{i=1}^{P} \Hbf_{i}^{H} \Q_{i} \Q^{H}_{i} \Hbf_{i} |,\\ \end{split} \label{eq:Cz_multi} \end{equation} where $\Q_{i}$ is a $\Mp \times \Np$ semi-unitary matrix, consisting of the $\Np$ first left singular vectors of $\Wpi{i}$. For the last expression in \eqref{eq:Cz_multi}, it is assumed that all matrices $\Wphi{i}\Wpi{i}$ are full-rank, so the inverse exists. As we will show in the next section, the selection of $\{\Wpi{i}\}$ is done in such a way that each element is semi-unitary, which leads to white noise at the front-end output. Therefore, once the front-end filters are selected, they can be seen as part of the channel by the backplane, and we can apply the same reasoning to obtain $\Wbf_{\mathrm{B}}$, leading to \footnote{A detailed explanation of this process can be found in Section \ref{section:arch}.} \begin{equation} C_{\s} = \max_{\Wb} \log_{2} | \IK + \rho \widetilde{\Hbf}^{H} \Wb (\Wb^{H} \Wb)^{-1} \Wb^{H} \widetilde{\Hbf} |, \label{eq:Cs} \end{equation} where $\widetilde{\Hbf}=\Wph \Hbf$ is the equivalent channel. \section{Distributed algorithms for dimensionality reduction} \label{section:algorithms} In this section we introduce two algorithms to obtain the filtering matrices $\{\Wpi{i}\}$ and $\Wb$, which are executed in the LDSP and BDSP, respectively. For simplicity, the algorithms are explained here with reference to the panels, but they can be extended to the backplane by using the equivalent channel matrix $\widetilde{\Hbf}$ presented before, and by setting $P$ equal to the number of processing nodes in the backplane. More details about the backplane case can be found in Section \ref{section:arch}. The first of the algorithms is a straightforward approach with relatively low computational complexity, based on the well-known Matched Filter (MF) method, which we conveniently select as a comparison baseline for our proposed algorithm. \subsection{Reduced Matched Filter (RMF)} RMF consists of a reduced version of the well-known MF method. In this case, the filter $\Wbf_{i}$ is built from the $N_{\text{p}}$ \textit{strongest} columns of $\Hbf_{i}$. The \textit{strength} of a column $\h_{n}$ is defined as $\|\h_{n}\|^{2}$.
The $\Mp \times \Np$ filtering matrix of the $i$-th panel is then expressed as \begin{equation} \Wbf_{\text{RMF},i} = \left[ \h_{k_1}, \h_{k_2}, ..., \h_{k_{Np}} \right], \label{eq:W_RMF} \end{equation} where $\h_{n}$ is the $\Mp \times 1$ channel vector for the $n$-th user, and $\{k_{i}\}$ the set of indexes relative to the $N_{\text{p}}$ strongest users \footnote{This is connected to the non-uniform user power distribution in the LIS, described in Section \ref{subsection:difference_mami}, which translates to the fact that a panel may not see all users with same power, which depends on their physical proximity.}. When RMF is applied at the panel level as local filtering, each output is associated to a certain user. Therefore, nodes in the backplane can combine data coming from the same user, in a similar fashion as in distributed MF \cite{jeon_li2}. The result of the filtering is available at CDSP input for final detection (hard or soft). It is important to notice that in this method front-end processing nodes can work independently, without sharing channel related information. This saving in interconnection bandwidth comes with a performance loss as we will see in Section \ref{section:analysis}. \subsection{Iterative Interference Cancellation (IIC)} The IIC algorithm aims to solve the optimization problem described in \eqref{eq:Cz_multi}. It is an iterative algorithm based on a variant of the known multiuser water-filling method \cite{Yu}. The pseudocode is shown in Algorithm \ref{algo:MUWF}. \IncMargin{1em} \begin{algorithm}[ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwInOut{Preprocessing}{Preprocessing} \SetKwInOut{Init}{Init} \Input{$\{\Hbf_{i}\}, i=1 \cdots P$} \Preprocessing{ $\Q_{i} = \mathbf{0}, i=1 \cdots P$} \Repeat {sum-rate converges} { \For{$i = 1,2,...,P$}{ $\Z_{i} = \IK + \rho \sum_{j=1,j\neq i}^{P} \Hbf_{j}^{H} \Q_{j} \Q^{H}_{j} \Hbf_{j}$\\ $\mathbf{\Q}_{i} = \argmax_{\overline{\Q}_i} |\rho \Hbf_{i}^{H} \overline{\Q}_i \overline{\Q}_i^{H} \Hbf_{i} + \Z_{i}|$\\ $\text{subject to } \overline{\Q}_i^H \overline{\Q}_i = \I_{\Np}$ } } \Output{$\{\Q_{i}\}, i=1 \cdots P$} \caption{IIC algorithm pseudocode} \label{algo:MUWF} \end{algorithm}\DecMargin{1em} The algorithm splits the joint optimization problem \eqref{eq:Cz_multi} into $P$ small ones, which are solved in a sequential basis. The goal of the algorithm is to calculate the $\Mp \times \Np$ matrices $\{\Q_{i}\}$. The product $\Q_{i}\Q^{H}_{i}$ is low-rank as $\Np \leq \Mp$, which exploits the fact that only a few users are conveniently seen by each panel (ideally this number is $\Np$). The fundamental difference between our current algorithm and \cite{Yu} is due to the low-rank constraint present in our proposed algorithm. At each iteration of the algorithm, $K \times K$ matrix $\Z_i$ is obtained as intermediate result, which contains contribution from the rest of panels, and plays the role of noise covariance in the sum-rate optimization problem formulated in line 4. The algorithm iterates over all panel indexes, as many times as needed until a certain convergence criteria is achieved. \subsection{Processing Distribution} \label{sub:dist} It is natural to map each iteration of the IIC algorithm to each panel, as it requires local CSI, while $\Z_{i}$ can be computed also locally as an update of $\Z_{i-1}$. Therefore, each panel computes and shares $\Z_{i}$ with the neighbor panel, $i+1$, while $\Q_{i}$ is stored locally for further filter calculation, and not shared. 
We propose that panels are connected by fast, local, and dedicated connections for the exchange of data related to the matrix $\Z$. In general, we can say that ``the matrix $\Z$ is passed from panel to panel'' using the dedicated connections depicted in Fig. \ref{fig:system_model}. This decentralized approach is described in Algorithm \ref{algo:IIC_i} for a given panel $i$\footnote{For simplicity and to limit latency, we consider only one iteration over the set of panels throughout the rest of this article. We are aware that increasing the number of iterations improves the performance.}. \IncMargin{1em} \begin{algorithm}[ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwInOut{Preprocessing}{Preprocessing} \SetKwInOut{Init}{Init} \Preprocessing{ $\mathbf{Z}_{0} = \IK$} \Input{$\{\Hbf_{i}, \Z_{i-1}\}$} $\Q_{i} = \argmax_{\overline{\Q}_i} |\rho \Hbf_{i}^{H} \overline{\Q}_{i} \overline{\Q}_{i}^{H} \Hbf_{i} + \mathbf{Z}_{i-1}|$\\ $\text{subject to } \overline{\Q}_i^H \overline{\Q}_i = \I_{\Np}$\\ $\mathbf{Z}_{i} = \mathbf{Z}_{i-1} + \rho\Hbf_{i}^{H} \Q_{i} \Q^{H}_{i} \Hbf_{i}$ \caption{Decentralized IIC algorithm at the $i$-th panel} \label{algo:IIC_i} \Output{$\{\Q_{i}, \mathbf{Z}_{i}\}$} \end{algorithm}\DecMargin{1em} The solution to the local optimization problem at the $i$-th panel is $\Q_{i}=[\hatu_{1},\hatu_{2}, \cdots, \hatu_{\Np}]$, where $\hatu_{n}$ is the left-singular vector of $\hat{\Hbf}_{i}=\Hbf_{i} \U_{z} \Sbf_{z}^{-1/2}$ corresponding to the $n$-th largest singular value, and $\mathbf{Z}_{i-1} = \U_{z} \Sbf_{z} \U_{z}^{H}$ is the eigen-decomposition of $\Z_{i-1}$ (see Appendix-\ref{proof:sol_IIC_i} for the proof). The pseudocode for the processing at the $i$-th panel is shown in Algorithm \ref{algo:IIC}, \IncMargin{1em} \begin{algorithm}[ht] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwInOut{Preprocessing}{Preprocessing} \SetKwInOut{Init}{Init} \Input{$\{\Hbf_{i}, \Z_{i-1}\}$} $[\U_z,\Sbf_z] = \text{svd} (\Z_{i-1})$\\ $\widetilde{\Hbf}_{i}=\Hbf_{i} \U_{z} \Sbf_{z}^{-1/2}$\\ $\widetilde{\U} = \text{svd} (\widetilde{\Hbf}_{i})$\\ $\Q_{i} = \widetilde{\U}(:,1:\Np)$\\ $\Z_i = \Z_{i-1} + \rho\Hbfh_i \Q_i \Q^{H}_i \Hbf_{i}$ \caption{Decentralized IIC algorithm processing steps for the $i$-th panel} \label{algo:IIC} \Output{$\{\Q_i, \Z_i\}$} \end{algorithm}\DecMargin{1em} where $\widetilde{\U}$ is the left unitary matrix of $\widetilde{\Hbf}_{i}$, and $\Q_i$ is formed by the left singular vectors associated with the $\Np$ strongest singular values. \subsection{Selection of $\Wbf$ in IIC Algorithm} \label{sub:selection_W} In the single panel case (centralized LIS), the optimal selection of $\Q$ leads to $\Q = \Htilde^{H}$, where $\Htilde$ is an $M \times N$ semi-unitary matrix formed by the first $N$ left singular vectors of $\Hbf$. The capacity is then determined by the $N$ largest singular values of $\Hbf$. Once $\Q$ is known, in order to select $\Wbf$, we notice that $\Wbf = \Q \widetilde{\Sbf}_{W} \V_{W}^{H}$, where $\widetilde{\Sbf}_{W}$ is a diagonal $N \times N$ matrix containing the $N$ largest singular values of $\Wbf$. The selection of $\widetilde{\Sbf}_{W}$ and $\V_{W}$ does not play any role in the sum-rate capacity, but the right choice can provide some benefits in other areas. In this work we choose $\widetilde{\Sbf}_{W} = \I_{N}$ to make $\Wbf$ a semi-unitary matrix, which brings a benefit in terms of reduced interconnection bandwidth, as will be explained in the next section. The selection of $\V_{W}$ can be arbitrary, and for simplicity we choose $\V_{W}=\I_{N}$.
Other unitary choices are also valid and could offer some advantages, but we do not cover them in the present work. In the multiple panel case, \eqref{eq:Cz_multi} represents a joint optimization problem over the matrices in the set $\{\Q_{i}\}$. Similarly to the single panel case, $\Wbf_{i} = \Q_{i} \widetilde{\Sbf}_{W,i} \V_{W,i}^{H}$. Therefore, once $\Q_{i}$ is obtained, the selection of $\widetilde{\Sbf}_{W,i}$ and $\V_{W,i}^{H}$ follows identical considerations, that is: $\widetilde{\Sbf}_{W,i} = \I_{\Np}$, and $\V_{W,i}^{H} = \I_{\Np}$.

\section{Interconnection Topology and DSP architecture} \label{section:arch} In this section we describe the proposed LIS architecture, including the interconnection topology and the LDSP internal architecture able to support both the RMF and IIC algorithms. \subsection{Tree-based Global Interconnection and Processing} \label{sub:backplane} \begin{figure}[ht] \footnotesize \centering \psfrag{W1}{$\Wpi{1}$} \psfrag{W4}{$\Wpi{4}$} \psfrag{W61}{$\Wpi{61}$} \psfrag{W64}{$\Wpi{64}$} \psfrag{W1_1}{$\Wbij{1}{1}$} \psfrag{W4_1}{$\Wbij{4}{1}$} \psfrag{W13_1}{$\Wbij{13}{1}$} \psfrag{W16_1}{$\Wbij{16}{1}$} \psfrag{W1_2}{$\Wbij{1}{2}$} \psfrag{W4_2}{$\Wbij{4}{2}$} \psfrag{W1_3}{$\Wbij{1}{3}$} \psfrag{MP}{\tiny $\Mp$} \psfrag{NP}{$\Np$} \psfrag{NB1}{$\Nb^{(1)}$} \psfrag{4NB1}{$4\Nb^{(1)}$} \psfrag{NB2}{$\Nb^{(2)}$} \psfrag{NB3}{$\Nb^{(3)}$} \psfrag{CDSP}{\tiny $\text{CDSP}$} \psfrag{BDSP}{$\text{BDSP}$} \psfrag{Wh}{$\Wbfh$} \psfrag{WhH}{\color{blue} $\Wbfh\Hbf$} \psfrag{Why}{\color{red} $\Wbfh\y$} \psfrag{H}{\color{blue} $\Hbf$} \psfrag{W}{\color{blue} $\Wbf$} \psfrag{IIC}{$\mathrm{IIC}$} \psfrag{RMF}{$\mathrm{RMF}$} \psfrag{y}{\color{red} $\y$} \psfrag{BP}{$\text{Backplane}$} \psfrag{FE}{$\text{Front-End}$} \includegraphics[width=\linewidth]{LIS_model_backplane.eps} \caption{Front-end and backplane tree topology and interconnection for a 64-panel LIS. Each panel contains an LDSP for distributed MIMO processing. Additionally, each node in the tree contains a BDSP unit, which aggregates data from 4 nodes, processes it, and delivers the result to the next node after the corresponding dimensionality reduction, that is: $\Nb^{(i+1)} \leq 4 \Nb^{(i)}, i=1,2$, and $\Nb^{(1)} \leq 4 \Np$.} \label{fig:model_backplane} \vspace*{-5mm} \end{figure} In order to further increase the dimensionality reduction of the incoming data, while performing spatially local processing, we propose a hierarchical interconnection based on a tree topology. The tree represents a distributed backplane, where the front-end processing nodes are the leaves, and their outputs are combined in backplane nodes through multiple levels, reducing the total interconnection bandwidth each time, until the resulting data is delivered to the CDSP. This process is shown in Fig. \ref{fig:model_backplane}. The main idea is to enable system scalability by adding levels in the tree as the LIS grows (more panels), while keeping the CDSP resource demand constant (dependent only on $K$) regardless of the LIS size. Another benefit of the tree topology is its low latency (the latency grows logarithmically with the number of panels). As shown in the figure, the LIS backplane constitutes a 4-ary tree, which acts as an adaptation between the panels and the CDSP, introducing an extra dimensionality reduction of the incoming signal down to a level which can be efficiently transferred to and handled by the CDSP, but high enough to allow good detection performance.
Each node in the backplane contributes to $\Wb$: it aggregates data from 4 nodes, processes it, and delivers the output to the next node. The dimensionality of the output is lower than or equal to that of the input, that is: $\Nb^{(i+1)} \leq 4 \Nb^{(i)}, i=1,2$, and $\Nb^{(1)} \leq 4 \Np$. This reduction accumulates over the consecutive levels of nodes the signal goes through. Let us assume that the panels, during the formulation phase and after they obtain their local filtering matrix $\Wpi{i}$ (according to the selected algorithm), deliver the product $\Wpi{i}^{H} \Hbf_{i}$ ($\Np \times K$) to the corresponding node in the backplane. This can be seen as the result of filtering the incoming pilot signals, which requires the same amount of data as the filtering phase does. This product is the equivalent channel between the panel output and the users. A node aggregating the outputs from 4 panels ($4 N_{p}$) can treat those incoming values as an equivalent channel comprising the wireless channel and the 4 panels combined. The dimensionality of this equivalent channel is $4 N_{p}$, which is lower than the $4 M_{p}$ at the antenna level, but we expect it to carry most of the captured channel capacity. If we take into account the selection of $\Wbf_{i}$ in the panels as semi-unitary matrices according to Subsection \ref{sub:selection_W}, then the noise is also white at the panel output. Moreover, the filtered noise from 4 adjacent panels is jointly white due to the independence of the noise at different antennas/panels. Therefore, at any node in the backplane connected to the panels we have the same model as in \eqref{eq:received_signal}, with the equivalent channel instead of $\Hbf_{i}$ and the filtered noise instead of $\mathbf{n}$, but with the same covariance (identity matrix)\footnote{If semi-unitary matrices are not used, the noise becomes colored and its covariance needs to be taken into account for the sum-rate capacity optimization; this noise covariance matrix would then also need to be transferred between nodes in the tree. Selecting semi-unitary matrices for the filters avoids this requirement.}. See Appendix-\ref{proof:white_noise} for proof. This means that \eqref{eq:filtering_fe} and \eqref{eq:filtering_bp}, as well as the sum-rate capacity derivation, remain valid in this case with the equivalent channel, and the filter $\Wbf^{(1)}_{i}$ can be found by solving the optimization problem \eqref{eq:Cz_multi}. To obtain the filtering matrices, we follow the same procedure described in Section \ref{sub:sum-rate}, with the same considerations as in Section \ref{sub:selection_W} for the selection of $\Wbf_{i}$. Because we face the same dimensionality reduction problem as in the front-end, both algorithms described in Section \ref{section:algorithms} can also be used in this case. This process can be repeated recursively for all levels of the tree up to the CDSP, which receives the total equivalent $K \times K$ channel matrix between the CDSP input interface and the users. This matrix is used by the CDSP for detection. The general formulation algorithm to be executed at a certain LDSP or BDSP follows the steps shown in Algorithm \ref{algo:gen_form_alg}, where $\Heq$ is the equivalent channel matrix from the current node input interface to the users\footnote{Our experimental results show no performance improvement from sharing $\Z$ among backplane nodes. For this reason we omit it in Figure \ref{fig:model_backplane}.}.
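A minimal sketch of this recursive aggregation is given below, assuming a full 4-ary tree, white noise at every node output (which the semi-unitary filters guarantee), and the single-node IIC solution at each backplane node (no $\Z$ exchange); the RMF variant would simply replace the SVD-based selection with strongest-column selection. The function names are ours.
\begin{verbatim}
import numpy as np

def node_reduce(H_eq_children, N_out):
    """One backplane node: stack the children's equivalent channels, pick a
    semi-unitary filter, and output the reduced equivalent channel."""
    H_stack = np.vstack(H_eq_children)            # (sum_j N_in_j, K)
    # Single-node IIC solution with white input noise: span of the N_out
    # strongest left singular vectors of the aggregated equivalent channel.
    U, _, _ = np.linalg.svd(H_stack)
    W = U[:, :N_out]                              # semi-unitary, keeps noise white
    return W, W.conj().T @ H_stack                # (N_out, K) equivalent channel

def formulate_tree(H_eq_leaves, N_out_per_level, fanout=4):
    """Reduce dimensionality level by level up to the CDSP.

    H_eq_leaves : panel outputs W_i^H H_i, each (Np, K); their number is
    assumed to be fanout**len(N_out_per_level).
    """
    level = H_eq_leaves
    for N_out in N_out_per_level:
        level = [node_reduce(level[j:j + fanout], N_out)[1]
                 for j in range(0, len(level), fanout)]
    return level[0]                               # equivalent channel at the CDSP
\end{verbatim}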
\begin{algorithm} \DontPrintSemicolon \SetAlgoLined \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{$\{\Heq,\Z\}$} \BlankLine \eIf{algorithm == IIC}{ $\Wbf = \text{IIC}(\Heq, \Z)$\; }{ $\Wbf = \text{RMF}(\Heq)$\; } \Output{$\{\Wbfh\Heq,\Z\}$} \caption{General formulation algorithm for tree-based LIS} \label{algo:gen_form_alg} \end{algorithm} \vspace*{-5mm} \subsection{DSP in panel and backplane nodes} \begin{figure*}[ht] \centering \psfrag{Mp}{$M_{\mathrm{p}}$} \psfrag{Np}{$N_{\mathrm{p}}$} \psfrag{ADC}{$\tiny\text{ADC}$} \psfrag{RF}{$\tiny\text{RF}$} \psfrag{ctrl}{$\text{ctrl}$} \psfrag{CDSP}{$\small\text{CDSP}$} \psfrag{LDSP}{$\text{LDSP}$} \psfrag{RCDSP}{$R_\text{CDSP}$} \psfrag{SPU}{$\text{SPU}$} \psfrag{Wh}{$\Wbfh_{i}$} \psfrag{WhH}{\color{blue} $\Wbfh_{i}\Hbf_{i}$} \psfrag{Why}{\color{red} $\Wbfh_{i}\y_{\mathrm{p}}$} \psfrag{H}{\color{blue} $\Hbf_{i}$} \psfrag{W}{\color{blue} $\Wbf_{i}$} \psfrag{FU}{$\text{FU}$} \psfrag{ALG}{$\tiny \text{(IIC/RMF)}$} \psfrag{MEM}{$\tiny \text{MEM}$} \psfrag{CE}{$\text{CE}$} \psfrag{FFT}{$\text{FFT}$} \psfrag{y}{\color{red} $\y_{\mathrm{p}}$} \psfrag{SPU}{$\text{SPU}$} \psfrag{PT}{$\text{Processing Tree}$} \psfrag{PP}{$P$} \psfrag{ppm1}{$\text{panel p-1}$} \psfrag{pp}{$\text{panel p}$} \psfrag{ppp1}{$\text{panel p+1}$} \psfrag{Zp}{$Z_\text{p}$} \psfrag{Zpm1}{$Z_\text{p-1}$} \includegraphics[width=0.9\linewidth]{LIS_panel.eps} \caption{Overview of the Local DSP and Spatial Processing Unit (SPU) in a panel. Panel-panel and panel-backplane connections are also shown. Blue lines are used only in the formulation phase. Blue letters relate to data which is generated/transferred during formulation. Red ones refer to the filtering phase. Green lines are used in both phases; in those cases, blue and red data structures are shown above and below the line. ctrl represents the control line to switch between the formulation and filtering phases.} \label{fig:panel} \vspace*{-3mm} \end{figure*} The internal architecture of the panel together with the LDSP is depicted in Fig. \ref{fig:panel}. The LDSP comprises all digital signal processing involved in the uplink tasks. After the RF and ADC stages, the digitized incoming signal is processed by FFT blocks to perform the time-to-frequency domain transformation. During the formulation phase, the Channel Estimation (CE) block estimates a new $\Hbf_i$ for each channel coherence interval. In this paper we assume perfect channel estimation. The Spatial Processing Unit (SPU), and specifically the Formulation Unit (FU) block, receives $\Hbf_i$ and computes the filtering matrix $\Wpi{i}$ (in the figure we drop the subscript P for convenience). The FU performs a complex conjugate transpose in the case of RMF, and follows the steps in Algorithm \ref{algo:IIC} in the case of IIC. $\Wpi{i}$ is then written to the memory. During the filtering phase, the incoming data vector is multiplied by $\Wpi{i}$, and its dimensionality is reduced from $\Mp \times 1$ to $\Np \times 1$ ($\Np \ll \Mp$); the result is sent to the backplane for further processing. The SPU is shown in the figure as part of the LDSP, but it is also present in the BDSP architecture. The SPU is in charge of data collection, filtering, and distribution. It also computes the filtering matrix and stores it. In the case of the BDSP architecture, the SPU is its main processing element, as FFT and channel estimation are not needed there\footnote{Even though the SPU as a processing unit is identical at each node, the data dimensionality may differ from one level to another in the system tree}.
The filter can be either $\Wbij{i}{j}$ or $\Wpi{i}$, depending on whether it is part of the BDSP or the LDSP, respectively, and it supports both algorithms. The multiplexers allow switching between the filtering and formulation phases. It is important to notice that the same input and output data ports are used during both phases. The dimensionality in both phases is the same. This design decision of using the same SPU architecture throughout the LIS is highly desirable, as it considerably simplifies the design time, verification, and cost of the system. Furthermore, by using the same unit, some or all of the backplane nodes may potentially be mapped onto the panels, therefore reducing the number of physical units in the system (at the expense of increasing the workload in the panels).

\section{Performance Analysis and Design Trade-offs} \label{section:analysis} In this section, we analyze the performance and implementation cost of the proposed uplink detection algorithms with the corresponding implementation architecture. In more detail: \begin{itemize} \item Performance is analyzed based on sum-rate capacity. \item Implementation cost is analyzed in terms of computational complexity, interconnection bandwidth, and processing latency. \end{itemize} The trade-offs between sum-rate capacity and implementation cost are then presented to give high-level design guidelines. \subsection{Performance: Optimality and capacity bounds} \label{sub:performance} A closed-form sum-rate expression for the multi-panel LIS with the IIC algorithm is beyond the scope of this work; however, we present two upper bounds which provide useful insights. Numerical evaluation of the bounds is shown in the next subsection. \begin{theorem} \label{prop:ub} For a certain channel realization $\Hbf$, an upper bound for $C_{\z}$ is given by \begin{equation} C_{\z} \leq \min \{\Ca, \Cb\}, \end{equation} where \begin{equation} \Ca = K \log_2 \left( 1 + \rho\frac{S_{\Np}}{K} \right), \end{equation} and \begin{equation} \Cb = \sum_{n=1}^{K}\log_2 (1 + \rho \lambda_{n}), \end{equation} where $S_{\Np} = \sum_{i=1}^{P}\sum_{n=1}^{\Np}\lambda_{n}^{(i)}$, $\lambda_{n}^{(i)}$ is the $n$-th eigenvalue of $\Hbf_{i}^{H}\Hbf_{i}$, and $\lambda_{n}$ is the $n$-th eigenvalue of $\Hbf^{H}\Hbf$. $P\Np \geq K$ is assumed. \end{theorem} \begin{proof} $\Cb$ corresponds to the single panel case, so it acts as an upper bound, as it always outperforms the multiple-panel case under the same conditions of $P$ and $\Np$. See Appendix-\ref{proof:ub} for the proof of $\Ca$. \end{proof} \subsection{Performance: Experimental results and simulation} \begin{figure} \footnotesize \centering \psfrag{LIS}{LIS} \psfrag{1.2m}{$1.2m$} \psfrag{3m}{$3m$} \psfrag{10m}{$10m$} \includegraphics[width=\linewidth]{LIS_scenario.eps} \caption{Simulation scenario. A $10m \times 10m \times 3m$ volume, with a $1.2m \times 1.2m$ LIS. 64 users uniformly distributed in 3D.} \label{fig:LIS_scenario} \vspace*{-4mm} \end{figure} \begin{figure*}[htb] \centering \subfloat[Low SNR]{ \includegraphics[width=0.43\linewidth]{SR_vs_SNR_low.eps} } \subfloat[High SNR]{ \includegraphics[width=0.43\linewidth]{SR_vs_SNR_high.eps} } \caption{Average sum-rate capacity at the panels' output interface vs.\ SNR. Upper bounds in Theorem \ref{prop:ub} are also shown in the low and high SNR regimes.
$M=1024$, $\Mp=16$, $\Np=2$, and $K=64$.} \label{fig:sum_rate_vs_SNR} \vspace*{-8mm} \end{figure*} \begin{figure*}[htb] \centering \subfloat[RMF]{ \includegraphics[width=0.43\linewidth]{beta_b1_vs_beta_p_M1024_Mp16_K64_RMF.eps} } \subfloat[IIC]{ \includegraphics[width=0.43\linewidth]{beta_b1_vs_beta_p_M1024_Mp16_K64_IIC.eps} } \caption{Sum-rate capacity normalized by channel capacity at the CDSP interface for different values of $\beta_{b1}$ vs.\ $\beta_{p}$. $\beta_{b1}=\beta_{b2}=\beta_{b3}$. $M=1024, \Mp=16, K=64$, $\rho=10$. Black dots represent simulated cases; the rest is obtained by linear interpolation.} \label{fig:sum_rate_betas} \vspace*{-4mm} \end{figure*} \begin{figure*}[htb] \centering \subfloat[Sum-rate vs $\Cf$]{ \includegraphics[width=0.42\linewidth]{SR_vs_C_Mp_K64.eps} \label{fig:SR_vs_C} } \subfloat[Sum-rate vs $\Req$]{ \includegraphics[width=0.42\linewidth]{SR_vs_R_Mp_K64.eps} \label{fig:SR_vs_R} } \vspace*{-4mm} \\ \subfloat[Sum-rate vs $\Cf$ for different $M$]{ \includegraphics[width=0.42\linewidth]{sum-rate_vs_C_vs_M.eps} \label{fig:SR_vs_C_M} } \subfloat[Sum-rate vs $\Req$ for different $M$]{ \includegraphics[width=0.42\linewidth]{sum-rate_vs_R_vs_M.eps} \label{fig:SR_vs_R_M} } \caption{Sum-rate capacity normalized by channel capacity at the CDSP interface versus computational complexity (\ref{fig:SR_vs_C}) and interconnection data-rate (\ref{fig:SR_vs_R}); in both cases, results for different panel sizes and both algorithms are shown, with $\beta_{\mathrm{b1}}=\beta_{\mathrm{b2}}$, and the simulated points represent different $\Np$ values. Sum-rate capacity versus computational complexity (\ref{fig:SR_vs_C_M}) and versus interconnection data-rate (\ref{fig:SR_vs_R_M}) for different LIS sizes; $\Mp=64$, IIC method, and $\rho=10$. $K=64$ in all cases.} \label{fig:SR_vs_complexity} \vspace*{-4mm} \end{figure*} The scenario for simulation is shown in Fig. \ref{fig:LIS_scenario}. It consists of 64 users ($K=64$) uniformly distributed in a $10m \times 10m \times 3m$ (depth x width x height) volume in front of a $1.2m \times 1.2m$ (height x width) LIS. The signal bandwidth and carrier frequency are 100~MHz and 4~GHz, respectively. We assume the OFDM-based 5G New Radio (NR) frame structure \cite{5G} and consider uplink processing only. To obtain meaningful statistical information we generate 100 channel realizations by placing the users within the volume following a uniform distribution in the 3 dimensions. For each realization, the sum-rate capacity is calculated at different interfaces\footnote{By interfaces we mean the panel outputs, the tree node outputs, and the CDSP input.} of the system, and then averaged across all realizations. The first analysis studies the relation between sum-rate and SNR, and the validity of the bounds in Theorem \ref{prop:ub}. The averaged $C_{\z}$ for $\Np=2$ and different SNR values is shown in Fig. \ref{fig:sum_rate_vs_SNR}, which has been divided into two SNR regions for visual clarity\footnote{The bounds are obtained for the sum-rate at the panels' output interface, but are also valid for any other internal interface in the system (such as the CDSP input), as the sum-rate is lower or equal after each processing stage in the tree.}. The selection of $\Np=2$ provides enough output panel dimensionality\footnote{$\Np > 2$ also meets this requirement, but at the expense of an increased interconnection bandwidth.}, specifically: $N=128 > K$. Averaged values of the bounds are also shown for comparison.
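For reference, both bounds can be evaluated directly from the channel eigenvalues; a minimal NumPy sketch (assuming $\Hbf$ stacks the panel channels vertically, consistent with \eqref{eq:H_structure}, and with our own function name) is:
\begin{verbatim}
import numpy as np

def capacity_bounds(H_list, Np, rho):
    """Upper bounds C_a and C_b of Theorem 1 for one channel realization.

    H_list : list of (Mp, K) panel channels H_i; H = [H_1; ...; H_P].
    """
    K = H_list[0].shape[1]
    # C_a: sum of the Np largest eigenvalues of H_i^H H_i over all panels
    S_Np = sum(np.sort(np.linalg.eigvalsh(H.conj().T @ H))[::-1][:Np].sum()
               for H in H_list)
    C_a = K * np.log2(1.0 + rho * S_Np / K)
    # C_b: centralized (single-panel) capacity from the eigenvalues of H^H H
    H_full = np.vstack(H_list)
    lam = np.linalg.eigvalsh(H_full.conj().T @ H_full)
    C_b = np.sum(np.log2(1.0 + rho * lam))
    return min(C_a, C_b)
\end{verbatim}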
It is clear that $\Ca$ is tight in the low SNR region, while both bounds follow the same slope ($K$) as the sum-rate for high SNR values, with a $\sim 5$~dB offset in this case. $\Ca$ is a better bound than $\Cb$ in this scenario. The sum-rate capacity at the CDSP input interface depends on the individual selection of the reduction factor at each node in the system, which leads to a considerable number of possibilities. In order to simplify the analysis and show clearly how these individual selections affect the system performance, let us consider a tree with 3 levels (as in Fig. \ref{fig:model_backplane}) where we constrain the reduction factors as follows: $\betabb=\betabc$, and $\betabc \betabb \betaba \betap = \frac{K}{\Mp}$, where $\betabi=\frac{\Nbi}{4 N_{\text{bi-1}}}$, $\betap=\frac{\Np}{\Mp}$, and $\betaba=\frac{\Nba}{4\Np}$. By doing so, we ensure there is dimensionality $K$ at the CDSP input for every combination. Therefore, $\beta$ represents the dimensionality reduction at a certain level of the system (all nodes in a certain level are assumed to have the same $\beta$ for simplicity\footnote{We foresee that a non-uniform $\beta$ can be more adequate for scenarios with a non-uniform user distribution, which allows resources to be spent where they are needed. This is left for further analysis.}), and may take values from 0 (total reduction) to 1 (no reduction). Under this constraint, $\betap$ and $\betaba$ are free to be chosen. Each possible combination provides a different sum-rate at the CDSP interface, in exchange for a different complexity cost\footnote{Computational complexity and interconnection bandwidth.}. Fig. \ref{fig:sum_rate_betas} shows the relation between these two parameters and the normalized sum-rate (a value of 1 refers to the channel capacity measured at the antenna interface, and consequently it is the same for both algorithms) for RMF and IIC. It is important to note that multiple ($\betap$, $\betaba$) working points on the same contour level provide the same performance. We verify that a lower reduction (higher $\beta$) leads to higher capacity (but higher interconnection bandwidth), reaching the maximum (or close to it) if no reduction takes place in the first two levels (point (1,1) in the figure). It is evident that IIC allows a higher reduction than RMF for the same performance, which translates into lower complexity during filtering, in exchange for higher formulation complexity (computational, due to the SVD dependency, and interconnection, due to the panel-panel local exchange of data). \subsection{Computational Complexity} \label{sub:complexity} We consider the number of complex multiplications (MAC) as the metric to measure computational complexity. Our analysis includes both phases: formulation and filtering. In the filtering phase, the operations are the same for RMF and IIC, and consist of applying a linear filter of size $\Np \times \Mp$ in the panels to the $\Mp \times 1$ input vector, and similarly for the BDSP nodes (with different sizes).
The total computational complexity for filtering is given by (in MAC/s) \begin{equation} \Cf = \underbrace{\fB P \Cf^{(0)}}_{\text{front-end}} + \underbrace{\fB \sum_{n=1}^{L} \Nspu^{(n)} \Cf^{(n)}}_{\text{backplane}}, \label{eq:Cfilt} \end{equation} where $\fB$ is the signal bandwidth, $\Cf^{(0)} = \Mp \Np$ is the computational complexity per panel to filter one subcarrier, $\Cf^{(n)} = 4\Nb^{(n-1)} \Nb^{(n)}$ is the corresponding complexity at a node of level $n$, $L$ is the number of levels in the tree, $\Nspu^{(n)}$ is the number of SPUs at level $n$, that is $\Nspu^{(n)} = \frac{P}{4^{n}}$, and $\Nb^{(0)} = \Np$ for notational convenience. The formulation phase of RMF includes the computation of $\|\h\|^{2}$ for each user. For the IIC algorithm, the steps required in the formulation phase are shown in Algorithm \ref{algo:IIC} for each panel\footnote{For the backplane there is no exchange of $\Z$, as explained in Section \ref{section:arch}, so the computational complexity is greatly reduced.}. This algorithm relies on the singular value decomposition (SVD), which we assume is based on 2 steps: Householder bidiagonalization and the QR method by Givens rotations. Bidiagonalization is dominant in terms of complexity, so the total complexity of the SVD of an $M \times N$ complex matrix can be approximated by $2 M^2N$. For step 1 of Algorithm \ref{algo:IIC}, the SVD of the $K \times K$ Gramian matrix $\Z_{i-1}$ is required, with complexity $2K^3$. Step 2 has a complexity of $(\Mp+1)K^2$, steps 3 and 4 together require a complexity of $2\Np d_{0}^2$, where $d_{0}=\max\{K,\Mp\}$, $\Hbf_{eq}^{H}=\Wbfh \Hbf$ requires $\Np \Mp K$ products, and step 5 requires $\Np K^2$. The total computational complexity for IIC is given by (in MACs)\footnote{We assume one channel estimate per PRB, and therefore one filtering matrix calculation per PRB.} \begin{equation} C_{\text{form,IIC}} = \underbrace{\Nprb P \Cform^{(0)}}_{\text{front-end}} + \underbrace{\Nprb \sum_{n=1}^{L} \Nspu^{(n)} \Cform^{(n)}}_{\text{backplane}} \label{eq:Cform_IIC} \end{equation} where $\Cform^{(0)} = (2K + \Mp + \Np) K^2 + \Np \Mp K + 2\Np d_{0}^2$ is the computational complexity per panel during formulation, while $\Cform^{(n)} = 4\Nb^{(n)}\Nb^{(n-1)}K + 2\Nb^{(n)}d^{2}_{n}$ is the corresponding one per node at level $n$, with $d_{n}=\max\{K,4\Nb^{(n-1)}\}$. For RMF we have the same expression as \eqref{eq:Cform_IIC} with $\Cform^{(0)}=\Mp K$ and $\Cform^{(n)}=4\Nb^{(n-1)} K$. Figure \ref{fig:SR_vs_C} shows the normalized sum-rate capacity versus the computational complexity during filtering for both algorithms and different panel sizes. We observe that IIC achieves better performance than RMF for the same panel size, while large panels are key to harvesting most of the capacity, with $\Mp\geq64$ reaching channel capacity in our simulations. Figure \ref{fig:SR_vs_C_M} shows the sum-rate capacity versus the computational complexity during filtering for different LIS sizes (IIC assumed) and the same panel size ($\Mp = 16$). It is very interesting to observe that the same performance (for example 100) can be achieved by both $M=4096$ and $M=1024$, and with the same computational complexity. However, their architectures may differ substantially, as the smaller LIS requires a higher number of outputs per panel and a lower dimensionality reduction than the larger LIS, where an aggressive reduction can be used. In summary, the small LIS ($M=1024$) harvests a significant fraction of the available channel capacity, while the larger LIS exploits only a very small fraction of it.
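To make the bookkeeping explicit, the totals in \eqref{eq:Cfilt} and \eqref{eq:Cform_IIC} can be evaluated numerically as follows; this is a sketch with our own function and variable names, assuming a full 4-ary tree.
\begin{verbatim}
def filtering_complexity(fB, P, Mp, Np, Nb, L):
    """Total filtering complexity C_filt in MAC/s, Eq. (eq:Cfilt).

    Nb : list [Nb1, ..., NbL] of output dimensions per tree level; Nb0 = Np.
    """
    dims = [Np] + list(Nb)
    front_end = fB * P * Mp * Np
    backplane = fB * sum((P // 4 ** n) * 4 * dims[n - 1] * dims[n]
                         for n in range(1, L + 1))
    return front_end + backplane

def iic_formulation_complexity(Nprb, P, Mp, Np, Nb, K, L):
    """Total IIC formulation complexity in MAC, Eq. (eq:Cform_IIC)."""
    dims = [Np] + list(Nb)
    d0 = max(K, Mp)
    C0 = (2 * K + Mp + Np) * K ** 2 + Np * Mp * K + 2 * Np * d0 ** 2
    total = Nprb * P * C0
    for n in range(1, L + 1):
        dn = max(K, 4 * dims[n - 1])
        Cn = 4 * dims[n] * dims[n - 1] * K + 2 * dims[n] * dn ** 2
        total += Nprb * (P // 4 ** n) * Cn
    return total
\end{verbatim}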
This comparison presents a very interesting design trade-off. \subsection{Interconnection bandwidth} In this section we analyze the interconnection bandwidth during the filtering phase, covering panel-node and node-node links. This bandwidth is given by (in bps) \begin{equation} R_{\text{inter}} = \underbrace{2w \fB P \Np}_{\text{front-end}} + \underbrace{2w \fB \sum_{n=1}^{L} \Nspu^{(n)} \Nb^{(n)}}_{\text{backplane}}, \nonumber \end{equation} where $w$ is the bit-width of the SPU input/output (real and imaginary parts). In our analysis we also consider the data movement happening internally at the panel/node level, which covers the data transfer from the input ports to the SPU for processing, and from the SPU to the output ports. We name this transfer data-rate the \textit{intra-connection data-rate}, or $R_{\text{intra}}$\footnote{We are aware that $\Rintra$ does not include all the internal data-rate in a real system, as this is highly dependent on the specific implementation, internal topology, and type of processing unit employed in the panel. However, the spirit of this work is to provide a general analysis and a first-order approximation of the required complexity, applicable to all possible implementations, instead of being tied to a specific hardware implementation and providing exact analysis numbers.}, and \begin{equation} \begin{aligned} R_{\text{intra}} &= \underbrace{2w \fB P (\Mp + \Np)}_{\text{front-end}} \\ &+\underbrace{2w \fB \sum_{n=1}^{L} \Nspu^{(n)} (4\Nb^{(n-1)}+\Nb^{(n)}) }_{\text{backplane}}. \end{aligned} \nonumber \end{equation} In order to take both magnitudes into consideration in our analysis, we define the relative cost $\alpha$ as $\alpha \triangleq \text{cost}(\Rintra)/\text{cost}(\Rinter)$, and the cost-equivalent interconnection data-rate $\Req$ as $\Req \triangleq \Rinter + \alpha \Rintra$. In this analysis, we take power per unit data-rate as the cost metric. If we assume serial link (serdes) technology for the intra-connection and Ethernet for the inter-connection, then we obtain power consumptions of $1.29-24.8$~mW/Gbps and $40$~mW/Gbps, respectively, according to different sources \cite{serdes1,serdes2,eth1,eth2}. The serdes power range is very wide, so as an example we take 4~mW/Gbps as a reference, which gives $\alpha \sim \frac{1}{10}$\footnote{These numbers depend on the technology used; however, the method still holds.}. Figure \ref{fig:SR_vs_R} shows the normalized sum-rate capacity versus the equivalent interconnection bandwidth during filtering for both algorithms and different panel sizes. We observe that IIC achieves better performance than RMF for the same panel size, and that large panels are capable of harvesting most of the channel capacity in our simulations. It is relevant to point out that small panels require a higher total interconnection data-rate than large panels; however, it is more evenly distributed among panels and nodes, considerably reducing the bottlenecks. Fig. \ref{fig:SR_vs_R_M} shows the sum-rate capacity versus the interconnection bandwidth during filtering for different LIS sizes (IIC assumed). Conclusions similar to those of Fig. \ref{fig:SR_vs_C_M} can be drawn. \subsection{Processing Latency} The processing latency represents the time between when the estimated channel of a subcarrier is available at the panels and when the data of that subcarrier is filtered and available at the CDSP input for detection.
The latency can be expressed as $L_{\text{tot}} = L_{\text{form}} + L_{\text{filt}}$, where $L_{\text{form}}$ is the formulation latency and $L_{\text{filt}}$ is the latency for data filtering. More specifically, $L_{\text{form}} = L^{\text{proc}}_{\text{form}} + (\nP - 1) L^{\text{com}}_{\text{local}} + (L+1) L^{\text{com}}_{\text{global}}$, where $L^{\text{proc}}_{\text{form}}$ is the time needed to calculate the filter coefficients, $L^{\text{com}}_{\text{local}}$ refers to the panel-to-panel communication latency (only in IIC), and $L^{\text{com}}_{\text{global}}$ refers to the panel-to-node and node-to-node link communication latency. $\nP$ is the number of panels involved ($\nP = 1$ in RMF and $P$ in IIC for the worst case)\footnote{Depending on the user distribution we may not need to go through all panels ($\nP < P$), with the subsequent benefits. We leave both items for future work.}. For the filtering latency we have $L_{\text{filt}} = L^{\text{proc}}_{\text{filt}} + (L+1) L^{\text{com}}_{\text{global}}$, which accounts for the filtering in panels and nodes, and the communication latency. We assume the IIC formulation is done sequentially along all panels (worst case) using the local connections, and then across the nodes in the tree. The processing latency highly depends on the hardware architecture used to implement the algorithms. Here we assume highly optimized accelerators (e.g., ASICs) are used, such that the available data parallelism ($\Nparal$) can be exploited using $N_{\text{proc}}$ processing units ($N_{\text{proc}}<\Nparal$), i.e., the $N_{\text{proc}}$ PEs will take $\Nparal/N_{\text{proc}}$ clock cycles to iteratively process $\Nparal$ parallel operations. Moreover, the channel matrix (of the subcarrier that is being processed) is cached in register files (the latency for memory access is hidden). The main component of $L^{\text{proc}}_{\text{form}}$ is the time needed to perform the SVD, which is implemented by Householder bidiagonalization followed by the QR method based on Givens rotations. The processing of each column and row can be done in parallel, while sequential processing is needed between columns and rows due to the data dependency. With these assumptions, the total processing latency in the formulation phase is $L^{\text{proc}}_{\text{form}} = \frac{\widetilde{C}_{\text{form}} T_{\text{CLK}}}{N_{\text{proc}}}$, where $\widetilde{C}_{\text{form}} = \nP \Cform^{(0)} + \sum_{n=1}^{L}\Cform^{(n)}$. The first term in $\widetilde{C}_{\text{form}}$ represents the serial processing in the front-end, and the second term represents the computational complexity of one branch of the tree. $\Cform^{(0)}$ and $\Cform^{(n)}$ are defined after \eqref{eq:Cform_IIC}. $T_{\text{CLK}}$ is the clock period, and we assume that one complex multiply-and-accumulate (MAC) operation can be done within one clock cycle. In the case of filtering, the processing latency is given by $L^{\text{proc}}_{\text{filt}} = \frac{\widetilde{C}_{\text{filt}} T_{\text{CLK}}}{N_{\text{proc}}}$, where $\widetilde{C}_{\text{filt}} = \sum_{n=0}^{L}\Cf^{(n)}$ is the computational complexity corresponding to a path between a panel and the CDSP, and $\Cf^{(n)}$ is defined after \eqref{eq:Cfilt}. \begin{table}[t!]
\centering \begin{tabular}{c|c|c|c|c|c|c} \hline $\textbf{Method}$ & $\Cform$ & $\Cf$ & $\Rinter$ & $\Rintra$ & $L_{\text{form}}$ & $L_{\text{filt}}$ \\ \hline \hline IIC & 3.1 & 2.3 & 1.0 & 5.4 & 110.2 & 1.0\\ \hline RMF & 0.02 & 2.3 & 1.0 & 5.4 & 1.2 & 1.0\\ \hline \end{tabular} \caption{Values of total complexity for a LIS with $M=1024$, $\Mp=64$, $K=50$, $\betap=1/4$, $\betaba=1/2$, $w = 12$ bits, $\Nprb = 275$, and $\fB=100$~MHz. Units are as follows: $\Cform$ [GMAC], $\Cf$ [TMAC/s], $\Rinter$ [Tb/s], $\Rintra$ [Tb/s], $L$ [$\mu$s].} \label{table:Comp_total} \vspace*{-4mm} \end{table} \subsection{Case study and discussion} We have analyzed the performance together with the computational complexity, interconnection data-rate, and processing latency. General expressions for these magnitudes have been presented based on general system parameters, such as the number of users, number of antennas, number of panels, and signal bandwidth, among others, which makes it easy to particularize them for concrete implementations. Nevertheless, based on the trade-off analysis shown in Fig. \ref{fig:SR_vs_C} and Fig. \ref{fig:SR_vs_R}, we can see $\Mp=64$ as an attractive option, as it provides higher capacity than $\Mp=16$ for the same computational complexity and interconnection data-rate, while being able to reach channel capacity in our analysis scenario. It is also of a reasonable size in case we want to distribute the LIS over a certain area. On top of that, its physical dimensions make it easy to handle and mount ($30cm \times 30cm$ at 4~GHz). For this panel size we present numerical values of the analyzed complexity in Table \ref{table:Comp_total}. The following parameter values are assumed: $\nP=P=16$, $T_{\text{CLK}} = 1$~ns, $N_{\text{paral}} = 100$, $L^{\text{com}}_{\text{local}}=100$~ns (serdes technology assumed \cite{serdes1,serdes2}), and $L^{\text{com}}_{\text{global}}=300$~ns (Ethernet assumed \cite{eth1,eth2,eth_ti}). Assuming 12 subcarriers per PRB, the subcarrier spacing in our example is $\frac{\fB}{12\Nprb}=30$~kHz, and the OFDM symbol duration is therefore $\approx 33~\mu s$. The benefits of the distributed architecture are evident in terms of interconnection data-rate reduction. Looking at the CDSP input interface, the reduction is easily obtained as $\frac{M}{K} \sim 20$x. Of course, this comes at the expense of a performance loss due to the dimensionality reduction, but as we have explained before, the system is fully configurable, offering a rich performance-complexity trade-off. It is important to consider that even though the computational complexity and interconnection data-rate numbers may seem large, they are distributed among all the processing units in the LIS. This LIS contains 21 SPUs (panels + backplane nodes). Regarding latency, the $L_{\text{form,RMF}}$ and $L_{\text{filt}}$ values seem reasonable for the NR frame structure. We observe that $L_{\text{form,IIC}}$ is much higher due to the higher computational complexity required by this method (in this example, equivalent to 3 OFDM symbols). For a certain LIS system this latency is sensitive to the $\beta$ used in panels and nodes (which translates into complexity cost) and to $K$ (system capacity). Therefore we can foresee a trade-off between these system parameters and how often the filters are updated in panels and nodes. It is important to remark that we analyzed the latency from a worst-case point of view, where all panels in the LIS are serially connected and jointly contribute to the formulation.
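As a worked check of this latency model, the expressions above can be evaluated as follows; this is a sketch with our own function names, and we read the case-study value of 100 as the number of processing units per accelerator.
\begin{verbatim}
def formulation_latency(nP, Cform0, Cform_levels, Nproc, Tclk,
                        L_local, L_global):
    """Worst-case formulation latency: serial panel pass plus one tree branch,
    plus the local and global communication terms."""
    C_form = nP * Cform0 + sum(Cform_levels)
    L_levels = len(Cform_levels)
    return (C_form * Tclk / Nproc
            + (nP - 1) * L_local
            + (L_levels + 1) * L_global)

def filtering_latency(Cf_path, Nproc, Tclk, L_global):
    """Filtering latency along one panel-to-CDSP path.

    Cf_path : [Cf^(0), ..., Cf^(L)] per-subcarrier filtering complexities."""
    L_levels = len(Cf_path) - 1
    return sum(Cf_path) * Tclk / Nproc + (L_levels + 1) * L_global
\end{verbatim}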
In practice, however, we do not regard this fully serial formulation as the best approach, as it may only be helpful in cases with a very high density of users and dominant interference over noise. We foresee groups of panels performing serial processing within each group but parallel processing among groups, considerably reducing the formulation latency. We are aware that, depending on the implementation (selection of the memory system, hardware, and interconnection), the latency may differ; here we provide a high-level analysis assuming dedicated accelerators without any overhead.

\section{Conclusions} \label{section:conclusions} In this article we have presented distributed uplink processing algorithms and the corresponding hardware architecture for the efficient implementation of large intelligent surfaces (LIS). The proposed processing structure consists of local panel processing units, which reduce the dimensionality of the incoming data without losing much information, and a hierarchical backplane network with distributed processing-combining units to support flexible and efficient data aggregation. We have systematically analyzed the system capacity and implementation cost for different design parameters, and provided design guidelines for the implementation of LIS.
\section{\label{sec:Introduction}Introduction} Sawtooth oscillations\cite{Goeler1974} are internal relaxation events in a tokamak that lead to a rapid drop of the core electron temperature. A significant question for sawtooth oscillations is what causes the short crash time. In the traditional Kadomtsev model\cite{Kadomtsev1975}, the crash time is related to how fast reconnection can occur to re-arrange the magnetic field. Several possible competing mechanisms have been proposed for the fast crash, including two-fluid effects at the reconnection layer\cite{AydemirPoF1992,WangPRL1993,KlevaPoP1995,FoxPRL2017}, plasmoid instability\cite{loureiro2007instability,bhattacharjee2009fast,gunter2014fast}, and interchange instability\cite{wesson1986sawtooth,jardin2020new}. The understanding of the sawtooth crash phenomenon has improved with the development of new measurement capabilities. Current profile measurements\cite{levinton1993q}, reconstruction of 2-D temperature profiles using soft X-ray (SXR)\cite{nagayama1988soft} and electron cyclotron emission (ECE)\cite{nagayama1996tomography} tomography, and direct imaging of the 2-D electron temperature using electron cyclotron emission imaging\cite{park2019newly} (ECEI) have provided valuable information to differentiate between various models. For example, measurements of the 2-D electron temperature using ECEI in TEXTOR\cite{park2006comparison,park2006observation} supported the occurrence of magnetic reconnection, while the SXR tomography diagnostic in JET\cite{granetz1988x} showed the presence of the interchange mode. Beam emission spectroscopy\cite{FonckRSI1990} (BES) is an active plasma diagnostic that can measure the time evolution of the plasma density in a 2-D plane. A high-energy hydrogenic neutral beam is injected into the plasma, and the associated emission from the collisionally-excited neutral beam fluorescence is observed. The intensity of light emission is related to the plasma density via atomic physics. The valuable insights obtained from the localized density information supplied by the BES in various experiments\cite{mckee2006high, mckee2003turbulence, schlossberg2006velocity,yan2011high,yan2014observation} motivate further development of BES setups and analysis techniques to measure the plasma density during sawtooth events. The plasma density is a relatively unexplored measurement channel during sawtooth events and may show physics complementary to the temperature channel, which is dominated by the sawtooth-driven heat transport. Observing the plasma density may therefore help determine which of several processes drive the sawtooth \cite{AydemirPoF1992,WangPRL1993,KlevaPoP1995,FoxPRL2017,loureiro2007instability,bhattacharjee2009fast,gunter2014fast,wesson1986sawtooth,jardin2020new}. In this paper, we develop techniques for direct observation of the plasma density local to the q=1 layer during sawtooth events using BES. The most significant challenge we overcome is to isolate the core BES signals from stray ``edge'' light. In typical DIII-D experiments, the spectral Doppler shift induced by a beam of velocity $v/c \sim 0.01$ ($\sim$ 80 keV deuterium atoms) is a few nm, which is sufficient under usual applications to spectroscopically isolate the beam emission from the $\rm{D}_{\alpha}$ emission produced by edge recycling, enabling accurate measurement of the core density fluctuations\cite{Mckee1999}. However, during the sawtooth events studied here, the $\rm{D}_{\alpha}$ edge light is exceptionally high in magnitude.
The edge light is transmitted to the BES photodiodes despite the high attenuation through the optical interference filter. The edge light temporally overlaps with the neutral-beam-driven core emission, complicating the interpretation. This motivates the present paper, in which we describe a technique to identify and remove the undesired edge light from the BES data and thereby isolate the core BES signals. The technique may also be valuable for analyzing the plasma density evolution during other impulsive MHD events such as edge localized modes\cite{leonard2014edge}. Other active spectroscopic diagnostics employing a neutral beam may also find our technique to be useful. The scope of this paper is to discuss, in detail, the BES analysis procedure and a sample of measurements of the 2-D in-plane plasma density during sawtooth events. A large-amplitude density oscillation, likely associated with a rotating $(m,n) = (1,1)$ mode, is observed to grow near the onset of the crash in core electron temperature $T_{e,\rm{core}}$. This mode reaches its maximum amplitude at the latter end of the crash, after which it decays over a few cycles. In addition, a density gradient in the $R$-$Z$ plane across the $q=1$ surface is found to be associated with a sawtooth crash. A detailed physics analysis and study of multiple events will be pursued in upcoming publications. The rest of the paper is organized as follows: Section~\ref{sec:overview} gives a brief overview of sawtooth oscillations and how spatially-resolved density measurements can help to constrain the physics. The experimental setup is described in Section~\ref{sec:experimental_set_up}. The method for BES analysis of sawtooth events and the analysis of the experimental results are presented in Section~\ref{sec:result_analysis}. A discussion and summary follow in Section~\ref{sec:discussion}. \section{\label{sec:overview}Overview} Sawtooth oscillations \cite{hastie1997sawtooth,chapman2010controlling} are a periodic relaxation of $T_{e,\rm{core}}$ in tokamaks. These oscillations are characterized by a slow build-up of $T_{e,\rm{core}}$ followed by a rapid ``crash'' phase, so that the temporal evolution over several cycles appears sawtooth-like. According to most models, the sequence of events leading to a crash is as follows: Typically, tokamaks have a peaked temperature profile. This supports a peaked toroidal current because of a higher conductivity at the center of the plasma. A higher toroidal current further heats the core, increasing $T_{e,\rm{core}}$ in a positive feedback loop. Enhancement of the toroidal current increases the poloidal magnetic field, lowering the $q$ value near the core, where $q = \left \langle rB_\phi / R B_\theta \right \rangle$ is the inverse rotational transform of the magnetic field, and the angle brackets refer to the flux surface average\cite{wesson2011tokamaks}. When $q$ in the core becomes less than unity, current-driven\cite{Kadomtsev1975} or pressure-driven\cite{wesson1986sawtooth} MHD instabilities can grow. These instabilities relax the temperature and current profiles, causing the cycle to begin anew. The open physics question is how an MHD mode or modes can cause a rapid crash in $T_{e,\rm{core}}$. According to models based on magnetic reconnection, a growing kink mode causes magnetic reconnection at the $q = 1$ surface, which rearranges the magnetic field and relaxes the temperature via fast transport along the newly reconnected field lines.
However, as calculated by the Sweet-Parker model, resistive reconnection is not fast enough to explain the observed short crash time. Extensions to reconnection theory have been proposed to overcome this shortcoming. Two-fluid effects about a single reconnection current sheet, or the development of multiple plasmoid structures in the reconnection layer due to secondary tearing of the current sheet, can speed up magnetic reconnection and explain the fast crash time\cite{AydemirPoF1992,gunter2014fast}. 2-D density measurements at the $q=1$ sawtooth inversion region would enable the detection of density structures that may be relevant to confirming these reconnection models. A signature of the two-fluid effect would be a quadrupolar variation of density in the inversion region, i.e., two regions about the current sheet near the X-point with a positive density perturbation and the other two with a negative one\cite{KlevaPoP1995,FoxPRL2017,Bose2020,2020APSDPPJ11004B}. The plasma density evolution may alternatively provide evidence for plasmoid reconnection\cite{gunter2014fast}. An alternative explanation of a sawtooth event that does not depend on reconnection predicts that the crash is caused by higher-mode-number pressure-driven interchange modes\cite{jardin2020new}. In this case, a 2-D density measurement in the $R$-$Z$ plane may detect such higher-order MHD modes. We note that, depending on the line of sight, interferometer data may exhibit signs of density fluctuations\cite{chapman2010controlling} as well. \section{\label{sec:experimental_set_up}Experimental Setup} \begin{figure}[h] \includegraphics[scale=0.65]{Figure1.pdf} \caption{\label{fig:setup} $R$-$Z$ plane of DIII-D showing the $q$-contours, BES channels, ECE channels, and filterscope (FS04UP).} \end{figure} Experiments to measure the time-resolved 2-D density evolution during sawtooth oscillations were conducted in DIII-D with a setup allowing for BES\cite{Mckee1999,mckee2010wide} measurements at and around the $q=1$ surface. In order to shift the $q=1$ surface to a suitable location for the BES, a relatively low $q_a \sim 2.2$ DIII-D discharge was adopted in an L-mode plasma, where the toroidal magnetic field was $\rm{B}_{\rm{T}}=1.52$~T, and the plasma current $I_{\rm{p}}$ was 1.76 MA. The BES diagnostic uses a hydrogenic neutral beam of 45--80 keV energy (55 keV in our experiment) that is injected into the plasma by a neutral beam source. As neutrals in the beam collide with plasma electrons, ions, and impurities, a fraction of the beam neutrals enter the $\rm{n} = 3$ state via direct excitation or cascade processes. Transitions from the $\rm{n}=3$ to the $\rm{n}=2$ state cause emission near $\lambda_0 = 656.1~\rm{nm}$. The rather high velocity of the neutral beam, $v/c \sim 0.01$, causes the emission manifold to be blue-shifted to near $653 - 655~\rm{nm}$\cite{Mckee1999}. This blue shift is sufficient to isolate the neutral beam fluorescence from the thermal $\rm{D}_{\alpha}$ emission produced by edge recycling in most experiments using customized interference filters. The measured light intensity fluctuations are related to the plasma electron and ion density fluctuations through the atomic physics of the beam excitation process and are only weakly dependent on other beam and plasma parameters\cite{FonckRSI1990,HutchinsonPPCF2002}. At DIII-D, a 2-D BES system has been implemented which measures the beam fluorescence using 64 spatial channels\cite{Mckee1999, mckee2006high}.
A flexible fiber array mount allows for rapid and easy reconfiguration of the 64 channels for the scientific needs of a given experiment. For this experiment, the array was organized in an $8\times8$ configuration for a total coverage of $8 \times 20$~cm in the radial-poloidal plane. The location of the BES channels is shown in Figure \ref{fig:setup}, which also shows an EFIT plasma equilibrium reconstruction along with several other diagnostics discussed in the present paper. The contour of the $q=1$ surface passes through the area sampled by BES. The BES measurement volume is located at a toroidal angle of $\sim140^{\circ}$. The BES sightlines are approximately tangent to flux surfaces and are angled to match the dominant magnetic field pitch angle to provide good spatial resolution perpendicular to the field lines. Each BES channel integrates emission over an approximately $1 \times 1.3\; \rm{cm}$ region in the $R$-$Z$ plane, though the detailed point-spread function is calculated for each channel and shot based on position, equilibrium, and profiles using the diagnostic geometry\cite{shafer2006spatial}. The light acquired by the collection optics is converted to voltage signals by photodiodes \cite{Mckee1999,fonck1992low,gupta2004enhanced}. The high and low frequency components of the BES photodiode signals are saved separately to provide extra bit resolution, in what we refer to below as the ``fast'' and ``slow'' BES channels, respectively. In particular, the fast channels are AC-coupled, and pass frequencies from $\sim1~\rm{kHz}$ up to a cutoff frequency of 425~kHz. To complement the BES measurements, several other diagnostics were used (Ref.~\cite{boivin2005diii} and references therein). The electron temperature was measured using a 40-channel electron cyclotron emission radiometer located at a toroidal angle of $81^{\circ}$. The layout of the ECE channels is shown in Figure \ref{fig:setup}. We used channel 10 to measure $T_{e,\rm{core}}$, which was found to be $\sim3~\rm{keV}$. The electron density near the $q=1$ surface is measured using a multichannel Thomson scattering diagnostic. Since the time response of the Thomson scattering diagnostic is not fast enough to resolve density fluctuations due to sawtooth oscillations, this diagnostic gives a local time-averaged electron density, $n_{e0}$. We compared the $n_{e0}$ measured using Thomson scattering at multiple locations in the neighborhood of the plasma sampled by BES. We did not observe any significant spatial variation of $n_{e0}$, and its magnitude is $\sim 3.5 \times 10^{13}\; \rm{cm^{-3}}$. Magnetic field fluctuations, $dB/dt$, are measured using B-dot probes, which are located external to the plasma at various toroidal angles. Here we use $dB/dt$ measured by a high-frequency B-dot probe located at a toroidal angle of $150^{\circ}$. A filterscope called FS04UP is used for line-integrated measurements of edge $\rm{D}_{\alpha}$ spectral line emission. The line of sight of the FS04UP is shown by the green dashed line in Figure \ref{fig:setup}. \section{\label{sec:result_analysis}Results and Analysis} \subsection{Raw data and preprocessing} \begin{figure} \centering \includegraphics[scale=0.88]{Figure2.pdf} \caption{Time evolution of several quantities during a series of sawtooth events. (a) $T_{e,\rm{core}}$ measured by ECE, (b) B-dot probe showing magnetic activity during the temperature crash. Temporal variation of the (c) fast and (d) slow BES for an example channel.
The time range for the inset in (c) is from 3880 to 3900~msec. (e) Neutral beam power vs.\ time showing the neutral beam turn-off at 3999.42 msec. Note that the slow BES signal drops and the 360~Hz power supply oscillation in the fast BES signal disappears after the neutral beam is turned off.} \label{fig:sample_data} \end{figure} \begin{figure} \centering \includegraphics[scale=0.87]{Figure3.pdf} \caption{Demonstration of the processing routine for the raw fast BES signal, using channel 28 data as an example. Blue and black lines represent data with the neutral beam ``on'' and ``off'', respectively. Red is used for curves obtained after a mathematical operation. (a) Raw BES fast channel data, where the power-supply hum (red) is removed in (b). (c) Fast BES, (d) filterscope, and (e) B-dot probe data corresponding to a sawtooth event when the neutral beam was on. (f) Fast BES, (g) filterscope, and (h) B-dot probe data corresponding to a sawtooth event after the neutral beam is turned off at 3999.42 msec. (i) Sawtooth signals before and after neutral beam turn-off, aligned for edge light removal. (j) Sawtooth signal with the edge light removed.} \label{fig:analysis_steps} \end{figure} Data from DIII-D shot number 176392 is shown in Figure~\ref{fig:sample_data}, which shows a sequence of 5 sawtooth crashes during the plasma current flat-top and which we will use to illustrate the analysis chain. $T_{e,\rm{core}}$ exhibits a sawtooth pattern, rapidly falling from $\sim 3.2$ to $\sim 1.9$~keV at each crash, as seen in Figure~\ref{fig:sample_data}(a). At each sawtooth crash, the B-dot probe (Figure~\ref{fig:sample_data}b) and fast BES (Figure~\ref{fig:sample_data}c) also show a burst of activity. The slow BES signal containing background density information and the time variation of the power of the neutral beam used by the BES diagnostic are shown in Figures~\ref{fig:sample_data}(d) and (e), respectively. In the experiment, the neutral beam was switched off at 3999.42~msec. This leads to a rapid, but not complete, drop in the BES signal. A crucial point is that BES continues to observe a response during sawtooth events even when the BES probe beam is off. Additionally, the fast BES signal has a $\approx$~360~Hz frequency component in the presence of the neutral beam current, which is due to power-supply ripple on the neutral beam and which is absent once the beam is off. These features of the BES signal with and without the probe beam provide essential components for developing the data analysis routine. We now describe how we process the BES data to obtain the localized density evolution during a sawtooth event (Figure~\ref{fig:analysis_steps}). The first processing step removes the $\approx 360~\rm{Hz}$ ``hum'' caused by the neutral beam power supply. A simple filter cannot be used to remove the hum because the light collected from the plasma also has frequency content near 360~Hz. The hum is instead isolated with a digital comb filter obtained by averaging the signal with several shifted versions of itself. The isolated ``hum'' is shown by the red curve in Figure~\ref{fig:analysis_steps}(a), and it is subtracted off in Figure~\ref{fig:analysis_steps}(b). Next, a comparison of the fast signal before and after the neutral beam turn-off in Figures~\ref{fig:analysis_steps}(c) and (f) shows that the fast BES observes a significant burst during a sawtooth crash even in the absence of the neutral beam.
While the BES information is ordinarily localized by the probe beam, ultimately the field of view also passes through edge plasma regions which may also have large $D_\alpha$ emission. First, although the edge emission is expected to be spectrally filtered, as it is not Doppler-shifted to the neutral-beam velocity, the filtering is evidently not complete. Second, it appears these sawtooth events are violent enough that they cause an extra burst of plasma and light at the edge, which is synced to the sawtooth events and slightly delayed by a few hundred microseconds. We compared the fast BES data with the filterscope data in Figures~\ref{fig:analysis_steps}(d) and (g) to understand this. The filterscope does not sample the identical plasma volume viewed by the BES but does acquire light from the plasma edge. Both the fast BES and the filterscope exhibit a burst of signal following each sawtooth. The bursts arise contemporaneously with or shortly after a sawtooth crash, overlap the measured core instability signal in time, and have roughly similar measured amplitude. This suggests that a rapid expulsion of particles, caused by a sawtooth, creates a burst of edge light emission as the particles recycle from the vessel walls. We adopted the following procedure to remove the edge light from the sawtooth events. The steps followed to remove the edge light from the fast BES are shown in Figures~\ref{fig:analysis_steps}(e), (h), (i) and (j). First, the sawtooth events exhibit a few characteristic patterns on the B-dot probes, likely related to toroidal differences from sawtooth to sawtooth. Therefore, we first identify beam-on and beam-off sawtooth events which have a similar B-dot pattern. Next, the beam-on and beam-off B-dot probe signals (Fig.~\ref{fig:analysis_steps}(e) and (h)) are cross-correlated to find the lag time. This lag time is used to align the fast BES signals, as shown in Figure~\ref{fig:analysis_steps}(i). We can observe that the initial fast oscillations early in the event are not matched between the beam-on and beam-off cases, but that the $\sim$ms-time-scale ``pulse'' toward the end of the event is nearly identical in the two cases. This suggests that the initial fast oscillations in the beam-on event represent \textit{bona fide} core density oscillations, but the longer pulse is simply the edge light. Finally, the beam-off fast BES signal is subtracted from the beam-on fast BES signal to obtain the fast BES signal from the core plasma, shown in Figure~\ref{fig:analysis_steps}(j). Henceforth, this fast BES signal from the core is referred to as $V_{\rm{f}}$. Note that $V_{\rm{f}}$ is not symmetric about zero but exhibits a positive skewness, so a simple bandpass filter is not appropriate to remove the edge light. The slow BES signal is shown in Figure~\ref{fig:sample_data}(d). The signal drops after the beam turn-off, but it does not go to zero. We checked the entire time series of the slow BES signal and observed that the signal goes to zero only in the absence of plasma. This indicates that in the presence of plasma, the BES photodiodes collect some light, primarily from residual edge recycling and visible bremsstrahlung, even in the absence of the neutral beam. The slow BES is recording that signal. The average light from the sawtooth inversion region due to neutral beam fluorescence is isolated by subtracting the slow BES signal averaged over a sawtooth period after beam turn-off from the signal when the beam is on.
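A minimal NumPy sketch of this preprocessing chain is given below; the array names, the comb-filter span, and the use of a circular shift are our own simplifications, not the production analysis code, and they assume the sampling rate is an integer multiple of the hum frequency.
\begin{verbatim}
import numpy as np

def remove_hum(v, period_samples, n_periods=8):
    """Isolate and subtract the ~360 Hz power-supply hum by averaging the
    signal with several copies shifted by integer hum periods (comb filter)."""
    shifts = range(-n_periods // 2, n_periods // 2 + 1)
    hum = np.mean([np.roll(v, s * period_samples) for s in shifts], axis=0)
    return v - hum

def remove_edge_light(v_on, v_off, bdot_on, bdot_off):
    """Align a beam-off sawtooth to a beam-on one using the B-dot
    cross-correlation, then subtract the edge-light response."""
    c = np.correlate(bdot_on, bdot_off, mode="full")
    lag = int(np.argmax(c)) - (len(bdot_off) - 1)
    return v_on - np.roll(v_off, lag)

def core_slow_signal(v_slow_on, v_slow_off):
    """Average neutral-beam-driven light V_s: beam-on slow signal minus the
    beam-off baseline, each averaged over a sawtooth period."""
    return np.mean(v_slow_on) - np.mean(v_slow_off)
\end{verbatim}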
This average signal from the slow BES channel is referred to as $V_{\rm{s}}$ and is used to calculate the light fluctuation due to sawteeth from the inversion region in the following subsection. \subsection{Conversion of raw BES signals to $\delta I/I$} The relative core light emission variation, $\delta I / I_0$, is obtained from $V_{\rm{f}}$ and $V_{\rm{s}}$ using the expression, \begin{equation} \frac{\delta I}{I_{0}}=K_{\rm{s}} \frac{V_{\rm{f}}}{V_{\rm{s}}},\label{Eq:deltaI_byI_calc} \end{equation} where $K_{\rm{s}}$ is a standard calibration factor, which depends on the channel and was obtained by checking the relative gain of the fast and slow channels on a test bench. \begin{figure} \centering \includegraphics[scale=0.85]{Figure4.pdf} \caption{Fast BES signals at neutral beam turn-off. Blue lines show the time variation of the fast BES signal at beam turn-off in channels (a) 18 and (b) 28. An exponential fit to the fast BES data during the recovery phase after beam turn-off is shown by the continuous red curves. The red dashed curves represent the change in the fast BES at beam turn-off in the absence of saturation and of the finite response time of the electronics. (c) Calibration factors supplied by the BES group and calculated using Equation~\ref{Eq:calib_fac_det} are given by the magenta and black curves, respectively. } \label{fig:calib_factor} \end{figure} We have verified the calibration factor for calculating $\delta I/I_{0}$ using the neutral beam turn-off events for an \textit{in situ} calibration of the BES system. This calibration also complements the standard BES calibration, $K_{\rm{s}}$, as it provides a direct time-domain calibration for the channels in the lower frequency regime relevant for sawtooth events. At the beam turn-off event, the light onto the BES photodiodes undergoes an abrupt downward step, as the core light (neutral beam fluorescence) is removed, leaving only the edge light after turn-off. The slow arm observes this as a downward step function of magnitude $V_{\rm{s}}$ as defined and discussed in the previous section. The fast channel observes the same step function through its $RC$ high-pass filter circuit, which leads to a downward step followed by an exponentially-decaying recovery to $0$. We fit the response to the functional form, \begin{equation} V_{\rm{f}}(t > t_{t-o}) = -\Delta V_{\rm{f},t-o} \exp(- (t - t_{t-o}) / \tau) \label{Eq:RC_response} \end{equation} where $t_{t-o}$ is the beam turn-off time and $\tau$ is the high-pass filter time constant. Since, in principle, the identical current step is applied to both the fast and slow channels, the voltage steps observed on the fast and slow channels and the calibration coefficient are related by the equation, \begin{equation} 1=K_{\rm{c}}\frac{\Delta V_{\rm{f}, t-o}}{V_{\rm{s}}},\label{Eq:calib_fac_det} \end{equation} where $K_{\rm{c}}$ is a new calibration coefficient defined by this relation. In Figures~\ref{fig:calib_factor}(a) and (b), the blue curves show a zoomed-in view of the fast BES data for channels 18 and 28, respectively. We note that the fast BES signals are observed to take a finite amount of time to decrease to the lowest value at beam turn-off, due to the finite frequency response. Also, some channels were found to saturate, in which case we fit to the non-saturated portion of the data. The calculated values of $K_{\rm{c}}$ for various channels are compared with $K_{\rm{s}}$ in Figure~\ref{fig:calib_factor}(c), and both calibration factors are found to agree within $\sim20\%$.
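A minimal sketch of this in-situ calibration, assuming the turn-off time is known and the fitted channel does not saturate, is given below; the initial guesses and variable names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rc_response(t, dV, tau, t_off):
    """Downward step of size dV at t_off followed by an exponential
    recovery to zero with time constant tau (Eq. for V_f(t > t_off))."""
    return np.where(t < t_off, 0.0, -dV * np.exp(-(t - t_off) / tau))

def fit_turn_off(t, v_fast, t_off):
    """Fit the fast-channel turn-off transient; returns (dV_f_turnoff, tau)."""
    model = lambda t, dV, tau: rc_response(t, dV, tau, t_off)
    popt, _ = curve_fit(model, t, v_fast, p0=[abs(v_fast.min()), 1.0e-3])
    return popt

# K_c then follows from Eq. (calib_fac_det):  K_c = V_s / dV_f_turnoff
\end{verbatim}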
Figure~\ref{fig:delta_I_by_I_time} shows $\delta I/I_{0}$ calculated using $K_{\rm{s}}$ and $K_{\rm{c}}$. The wave-trains obtained using $K_{\rm{s}}$ and $K_{\rm{c}}$ are similar. Henceforth, we have used $K_{\rm{s}}$ to determine $\delta I/I_{0}$ from $V_{\rm{s}}$ and $V_{\rm{f}}$. Finally, we note that the discussion is also useful for a quick cross-check of the data ``by eye.'' Adopting a ``$K_{\rm{c}}$'' calibration, i.e., using $K_{\rm{c}}$ in Eq.~\ref{Eq:deltaI_byI_calc}, and inserting Eq.~\ref{Eq:calib_fac_det}, one obtains, \begin{equation} \frac{\delta I}{I_{0}} = \frac{V_{\rm{f}}}{\Delta V_{\rm{f, t-o}}}, \label{Eq:deltaI_byI_new} \end{equation} which is useful because it does not involve the slow channel. Moreover, one can then immediately read off from the raw data, for example Figure~\ref{fig:sample_data}(c), that $\delta I/I_{0}$ during sawtooth crash events is $\sim10-20$\% of the downward step during the turn-off event (for channels that do not saturate), which directly supports the $\delta I/I_0$ inferred from the full calibration (Fig.~\ref{fig:delta_I_by_I_time}). \begin{figure} \centering \includegraphics[scale=0.85]{Figure5.pdf} \caption{Time variation of $\delta I/I_{0}$ measured by channels (a) 14 and (b) 60. The continuous blue and broken red lines were obtained using $K_{\rm{s}}$ and $K_{\rm{c}}$, respectively. } \label{fig:delta_I_by_I_time} \end{figure} \subsection{Conversion of $\delta {I}/{I}_{0}$ to density variation} The conversion of the BES intensity variations $\delta I / I_{0}$ to plasma density variations is the final step and, per standard procedure, requires an atomic physics model\cite{FonckRSI1990,HutchinsonPPCF2002}. The intensity of light emission depends on the number of collisionally excited neutrals in the neutral beam undergoing $\rm{n}=3$ to $\rm{n}=2$ transitions. The fractional population of the $\rm{n}=3$ state depends on the local plasma density, temperature $T$, impurity content $Z_{\rm{eff}}$, and neutral beam energy $E_{\rm{beam}}$\cite{FonckRSI1990}. The relationship between $\delta I/I_{0}$ and $\delta n_{e}/n_{e0}$, taking into account the details of the emission mechanism of the neutral beam, is given by \begin{equation} \frac{\delta n_{e}}{n_{e0}}=C (n_{e}, T, Z_{\rm{eff}}, E_{\rm{beam}}) \frac{\delta I}{I_{0}},\label{Eq:deltan_by_n} \end{equation} where $C$ is the proportionality factor\cite{FonckRSI1990}. This linearized version is valid for small density fluctuations, for which $C$ can be treated as a constant\cite{HutchinsonPPCF2002,FonckRSI1990}. \begin{figure} \centering \includegraphics[scale=0.85]{Figure6.pdf} \caption{Variation of the proportionality factor $C$ with density and temperature. The density and temperature were held constant at $n_{e}=3.3\times10^{13}~\rm{cm^{-3}}$ and $T_{e}=1.7~\rm{keV}$ for determining the $C$ vs.\ $T_{e}$ and $C$ vs.\ $n_{e}$ curves, respectively. } \label{fig:C_dependence} \end{figure} We calculated the dependence of $C$ on the plasma parameters, shown in Fig.~\ref{fig:C_dependence}. The red squares in Figure~\ref{fig:C_dependence} show that $C$ does not vary with $T_{e}$ in our operation regime. However, $C$ does depend on $n_{e}$, which indicates that for large $n_{e}$ fluctuations the non-linearity should be retained. The effects of $E_{\rm{beam}}$ and $Z_{\rm{eff}}$ on $C$ are negligible\cite{FonckRSI1990}. Therefore, for our regime of operation, Equation \ref{Eq:deltan_by_n} can be written as \begin{equation} \frac{\delta n_{e}}{n_{e0}}\approx C(n_{e}) \frac{\delta I}{I_{0}}.
\end{equation} In order to handle density variations beyond the linear regime, we obtain the full non-linear $n_{e}$ vs.\ $I$ relationship (starting from the tabulated data of Eq.~\ref{Eq:deltan_by_n} shown in Fig.~\ref{fig:C_dependence}) by separating variables and integrating, \begin{equation} \int^n \frac{dn'}{C(n') n'} = \int^I \frac{dI'}{I'}. \end{equation} The resulting calibration curve $I(n_{e})$ for the present operating parameters is plotted in Figure~\ref{fig:I_vs_n}. We verified that the resulting curve could be re-linearized to reproduce Figure~\ref{fig:C_dependence}. \begin{figure} \centering \includegraphics[scale=0.85]{Figure7.pdf} \caption{Dependence of intensity on density.} \label{fig:I_vs_n} \end{figure} Figure~\ref{fig:delta_n_by_n}(i) shows $\delta n_{e}/n_{e0}$ obtained from $\delta I /I_{0}$ in Figure~\ref{fig:delta_I_by_I_time}(a) using the $I$ vs.\ $n_{e}$ calibration curve. To use the curve, a central operating point $(n_{e0}, I_0)$ must be chosen that corresponds to the point when $\delta I = 0$. Here, we use $n_{e0} = 3.5\times 10^{13}~\rm{cm^{-3}}$, which was obtained from a time-average of a Thomson scattering density channel close to the $q=1$ surface. Then $I(t)$ was first obtained using \begin{equation} I(t)=I_{0}\times \bigg\{1 + \frac{\delta I}{I_{0}}(t) \bigg\}, \end{equation} after which $I(t)$ was converted to $n_{e}(t)$ using the $I$ vs.\ $n_{e}$ calibration curve. Finally, the time variation of $\delta n_{e}/n_{e0}$ was calculated using, \begin{equation} \frac{\delta n_{e}(t)} {n_{e0}}=\frac{n_{e}(t) - n_{e0} } {n_{e0}}. \end{equation} We used the same $n_{e0}$ for all BES channels. Furthermore, the $\delta n_{e}/n_{e0}$ data from channels 5, 19, 21, 26, 29, 36 and 59 were found to have some systematic issues due to the hardware. One channel is dead, and the transducers in the other channels might have a nonlinear response to the neutral beam fluorescence. We replaced the data of those channels by the time series obtained by averaging data from neighbouring channels. \subsection{Density variation during sawtooth events} Using the techniques above we now present the density evolution measured by the BES during a sawtooth event. We first note that the large oscillation discussed above and shown in Figure~\ref{fig:delta_I_by_I_time} is well-correlated across the entire BES array. Figure~\ref{fig:avg_density} shows the time evolution of the array-averaged $\langle \delta n_{e} (t)/n_{e0} \rangle$, where we have taken the average over all 64 BES channels. (We note that the BES array, deployed here over an area of 20 $\times$~8~cm, is still a small fraction of the entire plasma cross section; see Fig.~\ref{fig:setup}.) The magenta curve shows the associated time variation of $T_{e,\rm{core}}$, as measured by a core ECE channel. The density exhibits a large amplitude oscillation at a frequency of $\sim 13~\rm{kHz}$ that starts just before the temperature crash and persists for a few cycles afterwards. This density variation coincides with large magnetic fluctuations (Figure~\ref{fig:avg_density}). The good correlation of this structure across the whole array indicates that it is on a scale equal to or larger than the array, i.e., a large-scale mode in the plasma. This oscillation is consistent with a helical $(1,1)$ mode growing during a sawtooth event, where the oscillation is due to the rotation of the mode (with the plasma) past the fixed measurement location.
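A minimal sketch of the nonlinear intensity-to-density inversion described above is given below. The tabulated $C(n_{e})$ values are placeholders for the atomic-physics calculation, and the helper names are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import interp1d

def build_calibration(n_table, C_table, n0, I0=1.0):
    """Integrate dI/I = dn/(C(n) n) to obtain I(n_e), normalised so I(n0)=I0;
    returns interpolators n_e(I) and I(n_e)."""
    lnI = cumulative_trapezoid(1.0 / (C_table * n_table), n_table, initial=0.0)
    lnI -= np.interp(n0, n_table, lnI)          # enforce I(n0) = I0
    I = I0 * np.exp(lnI)
    return interp1d(I, n_table), interp1d(n_table, I)

def delta_n_over_n(dI_over_I, n_of_I, n0, I0=1.0):
    """Convert a delta I/I_0 time series to delta n_e/n_e0."""
    I_t = I0 * (1.0 + dI_over_I)
    return (n_of_I(I_t) - n0) / n0
\end{verbatim}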
The density oscillations are relatively large, with magnitudes up to $\langle \delta n_{e}/n_{e0} \rangle \sim 0.2-0.4$, indicating a significant modification of the plasma density profile during the event. The large density oscillations are observed to persist for $\sim$5 plasma rotations, after which they progressively decay. \begin{figure} \centering \includegraphics[scale=0.25]{Figure8.pdf} \caption{Time variation of the BES array-averaged density, $\langle \delta n_{e} (t) /n_{e0} \rangle$, $dB/dt$, and $T_{e,\rm{core}}$ in blue, light red, and magenta, respectively. } \label{fig:avg_density} \end{figure} \begin{figure*} \includegraphics[scale=1.05]{Figure9.pdf} \caption{\label{fig:delta_n_by_n} 2-D BES observations during a sawtooth event. The magenta lines in (i), (ii), and (iii) show the temporal variation of the core electron temperature. Blue lines show the time variation of $\delta n_{e}/n_{e0}$ measured by (i) channel 14 located to the right of the inversion layer, (ii) channel 20 at the inversion layer, and (iii) channel 35 located to the left of the inversion layer, where $n_{e0}=3.5\times 10^{13}~\rm{cm^{-3}}$. The green circles in the time series data mark the time instants for which we have plotted 2-D images of $\delta n_{e}/n_{e0}$ in the $RZ$ plane near the $q=1$ surface. The labels of the 2-D images specify those time instants. For example, panel (a) shows the spatial profile of $\delta n_{e}/n_{e0}$ before the crash, and this time instant is marked as `a' in panels (i), (ii) and (iii). The black line on the 2-D images represents the contour of the $q=1$ surface. The red circles show the locations of the centers of the BES channels. Note that before a sawtooth crash, $\delta n_{e}/n_{e0}$ is nearly uniformly zero across the $R$-$Z$ plane. However, during a crash $\delta n_{e}/n_{e0}$ develops a significant spatial variation across the $q=1$ surface.} \end{figure*} The local variation of the density during a sawtooth event is shown in Figure~\ref{fig:delta_n_by_n} using 1-D and 2-D plots. The blue curves in Figures~\ref{fig:delta_n_by_n}(i), (ii), and (iii) show the time variation of the density for three representative channels, (i) outboard of the sawtooth inversion layer, (ii) at the inversion region, and (iii) inboard of the inversion layer, respectively. The amplitude of the density oscillation is higher on the outboard side than on the inboard side of the inversion layer. The 2-D color plots in Figure~\ref{fig:delta_n_by_n} show the spatial profiles of the density at different instants of time. The 2-D data was passed through a median filter before making those color plots. The black curve on the 2-D color plots shows the contour of the $q=1$ surface, and the color represents the magnitude of $\delta n_{e}/n_{e0}$, where $n_{e0}=3.5\times10^{13}~\rm{cm^{-3}}$. For example, Figure~\ref{fig:delta_n_by_n}(a) shows that the density is uniform in the sampled part of the $R$-$Z$ plane well before the crash in $T_{e,\rm{core}}$. The temporal variation of the spatial profile of the density during a sawtooth oscillation is studied by making a video of the time evolution of the 2-D density data. Figures~\ref{fig:delta_n_by_n}(b)-(h) show snapshots of the 2-D variation of the density at the times indicated in (i)-(iii), which correspond to times of a few maxima, minima, and zero-crossings in $\delta n_{e}/n_{e0}$. Interestingly, in addition to the array-averaged density variations described above, significant in-plane density non-uniformities are also observed.
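For completeness, a short sketch of how the array-averaged trace and the median-filtered 2-D maps can be formed from the per-channel data is given below; the assumed $(8,8,n_{t})$ array layout is an illustrative convention, not the actual data format.
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def array_average(dn_over_n):
    """<delta n_e/n_e0>(t) averaged over all 64 channels,
    with dn_over_n of shape (8, 8, n_t)."""
    return dn_over_n.reshape(64, -1).mean(axis=0)

def rz_snapshot(dn_over_n, i_time, size=3):
    """Median-filtered 8x8 map of delta n_e/n_e0 at one time index."""
    return median_filter(dn_over_n[:, :, i_time], size=size)
\end{verbatim}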
Figure~\ref{fig:delta_n_by_n}(b) shows that $\delta n_{e}/n_{e0}$ is non-uniform in the $R$-$Z$ plane at the onset of the crash in $T_{e,\rm{core}}$. The density increases in the upper right region of the $R$-$Z$ plane and decreases in the lower left region. The density varies in the plane from $\sim 3.7\times 10^{13}$ to $4.6\times 10^{13}~\rm{cm^{-3}}$. As the crash in $T_{e,\rm{core}}$ continues, the density non-uniformity increases significantly. The density in the $R$-$Z$ plane ranges from $\sim4\times10^{13}$ to $\sim6\times10^{13}~\rm{cm^{-3}}$ toward the end of the crash, as seen in Figure~\ref{fig:delta_n_by_n}(c). The later oscillations show a much more uniform $\delta n_{e}/n_{e0}$ (panels g and h), as do the times of the minima of $\delta n_{e}/n_{e0}$ (panel e). We provide a caveat that the fine structure on the array may still be influenced by the channel-to-channel variations in response, and therefore we do not discuss the fine-scale structures here. Understanding the fine-scale 2-D structures is left for future work. \section{\label{sec:discussion}Discussion and Summary} We have presented a technique allowing the localized fast measurement of density in the $R$-$Z$ plane near the $q=1$ sawtooth inversion region during a sawtooth crash. We have developed a comprehensive method for analyzing the BES data that accounts for various pitfalls. Since the measurement of the density is sensitive to the calibration of the BES diagnostic, a novel technique was developed to cross-verify the channel-to-channel calibration during a plasma shot. Our technique does not require any additional measurement on the test bench. Intense $\rm{D}_{\alpha}$ emission due to edge recycling caused by the sawtooth oscillations makes standard spectroscopic filtering techniques insufficient for the removal of edge light. Traditional data analysis techniques like the Fast Fourier Transform (FFT) cannot be used because there is no significant frequency separation between the core and edge light signals. Therefore, we developed a method for isolating and removing the undesired edge light from the BES data by comparing beam-on and beam-off events. In experiments where the light fluctuations due to neutral beam fluorescence are small, $\delta I/I_{0}$ is related to $\delta n_{e}/n_{e0}$ by a constant of proportionality. However, the light fluctuations are large during a sawtooth crash, and the neutral beam emission depends nonlinearly on density. Therefore, we inverted the light intensity to density taking into account the complete nonlinear dependence. A total of 64 BES channels in an $8\times8$ configuration spanning an $8~\rm{cm}$ (radial) $\times~20~\rm{cm}$ (poloidal) area across the sawtooth inversion layer was used to make the density measurements. A large amplitude (1,1) mode is observed in the density data. The mode starts almost at the onset of the crash. The maximum amplitude of this mode is $\langle \delta n_{e}(t)/n_{e0}\rangle_{\rm{max}}\approx 0.4$ (Figure~\ref{fig:avg_density}). This mode persists for a few cycles even after the crash. Multiple localized density measurements in the $R$-$Z$ plane show that at the onset of the sawtooth crash, the density becomes spatially inhomogeneous near the sawtooth inversion layer with $\Delta n_{e}(R,Z)/n_{e}\sim 0.2$. This spatial nonuniformity in density increases significantly toward the end of the crash, reaching $\Delta n_{e}(R,Z)/n_{e}\sim 0.4$. In addition to the measurements reported in this article, we have also performed other experiments.
In one of those experiments, the BES diagnostic was moved to sample the plasma further to the right and to the left of the $q=1$ surface. We observed $\delta n_{e}/n_{e0}$ to be strong near the $q=1$ surface and to decrease farther away on either side. We will report these results elsewhere. The spatial variation of density reported in this article may be related to guide field reconnection or may indicate some other MHD phenomenon. We are making comparisons of the measured density variation with the predictions of various models, which will be reported in future publications. \begin{acknowledgments} This material is based upon work supported by the U.S. Department of Energy, Office of Fusion Energy Sciences, through the DIII-D Frontier Science program using the DIII-D National Fusion Facility, the Max Planck Princeton Center for Plasma Physics, and partially funded under awards DE-FC02-04ER54698, DE-FG02-08ER54999, DE-AC0204CH11466, and DE-AC02-09CH11466. The United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. \end{acknowledgments} \vspace{20px}
1,116,691,497,501
arxiv
\section{Introduction} The first generation of gravitational wave (GW) detectors was based on mechanical resonance in large aluminium cylinders triggered by a gravitational wave passing through the detector \cite{Weber0}. Detectors of this type, so-called Weber bars, are still operating, connected in a worldwide network. Unfortunately, despite superbly stable operation and excellent noise reduction in the present-day resonant-mass bar detectors, so far no GW event has been observed through the acoustic resonance phenomenon \cite{Abbot_2007}. The second type of detector is based on Michelson interferometry. In such detectors, ripples in space-time are registered by electromagnetic radiation instead of acoustic waves. In 2015 the ground-based observatories were finally sensitive enough to detect the first gravitational wave event, which was a merger of two black holes (BHs) \cite{GW_detection1}. So far, 90 events have been confirmed in total during three observational runs, of which the vast majority were BH-BH mergers, together with two black hole~-~neutron star mergers and two neutron star~-~neutron star (NS-NS) mergers \cite{Poggiani_2021_03results}. In the events observed up to date, the total mass and luminosity distance (i.e., distance from the Earth calculated by measuring absolute and apparent magnitudes) of the binary systems were in the range from $2.74^{+0.04}_{-0.01}$ solar masses (M$_{\odot}$) and $40^{+8}_{-14}$~Mpc (NS-NS) to $142^{+28}_{-16}$~M$_{\odot}$ and $5.3^{+2.4}_{-2.6}$~Gpc, the latter being the first observational evidence of the existence of a BH with a mass of more than 100~M$_{\odot}$, a so-called intermediate-mass black hole \cite{Abbott_2020_IMBH}. The range of maximum sensitivity for existing LIGO-like observatories spans between $\sim10$~Hz and roughly 2~kHz~\cite{Sensitivity_2018}. Most of the other promising experiments, like the International Pulsar Timing Array (IPTA)~\cite{Hobbs_2010} that includes three independent projects, i.e. NANOGrav~\cite{NANOGrav_2018}, EPTA~\cite{EPTA_2015}, and PPTA~\cite{PPTA_2015}, optical atomic clocks~\cite{Kolkowitz_2016}, the torsion-bar antenna TOBA~\cite{Ando_2010}, atomic interferometry projects like AION \cite{AION_2020} and ELGAR \cite{ELGAR_2019}, and the space-based LISA~\cite{Amaro_2017} and DECIGO \cite{Sato_2017}, are sensitive at frequencies lower than existing LIGO-like detectors. At present there is a noticeable gap in the higher frequency range, although several important GW sources probably emit in this part of the GW spectrum; moreover, GW detectors sensitive in the 1-100~kHz range are well suited for searches beyond the standard model. The recently proposed levitated-sensor-based GW detector~\cite{Aggarwal_2022}, which would achieve reasonable sensitivity above 10~kHz, requires a cryogenic 100~m facility. The underground LIGO-like Einstein Telescope~\cite{ET_2020}, still in the early design study phase, will have 10~km long arms. In this letter we show that table-top-size ultra-stable optical cavities from state-of-the-art optical atomic clocks~\cite{Ludlow_2015,OC18} can be used as resonant-mass gravitational wave detectors for frequencies higher than 2~kHz. We calculate the mechanical resonances for existing state-of-the-art and possible future cavity set-ups and analyse the sensitivity limitations imposed by fundamental noises. The proposed cavities' materials and components were selected for the best properties while remaining within the grasp of present-day technology.
We show that the fundamentally limited GW sensitivity of a table-top ultra-stable optical cavity used as a resonant-mass gravitational wave detector allows detecting predicted GW signals from such sources as binary neutron star mergers and post-mergers, collapsing stellar cores, subsolar-mass primordial BH mergers, and QCD axions and axion-like particles formed through BH superradiance. \section{Principle of observation} The behaviour of a bar-like resonant detector with resonant frequency $f_0$, e.g. the bars used in the Weber resonant-mass GW detectors, in the vicinity of a gravitational wave can be modelled as a driven and damped harmonic oscillator~\cite{Creighton} in the form of two masses connected by a spring of length $L$. The spring constant $k$ and the masses are chosen to satisfy $f_0=\frac{1}{2\pi}\sqrt{k/\mu}$, where $\mu$ is the reduced mass of the system. The mass displacement $x$ induced by a plane gravitational wave with frequency $f$, travelling perpendicularly to the spring and polarised along the spring, can be described by a simple equation of motion \begin{equation} \ddot{x}(t) + 2\beta \dot{x}(t) + 4\pi^{2}f_{0}^{2}x(t) = F_{GW}(t), \label{eq:oscillator} \end{equation} \noindent where $\beta := \pi f_{0}/ Q$ is a damping parameter related by definition to the quality factor $Q$ of the spring, and $F_{GW}=-\frac{1}{2}hL(2\pi f)^{2}\cos(2\pi f t)$ is the GW force acting on the system. The strain amplitude of the GW is denoted as $h$. Eq.~\ref{eq:oscillator} can be solved in the frequency domain, with the corresponding quantities related by $\tilde{x}(f) = G(f) \tilde{h}(f)$, where $\tilde{h}(f)$ is the strain in the frequency domain and $G(f)$ is the following transfer function \begin{equation} G(f) = \frac{L}{2}\frac{f^2}{(f_{0}^2 - f^2) + iff_{0}/Q }. \end{equation} The best sensitivity to the GW that can be achieved by the system, i.e. the GW strain-equivalent power spectral density (PSD) $S_{h}(f)$, is related to the PSD of the noise present in the system $S_{x}$ by \begin{equation} S_{h}(f) = \frac{S_{x}(f)}{|G(f)|^{2}}. \end{equation} In general, $f_{0}$ is a characteristic resonance frequency of the detector. A simple three-dimensional analytical approach shows that the position of the $n_l$-th resonance, $f_{0} = \frac{1}{2L} \sqrt{\frac{E}{\rho}n_{l}^2}$, depends on the detector length $L$ and the internal material properties, i.e. the Young modulus $E$ and density $\rho$. \begin{figure}[hbt] \centering \includegraphics[width=0.7\columnwidth]{Fig1.png} \caption{A cross section of a typical ultra-stable optical cavity used in optical atomic clock experiments and the mesh for the FEM simulation. $L$ and $R$ stand for its length and external radius, respectively, while $r$ is the radius of the internal bore between the mirrors. } \label{fig:FEM} \end{figure} In the case of an ultra-stable optical cavity this simplified approximation is not sufficiently accurate for determining the exact values of the cavity resonances. Moreover, in existing systems the optical cavities are supported at carefully calculated points to damp most of the possible mechanical resonances. Therefore, we performed a finite element method (FEM) simulation for several existing state-of-the-art and possible future cavity set-ups, taking into account the Earth's gravity field and the support points. Fig.~\ref{fig:FEM} shows a cross section of a popular horizontal design of a cavity used in optical clock experiments together with the mesh used in our FEM simulation.
The resolution of the mesh is adjusted to the local radius of curvature in the range of 1.5~mm to 7~mm. Fig.~\ref{fig:FEM} also defines the geometrical variables of the spacer, i.e. the spacer radius $R$ and length $L$, and the internal bore diameter $r$. In the case of a spacer made of ultra-low expansion (ULE) glass and mirror substrates made of fused silica (FS), additional ULE rings are usually added outside the mirrors to tune the zero-crossing temperature of the cavity~\cite{Lagero_2010}; these rings, however, do not contribute significantly to the mechanical resonance properties of the whole system. \begin{figure}[hbt] \centering \includegraphics[width=1\columnwidth]{Fig2.png} \caption{(colour online) The two cavities are aligned perpendicularly to each other. The relative length change between the cavities is detected via the frequency or phase difference between the lasers' beams stabilised to the cavities. PD stands for photodiode.} \label{fig:setup} \end{figure} To detect the length change of an optical cavity, laser light is frequency-locked to one of the cavity modes. Shrinking and extending the path of a photon inside the optical cavity will effectively move the cavity mode frequency $\nu$ by $\Delta\nu$ according to the simple formula ${\Delta L}/{L} = -{\Delta \nu}/{\nu}$. With a laser frequency tightly stabilised to the cavity mode frequency, e.g. by the Pound-Drever-Hall technique~\cite{Black_2001}, the length change of the cavity is transferred to a phase or frequency change of the laser light. To detect a change of the length of a cavity due to a GW, a reference is needed, for instance a second, perpendicular cavity. Fig.~\ref{fig:setup} depicts a system of two ultra-stable optical cavities aligned perpendicularly to each other, either in the horizontal or in the vertical plane. Environmental perturbations can be greatly reduced by installing both cavities in one shared vacuum system and mounting the system on a single vibration isolation platform. With the beams of two lasers stabilised to the two cavities, the relative length change between the cavities is transferred to the phase or frequency difference between the lasers' light. This difference can be detected by an optical beat note on a photodiode. The in-vacuum detection of the beat note signal may be done either with the light transmitted through both cavities (as in Fig.~\ref{fig:setup}) or with light picked off from the main beams in front of the cavities. In the former case, the light is filtered by the optical cavities but has low intensity, while in the latter case the light can have high intensity, improving the beat note detection signal. The beat note detection system, as well as the optical elements and photodiodes required for laser stabilisation, can be placed in vacuum on the same vibration isolation platform. The light from the lasers can be transferred to the platform via fibres. With such a configuration, the cavities can be surrounded in vacuum by several thermal shields providing superb thermal insulation. Additionally, installing the cavities and the beat note system in a common vacuum set-up allows skipping the optical path length stabilisation~\cite{Falke_2012}. \section{Fundamental sensitivity limits} The fundamental sensitivity of an ultra-stable optical cavity to a GW is primarily limited by thermal processes, i.e. mechanical and optical thermal noise. The mechanical thermal noise refers to the Brownian motion of a cavity residing at non-zero temperature $T$.
The thermodynamical fluctuations of the cavity components can be expressed quantitatively by the fractional PSDs $S_{y,t}$ using the fluctuation-dissipation theorem~\cite{Callen_1951, Callen_1952}. The magnitude of the mechanical thermal noise depends on the spacer geometry and mass $m$, and the mirrors' coating thickness $d_{ct}$, as well as on the intrinsic physical parameters of the cavity, such as the Young modulus $E$, Poisson's ratio $\sigma$, and mechanical loss angle $\phi$: \begin{widetext} \begin{equation} S_{y,t}(f) = \frac{4k_{b} T \phi_{sp}}{(2\pi)^{3}mL^{2}} \frac{f_{0}^2}{f[(f_{0}^2-f^{2})^{2} + f_{0}^{4}\phi_{sp}^{2}]} + \frac{4k_{b}T}{2\pi^{5/2}}\frac{1-\sigma_{sb}^{2}}{f E_{sb} w L^{2} }\phi_{sb} \left( 1+\frac{2d_{ct}}{w\sqrt{\pi}}\frac{1-2\sigma_{sb}}{1-\sigma_{sb}}\frac{\phi_{ct}}{\phi_{sb}} \right). \label{eq:ASD_thermal} \end{equation} \end{widetext} \noindent where the indices $sp$, $sb$, $ct$ correspond to the spacer, mirrors' substrate, and mirrors' coating, respectively, and $w$ is the beam spot radius on the mirror. The fractional PSD $S_{y}(f)$ is related to the PSD of the length fluctuations $S_{x}(f)$ by $S_{y}(f) = S_{x}(f)/L^{2}$. Additionally, the contribution to the total fractional PSD from the non-Brownian optical thermal noise, composed of thermo-elastic and thermo-refractive noises in the mirror substrate and the multilayer mirror coatings, can be described by the fractional PSD $S_{y,o} \sim L^{-2} (1+\sigma_{sb})^2\alpha^{2}_{sb}{T^{2}}A(w,f) + L^{-2} {T^{2}}B(w,f)$, where $\alpha_{sb}$ is the coefficient of thermal expansion of the mirror substrate and $A(w, f)$ and $B(w,f)$ are effective material constants of the mirrors' substrate and coatings (see~\cite{Cole_2013}). In practical realisations, the intensity of the light used to measure the cavity length is kept at the minimum detectable level to avoid additional heating of the mirror surfaces. This leads to another potential source of sensitivity limitation due to the quantum nature of light, the shot noise $S_{y,s} =\sqrt{2\pi\hbar c}/(8\mathcal{F}L\sqrt{\lambda P_{c}})$, where $\mathcal{F}$ is the optical cavity finesse, $\hbar$ is the reduced Planck constant, $\lambda$ is the light wavelength, and $P_{c}$ is the power of light injected into the cavity~\cite{Black_2001}. This noise, however, will be further minimised as newer mirror coating materials allow for higher intra-cavity light powers. \begin{figure}[hbt] \centering \includegraphics[width=\columnwidth]{Fig3a.png}\quad \includegraphics[width=\columnwidth]{Fig3b.png}\quad \includegraphics[width=\columnwidth]{Fig3c.png}\quad \caption{(colour online) Estimated sensitivities limited by fundamental noises, represented by the amplitude spectral density (ASD), of a present-generation ultra-stable optical cavity (0.5~m long cavity made of ultra-low expansion (ULE) glass at room temperature) if the mirrors' coatings were replaced by crystalline AlGaAs coatings~\cite{Hafner_2015} (top), and of a possible cryogenic 0.5~m long cavity made of single-crystal silicon with crystalline AlGaAs mirror coatings at 4~K (middle) and at 0.1~K (bottom).
The exemplary shot-noise limit is estimated for 50~$\mu$W cavity light power and a cavity finesse of 150000.} \label{fig:noises} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{Fig4.png} \caption{Estimated sensitivities to GWs of the existing 0.5~m long optical cavity made of ULE glass at room temperature if the mirrors' coatings were replaced by crystalline AlGaAs coatings~\cite{Hafner_2015} (red) and of possible cryogenic 0.5~m, 1~m and 2~m long cavities made of single-crystal silicon (SCS) at 4~K and 0.1~K. All cases have been presented without (dashed lines) the shot-noise (SN), which corresponds to the hard fundamental limits, and with (dotted lines) the exemplary shot-noise limit estimated for 50~$\mu$W light power injected into a cavity of 150000 finesse. The sensitivities of other existing~\cite{LIGO_1st_volume_1989, Auriga_2016, MiniGrail_2007} (solid lines) and planned~\cite{Ackley_2020_NEMO,ELGAR_2019,Aggarwal_2022,Moore_2015} (dash-dot lines) GW detectors are added to the plot for comparison. The vertical black dot-dashed line represents the exemplary ring-down frequency limit for the 0.5~m long cavity with finesse $\mathcal{F} = 150000$. } \label{fig:sensitivity} \end{figure} Fig.~\ref{fig:noises} shows the estimated sensitivities limited by fundamental noises, represented by the fractional amplitude spectral density $A_{y} = \sqrt{S_{y}}$ (where $S_{y}$ includes all previously described noise components), of the existing state-of-the-art ultra-stable optical cavity (0.5~m long cavity made of ULE glass at room temperature) if the mirrors' coatings were replaced by crystalline AlGaAs coatings~\cite{Hafner_2015} and of possible cryogenic 0.5~m and 1~m long cavities made of single-crystal silicon with crystalline AlGaAs mirror coatings at 4~K and 0.1~K. All cases have been presented without the shot-noise, which corresponds to the hard fundamental limits, and with the exemplary shot-noise limit estimated for 50~$\mu$W light power injected into a cavity of finesse $\mathcal{F} = 150000$. In Fig.~\ref{fig:sensitivity} we depict the estimated amplitude strain sensitivities $A_{h}(f) =\sqrt{S_{h}(f)}$ to GWs of several possible ultra-stable optical cavities. The sensitivities of other existing (LIGO~\cite{LIGO_1st_volume_1989}, AURIGA~\cite{Auriga_2016} and MiniGrail~\cite{MiniGrail_2007}) and planned (NEMO~\cite{Ackley_2020_NEMO}, ELGAR~\cite{ELGAR_2019}, optically levitated sensors~\cite{Arvanitaki_2013,Aggarwal_2022}, and the Einstein Telescope~\cite{Moore_2015}) GW detectors are added to the plot for comparison. \section{Discussions} The detectable range of the Fabry-P\'erot detector is also limited in the frequency domain because of the finite speed of light and the reflections off the mirror surfaces. The finesse determines the number of reflections of a photon inside the optical cavity before it leaves and can contribute to the feedback to the laser frequency --- the higher the finesse of the cavity, the longer the ring-down time for a photon in the cavity and the lower the servo loop bandwidth. The highest GW frequency that can be detected for a given~$\mathcal{F}$ is limited by $f_{RD}<\frac{\pi c }{\mathcal{F} L}$. For a 0.5~m long cavity with finesse $\mathcal{F} = 150000$ the limit is $f_{RD} \sim 12566$~Hz.
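The resonance response and the ring-down limit quoted above can be reproduced with the short sketch below; the noise PSD $S_{x}(f)$ entering the strain sensitivity is left as an input, and all numerical values are illustrative.
\begin{verbatim}
import numpy as np

C_LIGHT = 2.998e8   # m/s

def transfer_function(f, L=0.5, f0=5.0e3, Q=1.0e6):
    """G(f) of the driven, damped harmonic oscillator model."""
    return 0.5 * L * f**2 / ((f0**2 - f**2) + 1j * f * f0 / Q)

def strain_psd(f, S_x, **kwargs):
    """Strain-equivalent PSD S_h(f) = S_x(f) / |G(f)|^2."""
    return S_x / np.abs(transfer_function(f, **kwargs))**2

def ringdown_limit(L=0.5, finesse=1.5e5):
    """Highest detectable frequency, f_RD = pi c / (F L)."""
    return np.pi * C_LIGHT / (finesse * L)

# ringdown_limit() ~ 1.26e4 Hz, consistent with the value quoted in the text
\end{verbatim}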
Fig.~\ref{fig:sources} shows a comparison of the GW sensitivity limits of the optical cavities from Fig.~\ref{fig:sensitivity} with the fractional amplitude spectral densities of astrophysical sources that fall within the range of maximum resonance sensitivity. The plot shows predicted signals from binary neutron star inspirals, mergers, and post-mergers, collapsing stellar cores, subsolar-mass BH mergers, and QCD axions and axion-like particles formed through BH superradiance. While only 0.5~m, 1~m, and 2~m long cavities are presented in the graph, the mechanical resonance position (the sensitivity peak) may be shifted by changing the cavity length ($f_0\sim1/L$). \begin{figure} \centering \includegraphics[width=1\columnwidth]{Fig5.png} \caption{ Comparison of the GW sensitivity limits of technically possible optical cavities with the predicted GW signals of several possible sources. Dotted and dashed lines depict the sensitivities to GW signals of 0.5~m, 1~m, and 2~m long cavities made of SCS at 4~K. All cases have been presented without (dashed lines) the shot-noise (SN), which corresponds to the hard fundamental limits, and with (dotted lines) the exemplary shot-noise limit estimated for 50~$\mu$W light power injected into a cavity of 150000 finesse. The vertical black dot-dashed line represents the exemplary ring-down frequency limit for the 0.5~m long cavity with finesse $\mathcal{F} = 150000$. The brick-red area depicts the GW signal of a typical binary neutron star (NS) inspiral, merger, and post-merger (NS-NS)~\cite{Ackley_2020_NEMO} at a distance of 3 Mpc. The blue area presents the characteristic GW spectra from the process of BH formation from fast-spinning, moderate-metallicity, massive stellar progenitors~\cite{Cerda_2013_collapse} at a distance of 30 kpc. Potential GW signals from primordial sub-solar-mass BHs, calculated analytically for the innermost stable circular orbits of BH binaries with two equal masses, each $<$1~M$_{\odot}$, at a distance of 1~kpc \cite{Creighton}, are depicted by the green area. The solid lines present the predicted signals due to GWs emitted from axions or ALPs around BHs in our galaxy within 10 kpc for a $10^6$~s coherent integration time. BHs with initial masses of 1, 2, 4 and 6 M$_{\odot}$ (blue, orange, red and brown, respectively) and an initial spin of 0.9 were calculated for the dominant level ($l = m_l = 1$, $n = 0$)~\cite{Isi_2019,Isi_2020}. } \label{fig:sources} \end{figure} {\it Coalescing neutron stars.} The predicted gravitational-wave strain for a typical binary neutron star (NS) inspiral, merger, and post-merger (NS-NS) is taken from~\cite{Ackley_2020_NEMO} and scaled to an amplitude spectral density at a distance of 3 Mpc (the size of the Local Group). Weaker signals from farther source distances form the shaded brick-red area in Fig.~\ref{fig:sources}. While the tidal effects during the inspiral are outside the achievable sensitivity, the post-merger signal above 1 kHz from massive neutron star remnants~\cite{Shibata_2006,Baiotti_2008} may be produced by the majority of binary NS mergers~\cite{Margalit_2019}. While matching the optical cavity mechanical resonance to the maximum of the GW signal requires the cavity to be 2~m long, which can be technically challenging for a cryogenic SCS spacer, this kind of source has the indisputable advantage of having already been observed~\cite{Abbot_NS,Abbott_2019,Abbott_2020NS}, making neutron star science low risk.
{\it Collapsing stellar cores.} Theoretical predictions show that the process of BH formation from fast-spinning, moderate-metallicity, massive stellar progenitors leads to seconds-long~\cite{OConnor_2011,de_Brye_2014} and high-amplitude GW signals. The characteristic GW spectrum in the slowly rotating model is taken from~\cite{Cerda_2013_collapse} and scaled to an amplitude spectral density at a distance of 30 kpc (the size of the Milky Way Galaxy). Weaker signals from farther source distances form the shaded blue area in Fig.~\ref{fig:sources}. {\it Coalescence of subsolar-mass BH binaries.} While there is no known mechanism through standard stellar evolution to produce sub-solar-mass BHs, the observation of a subsolar-mass BH merger would be an indication of their primordial origin. This makes the potential observation particularly important since primordial BHs may contribute to the dark matter distribution~\cite{Zel'dovich_1967,Hawking_1971,Pani_2014} and could verify theories on the dark-matter-triggered formation of BHs~\cite{Shandera_2018,Singh_2021,Dasgupta_2021}. Potential GW signals were calculated analytically for the innermost stable circular orbits of BH binaries with two equal masses, each $<$1~M$_{\odot}$, at a distance of 1~kpc \cite{Creighton} (shaded green area in Fig.~\ref{fig:sources}). {\it Axion and ALP (axion-like particle) superradiance.} Light bosonic fields such as axions or ALPs can form gravitationally bound states around a black hole~\cite{Damour_1976,Ternov_1978,Zouros_1979,Detweiler_1980}. Their occupation number grows exponentially at the cost of the angular momentum and energy of the rotating BH through superradiance~\cite{Brito_2020}, forming a coherent axion or ALP bound state that emits gravitational waves~\cite{Arvanitaki_2010,Arvanitaki_2011,Yoshino_2014,Arvanitaki_2015,Brito_2015, Arvanitaki_2017, Brito_2017,Brito_2017b,Baumann_2019, Isi_2019,Isi_2020,Ng_2021,Aggarwal_2022}. Gravitational signals are expected to be produced during axion/ALP transitions between gravitationally bound levels, axion/ALP annihilation to gravitons, and the bosenova collapse of the axion/ALP cloud. The first two mechanisms should yield long-lasting, monochromatic gravitational wave signals, since the axions/ALPs involved in transitions and annihilations are in exact energy eigenstates of the BH potential. The potential signals from axions/ALPs were calculated with the analytic approximation from \cite{Isi_2019,Isi_2020} for the values used in \cite{Aggarwal_2022}, i.e. for signals due to GWs produced by axions/ALPs around a BH in our galaxy within 10 kpc for a $10^6$~s coherent integration time. BHs with initial masses of 1, 2 and 3~M$_{\odot}$ and an initial spin of 0.9 were calculated for the dominant level ($l=m_l=1, n=0$) (solid lines in Fig.~\ref{fig:sources}). \section{Conclusion} In this paper we consider a table-top ultra-stable optical cavity made with the most advanced present-day technologies and report that it can be used as a resonant-mass gravitational wave detector in the 2-20 kHz range of the GW spectrum. Moreover, despite the resonant character of the sensitivity, and contrary to the metallic Weber bar detectors, the detection scheme allows observing potential GWs also outside the resonance, although with reduced sensitivity.
We show that it allows not only detecting predicted GW signals from such sources as binary neutron star mergers and post-mergers, subsolar-mass primordial black-hole mergers, and collapsing stellar cores, but also reaching for new physics beyond the standard model by looking for ultralight bosons such as QCD axions and axion-like particles formed through black hole superradiance. \section*{Acknowledgements} This project (20FUN08 NEXTLASERS) has received funding from the EMPIR programme co-financed by the Participating States and from the European Union’s Horizon 2020 research and innovation programme. The research is a part of the program of the National Laboratory FAMO (KL FAMO) in Toru\'n, Poland, and is supported by a subsidy from the Polish Ministry of Science and Higher Education.
1,116,691,497,502
arxiv
\section*{Methods} \label{sec:model} Our model is meant for ferromagnetic insulators and it assumes that electrons and phonons are in local equilibrium at every moment in time. Consequently, they are considered as a single system referred to as ``the lattice''. The magnetization dynamics is given by the standard Landau-Lifshitz-Gilbert equation \begin{equation} \label{eq:LLG} \frac{d\vec{m}}{dt}=-\gamma\,\vec{m}\times(\vec{B}_{\mathrm{eff}}+\vec{B}_{\mathrm{th}}) + \alpha\,\vec{m}\times\frac{d\vec{m}}{dt} \end{equation} \noindent where $\vec{m}(\vec{r},t)$ is the reduced magnetization, $\gamma=1.76\times 10^{11} \, \mathrm{T}^{-1}\cdot \mathrm{s}^{-1}$ is the gyromagnetic ratio, $\alpha$ is the damping parameter, $\vec{B}_{\mathrm{eff}}$ is the effective field (including exchange, anisotropy, self-magnetostatic and Zeeman contributions) and $\vec{B}_{\mathrm{th}}$ is the random field representing thermal fluctuations \cite{Palacios-98}. On the other hand, the thermal properties of the lattice are macroscopically described by its temperature distribution, which changes in space and time according to the heat equation \begin{equation} \label{eq:heat} \frac{dT}{dt}=\frac{1}{c_p \rho} (\kappa \, \nabla^2 T + M_s \frac{d\vec{m}}{dt} \cdot \vec{B}_{\mathrm{eff}} + q_{\mathrm{ext}}) \end{equation} \noindent where $T$ is the temperature, $\kappa$ the thermal conductivity, $c_p$ the specific heat capacity, $\rho$ the density, $M_s$ the saturation magnetization and $\vec{B}_\mathrm{eff}$ the total effective field acting on the system. The first term accounts for phonon-phonon interactions as a diffusive term, which tends to make the temperature uniform, in a similar way as the exchange does for the spin system. The second term represents heat transfer between the spins and the lattice, computed as the variation of the magnetic energy of the spin system. The last term describes heat transfer per unit volume and time between the lattice and the environment. In the present work we consider a standard Newton term $q_\mathrm{ext}=\frac{T_0-T}{\tau}$, $T_0$ being the room temperature and $\tau$ the characteristic thermal relaxation time constant. Equations (\ref{eq:LLG}) and (\ref{eq:heat}) are solved self-consistently, the temperature distribution entering (\ref{eq:LLG}) through the amplitude of the thermal field \cite{Palacios-98}. To solve both equations numerically we discretize the sample into $12.5\,\mu\mathrm{m} \times 12.5\,\mu\mathrm{m} \times 8.0\,\mu\mathrm{m}$ cells. The use of such large cells as compared to the exchange length ($\sqrt{\frac{2\,A}{\mu_0\,M_s^2}}=27.5\,\mathrm{nm}$) is justified because in all the simulations performed the amplitude of the oscillations is below $1.5$ degrees even in the excitation region.
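A minimal sketch of one explicit time step of the coupled equations (\ref{eq:LLG}) and (\ref{eq:heat}) for a single cell is shown below. It omits the stochastic thermal field and the magnetostatic contribution, uses forward Euler integration for brevity, and all parameter values are placeholders rather than the material constants used in our simulations.
\begin{verbatim}
import numpy as np

gamma, alpha = 1.76e11, 0.01          # gyromagnetic ratio (1/(T s)), damping
Ms, cp, rho = 1.4e5, 700.0, 5.2e3     # placeholder material parameters
kappa, tau, T0 = 8.0, 1.0e-9, 300.0

def llg_rhs(m, B_eff):
    """LLG right-hand side rewritten in explicit (Landau-Lifshitz) form."""
    prec = -gamma * np.cross(m, B_eff)
    damp = -gamma * alpha * np.cross(m, np.cross(m, B_eff))
    return (prec + damp) / (1.0 + alpha**2)

def step(m, T, B_eff, lap_T, dt):
    """Advance the magnetization and temperature of one cell by dt."""
    dmdt = llg_rhs(m, B_eff)
    m_new = m + dt * dmdt
    m_new /= np.linalg.norm(m_new)                 # keep |m| = 1
    q_ext = (T0 - T) / tau                         # Newton cooling term
    dTdt = (kappa * lap_T + Ms * np.dot(dmdt, B_eff) + q_ext) / (cp * rho)
    return m_new, T + dt * dTdt
\end{verbatim}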
1,116,691,497,503
arxiv
\section{Introduction} \label{introduction} Much attention has been paid to artificial lipid bilayer membranes as model systems of biological cell membranes~\cite{AlbertsBook}. They exhibit a wide variety of complex phenomena in both statics and dynamics, since lipid densities, membrane deformation and surrounding fluids are coupled to each other~\cite{Lipowsky95}. The dynamical properties of lipid membranes near equilibrium are characterized by wavenumber-dependent relaxation rates. In early theoretical studies, the relaxation rate of a single-component membrane was discussed by regarding the membrane as an elastic sheet with out-of-plane deformation, surrounded by a three-dimensional (3D) fluid. Neglecting the bilayer structure, several authors predicted that the relaxation of the bending mode is dominated by the bending rigidity and the viscosity of the surrounding bulk fluid~\cite{Kramer,Brochard}. Later, Seifert and Langer considered the inter-monolayer friction and the two-dimensional (2D) hydrodynamics of each monolayer, and obtained another relaxation mode associated with the density difference between the two monolayers~\cite{Seifert}. They found that the relaxation of the density fluctuation is dominated by the inter-monolayer friction and is relevant to the slow dynamics characterized by large wave numbers, whereas the relaxation of the bending mode is relevant for small wave numbers if the membrane surface tension is not acting. A somewhat similar theory was also developed in ref.~\cite{Yeung}. The predicted mode crossing behavior has been supported by several experiments~\cite{Pfeiffer,Pott} and by molecular dynamics simulations~\cite{Shkulipa}. More recently, some experiments reported a chemically induced tubule growing from a giant unilamellar vesicle (GUV)~\cite{JBprl,Bitbol1}. In these studies, they showed that the interplay between the faster bending relaxation and the slower density relaxation on the scale of tens of micrometers plays an essential role. In recent years, both the statics and dynamics of multi-component lipid membranes have been extensively studied because 2D phase separation takes place in a certain range of temperature and composition~\cite{VK05,HVK09,SK_DA_Review}. Studies on the dynamics of multi-component membranes can be classified into two categories: (i) dynamics of lateral phase separation below the phase separation temperature, and (ii) dynamics of concentration fluctuations above the critical temperature. For the details of the domain growth dynamics at lower temperatures, which is not the subject of the present work, readers are referred to ref.~\cite{SK_DA_Review}. Experimentally, Honerkamp-Smith \textit{et al.} have investigated the dynamics of concentration fluctuations in ternary GUVs and showed that the dynamic critical exponent crosses over from a 2D value to a 3D one as the critical temperature is approached from above~\cite{KellerDynamics}. The dynamics of concentration fluctuations in membranes was first modeled by Seki \textit{et al.}~\cite{SKI07} and later extended by Inaura and Fujitani~\cite{Inaura}. Ramachandran \textit{et al.} used the general mobility tensor to numerically calculate the effective diffusion coefficient of concentration fluctuations~\cite{RKSI11}. In these theoretical works, however, neither the out-of-plane membrane deformation nor the membrane bilayer structure was taken into account.
In biomembranes of living cells, the two monolayers have in general different compositions, with a unique asymmetry between the inner and outer leaflets. Furthermore, the two leaflets are not independent, but rather interact strongly with each other due to various physical and chemical mechanisms~\cite{May09}. Some experiments have shown strong positional correlation and domain registration between domains across the two membrane leaflets~\cite{Collins08,CK08}, while some papers reported the anti-registration of domains in different leaflets~\cite{Regen1,Regen2,Longo}. Inspired by the experiments, several simulations have been performed~\cite{Stevens05,Sachs11} and some phenomenological models have been proposed~\cite{WLM07,Schick08} to describe the phase separation in such coupled leaflets. Hirose \textit{et al.} considered a coupled bilayer composed of two modulated monolayers and discussed the static and dynamic properties of concentration fluctuations above the transition temperature~\cite{HKA09,HKA12}. The purpose of this paper is to investigate the relaxation dynamics of a binary lipid bilayer membrane in the one-phase region (rather than phase-separated membranes in the two-phase state). In such membranes, the interplay of various important effects, such as inter-monolayer friction and composition-deformation coupling, leads to a complex behavior. In particular, as in usual 3D multi-component fluids, a chemical potential gradient, and thus mutual diffusion, are induced by the inhomogeneity of the density difference between the two lipid species. Such mutual diffusion leads to homogenization of the density difference in each monolayer, as an irreversible process. In this paper, we take into account the membrane surface tension, 2D hydrodynamics of each monolayer, mutual diffusion in each monolayer, 3D hydrodynamics of the surrounding fluid, and inter-monolayer friction, whereas the flip-flop motion of the lipid molecules between the two leaflets is not included. In our model, the sources of the energy dissipation are the viscosities of the monolayers and the surrounding fluid, the inter-monolayer friction, and the mutual diffusion in each monolayer. We find that two relaxation modes associated with mutual diffusion appear in addition to the three previously discussed relaxation modes~\cite{Seifert}. These two diffusive modes turn out to be much slower than the other hydrodynamic modes, and become even slower in the vicinity of the unstable region towards the phase separation. This paper is organized as follows. In sect.~\ref{freeenegy}, we present the free energy functional of a binary lipid bilayer membrane by assuming that the membrane deformation and the density deviations from the respective average values are small. The whole set of dynamic equations is introduced in sect.~\ref{dynamics}, and the surrounding flow field is integrated out to obtain the relaxation equations for the membrane variables. In sect.~\ref{results}, the thermodynamic stability of the one-phase state and the wavenumber dependencies of the various relaxation modes are discussed in detail for both small and moderate surface tension cases. We also present our numerical study of the domain relaxation dynamics. Finally, sect.~\ref{summary} is devoted to a summary and discussion. \section{Free Energy} \label{freeenegy} \begin{figure} \includegraphics[scale=0.43]{fig1.eps} \caption{Schematic representation of a two-component fluid bilayer membrane consisting of lipid A and lipid B.
The hydrophobic chains are arranged in a back-to-back configuration to form a bilayer. The surface at which the upper and lower monolayers are in contact with each other is defined as the mid-surface. The membrane shape is expressed by the height $z=h(x,y)$ of the mid-surface measured from the $z=0$ plane.} \label{Figillustration} \end{figure} A binary lipid bilayer membrane consists of lipid A and lipid B as schematically presented in fig.~\ref{Figillustration}. In the presence of the surrounding fluid, the hydrophobic tails of lipid molecules face each other to form a bilayer structure, while the hydrophilic heads are in contact with the outer fluid. The surface at which the hydrocarbon tails are in contact with each other is defined as the mid-surface. In terms of the height $h(x,y)$ of the mid-surface measured from the $z=0$ plane in the 3D Euclidean space, we express the shape of a nearly flat membrane in the Monge gauge, {\it i.e.}, $z=h(x,y)$. Let us write $\psi_{\rm J}^\pm$ for the areal mass densities of lipid J (${\rm J} = {\rm A,B}$) in the upper ($+$) and lower ($-$) monolayers, respectively. The total free energy of the bilayer membrane is generally given by the form \begin{equation} F=\int {\rm d}^2x \, \sqrt{g} \, f_{\rm tot}(H, \psi_{\rm J}^\pm, \nabla_\perp\psi_{\rm J}^\pm), \label{Ftot} \end{equation} where $\nabla_\perp$ is the gradient operator along the membrane surface, $g$ the determinant of the metric tensor, and $\int {\rm d}^2x$ denotes the integration with respect to $x$ and $y$. Within the lowest order in $h$, we have $\nabla_\perp\simeq \tilde\nabla$ where $\tilde\nabla=(\partial_x,\partial_y)$ is the 2D gradient operator in the projected plane. The areal free energy density $f_{\rm tot}$ depends on the mean curvature $H$, the densities, and their spatial derivatives. In general, the free energy also depends on the temperature, but we shall not write the temperature dependence of any quantities explicitly. For small membrane deformations ($|\tilde\nabla h| \ll 1$), $H$ and $g$ are approximated as $H \simeq (\tilde{\nabla}^2 h)/2$ and $g\simeq1+(\tilde{\nabla} h)^2$, respectively, within the lowest order in $h$. We assume in this paper that the upper and the lower monolayers have the same number of lipid molecules, namely, $\int {\rm d}^2x \, \sqrt{g} \, \psi_{\rm J}^+= \int {\rm d}^2x \, \sqrt{g} \, \psi_{\rm J}^-$. We introduce the reference mass densities of the lipid molecules $\psi_{\rm J0}$ as the spatial average of the densities for a flat membrane (or projected mass densities). Then the conservation law for the lipid molecules is written as \begin{equation} \int {\rm d}^2x \, \sqrt{g} \, \psi_{\rm J}^\pm = \psi _{\rm J0} \int {\rm d}^2x. \label{conservation1} \end{equation} We further define the normalized density deviations as \begin{equation} \rho_{\rm J}^\pm = \frac{\psi_{\rm J}^\pm}{\psi_{\rm J0}}-1. \label{rhoJ} \end{equation} With the aid of eq.~(\ref{rhoJ}), the conservation law eq.~(\ref{conservation1}) can be rewritten as \begin{equation} \int {\rm d}^2x\, \sqrt{g}\, \rho_{\rm J}^\pm \simeq - \frac{1}{2} \int {\rm d}^2x \, (\tilde\nabla h)^2, \label{conservation2} \end{equation} up to the second order in $h$. Notice that the integral on the left-hand side does not vanish exactly because $\psi_{\rm J0}$ is the projected average density. \subsection{Bilinear free energy} Hereafter we assume that the membrane is weakly deformed and the density deviations are small enough so that $h$ and $\rho_{\rm J}^\pm$ can be treated as small variables.
Then $\sqrt{g}$ and $f_{\rm tot}$ in eq.~(\ref{Ftot}) can be expanded about the reference state ($h=0$, $\psi_{\rm J}^\pm=\psi_{\rm J0}$) with respect to the small variables $h$, $\rho_{\rm J}^\pm$, and $\tilde\nabla\rho_{\rm J}^\pm$. The total free energy is given by the sum of three contributions \begin{equation} F=F_{\rm def}+F_{\rm coup}+F_{\rm grad}, \end{equation} where $F_{\rm def}$ is the deformation part, $F_{\rm coup}$ the coupling part, and $F_{\rm grad}$ the gradient part. Each part will be explained in order. First, the deformation part $F_{\rm def}$ is given by \begin{equation} F_{\rm def}=\int {\rm d}^2x \left[ \frac{\sigma}{2} (\tilde{\nabla} h)^2+ \frac{\kappa}{2}(\tilde{\nabla}^2 h)^2 \right], \label{Fdef} \end{equation} where $\sigma$ is the membrane surface tension and $\kappa$ the bending rigidity. The surface tension $\sigma$ is expressed in terms of $f_{\rm tot}$ in eq.~(\ref{Ftot}) as \begin{equation} \sigma=f_{\rm tot}-\sum_{\epsilon=+,-}\sum_{{\rm J}={\rm A, B}} \frac{\partial f_{\rm tot}}{\partial \psi_{\rm J}^\epsilon} \psi_{\rm J0}, \label{sigma} \end{equation} where $f_{\rm tot}$ and its derivatives $\partial f_{\rm tot}/\partial \psi_{\rm J}^\epsilon$ are evaluated at the reference state. In deriving eqs.~(\ref{Fdef}) and (\ref{sigma}), we have made use of $\sqrt{g}\simeq 1+(\tilde\nabla h)^2/2$, $\partial f_{\rm tot}/\partial \rho_{\rm J}^\pm= (\partial f_{\rm tot}/\partial \psi_{\rm J}^\pm)\psi_{\rm J0}$ and eq.~(\ref{conservation2}). The right-hand side of eq.~(\ref{sigma}) can be identified as the (negative) in-plane pressure for a flat membrane~\cite{Pressure}. The coupling part $F_{\rm coup}$ consists of all the possible bilinear couplings between $H$ and $\rho^\pm_{\rm J}$. For later convenience, we introduce the normalized total mass density deviation \begin{equation} \rho^\pm= \frac{\sum_{\rm J}\psi_{\rm J}^\pm}{\sum_{\rm J}\psi_{\rm J0}}-1= \frac{\sum_{\rm J}\psi_{\rm J0}\rho_{\rm J}^\pm}{\sum_{\rm J}\psi_{\rm J0}}, \label{rho} \end{equation} and the normalized mass density difference \begin{equation} \phi^\pm= \rho_{\rm A}^\pm-\rho_{\rm B}^\pm. \label{phi} \end{equation} We express $F_{\rm coup}$ in terms of bilinear couplings between $H \simeq (\tilde{\nabla}^2 h)/2$, $\rho^\pm$ and $\phi^\pm$ rather than those between $H$ and $\rho^\pm_{\rm J}$. With this choice of variables, the dynamic equations will be simplified, as we show in the next section. Since we have five independent variables, there should be in principle fourteen coupling parameters in $F_{\rm coup}$~\cite{coupling}. However, we can reduce the number of coupling parameters by using the invariance of the system under the interchange of the upper and the lower monolayers. For instance, the coupling parameter for $(\rho^+)^2$ should be the same as that for $(\rho^-)^2$. Also, the coupling parameter for $\rho^+(\tilde\nabla^2h)$ should have the same magnitude as, but the opposite sign of, that for $\rho^-(\tilde\nabla^2h)$. Using these symmetry properties, we are left with eight coupling parameters. Furthermore, it is convenient to absorb two of them, $d$ and (dimensionless) $\nu$, into the following redefinitions of the variables: \begin{equation} \alpha^\pm\equiv \rho^\pm \pm d (\tilde\nabla^2h), \quad \beta^\pm \equiv \phi^\pm \pm \nu d (\tilde\nabla^2h). \label{alphabeta} \end{equation} The two lengths $d$ and $\nu d$ can be interpreted as the distances between the membrane mid-surface and the two effective neutral surfaces~\cite{Seifert}.
Introducing the parameters $k$ and $\Lambda_i$ $(i=1,\cdots, 5)$, we can write $F_{\rm coup}$ in the form \begin{align} F_{\rm coup} = &\frac{k}{2}\int {\rm d}^2x \Bigg[ \sum_{\epsilon =+,-} \left\{ (\alpha^\epsilon)^2+\Lambda_1(\beta^\epsilon)^2 +\Lambda_2 \alpha^\epsilon\beta^\epsilon \right\} \nonumber \\ & +\Lambda_3 \alpha^+\alpha^- + \Lambda_4 \beta^+\beta^- + \Lambda_5( \alpha^+\beta^-+\alpha^-\beta^+) \Bigg]. \label{Fcoup} \end{align} Here $k$ has the dimension of an areal compression modulus, and $\Lambda_i$ are dimensionless parameters of order unity. Within the lowest order in the membrane deformations and density deviations, we can approximate $\nabla_\perp\simeq\tilde\nabla$ in $f_\text{tot}$. Then the gradient part $F_{\rm grad}$ is given by the sum of the scalar products of $\tilde{\nabla}\rho^\pm$ and $\tilde{\nabla}\phi^\pm$. For simplicity, we neglect here the couplings between the different leaflets such as $(\tilde{\nabla} \rho^+)\cdot(\tilde{\nabla} \phi^-)$. Using again the above symmetric properties, we have \begin{align} F_{\rm grad} = &\frac{c}{2}\int {\rm d}^2x \sum_{\epsilon=+,-}\Big [(\tilde{\nabla}\rho^\epsilon)^2 +\lambda _1(\tilde{\nabla}\phi^\epsilon)^2 \nonumber \\ & +\lambda_2(\tilde{\nabla}\rho^\epsilon)\cdot(\tilde{\nabla}\phi^\epsilon)\Big], \label{Fgrad} \end{align} where $c$ has the dimension of energy and is comparable to the thermal energy, while $\lambda_1$ and $\lambda_2$ are dimensionless parameters of order unity. Some comments are in order. (i) We have thirteen parameters in our free energy: $\sigma$, $\kappa$, $k$, $d$, $\nu$, $\Lambda_i$ $(i=1,\cdots, 5)$, $c$, $\lambda_1$ and $\lambda_2$. In fact, they all depend on the temperature $T$ and the reference densities $\psi_{\rm J0}$. In this paper, however, we regard them as independent parameters although they cannot be varied independently in experiments. In the following sections, we investigate the behaviors of the relaxation rates as these parameters are varied, especially when the instability boundary of the one phase state is approached. (ii) In the above total free energy $F$, terms which are purely linear in $H$ do not exist. They can always be eliminated by using the invariance of the system under the interchange of the two leaflets, which flips the sign of $H$. Notice that the terms which are linear in $\rho_{\rm J}^\pm$ have already been taken into account in the definition of the surface tension $\sigma$ in eq.~(\ref{sigma}). (iii) In principle, the free energy can include terms linear in the Gaussian curvature $K$, which is of second order in $h$. However, without any topological change of the membrane, the integral of $K$ depends only on the geodesic curvature along the boundary of the membrane. As long as the topology and the geodesic curvature at the edge of the membrane are fixed, the integral merely adds a constant to the free energy. For this reason, we do not include any Gaussian curvature term in eq.~(\ref{Fdef}). \subsection{Fourier representation} The in-plane Fourier transform of any function $g(\tilde{\bm x})$ in the monolayer is defined by \begin{equation} g(\tilde{\bm q})=\int {\rm d}^2x \, g(\tilde{\bm x}) e ^{-i\tilde{\bm q}\cdot \tilde{\bm x}}, \label{Fourier} \end{equation} where $\tilde{\bm x}= (x,y)$ and $\tilde{\bm q} =(q_x, q_y)$. 
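For readers who wish to evaluate the above expressions numerically, the following Python code is a minimal, self-contained sketch that computes $F_{\rm def}$, $F_{\rm coup}$ and $F_{\rm grad}$ of eqs.~(\ref{Fdef}), (\ref{Fcoup}) and (\ref{Fgrad}) for periodic fields on a square grid, using FFT-based derivatives. The sample field profiles, the grid, and the numerical values assigned to the parameters are placeholders chosen only for this illustration; they are not meant to represent a specific lipid mixture, and the variable names are ours.
\begin{verbatim}
# Minimal sketch: evaluate the quadratic free energy
# F_def + F_coup + F_grad for periodic fields on a square grid
# (FFT derivatives).  Fields and parameter values are illustrative.
import numpy as np

N, L = 128, 1.0e-4                       # grid points, box size [cm]
x = np.arange(N) * L / N
X, Y = np.meshgrid(x, x, indexing="ij")
q = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
QX, QY = np.meshgrid(q, q, indexing="ij")

def deriv(f, Q):          # spectral derivative along one direction
    return np.real(np.fft.ifft2(1j * Q * np.fft.fft2(f)))
def lap(f):               # spectral Laplacian
    return np.real(np.fft.ifft2(-(QX**2 + QY**2) * np.fft.fft2(f)))
def gdot(f, g):           # (grad f) . (grad g)
    return deriv(f, QX)*deriv(g, QX) + deriv(f, QY)*deriv(g, QY)

# illustrative parameters (cgs units)
sigma, kappa, k, d, nu, c = 1e-4, 1e-12, 70.0, 1e-7, 1.0, 1e-14
L1, L2, L3, L4, L5 = 1/16, 0.2, 15/8, 1/16, 0.2
lam1, lam2 = 1.0, 1.0

# small sample fields h, rho^{+/-}, phi^{+/-}
h   = 1e-7 * np.sin(2*np.pi*X/L)
rho = {+1: 1e-3*np.cos(2*np.pi*Y/L), -1: np.zeros((N, N))}
phi = {+1: 1e-3*np.sin(4*np.pi*X/L), -1: np.zeros((N, N))}
al  = {e: rho[e] + e*d*lap(h)    for e in (+1, -1)}  # alpha^{+/-}
be  = {e: phi[e] + e*nu*d*lap(h) for e in (+1, -1)}  # beta^{+/-}

dA = (L / N)**2
F_def  = dA*np.sum(0.5*sigma*gdot(h, h) + 0.5*kappa*lap(h)**2)
F_coup = dA*0.5*k*np.sum(
    sum(al[e]**2 + L1*be[e]**2 + L2*al[e]*be[e] for e in (+1, -1))
    + L3*al[+1]*al[-1] + L4*be[+1]*be[-1]
    + L5*(al[+1]*be[-1] + al[-1]*be[+1]))
F_grad = dA*0.5*c*np.sum(
    sum(gdot(rho[e], rho[e]) + lam1*gdot(phi[e], phi[e])
        + lam2*gdot(rho[e], phi[e]) for e in (+1, -1)))
print(F_def, F_coup, F_grad)             # energies in erg
\end{verbatim}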
It is convenient to introduce the following new variables \begin{align} &\rho= (\rho^+-\rho^-)/2, \quad \bar\rho= (\rho^++\rho^-)/2 \label{mrho}, \\ &\phi= (\phi^+-\phi^-)/2, \quad \bar\phi= (\phi^++\phi^-)/2 \label{mphi}, \\ &\hat h= h/d, \label{hhat} \end{align} and define the column vectors \begin{equation} {\bm a}= (\hat{h}, \rho, \phi)^{\rm T}, \quad {\bm b}= (\bar\rho, \bar\phi)^{\rm T}, \label{ab} \end{equation} where ``T" denotes the transpose. The total free energy is alternatively expressed in terms of the Fourier modes as \begin{equation} F=\int \frac{{\rm d}^2q}{(2\pi)^2} \frac{1}{2} \left[ {\bm a}^\dag A{\bm a} +{\bm b}^\dag B{\bm b} \right], \label{FFourier} \end{equation} where $\dag$ denotes the conjugate transpose. In the above, $A$ and $B$ are symmetric $3\times 3$ and $2\times 2$ matrices, respectively. Owing to the rotational symmetry, their components depend only on the magnitude of the wave vector, $q = \vert \tilde{\bm q} \vert$, and are given by \begin{align} &A_{11}=\sigma d^2 q^2 +(\kappa+kd^2\Omega_0)d^2 q^4, \label{A11}\\ &A_{12}=A_{21}=-kd^2 \Omega_1 q^2, \label{A12}\\ &A_{13}=A_{31}=-kd^2 \Omega_2 q^2, \label{A13}\\ &A_{22}=k(2-\Lambda_3)+2cq^2, \label{A22}\\ &A_{23}=A_{32}=k(\Lambda_2-\Lambda_5) +c \lambda_2 q^2, \label{A23}\\ &A_{33}=k(2\Lambda_1-\Lambda_4) +2c\lambda_1 q^2, \label{A33} \end{align} and \begin{align} &B_{11}=k(2+\Lambda_3)+2c q^2, \label{B11} \\ &B_{12}=B_{21}=k(\Lambda_2+\Lambda_5) +c \lambda_2 q^2, \label{B12}\\ &B_{22}=k(2\Lambda_1+\Lambda_4) +2c \lambda_1 q^2. \label{B22} \end{align} Here we have introduced the following dimensionless combinations \begin{align} &\Omega_0= 2+2\nu^2\Lambda_1+2\nu\Lambda_2-\Lambda_3-\nu^2\Lambda_4-2\nu\Lambda_5, \label{Omega0}\\ &\Omega_1= 2+\nu\Lambda_2-\Lambda_3-\nu\Lambda_5, \label{Omega1}\\ &\Omega_2= 2\nu\Lambda_1+\Lambda_2-\nu\Lambda_4-\Lambda_5. \label{Omega2} \end{align} It is important to note that ${\bm a}$ and ${\bm b}$ are decoupled in eq.~(\ref{FFourier}). This is due to the symmetry of the system under the interchange of the two monolayers, {\it i.e.}, ${\bm a}$ changes its sign under this interchange while ${\bm b}$ does not. \section{Dynamic equations} \label{dynamics} In this section, we present the dynamic equations for a two-component bilayer membrane surrounded by a viscous fluid. We shall take into account (i) the flows in the surrounding fluid and in the membrane, (ii) the frictional force between the two monolayers, and (iii) the mutual diffusion in each monolayer. The surrounding fluid is assumed to be incompressible, while the membrane itself is compressible~\cite{Seifert}. Our dynamic equations are based on the standard irreversible thermodynamics~\cite{degroot,Landau}, and ensure that the dissipation in the whole system is non-negative (see Appendix \ref{appa}). While the derivation presented in this section is self-contained, the dynamic equations can also be formulated in a more systematic manner by using Onsager's variational principle (see Appendix \ref{appb})~\cite{Onsager1, Onsager2, Doi}. \subsection{Hydrodynamic equations} We use ${\bm v}$ to denote the velocity field of the surrounding fluid, which is assumed to be incompressible and to have a low Reynolds number. 
Then ${\bm v}$ for $z>0$ and $z<0$ obeys the Stokes equation \begin{equation} \eta \nabla^2 {\bm v}-\nabla p=0, \label{Stokes} \end{equation} where $\eta$ is the shear viscosity, $\nabla=(\partial_x, \partial_y, \partial_z)$ the nabla operator in 3D space, and $p$ the pressure of the fluid that is determined by the incompressibility condition \begin{equation} \nabla \cdot {\bm v}=0. \label{incompressible} \end{equation} Let $\tilde {\bm v}_{\rm J}^\pm$ denote the flow velocity of the lipid ${\rm J}$ in the upper ($+$) and the lower ($-$) monolayers. Here the flow velocity is defined as the lipid mass flux divided by the mass density $\psi_{\rm J}^\pm$. We consider the dynamic equations only within the linear order in $\tilde{\bm v}_{\rm J}^\pm$, $h$ and $\rho_{\rm J}^\pm$. The average lipid velocities $\tilde{\bm v}^\pm$ in the upper and lower monolayers are defined as \begin{equation} \tilde {\bm v}^\pm= \frac{\psi_{\rm A}^\pm \tilde {\bm v}_{\rm A}^\pm+ \psi_{\rm B}^\pm \tilde{\bm v}_{\rm B}^\pm}{\psi_{\rm A}^\pm+\psi_{\rm B}^\pm}, \end{equation} which can be approximated within the linear order as \begin{equation} \tilde{\bm v}^\pm=\frac{\psi_{\rm A0} \tilde {\bm v}_{\rm A}^\pm+\psi_{\rm B0}\tilde {\bm v}_{\rm B}^\pm}{\psi_{\rm A0}+\psi_{\rm B0}}. \label{vlayer} \end{equation} The diffusive flux of lipid A is given by \begin{equation} {\bm j}_{\rm d}^\pm = \psi_{\rm A0}(\tilde{\bm v}_{\rm A}^\pm-\tilde{\bm v} ^\pm) =-\psi_{\rm B0}(\tilde{\bm v}_{\rm B}^\pm-\tilde{\bm v} ^\pm), \end{equation} where use has been made of eq.~(\ref{vlayer}) in the second equality. It should be noted here that the diffusive flux of lipid B is given by $-{\bm j}_{\rm d}^\pm$. Then the continuity equations for the lipids A and B, $\partial\psi_{\rm J}^\pm/\partial t=-\tilde\nabla\cdot(\psi_{\rm J}^\pm \tilde{\bm v}_{\rm J}^\pm)$, can be approximated as \begin{align} &\frac{\partial \rho_{\rm A}^\pm}{\partial t} = -\tilde\nabla \cdot \tilde {\bm v}^\pm -\frac{1}{\psi_{\rm A0}}\tilde\nabla \cdot {\bm j}_{\rm d}^\pm, \label{Acon}\\ &\frac{\partial \rho_{\rm B}^\pm}{\partial t} = -\tilde\nabla \cdot \tilde {\bm v}^\pm +\frac{1}{\psi_{\rm B0}}\tilde\nabla \cdot {\bm j}_{\rm d}^\pm. \label{Bcon} \end{align} We further note that eqs.~(\ref{Acon}) and (\ref{Bcon}) can be expressed in simpler forms by using $\rho^\pm$ and $\phi^\pm$ as \begin{align} &\frac{\partial \rho^\pm}{\partial t} = -\tilde\nabla \cdot \tilde {\bm v}^\pm, \label{rhocon}\\ &\frac{\partial \phi^\pm}{\partial t} = -\tilde\nabla\cdot {\bm j}^\pm_\phi, \label{phicon} \end{align} where the diffusive flux associated with $\phi$ is now defined as ${\bm j}^\pm_\phi = (\psi_{\rm A0}^{-1}+\psi_{\rm B0}^{-1}){\bm j}^\pm_{\rm d}$. As in the standard irreversible thermodynamics, ${\bm j}^\pm_\phi$ is assumed to be proportional to the gradient of the effective chemical potential $\mu^\pm =(\mu_{\rm A}^\pm/m_{\rm A})-(\mu_{\rm B}^\pm/m_{\rm B})$, where $m_{\rm J}$ and $\mu^\pm_{\rm J}$ are the molecular mass and the chemical potential per molecule for lipid ${\rm J}$, respectively~\cite{Landau}. The chemical potentials are given by $\mu^\pm_{\rm J}=m_{\rm J}(\delta F/\delta \psi^\pm_{\rm J})$. 
Then the diffusive flux in eq.~(\ref{phicon}) becomes \begin{align} {\bm j}_\phi^\pm=- L_\phi\Big( \frac{1}{\psi_{\rm A0}}+\frac{1}{\psi_{\rm B0}}\Big)^{-1}\tilde{\nabla}\mu^\pm =- L_\phi \tilde{\nabla} \frac{\delta F}{\delta \phi^\pm}, \label{Dflux} \end{align} where $L_\phi>0$ is the Onsager coefficient~\cite{degroot}, and the second equality follows from the relation, $\mu^\pm=\delta F/\delta\psi^\pm_{\rm A}-\delta F/\delta\psi^\pm_{\rm B}= (\psi_{\rm A0}^{-1}+\psi_{\rm B0}^{-1}) \delta F /\delta \phi^\pm$. In the definition of $L_\phi$, we have intentionally put the factor $[(1/\psi_{\rm A0})+(1/\psi_{\rm B0})]^{-1}$ in order to make eq.~(\ref{phicon}) simpler. Equation (\ref{Dflux}) indicates that, as in usual 3D multi-component fluids, mutual diffusion occurs essentially due to the inhomogeneity of the density difference $\phi^\pm$ between the lipid A and B in each monolayer. Furthermore, even if $\phi^\pm$ are homogeneous, mutual diffusion can still be induced by the inhomogeneity of $h$ and $\rho^\pm$ that are coupled to $\phi^\pm$ via the free energy. Next we discuss the force balance conditions. We regard each monolayer as a compressible 2D fluid characterized by the shear viscosity $\mu$ and the bulk viscosity $\zeta$. The 2D viscous stress tensors $\tau_{ij}^\pm$ in the monolayers are given by \begin{equation} \tau_{ij}^\pm=\mu (\partial_i \tilde v_j^\pm+\partial_j \tilde v_i^\pm) + (\zeta-\mu) \delta_{ij} \tilde\nabla\cdot \tilde{\bm v}^\pm. \label{stresslayer} \end{equation} On the other hand, the reversible force density due to the in-plane pressure is given by \begin{equation} {\bm f}^\pm=-\sum_{{\rm J}={\rm A, B}} \psi_{\rm J0}\tilde\nabla \frac{\delta F}{\delta \psi_{\rm J}^\pm}=-\tilde\nabla\frac{\delta F}{\delta \rho^\pm}, \label{flayer} \end{equation} up to the linear order~\cite{Bitbol2}. The force balance equations in the tangential direction of the monolayers are given by \begin{equation} f^\pm_i + \partial _j \tau_{ij}^\pm \pm T^\pm_{iz} \mp b(\tilde v^+_i - \tilde v^-_i)=0, \label{lateralforce} \end{equation} for $i=x, y$. Here $T^\pm_{ij}$ are the stress tensors of the surrounding fluid $T_{ij} = -p \delta_{ij}+\eta (\partial_i v_j +\partial _j v_i )$ evaluated at $z\rightarrow \pm 0$. The last term in eq.~(\ref{lateralforce}) represents the frictional forces between the upper and the lower monolayers, and $b$ is the friction coefficient~\cite{Seifert,Yeung}. In the normal $z$-direction, the restoring force of the membrane is balanced with the normal force due to the surrounding fluid. Hence we have \begin{align} T_{zz}^+-T_{zz}^- =\frac{\delta F}{\delta h}= &-\sigma\tilde\nabla^2 h +(\kappa+kd^2\Omega_0)\tilde\nabla^2 \tilde\nabla^2 h \nonumber \\ &+kd(\Omega_1 \tilde\nabla^2\rho+\Omega_2 \tilde\nabla^2\phi), \label{normalforce} \end{align} where the last expression follows from eqs.~(\ref{Fdef}), (\ref{Fcoup}) and (\ref{Omega0})--(\ref{Omega2}). We further assume that the non-slip boundary condition holds at the upper and the lower monolayers. Let ${\bm v}^\pm=(v^\pm_x, v^\pm_y, v^\pm_z)$ denote the velocity of the surrounding fluid ${\bm v} $ evaluated at $z\to \pm0$. The tangential components of ${\bm v}^\pm$ should coincide with the average velocities of the monolayers \begin{equation} v^\pm_i=\tilde v^\pm_i, \label{BC1} \end{equation} for $i=x,y$. On the other hand, the normal components $v^\pm_z$ should coincide with the time derivative of the membrane height $h$ \begin{equation} v^\pm_z=\frac{\partial h}{\partial t}. 
\label{BC2} \end{equation} Up to now, we have presented a set of dynamic equations given by eqs.~(\ref{Stokes}), (\ref{incompressible}), (\ref{rhocon}), (\ref{phicon}), (\ref{Dflux}), (\ref{lateralforce}), (\ref{normalforce}), (\ref{BC1}) and (\ref{BC2}) to be solved. As mentioned before, they can be also derived systematically by using the Onsager's variational principle explained in Appendix \ref{appb}. \subsection{Relaxation equations for membrane variables} \label{relaxation} From the derived dynamic equations, we can integrate out the velocity fields ${\bm v}$ and $\tilde{\bm v}^\pm$ to obtain the relaxation equations for the spatially Fourier transformed dynamical variables, $\rho^\pm (\tilde{\bm q},t)$, $\phi^\pm (\tilde{\bm q},t)$ and $\hat{h}(\tilde{\bm q},t)=h(\tilde{{\bm q}},t)/d$ (see eq.~(\ref{Fourier})). The details are described in Appendix \ref{appc} and the resulting equations are \begin{align} \frac{\partial {\bm a}}{\partial t}& =-\Gamma_a (q) {\bm a} \label{Dmatrixa}, \\ \frac{\partial {\bm b}}{\partial t}& =-\Gamma_b (q) {\bm b} \label{Dmatrixb}, \end{align} where ${\bm a}$ and ${\bm b}$ are defined in eq.~(\ref{ab}). In the above, the matrices $\Gamma_a$ and $\Gamma_b$ are given by \renewcommand{\arraystretch}{1.8} \begin{equation} \Gamma_a(q)= \begin{pmatrix} \displaystyle \frac{A_{11}}{4\eta d^2 q} && \displaystyle\frac{A_{12}}{4\eta d^2 q} &&\displaystyle \frac{A_{13}}{4\eta d^2 q} \\ c_1A_{12}q^2 && c_1A_{22}q^2 && c_1A_{23}q^2 \\ \frac{1}{2} L_\phi A_{13}q^2 && \frac{1}{2} L_\phi A_{23}q^2 && \frac{1}{2}L_\phi A_{33}q^2 \end{pmatrix}, \label{gamma_a} \end{equation} \renewcommand{\arraystretch}{1} and \renewcommand{\arraystretch}{1.8} \begin{equation} \Gamma_b(q)= \begin{pmatrix} c_2B_{11}q^2 && c_2B_{12}q^2 \\ \frac{1}{2} L_\phi B_{12}q^2 && \frac{1}{2} L_\phi B_{22}q^2 \end{pmatrix}, \label{gamma_b} \end{equation} \renewcommand{\arraystretch}{1} with \begin{align} &c_1 = [4b+4\eta q+2(\mu+\zeta)q^2]^{-1}, \label{c1}\\ &c_2 = [4\eta q+2(\mu+\zeta)q^2]^{-1}. \label{c2} \end{align} The eigenvalues of $\Gamma_a$ and $\Gamma_b$ correspond to the relaxation rates of the binary bilayer membranes. The equations for the five dynamical variables are split into the decoupled two equations (\ref{Dmatrixa}) and (\ref{Dmatrixb}), where eq.~(\ref{Dmatrixa}) changes its sign under the interchange of the two monolayers, while eq.~(\ref{Dmatrixb}) does not. This is the consequence of the symmetry of the hydrodynamic equations as well as that of the free energy (the latter is discussed after eq.~(\ref{Omega2})). In the next section, we will examine how these relaxation rates behave as the coupling parameters $\Lambda_i$ are varied. \section{Results} \label{results} \subsection{Parameter values} In Table~\ref{TabPara}, we list the set of parameter values chosen in our numerical calculations. Following previous experiments \cite{Helfrich, Song, Rawicz}, the bending modulus $\kappa$ is set equal to $10^{-12}$ erg. As discussed after eq.~(\ref{alphabeta}), the lengths $d$ and $d\nu$ are comparable with the monolayer thickness. Then we may set $d=10^{-7}$ cm, with $\nu$ of order unity. The combination $c\lambda_i$ ($i=1,2$) in $F_{\rm grad}$ is related to the line tension $\xi$ in a phase separated membrane as $\xi \sim (k_{\rm B}Tc)^{1/2}\lambda_i/d$. Since $\xi$ has been measured to be several pN \cite{Tian}, we may set $c=10^{-14}$ ${\rm erg}$, with $\lambda_i$ of order unity. The surface tension $\sigma$ can take extremely wide range of values depending on experimental conditions. 
For vesicles in a solution, it can be controlled by changing the osmotic pressure difference between the inside and outside of the vesicles. In the following, we will examine two cases, namely, the small tension case with $\sigma=10^{-8}$ ${\rm erg / cm}^{2}$ and the moderate tension case with $\sigma=10^{-4}$ ${\rm erg / cm}^{2}$. The coefficients $A_{22}$ and $B_{11}$ in eqs.~(\ref{A22}) and (\ref{B11}) can be interpreted as the moduli associated with the total densities in the upper and the lower monolayers, respectively (to be more precise, the moduli of their linear combinations $\rho$ and $\bar\rho$ in eq.~(\ref{mrho})). Then $k(2-\Lambda_3)$ and $k(2+\Lambda_3)$ are comparable with the areal compression moduli. Following previous experiments \cite{Evans1, Evans2}, we set $k=70$ erg/cm$^2$ with $|\Lambda_3|\ll1$. The remaining parameters that have yet to be determined in the free energy are $\Lambda_1$, $\Lambda_2$, $\Lambda_4$ and $\Lambda_5$. These parameters depend on the temperature and the average composition. We find in the following, however, that the behavior of the decay rates is not sensitive to these parameters, unless the reduced temperatures $\tau_a$ and $\tau_b$ defined below in eqs.~(\ref{taua}) and (\ref{taub}) are very close to zero. When they are close to zero (but positive), the associated diffusive modes become extremely slow. This point will be discussed later in more detail. We next discuss the kinetic parameters. The membrane viscosities $\mu$ and $\zeta$ appear only as a sum $\mu+\zeta$ in $\Gamma_a$ and $\Gamma_b$. Since we could not find any reliable value of $\zeta$ in the literature, we set $\mu+\zeta=10^{-7}$ ${\rm erg\cdot s/cm^2}$. (The membrane bulk viscosity $\zeta$ was neglected in ref.~\cite{Seifert}.) The Onsager coefficient $L_\phi$ for the mutual diffusion is roughly estimated as follows. Assuming that the mutual diffusion constant is on the same order as the self-diffusion constant $D$ of a lipid molecule, we have $D\sim (\Gamma_a)_{33}/q^2\sim L_\phi k$ (see eqs.~(\ref{A33}), (\ref{Dmatrixa}) and (\ref{gamma_a})). Using the value $D\sim 10^{-7}$ ${\rm cm^2/s}$~\cite{Membook}, we obtain $L_\phi = 1.4\times 10^{-9}$ ${\rm cm^4 / (erg \cdot s)}$. Several authors have reported different values of the friction coefficient $b$ \cite{Pott,Shkulipa,JBprl,Bitbol1,Merkel,Horner}. Since they are in the range of $10^{7}$ -- $3\times 10^8$ $\mathrm{erg\cdot s/cm}^4$, we set $b=2\times 10^7$ $\mathrm{erg\cdot s/cm}^4$ in this paper. \begin{table}[tbh] \caption{ List of static and dynamic parameters taken from the literature~\cite{Seifert,Rawicz,Pott,Tian, Evans1, Evans2,Shkulipa,JBprl,Bitbol1,Helfrich, Membook, Song, Merkel,Horner} and used in sect.~\ref{results}. 
\label{TabPara}} \begin{ruledtabular} \begin{tabular}[t]{c c c c c} $\sigma$ \quad & $\kappa$\ \ \quad & $k$\ \quad & $d$\ \quad & $c$\\ $\mathrm{erg/cm}^2\quad$ & $\mathrm{erg}$\quad & $\mathrm{erg}/\mathrm{cm}^2$ \quad & $\mathrm{cm}$\quad & $\mathrm{erg}$\\ \hline $10^{-8}$ or $10^{-4}$ & $10^{-12} $ & $70$ & $10^{-7} $ \ & $10^{-14}$\\ \end{tabular} \end{ruledtabular} \vspace{0mm} \begin{ruledtabular} \begin{tabular}[t]{c c c c} $\eta$ & $b$\ \quad & $\mu +\zeta $ \quad & $L_\phi$ \quad \\ $\mathrm{erg}\cdot \mathrm{s}/\mathrm{cm}^3\quad $&$\mathrm{erg}\cdot \mathrm{s}/\mathrm{cm}^4\quad$&$\mathrm{erg}\cdot \mathrm{s}/\mathrm{cm}^2$ &$\mathrm{cm}^4 / (\mathrm{erg} \cdot\mathrm{s})$\\ \hline $10^{-2}$&$2\times 10^{7}$&$10^{-7}$& $1.4\times 10^{-9}$ \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Stability conditions} The wave number dependent susceptibilities $\chi_a(q)$ and $\chi_b(q)$ are defined as the reciprocals of the eigenvalues of the matrices $A$ and $B$ in eq.~(\ref{FFourier}), respectively. Then the thermodynamic stability of the one phase state (without any phase separation) is ensured when these susceptibilities are all positive. Since $A$ and $B$ are $3\times 3$ and $2\times 2$ matrices, respectively, there are three $\chi_a$ and two $\chi_b$ values which can be explicitly obtained in principle. However, since their full expressions are tedious, we discuss here the conditions for the thermodynamic stability at $q=0$ and $\infty$. More detailed discussions are given in Appendix \ref{appd} where we also show that the instability characterized by intermediate wave numbers does not occur as long as the stability conditions at $q=0$ and $\infty$ are satisfied. As $q\to \infty$, we find that the susceptibilities $\chi_a$ and $\chi_b$ are both positive if and only if \begin{equation} 0<\lambda_1-\frac{\lambda_2^2}{4}\equiv \Delta_\lambda. \label{stab_highQ} \end{equation} Hereafter we assume that the above condition is always satisfied. The stability at $q=0$, on the other hand, is ensured by the positivity of $\chi_a(0)$ and $\chi_b(0)$, which is realized if and only if \begin{align} &|\Lambda_3|<2, \label{stab1}\\ &0<\Lambda_1, \quad |\Lambda_4|<2\Lambda_1, \label{stab2} \end{align} and \begin{align} &|\Lambda_2-\Lambda_5|<[(2-\Lambda_3)(2\Lambda_1-\Lambda_4)]^{1/2}, \label{stab3}\\ &|\Lambda_2+\Lambda_5|<[(2+\Lambda_3)(2\Lambda_1+\Lambda_4)]^{1/2}. \label{stab4} \end{align} The conditions eqs.~(\ref{stab1}) and (\ref{stab2}) are equivalent to $A_{22}$, $A_{33}$, $B_{11}$, $B_{22}>0$ at $q=0$. In fig.~\ref{FigDiagram1}, we plot the condition eq.~(\ref{stab2}) on the $(\Lambda_1, \Lambda_4)$-plane. For the stability of the one phase state, $\Lambda_1$ and $\Lambda_4$ need to be within the gray region. Otherwise the system is unstable towards the phase separation. Given that $\Lambda_1$ and $\Lambda_4$ are fixed at the values satisfying eq.~(\ref{stab2}), the stability conditions for $\Lambda_2$, $\Lambda_3$ and $\Lambda_5$ are given by eqs.~(\ref{stab1}), (\ref{stab3}) and (\ref{stab4}). \begin{figure}[tbh] \includegraphics[scale=0.39]{fig2s.eps} \caption{ Stability diagram of the one phase state at $q=0$ in the $(\Lambda_1, \Lambda_4)$-plane as expressed by eq.~(\ref{stab2}). For the thermodynamic stability of the one phase state, $\Lambda_1$ and $\Lambda_4$ need to be in the gray region. 
The cross corresponds to the parameter values $\Lambda_1= \Lambda_4=1/16$ which we use in fig.~\ref{FigDiagram2} and in the numerical analysis in figs.~\ref{FigrateAlow1}--\ref{FigrateB2}.} \label{FigDiagram1} \end{figure} \begin{figure}[tbh] \includegraphics[scale=0.48]{fig3.eps} \caption{(a) Stability diagram of the one phase state at $q=0$ in the $(\Lambda_2, \Lambda_3, \Lambda_5)$-space when $\Lambda_1= \Lambda_4=1/16$ as marked in fig.~\ref{FigDiagram1}. The one phase state is stable if $\Lambda_2$, $\Lambda_3$ and $\Lambda_5$ are inside the region enclosed by the surface. (b) The enclosing surface consists of two surfaces; the blue one at which $1/\chi_a(0)=0$ and the red one at which $1/\chi_b(0)=0$. The cross sections of the stable region on the $(\Lambda_2,\Lambda_5)$-plane when (c) $\Lambda_3=-3/2$ and (d) $\Lambda_3=15/8$. The circles marked with (A)--(C) in (d) correspond to the parameter values used in figs.~\ref{FigrateAlow1}--\ref{FigTimeMode2}. } \label{FigDiagram2} \end{figure} \begin{figure}[tbh] \includegraphics[scale=0.32]{fig4.eps} \caption{Schematic illustrations of the two possible instabilities. Note that for the sake of clarity (i) the difference in the lipid heights is not drawn, and (ii) a strong phase separation is represented. In this paper we investigate, however, only the homogeneous phase in the vicinity of the phase separation. (a) Anti-registered instability. As one of the $\chi_a$ values diverges, the density difference between the two monolayers and the bending mode become unstable. (b) Registered instability. As one of the $\chi_b$ values diverges, the sum of the densities in the upper and lower monolayers becomes unstable.} \label{FigSchematic} \end{figure} In fig.~\ref{FigDiagram2}(a), we show the stable region in the $(\Lambda_2, \Lambda_3,\Lambda_5)$-space when $\Lambda_1=\Lambda_4=1/16$ as marked by a cross in fig.~\ref{FigDiagram1}. The stable region is enclosed by a surface which consists of blue and red parts. On the blue surface (fig.~\ref{FigDiagram2}(b) left), one of the three $\chi_a$-values diverges at $q=0$ and its corresponding mode becomes unstable, whereas on the red surface (fig.~\ref{FigDiagram2}(b) right), one of the two $\chi_b$-values diverges at $q=0$. One can confirm from eqs.~(\ref{stab3}) and (\ref{stab4}) that the cross section of the stable region on the $(\Lambda_2, \Lambda_5)$-plane at constant $\Lambda_3$ is given by an oblique rectangle whose center is at $\Lambda_2=\Lambda_5=0$. In Figs.~\ref{FigDiagram2}(c) and (d), we present the cross sections at $\Lambda_3=-3/2$ and $15/8$, respectively. As shown in (c), the apex coordinates of the rectangle $\Lambda^\pm$ are given by \begin{align} \Lambda^\pm = &\frac{1}{2} \left[ \sqrt{(2+\Lambda_3)(2\Lambda_1+\Lambda_4)} \right. \nonumber \\ & \left. \pm \sqrt{(2-\Lambda_3)(2\Lambda_1-\Lambda_4)} \right]. \end{align} These values are $\Lambda^\pm \approx 0.153\pm0.234$ and $\Lambda^\pm \approx 0.426\pm0.0442$ in fig.~\ref{FigDiagram2}(c) and (d), respectively. As $\Lambda_3$ is increased towards $2$, the blue sides of the cross section become longer while the red sides become shorter. In the limit of $\Lambda_3\nearrow 2$, the stable region eventually turns out to be a line segment whose endpoints are given by $(\Lambda_2, \Lambda_5)=(\pm\sqrt{2\Lambda_1+\Lambda_4}, \pm\sqrt{2\Lambda_1+\Lambda_4})=(\pm \sqrt{3}/4, \pm\sqrt{3}/4)$. 
In the limit of $\Lambda_3\searrow -2$, on the other hand, the stable region shrinks to a line segment with the endpoints at $(\Lambda_2, \Lambda_5)=(\pm\sqrt{2\Lambda_1-\Lambda_4}, \mp\sqrt{2\Lambda_1-\Lambda_4})=(\pm 1/4, \mp1/4)$. Even if we choose other $\Lambda_1$ and $\Lambda_4$ values in the stable region of fig.~\ref{FigDiagram1}, the qualitative features of the stable region in the $(\Lambda_2, \Lambda_3, \Lambda_5)$-space remain the same. However, as the combination $(\Lambda_1, \Lambda_4)$ approaches the boundary of the gray region, the stable region in the $(\Lambda_2, \Lambda_3, \Lambda_5)$-space becomes narrower, and eventually disappears just at this boundary. In fig.~\ref{FigSchematic}, we illustrate the two types of instabilities that can take place. When one of the $\chi_a$-values diverges, a certain linear combination of $\rho$, $\phi$ and $\hat{h}$ becomes unstable as in fig.~\ref{FigSchematic}(a), while a linear combination of $\bar\rho$ and $\bar\phi$ becomes unstable as in fig.~\ref{FigSchematic}(b) when one of the $\chi_b$-values diverges. Hereafter we shall call the instabilities of type (a) and (b) the ``anti-registered instability" and the ``registered instability", respectively. Note that these two types of instabilities are purely the consequences of the symmetry of the system (see also the sentences after eq.~(\ref{Omega2})). So far, we have discussed the stability conditions of the one phase state at $q=0$ and $\infty$. In Appendix \ref{appd}, we show that the instability does not take place for intermediate wave numbers as long as the membrane is stable at $q=0$ and $\infty$. Hence the stability conditions are generally given by eqs.~(\ref{stab1})--(\ref{stab4}). \subsection{Relaxation rates} \begin{figure}[tbh] \includegraphics[scale=0.7]{fig5s.eps} \caption{Eigenmodes of $\Gamma_a$ for the small tension case; $\sigma=10^{-8}$ $\mathrm{erg}/\mathrm{cm}^2$. The parameter values are $(\Lambda_2, \Lambda_5)=(0.193, 0.233)$ as marked (A) in fig.~\ref{FigDiagram2}(d), and the membrane is not close to the anti-registered instability boundary. (a) Plots of the relaxation rates $\gamma_{ai}$ ($i=1,2,3$) as a function of the wave number $q$. (b) Plots of the diagonal elements $(\Gamma_a)_{ii}$ ($i=1,2,3$) of the matrix $\Gamma_a$ as a function of the wave number $q$ (dashed color lines). The effective decay rate $\gamma_\phi^*$ is plotted with a black dashed line. For comparison, the relaxation rates $\gamma_{ai}$ in (a) are also plotted with grey solid lines. For convenience, all the plotted quantities are divided by $q^2$. } \label{FigrateAlow1} \end{figure} \begin{figure}[tbh] \includegraphics[scale=0.7]{fig6s.eps} \caption{Eigenmodes of $\Gamma_a$ for the small tension case; $\sigma=10^{-8}$ $\mathrm{erg}/\mathrm{cm}^2$. The parameter values are $(\Lambda_2, \Lambda_5)=(-0.023, 0.063)$ as marked (B) in fig.~\ref{FigDiagram2}(d), and the membrane is close to the anti-registered instability. (a) Plots of the relaxation rates $\gamma_{ai}$ as a function of the wave number $q$. (b) Plots of the diagonal elements $(\Gamma_a)_{ii}$ of the matrix $\Gamma_a$ as a function of the wave number $q$ (dashed color lines). The effective decay rate $\gamma_\phi^*$ is plotted with a black dashed line. For comparison, the relaxation rates $\gamma_{ai}$ in (a) are also plotted with grey solid lines. For convenience, all the plotted quantities are divided by $q^2$. 
} \label{FigrateAlow2} \end{figure} As a main result of this paper, we next examine the relaxation rates (or decay rates) of the various hydrodynamic modes in the one phase state. They are obtained from the eigenvalues of $\Gamma_a$ and $\Gamma_b$ in eqs.~(\ref{Dmatrixa}) and (\ref{Dmatrixb}), respectively. In the following calculations, we set the parameter values to $\Lambda_1=\Lambda_4=1/16$ and $\Lambda_3=15/8$ as in fig.~\ref{FigDiagram2}(d), while $(\Lambda_2, \Lambda_5)$ are varied. For simplicity, we further set $\lambda_1=\lambda_2=\nu=1$ which also satisfy eq.~(\ref{stab_highQ}). \subsubsection{Eigenmodes of $\Gamma_a$: small tension case} Setting the parameters as $(\Lambda_2, \Lambda_5)=(0.193, 0.233)$, we plot in fig.~\ref{FigrateAlow1}(a) the three eigenvalues of $\Gamma_a$ denoted by $\gamma_{ai}$ ($i=1,2,3$ and $\gamma_{a1}>\gamma_{a2}>\gamma_{a3}$), and in (b) the three diagonal elements of $\Gamma_a$ denoted by $(\Gamma_a)_{ii}$ ($i=1,2,3$ and $(\Gamma_a)_{11}> (\Gamma_a)_{22} >(\Gamma_a)_{33}$) for a small surface tension, $\sigma=10^{-8}$ $\mathrm{erg}/\mathrm{cm}^2$. Similar plots are given in fig.~\ref{FigrateAlow2} when $(\Lambda_2, \Lambda_5)=(-0.023, 0.063)$ with the same surface tension value. These choices of the parameters are marked with (A) and (B) in fig.~\ref{FigDiagram2}(d). The system is far from and close to the unstable region in figs.~\ref{FigrateAlow1} and \ref{FigrateAlow2}, respectively. In the latter case, at least one of the eigenvalues of $A$ becomes very small, and the anti-registered instability shown in fig.~\ref{FigSchematic}(a) is about to take place. In both figs.~\ref{FigrateAlow1} and \ref{FigrateAlow2}, the fastest decay rate $\gamma_{a1}$ is found to be \begin{equation} \gamma_{a1}\simeq \left\{ \begin{array}{l} (\Gamma_a)_{22}\quad (q\ll q_{\rm mc}), \\ (\Gamma_a)_{11}\quad (q\gg q_{\rm mc}). \end{array} \right. \label{gammaA1low} \end{equation} Here the mode crossing wave number is given by \begin{equation} q_{\rm mc}=\frac{\eta k(2-\Lambda_3)}{\kappa b}, \label{qmc} \end{equation} at which $(\Gamma_a)_{11}= (\Gamma_a)_{22}$ holds. The positivity of $q_{\rm mc}>0$ follows from the stability condition eq.~(\ref{stab1}). Using the present parameter values, we obtain $q_{\rm mc} =4.38\times10^3$ $\mathrm{cm}^{-1}$. Let us introduce the ``quasi-equilibrium" state of $\rho$ for given $\hat{h}$ and $\phi$ as \begin{equation} \rho_{\rm e}^{(2)}(\hat{h},\phi, q)=-\frac{A_{12}\hat{h}+A_{23}\phi}{A_{22}}. \label{rhoe2} \end{equation} We use the term ``quasi-equilibrium" because $\rho_{\rm e}^{(2)}(\hat{h},\phi, q)$ minimizes the free energy $F$ under the condition that the other variables are fixed at $(\hat{h}, \phi)$. It can be obtained by equating the second row of eq.~(\ref{Dmatrixa}) to zero, and solving for $\rho$. Then we can rewrite the second row of eq.~(\ref{Dmatrixa}) as $\partial \rho /\partial t=-(\Gamma_a)_{22} (\rho-\rho_{\rm e}^{(2)})$. Hence $\gamma_{a1} \simeq (\Gamma_a)_{22}$ for $q\ll q_{\rm mc}$ indicates that $\rho$ relaxes towards the quasi-equilibrium state $\rho_{\rm e}^{(2)}$ with the decay rate $\gamma_{a1}$, while the other variables $\hat{h}$ and $\phi$ are almost unchanged (frozen) during this process. In other words, $\rho$ relaxes much faster than $\hat{h}$ and $\phi$. In this regime, we can approximate $A_{22}\simeq k(2-\Lambda_3)$ and $c_1\simeq 1/(4b)$ in eqs.~(\ref{A22}) and (\ref{c1}), respectively, and the decay rate scales as $\gamma_{a1}\simeq (\Gamma_a)_{22} \simeq k(2-\Lambda_3)q^2/(4b) \sim q^2$. 
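The quoted value of $q_{\rm mc}$ follows directly from eq.~(\ref{qmc}); as a minimal numerical check (the variable names are ours), with the parameter values of Table~\ref{TabPara} and $\Lambda_3=15/8$:
\begin{verbatim}
# Quick check of the mode crossing wave number, eq. (qmc),
# with the parameter values of Table I (cgs units).
eta, k, kappa, b = 1e-2, 70.0, 1e-12, 2e7
Lambda3 = 15/8
q_mc = eta*k*(2 - Lambda3)/(kappa*b)
print(q_mc)   # ~4.4e3 cm^-1, cf. the value quoted above
\end{verbatim}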
Similarly, the decay rate $\gamma_{a1}$ for $q\gg q_{\rm mc}$ corresponds to the relaxation of $\hat{h}$ to its quasi-equilibrium state \begin{equation} \hat{h}_{\rm e}^{(2)}(\rho,\phi,q)=-\frac{A_{12}\rho+A_{13}\phi}{A_{11}}, \label{he2} \end{equation} while both $\rho$ and $\phi$ are frozen during the relaxation of $\hat{h}$. For $q\gg q^*$ with \begin{equation} q^*=\sqrt{\frac{\sigma}{\kappa}}, \label{qstar} \end{equation} one can approximate eq.~(\ref{A11}) as $A_{11}\simeq(\kappa+kd^2\Omega_0)d^2q^4$. For the parameter values used in figs.~\ref{FigrateAlow1} and \ref{FigrateAlow2}, we have $q_{\rm mc}\gg q^*$ because $q^*=100$ ${\rm cm}^{-1}$ (see also the sentences below eq.~(\ref{sigmac})). Then the decay rate scales as $\gamma_{a1}\simeq (\Gamma_a)_{11}\simeq (\kappa+kd^2\Omega_0)q^3/(4\eta) \sim q^3$ for $q\gg q_{\rm mc}$. The second fastest decay rate $\gamma_{a2}$ behaves as \begin{equation} \gamma_{a2}\simeq \left\{ \begin{array}{l } \displaystyle (\Gamma_a)_{11}-\frac{(\Gamma_a)_{12}(\Gamma_a)_{21}}{(\Gamma_a)_{22}} \simeq (\Gamma_a)_{11}\quad (q\ll q_{\rm mc}), \\ \displaystyle (\Gamma_a)_{22}-\frac{(\Gamma_a)_{12}(\Gamma_a)_{21}}{(\Gamma_a)_{11}}\simeq (\Gamma_a)_{22}\quad (q\gg q_{\rm mc}). \end{array} \right. \label{gammaA2low} \end{equation} Let us introduce the quasi-equilibrium states of $\hat{h}$ and $\rho$ for given $\phi$ as \begin{align} &\hat{h}_{\rm e}^{(1)}(\phi, q)=\frac{ A_{12} A_{23}- A_{13} A_{22}}{ A_{11}A_{22}- A_{12}^2} \phi \label{he1}, \\ &\rho_{\rm e}^{(1)}(\phi, q)=\frac{ A_{12} A_{13}- A_{11} A_{23}}{ A_{11}A_{22}- A_{12}^2} \phi, \label{rhoe1} \end{align} which minimize the total free energy $F$ under the condition that $\phi$ is fixed. They are obtained by equating the first and the second rows of eq.~(\ref{Dmatrixa}) to zero, and solving simultaneously for $\hat{h}$ and $\rho$. Assuming that the relaxation of $\rho$ is much faster than that of $\hat{h}$, we substitute $\rho\simeq\rho_{\rm e}^{(2)}$ given by eq.~(\ref{rhoe2}) into the first row of eq.~(\ref{Dmatrixa}) to obtain \begin{align} \frac{\partial \hat{h}}{\partial t}&\simeq- \Big[ (\Gamma_a)_{11}-\frac{(\Gamma_a)_{12}(\Gamma_a)_{21}}{(\Gamma_a)_{22}} \Big] (\hat{h}-\hat{h}_{\rm e}^{(1)}) \nonumber \\ &\simeq -(\Gamma_a)_{11} (\hat{h}-\hat{h}_{\rm e}^{(1)}). \label{he} \end{align} Hence the decay rate $\gamma_{a2}$ for $q\ll q_{\rm mc}$ in eq.~(\ref{gammaA2low}) corresponds to the relaxation of $\hat{h}$ to the quasi-equilibrium state $\hat{h}_{\rm e}^{(1)}$, while $\phi$ is frozen and $\rho$ instantly decays to $\rho^{(2)}_{\rm e}$. In this regime, we have $\gamma_{a2}\simeq (\Gamma_a)_{11}\simeq \sigma q/(4\eta)\sim q$ for $q\ll q^*$, and $\gamma_{a2}\simeq (\Gamma_a)_{11}\simeq (\kappa+kd^2\Omega_0)q^3/(4\eta ) \sim q^3$ for $q^* \ll q \ll q_{\rm mc}$. For $q\gg q_{\rm mc}$, on the other hand, $\gamma_{a2}$ is associated with the relaxation of $\rho$ towards $\rho_{\rm e}^{(1)}$, while $\phi$ is frozen and $\hat{h}$ instantly decays to $\hat{h}_{\rm e}^{(2)}$. In this regime, we have $\gamma_{a2}\simeq(\Gamma_a)_{22}\simeq k(2-\Lambda_3)q^2/(4b)\sim q^2$. From eqs.~(\ref{gammaA1low}) and (\ref{gammaA2low}), we see that the mode crossing occurs around $q\simeq q_{\rm mc}$; the fastest mode is associated with $\rho$ for $q< q_{\rm mc}$ while it is dominated by $\hat{h}$ for $q> q_{\rm mc}$. Such a mode crossing behavior between the density and the curvature was predicted by Seifert and Langer for single-component lipid bilayer membranes without any surface tension~\cite{Seifert}. 
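The behavior described above can also be reproduced directly from the matrices themselves. The following Python sketch assembles $A(q)$ from eqs.~(\ref{A11})--(\ref{A33}) and $\Gamma_a(q)$ from eq.~(\ref{gamma_a}), using the parameter values of Table~\ref{TabPara} and the parameter set (A) of fig.~\ref{FigDiagram2}(d), and prints the three relaxation rates over a range of wave numbers; the crossing of the two fastest rates appears near $q_{\rm mc}$. The helper names are ours, the script is only an illustration of the stated formulas, and the same construction applies to $\Gamma_b$ in eq.~(\ref{gamma_b}).
\begin{verbatim}
# Minimal sketch: relaxation rates gamma_{a,i}(q) as eigenvalues of
# Gamma_a(q), eq. (gamma_a), built from eqs. (A11)-(A33) with the
# parameter values of Table I and parameter set (A) of fig. 3(d).
import numpy as np

sigma, kappa, k, d, c = 1e-8, 1e-12, 70.0, 1e-7, 1e-14
eta, b, mz, Lphi = 1e-2, 2e7, 1e-7, 1.4e-9   # mz = mu + zeta
L1, L2, L3, L4, L5 = 1/16, 0.193, 15/8, 1/16, 0.233
lam1, lam2, nu = 1.0, 1.0, 1.0

Om0 = 2 + 2*nu**2*L1 + 2*nu*L2 - L3 - nu**2*L4 - 2*nu*L5
Om1 = 2 + nu*L2 - L3 - nu*L5
Om2 = 2*nu*L1 + L2 - nu*L4 - L5

def Amat(q):                                 # eqs. (A11)-(A33)
    return np.array([
        [sigma*d**2*q**2 + (kappa + k*d**2*Om0)*d**2*q**4,
         -k*d**2*Om1*q**2, -k*d**2*Om2*q**2],
        [-k*d**2*Om1*q**2, k*(2 - L3) + 2*c*q**2,
         k*(L2 - L5) + c*lam2*q**2],
        [-k*d**2*Om2*q**2, k*(L2 - L5) + c*lam2*q**2,
         k*(2*L1 - L4) + 2*c*lam1*q**2]])

def Gamma_a(q):                              # eq. (gamma_a)
    A = Amat(q)
    c1 = 1.0/(4*b + 4*eta*q + 2*mz*q**2)
    return np.vstack([A[0]/(4*eta*d**2*q),
                      c1*A[1]*q**2,
                      0.5*Lphi*A[2]*q**2])

for q in np.logspace(1, 7, 7):               # wave number in cm^-1
    # eigenvalues are real and positive in the stable region
    rates = np.sort(np.linalg.eigvals(Gamma_a(q)).real)[::-1]
    print(f"q = {q:.1e} cm^-1   gamma_a1,2,3 = {rates}")
\end{verbatim}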
In Table \ref{TabRates}(a), we present a list of the approximate expressions for $\gamma_{a1}$ and $\gamma_{a2}$ when the membrane tension is small (the threshold tension $\sigma_{\rm t}$ in the table caption is defined in eq.~(\ref{sigmac}) below). We now discuss the slowest decay rate $\gamma_{a3}$. Assuming $\hat{h}$ and $\rho$ vary much faster than $\phi$, we substitute $\hat{h}\simeq\hat{h}_{\rm e}^{(1)}$ and $\rho\simeq \rho_{\rm e}^{(1)}$ into the third row of eq.~(\ref{Dmatrixa}) to obtain \begin{equation} \frac{\partial \phi}{\partial t}\simeq -\gamma_\phi^* \phi. \end{equation} With the aid of eqs.~(\ref{he1}) and (\ref{rhoe1}), the effective decay rate $\gamma_\phi^*$ in the above equation can be obtained as \begin{equation} \gamma_\phi^*=\frac{L_\phi (\det A) q^2}{2(A_{11}A_{22}-A_{12}^2)}. \label{gammaR} \end{equation} In the small and large wave number limits, its asymptotic behaviors are \begin{equation} \gamma_\phi^*\to \left\{ \begin{array}{l } L_\phi k\tau_aq^2/2\sim q^2 \quad (q\to 0), \\ L_\phi c\Delta_\lambda q^4 \sim q^4 \quad (q\to \infty), \end{array} \right. \label{gammaPlim} \end{equation} where the reduced temperature $\tau_a$ is defined by~\cite{ReducedT} \begin{equation} \tau_a= 2\Lambda_1-\Lambda_4-\frac{(\Lambda_2-\Lambda_5)^2}{2-\Lambda_3}, \label{taua} \end{equation} and $\Delta_\lambda$ was defined before in eq.~(\ref{stab_highQ}). When the stability conditions in eqs.~(\ref{stab1})--(\ref{stab3}) are satisfied, one can show that $\tau_a$ is positive. As the unstable region is approached, $\tau_a$ becomes smaller and eventually vanishes just at the boundary. Then the anti-registered instability in fig.~\ref{FigSchematic}(a) takes place at the boundary as well as in the unstable region. The crossover wave number $q_{a}$ between the two limits in eq.~(\ref{gammaPlim}) is given by \begin{equation} q_{a}=\sqrt{\frac{k\tau_a}{2c\Delta_\lambda}}. \label{qac} \end{equation} In figs.~\ref{FigrateAlow1}(b) and \ref{FigrateAlow2}(b), we have also plotted $\gamma_\phi^*$. We see that $\gamma_\phi^*$ provides a perfect fit to the slowest mode $\gamma_{a3}$. Thus $\gamma_{a3}$ corresponds to the relaxation rate of $\phi$, while $\hat{h}$ and $\rho$ instantly change to their quasi-equilibrium values $\hat{h}_{\rm e}^{(1)}$ and $\rho_{\rm e}^{(1)}$, respectively. In Table \ref{TabRates}(c), the approximate expression for the slowest rate $\gamma_{a3}$ is summarized. In fig.~\ref{FigrateAlow1}(b), we see that the bare rate $(\Gamma_a)_{33}$ almost coincides with the effective rate $\gamma_\phi^*\simeq \gamma_{a3}$. This can be understood as follows. For the parameters used in fig.~\ref{FigrateAlow1}, the reduced temperature is approximately given by $\tau_a\simeq 2\Lambda_1-\Lambda_4$, while we have $A_{33}\simeq k(2\Lambda_1-\Lambda_4)$ when the membrane is far from the unstable region (see eq.~(\ref{A33})). We thus have $\gamma_\phi^*\simeq L_\phi A_{33}q^2/2 =(\Gamma_a)_{33}$. The crossover wave number given by eq.~(\ref{qac}) is $q_{a}=1.52\times 10^{7}$ $\mathrm{cm}^{-1}$ which is microscopic and may not be measurable in experiments. On the other hand, the parameters used in fig.~\ref{FigrateAlow2} yield $\tau_a=3.33\times 10^{-3}$ and $q_{a}=3.94\times 10^6$ $\mathrm{cm}^{-1}$. Hence the crossover from $\gamma_{a3}\sim q^2$ to $\sim q^4$ is measurable as in usual near-critical fluids~\cite{Onuki}. 
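For reference, the values of the reduced temperature $\tau_a$ of eq.~(\ref{taua}) and of the crossover wave number $q_a$ of eq.~(\ref{qac}) quoted above can be checked with the following minimal Python sketch (parameter sets (A) and (B) of fig.~\ref{FigDiagram2}(d); the variable names are ours and the script merely evaluates the stated formulas):
\begin{verbatim}
# Sketch: reduced temperature tau_a, eq. (taua), and crossover wave
# number q_a, eq. (qac), for parameter sets (A) and (B) of fig. 3(d).
import numpy as np

k, c = 70.0, 1e-14                      # Table I (cgs units)
L1, L3, L4 = 1/16, 15/8, 1/16
lam1, lam2 = 1.0, 1.0
Dlam = lam1 - lam2**2/4                 # Delta_lambda, eq. (stab_highQ)

for label, (L2, L5) in {"A": (0.193, 0.233),
                        "B": (-0.023, 0.063)}.items():
    tau_a = 2*L1 - L4 - (L2 - L5)**2/(2 - L3)    # eq. (taua)
    q_a = np.sqrt(k*tau_a/(2*c*Dlam))            # eq. (qac), cm^-1
    print(label, tau_a, q_a)
\end{verbatim}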
Note that the $q^4$-dependence at large wave numbers is not due to the coupling with the other modes, but is just a consequence of diffusion when there are squared-gradient terms in the free energy as in eq.~(\ref{Fgrad}). \subsubsection{Eigenmodes of $\Gamma_a$: moderate tension case} In figs.~\ref{FigrateAmoderate1} and \ref{FigrateAmoderate2}, we show (a) the relaxation rates $\gamma_{ai}$ and (b) the diagonal elements $(\Gamma_a)_{ii}$ of $\Gamma_a$ for a moderate surface tension, $\sigma=10^{-4}$ $\mathrm{erg}/\mathrm{cm}^2$. In these plots, all the parameters except $\sigma$ are the same as in figs.~\ref{FigrateAlow1} and \ref{FigrateAlow2}. In the whole wave number range, the decay rates can be approximated as \begin{align} &\gamma_{a1}\simeq (\Gamma_a)_{11}, \\ &\gamma_{a2}\simeq (\Gamma_a)_{22}-\frac{(\Gamma_a)_{12}(\Gamma_a)_{21}}{(\Gamma_a)_{11}}\simeq (\Gamma_a)_{22}, \\ &\gamma_{a3}\simeq \gamma_\phi^*, \end{align} where $\gamma_\phi^*$ was defined in eq.~(\ref{gammaR}). The fastest decay rate $\gamma_{a1}$ is associated with the relaxation of $\hat{h}$ to $\hat{h}_{\rm e}^{(2)}$, while $\rho$ and $\phi$ are frozen. The second decay rate $\gamma_{a2}$ corresponds to the relaxation of $\rho$ to $\rho_{\rm e}^{(1)}$, while $\phi$ is frozen and $\hat{h}$ instantly changes to $\hat{h}_{\rm e}^{(1)}$. The slowest decay mode $\phi$ relaxes with the effective decay rate $\gamma_\phi^*$, while $\hat{h}$ and $\rho$ instantly change to $\hat{h}_{\rm e}^{(1)}$ and $\rho_{\rm e}^{(1)}$, respectively. The slowest decay rate $\gamma_{a3}\simeq \gamma_\phi^*$ in figs.~\ref{FigrateAmoderate1} and \ref{FigrateAmoderate2} is almost the same as in figs.~\ref{FigrateAlow1} and \ref{FigrateAlow2} for which the membrane tension is very small (see Table \ref{TabRates}(c)). However, unlike in figs.~\ref{FigrateAlow1} and \ref{FigrateAlow2}, the mode crossing behavior between the two fast (bending and density) modes does not occur for the moderate tension case. Recently, the absence of the mode crossing behavior due to the membrane tension has been reported experimentally~\cite{Mell}, and theoretically discussed for single-component lipid bilayer membranes~\cite{JBNLM}. Since the minimum of $(\Gamma_a)_{11}/q^2$ is located around $q\sim q^*$ and $(\Gamma_a)_{22}/q^2$ is almost constant, the condition $q_{\rm mc}\simeq q^*$ gives the threshold surface tension \begin{equation} \sigma_{\rm t} \simeq \frac{1}{\kappa} \left[ \frac{k\eta (2-\Lambda_3)}{b}\right]^2, \label{sigmac} \end{equation} below which the mode crossing occurs. For $\Lambda_3=15/8$ and the other parameter values in Table~\ref{TabPara}, we can estimate $\sigma_{\rm t}\approx 1.91\times 10^{-5}$ $\mathrm{erg/cm}^2$. Tables~\ref{TabRates}(a) and (b) summarize the approximate expressions of the two fastest rates of $\Gamma_a$ for the small tension case ($\sigma<\sigma_{\rm t}$) and the moderate tension case ($\sigma>\sigma_{\rm t}$), respectively. Notice that in the small tension case, we always have $q^*<q_{\rm mc}$. \begin{table}[tbh] \caption{ Approximate expressions for the decay rates. (a) The two fastest decay rates $\gamma_{a1}$ and $\gamma_{a2}$ associated with $\Gamma_a$ for the small tension case $\sigma<\sigma_{\rm t}$. (b) The two fastest decay rates $\gamma_{a1}$ and $\gamma_{a2}$ associated with $\Gamma_a$ for the moderate (larger) tension case $\sigma>\sigma_{\rm t}$. The threshold tension $\sigma_{\rm t}$ is defined in eq.~(\ref{sigmac}). (c) The slowest decay rate $\gamma_{a3}$ associated with $\Gamma_a$ for both the small and the moderate tension cases. 
(d) The slowest decay rate $\gamma_{b2}$ associated with $\Gamma_b$ for both the small and the moderate tension cases. The characteristic wave numbers $q^*$, $q_{\rm mc}$, $q_a$ and $q_b$ are defined in eqs.~(\ref{qstar}), (\ref{qmc}), (\ref{qac}) and (\ref{qbc}), respectively. \label{TabRates}} \begin{tabular}[t]{c |c c c } \hline \hline (a) & \ $q\ll q^* $\quad & \quad $q^*\ll q\ll q_{\rm mc}$ \quad & $ q_{\rm mc}\ll q$ \quad \\ \hline \parbox[c][1.0cm][c]{0cm}{} $\gamma_{a1}$ & \multicolumn{2}{c}{$\displaystyle \frac{k(2-\Lambda_3)q^2}{4b}$} & $\displaystyle \frac{(\kappa+kd^2\Omega_0)q^3}{4\eta}$ \\ \parbox[c][1.0cm][c]{0cm}{} $\gamma_{a2}$ & \quad $\displaystyle \frac{\sigma q}{4\eta}$ \quad \quad & $\displaystyle \frac{(\kappa+kd^2\Omega_0)q^3}{4\eta}$ & $\displaystyle \frac{k(2-\Lambda_3)q^2}{4b}$ \\ \hline \hline \end{tabular} \vspace{4mm} \begin{tabular}[t]{c |c c } \hline \hline (b) & \ $q\ll q^* $\quad & \quad $q^*\ll q$ \quad \\ \hline \parbox[c][1.0cm][c]{0cm}{} $\gamma_{a1}$ & $\displaystyle \frac{\sigma q}{4\eta}$ & \quad$\displaystyle \frac{(\kappa+kd^2\Omega_0)q^3}{4\eta}$ \\ \parbox[c][0.9cm][c]{0cm}{} $\gamma_{a2}$ & \multicolumn{2}{c}{$\displaystyle \frac{k(2-\Lambda_3)q^2}{4b}$} \\ \hline \hline \end{tabular} \vspace{4mm} \begin{tabular}[t]{c |c c } \hline \hline (c) & \ $q\ll q_a $\quad & \quad $q_a \ll q$ \quad \\ \hline \parbox[c][0.8cm][c]{0cm}{} $\gamma_{a3}$ & $\frac{1}{2}L_\phi k\tau_aq^2 $ & \quad$L_\phi c\Delta_\lambda q^4$ \\ \hline \hline \end{tabular} \vspace{4mm} \begin{tabular}[t]{c |c c } \hline \hline (d) & \ $q\ll q_b $\quad & \quad $q_b \ll q$ \quad \\ \hline \parbox[c][0.8cm][c]{0cm}{} $\gamma_{b2}$ & $\frac{1}{2}L_\phi k\tau_b q^2 $ & \quad$L_\phi c\Delta_\lambda q^4$ \\ \hline \hline \end{tabular} \end{table} \begin{figure}[tbh] \includegraphics[scale=0.7]{fig7s.eps} \caption{Eigenmodes of $\Gamma_a$ for the moderate tension case; $\sigma=10^{-4}$ $\mathrm{erg}/\mathrm{cm}^2$. The parameter values are $(\Lambda_2, \Lambda_5)=(0.193, 0.233)$ as marked (A) in fig.~\ref{FigDiagram2}(d), and the membrane is not close to the anti-registered instability boundary. (a) Plots of the relaxation rates $\gamma_{ai}$ as a function of the wave number $q$. (b) Plots of the diagonal elements $(\Gamma_a)_{ii}$ of the matrix $\Gamma_a$ as a function of the wave number $q$ (dashed color lines). The effective decay rate $\gamma_\phi^*$ is plotted with a black dashed line. For comparison, the relaxation rates $\gamma_{ai}$ in (a) are also plotted with grey solid lines. } \label{FigrateAmoderate1} \end{figure} \begin{figure}[tbh] \includegraphics[scale=0.7]{fig8s.eps} \caption{Eigenmodes of $\Gamma_a$ for the moderate tension case; $\sigma=10^{-4}$ $\mathrm{erg}/\mathrm{cm}^2$. The parameter values are $(\Lambda_2, \Lambda_5)=(-0.023, 0.063)$ as marked (B) in fig.~\ref{FigDiagram2}(d), and the membrane is close to the anti-registered instability. (a) Plots of the relaxation rates $\gamma_{ai}$ as a function of the wave number $q$. (b) Plots of the diagonal elements $(\Gamma_a)_{ii}$ of the matrix $\Gamma_a$ as a function of the wave number $q$ (dashed color lines). The effective decay rate $\gamma_\phi^*$ is plotted with a black dashed line. For comparison, the relaxation rates $\gamma_{ai}$ in (a) are also plotted with grey solid lines. } \label{FigrateAmoderate2} \end{figure} \subsubsection{Eigenmodes of $\Gamma_b$} In figs.~\ref{FigrateB1} and \ref{FigrateB2}, we plot the eigenvalues and diagonal elements of $\Gamma_b$ in eq.~(\ref{gamma_b}). 
The parameters are chosen as $(\Lambda_2, \Lambda_5)=(0.193, 0.233)$ in fig.~\ref{FigrateB1} and $(0.445, 0.405)$ in fig.~\ref{FigrateB2}. These two choices are marked with (A) and (C) in fig.~\ref{FigDiagram2}(d). Since $\Gamma_b$ is a $2\times 2$ matrix, its eigenvalues $\gamma_b$ can be easily obtained as \begin{align} \gamma_b=&\frac{1}{2} \Big[ (\Gamma_b)_{11}+(\Gamma_b)_{22} \nonumber \\ & \pm \sqrt{\{ (\Gamma_b)_{11}-(\Gamma_b)_{22}\}^2+4(\Gamma_b)_{12}(\Gamma_b)_{21}} \Big] \\ \simeq & \frac{1}{2} \left[ \Tr \Gamma_b \pm \{ \Tr\Gamma_b -2(\Gamma_b)_{11}^{-1} \det\Gamma_b\}\right], \end{align} where the second equality follows from $(\Gamma_b)_{11}\gg (\Gamma_b)_{22}$ and $(\Gamma_b)_{11}^2 \gg (\Gamma_b)_{12}(\Gamma_b)_{21}$. Then we obtain approximately \begin{align} &\gamma_{b1}\simeq (\Gamma_b)_{11}, \\ &\gamma_{b2}\simeq \frac{L_\phi q^2 \det B}{2B_{11}}\equiv \gamma_{\bar\phi}^*. \label{gammaB2} \end{align} Equating the right-hand side of eq.~(\ref{Dmatrixb}) to zero, we obtain the quasi-equilibrium variables as \begin{align} &\bar\rho_{\rm e} (\bar\phi,q)=-\frac{B_{12}}{B_{11}}\bar\phi, \label{rhobare}\\ &\bar\phi_{\rm e} (\bar\rho,q)=-\frac{B_{21}}{B_{22}}\bar\rho. \label{phibare} \end{align} As in the previous subsections, the fastest decay rate $\gamma_{b1}\simeq (\Gamma_b)_{11}$ is associated with the relaxation of $\bar\rho$ to $\bar\rho_{\rm e}$, while $\bar\phi$ is frozen. However, as shown in figs.~\ref{FigrateB1} and \ref{FigrateB2}, the decay rate $\gamma_{b1}$ is very large, and our theory, in which inertial effects are neglected, may not properly describe the dynamics on such a short time scale~\cite{Seifert}. Hence we do not further discuss the wave number dependence of $\gamma_{b1}$. Nevertheless, we can discuss the slower relaxation of $\bar\phi$ because $\bar\rho$ relaxes rapidly to the quasi-equilibrium value $\bar\rho_{\rm e}(\bar\phi)$. With the aid of eq.~(\ref{rhobare}), substitution of $\bar\rho\simeq \bar\rho_{\rm e}(\bar\phi)$ into the second row of eq.~(\ref{Dmatrixb}) yields $\partial \bar\phi /\partial t\simeq -\gamma_{\bar\phi}^* \bar\phi$. From eq.~(\ref{gammaB2}), we see that the slower decay rate $\gamma_{b2}\simeq \gamma_{\bar\phi}^*$ corresponds to the relaxation of $\bar\phi$, while $\bar\rho$ instantly changes to $\bar\rho_{\rm e}(\bar\phi,q)$. In the small and large wave number limits, the asymptotic behaviors are \begin{equation} \gamma_{\bar\phi}^*\to \left\{ \begin{array}{l } L_\phi k\tau_bq^2/2 \sim q^2\quad \quad (q\to 0), \\ L_\phi c\Delta_\lambda q^4 \sim q^4\quad (q\to \infty), \end{array} \right. \label{gammaPBlim} \end{equation} where the other reduced temperature $\tau_b$ is defined by~\cite{ReducedT} \begin{equation} \tau_b= 2\Lambda_1+\Lambda_4-\frac{(\Lambda_2+\Lambda_5)^2}{2+\Lambda_3}. \label{taub} \end{equation} When the stability conditions in eqs.~(\ref{stab1}), (\ref{stab2}) and (\ref{stab4}) are satisfied, $\tau_b$ is positive in the stable region. As the unstable region is approached, $\tau_b$ becomes smaller and eventually vanishes at the boundary where the registered instability in fig.~\ref{FigSchematic}(b) starts to take place. The crossover wave number $q_{b}$ between the two limits in eq.~(\ref{gammaPBlim}) is given by \begin{equation} q_{b}=\sqrt{\frac{k\tau_b}{2c\Delta_\lambda}}. \label{qbc} \end{equation} When the system is away from the unstable region ($\tau_b=0.141$) as in fig.~\ref{FigrateB1}, we have $q_{b}=2.57\times 10^7$ $\mathrm{cm}^{-1}$ which is too large to be observed. 
However, when the system is close to the unstable region ($\tau_b=1.10\times 10^{-3}$) as in fig.~\ref{FigrateB2}, we have $q_{b}=2.27\times 10^6$ $\mathrm{cm}^{-1}$ which is measurable in experiments. In Table \ref{TabRates}(d), the approximate expressions for the slowest rate $\gamma_{b2}$ are summarized. \begin{figure}[tbh] \includegraphics[scale=0.7]{fig9s.eps} \caption{ Eigenmodes of $\Gamma_b$ when the parameter values are $(\Lambda_2, \Lambda_5)=(0.193, 0.233)$ as marked (A) in fig.~\ref{FigDiagram2}(d), and the membrane is not close to the registered instability boundary. (a) Plots of the relaxation rates $\gamma_{bi}$ ($i=1,2$) as a function of the wave number $q$. (b) Plots of the diagonal elements $(\Gamma_b)_{ii}$ ($i=1,2$) of the matrix $\Gamma_b$ as a function of the wave number $q$ (dashed color lines). The effective decay rate $\gamma_{\bar\phi}^*$ is plotted with a black dashed line. For comparison, the relaxation rates $\gamma_{bi}$ in (a) are also plotted with grey solid lines. } \label{FigrateB1} \end{figure} \begin{figure}[tbh] \includegraphics[scale=0.7]{fig10s.eps} \caption{ Eigenmodes of $\Gamma_b$ when the parameter values are $(\Lambda_2, \Lambda_5)=(0.445, 0.405)$ as marked (C) in fig.~\ref{FigDiagram2}(d), and the membrane is close to the registered instability boundary. (a) Plots of the relaxation rates $\gamma_{bi}$ as a function of the wave number $q$. (b) Plots of the diagonal elements $(\Gamma_b)_{ii}$ of the matrix $\Gamma_b$ as a function of the wave number $q$ (dashed color lines). The effective decay rate $\gamma_{\bar\phi}^*$ is plotted with a black dashed line. For comparison, the relaxation rates $\gamma_{bi}$ in (a) are also plotted with grey solid lines. } \label{FigrateB2} \end{figure} \subsection{Domain relaxation dynamics} \begin{figure}[tbh] \includegraphics[scale=0.75]{fig11s.eps} \caption{Time evolutions of $\phi(x,t)/\Delta \phi$, $\hat{h}(x,t)/\Delta \phi$ and $\rho(x,t)/\Delta \phi$ from the initial state given by eqs.~(\ref{initialR1}) and (\ref{initialR2}). The parameters are the same as in fig.~\ref{FigrateAlow2}, {\it i.e.}, $\sigma=10^{-8}$ $\mathrm{erg/cm}^{2}$ and $(\Lambda_2, \Lambda_5)=(-0.023, 0.063)$. } \label{FigTimeR} \end{figure} In this subsection, we examine the relaxation dynamics of a domain in which $\phi$ is larger than outside. When the system is in the stable region as in eqs.~(\ref{stab1})--(\ref{stab4}), such a domain should relax to a homogeneous state $(\hat{h}, \rho, \phi) =0$. Let us assume that the initial state $(\hat{h}_0,\rho_0,\phi_0)$ at $t=0$ is described by one-dimensional profiles \begin{align} &\hat{h}_0(x)=\rho_0(x)=0, \label{initialR2} \\ &\phi_0(x)=\frac{\Delta\phi}{2} \left[\phi_{\rm c}+ \tanh \left\{ \frac{ L_{\rm d}-|2x-L|}{2\ell} \right\} \right], \label{initialR1} \end{align} while these profiles are homogeneous in the $y$-direction. The profile $\phi_0(x)$ represents a patch centered at $x=L/2$, and its size and interfacial thickness are given by $L_{\rm d}$ and $\ell$, respectively. The difference of $\phi$ between the inside and the outside of the initial domain is given by $\Delta\phi$, whereas $\phi_{\rm c}$ is determined so that the spatial average of $\phi_0$ vanishes. We will not discuss the other variables $(\bar\rho,\bar\phi)$ because they are not coupled to $(\hat{h},\rho,\phi)$. 
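The initial condition can be set up numerically as in the following minimal Python sketch, which constructs $\phi_0(x)$ of eq.~(\ref{initialR1}), fixes $\phi_{\rm c}$ so that the spatial average of $\phi_0$ vanishes, and extracts its Fourier coefficients; each mode subsequently evolves according to eq.~(\ref{Dmatrixa}), as written out below. The grid resolution and the amplitude $\Delta\phi$ are arbitrary choices made only for illustration, and the variable names are ours.
\begin{verbatim}
# Sketch: initial profile phi_0(x) of eq. (initialR1), with phi_c
# chosen so that the spatial average of phi_0 vanishes, and its
# Fourier coefficients.  Lengths in nm; Delta phi is arbitrary
# because the evolution equations are linear.
import numpy as np

L, Ld, ell, d = 6000.0, 1000.0, 10.0, 1.0    # nm
Nx = 2**14
x = np.arange(Nx) * L / Nx
prof = np.tanh((Ld - np.abs(2*x - L))/(2*ell))
phi_c = -prof.mean()                         # zero spatial average
dphi = 1.0
phi0 = 0.5*dphi*(phi_c + prof)

n_c = int(L/(2*d))                    # cut-off: 2 pi n_c/L = pi/d
phi0_q = np.fft.fft(phi0)/Nx          # phi_0(q_n), n = 0, 1, ...
print(phi_c, n_c, abs(phi0_q[0]), abs(phi0_q[1]))
\end{verbatim}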
The three variables can be generally expressed as Fourier series defined by \begin{equation} g (x,t)=\sum_{n=-\infty}^{\infty} \exp \left( \frac{2\pi i n x}{L} \right) g(q_n,t), \end{equation} where \begin{equation} q_n=\frac{2\pi n}{L}. \end{equation} Let ${\bm a}_0(q_n)= (0, 0, \phi_0(q_n))$ denote the Fourier modes of the initial state given by eqs.~(\ref{initialR2}) and (\ref{initialR1}). Since the time evolution of each Fourier mode is governed by eq.~(\ref{Dmatrixa}), we can write ${\bm a}(q_n, t)=e^{-\Gamma_a(q)t}\,{\bm a}_0(q_n)$ with $q=|q_n|$. The matrix $\Gamma_a(q)$ can be diagonalized by using its eigenvalues $\gamma_{ai}(q)$ and their respective eigenvectors ${\bm w}_i(q)$ as \begin{equation} \Lambda_a(q)=W^{-1}(q) \Gamma_a(q) W(q), \end{equation} where $\Lambda_a = {\rm diag}(\gamma_{a1}, \gamma_{a2}, \gamma_{a3})$ is the diagonalized matrix and $W = ({\bm w}_1,{\bm w}_2,{\bm w}_3)$. Then ${\bm a}(x,t)$ can be generally written as \begin{equation} {\bm a}(x, t)=\sum_{n=-n_{\rm c}}^{n_{\rm c}} \exp\left( {\frac{2\pi i n x}{L}} \right) W e^{-\Lambda_a(q)t}W^{-1}{\bm a}_0(q_n), \label{TimeSol} \end{equation} where we have introduced a cut-off wave number set by the monolayer thickness $d$ \begin{equation} \frac{2\pi n_{\rm c} }{L}=\frac{\pi}{d}. \end{equation} In fig.~\ref{FigTimeR}, we present the time evolution of $\phi(x,t)$, $\hat{h}(x,t)$ and $\rho(x,t)$ obtained from eq.~(\ref{TimeSol}) by setting $L=6000$~nm, $\ell=10$~nm and $L_{\rm d}=1000$~nm. The other parameters are the same as in fig.~\ref{FigrateAlow2} and the system is close to the anti-registered instability. Notice that ${\bm a}(x,t)$ divided by $\Delta\phi$ is independent of $\Delta\phi$ since eq.~(\ref{Dmatrixa}) is linear in ${\bm a}(x,t)$. For $t\le 10^{3}$ ms, $|\hat{h}|$ and $|\rho|$ increase while $\phi$ remains almost the same. This means that, within a small time interval, non-zero $\phi$ induces the bending $\hat{h}$ and the density difference $\rho$ which were initially both zero. For $t \ge 10^3$ ms, all the three variables become smaller and almost vanish for $t \ge 10^5$ ms. The above dynamics can be roughly understood by looking at the time evolution of a Fourier mode at $q_n \simeq 2\pi /L_{\rm d}$. In figs.~\ref{FigTimeMode1}(a) and (b), the time evolutions of $|\hat{h}(q_n)|$, $|\rho(q_n)|$ and $|\phi(q_n)|$ are presented at $q_n= 3.14$ $\mu\mathrm{m}^{-1}$ for which the decay rates are $\gamma_{a1}=785$ $\mathrm{s}^{-1}$, $\gamma_{a2}=110$ $\mathrm{s}^{-1}$ and $\gamma_{a3}=0.157$ $\mathrm{s}^{-1}$. In fig.~\ref{FigTimeMode2}, $|\rho -\rho_{\rm e}^{(1)}(\phi_0)|$ and $|\hat{h}-\hat{h}_{\rm e}^{(2)}(\rho_0,\phi_0)|$ are plotted as a function of $t$ for the same wave number as in fig.~\ref{FigTimeMode1}. As for the long time behavior, $t\gg \gamma_{a2}^{-1}$ ($\gg\gamma_{a1}^{-1}$), fig.~\ref{FigTimeMode1}(a) shows that all the three variables decay exponentially with a common decay rate $\gamma_{a3}$. In this regime, we have $\hat{h}(q_n)\simeq\hat{h}_{\rm e}^{(1)}(\phi, q_n)$, $\rho(q_n)\simeq\rho_{\rm e}^{(1)}(\phi, q_n)$ and \begin{align} \phi(q_n)\simeq \phi_0(q_n)e^{-\gamma_{a3}t}\simeq \phi_0(q_n)e^{-\gamma_\phi^*t}, \label{phiTapp} \end{align} as in the discussion after eq.~(\ref{qac}). 
Substituting eq.~(\ref{phiTapp}) into eqs.~(\ref{he1}) and (\ref{rhoe1}), we then have \begin{align} &\hat{h}(q_n) \simeq\frac{ A_{12} A_{23}- A_{13} A_{22}}{ A_{11}A_{22}- A_{12}^2} \phi_0e^{-\gamma_{a3}t}, \\ &\rho(q_n) \simeq\frac{ A_{12} A_{13}- A_{11} A_{23}}{ A_{11}A_{22}- A_{12}^2} \phi_0e^{-\gamma_{a3}t}, \end{align} which decay exponentially with the common rate $\gamma_{a3}$. For shorter times, on the other hand, $|\hat{h}|$ and $|\rho|$ rapidly vary while $\phi$ stays almost constant, as shown in fig.~\ref{FigTimeMode1}(b). In figs.~\ref{FigTimeMode1} and \ref{FigTimeMode2}, the chosen $q_n=3.14$ $\mu\mathrm{m}^{-1}$ is much larger than the mode crossing wave number $q_{\rm mc}=0.438$ $\mu\mathrm{m}^{-1}$. Then the fastest decay rate $\gamma_{a1}$ corresponds to the relaxation of $\hat{h}$ to $\hat{h}_{\rm e}^{(2)}$ while $\rho$ and $\phi$ are frozen. Notice that in fig.~\ref{FigTimeMode1}(b) $\hat{h}(q_n,t)$ changes its sign around $t\approx 20$ ms. Hence $\hat{h}-\hat{h}_{\rm e}^{(2)}(\rho_0,\phi_0)$ decays exponentially with the rate $\gamma_{a1}$ for $t\ll \gamma_{a2}^{-1}$ ($\ll \gamma_{a3}^{-1}$). However, fig.~\ref{FigTimeMode2}(a) shows a slight deviation between $\hat{h}-\hat{h}_{\rm e}^{(2)}(\rho_0,\phi_0)$ and $e^{-\gamma_{a1}t}$. This is due to the fact that the ratio $\gamma_{a1}/\gamma_{a2}=7.15$ is not large enough to regard $\rho$ as a completely frozen variable. The second mode $\gamma_{a2}$ in fig.~\ref{FigTimeMode2}(b) is associated with the relaxation of $\rho$ to $\rho_{\rm e}^{(1)}$, while $\phi$ is frozen and $\hat{h}$ rapidly changes to $\hat{h}_{\rm e}^{(2)}$. Hence we have $\rho-\rho_{\rm e}^{(1)}(\phi_0) \sim e^{-\gamma_{a2}t}$ for $\gamma_{a1}^{-1} \ll t \ll \gamma_{a3}^{-1}$. \begin{figure} \includegraphics[scale=0.7]{fig12s.eps} \caption{Semilogarithmic plots of the time evolutions of $|\hat{h}(q_n,t)|/\Delta \phi$, $|\rho(q_n,t)|/\Delta \phi$ and $|\phi(q_n,t)|/\Delta \phi$ for (a) large $t$ and (b) small $t$. Here we choose $q_n=3.14$ $\mu\mathrm{m}^{-1}$. In (a), we also plot $|\phi_0(q_n)|e^{-\gamma_{a3}t}$ with a dashed line.} \label{FigTimeMode1} \end{figure} \begin{figure} \includegraphics[scale=0.7]{fig13s.eps} \caption{Semilogarithmic plots of the time evolutions of (a) $|\hat{h}(q_n,t)-\hat{h}_{\rm e}^{(2)}(\rho_0,\phi_0,q_n)|$ (solid line) and $e^{-\gamma_{a1}t}$ (dashed line), and (b) $|\rho(q_n,t)-\rho_{\rm e}^{(1)}(\phi_0,q_n)|$ (solid line) and $e^{-\gamma_{a2}t}$ (dashed line). } \label{FigTimeMode2} \end{figure} \section{Summary and Discussion} \label{summary} In this paper, we have theoretically investigated the relaxation dynamics of a binary lipid bilayer membrane by taking into account (i) the coupling between the height and the density variables, (ii) the hydrodynamics of the surrounding fluid, (iii) the frictional force between the upper and lower leaflets, and (iv) the mutual diffusion in each monolayer. In sect.~\ref{freeenegy}, we have constructed the free energy in terms of the membrane shape $h$, the total lipid density $\rho^\pm$, and the lipid density difference $\phi^\pm$ up to quadratic order. The membrane surface tension $\sigma$, which was neglected in the previous theory for single-component lipid bilayer membranes~\cite{Seifert}, and taken into account recently~\cite{JBNLM}, naturally appears in the expansion of the general free energy in eq.~(\ref{Ftot}). In sect.~\ref{dynamics}, the dynamic equations have been formulated on the basis of momentum and molecular number conservations.
In Appendix \ref{appa}, we have proved the non-negative definiteness of the dissipation in our formulation. We have also presented an alternative derivation of the dynamic equations by using Onsager's variational principle in Appendix \ref{appb}. The derived equations for binary lipid bilayer membranes are a generalization of those in the Seifert and Langer model~\cite{Seifert}. We have further obtained the relaxation equations for five variables by integrating out the velocity field of the surrounding fluid (see also Appendix C). The equations are separated into two independent sets of equations: one for $(\hat{h}, \rho, \phi)$ and the other for $(\bar\phi,\bar\rho)$. The former equations change their signs under the interchange of the upper and lower leaflets, while the latter equations are invariant. In sect.~\ref{results}, we have discussed the stability of the one phase state and found that there are two possible instabilities: the anti-registered instability of $(\hat{h}, \rho, \phi)$ and the registered instability of $(\bar\phi,\bar\rho)$. We have investigated in detail the relaxation rates of the various hydrodynamic modes. In the case of small surface tension $\sigma<\sigma_{\rm t}$ (see eq.~(\ref{sigmac})), figs.~\ref{FigrateAlow1} and \ref{FigrateAlow2} show that the mode crossing between $\rho$ and $\hat{h}$ takes place around the intermediate wave number $q_{\rm mc}$. Such a mode crossing was originally predicted for tensionless single-component lipid bilayers~\cite{Seifert}. When $\sigma>\sigma_{\rm t}$, however, the height variable $\hat{h}$ is the fastest mode in the whole wave number range, and the mode crossing does not occur~\cite{JBNLM}. Unlike single-component membranes, for which either $\hat{h}$ or $\rho$ is the slowest mode, mutual diffusion in two-component membranes is the slowest mode for both small and moderate surface tensions. While $\phi$ varies slowly, the faster variables $\hat{h}$ and $\rho$ rapidly approach their respective quasi-equilibrium states determined by $\phi$. In all the examined cases, the effective decay rate $\gamma_{a3}\simeq\gamma_\phi^*$ for $\phi$ (see eq.~(\ref{gammaR})) is smaller than the bare decay rate $(\Gamma_a)_{33}$ because of the faster slaved variables $\hat{h}$ and $\rho$. As the unstable region is approached, the slowdown of the effective rate $\gamma_{a3}$ becomes even more significant, and the crossover from $\gamma_{a3}\sim q^2$ to $\sim q^4$ behaviors may be measurable in experiments. As for the faster dynamics, the relaxation of $\hat{h}$ is controlled by the hydrodynamics of the surrounding fluid, and the corresponding decay rate is approximately given by $A_{11}/(4\eta d^2 q)$ (see eqs.~(\ref{gamma_a}) and (\ref{he})). The relaxation of $\rho$ is dominated by the inter-monolayer friction, and its decay rate is given by $A_{22}q^2/(4b)$ (see the sentences after eq.~(\ref{he})). We have also examined the relaxation of a domain that is rich in $\phi$ when the membrane is close to the unstable region. In the very early stage, the bending of the membrane is induced by a non-zero density variation of $\phi$ even though the membrane is initially flat. In the late stage of the relaxation process, all the variables decay with the common decay rate $\gamma_{a3}\simeq\gamma_\phi^*$ as mentioned above. The dynamics of $(\bar\rho,\bar\phi)$ is simpler than that of $(\hat{h}, \rho, \phi)$. The fastest variable $\bar\rho$ instantly approaches its quasi-equilibrium state $\bar\rho_{\rm e}$.
Then $\bar\phi$ relaxes with the effective decay rate $\gamma_{b2}\simeq \gamma_{\bar\phi}^*$ (see eq.~(\ref{gammaB2})) which becomes even slower as the unstable region is approached. While the kinetic parameters and some of the static parameters have been determined in Sec.~\ref{results}, the dimensionless parameters $\Lambda_i$ ($i=1,3,4,5$) in the free energy could not be estimated from the previous experimental data. However, the behaviors of the relaxation rates, which are summarized in Table \ref{TabRates} and are described in figs.~\ref{FigrateAlow1}--\ref{FigrateB2}, are not sensitive to these parameters, unless the reduced temperatures $\tau_a$ and $\tau_b$ are very close to zero ($\tau_a$ or $\tau_b$ are defined in terms of $\Lambda_i$'s in eqs.~(\ref{taua}) and (\ref{taub}), respectively). In fact, besides the parameters determined from the experimental data, these reduced temperatures are the only relevant parameters. In the case of $\Gamma_a$ (resp.~$\Gamma_b$), this is because the time scales of the different modes characterized by the diagonal elements $(\Gamma_a)_{ii}$ (resp.~$(\Gamma_b)_{ii}$) are well separated, except in the vicinity of the characteristic wave number $q\simeq q_{\rm mc}$ where the values of two fastest modes of $\Gamma_a$ become close in the low tension case ($q_{\rm mc}$ is independent of the parameters that could not be estimated). The two reduced temperatures measure the distances in the phase space from their respective critical points \cite{ReducedT}, and one can experimentally control them by varying the average lipid composition and the temperature. As discussed above, when $\tau_a$ (resp.~$\tau_b$) is close to zero, the anti-registered (resp.~registered) diffusive mode becomes very slow, and the associated rate is given by $\gamma^*_\phi$ (resp.~$\gamma^*_{\bar\phi}$). Finally, we give some remarks. (i) We have constructed our free energy as a power series expansion up to quadratic order with respect to the deformation and the densities about the reference state. Here the physical meaning and microscopic interpretation of some phenomenological coupling parameters such as $\Lambda_i$ are not so obvious. It would be ideal to construct a free energy from a microscopic model, and perform a series expansion of the free energy with respect to the densities and curvature. With such a procedure, a connection between our phenomenological parameters and the microscopic quantities can be made. Recently, an attempt has been made for a flat bilayer membrane by Williamson and Olmsted who derived a mean field free energy from a semi-microscopic lattice bilayer model. In their model, the difference in length between the two different lipid species was taken into account~\cite{Olmsted}. (ii) In real biological cells, inclusions in membranes such as proteins play essential roles. It was recently discussed that the proteins which span the bilayer give rise to a further constraint in the dynamics and an additional source of dissipation leading to anomalous diffusion~\cite{JBnew}. Furthermore, the surrounding fluid can be viscoelastic rather than purely viscous, and inclusions can be active in a sense that they consume energy and drive membranes out of equilibrium. Neglecting the bilayer structure, some authors have investigated the membrane shape fluctuations when it contains active/non-active inclusions and is surrounded by viscoelastic media~\cite{Granek,Lau,KomuraJPCM}. Generalization of our theory to such situations is also interesting. 
(iii) As we further approach the unstable region or the critical point, the dynamical non-linear coupling (mode-mode coupling) between the density variables and the velocity fields in the bilayer becomes important like in the ordinary 3D critical fluids~\cite{SKI07,Inaura,RKSI11,KellerDynamics}. It would be interesting to investigate the effects of the bilayer structure and friction on top of the mode coupling between the velocity and the density fields. \begin{acknowledgments} We thank D.\ Andelman, T.\ Hoshino, T.\ Kato, C.-Y. D. Lu, P.\ D.\ Olmsted, P.\ Sens, M. Turner, K. Yasuda for useful discussions. R.O. and S.K. acknowledge support from the Grant-in-Aid for Scientific Research on Innovative Areas ``\textit{Fluctuation and Structure}" (Grant No.\ 25103010) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan, the Grant-in-Aid for Scientific Research (C) (Grant No.\ 24540439) from the Japan Society for the Promotion of Science (JSPS), and the JSPS Core-to-Core Program ``\textit{International Research Network for Non-equilibrium Dynamics of Soft Matter}". \end{acknowledgments}
1,116,691,497,504
arxiv
\section{Introduction}\label{sec:intro} The impulsive phase of solar flares is characterised by intense emission of microwaves and hard X-rays (HXR), showing the presence of non-thermal electrons in the corona and chromosphere, respectively. These electrons are believed to be accelerated by the magnetic energy release mechanisms in the corona \citep[\textit{e.g.}][]{ZharkovaArznerBenz:2011}, although this interpretation has been contested recently \citep{FletcherHudson:2008,BrownTurkmaniKontar:2009,VaradyKarlickyMoravec:2014}. Strong ultraviolet (UV) and extreme ultraviolet (EUV) emission is also commonly observed in association with the non-thermal emission \citep[\textit{e.g.}][]{HintereggerHall:1969,EmslieNoyes:1978,HoranKreplinFritz:1982,AlexanderCoyner:2006,CoynerAlexander:2009}, indicating fast heating at the transition region and chromosphere. This heating is usually explained by the energy deposition of the accelerated electrons hitting the high-density plasma at these locations \citep{Brown:1973,Hudson:1972,BrownKarlickyMacKinnon:1990}. The increase of pressure due to heating drives the plasma upwards into the coronal loops, a process termed chromospheric evaporation. The coronal loops filled with hot plasma will be bright in soft X-rays (SXR) and EUV, peaking at temperatures between 8 and 40 MK \citep[\textit{e.g.}][]{RyanMilliganGallagher:2012}. The maximum temperature is usually reached after the impulsive phase, \textit{i.e.} after most of the released energy is deposited in the ambient plasma. The hot coronal loops eventually cool by conduction, sending energy to the lower atmosphere, followed by a long radiative cooling phase \citep[\textit{e.g.}][]{Svestka:1987}. \cite{McTiernanKaneLoran:1993}, using \textit{Yohkoh} \textit{Soft X-ray Telescope} (SXT) images, identified impulsive SXR emission associated with an HXR footpoint. Its duration was less than one minute, reaching its maximum about 20 seconds before an associated HXR sub-burst was detected by the low energy channel, and 40 seconds before the main HXR burst at the same location. This SXR burst corresponded to a sharp temperature-time profile, rising from $\approx 7$ MK to $\approx 10$ MK in $\approx 30$ seconds, followed by a fast cooling. \cite{HudsonStrongDennis:1994} reported similar observations for the limb event SOL1992-01-26, where the footpoint SXR peak time matched the HXR peak time within the instrument's temporal resolution. \cite{MrozekTomczak:2004} studied SXR impulsive brightenings in footpoints in 46 \textit{Yohkoh} events, and concluded that these can be explained as a result of collisional heating mainly by non-thermal electrons with relatively low energies. More recently, \cite{GrahamHannahFletcher:2013} presented the emission measure distributions (EMD) as a function of temperature for the footpoints of six flares, derived from EUV spectroscopic observations, and showed that the footpoint EMDs had a peak around 8 MK, indicating the presence of high temperature plasma within the footpoints. Evidence of hot ($T>1$ MK) and dense ($n_e>10^{10}~\mathrm{cm}^{-3}$) plasma at flare footpoints during the impulsive phase was also presented by several other authors \citep{MilliganDennis:2009,WatanabeHaraSterling:2010,Del-ZannaMitra-KraevBradshaw:2011,Milligan:2011,GrahamFletcherHannah:2011,FletcherHannahHudson:2013}. A comprehensive review of EUV spectroscopic observations of hot footpoints has been presented by \cite{Milligan:2015}.
While the \textit{Hinode}/{\em Extreme ultraviolet Imaging Spectrometer} \citep[EIS:][]{CulhaneHarraJames:2007} provides great spectroscopic information on the plasma across a wide temperature range, the slit-raster observational technique does not allow evaluation of the plasma evolution at the same spatial location with high temporal cadence. Imaging from the {\em Atmospheric Imager Assembly} \citep[AIA:][]{LemenTitleAkin:2012}, on board the {\em Solar Dynamics Observatory} (SDO), and methods to recover the EMDs allow us to track the plasma evolution at different regions of flares simultaneously. We present the analysis of the SOL2013-11-09 flare ribbons, recovering the temperature and emission measure and show evidence for a sudden heating to $\approx 10$ MK temperatures of the ribbon plasma during the impulsive phase. \section{Observational Data of SOL2013-11-09}\label{sec:overview} The SOL2013-11-09 flare occurred in active region NOAA 11890 near disc centre, and was characterised by two flaring ribbons plus a bright compact source between them during the impulsive phase. For this study we used data from SDO/AIA. AIA has nine filters, or passbands, in the EUV and UV range: 94, 131, 171, 193, 211, 304, 335, 1600 and 1700 \AA~ filters, taking full Sun images with a pixel resolution of 0.6 arcsecs and 12 seconds cadence for the EUV passbands and 24 seconds for the UV passbands. We also used data from the {\em Reuven Ramaty High-Energy Solar Spectroscopic Imager} \citep[RHESSI:][]{LinDennisHurford:2002}. The RHESSI imager consists of nine bi-grid rotating modulation collimators (RMCs), with a rotation period of about 4 seconds. The data from individual RMC are combined by the imaging algorithms to cover different spatial scales, from 2.3 arcsecs (RMC 1) up to 180 arcsecs (RMC 9). To avoid saturation of the detectors at times with high rate of incident photons, two-stage (``thin" and ``thick") attenuators can be automatically employed. No attenuators (state ``A0") were in during this event, a rare occurrence that gives us the opportunity to investigate the evolution of the low-energy X-rays during the impulsive phase without gaps and with full count rates. Complementary SXR data from the {\em Soft X-ray Sensor} on board of the {\em Geostationary Operational Environmental Satellite} (GOES) were also employed. GOES observes the Sun as a star in two broadband SXR channels, 1--8 \AA~ and 0.5--4 \AA, which can be used to estimate the temperature, $T$, and emission measure, EM, of the flaring plasma \citep{ThomasCrannellStarr:1985,WhiteThomasSchwartz:2005}. In Figure \ref{fig:overview} we summarise some aspects of the event. Figure \ref{fig:overview}a shows RHESSI HXR count rates at several energy bands, where the high-energy bands (9--12, 12--25 and 25--50 keV) show a clear impulsive peak with a maximum at 06:25:46~UT, and are associated with bremsstrahlung emission from non-thermal electrons, as we discuss in Section \ref{sec:rhessi}. A second, weaker and less impulsive peak is seen around 06:27:20~UT. The lower energy bands (below 9 keV), associated with thermal bremsstrahlung, show a gradual rise starting slightly after 06:22~UT, and also a pronounced peak simultaneous with the high-energy channels. Both GOES channels, 1--8 \AA~ and 0.5--4 \AA~ (Figure \ref{fig:overview}b) have a steep rise, with a ``shoulder'' at the same time as the HXR peak. 
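For orientation only, the isothermal estimate of $T$ and EM from the ratio of the two GOES channels can be sketched as follows; the response curves in this sketch are crude, made-up placeholders standing in for the calibrated GOES transfer functions that are actually used to derive the values discussed below.
\begin{verbatim}
import numpy as np

# Placeholder temperature grid and channel responses per unit EM.
# Real analyses use tabulated GOES/CHIANTI responses; these arrays
# only illustrate the shape of the ratio calculation.
T_MK = np.linspace(4.0, 30.0, 500)
resp_long = 1e-55 * T_MK**1.8       # 1-8 A channel   (made up)
resp_short = 1e-57 * T_MK**3.2      # 0.5-4 A channel (made up)

def goes_t_em(flux_long, flux_short):
    """Isothermal T [MK] and EM [cm^-3] from the two channel fluxes."""
    ratio_obs = flux_short / flux_long
    ratio_model = resp_short / resp_long      # monotonic in T here
    i = int(np.argmin(np.abs(ratio_model - ratio_obs)))
    return T_MK[i], flux_long / resp_long[i]
\end{verbatim}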
Spatially integrated EUV emission was obtained by summing the pixels of AIA images over the whole flare region, and is shown in Figure \ref{fig:overview}c. The 171 \AA~emission is well associated with the high-energy HXR, with a main impulsive peak around 06:25:46~UT and a secondary peak around 06:27:20~UT. Similar characteristics are observed at 193, 211, 304, 1600, and 1700 \AA. Emission in the 94 and 131 \AA\ passbands is similar to the GOES SXR channels: a sharp rise matching the HXR peak, followed by a more gradual rise. These AIA passbands are sensitive to emission from hot plasma: the main contribution to the 94 \AA~ passband is the Fe {\sc xviii} 93.93 \AA\ emission line, formed at $\log T=6.8$, while the dominant contribution to the AIA 131 \AA\ passband during flares comes from Fe {\sc xxi} $128.75$ \AA~ ($\log T = 7.1$) \citep{ODwyerDel-ZannaMason:2010,BoernerTestaWarren:2014}. These main contributions were verified using the \textit{EUV Variability Experiment} \citep[EVE:][]{WoodsEparvierHock:2012} data for this event by \cite{Simoes:2015}. \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_overview.eps}} \caption{SOL2013-11-09 flare overview. (a) RHESSI HXR count rates (energy bands indicated in the figure). (b) GOES flux at 1--8 and 0.5--4 \AA \ channels. (c) Spatially integrated emission of AIA passbands 94, 131, 171 \AA. {\em Bottom:} (d) Temperature, $T$, and emission measure, EM, determined from GOES and RHESSI. In all frames, the vertical dotted line indicates the HXR peak.} \label{fig:overview} \end{figure} \section{SDO/AIA Data Analysis} \label{sec:aia} \subsection{Impulsive EUV Emission from Flaring Ribbons} \label{sec:dem} We will now focus our analysis on the impulsive phase, 06:22 to 06:30~UT, when the impulsive ribbon emission is observed. AIA images { (see Figures \ref{fig:aia} and \ref{fig:aia2})} reveal two ribbons as the brightest features in all AIA EUV/UV passbands during the impulsive phase of the event. { The ribbons are located in regions with opposite magnetic field polarity, as can be seen in Figure \ref{fig:hmi}, which shows the line-of-sight (LOS) photospheric magnetic field obtained by SDO/\textit{Helioseismic and Magnetic Imager} \citep[HMI:][]{Scherrer:2012}.} Coronal loops become the dominant features at 94, 131 and 335 \AA \ after the main impulsive peak of the flare. The bright source located between the ribbons, visible at the impulsive phase in all AIA wavelengths and also seen in HXR (Section \ref{sec:rhessi}), was analysed in detail by \cite{Simoes:2015}. They characterised this source as the main energy release site of this event. We summarise their findings as follows: 1) the source is located in the corona (possibly low down), as indicated by its filamentary shape along the loops seen in EUV/UV images, the lack of hot loops connecting the region after the impulsive phase, and the weak and featureless photospheric magnetic field at the same location (see Figure \ref{fig:hmi}). 2) it has intense and impulsive EUV and HXR emission. 3) consistently, the source is found to be dense ($\log n=11.50\pm 0.82$) and hot ($T \approx 12\sim16$ MK). 4) strong red-shifts seen in many EUV emission lines observed by \textit{Hinode}/EIS, including Fe {\sc xii} and Fe {\sc xxiv}, indicate plasma downflows of 40--250 km s$^{-1}$, which, along with plasma outflows observed in AIA images, are interpreted as plasma outflows along the magnetic loops.
\begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_maps_aia.eps}} \caption{SDO/AIA images at 94, 131, 171 and 1600 \AA\ filters of SOL2013-11-09, at four times during the impulsive phase.} \label{fig:aia} \end{figure} \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_maps_aia2.eps}} \caption{SDO/AIA images at 193, 211, 335 and 304 \AA\ filters of SOL2013-11-09, at four times during the impulsive phase.} \label{fig:aia2} \end{figure} \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_hmi_ribbon.eps}} \caption{SDO/HMI line-of-sight (LOS) magnetogram, overlaid by SDO/AIA 1600 \AA\ contours at 500 DN s$^{-1}$, showing the position of the ribbons and the coronal source, analysed by \cite{Simoes:2015}.} \label{fig:hmi} \end{figure} We investigate the emission from the ribbons and the coronal source (for context) separately by marking regions of interest (ROI) associated with each of the three main sources as shown in Figure \ref{fig:aia_roi}: East (orange) and West (green) ribbons, coronal source (blue). We obtained lightcurves for all AIA passbands by summing up the pixels for each ROI, which are shown in Figure \ref{fig:aia_peaks}. We also show RHESSI HXR 15-25 keV counts for comparison. The UV (1600 and 1700 \AA), 304 \AA, and the ``warm'' EUV (171, 193, 211 \AA) channels have similar enhancements during the impulsive phase, showing impulsive emission from both ribbons and the coronal source. The bursts are coincident with the HXR main peak (06:25:46~UT), within the instrumental cadence. A secondary, less impulsive HXR peak at 06:27:14~UT is also well-associated with most AIA channels. The peaks are simultaneous at the three ROIs in all nine AIA filters, within the instrumental cadence. After about 06:30~UT the coronal source fades out at all wavelengths. At 94 and 131 \AA \ the coronal source ROI is dominated by the emission from the coronal loops. \begin{figure} \centerline{\includegraphics[width=\textwidth,angle=0]{map_roi.eps}} \caption{a) Regions of interest (ROI) defined for the AIA images: East (orange) and West (green) ribbons, and coronal source (blue).} \label{fig:aia_roi} \end{figure} \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_aia_lc.eps}} \caption{SDO/AIA lightcurves for EUV/UV filters, (a) 94 \AA, (b) 131 \AA, (c) 171 \AA\ and (d) 1600 \AA, for each region of interest (ROI), as defined in Figure \ref{fig:aia_roi}: East (orange) and West (green) ribbons, and coronal source (blue). RHESSI 15-25 keV counts (grey) are shown as a reference of the impulsive non-thermal emission, with the time of its maximum shown by the vertical dotted line.} \label{fig:aia_peaks} \end{figure} \subsection{Emission Measure Distributions} To investigate the evolution of the plasma at the flaring ribbons, we apply a method of regularised inversion to the AIA data developed by \cite{HannahKontar:2012} to obtain the {\em differential emission measure} (DEM), $\xi(T)={n_e}^2\mathrm{d}h/\mathrm{d}T$ $[\mathrm{cm}^{-5}$ K$^{-1}]$, defining the quantity of emitting material as a function of temperature, $T$, along the given line-of-sight, $h$, with an average density $n_e$. 
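To make the inversion problem concrete, the forward model that relates the DEM to the observed count rates can be sketched as below; the response functions here are random placeholders standing in for the calibrated AIA responses, and the regularised inversion (not shown) is the step that recovers $\xi(T)$ from the six measured data numbers.
\begin{verbatim}
import numpy as np

# Temperature grid (log10 T) and an assumed DEM xi(T) [cm^-5 K^-1]
logT = np.linspace(5.7, 7.3, 33)
T = 10.0**logT
dT = np.gradient(T)
xi = 1e22 * np.exp(-0.5 * ((logT - 6.3) / 0.15)**2)   # toy DEM

# Placeholder response functions K_i(T) [DN cm^5 s^-1 pix^-1] for six
# optically thin EUV channels (the real ones are tabulated functions)
rng = np.random.default_rng(0)
K = np.abs(rng.normal(1e-27, 3e-28, size=(6, T.size)))

# Forward model: DN_i = sum_T K_i(T) xi(T) dT
dn = K @ (xi * dT)
\end{verbatim}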
We employed up-to-date AIA temperature response functions \citep{BoernerEdwardsLemen:2012,BoernerTestaWarren:2014}, which have empirical corrections for the missing emission lines from the CHIANTI v7.1.3 database \citep{DereLandiMason:1997,LandiYoungDere:2013}, time-dependent response corrections for each channel due to degradation of the filters, and normalisation to ensure agreement with SDO/EVE spectroscopic full-disk observations. The regularisation method also provides the DEM uncertainty and effective temperature resolution. To obtain the DEM it must be assumed that the emitting plasma is optically thin, in local thermodynamic equilibrium, and in ionisation equilibrium. In the case of a flare ribbon, where the emitting plasma is both hot and dense, this may hold, although departures from equilibrium and optical depth effects should never be discounted completely \cite[see][for further discussion]{GrahamHannahFletcher:2013}. Continuum emission may also contribute to the AIA passbands \citep{ODwyerDel-ZannaMason:2010}; however, this contribution is likely to be small for C class flares and can probably be neglected \citep{MilliganMcElroy:2013}. We point the reader to \cite{HannahKontar:2012} for an extended explanation of the method and comparison with other available methods. We applied the method to the average emission inside each ROI indicated in Figure \ref{fig:aia_roi} throughout the impulsive phase. Integrating the DEM over fixed temperature intervals gives the emission measure distribution, $\mathrm{EMD}=n^2h$, in the more practical units of cm$^{-5}$ \cite[\textit{e.g.}][]{GrahamHannahFletcher:2013,HannahKontar:2013}. The resulting $\mathrm{EMD}$ for the East and West ribbons and coronal source for selected times are shown in Figure \ref{fig:emds} along with the EM-{\em loci} curves for each filter, which represent the $\mathrm{EMD}$ for an isothermal plasma, \textit{i.e.} the maximum theoretical EM. The pre-flare EMD is shown by the dashed line with a peak around $\log T = 6.3 \sim 6.4$ and falling off towards higher temperatures. A second peak is visible around $\log T=6.9 \sim 7.1$, most notable in the East ribbon; however, the 94 and 131~\AA\ AIA channels both contain contributions from lines at approximately 1 MK and 10 MK, and therefore it is likely that this apparent hot emission is in fact much cooler. The $\mathrm{EMD}$ for the three sources share a similar development: a pre-flare EMD which evolves with an overall increase of the emission measure at all temperatures in the AIA passband response range (roughly $5.7 < \log T < 7.3$), a stronger increase in high temperature emission at $\log T \simeq 7.0$, and the formation of a low temperature ``shoulder'' just below $\log T = 6.0$. \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_emd_samples.eps}} \caption{Emission measure distribution $\mathrm{EMD}$ (cm$^{-5}$) for the East (orange) and West (green) ribbons and coronal source (blue) at selected times from the early phase into the impulsive phase. Pre-flare (06:20:19~UT) EMD are shown by the grey dashed line for comparison. { The EM-\textit{loci} curves for each filter are indicated by different colours and identified in the top-left panel.} } \label{fig:emds} \end{figure} We then integrated the $\mathrm{EMD}$ at temperature ranges 0.5--1.5 MK, 1.8--3.2 MK and 7.9--12.6 MK, which show the most prominent peaks, to obtain the column emission measure, $\mathrm{EM}_c=n^2h$~$[\mathrm{cm}^{-5}]$, in each range.
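A minimal sketch of this binning step, with an arbitrary DEM array standing in for the regularised solution, is the following.
\begin{verbatim}
import numpy as np

def em_in_band(logT, xi, T_lo, T_hi):
    """Column EM [cm^-5] from DEM xi [cm^-5 K^-1] over [T_lo, T_hi] in K."""
    T = 10.0**logT
    mask = (T >= T_lo) & (T <= T_hi)
    return np.trapz(xi[mask], T[mask])

# Arbitrary DEM as a stand-in for the inverted solution
logT = np.linspace(5.7, 7.3, 33)
xi = 1e22 * np.exp(-0.5 * ((logT - 6.3) / 0.2)**2)
bands = [(0.5e6, 1.5e6), (1.8e6, 3.2e6), (7.9e6, 12.6e6)]
em_c = [em_in_band(logT, xi, lo, hi) for lo, hi in bands]
\end{verbatim}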
We did not attempt this integration at temperatures around $\log T= 6.6$, as this range is not very well constrained by the AIA filters. In order to compare these results with the volume emission measure, $\mathrm{EM}=n^2V~[\mathrm{cm}^{-3}]$, values obtained from GOES and RHESSI, we converted $\mathrm{EM}_c$ to $\mathrm{EM}$ by multiplying the values by the projected area, $A$, of the sources, \textit{i.e.} making $V=hA$. The projected area $A$ of the emitting plasma was estimated by finding the number of pixels inside each ROI with a value above a determined threshold. Here we used 94 \AA\ images and chose a threshold value of 35 DN s$^{-1}$ pixel$^{-1}$ that captures the weaker but relevant emission at the early phase, and also the stronger emission at the impulsive phase. We checked whether the emitting pixels at the other AIA filters, which are sensitive to cooler plasma temperatures, co-aligned with the emitting area at 94 \AA, generally finding a good agreement within a couple of pixels. The time evolution of the EM (now in cm$^{-3}$) at the three temperature ranges for the three sources is shown in Figure \ref{fig:em_time}, along with the values from GOES and RHESSI obtained from full-Sun observations. We subtracted the EM pre-flare values to consider the flare excess only, taking the pre-flare values as the mean in the interval 6:00--6:20~UT. Looking at Figures \ref{fig:em_time}a-c in the early phase (6:22:00--6:25:20~UT), the EM for the two ribbons in all three temperature bands starts to rise very slowly from about 6:22~UT (clearly visible on a logarithmic scale, not shown here), with a faster enhancement after 6:24:40~UT. During the main impulsive phase (6:25:20--6:26:40~UT) the EM in the three temperature ranges for all the three sources rises impulsively, peaking around 6:25:40--6:26:20~UT. The GOES and RHESSI emission measures also show a steeper enhancement. This phase coincides with the main HXR peak. The rapid increase, followed by a decrease, of the EM implies fast heating and cooling (or removal) of the plasma, and is more pronounced at the East ribbon. After 6:26:40~UT, at 7.9--12.6 MK, the EM keeps rising, which is consistent with loop filling by chromospheric evaporation. AIA images at 94, 131 and 335 \AA \ show the enhancement of bright loops in this phase, and the increase is also supported by the GOES and RHESSI EM values. At lower temperature ranges, the EM peak is more pronounced, showing a fast increase and decrease in the amount of material below $\approx 3$ MK. This behaviour is similar to that of emission lines observed by SDO/EVE, formed at chromospheric and transition region temperatures, $4.2 < \log T < 5.7$, which show impulsive emission in time with the HXR. In Figure \ref{fig:em_time}d we show the lightcurves of C {\sc iii} 977 \AA, O {\sc iv} 554 \AA~and O {\sc v} 629 \AA, noting that other lines formed at similar temperatures display similar behaviour, namely Ne {\sc vii} 465 \AA, O {\sc iii} 526 \AA, He {\sc i} 584 \AA, O {\sc iii} 599 \AA, O {\sc iv} 790 \AA, O {\sc vi} 1032 \AA, with some being noisier and less impulsive. Simulations by \cite{FisherCanfieldMcClymont:1985} show that the entire flare chromosphere cools rapidly on a radiative timescale after the heating is turned off. However, in this flare, after the main HXR peak there is still HXR and EUV emission present, indicating that the energy deposition is still occurring.
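Returning to the conversion from column to volume emission measure described above, a short sketch (with a synthetic 94~\AA\ cut-out in place of the real data, and an approximate pixel scale of 0.6 arcsec $\approx$ 435 km near disc centre) reads:
\begin{verbatim}
import numpy as np

PIX_AREA_CM2 = (0.6 * 7.25e7)**2     # 0.6 arcsec pixel, ~725 km per arcsec

def volume_em(em_column, img94, threshold=35.0):
    """Convert EM_c [cm^-5] to EM [cm^-3] using the thresholded 94 A area."""
    n_pix = np.count_nonzero(img94 > threshold)   # threshold in DN/s/pixel
    area = n_pix * PIX_AREA_CM2                   # projected area A [cm^2]
    return em_column * area

# Synthetic stand-in for a 94 A region-of-interest cut-out
img94 = np.random.default_rng(1).exponential(10.0, size=(80, 80))
print(volume_em(3e29, img94))
\end{verbatim}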
\begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_emd.eps}} \caption{Time evolution of the EM (converted to $\mathrm{cm}^{-3}$ for comparison with GOES and RHESSI results) at three temperature ranges, (a) 0.5--1.4, (b) 1.8--3.2, and (c) 7.9--12.6 MK, for East ribbon, West ribbon, coronal source from AIA from pre-flare into the impulsive phase. (d) SDO/EVE lightcurves of selected emission lines formed at transition region temperature ($\log T \approx 5$). The vertical dotted line indicates the time of the HXR peak.} \label{fig:em_time} \end{figure} \section{RHESSI Hard X-rays Observations}\label{sec:rhessi} We now employ RHESSI HXR observations to characterise the hot plasma and the non-thermal electrons. First, we evaluate the HXR spectra during the impulsive phase, with a 4 second time resolution. RHESSI spectra were fitted with an isothermal plus a single power-law thick-target model, using \textit{Object Spectral Executive} (OSPEX) software \citep{SchwartzCsillaghyTolbert:2002}. No attenuator was active during the course of the event (state A0). The non-thermal power-law is assumed to originate from a cold, collisional thick-target \citep{Brown:1971}. The fitting parameters for the non-thermal electrons derived from RHESSI are shown in Figure \ref{fig:rhessi}. We will use these parameters to derive the collisional beam heating in Section \ref{sec:heating}. The thermal plasma parameters (EM and $T$) are shown in Figure \ref{fig:overview} along with the same parameters estimated from GOES data. \begin{figure} \centerline{\includegraphics[angle=0,width=0.95\textwidth]{fig_rhessi_fits.eps}} \caption{Spatially unresolved RHESSI spectral analysis results for the non-thermal electrons: $F_\mathrm{tot}$, the total electron rate above the low energy cutoff $E_c$ and spectral index $\delta$.} \label{fig:rhessi} \end{figure} We applied the imaging spectroscopy technique for the three main HXR sources observed associated with the EUV ribbons and coronal source, indicated in Figure \ref{fig:imsp_roi}. Due to RHESSI dynamic range, it was not possible to do imaging spectroscopy for the West ribbon. We constructed RHESSI Clean images for 14 energy bands, logarithmically spaced between 3 and 40 keV, using detectors 3 to 8, integrated during the main impulsive phase 06:23:34 -- 06:26:42~UT. The spectra were fitted with an isothermal plus thick-target model, shown in Figure \ref{fig:imsp}, and the resulting parameters are found in Table \ref{tab:imsp}. For comparison, we also fitted the spatially unresolved full Sun spectrum integrated for the same time interval. The thermal and non-thermal properties of the three HXR sources are very similar, confirming that the ribbon plasma is above 10 MK during the impulsive phase, as indicated by our AIA analysis in Section \ref{sec:dem}. Also, the non-thermal properties are in agreement with the findings of \cite{Simoes:2015} { who showed that the coronal source is 9--18 arcsecs long, with a plasma density of $\log n=11.5$ and thus being collisionally thick for electrons with energies of up to 45--65 keV.} \begin{figure} \centerline{\includegraphics[angle=0,width=0.99\textwidth]{imsp_roi.eps}} \caption{(a) RHESSI HXR 23--27 keV contours overlaid on an SDO/AIA 171 \AA\ image (inverted colours) with (red) dashed boxes indicating the regions for imaging spectroscopy. 
(b) RHESSI HXR 4--9 keV contours obtained with Clean (red) and MEM NJIT (dark blue) overlaid on an SDO/AIA 131 \AA~ image (inverted colours).} \label{fig:imsp_roi} \end{figure} \begin{figure} \centerline{\includegraphics[angle=0,width=0.99\textwidth]{imsp_spec.eps}} \caption{RHESSI photon spectra for the three main HXR sources indicated in Figure \ref{fig:imsp_roi} and spatially integrated, for the interval 06:23:34 -- 06:26:42~UT.} \label{fig:imsp} \end{figure} \begin{table} \caption{RHESSI imaging spectroscopy results.} \label{tab:imsp} \begin{tabular}{lllll} \hline & EM $\times 10^{47}\mathrm{cm}^{-3}$ & $T$[MK] & $F_{35} 10^{35} s^{-1}$ & $\delta$ \\ \hline East ribbon &$1.0 \pm 0.5$ & $10 \pm 1 $&$ 1.0\pm 0.3 $&$ 5.7 \pm 0.3$ \\ Coronal &$0.6 \pm 0.3$ & $12 \pm 1 $&$ 0.7 \pm 0.2 $&$ 5.2 \pm 0.3$ \\ West ribbon&$1.0 \pm 0.7$ & $10 \pm 1 $&$ 0.5 \pm 0.2 $&$ 4.9 \pm 0.3$ \\ Full Sun &$1.33 \pm 0.02$ & $11.81 \pm 0.05 $&$ 2.84 \pm 0.04$ &$ 5.61 \pm 0.02$ \\ \hline \end{tabular} \end{table} \section{Discussion}\label{sec:discussion} \subsection{Impulsive Heating of Ribbons} We now investigate the evolution of the properties of the thermal plasma at the East ribbon and coronal source compared to the full Sun spectral results. The presence of low-energy HXR from thermal bremsstrahlung from the ribbon sources is shown by imaging spectroscopy results, and can also be seen directly in the reconstructed images. HXR 4--9 keV emission associated with both ribbons (and coronal source) is shown in Figure \ref{fig:imsp_roi}b. We obtained images using two different imaging algorithms, Clean and the Maximum Entropy Method of the New Jersey Institute of Technology \citep[MEM-NJIT:][]{SchmahlPernakHurford:2007}, to confirm that the sources were not spurious artifacts of the algorithms. MEM-NJIT has a tendency to over resolve the sources, but in this case it works to confirm the overall spatial distribution of the low-energy HXR emission over part of the ribbons. We then constructed Clean maps (with \verb!Clean_beam_width!\footnote{Arbitrary factor applied to the beam convolved with Clean sources. Values between 1.5--2.0 tend to give better results when compared to different imaging algorithms \citep[see ][ and references therein.]{SimoesKontar:2013} } = 1.5; detectors 3--8) for nine time bins in the interval 06:22:22 and 06:27:51~UT. The spectra were fitted with an isothermal plus thick-target component. The latter was included to achieve a better fitting for the temperature, as fluxes around 8--10 keV and above could not be fitted with a thermal component only, requiring a non-thermal tail. In Figure \ref{fig:spot_em_t} we show the EM and $T$ for the East ribbon (orange), coronal source (blue) and full Sun (grey). Although the uncertainties in the imaging spectroscopy method are larger due to the smaller time intervals considered, the temperature peak around 06:25:40~UT is evident for both East ribbon and coronal source, in good agreement with the full Sun values from both RHESSI and GOES (see Figure \ref{fig:overview}d). {The evolution of the plasma temperature in the impulsive phase of this event is in fact quite unusual. The plasma temperature from GOES and RHESSI (Figure \ref{fig:temprdecay}) show that the temperature peak (06:25:38~UT) occurs about 8 to 10 seconds\footnote{Given the time resolution of GOES ($\approx$ 2 $s$) and RHESSI ($\approx$ 4 $s$)} before the HXR peak (06:25:46~UT). 
Even considering the different instrumental responses from GOES and RHESSI, both sample the hottest plasma present in flares, weighted by the EM \citep{RyanOFlannagainAschwanden:2014}, indicating that this temperature peak reveals an impulsive heating followed by a fast cooling process. The temperature decrease from 13 MK to 11 MK in about two minutes is consistent with conduction losses, with a time scale of about 14 to 38 seconds \citep[for the coronal source,][]{Simoes:2015}. The $e$-folding cooling time for the (full Sun) temperatures in Figure \ref{fig:temprdecay} is $\tau=$ 35\,--\, 55 seconds. } \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_spot_em_t.eps}} \caption{EM and $T$ from RHESSI imaging spectroscopy for the coronal source (blue crosses) and East ribbon source (orange). The EM and $T$ from the unresolved spectral fitting are also shown (grey).} \label{fig:spot_em_t} \end{figure} Since the spatial resolution of RHESSI is lower than that of AIA, we confirm the location of the highest EM by applying the AIA-DEM method to each pixel in AIA images, following \cite{HannahKontar:2013}. In Figure \ref{fig:demmap} we show EM maps from the AIA-DEM method, where the largest concentrations ($\mathrm{EM}_c > 10^{29.5}$ cm$^{-5}$) of hot plasma ($T>8$ MK) are near the flare ribbons and the coronal source. \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{fig_aia_demmap.eps}} \caption{EM maps obtained from the AIA-DEM method \citep{HannahKontar:2012}, for temperature ranges 2--3, 6--8, 8--11 and 11--14 MK, at four times during the impulsive phase. The arrows (from left to right) in panel {\em k} indicate the positions of the East ribbon, coronal source and West ribbon.} \label{fig:demmap} \end{figure} EM maps (Figure \ref{fig:demmap}) obtained with the AIA-DEM method also suggest the presence of a short-lived, high-EM ribbon ($\mathrm{EM}_c \approx 10^{30}$ cm$^{-5}$) at 11--14 MK near the HXR peak time at 06:25:46~UT. \begin{figure} \centerline{\includegraphics[angle=0,width=\textwidth]{tempr_decay.eps}} \caption{Plasma temperature inferred from RHESSI (blue) and GOES (orange) data, peaking about 10 seconds before the RHESSI HXR counts at 3--6 keV (green) and 25--45 keV (violet). The values of the $e$-folding cooling time, $\tau$ (in seconds), are indicated in the figure.} \label{fig:temprdecay} \end{figure} The gradual rise of the EM values from the AIA-DEM (Figure \ref{fig:em_time}) starting about 06:24~UT shows the filling of the loops with hot material, but the sudden enhancement between 06:25:46 and 06:25:55~UT comes from this impulsively heated material, with the two components then contributing to the overall EM. The fast cooling of the 13 MK source can only be attributed to conduction to the lower atmosphere, as radiative cooling is much slower than the observed $e$-folding time, of about 35\,--\,55 seconds. It is not clear what the source of energy producing the temperature peak is; however, we speculate a scenario in which a rise in the plasma temperature leads to the acceleration of particles. \subsection{Energy Budget at the 10 MK Ribbon}\label{sec:heating} We now examine the energy budget of the 10 MK plasma in the East ribbon. For that, we consider a slab of plasma at $T=10$ MK in the lower atmosphere (\textit{i.e.} at transition region heights), with a thickness $L$, collisionally heated by the non-thermal electrons and balanced by radiative and conductive losses.
Following \cite{FletcherHannahHudson:2013}, we calculate the energy budget of the 10 MK plasma, considering radiative and conductive losses and the collisional energy loss of the non-thermal electrons as the energy input mechanism. The total collisional power input $P_{\mathrm{coll}}$ [erg s$^{-1}$] to the plasma can be inferred from the collisional thick-target model by integrating the energy loss \citep{Emslie:1978,FletcherHannahHudson:2013} \begin{equation} P_{\mathrm{coll}}=F_\mathrm{tot}E_c\frac{\delta-1}{\delta-2}\left[1-\left(\frac{\delta}{2}-1 \right)x_c^{1-\delta/2}B(x_c;\frac{\delta}{2}-1,\frac{3}{2}) \right], \end{equation} where $B$ is the incomplete beta function, $x_c={3KN}/{E_c^2}$, and $K=2\pi e^4$, and \begin{equation} F_\mathrm{tot}=\int_{E_c}^\infty F_0E_0^{-\delta}\,dE_0, \end{equation} where $F_\mathrm{tot}$ is the total number of electrons per second above $E_c$ injected into the source. The values for $F_\mathrm{tot}$, $\delta$ and $E_c$ are taken from the RHESSI spectral analysis (Figure \ref{fig:rhessi}). In order to estimate the time evolution of these parameters for the East ribbon source, we used the results from the full Sun spectroscopy and assumed that about a third of the total number of electrons are associated with each of the three HXR sources, based on the HXR imaging spectroscopy results in Table \ref{tab:imsp}. The column depth, $N$, can be estimated from the column emission measure, EM$_c$, inferred with the AIA-DEM analysis, as $N = n_eL \simeq (\mathrm{EM}_c\, L)^{1/2}$, for a uniform source of thickness $L$. The hot plasma will lose energy by conduction, along the field direction $z$, to cooler layers of the atmosphere at a rate $L_\mathrm{cond}$ [erg s$^{-1}$]. We estimate the conductive losses following \cite{BattagliaFletcherBenz:2009}, considering the case of flux-limited conduction: \begin{equation} L_{\mathrm{cond}}=\varrho(x)\kappa_0T^{5/2}\frac{dT}{dz}A, \label{eq:cond} \end{equation} where the factor $\varrho(x)$, a function of $x=\log(l_\mathrm{mfp}/L)$, reduces the classical \cite{Spitzer:1965} conduction coefficient $\kappa_0=10^{-6}$ erg cm$^{-1}$ s$^{-1}$ K$^{-7/2}$ to its flux-limited value \citep{Campbell:1984}. \cite{BattagliaFletcherBenz:2009} fitted the values of $\varrho$ published by \cite{Campbell:1984} as $\varrho (x)=1.01 \mathrm{e}^{-0.05(x+6.63)^2}$. This condition applies because the chromospheric scale length is small and the electron collisional mean free path, $l_\mathrm{mfp}=5.21\times 10^3 T^2/n_e$, is significant compared to the temperature scale length. Here, we approximate $dT/dz$ by $T/L$ and set $T=10$ MK and $L=2000$ km. The optically thin hot plasma radiates energy at a rate $L_\mathrm{rad}$ that can be estimated by \citep{RosnerTuckerVaiana:1978}: \begin{equation} L_\mathrm{rad}=10^{-17.73}\mathrm{EM}~T^{-2/3}, (10^{6.3}<T<10^7~\mathrm{K}). \end{equation} Using the EM values derived from the AIA-DEM analysis (Figure \ref{fig:em_time}c), and the area $A$ of the 10 MK source estimated as defined in Section \ref{sec:dem}, we found that the collisional heating is not sufficient to balance the estimated conductive losses, as shown in Figure \ref{fig:power}. \cite{FletcherHannahHudson:2013} found a substantial amount of plasma at 10 MK in the flare ribbons during the early impulsive phase of the flare SOL2010-08-07T18:24, with an average column EM of a few times 10$^{28}$ cm$^{-5}$.
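For rough orientation, the three terms of this power budget can be assembled as in the sketch below; all numerical inputs (beam parameters, column EM, area) are placeholders of plausible magnitude rather than the fitted values, the quoted expression for $P_{\mathrm{coll}}$ is evaluated assuming $x_c \le 1$, and a base-10 logarithm is assumed in $\varrho(x)$.
\begin{verbatim}
import numpy as np
from scipy.special import betainc, beta as beta_fn

def p_coll(F_tot, E_c_keV, delta, N):
    """Collisional power [erg/s] deposited in a column depth N [cm^-2]."""
    E_c = E_c_keV * 1.602e-9                 # keV -> erg
    K = 2.0 * np.pi * (4.803e-10)**4         # K = 2 pi e^4 (cgs), as in the text
    x_c = 3.0 * K * N / E_c**2               # assumed <= 1 here
    a, b = delta / 2.0 - 1.0, 1.5
    B_inc = betainc(a, b, x_c) * beta_fn(a, b)   # non-regularised B(x_c; a, b)
    bracket = 1.0 - a * x_c**(-a) * B_inc
    return F_tot * E_c * (delta - 1.0) / (delta - 2.0) * bracket

def l_cond(T, n_e, L, A):
    """Flux-limited conductive loss [erg/s] for a slab of thickness L [cm]."""
    l_mfp = 5.21e3 * T**2 / n_e
    rho_fac = 1.01 * np.exp(-0.05 * (np.log10(l_mfp / L) + 6.63)**2)
    return rho_fac * 1e-6 * T**2.5 * (T / L) * A

def l_rad(EM_vol, T):
    """Radiative loss [erg/s], valid for 10^6.3 < T < 10^7 K."""
    return 10**(-17.73) * EM_vol * T**(-2.0 / 3.0)

# Placeholder inputs of plausible magnitude (not the fitted values)
T, L, A = 1.0e7, 2.0e8, 1.0e17           # K, cm, cm^2
EM_c = 1.0e30                            # column EM [cm^-5]
n_e = np.sqrt(EM_c / L)
N = n_e * L
print(p_coll(1e35, 8.0, 5.7, N), l_cond(T, n_e, L, A), l_rad(EM_c * A, T))
\end{verbatim}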
\cite{FletcherHannahHudson:2013} found that the energy carried by an electron beam is not sufficient to heat the ribbon plasma when radiative and conductive losses are taken into account, unless a low-energy cutoff of $E_c \approx 5$ keV is considered. The total electron energy inferred from the HXR spectrum is mostly defined by the low-energy cutoff $E_c$ \citep[\textit{e.g.}][]{Saint-HilaireBenz:2005}, which is sometimes chosen arbitrarily to balance the energy deposited by the non-thermal electrons and the value of the maximum thermal energy \citep[\textit{e.g.}][]{MrozekTomczak:2004}. The $E_c$ values we found from the HXR spectral fitting are already quite low, approximately $8$ keV during the impulsive phase (see the bottom panel in Figure \ref{fig:rhessi}), only a few times the average thermal energy $kT$ of electrons in the 10 MK plasma. { It is of course possible to reduce the value of the low-energy cutoff to make the energy losses and gains balance, and a lower value of the low-energy cutoff is permitted by the HXR spectrum. A lower $E_c$ means an increased total `non-thermal' power, and an increased electron number flux. For this event, a value of $E_{c}\approx 4$ keV is necessary to equate energy gains and losses.} On the other hand, using the $E_c$ values obtained from the HXR spectral analysis, a slab thickness $L$ of $\approx$30 Mm is required to balance the energy budget, which would comprise a large portion of the loops, a picture not supported by the AIA images, which show very compact footpoint sources. As shown by \cite{Simoes:2015}, the coronal source is both dense ($n \approx 10^{11}$ cm$^{-3}$) and hot ($T\approx 13$ MK); thus, downward thermal conduction from the coronal source to the ribbons may contribute to the observed heating at these regions \citep[\textit{e.g.}][]{HoriYokoyamaKosugi:1998,QiuSturrockLongcope:2013,BattagliaFletcherBenz:2009}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{energy_budget_east_int_L83_F100.eps}} \caption{Power budget of the East ribbon source, considering beam collisional heating (blue), conductive (magenta) and radiative (gold) cooling, in a $L_T=2000$ km thick slab of plasma at a temperature of 10 MK.} \label{fig:power} \end{figure} { If the low energy cutoff is indeed as low as 4 keV to balance energy gains and losses by the chromospheric plasma, then $E_c$ approaches the mean thermal energy of the 10 MK plasma, and it becomes difficult to separate the `beam' from the thermal core. We may find it profitable in the future to think about the electrons in the chromosphere forming a single distribution, such as a $\kappa$-distribution, which describes a single population transitioning smoothly from a power-law-like tail to a thermal-like core. This distribution is being discussed in connection with dense coronal HXR sources, where populations of heated electrons and accelerated electrons are present in the same volume \citep{Oka:2013,Oka:2015}. \cite{Bian:2014} have demonstrated how a $\kappa$-distribution arises naturally in a volume where diffusive acceleration and collisional losses operate simultaneously and co-spatially. If this volume is coronal and a beam is formed by a high energy tail that `leaks out', then that beam looks very much like a standard power-law beam, with a low energy cutoff determined by the particle velocity at which the escape time is less than the acceleration or diffusion timescales; we are then still in the standard collisional thick-target scenario but with a slightly differently shaped coronal beam.
However, if we move the site of both heating and acceleration to the chromosphere, which is a very different interpretation, then the properties of the $\kappa$-distribution extending across all of the electron energy space can be investigated to evaluate the total required power and number flux needed to account for all radiation signatures. This is an interesting possibility; it does, however, require that some other agent, such as wave turbulence, is present in the chromosphere to locally heat and accelerate electrons. A full investigation is beyond the scope of this paper.} \section{Summary} We have presented an analysis of the plasma in the flare ribbons of the event SOL2013-11-09, a C2.6 class non-eruptive, two-ribbon flare, using SDO/AIA EUV and RHESSI HXR observations. The ribbons have impulsive EUV/UV emission seen in all SDO/AIA filters, well associated with non-thermal HXR emission observed by RHESSI. Using the method of regularised inversion of SDO/AIA data \citep{HannahKontar:2012}, we obtained the differential emission measure (DEM) of the two flare ribbons, and investigated the time evolution of the emission measure (EM) in three temperature ranges (0.5 -- 1.4, 1.8 -- 3.2 and 7.9 -- 12.6 MK). From these, we have shown that the plasma heats rapidly to 12--13 MK during the impulsive phase of the event, marked by the HXR peak. The EM temporal evolution shows a peak near the time of maximum HXR emission, indicating fast heating and cooling, with the hottest plasma ($T=7.9$\,--\,$12.6$ MK) reaching EM values of 1\,--\,3 $\times 10^{47}$ cm$^{-3}$, these values agreeing with those obtained from GOES and RHESSI. The rapid evolution of the ribbon plasma temperature and its high peak temperature are confirmed by RHESSI imaging spectroscopic analysis, and also agree with the temperature derived from GOES (although without spatial resolution). Also, we note that the evolution of the ribbon plasma characteristics is very similar to that found in the intense and compact coronal source, as studied in detail by \cite{Simoes:2015}. Performing RHESSI HXR imaging spectroscopy, we obtained the parameters to describe the distribution of non-thermal electrons at each source (both ribbons and coronal source). With the information about the plasma and non-thermal electrons at the East ribbon, we deduced the energy balance of the plasma, considering collisional beam heating \citep{Emslie:1978,FletcherHannahHudson:2013} against conductive and radiative losses. We found that beam heating alone is not sufficient to heat and maintain the ribbon plasma at $T=10$ MK, even with a low energy cutoff of $E_c\approx 8$ keV, and speculate whether the dense and hot coronal source can provide a heat source for the ribbons by conduction \citep[\textit{e.g.}][]{HoriYokoyamaKosugi:1998,QiuSturrockLongcope:2013}. \begin{acks} The authors would like to thank Paul Boerner for providing updated SDO/AIA temperature response functions and Iain Hannah for making the DEM regularised inversion software freely available. The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 606862 (F-CHROMA), from STFC grant ST/I001808/1 (PJAS, LF) and ST/L000741/1 (LF), and from an STFC ‘STEP’ award to the University of Glasgow (DRG). \end{acks} \bibliographystyle{spr-mp-sola}
1,116,691,497,505
arxiv
\section{The Advantages and Limitations of the Deconfounder Method} We first discuss several advantages offered by the deconfounder method. We then examine the assumptions required by the method and discuss its limitations. \subsection{The Deconfounder Method} Suppose that we have a simple random sample of $n$ units from a population. We have a total of $m$ treatments, represented by the $m$-dimensional vector, $\bm{A}_i=(A_{i1}, A_{i2}, \ldots, A_{im})^\top$, for unit $i$. For the sake of simplicity, we ignore the possible existence of observed confounders $\mathbf{X}_i$. But, all the arguments of this commentary are applicable, conditional on $\mathbf{X}_i$. The deconfounder method consists of the following simple two steps. The first step fits the following factor model to the observed treatments, \begin{equation} p(A_{i1}, A_{i2}, \ldots, A_{im}) \ = \ \int p(\bm{Z}_i) \prod_{j=1}^m p(A_{ij} \mid \bm{Z}_i) \ d\bm{Z}_i, \label{eq:factor} \end{equation} where $\bm{Z}_i=(Z_{i1}, Z_{i2}, \ldots, Z_{ik})^\top$ represents the $k$-dimensional vector of latent factors. Once the estimates of the factors $\widehat{\bm{Z}}_i$, which Wang and Blei call the {\it substitute confounders}, are obtained, the second step estimates the average causal effects of multiple treatments by adjusting for these substitute confounders as follows, \begin{equation} \tau(\bm{a}, \bm{a}^\prime) \ = \ \mathbb{E}\{Y_i(\bm{a}) - Y_i(\bm{a}^\prime)\} \ = \ \mathbb{E}\{\mathbb{E}(Y_i \mid \bm{A}_i = \bm{a}, \widehat{\bm{Z}}_i) - \mathbb{E}(Y_i \mid \bm{A}_i = \bm{a}^\prime, \widehat{\bm{Z}}_i)\}, \label{eq:reg} \end{equation} where $\bm{a} \in \mathcal{A}$ and $\bm{a}^\prime \in \mathcal{A}$ are the vectors of selected treatment values with $\bm{a} \ne \bm{a}^\prime$ and $\mathcal{A}$ represents the support of $\bm{A}_i$. In practice, a regression model may be used to adjust for the substitute confounders as demonstrated by Wang and Blei in their empirical application. The deconfounder method is attractive to applied researchers for several reasons. First, it is a simple procedure based on two classes of familiar statistical models --- factor models and regression models. Second, the method offers diagnostics in observational studies with unmeasured confounding. Specifically, researchers can check the conditional independence among the observed treatments given the estimated factors, \begin{equation} A_{ij} \ \mbox{$\perp\!\!\!\perp$} \ \bm{A}_{i,-j} \mid \widehat{\bm{Z}}_i \end{equation} for any $j=1,\ldots,m$ and $\bm{A}_{i,-j}$ represents all the treatments except $A_{ij}$. If this conditional independence does not hold, then there may exist unobserved confounders that affect both $A_{ij}$ and some of $\bm{A}_{i,-j}$, yielding a biased causal estimate. As discussed below, however, the lack of conditional independence may also be due to the misspecification of factor model, which, for example, would be present if there are causal relationships among treatments. In sum, the deconfounder method proposes a simple solution to a long-standing problem of inferring causal effects of multiple treatments in observational studies. Many analysts of observational studies rely upon the assumption that the treatments are unconfounded conditional on a set of observed pre-treatment covariates. And yet, it is often difficult to rule out the possible existence of unobserved confounders. 
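To fix ideas, a minimal implementation of this two-step procedure, together with an informal version of the conditional-independence check, might look as follows; the linear-Gaussian factor model and linear outcome regression are only one of many possible modelling choices, and the data are simulated.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

def deconfounder_ate(A, Y, a, a_prime, k=1):
    """Two-step deconfounder estimate of tau(a, a') with linear models."""
    # Step 1: fit a factor model to the treatments and extract Z_hat
    Z_hat = FactorAnalysis(n_components=k).fit_transform(A)
    # Step 2: regress Y on (A, Z_hat) and contrast the two treatment
    # values, averaging over the empirical distribution of Z_hat
    reg = LinearRegression().fit(np.column_stack([A, Z_hat]), Y)
    n = len(Y)
    Xa = np.column_stack([np.tile(a, (n, 1)), Z_hat])
    Xb = np.column_stack([np.tile(a_prime, (n, 1)), Z_hat])
    return np.mean(reg.predict(Xa) - reg.predict(Xb)), Z_hat

def check_conditional_independence(A, Z_hat):
    """Heuristic check: partial correlations among treatments given Z_hat."""
    resid = A - LinearRegression().fit(Z_hat, A).predict(Z_hat)
    return np.corrcoef(resid, rowvar=False)   # off-diagonals should be ~0

# Toy data: one latent confounder driving three treatments and the outcome
rng = np.random.default_rng(0)
n, m = 5000, 3
Z = rng.normal(size=(n, 1))
A = Z + rng.normal(scale=0.5, size=(n, m))
Y = A @ np.array([1.0, 0.5, -0.5]) + 2.0 * Z[:, 0] + rng.normal(size=n)
tau_hat, Z_hat = deconfounder_ate(A, Y, np.ones(m), np.zeros(m), k=1)
print(tau_hat, check_conditional_independence(A, Z_hat))
\end{verbatim}
With only a few treatments, the estimated $\widehat{\bm{Z}}_i$ is an imperfect reconstruction of the latent confounder, so the confounding bias is reduced rather than removed; this foreshadows the concerns discussed below.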
The deconfounder method not only offers a new identification strategy in the presence of unobserved confounding, but also shows how to check the validity of the resulting estimates under certain assumptions. \subsection{Assumptions} \begin{figure}[t] \begin{center} \tikzstyle{VertexStyle} = [shape = circle, minimum width = 2ex, draw] \tikzstyle{EdgeStyle} = [->,>=stealth'] \begin{tikzpicture}[scale=1] \SetGraphUnit{2} \node[VertexStyle] (A1) at (0, 0) {$A_1$}; \node[VertexStyle] (A2) at (2, 0) {$A_2$}; \node[VertexStyle] (Am) at (6, 0) {$A_m$}; \node[VertexStyle] (Y) at (3, 2) {$Y$}; \node (dots) at (4, 0) {$\cdots$}; \node[circle, dashed, draw] (Z) at (3, -2) {$\bm{Z}$}; \Edges(A1, Y) \Edges(A2, Y) \Edges(Am, Y) \Edges(Z, A1) \Edges(Z, A2) \Edges(Z, Am) \Edges(Z, Y) \end{tikzpicture} \end{center} \vspace{-.25in} \caption{Directed Acyclic Graph for the Deconfounder Method.} \label{fig:DAG} \end{figure} What assumptions does the deconfounder method require? Wang and Blei use a graphical model to represent the conditional dependencies required by the deconfounder method. Here, we reproduce the graphical model using the directed acyclic graph (DAG) in Figure~\ref{fig:DAG}. In addition to the SUTVA \citep{rubi:90}, this DAG implies several key assumptions. First, the unobserved confounders $\bm{Z}$ should represent all confounding variables such that the treatments are ignorable given $\bm{Z}$, \begin{equation} Y_i(\bm{a}) \ \mbox{$\perp\!\!\!\perp$} \ \bm{A}_i \mid \bm{Z}_i \label{eq:unconfounded} \end{equation} for any $\bm{a} \in \mathcal{A}$. The assumption implies that the multi-cause confounder $\bm{Z}_i$ suffices to adjust for the treatment-outcome confounding. Second, the DAG also implies the following conditional independence assumption, \begin{equation} A_{ij} \ \mbox{$\perp\!\!\!\perp$} \ \bm{A}_{i,-j} \mid \bm{Z}_i \label{eq:mutual_cause} \end{equation} for any $j=1,2,\ldots,m$. The assumption justifies the factor model in equation~\eqref{eq:factor}. This assumption is violated if, for example, there exists a causal relationship among treatments. In the movie revenue application considered in the original article, the assumption is violated if the choice of actor for the main role (e.g., Sean Connery in a James Bond movie) influences the selection of actor for another role (e.g., Bernard Lee as the character of M). This is an important limitation of the deconfounder method, as the problem may be common in applied research with multiple treatments. In addition, according to Wang and Blei, the deconfounder method also requires the following overlap assumption that is not explicitly represented in the DAG, \begin{equation} p(\bm{A}_i \in \mathcal{A}^\ast \mid \bm{Z}_i) \ > \ 0 \label{eq:overlap} \end{equation} for all sets $\mathcal{A}^\ast \subset \mathcal{A}$ with $p(\bm{A}_i \in \mathcal{A}^\ast) > 0$. The assumption implies that the choice of treatment values $\bm{a}$ may be constrained when estimating $\mathbb{E}\{Y_i(\bm{a})\}$. If the selected value of $\bm{a}$ does not belong to $\mathcal{A}^\ast$, then the resulting causal inference will be based on extrapolation.
Finally, the key identification condition of the deconfounder method is the assumption of ``no unobserved single-cause confounder.'' Wang and Blei formalize this assumption as the following set of conditional independence assumptions (see Definition~4 of the original article), \begin{eqnarray} Y_i(\bm{a}) & \mbox{$\perp\!\!\!\perp$} & A_{ij} \mid \mathbf{V}_{ij} \label{eq:single} \\ A_{ij} & \mbox{$\perp\!\!\!\perp$} & \bm{A}_{i,-j} \mid \mathbf{V}_{ij} \label{eq:indep} \end{eqnarray} for any $j=1,2,\ldots,m$, $\bm{a} \in \mathcal{A}$, and some random variable $\mathbf{V}_{ij}$. In addition, the authors require that these conditional independence relations do not hold when conditioning on any proper subset of the sigma algebra of $\mathbf{V}_{ij}$. \begin{figure}[t] \vspace{-.25in} \spacingset{1} \begin{center} \subfigure[only unobserved single-cause confounders exist]{ \tikzstyle{VertexStyle} = [shape = circle, minimum width = 2ex, draw] \tikzstyle{EdgeStyle} = [->,>=stealth'] \begin{tikzpicture}[scale=1] \SetGraphUnit{2} \node[VertexStyle] (A1) at (1, 0) {$A_1$}; \node[VertexStyle] (A2) at (3, 0) {$A_2$}; \node[VertexStyle] (A3) at (5, 0) {$A_3$}; \node[VertexStyle] (Y) at (3, 2) {$Y$}; \node[circle, dashed, draw] (Z1) at (1, -2) {$\bm{Z}_1$}; \node[circle, dashed, draw] (Z2) at (3, -2) {$\bm{Z}_2$}; \node[circle, dashed, draw] (Z3) at (5, -2) {$\bm{Z}_3$}; \draw [->, >=stealth', thick=2] (Z2) to [out=45, in=-75] (Y); \Edges(A1, Y) \Edges(A2, Y) \Edges(A3, Y) \Edges(Z1, A1) \Edges(Z2, A2) \Edges(Z3, A3) \Edges(Z1, Y) \Edges(Z3, Y) \end{tikzpicture} } \hspace{.5in} \subfigure[both unobserved single-cause and multiple-cause confounders exist]{ \tikzstyle{VertexStyle} = [shape = circle, minimum width = 2ex, draw] \tikzstyle{EdgeStyle} = [->,>=stealth'] \begin{tikzpicture}[scale=1] \SetGraphUnit{2} \node[VertexStyle] (A1) at (1, 0) {$A_1$}; \node[VertexStyle] (A2) at (3, 0) {$A_2$}; \node[VertexStyle] (A3) at (5, 0) {$A_3$}; \node[VertexStyle] (Y) at (3, 2) {$Y$}; \node[circle, dashed, draw] (Z1) at (1, -2) {$\bm{Z}_1$}; \node[circle, dashed, draw] (Z2) at (3, -2) {$\bm{Z}_2$}; \node[circle, dashed, draw] (Z3) at (5, -2) {$\bm{Z}_3$}; \draw [->, >=stealth', thick=2] (Z2) to [out=60, in=-75] (Y); \Edges(A1, Y) \Edges(A2, Y) \Edges(A3, Y) \Edges(Z2, A1) \Edges(Z2, A3) \Edges(Z1, A1) \Edges(Z2, A2) \Edges(Z3, A3) \Edges(Z1, Y) \Edges(Z3, Y) \end{tikzpicture} } \end{center} \vspace{-.2in} \caption{Examples of Unobserved Single-cause Confounders.} \label{fig:DAG2} \end{figure} Unfortunately, these conditional independence assumptions are not sufficient to eliminate the possible existence of unobserved single-cause confounders. Figure~\ref{fig:DAG2} presents two examples, in which single-cause confounders exist, but equations~\eqref{eq:single}~and~\eqref{eq:indep} still hold. In addition, both cases can be reduced to the DAG in Figure~\ref{fig:DAG} where no single-cause unobserved confounder exists by defining the unobserved multi-cause confounder as $\bm{Z} = (\bm{Z}_1, \bm{Z}_2, \bm{Z}_3)$. The examples demonstrate that a single multi-cause confounder can be decomposed into multiple single-cause confounders, and that several single-cause confounders can be combined into a single multi-cause confounder. Therefore, it is difficult to distinguish between single-cause and multiple-cause confounders without the knowledge of causal relationships among the variables. We believe that it is important to develop the precise formal statement of the no unobserved single-cause confounder assumption. 
Such formalization allows us to understand how this assumption enables the identification of causal effects. In addition, our discussion implies that assessing the credibility of the assumption requires the scientific knowledge about the underlying causal structure involving unobserved confounders. \subsection{Nonparametric Identification} Wang and Blei establish the nonparametric identification of the average treatment effect given in equation~\eqref{eq:reg} under the aforementioned assumptions in two steps. First, they show that a factor model of the observed treatments can be used to consistently estimate the substitute confounder. Second, they show that given the substitute confounder, the average treatment effects can be nonparametrically identified using equation~\eqref{eq:reg} above. In an insightful paper, \citet{d2019multi} demonstrates that this two-step proof strategy leads to two problems for the deconfounder method. First, there may be more than one factor model that is compatible with the distribution of the observed treatments. He provides an example where different factor models that are compatible with the distribution of the observed treatments under the structure of Figure~\ref{fig:DAG} yield different causal estimates. Second, \citeauthor{d2019multi} shows that even if a factor model is uniquely identified, the nonparametric identification is in general impossible. Moving beyond the counterexamples, we consider the identification assumption for the factor model, discuss the role of the substitute confounder, and assess the overlap assumption required by the deconfounder method. With respect to the identifiability of factor models, \citet{kruskal1977three} and \citet{allman2009identifiability} give the general identification assumptions when observed variables are discrete. In this case, a crucial assumption is that the latent factor is correlated with the observed variables. In our context, this means that $\bm{Z}$ must causally affect each treatment $A_j$. In the causal inference literature, this assumption is known as faithfulness \citep{spirtes2000causation}, which states that there exists conditional independence among variables in the population distribution if and only if it is entailed in the corresponding DAG. Thus, although Wang and Blei only discuss a set of conditional independence assumptions, the deconfounder method requires the faithfulness assumption in order to ensure the identifiability of factor model. Next, we discuss the role of the substitute confounder. In the proof of the deconfounder method, Wang and Blei not only assume that the true unobserved confounder $\bm{Z}_i$ can be consistently estimated, but also treat the estimated substitute confounder $\widehat{\bm{Z}}_i$ as its true counterpart. This proof strategy ignores the crucial fact that the (estimated) substitute confounder is a function of observed treatments $\widehat{\bm{Z}}_i=\widehat{h}_M(\bm{A}_i)=\mathbb{E}_M(\bm{Z}_i\mid \bm{A}_i)$, where $\hat{h}_M$ indicates the fact that the substitute confounder is estimated from the data and depends on the choice of factor model and $\mathbb{E}_M$ represents the expectation with respect to the fitted factor model. We emphasize that the substitute confounder $\widehat{\bm{Z}}_i$ does not converge in probability to the true confounder $\bm{Z}_i$, which in itself is a random variable. Rather, the substitute confounder converges to a function of observed treatments. 
Yet, this consistency result is required for the key results of the paper (i.e., Theorems~6--8). We also closely examine the identification formula given in equation~\eqref{eq:reg} by explicitly writing out the conditional expectation, \begin{eqnarray} \ \mathbb{E}\{\mathbb{E}(Y_i \mid \bm{A}_i = \bm{a}, \widehat{\bm{Z}}_i)\} =\int \mathbb{E}(Y_i \mid \bm{A}_i = \bm{a}, \widehat{\bm{Z}}_i) p(\widehat{\bm{Z}}_i) d\widehat\bm{Z}_i \label{eq:deconfounder} \end{eqnarray} Notice that equation~\eqref{eq:deconfounder} does not follow unless the support of $p(\widehat{\bm{Z}}_i \mid \bm{A}_i = \bm{a})$ is identical to the support of $p(\widehat{\bm{Z}}_i)$ for any given $\bm{a} \in \mathcal{A}$. Unfortunately, since the substitute confounder is estimated using the observed treatments, $p(\widehat{\bm{Z}}_i \mid \bm{A}_i = \bm{a})$ is in general degenerate. The overlap assumption given in equation~\eqref{eq:overlap} is not applicable because the assumption is about the (true) unobserved confounders $\bm{Z}_i$ rather than the (estimated) substitute confounders, $\widehat{\bm{Z}}_i$. This means that we can only identify $\mathbb{E}(Y_i \mid \bm{A}_i = \bm{a}, \widehat{\bm{Z}}_i = \bm{z})=\mathbb{E}(Y_i \mid \bm{A}_i = \bm{a})$ for the values of $\bm{z}$ with $\bm{z} = \widehat{h}_M(\bm{a})$, implying that only a certain set of causal effects are identifiable. In Theorem~6 of the original paper, Wang and Blei address this problem by imposing two additional restrictions. First, it is assumed that the outcome is separable in the following sense, \begin{eqnarray} \mathbb{E}\{Y_i(\bm{a}) \mid \widehat{\bm{Z}}_i\} & = & f_1(\bm{a}) + f_2(\widehat{\bm{Z}}_i), \label{eq:separable1}\\ \mathbb{E}(Y_i \mid \bm{A}_i, \widehat{\bm{Z}}_i) & = & f_3(\bm{A}_i) + f_4(\widehat{\bm{Z}}_i), \label{eq:separable2} \end{eqnarray} where we use $\widehat{\bm{Z}}_i$ instead of $\bm{Z}_i$ to emphasize the fact that the substitute confounder is estimated. Although equation~\eqref{eq:separable1} allows us to write the average treatment effect as a function of treatment values alone, i.e., $\mathbb{E}\{Y_i(\bm{a}) - Y_i(\bm{a}^\prime)\} = f_1(\bm{a}) - f_1(\bm{a}^\prime)$, this assumption is not particularly helpful for identification since conditioning on $\widehat{\bm{Z}}_i$ is still required to identify the mean potential outcomes. In addition, equation~\eqref{eq:separable2} can be rewritten as $\mathbb{E}(Y_i \mid \bm{A}_i) = f_3(\bm{A}_i) + f_4(\hat{h}_M(\bm{A}_i))$ because $\widehat{\bm{Z}}_i$ is a deterministic function of $\bm{A}_i$. This suggests that the validity of this restriction about the outcome model critically depends on the choice of factor model. The second restriction is that when the treatments are continuous, the substitute confounder is a piece-wise constant function, i.e., $\nabla_{\bm{a}} f_{\boldsymbol{\theta}}(\bm{a})= 0$ where a parametric model is assumed for $p(\widehat{\bm{Z}}_i \mid \bm{A}_i = \bm{a}, \boldsymbol{\theta}) = \delta_{f_{\boldsymbol{\theta}}(\bm{a})}$ with a vector of parameters $\boldsymbol{\theta}$. A similar restriction is proposed for the case of discrete treatments. Since $p(\widehat{\bm{Z}}_i \mid \bm{A}_i = \bm{a}, \boldsymbol{\theta}) = \delta_{\hat{h}_M(\bm{a})}$ automatically holds, the assumption is valid if $\hat{h}_M(\bm{a})$ is a piece-wise constant function. Thus, this second restriction also suggests that the choice of factor model is critical for the validity of the deconfounder method. 
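The degeneracy of $p(\widehat{\bm{Z}}_i \mid \bm{A}_i = \bm{a})$ is easy to see numerically. In the short sketch below (our own illustration; applying a Gaussian factor model to binary treatments is purely for convenience), every stratum of units sharing the same treatment vector receives exactly the same substitute confounder, so the ``conditional distribution'' of $\widehat{\bm{Z}}_i$ given $\bm{A}_i=\bm{a}$ is a point mass.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n, m = 20000, 5

# Binary treatments generated from one latent confounder.
Z = rng.normal(size=n)
A = (rng.random((n, m)) < 1 / (1 + np.exp(-Z[:, None]))).astype(float)

# Substitute confounder: a deterministic function h_hat(A) of the treatments.
Z_hat = FactorAnalysis(n_components=1).fit_transform(A).ravel()

# Within every stratum {A_i = a}, Z_hat takes a single value.
spread = [Z_hat[(A == a).all(axis=1)].std() for a in np.unique(A, axis=0)]
print("max within-stratum std of Z_hat:", max(spread))   # 0.0
\end{verbatim}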
In sum, we conclude that the nonparametric identification is generally difficult to obtain under the deconfounder method. Because the substitute confounder is a function of observed treatments, it leads to the violation of the overlap assumption. Wang and Blei introduce two additional restrictions to address this problem. However, these assumptions impose severe constraints on the choice of factor model as well as that of outcome model. As a consequence, they may significantly limit the practical applicability of the deconfounder method. Even when researchers carefully choose a factor model that satisfies these restrictions, they may obtain causal effects only for a restricted range of treatment values. \section{Alternative Approaches} We next consider three alternative approaches to the important question of identifying the causal effects of multiple treatments in the presence of unobserved confounders. The approaches in this section will be based on equation~\eqref{eq:unconfounded}. Unlike the deconfounder method, however, we will directly consider the identification of the probability distributions involving the (true) unobserved confounder $p(\bm{A}_i, \bm{Z}_i)$ and $p(Y_i \mid \bm{A}_i, \bm{Z}_i)$ rather than adopting Wang and Blei's two-step proof strategy. \subsection{Parametric Approach} Wang and Blei use parametric models in their empirical applications. Here, we consider a more general parametric approach. A primary advantage of the parametric approach is simplicity, whereas its major limitation is the required modeling assumptions that may not be credible in practice. Suppose that there exists a uniquely identifiable factor model for the treatments, and that the joint distribution of $(\bm{A}, \bm{Z})$ is also identifiable. We assume the following additive model for the outcome variable, \begin{eqnarray*} \mathbb{E}\{Y_i(\bm{a}) \mid \bm{Z}_i\} \ = \ \sum_{j=1}^m \beta_j b_j(a_j)+ \sigma g(\bm{Z}_i), \end{eqnarray*} where $b_j(\cdot)$ and $g(\cdot)$ are pre-specified functions. Under this setting, it can be shown that if $\sigma$ is known, then the average treatment effect is identifiable so long as $(b_1(A_{i1}),\ldots,b_m(A_{im}))$ is linearly independent. In contrast, if $\sigma$ is unknown, then the average treatment effect is identifiable if $(b_1(A_{i1}),\ldots,b_m(A_{im}), \mathbb{E}\{g(\bm{Z}_i)\mid \bm{A}_i\})$ is linearly independent. This linear independence assumption is analogous to the overlap assumption discussed earlier, but the assumption can be tested using the observed data. To illustrate this parametric approach, consider an example, in which we have three binary treatments $m=3$ and one binary latent factor $Z_i$. Further assume that we have the following outcome model, \begin{eqnarray*} \mathbb{E}\{Y_i(\bm{a}) \mid Z_i\} \ = \ \beta_0+ \sum_{j=1}^3 \beta_j A_{ij}+ \sigma Z_i. \end{eqnarray*} Now, consider a scenario, under which $A_{ij}$'s are mutually independent of one another given $Z_i$. Then, the joint distribution $p(A_{i1}, A_{i2}, A_{i3}, Z_i)= p(Z_i)\prod_{j=1}^3 p(A_{ij} \mid Z_i)$ is identifiable based on the joint distribution of $(A_{i1}, A_{i2}, A_{i3})$ up to label switching \citep[see][]{kruskal1977three}. Note that the average treatment effects are invariant to label switching. Thus, under this condition, even if $\sigma$ is unknown, $\beta_j$'s are identifiable so long as $\mathbb{E}(Z_i \mid A_{i1},A_{i2},A_{i3})$ is not linear in $(A_{i1},A_{i2},A_{i3})$. 
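A numerical sketch of this three-treatment example is given below. To keep it short, we treat the mixture components $p(Z_i)$ and $p(A_{ij} \mid Z_i)$ as already identified (in practice they would be estimated, e.g., by an EM algorithm, up to label switching) and use Bayes' rule to form $\mathbb{E}(Z_i \mid \bm{A}_i)$; the particular parameter values are our own illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
pZ = 0.4                                              # p(Z = 1)
pA = np.array([[0.2, 0.7], [0.3, 0.8], [0.1, 0.6]])   # p(A_j = 1 | Z = 0, 1)
beta0, beta, sigma = 1.0, np.array([0.5, -1.0, 2.0]), 1.5

# Simulate the binary latent factor, three binary treatments, and the outcome.
Z = (rng.random(n) < pZ).astype(int)
A = (rng.random((n, 3)) < pA[:, Z].T).astype(int)
Y = beta0 + A @ beta + sigma * Z + rng.normal(size=n)

# E(Z | A) from the identified mixture via Bayes' rule; it is nonlinear in A.
def E_Z_given_A(a):
    lik1 = pZ * np.prod(pA[:, 1] ** a * (1 - pA[:, 1]) ** (1 - a))
    lik0 = (1 - pZ) * np.prod(pA[:, 0] ** a * (1 - pA[:, 0]) ** (1 - a))
    return lik1 / (lik0 + lik1)

EZ = np.array([E_Z_given_A(a) for a in A])

# Regressing Y on (1, A, E(Z|A)) recovers the beta_j even though sigma is
# unknown, because E(Z|A) is not a linear function of (A_1, A_2, A_3).
X = np.column_stack([np.ones(n), A, EZ])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("true beta:", beta, " estimated beta:", coef[1:4].round(3))
\end{verbatim}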
\begin{figure}[t] \begin{center} \tikzstyle{VertexStyle} = [shape = circle, minimum width = 2ex, draw] \tikzstyle{EdgeStyle} = [->,>=stealth'] \begin{tikzpicture}[scale=1] \SetGraphUnit{2} \node[VertexStyle] (A1) at (0, 0) {$A_1$}; \node[VertexStyle] (A2) at (2, 0) {$A_2$}; \node[VertexStyle] (A3) at (4, 0) {$A_3$}; \node[VertexStyle] (A4) at (6, 0) {$A_4$}; \node[VertexStyle] (Y) at (3, 2) {$Y$}; \node[circle, dashed, draw] (Z) at (3, -2) {$\bm{Z}$}; \draw [->, >=stealth', thick=2] (A1) to [out=-50, in=-150] (A3); \Edges(A1, Y) \Edges(A1, A2) \Edges(A2, Y) \Edges(A3, Y) \Edges(A4, Y) \Edges(Z, A1) \Edges(Z, A2) \Edges(Z, A3) \Edges(Z, A4) \Edges(Z, Y) \end{tikzpicture} \end{center} \vspace{-.25in} \caption{Directed Acyclic Graph in the Presence of Causal Relations among Treatments.} \label{fig:causaltreat} \end{figure} Next, consider a different case shown as the DAG in Figure~\ref{fig:causaltreat}, in which one treatment causally affects other treatments. In this case, we may focus on estimating the causal effects of $(A_2, A_3, A_4)$ conditional on $A_1$. We assume the following model for the outcome variable, \begin{eqnarray*} \mathbb{E}\{Y_i(\bm{a}) \mid Z_i\} \ = \ \beta_0+ \sum_{j=1}^4 \beta_j A_{ij}+ \sigma Z_i. \end{eqnarray*} The joint distribution of $\bm{A}_i$ and $Z_i$ under Figure~\ref{fig:causaltreat} is given by $p(Z_i) p(A_{i1} \mid Z_i)p(A_{i2} \mid A_{i1}, Z)p(A_{i3}\mid A_{i1}, Z_i) p(A_{i4} \mid Z_i)$. This factorization is identifiable from the observed data \citep{allman2009identifiability}. Then, even when $\sigma$ is unknown, we can identify the parameters in the outcome model so long as $\mathbb{E}(Z_i \mid A_{i1},A_{i2},A_{i3},A_{i4})$ is not linear in $(A_{i1},A_{i2},A_{i3},A_{i4})$. Using these estimated parameters, we can obtain the estimates for the causal effects. \subsection{Nonparametric Approach} In the causal inference literature, many scholars first consider the problem of nonparametric identification by asking whether or not causal effects can be identified without making any modeling assumption. Only after the nonparametric identification of causal effects is established, researchers proceed to their estimation and inference. \citet{cox:donn:11} regard this approach as a general principle of applied statistics. They state, \begin{quote} {\it If an issue can be addressed nonparametrically then it will often be better to tackle it parametrically; however, if it cannot be resolved nonparametrically then it is usually dangerous to resolve it parametrically.} (p. 96) \end{quote} \begin{figure}[t] \begin{center} \tikzstyle{VertexStyle} = [shape = circle, minimum width = 2ex, draw] \tikzstyle{EdgeStyle} = [->,>=stealth'] \begin{tikzpicture}[scale=1] \SetGraphUnit{2} \node[VertexStyle] (A1) at (0, 0) {$A_1$}; \node[VertexStyle] (A2) at (2, 0) {$A_2$}; \node[VertexStyle] (Am) at (6, 0) {$A_m$}; \node[VertexStyle] (Y) at (3, 2) {$Y$}; \node[VertexStyle] (W) at (0, -2) {$\mathbf{W}$}; \node (dots) at (4, 0) {$\cdots$}; \node[circle, dashed, draw] (Z) at (4, -2) {$\bm{Z}$}; \Edges(A1, Y) \Edges(A2, Y) \Edges(Am, Y) \Edges(Z, A1) \Edges(Z, A2) \Edges(Z, Am) \Edges(Z, Y) \Edges(W, A1) \Edges(W, A2) \Edges(W, Am) \end{tikzpicture} \end{center} \vspace{-.25in} \caption{Directed Acyclic Graph for the Instrumental Variable Approach.} \label{fig:DAGiv} \end{figure} To enable the general nonparametric identification of causal effects in the current setting, we must introduce auxiliary variables. \citet{d2019multi} considers the use of proxy variables. 
Here, we examine an approach based on instrumental variables. Figure~\ref{fig:DAGiv} presents the DAG for this approach, where $\mathbf{W}$ represents a set of instrumental variables. Instrumental variables have the property that they are not affected by the unobserved confounders $\bm{Z}$ and influence the outcome $Y$ only through the treatments $\bm{A}$. For the sake of simplicity, we begin by considering the following separable model for the outcome, \begin{eqnarray*} \mathbb{E}\{Y_i(\bm{a}) \mid \bm{Z}_i\} \ = \ q(\bm{a})+ r(\bm{Z}_i), \end{eqnarray*} where $\mathbb{E}\{r(\bm{Z}_i)\}=0$ without loss of generality. Since the instrumental variables satisfy $\mathbb{E}\{r(\bm{Z}_i) \mid \mathbf{W}_i\}=\mathbb{E}\{r(\bm{Z}_i)\}=0$, we obtain, \begin{eqnarray} \label{eqn::iv-nonpara} \mathbb{E}(Y_i \mid \mathbf{W}_i)&=& \mathbb{E}\{q(\bm{A}_i) \mid \mathbf{W}_i\} \ = \ \sum_{\bm{a} \in \mathcal{A}} q(\bm{A}_i = \bm{a}) p(\bm{A}_{i}=\bm{a} \mid \mathbf{W}_i). \end{eqnarray} Since we can identify $\mathbb{E}(Y_i \mid \mathbf{W}_i)$ and $p(\bm{A}_i\mid \mathbf{W}_i)$ from the observed data, the causal effects are identifiable if we can uniquely solve for $q(\cdot)$ using equation~\eqref{eqn::iv-nonpara}. Suppose that all the treatments are binary and the instrumental variable is discrete with $L$ levels. Since there are $2^m$ parameters in $q(\bm{a})$, equation~\eqref{eqn::iv-nonpara} implies that the identification requires the $2^m\times L$ matrix $\{ p(\bm{A}_i \mid \mathbf{W}_i) \}$ to be full-rank. This condition is analogous to the overlap assumption discussed earlier and can be checked using the observed data. The proposed approach here, however, requires the instrumental variables to have at least $2^m$ levels. When $m$ is large, it may be difficult to find instrumental variables that satisfy this condition. The deconfounder method is closely related to the control function methods developed in the econometrics literature. The control function is a variable that, when adjusted for, renders an otherwise endogenous treatment variable exogenous \citep[see e.g.,][]{wooldridge2015control}. \citet{imbens2009identification} consider the nonparametric identification of the following nonseparable triangular system of equations (as before, we omit observed pre-treatment confounding variables for simplicity), \begin{eqnarray} \label{eqn::ivY} Y_i & = & s_1(A_i, Z_i), \\ \label{eqn::ivA} A_i & = & s_2(W_i, U_i), \end{eqnarray} where $Z_i$ and $U_i$ are unobserved, $A_i$ is the endogenous treatment variable of interest, $W_i$ is the instrumental variable with $W_i \mbox{$\perp\!\!\!\perp$} (Z_i, U_i)$, and $s_2(\cdot, \cdot)$ is a strictly monotonic function of $U_i$. When $A_i$ is a vector and $U_i=Z_i$, equations~\eqref{eqn::ivY}~and~\eqref{eqn::ivA} become identical to the setting of the deconfounder method. \citeauthor{imbens2009identification} show that the control function $C_i$ is given by the cumulative distribution function of $A_i$ given $W_i$, i.e., $C_i = F_{A \mid W}(A_i, W_i)$. Like the substitute confounder, the control function unconfounds the treatment variable, i.e., $Y_i(a) \mbox{$\perp\!\!\!\perp$} A_i \mid C_i$. This is because $C_i$ is a one-to-one function of $U_i$, and $A_i$ depends only on $W_i$ conditional on $U_i$.
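To make the discrete-instrument argument above concrete, the sketch below simulates two binary treatments and a six-level instrument, estimates $\mathbb{E}(Y_i \mid \mathbf{W}_i)$ and $p(\bm{A}_i \mid \mathbf{W}_i)$ from the sample, and solves the linear system in equation~\eqref{eqn::iv-nonpara} for $q(\cdot)$ by least squares. The data-generating process and all numerical values are our own illustrative assumptions, not part of the original article.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, m, L = 400_000, 2, 6            # two binary treatments, six-level instrument

W = rng.integers(L, size=n)        # instrument, independent of the confounder
Z = rng.normal(size=n)             # unobserved confounder
# Treatments depend on both W and Z; the outcome is q(A) + r(Z) + noise.
shift1 = np.array([-2., -1., 0., 1., 2., 0.])[W]
shift2 = np.array([ 1., -1., 2., 0., -2., 1.])[W]
logits = 0.8 * Z[:, None] + np.column_stack([shift1, shift2])
A = (rng.random((n, m)) < 1 / (1 + np.exp(-logits))).astype(int)
q_true = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): -0.5, (1, 1): 2.0}
Y = np.array([q_true[tuple(a)] for a in A]) + Z + rng.normal(size=n)

# Empirical E(Y | W = w) and p(A = a | W = w), then solve for q(a).
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
EY = np.array([Y[W == w].mean() for w in range(L)])
P = np.array([[np.mean((A[W == w] == a).all(axis=1)) for a in patterns]
              for w in range(L)])  # L x 2^m matrix; its rank can be checked
q_hat, *_ = np.linalg.lstsq(P, EY, rcond=None)
print("true q:", q_true)
print("estimated q:", dict(zip(patterns, np.round(q_hat, 2))))
\end{verbatim}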
It is important to emphasize that the control function methodology requires the overlap assumption that the support of the marginal distribution of the control function, i.e., $p(C_i)$, is the same as the support of the conditional distribution, i.e., $p(C_i \mid A_i)$. However, unlike the case of the deconfounder method, the control function is not a function of the treatment variable, making this overlap assumption more likely to be satisfied. In sum, the nonparametric identification of causal effects in the current settings requires the existence of auxiliary variables. Here, we consider an approach based on instrumental variables. Even when such instrumental variables are available, certain overlap assumptions are needed. This point is also clearly shown for the control function methods that are closely related to the deconfounder method. As we discussed, the overlap assumptions required for these instrumental variable methods are less stringent than those required for the deconfounder method. \subsection{Stochastic Intervention Approach} Our discussion has identified the overlap assumption as a main methodological challenge for the deconfounder method. Because the estimated substitute confounder itself is a function of treatment variables, conditioning on the particular treatment values alters the support of its distribution. The parametric and nonparametric approaches introduced above address this problem through the reliance on modeling assumptions and the use of instrumental variables, respectively. The final approach we consider is to change the causal quantities of interest using the idea of stochastic intervention. Instead of comparing two sets of fixed treatment values, we propose to contrast the two different distributions of treatments. In the movie application of the original article, one may be interested in comparing the revenue of a film featuring a typical cast for action movies with that featuring common actors for Sci-Fi movies. Stochastic intervention is a useful approach especially in the settings where inferring the average outcome under the fixed treatment values is difficult. For example, \citet{gene:07} applies it to mediation analysis, while \citet{hudg:hall:08} propose an experimental design with stochastic intervention to identify spillover effects. More recently, \citet{kennedy2019nonparametric} considers the incremental interventions that shift propensity score values to avoid overlap assumption. Specifically, we focus on the average causal effects of distributions of treatments rather than the effects of treatments themselves. \begin{equation} \delta(p_1, p_0) \ = \ \mathbb{E}\left\{\int Y_i(\bm{a}) p_1(\bm{A}_i = \bm{a}) d\bm{a} - \int Y_i(\bm{a}) p_0(\bm{A}_i = \bm{a}) d\bm{a} \right\} \end{equation} where $p_1$ and $p_0$ are the pre-specified distributions of treatments to be compared. Various distributions can be selected for comparison. For example, we may compare the conditional distributions of treatments given the different values of observed covariates, i.e., $p_1(\bm{A}_i \mid \mathbf{X}_i = \mathbf{x}_1)$ and $p_0(\bm{A}_i \mid \mathbf{X}_i = \mathbf{x}_2)$. Moreover, if factors are interpretable, then we may choose the conditional distributions given some specific values of the factors, i.e., $p_1(\bm{A}_i \mid \bm{Z}_i = \bm{z}_1)$ and $p_0(\bm{A}_i \mid \bm{Z}_i = \bm{z}_2)$. 
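To fix ideas about the estimand $\delta(p_1, p_0)$, the toy simulation below contrasts a ``treatment-heavy'' and a ``treatment-light'' intervention distribution when the potential outcomes are known by construction; everything in it, including the linear potential-outcome model, is our own illustrative assumption. Estimation from observational data, which relies on the fitted factor model, is discussed next.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, m = 100_000, 3

# Toy potential outcomes with an unobserved confounder U:
#   Y_i(a) = a . beta + U_i,  so  delta(p1, p0) = (E_p1[A] - E_p0[A]) . beta.
U = rng.normal(size=n)
beta = np.array([1.0, -0.5, 2.0])

# Two stochastic interventions, specified as independent Bernoulli draws.
p1_probs = np.array([0.8, 0.7, 0.9])
p0_probs = np.array([0.2, 0.3, 0.1])
A1 = (rng.random((n, m)) < p1_probs).astype(float)
A0 = (rng.random((n, m)) < p0_probs).astype(float)

delta_mc = ((A1 @ beta + U) - (A0 @ beta + U)).mean()
print("Monte Carlo delta:", round(delta_mc, 3),
      "  analytic value:", (p1_probs - p0_probs) @ beta)
\end{verbatim}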
Topic models in the analysis of texts and ideal point models in the analysis of roll calls are good examples of interpretable factor models \citep{blei:ng:jord:03,clin:etal:04}. In the current setting, we may use the following estimator, \begin{equation} \hat\delta(p_1, p_0) \ = \ \sum_{i=1}^n Y_i \frac{p_1(\bm{A}_i) - p_0(\bm{A}_i)}{\hat{p}(\bm{A}_i \mid \bm{Z}_i)} \label{eq:estimator} \end{equation} where $\hat{p}(\bm{A}_i \mid \bm{Z}_i)$ is the estimated factor model. For this estimator, the required overlap assumption is that the support of $p_j(\bm{A}_i)$ is a subset of the support of $p(\bm{A}_i \mid \bm{Z}_i)$ for $j=0,1$. Researchers can choose $p_1(\bm{A}_i)$ and $p_0(\bm{A}_i)$ so that this overlap assumption is satisfied. Furthermore, although the deconfounder method is not applicable when one treatment causally affects another, under the stochastic intervention approach one could model causal relationships among treatments by specifying $p(\bm{A}_i \mid \bm{Z}_i)$ provided that the model is identifiable. An example of such case is given in Figure~\ref{fig:causaltreat}. \section{Concluding Remarks} The article by Wang and Blei is an important contribution to the causal inference literature because it opens up a new research frontier. The authors study a relatively unexplored question of how to infer the causal effects of many treatments in the presence of unobserved confounders. The deconfounder method provides a novel and yet intuitive approach using familiar statistical models. A key insight is that under certain assumptions, the factorization of treatments can yield a substitute confounder as well as a practically useful diagnostic tool for checking the validity of the resulting substitute confounder. Although the deconfounder method has advantages, as first pointed out by \citet{d2019multi} and further elaborated in this commentary, the method is not free of limitations. In particular, it cannot achieve nonparametric identification without additional restrictions. We emphasized the violation of the overlap assumption due to the fact that the estimated substitute confounder is a function of observed treatments. Wang and Blei consider some restrictions on the outcome model that may overcome this limitation and enable identification. However, such restrictions may severely limit the applicability of the deconfounder method. More research is needed in order to investigate the consequences of these restrictions in practical settings. We discussed three alternative approaches to the methodological problems of the deconfounder method. The first approach is based on parametric assumptions and extend the data analysis conducted in the original article. The second approach relies upon the use of instrumental variables and is related to the control function literature in econometrics. The final approach considers an alternative causal estimand based on stochastic intervention, which is particularly useful in the settings with high-dimensional treatments. We expect and hope that many researchers will follow up on the work of Wang and Blei and develop new methods for estimating the causal effects of multiple treatments in observational studies. \clearpage \spacingset{1.4} \pdfbookmark[1]{References}{References} \bibliographystyle{pa}
\section{Introduction}\label{sec:intr} Observations of high-redshift supernovae indicate that the universe is accelerating at the present stage~\cite{SN}, and this accelerating expansion has also been confirmed by many other cosmological experiments, such as observations of large scale structure (LSS) \cite{LSS} and measurements of the cosmic microwave background (CMB) anisotropy \cite{CMB}. We refer to the cause of this cosmic acceleration as ``dark energy,'' a mysterious exotic component with sufficiently large negative pressure whose energy density has come to dominate the universe. The combined analysis of cosmological observations suggests that the universe consists of about $70\%$ dark energy, $30\%$ dust matter (cold dark matter plus baryons), and negligible radiation. The astrophysical feature of dark energy is that it remains unclustered at all scales where gravitational clustering of baryons and nonbaryonic cold dark matter can be seen. Its gravitational effect is repulsive, so it makes the expansion of the universe accelerate once its energy density becomes the dominant component of the universe. Although the nature and origin of dark energy are unknown, we can still propose candidates to describe its properties. The most obvious theoretical candidate for dark energy is the cosmological constant $\Lambda$ (vacuum energy) \cite{Einstein:1917} with an equation of state $w=-1$. The cosmological constant is rather popular in cosmological and astrophysical research due to its theoretical simplicity and its great success in fitting observational data. However, as is well known, two fundamental problems, namely the ``fine-tuning'' problem and the ``cosmic coincidence'' problem \cite{coincidence}, still puzzle us. Theorists have made many efforts to resolve the cosmological constant problem, but all of these efforts have turned out to be unsuccessful \cite{cc}. There are also other alternatives to the cosmological constant. An alternative proposal for explaining dark energy is the dynamical dark energy scenario. The dynamical dark energy proposal is often realized by a scalar-field mechanism, in which the energy form with negative pressure is provided by a scalar field slowly rolling down its potential. So far, many scalar-field dark energy models have been studied. Quintessence \cite{quintessence}, $K$-essence \cite{kessence}, phantom \cite{phantom}, tachyon \cite{tachyon}, and ghost condensate \cite{ghost1,ghost2} are all famous examples of scalar-field dark energy models. In these models, quintessence, with a canonical kinetic term, evolves its equation of state in the region $w\geqslant -1$, whereas the phantom model, with a negative kinetic term, always leads to $w\leqslant -1$; $K$-essence can realize both $w>-1$ and $w<-1$, but it has been shown that it is very difficult for $K$-essence to achieve a $w$ that crosses $-1$ \cite{Vikman:2004dc}. However, the analysis of current observational data shows that the equation of state of dark energy $w$ is likely to cross the cosmological-constant boundary (or phantom divide) $-1$, i.e., $w$ is larger than $-1$ in the recent past and less than $-1$ today. This dynamical behavior of dark energy, with $w$ crossing $-1$, poses a great challenge to scalar-field model building in cosmology.
Just as mentioned above, the scalar-field models, such as quintessence, $K$-essence, phantom, cannot realize the transition of $w$ from $w>-1$ to $w<-1$ or vice versa. Hence, the quintom model was proposed for describing the dynamical evolving behavior of $w$ crossing $-1$ \cite{quintom} with double fields of quintessence and phantom. The cosmological evolution of such model has been investigated in detail \cite{twofield,quintom2}. For the single real scalar field models, the transition of crossing $-1$ for $w$ can occur for the Lagrangian density $p(\phi, X)$, where $X$ is a kinematic term of a scalar-field $\phi$, in which $\partial{p}/\partial{X}$ changes sign from positive to negative, thus we require nonlinear terms in $X$ to realize the $w=-1$ crossing \cite{ghost2,Vikman:2004dc,Anisimov:2005ne}. When adding a high derivative term to the kinetic term $X$ in the single scalar field model, the energy-momentum tensor is proven to be equivalent to that of a two-field quintom model \cite{Li:2005fm}. It is remarkable that the generalized ghost condensate model of a single real scalar field is a successful realization of the quintom-like dark energy \cite{Tsujikawa:2005ju,Zhang:2006qu}. What's more, a generalized ghost condensate model was investigated in Refs.~\cite{Tsujikawa:2005ju,Zhang:2006qu} by means of the cosmological reconstruction program. For another interesting single-field quintom model see Ref.~\cite{Huang:2005gu}, where the $w=-1$ crossing is implemented with the help of a fixed background vector field. Besides, there are also many other interesting models, such as holographic dark energy model \cite{holo} and braneworld model \cite{Cai:2005ie}, being able to realize the quintom-like behavior. In any case, these dark energy models including the dynamical dark energy models have to face the test of cosmological observations. A typical approach for this is to predict the cosmological evolution behavior of the models, by putting in the Lagrangian (in particular the potential) by hand or theoretically, and to make a consistency check of models by comparing it with observations. An alternative approach is to reconstruct corresponding theoretical Lagrangian, by using the observational data. The reconstruction of scalar-field dark energy models has been widely studied. For a minimally coupled scalar field with a potential $V(\phi)$, the reconstruction is simple and straightforward \cite{simplescalar}. Saini et al. \cite{Saini:1999ba} reconstructed the potential and the equation of state of the quintessence field by parameterizing the Hubble parameter $H(z)$ based on a versatile analytical form of the luminosity distance $d_L(z)$. This method can be generalized to a variety of models, such as scalar-tensor theories \cite{scalartensor}, $f(R)$ gravity \cite{frgrav}, $K$-essence model \cite{Li:2006bx,Gao:2007ep}, and also tachyon model \cite{holotach}, etc.. Tsujikawa has investigated the reconstruction of general scalar-field dark energy models in detail \cite{Tsujikawa:2005ju}. In this paper, we will investigate the quintom reconstruction from the Wilkinson Microwave Anisotropy Probe (WMAP) 5-year observations. We will focus on the generalized ghost condensate model and will reconstruct this quintom scalar-field model using various dark energy ansatzs including the parametric forms of dark energy and holographic dark energy scenarios. In particular, we will put emphasis on a new parametrization form proposed by WMAP team in Ref.~\cite{Komatsu:2008hk}. 
The paper is organized as follows: In section \ref{sec:para} we address the dark energy parametrization proposed in Ref.~\cite{Komatsu:2008hk} and describe the corresponding analysis results of the WMAP5 observations. In section \ref{sec:ghost} we perform a cosmological reconstruction for the generalized ghost condensate model from various dark energy ansatzes and the fitting results of the up-to-date observational data. Finally, we give the concluding remarks in section \ref{sec:concl}. \section{A new dark energy parametrization in WMAP5}\label{sec:para} The distinctive feature of the cosmological constant or vacuum energy is that its equation of state is always exactly equal to $-1$. In contrast, dynamical dark energy has an equation of state and an energy density that evolve with time. An efficient approach to probing the dynamics of dark energy is to parameterize it and then determine the parameters using various observational data. In this way, one can explore the dynamical evolution of dark energy efficiently, although the results obtained depend more or less on the chosen parametrization. Among the various parametric forms of dark energy, the minimum complexity required to detect time variation in dark energy is to add a second parameter to measure a change in the equation-of-state parameter with redshift. This is the so-called linear expansion parametrization $w(z)=w_0+w'z$, where $w'\equiv dw/dz|_{z=0}$, which was first used by Di Pietro $\&$ Claeskens \cite{DiPietro:2002cz} and later by Riess et al. \cite{Riess:2004nr}. However, when some ``longer-armed'' observations, e.g., CMB and LSS data, are taken into account, this form of $w(z)$ becomes unsuitable due to its divergence at high redshift. The most commonly used form of the equation of state, $w(z)=w_0+w_a z/(1+z)$, suggested by Chevallier $\&$ Polarski \cite{Chevallier:2000qy} and Linder \cite{Linder:2002et} (hereafter called the CPL parametrization, for convenience), can avoid the divergence problem effectively. This parametrization has been investigated extensively in exploring the dynamical properties of dark energy in light of observational data. However, this form cannot be adopted as is when one uses the CMB data to constrain $w(a)$~\cite{Komatsu:2008hk}. Since this form is basically the leading-order term of a Taylor series expansion, the value of $w(a)$ can become unreasonably large or small when extrapolated to the decoupling epoch at $z_*\simeq 1090$ (or $a_*\simeq 9.17\times 10^{-4}$), and thus one cannot extract meaningful constraints on quantities such as $w_0$ and $w_a$ that are defined at the {\it present epoch}. In order to avoid this problem, a new parametrized form was proposed by the WMAP team \cite{Komatsu:2008hk}, \begin{equation} w(a) = \frac{a\tilde{w}(a)}{a+a_{\rm trans}} - \frac{a_{\rm trans}}{a+a_{\rm trans}}, \label{eq:wz} \end{equation} (hereafter referred to as the ``WMAP5 parametrization''), where \begin{equation} \tilde{w}(a)=\tilde{w}_0+(1-a)\tilde{w}_a, \end{equation} and $a_{\rm trans}=1/(1+z_{\rm trans})$ is the ``transition epoch,'' with $z_{\rm trans}$ the transition redshift. In this form, $w(a)$ approaches $-1$ at early times and the dark energy density tends to a constant value at $a<a_{\rm trans}$. The dark energy density remains totally sub-dominant relative to the matter density at the decoupling epoch.
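As a quick numerical illustration of these limits, the short Python script below evaluates the parametrization~(\ref{eq:wz}); the default parameter values are the best-fit values quoted below. The conversion from the present-epoch parameters $(w_0, w')$ to $(\tilde{w}_0, \tilde{w}_a)$ used in the script is our own algebra, obtained directly from the definitions above; it reduces to the CPL relation $\tilde{w}_0=w_0$, $\tilde{w}_a=w'$ in the limit $a_{\rm trans}\to 0$.
\begin{verbatim}
import numpy as np

def w_wmap5(z, w0=-1.06, wprime=0.36, z_trans=10.0):
    # w(a) = [a*wtilde(a) - a_t] / (a + a_t),  wtilde(a) = wt0 + (1 - a)*wta,
    # with (wt0, wta) chosen so that w(z=0) = w0 and dw/dz at z=0 equals wprime.
    a_t = 1.0 / (1.0 + z_trans)
    wt0 = w0 * (1.0 + a_t) + a_t
    wta = wt0 - ((wt0 - a_t) - wprime * (1.0 + a_t) ** 2) / (1.0 + a_t)
    a = 1.0 / (1.0 + np.asarray(z, dtype=float))
    wtilde = wt0 + (1.0 - a) * wta
    return (a * wtilde - a_t) / (a + a_t)

z = np.array([0.0, 0.5, 2.0, 10.0, 1090.0])
for z_trans in (0.5, 2.0, 10.0):
    print("z_trans =", z_trans, ":", np.round(w_wmap5(z, z_trans=z_trans), 3))
# w(z=0) reproduces w0, while w -> -1 toward the decoupling epoch z ~ 1090.
\end{verbatim}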
At late times, $a>a_{\rm trans}$, one recovers the widely used CPL form~\cite{Linder:2002et}, $w(a)=w_0+(1-a)w_a$. In the ``WMAP5 parametrization'', the present-day value of $w$, $w_0\equiv w(z=0)$, and the first derivative, $w'\equiv \left.dw/dz\right|_{z=0}$, are chosen as the free parameters, instead of $\tilde{w}_0$ and $\tilde{w}_a$. In Ref.~\cite{Komatsu:2008hk}, the WMAP group constrains $w_0$ and $w'$ in a flat universe from the WMAP distance priors ($l_A$, $R$, $z_*$), combined with the Baryon Acoustic Oscillations (BAO) and the Type Ia supernovae (SN) data. The results are that, for $z_{\rm trans}=10$, the 95\% limit on $w_0$ is $-0.33<1+w_0<0.21$; the 68\% intervals are $w_0=-1.06\pm 0.14$ and $w'=0.36\pm 0.62$. Note that Ref.~\cite{Wang:2007mza} shows that the two-dimensional distribution extends more towards the south-east, i.e., $w>-1$ and $w'<0$, when spatial curvature is allowed. The evolutionary behavior of $w(z)$ is plotted in Fig.~\ref{fig:wz}, using the best-fit results. It should be noted that Fig.~\ref{fig:wz} is slightly different from Fig.~C1 of Ref.~\cite{Komatsu:2008hk} in that the revised best-fit values, $w_0=-1.06$ and $w'=0.36$, are used in plotting this figure. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{wz.eps} \caption[]{\small The evolution of the equation of state of dark energy, corresponding to the WMAP5 parametrization with $w_0=-1.06$ and $w'=0.36$, for the transition redshift $z_{\rm trans}=0.5$, $2.0$ and $10$, respectively.}\label{fig:wz} \end{center} \end{figure} In this section, we have briefly introduced the new parametrization of the dark energy equation of state proposed by the WMAP team in the 5-year observations. The advantage of this parameterized form is that the value of $w(a)$ remains reasonable when extrapolated to early times such as the decoupling epoch. We shall use this parametrization with the observational constraint results to reconstruct the generalized ghost condensate model in the next section. As a comparison, we will also discuss other specific cases, such as the CPL parametrization as well as the holographic dark energy scenarios. This work is different from the previous ones \cite{Tsujikawa:2005ju,Zhang:2006qu} in that the reconstruction is implemented up to the decoupling epoch at $z_*\simeq 1090$. \section{Generalized ghost condensate model and its reconstruction}\label{sec:ghost} As mentioned above, dynamical dark energy can be realized by some scalar-field mechanism. In particular, the quintom model was proposed for describing the dynamical evolving behavior of $w$ crossing $-1$. The results of current observational data analysis show that the equation of state of dark energy is likely to cross $-1$; see, for example, Fig.~\ref{fig:wz}. So, it is necessary to realize such a quintom behavior using some scalar field mechanism. It is remarkable that the generalized ghost condensate model is a successful single-real-scalar-field quintom model. In this section, we shall focus on the reconstruction of the generalized ghost condensate model from the WMAP 5-year observations. We will first briefly review the generalized ghost condensate model of dark energy. Then, we will implement the scalar-field dark energy reconstruction according to the WMAP5 parametrization. As a comparison, we will also perform the same reconstruction program for other specific models such as the CPL parametrization and the holographic dark energy scenarios.
\subsection{Generalized ghost condensate model} First, let us consider the Lagrangian density of a general scalar field $p(\phi, X)$, where $X=-g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi/2$ is the kinetic energy term. Note that $p(\phi, X)$ is a general function of $\phi$ and $X$, and we have used a sign notation $(-, +, +, +)$. Identifying the energy momentum tensor of the scalar field with that of a perfect fluid, we can easily derive the energy density of dark energy, $\rho_{\rm de}=2Xp_X-p$, where $p_X=\partial p/\partial X$. Thus, in a spatially flat Friedmann-Robertson-Walker (FRW) universe involving dust matter (baryon plus dark matter) and dark energy, the dynamic equations for the scalar field are \begin{equation} 3H^2=\rho_{\rm m}+2Xp_X-p,\label{hsqr} \end{equation} \begin{equation} 2\dot{H}=-\rho_{\rm m}-2Xp_X,\label{hdot} \end{equation} where $X=\dot{\phi}^2/2$ in the cosmological context, and note that we have used the unit $M_P=1$ for convenience. Introducing a dimensionless quantity \begin{equation} r\equiv E^2= H^2/H_0^2, \end{equation} we find from Eqs.~(\ref{hsqr}) and (\ref{hdot}) that \begin{equation} p=[(1+z)r'-3r]H_0^2,\label{p} \end{equation} \begin{equation} \phi'^2p_X={r'-3\Omega_{\rm m0}(1+z)^2\over r(1+z)},\label{px} \end{equation} where prime denotes a derivative with respect to $z$. The equation of state for dark energy is given by \begin{equation} w={p\over \dot{\phi}^2 p_X-p}={(1+z)r'-3r\over 3r-3\Omega_{\rm m0}(1+z)^3}. \end{equation} Next, let us consider the generalized ghost condensate model proposed in Ref.~\cite{Tsujikawa:2005ju} (see also Ref.~\cite{Zhang:2006qu}), in which the behavior of crossing the cosmological-constant boundary can be realized, with the Lagrangian density \begin{equation} p=-X+h(\phi)X^2, \end{equation} where $h(\phi)$ is a function in terms of $\phi$. Actually, the function $h(\phi)$ can be explicitly expressed for the specific cases. For example, in the dilatonic ghost case, we have $h(\phi)=c e^{\lambda\phi}$ \cite{ghost2}. From Eqs.~(\ref{p}) and (\ref{px}) we obtain \begin{equation} \phi'^2={12r-3(1+z)r'-3\Omega_{\rm m0}(1+z)^3\over r(1+z)^2}, \label{phip} \end{equation} \begin{equation} h(\phi)={6(2(1+z)r'-6r+r(1+z)^2\phi'^2)\over r^2(1+z)^4\phi'^4}\rho_{\rm c0}^{-1}, \label{hz} \end{equation} \begin{equation} X=\frac{1}{2}\dot{\phi}^2=\frac{1}{6}r\phi'^2(1+z)^2\rho_{\rm c0}, \end{equation} where $\rho_{\rm c0}=3H_0^2$ represents the present critical density of the universe. The crossing of the cosmological-constant boundary corresponds to $hX=1/2$. The system can enter the phantom region ($hX <1/2$) without discontinuous behavior of $h$ and $X$. The evolution of the field $\phi$ can be derived by integrating $\phi'$ according to Eq.(\ref{phip}). Note that the field $\phi$ is determined up to an additive constant $\phi_0$, but it is convenient to take $\phi$ to be zero at the present epoch ($z=0$). The function $h(\phi)$ can be reconstructed using Eq.~(\ref{hz}) when the information of $r(z)$ is obtained from the observational data. Generically, the Friedmann equation can be expressed as \begin{equation} r(z)=\Omega_{\rm m0}(1+z)^3+(1-\Omega_{\rm m0})f(z), \end{equation} where $f(z)$ is some function encoding the information about the dynamical property of dark energy, \begin{equation} f(z)=\exp{[3\int_{0}^{z}{1+w(s)\over 1+s}ds]}. 
\end{equation} \subsection{Reconstruction} In this subsection, we will reconstruct the function $h(\phi)$ for the ghost condensate model using some ansatzes for the equation of state of dark energy. We will first use the WMAP5 parametrization discussed in section \ref{sec:para}. This case is important in this paper because the ansatz is new. Next, to compare the new ansatz with the previous ones, we will perform the same reconstruction program for other scenarios. This includes the CPL parametrization and the holographic dark energy scenarios. The reconstruction will correspond to the fitting results from the latest observational data. Moreover, the reconstruction program will be implemented up to the decoupling epoch at $z_*\simeq 1090$, which is different from the previous works \cite{Tsujikawa:2005ju,Zhang:2006qu} that focus only on late times. \subsubsection{WMAP5 parametrization} First, we use the new ansatz (\ref{eq:wz}) to implement the reconstruction. The reconstruction for $h(\phi)$ is plotted in Fig.~\ref{fig:hphi} with transition redshift $z_{\rm trans}=0.5$, $2$ and $10$, by using the best-fit results, $w_0=-1.06$, $w'=0.36$ and $\Omega_{\rm m0}=0.273$, from the combined analysis of WMAP5+SN+BAO. In addition, the evolutions of the scalar field $\phi(z)$ as well as the functions $h(z)$ and $X(z)$ are also determined by the reconstruction program; see Figs.~\ref{fig:phiz}, \ref{fig:hz} and \ref{fig:xz}. From Fig.~\ref{fig:hphi}, we see that the reconstructed $h(\phi)$, up to $z_*\simeq 1090$, is not a monotonic function. Roughly in the range of $z$ between 0 and 1, the function $h(\phi)$ is increasing; see also Fig.~\ref{fig:hz}. The shape of $h(\phi)$ in this range indeed mimics an exponential function, which is the case of the dilatonic ghost condensate \cite{ghost2}. However, in the range of $z$ greater than 1, $h(\phi)$ is a decreasing function. Figure~\ref{fig:xz} shows the case of the kinematic energy density $X(z)$. From this figure, we find that $z\simeq 1$ is indeed a pivot point. In the range of $z$ larger than 1, the field $\phi$ moves more and more slowly; in the range of $z$ less than 1, the field $\phi$ moves faster and faster, although the change of $X$ at this stage is slight. From Fig.~\ref{fig:phiz}, we can explicitly see the rate of change of the field $\phi$. We find that in the range of $z\sim 0.1-10$, the rate of change of $\phi$, namely $d\phi/dz$, is large; elsewhere, it is small. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{hphi.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the WMAP5 parametrization with the best-fit results derived from WMAP5 combined with SN and BAO, $w_0=-1.06$, $w'=0.36$ and $\Omega_{\rm m0}=0.273$. In this plot, we show the cases of the function $h(\phi)$, in unit of $\rho_{\rm c0}^{-1}$. The selected lines correspond to the transition redshift $z_{\rm trans}=0.5$, $2.0$ and $10$, respectively.} \label{fig:hphi} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{phiz.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the WMAP5 parametrization with transition redshift $z_{\rm trans}=0.5$, $2.0$ and $10$.
In this plot, we show the evolutions of the scalar field $\phi(z)$, in unit of the Planck mass $M_{P}$, corresponding to the best fit results of the joint analysis of WMAP5 $+$ SN $+$ BAO.}\label{fig:phiz} \end{center} \end{figure} One of the aims of this paper is to explore the dynamical evolution behavior of the scalar field at early times (high redshifts), by reconstructing the dynamics of the scalar field according to the observations. Previous works only focus on the low redshift evolution ($z<2$ or so) \cite{Tsujikawa:2005ju,Zhang:2006qu}. From Figs.~\ref{fig:hz} and \ref{fig:xz}, we see that at low redshifts, the cases with different $z_{\rm trans}$ behave in accordance, but at high redshifts, the difference turns on. The bigger $z_{\rm trans}$ is, the smaller $h$ and bigger $X$ are, at high redshifts. For the scalar field evolution, we see from Fig.~\ref{fig:phiz} that the difference in the shapes of $\phi(z)$ is not big. However, the difference in shapes of $h(\phi)$ is rather evident for different $z_{\rm trans}$. Therefore, our investigation of the reconstruction explicitly exhibits the early-time dynamical evolution of the generalized ghost condensate model. We show that, for the WMAP5 parametrization, different $z_{\rm trans}$ will bring little impact at low redshifts but bring great impact at high redshifts, to the dynamics of scalar field. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{hz.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the WMAP5 parametrization with transition redshift $z_{\rm trans}=0.5$, $2.0$ and $10$. In this plot, we show the evolution of the function $h(z)$. Here $h$ is in unit of $\rho_{\rm c0}^{-1}$.}\label{fig:hz} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{xz.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the WMAP5 parametrization with transition redshift $z_{\rm trans}=0.5$, $2.0$ and $10$. In this plot, we show the evolution of the kinematic energy density $X=\dot{\phi}^2/2$. Here, $X$ is in unit of $\rho_{\rm c0}$.}\label{fig:xz} \end{center} \end{figure} For a comparison, we shall also investigate other cases based on different ansatzs or scenarios in what follows. In those cases, we will only show the reconstructed $h(\phi)$ and $\phi(z)$, for briefness. \subsubsection{CPL parametrization} We now consider the CPL ansatz for the equation-of-state of dark energy, $w(a)=w_0+(1-a)w_a$. It should be pointed out that if one extends it to an arbitrarily high redshift, it will result in an undesirable situation in which the dark energy is as important as the radiation density at the epoch of the Big Bang Nucleosynthesis (BBN). Hence, in order to constrain such a scenario, one may use the limit on the expansion rate from BBN. The WMAP team also shows in Ref.~\cite{Komatsu:2008hk} the constraint on $w_0$ and $w_a$ for the CPL model, $w(a)=w_0+(1-a)w_a$, from the WMAP distance priors, the BAO and SN data, and the BBN prior in the flat universe. The 95\% limit on $w_0$ is $-0.29<1+w_0<0.21$ and the 68\% intervals are $w_0=-1.04\pm 0.13$ and $w_a=0.24\pm 0.55$. Besides, the effects of the systematic errors are also studied. They find that $w_0=-1.00\pm 0.19$ and $w_a=0.11\pm 0.70$ with the systematic errors included. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{cplwz.eps} \caption[]{\small The equation-of-state $w(z)$ in the CPL parametrization. 
In this plot, we show the two best-fit cases from WMAP5+SN+BAO+BBN, with and without SN systematic errors.}\label{fig:cplwz} \end{center} \end{figure} The dark energy equation-of-state of the two cases with and without the SN systematic errors, for the CPL parametrization, at the best-fits, is plotted in Fig.~\ref{fig:cplwz}. From this figure, one can see that when considering the SN systematic errors, the fitting results will be influenced significantly. One can find that the equation-of-state even does not cross $-1$ in the CPL case with the systematic errors, at the best-fit. Furthermore, comparing with the WMAP5 parametrization (see Fig.~\ref{fig:wz}), it is easy to see that the early-time evolutionary behaviors for the equation-of-state are very different. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{cplhphi.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the CPL parametrization with the best fit results derived from WMAP5 combined with SN, BAO, and BBN. The function $h(\phi)$ is in unit of $\rho_{\rm c0}^{-1}$.} \label{fig:cplhphi} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{cplphiz.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the CPL parametrization with the best fit results derived from WMAP5 combined with SN, BAO, and BBN. The scalar field $\phi(z)$ is in unit of $M_{P}$.}\label{fig:cplphiz} \end{center} \end{figure} Performing the reconstruction program, we derive the function forms of $h(\phi)$ and $\phi(z)$, shown in Figs.~\ref{fig:cplhphi} and \ref{fig:cplphiz}, respectively. We find that the global trend of the functions $h(\phi)$ and $\phi(z)$ of the CPL case is similar to that of the WMAP5 case (see also Figs.~\ref{fig:hphi} and \ref{fig:phiz}). For the function $h(\phi)$, comparing with the WMAP5 case, the late-time behaviors are very similar but the early-time behaviors are slightly different. Also, we find from Fig.~\ref{fig:cplhphi} that the function $h(\phi)$ will be monotonously decreasing if the equation-of-state does not cross $-1$ (see the dashed lines in Figs.~\ref{fig:cplwz} and \ref{fig:cplhphi}). For the dynamical evolution of the field $\phi$, comparing Fig.~\ref{fig:cplphiz} with Fig.~\ref{fig:phiz}, we find that the difference is fairly little. \subsubsection{Holographic dark energy scenarios} Furthermore, we also consider the holographic dark energy scenarios. The reason of considering the holographic dark energy is that we should not only consider the simple parametrizations of dark energy, but also consider some sophisticated dark energy models motivated by quantum gravity. The holographic dark energy density can be expressed as \begin{equation} \rho_{\rm de}=3c^2M_P^2L^{-2}, \end{equation} where $c$ is a numerical parameter determined by observations, and $L$ is the infrared (IR) cutoff of the theory. Here, we explicitly write out the reduced Planck mass $M_P$. In the holographic dark energy models, the key problem is how to choose an appropriate IR cutoff for the theory. In the original holographic dark energy scenario proposed by Li \cite{holo}, the IR cutoff is chosen as the event horizon of the universe, $R_{\rm eh}=a\int_t^\infty dt/a$. In a generalized version \cite{Gao:2007ep}, the IR cutoff is taken as the average of the Ricci scalar curvature, $|{\cal R}|^{-1/2}$. 
This new version is often called ``Ricci dark energy.'' It should be mentioned that the two scenarios of holographic dark energy both exhibit the quintom feature \cite{holo,Gao:2007ep,RDERS}. Recently, the holographic dark energy models were constrained by the latest observational data, WMAP5+BAO+SN; see Ref.~\cite{Li:2009bn}. For the holographic dark energy, the fitting results are: at the $68.3\%$ confidence level, $\Omega_{\rm m0}=0.277^{+0.022}_{-0.021}$ and $c=0.818^{+0.113}_{-0.097}$; at the $95.4\%$ confidence level, $\Omega_{\rm m0}=0.277^{+0.037}_{-0.034}$ and $c=0.818^{+0.196}_{-0.154}$. For the Ricci dark energy, the fitting results are: at the $68.3\%$ confidence level, $\Omega_{\rm m0}=0.324^{+0.024}_{-0.022}$ and $c^2=0.371^{+0.023}_{-0.023}$; at the $95.4\%$ confidence level, $\Omega_{\rm m0}=0.324^{+0.040}_{-0.036}$ and $c^2=0.371^{+0.037}_{-0.038}$. The dark-energy equation of state, at the best fits, in these two scenarios is shown in Fig.~\ref{fig:holowz}. One can see from this figure that although both scenarios originate from the holographic principle of quantum gravity, different IR cutoffs lead to very different cosmological consequences. We shall make use of the best-fit results to reconstruct the ghost condensate model in the following. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{holowz.eps} \caption[]{\small The equation-of-state $w(z)$ in the holographic dark energy scenarios. In this plot, we show the best-fit cases from WMAP5+SN+BAO.}\label{fig:holowz} \end{center} \end{figure} The reconstructed function forms of $h(\phi)$ and $\phi(z)$ are shown in Figs.~\ref{fig:holohphi} and \ref{fig:holophiz}. From these two figures, we see that the big difference in $w(z)$ translates into big differences in $h(\phi)$ and $\phi(z)$. The reconstructions of $h(\phi)$ and $\phi(z)$ indicate that the holographic dark energy is compatible with the previous dark energy parametrizations, but the Ricci dark energy is not. In Fig.~\ref{fig:holohphi}, we find that there exists a sharp peak of $h$ around $\phi\sim 1.5$ and a long tail of $h$ in the range of $\phi>3.5$, for the Ricci dark energy. From Fig.~\ref{fig:holophiz}, we see that the dynamics of the field $\phi$ in the Ricci scenario is also peculiar. Although the evolutions of $\phi$ are nearly degenerate in the range $z<1$, a big difference occurs in the range $z>1$, especially at $z>10$. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{holohphi.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the holographic dark energy scenarios with the best-fit results derived from WMAP5 combined with SN and BAO. The function $h(\phi)$ is in unit of $\rho_{\rm c0}^{-1}$.} \label{fig:holohphi} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.75]{holophiz.eps} \caption[]{\small Reconstruction of the generalized ghost condensate model according to the holographic dark energy scenarios with the best-fit results derived from WMAP5 combined with SN and BAO. The scalar field $\phi(z)$ is in unit of $M_{P}$.}\label{fig:holophiz} \end{center} \end{figure} In Ref.~\cite{Li:2009bn}, the authors use the Bayesian evidence (BE) as a model selection criterion to make a comparison between the holographic dark energy models. It is found that for holographic dark energy and Ricci dark energy, $\Delta \ln \mathrm{BE}= -0.86$ and $-8.14$, respectively.
These Bayesian evidence values show that the holographic dark energy scenario is more favored by the observational data, whereas the Ricci dark energy scenario appears to be disfavored. Our reconstruction investigation also supports this conclusion from another point of view. \section{Concluding remarks}\label{sec:concl} The recent fits to current observational data, such as SN, CMB and LSS, find that even though the behavior of dark energy is consistent to a great extent with a cosmological constant, an evolving dark energy with the equation of state $w$ larger than $-1$ in the recent past but less than $-1$ today is still allowed with some probability. Although the scalar-field models of dark energy, such as quintessence and phantom, can provide us with a dynamical mechanism for dark energy, the cosmological-constant-crossing behavior poses a great challenge to model building for dynamical dark energy, because neither quintessence nor phantom can realize this behavior. A two-field quintom model was therefore suggested to realize this behavior by incorporating the features of quintessence and phantom. In addition, the generalized ghost condensate model provides a successful single-real-scalar-field realization of the quintom-like behavior. To probe the dynamical nature of dark energy, one should first parameterize dark energy and then constrain the parameters using the observational data. In this paper, we have investigated the dynamical behavior of the generalized ghost condensate scalar-field model by adopting the new form of the equation-of-state parametrization proposed in Ref.~\cite{Komatsu:2008hk} together with the best-fit values from the observational data. The reconstruction results show the dynamical behavior of the generalized ghost condensate implied by this parametrization. In particular, this reconstruction investigation explores the early-time evolutionary behavior of the scalar-field model. As a comparison, we also discussed other specific cases, including the CPL parametrization and the holographic dark energy scenarios. The future increase in the quantity and quality of observational data will undoubtedly allow a truly {\it model-independent} exploration of the properties of dark energy. We hope that future high-precision observations (e.g., SNAP) will provide deep insight into the nature of the dark energy driving the acceleration of the universe. \section*{ACKNOWLEDGMENTS} We thank Xin Zhang for helpful discussions. This work was supported by the Natural Science Foundation of China under Grant Nos.~10705041 and 10975032.
\section{INTRODUCTION} \IEEEPARstart{T}{he} accurate characterization of wireless channels is currently of paramount importance for the understanding, evaluation and design of future wireless communication systems, which will operate under stringent requirements of capacity, reliability, latency, and scalability, to enable the new applications for machine-type communications (MTC) and mission-critical communications envisioned for the scenarios of 5G and beyond. Accordingly, wireless propagation models are crucial for comparing the potential candidate technologies that will be used for the deployment of these networks~\cite{sun2018}. Particularly, classical small-scale fading models have proven limited in adequately fitting experimental data from practical scenarios. Hence, more general fading models, which better capture the wireless channel statistics, are especially useful for modeling future wireless systems. Precisely, the $\alpha$-$\mu$ fading distribution, first proposed in~\cite{yacoub2007alpha}, is a more general and flexible model that better fits field data in cases where other widely used classical distributions are not accurate. Also, it is mathematically tractable and includes other important distributions as special cases, such as Gamma, Nakagami-$m$, Exponential, Weibull, one-sided Gaussian, and Rayleigh. The $\alpha$-$\mu$ fading model considers a signal composed of clusters of multipath waves, which propagates in a non-linear environment. Thus, this fading model is described by two physical parameters, namely $\alpha$, which represents the non-linearity of the propagation environment, and $\mu$, which represents the number of multipath clusters. The knowledge of the statistics of the sum, product, and ratio of fading random variables~(RVs) has a pivotal role in the performance analysis and evaluation of many practical wireless applications. In this context, the distribution of both the sum and the product of $\alpha$-$\mu$ RVs has been extensively studied; among the many research works, the following are notable:~\cite{DaCosta,DaCosta2,DaCosta3,2018Naka} (for the sum) and~\cite{Carlos,product2,product3,product4} (for the product). On the other hand, the statistics of the ratio of $\alpha$-$\mu$ RVs have been little explored in the literature, as will be shown later. It is noteworthy that the performance analysis of wireless communication systems, specifically for scenarios involving some of the key technologies for future wireless networks, commonly requires the calculation of ratios of signal powers, such as the signal-to-interference ratio (SIR), for instance. Therefore, the distribution of the ratio of RVs is of particular interest, and it plays a pivotal role in the analytical performance evaluation of those scenarios. Different approaches concerning the statistics of the ratio between RVs with well-known distributions such as Gamma, Exponential, Weibull, and Normal are presented in~\cite{Ahsen,Annavajjala,Nadarajah,Gia}, where some application uses are also provided. Moreover, regarding generalized distributions, the statistics of the ratio of independent and arbitrary $\alpha$-$\mu$ RVs were obtained via series representations in~\cite{Leonardo}.
However, in that work, the convergence of the power series was attained by making a strong assumption, more specifically, that the values related to the non-linearity of the environment (i.e., the $\alpha$ parameter, also referred to as the shape parameter) of the $\alpha$-$\mu$ RVs involved in the quotient must be co-prime integers. Further, under the same consideration, the work in~\cite{Leonardo2016} provides closed-form expressions for the statistics of the ratio of products of an arbitrary number of independent and non-identically distributed $\alpha$-$\mu$ variates. Thus, an important constraint on the results of~\cite{Leonardo} and~\cite{Leonardo2016} is that the shape parameter (or, equivalently, the $\alpha$ parameter) of the $\alpha$-$\mu$ RVs involved in the ratio cannot take non-constrained arbitrary values. This fact hinders a more comprehensive insight into the performance analysis of different wireless communication systems. In light of the above considerations, in this paper we derive closed-form expressions for the main statistics of the ratio of independent and non-identically distributed (i.n.i.d.) squared $\alpha$-$\mu$ RVs, for which all the fading parameters of both distributions can be non-constrained arbitrary positive real numbers (thus including the special case of positive integers). In this way, our expressions relax the strong assumption considered in~\cite{Leonardo} and~\cite{Leonardo2016}. Also, our results can be employed as a powerful tool for the performance evaluation of different scenarios. The following are our main contributions: \begin{itemize} \item Novel closed-form expressions for the probability density function (PDF), cumulative distribution function (CDF), moment generating function (MGF), and higher order moments of the ratio of general $\alpha$-$\mu$ RVs are derived in terms of the Fox H-function. \item The statistics of the ratio of RVs for some special cases of classical fading distributions are also provided as byproducts. \item Application uses in wireless networks are proposed in the context of Physical Layer Security (PLS), Cognitive Radio (CR), and Full-Duplex (FD) relaying, where the obtained analytical expressions can be used straightforwardly. \item A simple, efficient and useful algorithm for the implementation of the univariate Fox H-function is also provided. \end{itemize} The remainder of this paper is organized as follows. Section~II revisits preliminaries on the $\alpha$-$\mu$ distribution. In Section~III, the statistics of the ratio of non-constrained arbitrary $\alpha$-$\mu$ RVs are derived. Section~IV presents some application uses of the derived expressions, while Section~V presents some illustrative numerical results and discussions. Finally, some concluding remarks are presented in Section~VI. In what follows, we use the following notation: $f_{A}(\cdot)$ and $F_{A}(\cdot)$ for the PDF and CDF of a RV~$A$, respectively; $\mathbb{E}[\cdot]$ for expectation; $\mathbb{V}[\cdot]$ for variance; $\Pr\left \{ \cdot \right \}$ for probability; and $\abs{\cdot}$ for absolute value. In addition, $\Gamma(\cdot)$ is the gamma function~\cite[Eq.~(6.1.1)]{Abramowitz}; $\operatorname{P}(z,y)=\tfrac{1}{\Gamma(z)} \int_{0}^{y}t^{z-1}\text{exp}(-t)dt$ is the regularized lower incomplete gamma function~\cite[Eq.~(6.5.1)]{Abramowitz}; $\mathrm{H}_{p,q}^{m,n}\left[ \cdot \right]$ is the Fox H-function~\cite[Eq.~(1.1)]{Fox}; and $G_{p, q}^{m, n}\left[\cdot \right]$ is the Meijer G-function~\cite[Eq.~(7.82)]{Gradshteyn}.
We also use $\mathrm{i}=\sqrt[]{-1}$ for the imaginary unit; $\field{N}^0$ for natural numbers including zero; $\mathbb{C}$ for complex numbers; $\mathbb{R}$ for real numbers; $\mathbb{R}^+$ for positive real numbers; $\approx$ to denote ``approximately equal~to''; and $\propto$ to denote ``proportionally~to'' \section{Preliminaries} The PDF of the envelope $R$ of a signal propagating on a fading channel with distribution $\alpha$-$\mu$ is given by~\cite{yacoub2007alpha} \begin{equation}\label{eq:pdfalpha} f_{R}(r)=\frac{\alpha\mu^{\mu}r^{\alpha\mu-1}}{\hat{r}^{\mu \alpha}\Gamma (\mu)}\exp\left(-\frac{\mu r^{\alpha}}{\hat{r}^{\alpha}} \right ), \end{equation} where $\alpha$ denotes the non-linearity of the environment, $\hat{r}=\sqrt[\alpha]{\mathbb{E}\left [ R^{\alpha} \right ]}$ is the $\alpha$-root mean value of the channel envelope, and $\mu=\hat{r}^{2\alpha}\mathbb{V}^{-1}\left [ R^{\alpha} \right ]$ is related to the number of multipath clusters. Therefore, some special cases for the parameters $\alpha$ and $\mu$, such that the $\alpha$-$\mu$ distribution reduces to well-known distributions commonly used in wireless application scenarios, are specified in Table~\ref{specialcases}~\cite{yacoub2007alpha}. \begin{table}[H] \scriptsize \centering \caption{Particular cases of the $\alpha$-$\mu$ distribution} \centering \begin{tabular}{cc} \toprule \hspace{1mm} \textbf{Distribution} & \hspace{1mm} \textbf{$\alpha$-$\mu$ fading values } \\ \cmidrule(lr){1-2} \multicolumn{1}{l} \textbf{Nakagami-$m$} & \hspace{3mm} \textbf{$\alpha=2$, $\mu=m$ } \\ \cmidrule(lr){1-2} \multicolumn{1}{l} \textbf{Weibull} \hspace{3mm} & \textbf{ $\alpha=\alpha$, $\mu=1$} \\ \cmidrule(lr){1-2} \multicolumn{1}{l} \textbf{Rayleigh} \hspace{2.5mm} & \hspace{2mm} \textbf{$\alpha=2$, $\mu=1$ } \\ \cmidrule(lr){1-2}\end{tabular}\label{specialcases} \end{table} From~\eqref{eq:pdfalpha}, the $n$-th moment $\mathbb{E}\left [ R^n \right ]$ can be obtained as \begin{equation}\label{eq:moments} \mathbb{E}\left [ R^n \right ]=\hat{r}^{n} \frac{\Gamma\left ( \mu+n/\alpha \right )}{\mu^{n/\alpha} \Gamma (\mu)}. \end{equation} Let $\Upsilon \stackrel{\Delta}{=} \gamma_t R^2$ be the instantaneous received signal-to-noise ratio (SNR) through an $\alpha$-$\mu$ fading channel, with $\gamma_t$ being the transmit SNR~\cite{DaCosta,Lei2017}. Hence, the corresponding PDF and CDF can be obtained from~\eqref{eq:pdfalpha} by performing a transformation of variables as in~\cite[Eqs.~(8)~and~(10)]{DaCosta} \begin{align} f_{\Upsilon}(\gamma) & =\frac{\alpha \gamma^{(\alpha\mu/2)-1}}{2\beta^{\alpha\mu/2}\Gamma (\mu)}\exp\left[-\left ( \frac{\gamma}{\beta} \right )^{\alpha/2}\right ],\label{eq:2}\\ F_{\Upsilon}(\gamma) & = \operatorname{P} \left ( \mu, \left ( \frac{\gamma}{\beta} \right ) ^{\alpha/2} \right ),\label{eq:3} \end{align} where $\beta=\bar{\Upsilon}\Gamma(\mu) /\Gamma(\mu+2/\alpha)$, with $\bar{\Upsilon}$ being the average received SNR, so that \begin{align}\label{eq:4} \bar{\Upsilon} & =\mathbb{E}\left [\Upsilon\right ]\nonumber\\ & =\hat{r}^{2} \frac{\Gamma(\mu+2/\alpha)}{\mu^{2/\alpha}\Gamma(\mu)}\gamma_t, \end{align} Now, by using~\cite[Eq. 
(01.03.26.0004.01)]{Wolfram1}, we can express the exponential function in~\eqref{eq:2} in terms of the Meijer G-function, so that the PDF of $\Upsilon$ can be rewritten as \begin{equation}\label{eq:6} f_{\Upsilon}(\gamma)=\frac{\alpha \gamma^{(\alpha\mu/2)-1}}{2\beta^{\alpha\mu/2}\Gamma (\mu)} G_{0,1}^{1,0}\left[ \left ( \frac{\gamma}{\beta} \right )^{\alpha/2} \bigg| \begin{array}{c} 0 \\ \end{array} \right]. \end{equation} Likewise, using~\cite[Eq. (06.09.26.0006.01)]{Wolfram1}, the regularized lower incomplete gamma function in~\eqref{eq:3} can be expressed in terms of the Meijer G-function. Thus, the CDF of $\Upsilon$ can be rewritten as \begin{equation}\label{eq:7} F_{\Upsilon}(\gamma)=\frac{1}{\Gamma(\mu)} \left ( \frac{\gamma}{\beta} \right ) ^{\frac{\mu\alpha}{2}} G_{1,2}^{1,1}\left[ \left ( \frac{\gamma}{\beta} \right ) ^{\frac{\alpha}{2}} \bigg| \begin{array}{c} 1-\mu\\ 0,-\mu \\ \end{array} \right]. \end{equation} \section{Statistics of the ratio of independent and arbitrary squared $\alpha$-$\mu$ RVs} In this section, we derive closed-form expressions for the PDF, CDF, MGF and higher order moments of the ratio $X=\Upsilon_1/\Upsilon_2$, where $\Upsilon_1$ and $\Upsilon_2$ are i.n.i.d. RVs following an $\alpha$-$\mu$ distribution. Moreover, hereafter we assume that $\alpha_1, \alpha_2 \in \mathbb{R}^+$, $k = \tfrac{\alpha_1}{\alpha_2}$, $\mu_1, \mu_2 \in \mathbb{R}^+$, and $x \in \mathbb{R}^+$. \subsection{PDF, CDF and MGF of $X$} Herein, the PDF and CDF of the ratio of two independent squared $\alpha$-$\mu$ RVs are given in the following proposition. Besides, as one of the most important characterizations of a RV, the corresponding MGF is also provided. \begin{prop}\label{prop:pdf} Let $\Upsilon_1$ and $\Upsilon_2$ be i.n.i.d. squared $\alpha$-$\mu$ distributed RVs with probability functions given as in~\eqref{eq:6} and~\eqref{eq:7}. The PDF, CDF, and MGF of the ratio $X {=}\Upsilon_1/\Upsilon_2 $ are respectively given~by \begin{align} f_X(x) = & \frac{\alpha_{1} x^{\frac{\alpha_1\mu_1}{2}-1}\beta_2^{\frac{\alpha_1\mu_1}{2} }}{2 \beta_1^{\frac{\alpha_1\mu_1}{2}}\Gamma (\mu_2)\Gamma (\mu_1)} \nonumber \\ &\times \underset{\mathrm{H}_1}{\underbrace{ \mathrm{H}_{1,1}^{1,1}\left[{\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}}\bigg| \begin{array}{c} (1-\mu_2-k\mu_1,k)\\ (0,1)\\ \end{array} \right]}},\label{pdfRatio}\\ F_X(x) = & \frac{1}{\Gamma(\mu_2)\Gamma (\mu_1)} \left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1\mu_1}{2}}\nonumber \\ &\times \underset{\mathrm{H}_2}{\underbrace{ \mathrm{H}_{2,2}^{1,2}\left[{\left ( \!\frac{x\beta_2}{\beta_1}\! \!\right)^{\!\frac{\alpha_1}{2}}}\bigg| \begin{array}{c} \!(1\!-\!\mu_1,1\!),\!(1\!-\!\mu_1 k\!-\! \mu_2,k)\! \\ (0,1) ,\hspace{0.5mm} \!(-\mu_1,1)\!\\ \end{array} \right]}},\label{eq:CDFRATIO}\\ {\cal M}_X(s) = & \frac{\alpha_{1} }{2 \Gamma (\mu_2)\Gamma (\mu_1)} \left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1\mu_1}{2}} \nonumber \\ \times &\underset{\mathrm{H}_3}{\underbrace{ \mathrm{H}_{2,1}^{1,2}\!\left[{\left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}}}\!\bigg|\!\!\! \begin{array}{c} (1\!-\!\mu_2\!-\!k\mu_1,k),(1-\frac{\mu_1 \alpha_1}{2},\frac{\alpha_1}{2}) \\ (0,1) \\ \end{array} \!\!\right]}}.\label{eq:MGF} \end{align} \end{prop} \begin{proof} See Appendix~\ref{ap:statistics}. \end{proof}
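As a practical complement to Proposition~\ref{prop:pdf} and to the MATHEMATICA\textregistered\ routine of Appendix~\ref{ap:mathimplementation}, the following minimal Python sketch illustrates one possible way to evaluate the CDF in~\eqref{eq:CDFRATIO}, namely by computing $\mathrm{H}_2$ directly from its defining Mellin--Barnes integral, whose integrand is derived in Appendix~\ref{ap:residues}. The sketch is given for illustration purposes only, and the parameter values below are arbitrary examples.
\begin{verbatim}
# Illustrative sketch: CDF of X = Y1/Y2 (Proposition 1), with H2 evaluated via
# its Mellin-Barnes integral, whose integrand is
# Gamma(s)Gamma(mu1-s)Gamma(k*mu1+mu2-k*s)/Gamma(1+mu1-s).
import mpmath as mp

def fox_H2(z, mu1, mu2, k):
    c = 0.5 * mu1                        # any 0 < c < mu1 separates the pole sets
    def integrand(t):
        s = mp.mpc(c, t)
        return (mp.gamma(s) * mp.gamma(mu1 - s) * mp.gamma(k*mu1 + mu2 - k*s)
                / mp.gamma(1 + mu1 - s)) * mp.power(z, -s)
    return mp.re(mp.quad(integrand, [-mp.inf, mp.inf])) / (2 * mp.pi)

def cdf_ratio(x, a1, mu1, snr1, a2, mu2, snr2):
    b1 = snr1 * mp.gamma(mu1) / mp.gamma(mu1 + 2/a1)   # beta_1
    b2 = snr2 * mp.gamma(mu2) / mp.gamma(mu2 + 2/a2)   # beta_2
    z = (x * b2 / b1) ** (a1 / 2)
    return z**mu1 * fox_H2(z, mu1, mu2, a1/a2) / (mp.gamma(mu1) * mp.gamma(mu2))

# Example: F_X(1) for alpha_1=1.5, mu_1=3.5, alpha_2=1.1, mu_2=2.8, unit mean SNRs.
print(cdf_ratio(1.0, 1.5, 3.5, 1.0, 1.1, 2.8, 1.0))
\end{verbatim}
The value returned by this sketch can be cross-checked against the Monte Carlo simulations of Section~\ref{sect:numericals}.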
\begin{remark} Notice that, contrary to previous works~\cite{Leonardo,Leonardo2016}, the results of Proposition~\ref{prop:pdf} are general, since no constraints are imposed on the parameters of $\Upsilon_1$ and $\Upsilon_2$. \end{remark} \begin{remark} It is worth mentioning that currently the Fox H-function is not implemented in mathematical software packages such as Wolfram Mathematica. However, the Fox H-function can be evaluated using either numerical evaluations in the form of a Mellin--Barnes integral\footnote{In Appendix~\ref{ap:mathimplementation}, we provide a portable implementation of the Fox H-function in Wolfram MATHEMATICA\textregistered. The code is simple, efficient, and provides very accurate results.}~\cite{Fox} or by applying calculus of residues\footnote{An alternative method to compute the results presented here is given by the series representation for the Fox H-functions $\mathrm{H}_1$, $\mathrm{H}_2$, and $\mathrm{H}_3$ as in~\eqref{eq:FoxbyResidues1},~\eqref{eq:FoxbyResidues2} and~\eqref{eq:FoxbyResidues3}, respectively, shown at the bottom of the next page. The mathematical derivation of the referred expressions is provided in Appendix~\ref{ap:residues}.}. \end{remark} \subsection{Higher Order Moments} The $n$th order moment for a RV $X$ is defined as $\mathbb{E}\left [X^n\right ]\buildrel \Delta \over = \int_{0}^{\infty}x^{n}f_X(x)dx$. Then, to calculate the $n$th moment of the ratio of squared $\alpha${-}$\mu$ distributed RVs, $X{=}\Upsilon_1/\Upsilon_2$, we resort to the identity for the product of two statistically independent RVs, i.e., $\mathbb{E}\left [(\Upsilon_1\Upsilon_2)^n\right ]=\mathbb{E}\left [\Upsilon_1^n\right ] \mathbb{E}\left [\Upsilon_2^n\right ]$~\cite{productCarlos}. However, for the case of the ratio of two RVs, we are interested in solving $\mathbb{E}\left [(\Upsilon_1/\Upsilon_2)^n\right ]$, so it is necessary to determine the $n$th moment of the inverse of a RV. To this end, let us define $Z=1/\Upsilon_2$, such that $\mathbb{E}\left [(\Upsilon_1Z)^n\right ]=\mathbb{E}\left [\Upsilon_1^n\right ] \mathbb{E}\left [Z^{n}\right ]$. Thus, by determining the moments $\mathbb{E}\left [\Upsilon_1^n\right ]$ and $\mathbb{E}\left [Z^{n}\right ]$, the higher order moments $\mathbb{E}\left [X^n\right ]$ can be found. The moments of $Z$ are determined from the distribution of the inverse of $R_2$ by considering $\mathbb{E}\left [(\gamma_t R_2^2)^{n}\right ]=\mathbb{E}\left [\Upsilon_2^{n}\right ]$~\cite{DaCosta}. From this consideration, the higher order moments $\mathbb{E}\left [X^n\right ]$ can be obtained as in the following proposition. \begin{prop}\label{prop:moments} The $n$th order moment for the ratio of squared $\alpha${-}$\mu$ distributed RVs, $X=\Upsilon_1/\Upsilon_2$, is given by \begin{align}\label{eq:HigherMoments} \mathbb{E}\left [X^n\right ]=& \frac{\left (\hat{r_1}\hat{r_2}\right )^{2n}\Gamma\left ( \mu_1+\frac{2n}{\alpha_1} \right )\Gamma\left ( \mu_2-\frac{2n}{\alpha_2} \right )}{\mu_1^{2n/\alpha_1} \mu_2^{2n/\alpha_2}\Gamma\left ( \mu_1 \right )\Gamma\left ( \mu_2\right )}, \nonumber \\ & \hspace{12mm} \text{for} \hspace{2mm} n<\frac{\mu_2 \alpha_2}{2}. \end{align} \end{prop} \vspace{2mm} \begin{proof} See Appendix~\ref{ap:momentinv}. \end{proof} \begin{remark} An equivalent expression for the $n$th order moment in~\eqref{eq:HigherMoments} can be obtained by applying the Mellin transform~\cite[Eq. (6.3.3.c)]{springer} to the PDF in~\eqref{pdfRatio}.
\end{remark} The formulations derived in~\eqref{pdfRatio} to~\eqref{eq:HigherMoments} are general results that can be reduced to other distributions for different channel models, such as Rayleigh, Nakagami-$m$, and Weibull, by considering the corresponding parameters as in Table~\ref{specialcases}. Therefore, the PDF, CDF, and MGF for the distribution of the ratio of the aforementioned distributions are given in Table~\ref{RATIOPDF},~\ref{RATIOCDF} and~\ref{RATIOMGF}, respectively. \begin{table*}[t] \scriptsize \centering \caption{PDF of the Ratio for Different Distributions as Special Cases} \centering \begin{tabular}{ll} \toprule \hspace{10mm} \textbf{Ratio} & \hspace{30mm} \textbf{PDF } \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} f_X(x)=\frac{ x^{\mu_1-1}\beta_2^{\mu_1 }}{ \beta_1^{\mu_1}\Gamma (\mu_2)\Gamma (\mu_1)} G_{1,1}^{1,1}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} 1-\mu_2-\mu_1\\ 0\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}&$\begin{array} {lcl} f_X(x)=\frac{x^{\mu_1-1}\beta_2^{\mu_1 }}{ \beta_1^{\mu_1}\Gamma (\mu_2)} \mathrm{H}_{1,1}^{1,1}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} (- \frac{2\mu_1}{\alpha_2}, \frac{2}{\alpha_2})\\ (0,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}&$\begin{array} {lcl} f_X(x)=\frac{x^{\mu_1-1}\beta_2^{\mu_1 }}{ \beta_1^{\mu_1}\Gamma (\mu_1)} G_{1,1}^{1,1}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} -\mu_1\\ 0\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}& $\begin{array} {lcl} f_X(x)=\frac{\alpha_{1} x^{\frac{\alpha_1}{2}-1}\beta_2^{\frac{\alpha_1}{2} }}{2 \beta_1^{\frac{\alpha_1}{2}}} \mathrm{H}_{1,1}^{1,1}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (-k,k)\\ (0,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} f_X(x)=\frac{\alpha_{1} x^{\frac{\alpha_1}{2}-1}\beta_2^{\frac{\alpha_1}{2} }}{2 \beta_1^{\frac{\alpha_1}{2}}\Gamma (\mu_2)} \mathrm{H}_{1,1}^{1,1}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (1-\mu_2- \frac{\alpha_1}{2}, \frac{\alpha_1}{2})\\ (0,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}& $\begin{array} {lcl} f_X(x)=\frac{\alpha_{1} x^{\frac{\alpha_1}{2}-1}\beta_2^{\frac{\alpha_1}{2} }}{2 \beta_1^{\frac{\alpha_1}{2}}} \mathrm{H}_{1,1}^{1,1}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (- \frac{\alpha_1}{2}, \frac{\alpha_1}{2})\\ (0,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}& $\begin{array} {lcl} f_X(x)=\frac{\beta_2}{ \beta_1} G_{1,1}^{1,1}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} -1\\ 0\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} f_X(x)=\frac{\beta_2}{ \beta_1 \Gamma(\mu_2)} G_{1,1}^{1,1}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} -\mu_2\\ 0\\ \end{array} \right] 
\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}& $\begin{array} {lcl} f_X(x)=\frac{ \beta_2}{ \beta_1} \mathrm{H}_{1,1}^{1,1}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} (- \frac{2}{\alpha_2}, \frac{2}{\alpha_2})\\ (0,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \end{tabular}\label{RATIOPDF} \end{table*} \begin{table*}[t] \scriptsize \centering \caption{CDF of the Ratio for Different Distributions as Special Cases} \centering \begin{tabular}{ll} \toprule \hspace{10mm} \textbf{Ratio} & \hspace{40mm} \textbf{CDF } \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} F_X(x)=\frac{1}{\Gamma(\mu_2)\Gamma (\mu_1)} \left ( \frac{x\beta_2}{\beta_1} \right )^{\mu_1}G_{2,2}^{1,2}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} 1-\mu_1,1-\mu_1-\mu_2 \\ 0 ,\hspace{0.5mm} -\mu_1\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}&$\begin{array} {lcl} F_X(x)=\frac{1}{\Gamma (\mu_1)} \left ( \frac{x\beta_2}{\beta_1} \right )^{\mu_1}\mathrm{H}_{2,2}^{1,2}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} (1-\mu_1,1),(-\frac{2\mu_1}{\alpha_2},\frac{2}{\alpha_2}) \\ (0,1) ,\hspace{0.5mm} (-\mu_1,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}&$\begin{array} {lcl} F_X(x)=\frac{1}{\Gamma (\mu_1)} \left ( \frac{x\beta_2}{\beta_1} \right )^{\mu_1}G_{2,2}^{1,2}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} 1-\mu_1,-\mu_1 \\ 0,\hspace{0.5mm} -\mu_1\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}& $\begin{array} {lcl} F_X(x)=\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\mathrm{H}_{2,2}^{1,2}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (0,1),(-k,k) \\ (0,1) ,\hspace{0.5mm} (-1,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} F_X(x)=\frac{1}{\Gamma(\mu_2)} \left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\mathrm{H}_{2,2}^{1,2}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}| \begin{array}{c} (0,1),(1-k-\mu_2,k) \\ (0,1),\hspace{0.5mm} (-1,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}& $\begin{array} {lcl} F_X(x)=\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\mathrm{H}_{2,2}^{1,2}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (0,1),(-\frac{\alpha_1}{2} ,\frac{\alpha_1}{2}) \\ (0,1) ,\hspace{0.5mm} (-1,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}& $\begin{array} {lcl} F_X(x)= \frac{x\beta_2}{\beta_1} G_{2,2}^{1,2}\left[ \frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} 0,-1 \\ 0,-1\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} F_X(x)=\frac{x\beta_2}{\beta_1\Gamma(\mu_2)} G_{2,2}^{1,2}\left[\frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} 0,-\mu_2 \\ 0,\hspace{0.5mm}-1\\ \end{array} \right] \end{array}$ \\ 
\cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}& $\begin{array} {lcl} F_X(x)= \frac{x\beta_2}{\beta_1} \mathrm{H}_{2,2}^{1,2}\left[\frac{x\beta_2}{\beta_1} \bigg| \begin{array}{c} (0,1),(-\frac{2}{\alpha_2},\frac{2}{\alpha_2}) \\ (0,1) ,\hspace{2.5mm} (-1,1)\\ \end{array} \right] \end{array}$ \\ \cmidrule(lr){1-2} \end{tabular}\label{RATIOCDF} \end{table*} \begin{table*}[t] \scriptsize \centering \caption{MGF of the Ratio for Different Distributions as Special Cases} \centering \begin{tabular}{ll} \toprule \hspace{10mm} \textbf{Ratio} & \hspace{40mm} \textbf{MGF} \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} {\cal M}_X(s)= \frac{1 }{2 \Gamma (\mu_2)\Gamma (\mu_1)} \left ( \frac{\beta_2}{s\beta_1} \right )^{\mu_1} G_{2,1}^{1,2}\left[\frac{\beta_2}{s\beta_1}\bigg| \begin{array}{c} 1-\mu_2-\mu_1,1-\mu_1 \\ 0 \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}&$\begin{array} {lcl} {\cal M}_X(s)= \frac{1 }{ \Gamma (\mu_1)} \left ( \frac{\beta_2}{s\beta_1} \right )^{\mu_1} \mathrm{H}_{2,1}^{1,2}\left[\frac{\beta_2}{s\beta_1} \bigg| \begin{array}{c} (-\frac{2\mu_1}{\alpha_2},\frac{2}{\alpha_2}),(1-\mu_1,1) \\ (0,1) \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Nakagami-$m$$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}&$\begin{array} {lcl} {\cal M}_X(s)= \frac{1}{ \Gamma (\mu_1)}\left ( \frac{\beta_2}{s\beta_1} \right )^{\mu_1}G_{2,1}^{1,2}\left[\frac{\beta_2}{s\beta_1} \bigg| \begin{array}{c} -\mu_1,1-\mu_1 \\ 0 \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}& $\begin{array} {lcl} {\cal M}_X(s)= \frac{\alpha_{1} }{2} \left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}}\mathrm{H}_{2,1}^{1,2}\left[\left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}}| \begin{array}{c} (-k,k),(1-\frac{ \alpha_1}{2},\frac{\alpha_1}{2}) \\ (0,1) \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} {\cal M}_X(s)= \frac{\alpha_{1} }{2 \Gamma (\mu_2)} \left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}}\mathrm{H}_{2,1}^{1,2}\left[\left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}}| \begin{array}{c} (1-\mu_2-\frac{\alpha_1}{2} ,\frac{\alpha_1}{2} ),(1-\frac{ \alpha_1}{2},\frac{\alpha_1}{2}) \\ (0,1) \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Weibull$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}& $\begin{array} {lcl} {\cal M}_X(s)= \frac{\alpha_{1} }{2 } \left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}}\mathrm{H}_{2,1}^{1,2}\left[\left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (-\frac{\alpha_1}{2},\frac{\alpha_1}{2}),(1-\frac{ \alpha_1}{2},\frac{\alpha_1}{2}) \\ (0,1) \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Rayleigh}& $\begin{array} {lcl} {\cal M}_X(s)= \frac{\beta_2}{s\beta_1} G_{2,1}^{1,2}\left[\frac{\beta_2}{s\beta_1} \bigg| \begin{array}{c} -1,0 \\ 0 \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Nakagami-$m$}& $\begin{array} {lcl} {\cal M}_X(s)= \frac{\beta_2 }{ s \beta_1\Gamma 
(\mu_2)}G_{2,1}^{1,2}\left[ \frac{\beta_2}{s\beta_1} \bigg| \begin{array}{c} -\mu_2,0 \\ 0\\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \multicolumn{1}{l}{Rayleigh$\mathlarger{\mathlarger{\mathlarger{/}}}$Weibull}& $\begin{array} {lcl} {\cal M}_X(s)= \frac{\beta_2}{s\beta_1} \mathrm{H}_{2,1}^{1,2}\left[\frac{\beta_2}{s\beta_1} \bigg| \begin{array}{c} (-\frac{2}{\alpha_2},\frac{2}{\alpha_2}),(0,1) \\ (0,1) \\ \end{array} \right]\end{array}$ \\ \cmidrule(lr){1-2} \end{tabular}\label{RATIOMGF} \end{table*} \begin{figure*}[hbt] \begin{footnotesize} \begin{equation} \mathrm{H}_1=\begin{cases} \sum_{h=0}^{\infty}\frac{z^{h}\Gamma\left (k (h+\mu_1)+\mu_2\right )}{\left (-1\right )^h h!}, & \text{$k\leq 1$, if $k=1\rightarrow $ $\abs{z}<1$}.\\ \sum_{h=0}^{\infty}\frac{ z^{-\frac{h+k\mu_1+\mu_2}{k}}\Gamma\left (\frac{h+k\mu_1+\mu_2}{k} \right )}{\left (-1\right )^h k h!}, & \text{$k\geq 1$, if $k=1\rightarrow $ $\abs{z}>1$}. \end{cases} \label{eq:FoxbyResidues1} \end{equation} \end{footnotesize} \hrulefill \end{figure*} \begin{figure*}[hbt] \begin{footnotesize} \begin{equation} \mathrm{H}_2=\begin{cases} \sum_{h=0}^{\infty}\frac{z^{h}\Gamma\left (k(h+\mu_1)+\mu_2\right )}{\left (-1\right )^h (h+\mu_1) \Gamma\left ( 1+h \right )}, & \text{$k\leq 1$, if $k=1\rightarrow $ $\abs{z}<1$}.\\ \sum_{h=0}^{\infty}\frac{z^{-h-\mu_1} \Gamma\left (h+\mu_1\right )\Gamma\left (-hk+\mu_2\right )}{\left (-1\right )^{h-2} \Gamma \left ( 1-h \right )h!}+ \sum_{h=0}^{\infty}\frac{z^{-\frac{h}{k}-\mu_1-\frac{\mu_2}{k}}\Gamma\left (\frac{-h-\mu_2}{k}\right )\Gamma\left (\frac{h+k\mu_1+\mu_2}{k} \right )}{\left (-1\right )^{h-2} \Gamma \left ( \frac{-h+k-\mu_2}{k} \right ) kh!}, & \text{$k\geq 1$, if $k=1\rightarrow $ $\abs{z}>1$}. \end{cases} \label{eq:FoxbyResidues2} \end{equation} \end{footnotesize} \hrulefill \end{figure*} \begin{figure*}[hbt] \begin{footnotesize} \begin{equation} \mathrm{H}_3=\begin{cases} \sum_{h=0}^{\infty}\frac{\Gamma\left (hk+k\mu_X+\mu_Y\right )z^{h}}{\left (-1\right )^h h!}, & \text{$k\leq 1$, if $k=1\rightarrow $ $\abs{z}<1$}.\\ \sum_{h=0}^{\infty}\frac{\Gamma\left (\frac{h+k\mu_X+\mu_Y}{k}\right )z^{-\frac{h+k\mu_X+\mu_Y}{k}}}{\left (-1\right )^h k h!}, & \text{$k\geq 1$, if $k=1\rightarrow $ $\abs{z}>1$}. \end{cases} \label{eq:FoxbyResidues3} \end{equation} \end{footnotesize} \hrulefill \end{figure*} \section{Applications} In this section, we present some illustrative application uses of our analytical expressions in the context of key enabling technologies for 5G and beyond networks, including PLS, CR, and FD relaying. \subsection{Physical Layer Security and Secrecy Outage Probability} In the context of physical layer security, a widely used metric to evaluate the secrecy performance of wireless networks is the secrecy outage probability. Thus, let us consider the Wyner's wiretap channel as depicted in Fig.~\ref{sistema1}, where a legitimate transmitter (Alice) sends confidential messages to the legitimate receiver (Bob) through the main channel, while the eavesdropper (Eve) tries to intercept these messages from its received signal over the eavesdropper channel. Furthermore, assume that both the main and eavesdropper channels experience independent $\alpha$-$\mu$ distributed fading. 
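For illustration purposes, instantaneous SNR samples following the squared $\alpha$-$\mu$ model of Section~II can be drawn by noting from~\eqref{eq:2} that $(\Upsilon/\beta)^{\alpha/2}$ is Gamma distributed with shape $\mu$ and unit scale. The following minimal Python sketch, with arbitrary example parameters, generates such samples for the main and eavesdropper links; samples of this kind underlie the Monte Carlo validations of Section~\ref{sect:numericals}.
\begin{verbatim}
# Sketch: drawing instantaneous SNR samples of alpha-mu faded links, using the
# fact that (Upsilon/beta)^(alpha/2) ~ Gamma(mu, 1); example parameters only.
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

def alpha_mu_snr(alpha, mu, mean_snr, size):
    beta = mean_snr * gamma(mu) / gamma(mu + 2.0/alpha)   # beta from the mean SNR
    return beta * rng.gamma(mu, 1.0, size) ** (2.0/alpha)

gamma_B = alpha_mu_snr(alpha=2.0, mu=4.5, mean_snr=10.0, size=100_000)  # main link
gamma_E = alpha_mu_snr(alpha=2.0, mu=0.6, mean_snr=1.26, size=100_000)  # wiretap link
ratio_samples = gamma_B / gamma_E        # empirical samples of X = gamma_B/gamma_E
\end{verbatim}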
\begin{figure}[H] \centering \psfrag{A}[Bc][Bc][0.8]{A} \psfrag{B}[Bc][Bc][0.8]{B} \psfrag{E}[Bc][Bc][0.8]{E} \psfrag{U}[Bc][Bc][0.8]{$h_{\mathrm{AB}}$} \psfrag{w}[Bc][Bc][0.8][-20]{$h_{\mathrm{AE}}$} \psfrag{Main channel}[Bc][Bc][0.6]{Main channel} \psfrag{Wiretap channel}[Bc][Bc][0.6]{Wiretap channel} \includegraphics[width=0.7\linewidth]{./Figures/sistema1.eps} \caption{The system model of a wiretap channel consisting of two legitimate correspondents and one eavesdropper.} \label{sistema1} \end{figure} According to~\cite{Wyner}, the secrecy capacity is obtained as \begin{align}\label{eq:8} C_s&=\!\text{max}\left \{C_B-C_E,0 \right \} \nonumber \\ &=\!\text{max}\left \{\log_2\!\left (1\!+\!\frac{|h_{\mathrm{AB}}|^2P_{\mathrm{A}}}{N_{\mathrm{0}}} \!\right )\!-\!\log_2\!\left (1\!+\!\frac{|h_{\mathrm{AE}}|^2P_{\mathrm{A}}}{N_{\mathrm{0}}}\!\right ),0 \right \} \nonumber \\ &=\!\text{max}\left \{\log_2(1+\gamma_B)-\log_2(1+\gamma_E),0 \right \} \nonumber \\ &=\left\{ \begin{array}{ll} \hspace*{1mm} \log_2\left ( \frac{1+\gamma_B}{1+\gamma_E} \right ), \quad \text{if} \enspace \gamma_B>\gamma_E\\ \hspace*{1mm} 0, \hspace{6em} \text{if} \enspace \gamma_B \leq \gamma_E, \end{array} \right. \vspace{2mm} \end{align} where $P_{\mathrm{A}}$ is the transmit power at Alice, $N_{\mathrm{0}}$ is the average noise power, and $C_B$ and $C_E$ are the capacities of the main and wiretap channels, respectively. Hence, the secrecy outage probability (SOP) is defined as the probability that the instantaneous secrecy capacity falls below a target secrecy rate threshold $R_{th}$~\cite{Wyner}, thus being given by \begin{align}\label{eq:sop} \text{SOP}&=\Pr\left \{ C_s\left ( \gamma_B,\gamma_E \right ) < R_{th} \right \} =\Pr\left \{ \left ( \frac{1+\gamma_B}{1+\gamma_E} \right ) < 2^{R_{th}} \right \}\nonumber \\ &\stackrel{(a)}{\geq} \Pr\left \{ \frac{\gamma_B}{\gamma_E}< 2^{R_{th}}\buildrel \Delta \over = \tau_1 \right \}\nonumber \\ &=F_{X_1}(\tau_1) \end{align} where $X_1=\gamma_B/\gamma_E$ and $F_{X_1}(\cdot)$ is a CDF given as in~\eqref{eq:CDFRATIO}. In step $(a)$, we have considered a lower bound of the SOP, which results very tight, as shall be shown in Section \ref{sect:numericals}. It is noteworthy that, our formulation for the lower bound of the SOP is valid for non-constrained arbitrary values of the fading parameters corresponding to the main channel and eavesdropper channel (i.e., $\alpha_i$ and $\mu_i$, for $i$ $\in \left \{ B,E \right \}$). This is in contrast to previous works~\cite{Lei,Kong} related to the performance analysis of physical layer security over single-input single-output (SISO) $\alpha$-$\mu$ fading channels, where constraints on the fading parameter values were considered (more specifically, $\alpha_B{=}\alpha_E$ in~\cite{Kong}, and $\alpha_B$, $\alpha_E$ must be co-prime integers in~\cite{Lei}). Therefore, our expressions are a generalization of the aforementioned approaches. \subsection{Outage Performance of Cognitive Relaying Networks} Cognitive relaying networks is another application where the statistics of the ratio of RVs appear. In particular, consider the cognitive relaying network depicted in Fig.~\ref{sistema2}. 
\begin{figure}[!b] \centering \psfrag{P}[Bc][Bc][0.8]{P} \psfrag{S}[Bc][Bc][0.8]{S} \psfrag{R}[Bc][Bc][0.8]{R} \psfrag{D}[Bc][Bc][0.8]{D} \psfrag{V}[Bc][Bc][0.8][30]{$h_{\mathrm{SP}}$} \psfrag{Z}[Bc][Bc][0.8][0]{$h_{\mathrm{SR}}$} \psfrag{W}[Bc][Bc][0.8][0]{$h_{\mathrm{RD}}$} \psfrag{U}[Bc][Bc][0.8][40]{$h_{\mathrm{RP}}$} \includegraphics[width=0.7\linewidth]{./Figures/sistema2.eps} \caption{System model of an underlay cognitive relaying network. The data links are represented by solid lines, while the interference links are represented by dashed lines.} \label{sistema2} \end{figure} In this system, a secondary network consisting of one secondary source (S), one secondary decode-and-forward (DF) relay (R), and one secondary destination (D) operates by sharing the spectrum belonging to a primary network. Thus, the secondary transmissions are subject to power constraints imposed by a primary destination (P) in an underlay spectrum-sharing scenario, so that a predetermined level of interference temperature at the primary receiver is satisfied~\cite{art:haykin}. Moreover, the direct link is neglected, as it is considered to be extremely attenuated, and all terminals are assumed to be equipped with a single antenna. The channel coefficients of the data links ${\mathrm{S}}\rightarrow {\mathrm{R}}$ and ${\mathrm{R}}\rightarrow {\mathrm{D}}$ are denoted by $h_{\mathrm{SR}}$ and $h_{\mathrm{RD}}$, respectively, and the channel coefficients of the interference links ${\mathrm{S}}\rightarrow {\mathrm{P}}$ and ${\mathrm{R}}\rightarrow {\mathrm{P}}$ are denoted by $h_{\mathrm{SP}}$ and $h_{\mathrm{RP}}$, respectively. Thus, the corresponding channel power gains $g_{i,j}=\abs{h_{i,j}}^{2},$ with $i \in \left \{ {\mathrm{R}}, {\mathrm{S}} \right \}$ and $j\in \left \{ {\mathrm{D}}, {\mathrm{P}}, {\mathrm{R}} \right \} $, are subject to block $\alpha$-$\mu$ fading. The maximum interference power tolerated at ${\mathrm{P}}$, coming from the cognitive network, is denoted by $I$. It is assumed that the transmit powers at the secondary source and relay are $P_{\mathrm{S}}{=}I/g_{\mathrm{SP}}$ and $P_R{=}I/g_{\mathrm{RP}}$, respectively. In addition, $\overline{\gamma}_I \buildrel \Delta \over = I/N_{\mathrm{0}}$ is defined as the maximum interference-to-noise ratio tolerated at the primary destination. Then, the instantaneous received SNRs at the secondary relay and the secondary destination are given, respectively, by \begin{align} \gamma_{\mathrm{SR}}&=\frac{g_{\mathrm{SR}} P_{\mathrm{S}}}{N_{\mathrm{0}}}=\frac{g_{\mathrm{SR}} I}{g_{\mathrm{SP}} N_{\mathrm{0}}}=\frac{g_{\mathrm{SR}} \overline{\gamma}_I}{g_{\mathrm{SP}} },\\ \gamma_{\mathrm{RD}} & =\frac{g_{\mathrm{RD}} P_{\mathrm{R}}}{N_{\mathrm{0}}}=\frac{g_{\mathrm{RD}} I}{g_{\mathrm{RP}} N_{\mathrm{0}}}=\frac{g_{\mathrm{RD}} \overline{\gamma}_I}{g_{\mathrm{RP}}}.
\end{align} The outage probability of the secondary network for the DF relaying protocol can be written as~\cite{edgar} \begin{align} \nonumber P_{\mathrm{out}}=&\Pr \left(\min\bigg\{\gamma_{\mathrm{SR}},\gamma_{\mathrm{RD}}\bigg\}<2^{2\mathcal{R}}-1 \buildrel \Delta \over = \tau_2\right)\\ =& F_{X_2}\left(\tau_2\right) +F_{X_3}\left(\tau_2\right)- F_{X_2}\left(\tau_2\right) F_{X_3}\left(\tau_2\right), \end{align} where $F_{X_2}(\cdot)$ and $F_{X_3}(\cdot)$ are the CDFs for the RVs $X_2{=}\gamma_{\mathrm{SR}}$ and $X_3{=}\gamma_{\mathrm{RD}}$, respectively, which can be evaluated as in~\eqref{eq:CDFRATIO}, $\mathcal{R}$ is the target rate and $\tau_2$ is the target SNR threshold. \subsection{Outage Performance of Full-Duplex Relaying Networks} Another application where the statistics of the ratio of independent squared $\alpha$-$\mu$ random variables are considered is in FD relaying systems~\cite{art:osorio,art:olivo}. Let us consider the system depicted in Fig.~\ref{sistema3}, which illustrates a two-hop FD relaying network composed of three nodes: one single-antenna source (S), one single-antenna destination (D), and one DF relay (R) equipped with one transmit antenna and one receive antenna to operate in full-duplex mode. \begin{figure}[!t] \centering \psfrag{S}[Bc][Bc][0.8]{S} \psfrag{R}[Bc][Bc][0.8]{R} \psfrag{D}[Bc][Bc][0.8]{D} \psfrag{X}[Bc][Bc][0.8][0]{$h_{\mathrm{SR}}$} \psfrag{Y}[Bc][Bc][0.8][0]{$h_{\mathrm{RD}}$} \psfrag{U}[Bc][Bc][0.8]{$h_{\mathrm{RR}}$} \includegraphics[width=0.7\linewidth]{./Figures/sistema3.eps} \caption{System model of a three-node FD relaying network (data link: solid line; interference link: dashed line). } \label{sistema3} \end{figure} In this system, it is assumed that the direct link is highly attenuated, thus being neglected. Moreover, all channels in this network are subject to block $\alpha$-$\mu$ fading. Thus, $\gamma_{\mathrm{SR}}=|h_{\mathrm{SR}}|^2\gamma_P/2$ and $\gamma_{\mathrm{RD}}=|h_{\mathrm{RD}}|^2\gamma_P/2$ are the instantaneous received SNRs for the first- and second-hop relaying links, respectively, where $h_{\mathrm{SR}}$ and $h_{\mathrm{RD}}$ are the corresponding channel coefficients, and $\gamma_P$ is the transmit system SNR. Moreover, due to imperfect stages of interference cancellation at the FD relay, a residual self-interference (RSI) is considered, which can be modeled as a Rayleigh fading loop back channel~\cite{art:olivo,art:osorio}, with channel coefficient $h_{\mathrm{RR}}{\sim}\mathcal{CN}\left(0,\sigma^2\right)$, such that $\gamma_{\mathrm{RR}}=|h_{\mathrm{RR}}|^2\gamma_P/2$ is the instantaneous received SNR. Considering the DF relaying protocol, the outage probability of the system under study can be formulated as~\cite{art:olivo} \begin{align} \nonumber P_{\mathrm{out}} = &\Pr \left(\min\bigg\{\dfrac{\gamma_{\mathrm{SR}}}{\gamma_{\mathrm{RR}}+1},\gamma_{\mathrm{RD}}\bigg\}<2^{\mathcal{R}}-1 \buildrel \Delta \over = \tau_3\right)\\ \approx & F_{X_4}\left(\tau_3\right) +F_{X_5}\left(\tau_3\right)- F_{X_4}\left(\tau_3\right) F_{X_5}\left(\tau_3\right), \end{align} where, by considering an interference-limited scenario, such that $\gamma_{\mathrm{SR}}/(\gamma_{\mathrm{RR}}+1)\approx \gamma_{\mathrm{SR}}/\gamma_{\mathrm{RR}}$, $F_{X_4}(\cdot)$ is the CDF of the RV $X_4={\gamma_{\mathrm{SR}}}/{\gamma_{\mathrm{RR}}}$ and $F_{X_5}(\cdot)$ is the CDF of the RV $\gamma_{\mathrm{RD}}$, both of which being straightforwardly evaluated as in~\eqref{eq:CDFRATIO}. 
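Before presenting the numerical results, we note that the outage expressions above can be readily checked numerically. The following minimal Python sketch, using the fading parameters of Case~10 of Section~\ref{sect:numericals} and $\bar{\Upsilon}_{RR}=-10$~dB as an example, compares a direct Monte Carlo estimate of the FD outage probability with its interference-limited approximation $F_{X_4}(\tau_3)+F_{X_5}(\tau_3)-F_{X_4}(\tau_3)F_{X_5}(\tau_3)$; here the CDFs are estimated empirically, although they could equally be computed from~\eqref{eq:CDFRATIO} and~\eqref{eq:7}.
\begin{verbatim}
# Sketch: Monte Carlo check of the FD-relaying outage and of its
# interference-limited approximation F4 + F5 - F4*F5; example parameters only.
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
N, gP, R = 500_000, 100.0, 1.0                 # samples, transmit SNR (linear), rate
tau3 = 2.0**R - 1.0                            # target SNR threshold

def gain(alpha, mu, mean, n):                  # squared alpha-mu gain with given mean
    return mean * rng.gamma(mu, 1.0, n)**(2.0/alpha) * gamma(mu) / gamma(mu + 2.0/alpha)

g_sr = gain(1.8, 0.8, 1.0, N)                  # S -> R link
g_rd = gain(2.1, 0.6, 1.0, N)                  # R -> D link
g_rr = gain(2.2, 0.7, 0.1, N)                  # RSI loop-back link, -10 dB mean gain

snr_sr, snr_rd, snr_rr = gP*g_sr/2, gP*g_rd/2, gP*g_rr/2

exact = np.mean(np.minimum(snr_sr/(snr_rr + 1.0), snr_rd) < tau3)
F4, F5 = np.mean(snr_sr/snr_rr < tau3), np.mean(snr_rd < tau3)
print("exact outage        :", exact)
print("approx F4+F5-F4*F5  :", F4 + F5 - F4*F5)
\end{verbatim}
In the interference-limited regime, the two printed values are expected to be close, in line with the approximation adopted above.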
\section{Numerical results and discussions} \label{sect:numericals} In this section, we validate the accuracy of the proposed expressions for some representative cases via Monte Carlo simulations. Figs.~\ref{PDFV2} and~\ref{PCFV2} respectively show the PDF and CDF obtained for the ratio of two squared $\alpha$-$\mu$ RVs, considering different values of the fading parameters. In both figures, the values of the fading parameters are chosen to show the wide range of shapes that the distribution of the ratio can assume. Fig.~\ref{PDFV2} illustrates the resulting PDF for different values of $\left \{\mu_1, \mu_2 \right \}$, with $\left \{\alpha_1, \alpha_2 \right \}=\left \{1.5, 1.1 \right \}$ and $\bar{\Upsilon}_1=\bar{\Upsilon}_2= 0$ dB. It can be observed that our expressions perfectly match the Monte Carlo simulations, thus validating our results. Fig.~\ref{PCFV2} shows the resulting CDF for distinct values of $\left \{\alpha_1, \alpha_2 \right \}$, with $\left \{\mu_1, \mu_2 \right \}=\left \{3.5, 2.8 \right \}$ and $\bar{\Upsilon}_1=\bar{\Upsilon}_2= 0$~dB. Once again, it is observed that our expressions perfectly match the Monte Carlo simulations. It can also be noticed from the cases presented in those figures that our expressions allow non-constrained arbitrary values of the fading parameters. \begin{figure}[H] \centering \includegraphics[width=0.9\columnwidth]{Figures/RatioPDFV2.eps} \vspace{-4mm} \caption{PDF of the ratio of two squared $\alpha$-$\mu$ RVs for different values of $\left \{ \mu_1, \mu_2\right \}$, with $\left \{\alpha_1, \alpha_2 \right \} = \left \{ 1.5,1.1 \right \} $ and $\bar{\Upsilon}_1=\bar{\Upsilon}_2= 0$ dB. } \label{PDFV2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.9\columnwidth]{Figures/RatioCDFV2.eps} \vspace{-4mm} \caption{CDF of the ratio of two squared $\alpha$-$\mu$ RVs for different values of $\left \{ \alpha_1, \alpha_2\right \}$, with $\left \{\mu_1, \mu_2 \right \} = \left \{ 3.5, 2.8 \right \} $ and $\bar{\Upsilon}_1=\bar{\Upsilon}_2= 0$ dB.} \label{PCFV2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.9\columnwidth]{Figures/SOPV2.eps} \vspace{-4mm} \caption{Secrecy outage probability versus $\bar{\Upsilon}_\mathrm{B}$ for different combinations of $\left \{ \alpha_\mathrm{B}, \mu_\mathrm{B}, \alpha_\mathrm{E}, \mu_\mathrm{E} \right \} $, with $\bar{\Upsilon}_\mathrm{E}=1$ dB and $\tau_1= 1$.} \label{SOPV2} \end{figure} Fig.~\ref{SOPV2} shows the SOP versus $\bar{\Upsilon}_\mathrm{B}$ for different combinations of fading parameters, with $\bar{\Upsilon}_\mathrm{E}=1$~dB and $\tau_1= 0$~dB. More specifically, we set the fading parameters~to the following cases: \begin{itemize} \item \textit{Case 1:} Nakagami-$m$\\$\left \{ \alpha_\mathrm{B}, \mu_\mathrm{B}\right \}=\left \{2, 4.5\right \}$, $\left \{ \alpha_\mathrm{E},\mu_\mathrm{E} \right \}=\left \{2, 0.6\right \}$. \item \textit{Case 2:} Weibull \\ $\left \{\alpha_\mathrm{B},\mu_\mathrm{B}\right \}=\left \{3.9, 1\right \}$, $\left \{\alpha_\mathrm{E},\mu_\mathrm{E}\right \}=\left \{1.3, 1\right \}$. \item \textit{Case 3:} Rayleigh \\ $\left \{\alpha_\mathrm{B},\mu_\mathrm{B}\right \}=\left \{2, 1\right \}$, $\left \{\alpha_\mathrm{E},\mu_\mathrm{E}\right \}=\left \{2, 1\right \}$. \item \textit{Case 4:} Weibull\\ $\left \{ \alpha_\mathrm{B},\mu_\mathrm{B}\right \}=\left \{ 1.2, 1\right \}$, $\left \{ \alpha_\mathrm{E},\mu_\mathrm{E}\right \}=\left \{4.5, 1\right \}$.
\item \textit{Case 5:}Nakagami-$m$ \\ $\left \{\alpha_\mathrm{B},\mu_\mathrm{B}\right \}=\left \{ 2, 0.5\right \}$, $\left \{\alpha_\mathrm{E},\mu_\mathrm{E}\right \}=\left \{ 2, 3.1\right \}$. \end{itemize} For all cases, it can be noticed that the proposed lower bound is very tight to the exact SOP obtained by Monte Carlo simulations. Also, it is observed that, in general, the secrecy performance worsens as $\alpha_\mathrm{B}$, $\mu_\mathrm{B}$ decrease and $\alpha_\mathrm{E}$, $\mu_\mathrm{E}$ increase (see, e.g., cases~2, 3, and 4), which are the fading parameters of the main and eavesdropper channels, respectively. In contrast, note that the secrecy performance improves as $\alpha_\mathrm{B}$, $\mu_\mathrm{B}$ increase and $\alpha_\mathrm{E}$, $\mu_\mathrm{E}$ decrease (i.e., the eavesdropper channel is in a worse channel condition). Importantly, this fact implies that the fading conditions can be exploited to prevent the information from being overheard by an eavesdropper. \begin{figure}[H] \centering \includegraphics[width=0.9\columnwidth]{Figures/CRNVersion2.eps} \vspace{-4mm} \caption{ Outage performance versus interference power constraint $\overline{\gamma}_I$ of a cognitive relaying network, for different values of fading parameters. Notation: $\mathrm{Ry}\rightarrow $ Rayleigh, $\mathrm{Wb}^+\rightarrow $ severe Weibull, $\mathrm{Wb}^-\rightarrow $ weak Weibull, $\mathrm{Nak}^+\rightarrow $ severe Nakagami-$m$, $\mathrm{Nak}^-\rightarrow $ weak Nakagami-$m$. } \label{CRNV2} \end{figure} Fig.~\ref{CRNV2} illustrates the influence of the fading parameters on the outage performance of a cognitive relaying network. This figure shows the outage probability versus the maximum interference power constraint at the primary receiver, $\overline{\gamma}_I$, for $\bar{\Upsilon}_{\mathrm{SP}}=\bar{\Upsilon}_{\mathrm{SR}}=\bar{\Upsilon}_{\mathrm{RP}}=\bar{\Upsilon}_{\mathrm{RD}}= 1$ dB and a target SNR threshold $\tau_2=0$~dB. For these scenarios, the fading parameters are set to the next cases: \begin{itemize} \item \textit{Case 6:} $\left \{ \alpha_\mathrm{SR}, \mu_\mathrm{SR}\right \}=\left \{4.2, 1\right \}$,$\left \{ \alpha_\mathrm{SP},\mu_\mathrm{SP} \right \}=\left \{2, 4.1\right \}$, $\left \{ \alpha_\mathrm{RD},\mu_\mathrm{RD} \right \}=\left \{3.9, 1\right \}$, $\left \{ \alpha_\mathrm{RP},\mu_\mathrm{RP} \right \}=\left \{2, 3.8\right \}$. \item \textit{Case 7:} $\left \{ \alpha_\mathrm{SR},\mu_\mathrm{SR}\right \}=\left \{2, 1\right \}$, $\left \{ \alpha_\mathrm{SP},\mu_\mathrm{SP}\right \}=\left \{2, 1\right \}$, $\left \{ \alpha_\mathrm{RD},\mu_\mathrm{RD} \right \}=\left \{2, 1 \right \}$, $\left \{ \alpha_\mathrm{RP},\mu_\mathrm{RP} \right \}=\left \{ 2, 1\right \}$. \item \textit{Case 8:} $\left \{ \alpha_\mathrm{SR},\mu_\mathrm{SR}\right \}=\left \{2, 0.6\right \}$, $\left \{ \alpha_\mathrm{SP},\mu_\mathrm{SP}\right \}=\left \{ 0.8, 1\right \}$, $\left \{ \alpha_\mathrm{RD},\mu_\mathrm{RD} \right \}=\left \{2, 0.9\right \}$, $\left \{ \alpha_\mathrm{RP},\mu_\mathrm{RP} \right \}=\left \{0.7, 1\right \}$. \item \textit{Case 9:} $\left \{ \alpha_\mathrm{SR},\mu_\mathrm{SR}\right \}=\left \{0.6, 1\right \}$, $\left \{ \alpha_\mathrm{SP},\mu_\mathrm{SP}\right \}=\left \{2, 4.2\right \}$, $\left \{ \alpha_\mathrm{RD},\mu_\mathrm{RD} \right \}=\left \{4.1, 1\right \}$, $\left \{ \alpha_\mathrm{RP},\mu_\mathrm{RP} \right \}=\left \{2, 0.8\right \}$. \end{itemize} It can be observed from all the curves that our analytical expression matches the Monte Carlo simulations. 
Moreover, note that, as the fading parameters increase, i.e., for better channel conditions (see, e.g., Cases~6 and~7), the outage performance improves, as expected. In the opposite scenario, i.e., for signals with lower values of the fading parameters (see, e.g., Cases~8 and~9), the outage performance worsens, due to poor channel conditions. In addition, the outage behavior improves as $\overline{\gamma}_I$ increases, as expected. \begin{figure}[H] \centering \includegraphics[width=0.9\columnwidth]{Figures/FDVersion2.eps} \vspace{-4mm} \caption{ Outage performance versus $\overline{\gamma}_P$ of a FD relaying network, considering distinct values of the average channel power gain of the RSI link, $\bar{\Upsilon}_{RR}$. Two cases are considered: severe fading (Case 10) and weak fading (Case 11). } \label{FDV2} \end{figure} Fig.~\ref{FDV2} shows the outage performance of a FD relaying network versus the transmit system SNR for $\bar{\Upsilon}_{SR}=\bar{\Upsilon}_{RD}=0$ dB, $\tau_3= 0$~dB, and different values of the average channel power gain of the RSI link, namely $\bar{\Upsilon}_{RR}=-10, -20, -30$ dB. For these scenarios, the values of the fading parameters are set as in the following cases: \begin{itemize} \item \textit{Case~10:} Severe fading \\ $\left \{ \alpha_{\mathrm{SR}},\mu_{\mathrm{SR}}\right \}=\left \{1.8, 0.8\right \}$, $\left \{ \alpha_{\mathrm{RR}},\mu_{\mathrm{RR}} \right \}=\left \{2.2, 0.7\right \}$, $\left \{ \alpha_{\mathrm{RD}},\mu_{\mathrm{RD}} \right \}=\left \{2.1, 0.6\right \}$. \item \textit{Case~11:} Weak fading \\ $\left \{ \alpha_{\mathrm{SR}},\mu_{\mathrm{SR}}\right \}=\left \{1.9, 2.3\right \}$, $\left \{ \alpha_{\mathrm{RR}},\mu_{\mathrm{RR}} \right \}=\left \{2.1, 2.8\right \}$, $\left \{ \alpha_{\mathrm{RD}},\mu_{\mathrm{RD}} \right \}=\left \{2.2, 2.9\right \}$. \end{itemize} In a similar manner, note that our analytical results are highly accurate with respect to the Monte Carlo simulations, thus confirming the correctness of our derivations. We can also observe that the average channel power gain of the RSI link affects the system performance differently depending on the fading parameters of the channel. For instance, when dealing with weak fading (e.g., Case~11), the outage performance shows significant improvements as the level of RSI decreases. On the other hand, for a severe fading case (e.g., Case~10), the improvement in performance is not significant even for lower values of the RSI level. Also, a performance floor is observed in the medium-to-high SNR regime. This behavior is caused by the RSI at the FD relay. In this context, it is worth mentioning that self-interference mitigation techniques play a pivotal role in exploiting the potential benefits of FD relaying, mainly in the medium-to-high SNR region. Additionally, it can be noticed that the consideration of a more general fading distribution in the FD case leads to a more comprehensive analysis of different scenarios according to the severity of fading. Also, interested readers can refer to~\cite{FD1,FD2} for further guidance on self-interference cancellation in FD relay systems. \section{Conclusions} In this paper, novel exact analytical expressions for the PDF, CDF, MGF, and higher order moments of the ratio of two squared $\alpha$-$\mu$ RVs were derived in terms of the Fox H-function. Importantly, these expressions, unlike those in previous related works, are valid for any values of the fading parameters $\alpha$ and $\mu$.
Additionally, a series representation for the formulations are also provided. Based on these results, analytical expressions for the statistics of the ratio of well-known distributions, such as Nakagami-$m$, Weibull, and Rayleigh, were also provided as byproducts. These novel statistics represent a useful tool to assess the performance of wireless communication schemes considering generalized fading-channel models with applicability in scenarios for next-generation wireless networks. For illustration purposes, we analyze three application uses by analyzing ($i$) the secrecy outage probability for PLS-based wireless networks, ($ii$) the outage performance for cognitive relaying networks, and ($iii$) the outage performance for FD relaying networks. The obtained analytical expressions were validated by Monte Carlo simulations. Finally, it is worthwhile to mention that the analytical results presented in this work can be evaluated in a straightforward and efficient manner through mathematical software packages. For this purpose, we have also provided an implementation of the Fox H-function. \appendices \section{ Proof of Proposition~\ref{prop:pdf}} \label{ap:statistics} Assuming that $\Upsilon_1$ and $\Upsilon_2$ are statistically independent, the PDF of $X$ can be obtained as~\cite{Leonardo} \begin{align}\label{eq:aneA1} f_X(x)&=\int_{0}^{\infty}y f_{\Upsilon_1}\left (x y \right )f_{\Upsilon_2}(y)dy. \end{align} Now, by substituting~\eqref{eq:6} into~\eqref{eq:aneA1}, it follows that \begin{align}\label{eq:aneA2} f_X(x)&=\frac{\alpha_1\alpha_2 x^{\frac{\alpha_1\mu_1}{2}-1}}{4\beta_2^{\frac{\alpha_2\mu_2}{2}} \beta_1^{\frac{\alpha_1\mu_1}{2}}\Gamma (\mu_2)\Gamma (\mu_1)} \nonumber \\ & \times \int_{0}^{\infty}y^{\frac{\alpha_2\mu_2}{2}+\frac{\alpha_1\mu_1}{2}-1}G_{0,1}^{1,0}\left[ \left ( \frac{x y}{\beta_1} \right )^{\frac{\alpha_1}{2}} \bigg| \begin{array}{c} 0\\ \end{array} \right] \nonumber \\ &\times G_{0,1}^{1,0}\left[ \left ( \frac{y}{\beta_2} \right )^{\frac{\alpha_2}{2}} \bigg| \begin{array}{c} 0\\ \end{array} \right]dy. \end{align} After some mathematical manipulations in~\eqref{eq:aneA2}, we have that \begin{align}\label{eq:aneA3} f_X(x)&=\frac{\alpha_1 x^{\frac{\alpha_1\mu_1}{2}-1}}{2\beta_2^{\frac{\alpha_2\mu_2}{2}} \beta_1^{\frac{\alpha_1\mu_1}{2}}\Gamma (\mu_2)\Gamma (\mu_1)} \underset{I_1}{\underbrace{\int_{0}^{\infty}w^{\mu_2+k\mu_1-1} }}\nonumber \\ & \underset{I_1}{\underbrace{\times G_{0,1}^{1,0}\left[\frac{w}{\beta_2^{\frac{\alpha_2}{2}}} \bigg| \begin{array}{c} 0\\ \end{array} \right]G_{0,1}^{1,0}\left[ \frac{w^{k}}{\left ( \frac{x}{\beta_1} \right )^{\frac{-\alpha_1}{2}} }\bigg| \begin{array}{c} 0\\ \end{array} \right]dw,}} \end{align} where $w{=}y^{\alpha_2/2}$ and recalling that $k{=}\frac{\alpha_1}{\alpha_2}$. Then, by using~\cite[Eq. (07.34.21.0009.01)]{Wolfram1}, $I_1$ in~\eqref{eq:aneA3} can be solved in a straightforward manner as \begin{align}\label{eq:aneA4} I_1&=\left (\frac{1}{\beta_2^{\frac{\alpha_2}{2}}}\right ) ^{-(\mu_2+k\mu_1)}\nonumber \\ &\times \mathrm{H}_{1,1}^{1,1}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_2}{2}}\bigg| \begin{array}{c} (1-\mu_2-k\mu_1,k) \\ (0,1)\\ \end{array} \right]. \end{align} Finally, by replacing $I_1$ into~\eqref{eq:aneA3}, we obtain the expression in~\eqref{pdfRatio}. 
On the other hand, the CDF of $X =\Upsilon_1/\Upsilon_2 $ can be formulated as \begin{align}\label{eq:cdfratios} F_X(x)&=\Pr\left \{ X \leq x \right \}\nonumber \\ &=\Pr\left \{\frac{\Upsilon_1}{\Upsilon_2}\leq x \right \}\nonumber \\ &=\Pr\left \{\Upsilon_1\leq x\Upsilon_2 \right \}\nonumber \\ &= \int_{0}^{\infty}F_{\Upsilon_1}\left (xy \right )f_{\Upsilon_2}(y)dy. \end{align} Then, by replacing~\eqref{eq:6} and~\eqref{eq:7} into~\eqref{eq:cdfratios}, we get \begin{align}\label{eq:30} F_X(x)&=\frac{\alpha_2x^{\frac{\mu_1\alpha_1}{2}}}{2\beta_2^{\frac{\alpha_2\mu_2}{2}}\beta_1^{\frac{\alpha_1\mu_1}{2}}\Gamma (\mu_2)\Gamma (\mu_1)} \nonumber \\ & \times \int_{0}^{\infty}y^{\frac{\alpha_2\mu_2}{2}+\frac{\alpha_1\mu_1}{2}-1}G_{0,1}^{1,0}\left[ \left ( \frac{y}{\beta_2} \right )^{\frac{\alpha_2}{2}} \bigg| \begin{array}{c} 0\\ \end{array} \right] \nonumber \\ &\times G_{1,2}^{1,1}\left[ \left ( \frac{x y}{\beta_1} \right ) ^{\frac{\alpha_1}{2}} \bigg| \begin{array}{c} 1-\mu_1 \\ 0,-\mu_1 \\ \end{array} \right]dy. \end{align} Here we proceed by following a similar procedure as in the derivation of the PDF of $X$. By replacing $w=y^{\alpha_Y/2}$ and $k=\frac{\alpha_1}{\alpha_2}$ into~\eqref{eq:30}, it follows that \begin{align}\label{eq:31} F_X(x)&=\frac{x^{\frac{\mu_1\alpha_1}{2}}}{\beta_2^{\frac{\alpha_2\mu_2}{2}}\beta_1^{\frac{\alpha_1\mu_1}{2}}\Gamma (\mu_2)\Gamma (\mu_1)} \underset{I_2}{\underbrace{\int_{0}^{\infty}w^{\mu_1k+\mu_2-1}}} \nonumber \\ &\underset{I_2}{\underbrace{ \times G_{0,1}^{1,0}\left[ \frac{w}{\beta_Y^{\frac{\alpha_Y}{2}}} \bigg| \begin{array}{c} 0\\ \end{array} \right] G_{1,2}^{1,1}\left[ \left ( \frac{x}{\beta_1} \right ) ^{\frac{\alpha_1}{2}} w^k\bigg| \begin{array}{c} 1-\mu_1 \\ 0,-\mu_X \\ \end{array} \right]dw.}} \end{align} Now, by using~\cite[Eq. (07.34.21.0009.01 )]{Wolfram1}, $I_2 $ in~\eqref{eq:31} can be solved in a straightforward manner as \begin{align}\label{eq:33} I_2&=\left (\frac{1}{\beta_2^{\frac{\alpha_2}{2}}}\right ) ^{-\left (\mu_1k+\mu_2\right )}\nonumber \\ &\times \mathrm{H}_{2,2}^{1,2}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (1-\mu_1,1),(1-\mu_1k-\mu_2,k) \\ (0,1) ,\hspace{0.5mm} (-\mu_1,1)\\ \end{array} \right]. \end{align} Finally, substituting~\eqref{eq:33} into~\eqref{eq:31}, a closed-form expression for the CDF of $X{=}\Upsilon_1/\Upsilon_2$ can be calculated as in~\eqref{eq:CDFRATIO}. Now, the MGF of $X=\Upsilon_1/\Upsilon_2$ can be obtained, by definition, as~\cite{mgf} \begin{align}\label{eq:mgf1} {\cal M}_\text{X}(s)& \buildrel \Delta \over = \mathbb{E}\left[ \text{e}^{-s X} \right ]=\int_{0}^{\infty }\exp\left (-sx \right )f_X(x)dx.\nonumber \\ \end{align} By replacing $f_X(x)$ given as in~\eqref{pdfRatio} into~\eqref{eq:mgf1}, we obtain \begin{align}\label{mgf2} {\cal M}_\text{X}(s)&=\frac{\alpha_{1} }{2 \Gamma (\mu_2)\Gamma (\mu_1)}\left ( \frac{\beta_2}{\beta_1} \right )^{\frac{\alpha_1\mu_1}{2}} \int_{0}^{\infty }x^{\frac{\alpha_1\mu_1}{2}-1} \nonumber \\ &\times e^{-s x H_{1,1}^{1,1}\left[\left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}}\bigg| \begin{array}{c} (1-\mu_2-k\mu_1,k)\\ (0,1)\\ \end{array} \right]dx. \end{align} Substituting the Fox H-function in~\eqref{mgf2} by its Mellin-Barnes type contour integral as in~\cite[Eq. 
(1.2)]{Fox}, interchanging the order of integrations, and performing some simplifications, we obtain \begin{align}\label{mgf3} {\cal M}_\text{X}(s)&=\frac{\alpha_{1} \left ( \frac{\beta_2}{\beta_1} \right )^{\frac{\alpha_1\mu_1}{2}} }{2 \Gamma (\mu_2)\Gamma (\mu_1)}\int_{0}^{\infty }x^{\frac{\alpha_1\mu_1}{2}-1} \exp\left (-sx \right ) \nonumber \\ & \times \frac{1}{2\pi \mathrm{i}}\int_{\mathcal{C}}^{ }\Gamma(z)\Gamma(\mu_2+k\mu_1-kz)\left [ \left ( \frac{x\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}} \right ]^{-z}dzdx \nonumber \\ &= \frac{\alpha_{1} \left ( \frac{\beta_2}{\beta_1} \right )^{\frac{\alpha_1\mu_1}{2}} }{4\pi \Gamma (\mu_2)\Gamma (\mu_1)\mathrm{i}}\int_{\mathcal{C}}^{ }\Gamma(z)\Gamma(\mu_2+k\mu_1-kz)\nonumber \\ &\times \left [ \left ( \frac{\beta_2}{\beta_1} \right )^{\frac{\alpha_1}{2}} \right ]^{-z}\int_{0}^{\infty }x^{\frac{\alpha_1\mu_1}{2}-\frac{z\alpha_1}{2}-1} \exp\left (-sx \right ) dxdz\nonumber \\ &=\frac{\alpha_{1} \left ( \frac{\beta_2}{\beta_1} \right )^{\frac{\alpha_1\mu_1}{2}} s^{-\frac{\alpha_1 \mu_1}{2}} }{2 \Gamma (\mu_2)\Gamma (\mu_1)} \underset{I_3}{\underbrace{ \frac{1}{2\pi \mathrm{i}}\int_{\mathcal{C}}^{ }\Gamma(z)\Gamma(\mu_2+k\mu_1-kz) }}\nonumber \\ &\underset{I_3}{\underbrace{\times\Gamma \left ( \frac{\mu_1 \alpha_1}{2}-\frac{\alpha_1 z}{2} \right ) \left [ \left ( \frac{\beta_2}{s\beta_1} \right )^{\frac{\alpha_1}{2}} \right ]^{-z}dz. }} \end{align} Then, by substituting $I_3$ in~\eqref{mgf3} by its corresponding Fox H-function with the use of~\cite[Eq. (1.1)]{Fox}, we obtain the expression in~\eqref{eq:MGF}, thus completing the proof. \section{ }\label{ap:mathimplementation} \begin{table}[H] \caption{MATHEMATICA\textregistered IMPLEMENTATION OF THE FOX-H FUNCTION} \vspace{-2mm} \centering \includegraphics[width=\columnwidth]{Figures/codigoHoxGeneral.eps} \label{Figura1} \end{table} \section{ }\label{ap:residues} Here, for illustration purposes, the Fox H-function in~\eqref{eq:CDFRATIO} is expressed as a sum of residues~\cite{Carlos}. To this end, we start by defining the Fox H-function as~\cite[Eq.~(1.1)]{Fox} \begin{align}\label{eq:12} \mathrm{H}_{p,q}^{m,n}\left [ z \right ]&=\mathrm{H}_{p,q}^{m,n}\left[z \bigg| \begin{array}{c} (a_1,A_1),\dots, (a_p,A_p) \\ (b_1,B_1),\dots, (b_q,B_q) \\ \end{array} \right]\nonumber \\ &= \frac{1}{2\pi \mathrm{i}}\int_{\mathcal{C}}^{ }\Theta(s)z^{-s}ds, \end{align} where $m$, $n$, $p$, $q$ $\in \field{N}^0$, with $0\leq n\leq p$, $1\leq m\leq q$, $z\in\mathbb{C}\backslash\{0\}$. Here \begin{multline}\label{eq:13} \Theta(s)=\frac{\left \{ \prod_{j=1}^{m}\Gamma\left ( b_j+B_js \right ) \right \}}{\left \{\prod_{j=m+1}^{q}\Gamma\left (1-b_j-B_js \right ) \right \}}\\ \times \frac{\left \{ \prod_{j=1}^{n}\Gamma\left (1-a_j-A_js \right ) \right \}}{\left \{\prod_{j=n+1}^{p}\Gamma\left (a_j+A_js \right ) \right \}}. \end{multline} An empty product is always interpreted as unity, $A_i, B_j \in \mathbb{R}^+$, $a_i, b_j \in \mathbb{C}$, $i=1,\dots,p$; $j=1,\dots,q$. In addition, $\mathcal{C}=\left (c-i\infty, c+i\infty \right )$ is a contour of integration separating the poles of $\Gamma(1-a_j-A_js)$, $j=1,\cdots,n$ from those of $\Gamma(b_j+B_js)$, $j=1,\cdots,m$. On the other hand, the contour integral in~\eqref{eq:12} can be evaluated by the sum-of-residues technique, applied at all poles of $\Theta(s)$~\cite{Fox}. 
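Before turning to the residue evaluation, note that~\eqref{eq:12} can also be computed by direct quadrature of the Mellin--Barnes integral along a truncated vertical contour $\mathrm{Re}(s)=c$. The sketch below is a simple numerical check, independent of the MATHEMATICA implementation referenced above; the abscissa $c$, the truncation length $T$, and the parameter values in the example are assumptions, with $c$ chosen to separate the two groups of poles:
\begin{verbatim}
# Sketch: numerical evaluation of the Fox H-function by quadrature of the
# Mellin-Barnes integral along the truncated vertical contour Re(s) = c.
# c and T are assumptions; c must separate the poles of Gamma(1-a_j-A_j*s)
# from those of Gamma(b_j+B_j*s).
import mpmath as mp

def theta(s, a, A, b, B, m, n):
    num = mp.mpf(1)
    for j in range(m):
        num *= mp.gamma(b[j] + B[j]*s)
    for j in range(n):
        num *= mp.gamma(1 - a[j] - A[j]*s)
    den = mp.mpf(1)
    for j in range(m, len(b)):
        den *= mp.gamma(1 - b[j] - B[j]*s)
    for j in range(n, len(a)):
        den *= mp.gamma(a[j] + A[j]*s)
    return num / den

def fox_h(z, a, A, b, B, m, n, c=0.5, T=60):
    integrand = lambda t: theta(c + 1j*t, a, A, b, B, m, n) * mp.power(z, -(c + 1j*t))
    return mp.quad(integrand, [-T, T]) / (2 * mp.pi)

# Example: the H^{1,1}_{1,1} kernel of the ratio PDF (illustrative parameters)
mu1, mu2, k = 1.5, 2.0, 0.8
val = fox_h(1.2, a=[1 - mu2 - k*mu1], A=[k], b=[0], B=[1], m=1, n=1, c=0.3)
print(val.real)
\end{verbatim}
We now return to the residue evaluation of~\eqref{eq:12}.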
Hence, \begin{align}\label{eq:residuos} \frac{1}{2\pi \mathrm{i}}\!\int_{\mathcal{C}}^{}\!\Theta(s)z^{\!-\!s}ds\!=\!\!\sum_{h=0}^{\infty}\!\lim_{s \to \pm \chi(h)} \left (s\pm\chi(h) \right )\Theta(s)z^{-s}, \end{align} where $\chi(h)$ is a specific pole of $\Theta(s)$. Now, using~\eqref{eq:12} and~\eqref{eq:13}, $\mathrm{H}_2$ in~\eqref{eq:CDFRATIO} can be rewritten as \begin{equation}\label{eq:14} \mathrm{H}_2=\frac{1}{2\pi \mathrm{i}} \int_{\mathcal{C}}^{ }\frac{\Gamma(s)\Gamma\left ( \mu_1-s \right )\Gamma(k\mu_1+\mu_2-k s)z^{-s}ds}{\Gamma(1+\mu_1-s)}, \end{equation} where the suitable contour $\mathcal{C}$ separates all the poles of $\Gamma(s)$ to the left from those of $\Gamma\left ( \mu_1-s \right )$ and $\Gamma(k\mu_1+\mu_2-k s)$ to the right. Then, we can evaluate~\eqref{eq:14} as the sum of residues, as follows \begin{align}\label{eq:15} \mathrm{H}_2= S_1+S_2, \end{align} where we have split the analysis of the Fox H-function given in~\eqref{eq:14} into two sums of residues\footnote{It is worth mentioning that $S_1$ corresponds to the sum of residues with respect to the pole of $\Gamma(s)$. On the other hand, $S_2$ corresponds to the sum of residues regarding the poles of $\Gamma\left ( \mu_1-s \right )$ and $\Gamma(k\mu_1+\mu_2-k s)$.}, according to the following ranges of values of $k$: $\left (i\right)$ $\chi(h)=-h$, for $k\leq 1$; $\left (ii\right)$ $\chi(h)=\mu_1+h$ and $\chi(h)=\frac{k\mu_1+\mu_2+h}{k}$, for $k\geq 1$. Now, by using~\eqref{eq:residuos} and the condition for $k\leq 1$ into~\eqref{eq:14}, the term $S_1$ can be formulated as \begin{align}\label{eq:S1Residuo} S_1&=\sum_{h=0}^{\infty}\lim_{s \to -h } \frac{\Gamma(s)\Gamma\left ( \mu_1-s \right )\Gamma(k\mu_1+\mu_2-k s)}{\left ( s+h \right )^{-1}\Gamma(1+\mu_1-s) z^{s}}\nonumber \\ &=\sum_{h=0}^{\infty}\frac{z^{h}\Gamma\left (k(h+\mu_1)+\mu_2\right )}{\left (-1\right )^h (h+\mu_1) \Gamma\left ( 1+h \right )}. \end{align} Likewise, by using~\eqref{eq:residuos} and the condition for $k\geq 1$ into~\eqref{eq:14}, the term $S_2$ can be expressed as. \begin{align}\label{eq:S2Residuo} S_2&=- \sum_{h=0}^{\infty}\lim_{s \to h+\mu_1} \frac{\Gamma(s)\Gamma\left ( \mu_1-s \right )\Gamma(k\mu_1+\mu_2-k s)}{\left ( s-h-\mu_1 \right )^{-1}\Gamma(1+\mu_1-s)z^{s}} \nonumber \\ &- \sum_{h=0}^{\infty}\lim_{s \to \frac{k\mu_1+\mu_2+h}{k} } \frac{\Gamma(s)\Gamma\left ( \mu_1-s \right )\Gamma(k\mu_1+\mu_2-k s)}{\left ( s-\frac{k\mu_1+\mu_2+h}{k} \right )^{-1}\Gamma(1+\mu_1-s)z^{s}} \nonumber \\ &= \sum_{h=0}^{\infty}\frac{z^{-h-\mu_1} \Gamma\left (h+\mu_1\right )\Gamma\left (-hk+\mu_2\right )}{\left (-1\right )^{h-2} \Gamma \left ( 1-h \right )h!}\nonumber \\ &+ \sum_{h=0}^{\infty}\frac{z^{-\frac{h}{k}-\mu_1-\frac{\mu_2}{k}}\Gamma\left (\frac{-h-\mu_2}{k}\right )\Gamma\left (\frac{h+k\mu_1+\mu_2}{k} \right )}{\left (-1\right )^{h-2} \Gamma \left ( \frac{-h+k-\mu_2}{k} \right ) kh!}. \end{align} By following a similar procedure as in the solution for $\mathrm{H}_2$, the series representation for $\mathrm{H}_1$ and $\mathrm{H}_3$ can be obtained as in~\eqref{eq:FoxbyResidues1} and~\eqref{eq:FoxbyResidues3}, respectively. \section{ Proof of Proposition~\ref{prop:moments}} \label{ap:momentinv} Let $Y_i$ be the inverse of $R_i$, with PDF given by~\cite[Eq.~(4)]{inversePDF} \begin{equation}\label{eq:inversePDF} f_{Y_i}(y)=\frac{\alpha_i \hat{r_i}^{\mu_i \alpha_i} y^{-1-\alpha_i\mu_i}}{\mu_i^{\mu_i} \Gamma (\mu_i)}\exp\left(-\frac{\hat{r_i}^{\alpha_i}}{\mu_i y^{\alpha_i}} \right ). 
\end{equation} From~\eqref{eq:inversePDF}, the $n$th moment $\mathbb{E}\left [ Y_i^n \right ]$ can be expressed as \begin{equation}\label{eq:inversemoments} \mathbb{E}\left [ Y_i^n \right ]= \hat{r_i}^{n} \frac{ \Gamma\left ( \mu_i-n/\alpha_i \right )}{\mu_i^{n/\alpha_i} \Gamma (\mu_i)}, \hspace{2mm} n<\mu_i \alpha_i. \end{equation} Next, substituting $\mathbb{E}\left [\Upsilon_1^{n}\right ]=\mathbb{E}\left [R_1^{2n}\right ]$ by~\eqref{eq:moments}, and $\mathbb{E}\left [Z^n\right ]=\mathbb{E}\left [Y_2^{2n}\right ]$ by~\eqref{eq:inversemoments} into $\mathbb{E}\left [X^n\right ]=\mathbb{E}\left [\Upsilon_1^n\right ] \mathbb{E}\left [Z^{n}\right ]$, we obtain~\eqref{eq:HigherMoments}, thus completing the proof.
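The moment expression above can be verified numerically. The following minimal sketch (illustrative parameter values only) integrates the PDF in~\eqref{eq:inversePDF} directly and compares the result with~\eqref{eq:inversemoments}:
\begin{verbatim}
# Sketch: numerical check of the n-th moment of the inverse alpha-mu variate
# by direct integration of its PDF.  Parameter values are illustrative;
# the moment exists only for n < mu * alpha.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, mu, r_hat, n = 2.3, 1.9, 1.4, 2       # assumed values, n < mu*alpha

def f_Y(y):                                   # the inverse alpha-mu PDF above
    return (alpha * r_hat**(mu*alpha) * y**(-1 - alpha*mu) / (mu**mu * gamma(mu))
            * np.exp(-r_hat**alpha / (mu * y**alpha)))

numeric = quad(lambda y: y**n * f_Y(y), 0, np.inf)[0]
closed  = r_hat**n * gamma(mu - n/alpha) / (mu**(n/alpha) * gamma(mu))
print(numeric, closed)                        # the two values should agree
\end{verbatim}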
\section{Introduction} Double-lined eclipsing binaries provide a unique opportunity to test various theoretical aspects of modern astrophysics. Their absolute parameters of masses and radii can be determined with outstanding accuracy, often with errors of 1\% \citep{Torres2010}. Because of the precise estimates of stellar parameters, such systems are used as benchmarks for testing the theory of stellar evolution. Of particular interest are binary systems with pulsating components as they can provide independent constraints on parameters of the model and theory. The $\delta$\,Scuti ($\delta$\,Sct) stars are the intermediate mass stars, typically in the range of 1.5 - 2.5\mass, and with spectral types A0--F5 \citep[e.g.][]{Rodriguez2000,Aerts2010}. They are located in the Hertzsprung-Russell (HR) diagram, in the lower part of the classical instability strip at the intersection with the main sequence \citep[e.g.][]{Dupret2005,Liakos2017}. The pulsations of $\delta$ Sct variables are dominated by low-order pressure (p) and gravity (g) modes. The $\delta$\,Sct instability region in the HR diagram partially overlaps with the $\gamma$\,Doradus ($\gamma$\,Dor) group, multi-periodic stars pulsating in high-order g modes with typical masses between 1.5 and 1.8\mass. They are located near the red edge of the classical instability strip of pulsations \citep[e.g.][]{Dupret2005,Aerts2010}. Historically, $\delta$\,Sct and $\gamma$\,Dor stars constituted two separate groups. However, in the modern era of space-based photometric observations, hybrid $\delta$\,Sct/$\gamma$\,Dor pulsators with the frequencies typical for both types are becoming rather a rule than an exception \citep[e.g.][]{Grigahcene2010,Balona2015,Antoci2019}. Low-order p/g modes in $\delta$\,Sct models are excited by the $\kappa$-$\gamma$ mechanism (the opacity mechanism) acting in the partial He\,II ionisation zone \citep[eg.][]{Pamyatnykh1999}. However as noted by \cite{Antoci2014}, the turbulent pressure in the hydrogen ionization zone can play a role in the excitation of high-order p modes. In the case of $\gamma$\,Dor it is widely believed that high-order g modes are excited by the interaction of convection and pulsations \cite[e.g.][]{Guzik2000,Grigahcene2005,Dupret2005,Xiong2016}. A considerable fraction of \dsct\ stars are members of binary systems \citep{Liakos2017}. That gives an opportunity for a powerful test of stellar structure and evolution theory. In particular, such systems enable the determination of current evolutionary status of components and the system age \citep[see e.g.,][]{Higl2017,Daszynska2019}, provide the possibility to study tidal interactions \citep{Bowman2019} and the effect of the mass exchange and its impact on the system evolution. As was shown by \cite{Claret2016,Claret2017,Claret2018,Claret2019}, double-lined eclipsing binary systems can provide also an estimate of overshooting efficiency from the convective core. KIC\,10661783 is a close binary system of spectral type A5IV \citep{Frasca2016}. It was studied for the first time by \citet{Pigulski2009}, who reported that its light curve exhibits eclipses of both components. However, this study was based on the low number of \textit{ASAS} \citep{Pojmanski1997} observational points and no detailed study of the system was possible. The first thorough study of KIC\,10661783 was done by \cite{Southworth2011} who used short cadence (SC, Q2.3) and long cadence (LC, Q0-1) \textit{Kepler} satellite observations. 
The authors found 68 frequency peaks in the systems light curve with 58 identified as the independent frequencies. These independent peaks were assigned to the primary component, defined as the more massive star at the current stage of the system evolution. From the modelling of the eclipsing light curve, they derived two possible solutions. The first one with detached geometry and the mass ratio $q=M_2/M_1=0.25$ and the second one with semi-detached geometry of the system and $q=0.06$. According to the authors, the second determination of $q$ was preferred by their preliminary spectroscopic measurements. However, as the authors noted, their light curve fit was unsatisfactory and required unphysically high albedo for the primary star. This mass-ratio discrepancy was resolved by \cite{Lehmann2013} who gathered 85 spectra of the system and determined the new value of $q=0.09$, that was slightly higher than the value for semi-detached configuration found by \cite{Southworth2011}. The authors state that the system is a post-mass transfer detached binary with the fundamental stellar parameters: $M_A=2.100 \pm 0.028$\mass, $R_A=2.575 \pm 0.015\,R_{\odot}$ for the primary and $M_B=0.1913 \pm 0.0025$\mass, $R_B=1.124 \pm 0.019\,R_{\odot}$ for the secondary. Recent work of \cite{Miszuda2020}, presented the preliminary analysis of the whole available \textit{Kepler} photometry of the system. In this paper we present an extended study of KIC\,10661783 based on the \textit{Kepler} data. In Section\,\ref{sec:observations} we give a short description of the used observations. Sections\,\ref{sec:binarymodelling} and \ref{sec:binaryevolution} are devoted to the eclipsing light curve modelling and binary evolution, respectively. In Section\,\ref{sec:freqanalysis} we analyse the variability of KIC\,10661783 and we extract the frequencies from its light curve residuals. Interpretation of the oscillation spectrum is given in Section\,\ref{sec:PulsationModelling}. Discussion and conclusions in Section\,\ref{sec:conclusions} end the paper. Finally, in Appendix\,\ref{sec:appendix}, we provide a list of all significant frequencies with their amplitudes and phases. \section{Observations} \label{sec:observations} \textit{Kepler Space Telescope} was a space observatory under the subject of NASA space agency. Its core goal was to detect Earth-size planets orbiting solar-like stars, however it has a great potential for asteroseismology \cite[for the mission overwiew, see e.g.,][]{Borucki2010,Koch2010}. It was operating in its original mission form since early 2009 until mid-2013, resulting in $\sim$ 4 years of nearly continuous observations of the fixed field of view. After a technical failure the mission changed to \textit{K2} that observed various fields until the end of 2018, when the spacecraft ran out of fuel. The mission allowed for the creation of a unique catalogue of targets observed with unprecedented data quality, time span and duty cycle. \textit{Kepler} was observing its target stars in two observational modes: short cadence (SC) and long cadence (LC). Each of the modes is composed of the summed up multiple 6.02\,s exposures followed by the 0.52\,s readout time, resulting in total 58.9\,s of exposure in short cadence and 29.4\,min exposure in long cadence \citep{Gilliland2010}. KIC\,10661783 was observed almost continuously for over four years in the \textit{Kepler} long cadence mode. In addition, the star was observed in short cadence mode for over 2 years. 
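For reference, the quoted cadences follow directly from the individual exposures; the short sketch below reproduces the arithmetic, assuming the standard numbers of co-added frames (9 for SC and 270 for LC) from the \textit{Kepler} documentation, which are not stated explicitly in the text:
\begin{verbatim}
# Sketch: how the quoted cadences follow from the 6.02 s exposures and
# 0.52 s readout.  The co-add counts (9 for SC, 270 for LC) are taken from
# the Kepler documentation and are an assumption here.
frame = 6.02 + 0.52              # one exposure plus readout, in seconds
print(9 * frame)                 # short cadence: 58.86 s  (~58.9 s)
print(270 * frame / 60.0)        # long cadence: 29.43 min (~29.4 min)
\end{verbatim}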
These observations were reduced in a similar way as in \cite{Szewczuk2018}. We extracted the flux from the target pixel files using the \texttt{PYKE} code \citep{Still2012} with the custom defined apertures (masks). Our masks contain pixels for which signal to noise ratio exceeds 100. From this raw light curve we removed outliers. To this end the 4-$\sigma$ clipping was used. In order to remove some common instrumental trends we used the so-called co-trending basis vectors. Then, some outliers were found and removed once again by an eye inspection. Finally, data were divided by second order polynomials fitted to the out-of-eclipses data in each quarter separately. The final light curve of KIC\,10661783 consists of over 58\,000 points spread over 1500 days for LC (Q0-Q17) and over 470\,000 points for SC (Q2.3, Q6.1-Q8.3, Q10.1-Q10.3) spread over a period of 769 days. A comparison of the light curves from both, the SC and LC modes can be seen in Fig.\,\ref{lc}. As one can see, the light curve exhibits both, the eclipses and pulsations. \begin{figure} \centering \includegraphics[width=0.5\textwidth,clip]{img/lc_inset.png} \caption{A comparison of the full \textit{Kepler} short cadence (top panel) and long cadence (bottom panel) light curves of KIC\,10661783. The insets show the zoomed area covering 4 days of observations in order to visualize both binary and pulsation variability. The \texttt{WD} model (described in Sect.\,\ref{sec:binarymodelling}) is marked with the red line.} \label{lc} \end{figure} \section{Binary light curve modelling} \label{sec:binarymodelling} We modelled the eclipsing light curve with the \texttt{JKTEBOP} code \citep{Southworth2004} and Wilson-Devinney code \citep[\texttt{WD}, see][]{Wilson1971,Wilson1979}. We used the \texttt{WD} version of May 22, 2015, in which the \textit{Kepler} passband is included, enabling us to properly model the passband-dependent features. The computed model for both SC and LC can be seen in the insets of Fig.\,\ref{lc}. We started the modelling of the LC light curve with the \texttt{JKTEBOP} code. Firstly, we confirmed the results of \cite{Lehmann2013} that the system has a zero eccentricity. At this step we determined a rough value of the orbital period which was used as a starting value for the later analysis. The \texttt{WD} analysis was performed in a detached mode (mode 2) in a time domain. Using the LC data we refined an orbital period value to $\rm P=1.231\,363\,26 \pm 0.000\,000\,03\,d$ which is slightly longer than the period derived by \citet{Southworth2011}, i.e., $1.23136220 \pm 0.00000024$\,d. The difference is lower than 0.1\,s, however exceeds the Southworth's 4$\sigma$ error. We tested the possibility that the system may exhibit a period change, however we found no evidence for that. We also neglected the presence of third body as we noticed no signs of it in the light curve. \begin{table} \centering \caption{The values of the effective temperature of the primary component of KIC\,10661783.} \label{tab:primary_temperature} \begin{tabular}{lc} \hline \hline Source & $T_{\rm eff}$ [K] \\ \hline \cite{Lehmann2013} & 7764 $\pm$ 54 \\ \citeauthor{GAIA2018} (\citeyear{GAIA2018}, DR2) & 7654 $\pm$ 286 \\ \citeauthor{KIC2011} (\citeyear{KIC2011}, Kepler Input Catalog) & 7887 \\ \hline \end{tabular} \end{table} The values of the primary's effective temperature gathered from the literature are summarized in Table\,\ref{tab:primary_temperature}. 
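The clipping and normalisation steps described above can be summarised by the following minimal sketch (the flux extraction itself and the co-trending were done with the \texttt{PYKE} tools; the arrays and the eclipse mask below are placeholders):
\begin{verbatim}
# Sketch of the detrending steps described in the text: 4-sigma clipping and
# division by a second-order polynomial fitted to the out-of-eclipse points of
# each quarter separately.  The arrays time, flux, quarter and the eclipse
# mask are placeholders, not the actual Kepler products.
import numpy as np

def clip_4sigma(time, flux):
    resid = flux - np.median(flux)
    keep = np.abs(resid) < 4.0 * np.std(resid)
    return time[keep], flux[keep]

def normalise_by_quarter(time, flux, quarter, in_eclipse):
    out = np.empty_like(flux)
    for q in np.unique(quarter):
        sel = quarter == q
        fit_sel = sel & ~in_eclipse              # fit only out-of-eclipse points
        coeff = np.polyfit(time[fit_sel], flux[fit_sel], deg=2)
        out[sel] = flux[sel] / np.polyval(coeff, time[sel])
    return out
\end{verbatim}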
We allowed for the wider range of the effective temperature derived by \cite{GAIA2018}, i.e., $T_{\rm eff}\in (7368, 7940)$\,K, for a better estimation of uncertainties in the parameters from the \texttt{WD} solution. This range includes the more precise determination of \cite{Lehmann2013}. In the first run we fixed the effective temperature of the primary with the value resulting from the spectroscopic analysis of \cite{Lehmann2013}. The values of the semi-major axis and mass ratio were adopted from \cite{Lehmann2013} as well. We note, that the \texttt{WD} output of the effective temperature for the secondary fits well into its spectroscopic determination within 1$\sigma$ error. We were able to obtain a smaller value of albedo, i.e., 1.4 $\pm$ 0.14 comparing to 2.46 in \cite{Lehmann2013}, but it is still larger than 1.0. Our study confirms some other results obtained by \citet{Lehmann2013}. In particular we obtained almost identical values of the inclination angle $i$, the surface potentials and the value of $T_{\rm eff}$ for the secondary. In the top panel of Fig.\,\ref{lc_phase_folded} with blue dots we plot all available SC observations as a function of the orbital phase. The \texttt{WD} binary light curve model is represented with a red solid line. The middle and bottom panels show residuals from unbinned and binned observations respectively. The SC observations were binned to 1000 points in a phase space. We do not plot those bins in the top panel as they coincide with the model almost perfectly. However, the residuals from binned points exhibit regular trends. Since all pulsational variability, that is not a multiple of orbital frequency, has been averaged by phasing and binning the observations, this phenomenon is another kind of variability that manifests itself by the presence of the orbital harmonics. To estimate uncertainties in all parameters of the system we checked the effect of changing the effective temperature of the primary on such parameters as: $T_{\rm eff}$ of the secondary, luminosities and radii of both components etc. To this aim we repeated the \texttt{WD} model fitting with the fixed minimum and maximum values of $T_{\rm eff}$. We also took into account the whole measured range of $q = 0.0898 - 0.092$ \citep{Lehmann2013}, different mesh sizes and two different limb darkening laws; linear and square root law. The effect of changing the mesh size and the limb darkening law is negligible. However, this time to speed up the computations, instead of using HJD times of observations, we supplied the program with the binned, phased-folded light curve. Next, we determined the luminosities of both components, using a simple formula $\log L/L_{\odot} = 4 \log(T_{\rm eff}/T_{\rm eff \odot}) + 2 \log(R/R_{\odot})$ and adopting the 1$\sigma$ errors of both radius and temperature. Table\,\ref{tab:system_parameters} gives a summary of all control and fixed parameters that were used during the \texttt{WD} runs. \begin{figure*} \centering \includegraphics[width=0.8\textwidth,clip]{img/model_KIC_bin.png} \caption{The upper panel presents the SC light curve phased with the orbital period. Those observations are depicted with blue points. The binary light curve model that was calculated using the \texttt{WD} code is plotted with the red solid line. 
The bottom panels presents the residuals after subtracting the binary light curve model from the observations and from the binned points.} \label{lc_phase_folded} \end{figure*} \begin{table*} \caption{The physical and orbital parameters of KIC\,10661783 found from the \texttt{WD} modelling. In the last two rows, we give the mesh size parameters.} \label{tab:system_parameters} \begin{tabular}{lccc} \hline \hline Parameter & Primary & Secondary & System \\ \hline Orbital period (days) & ... & ... & $1.2313632588 \pm 3.26 \times 10^{-8} $ \\ $dP/dt$ (days/year) & ... & ... & $0.00 \pm 0.16\times 10^{-9}$ \\ Time of primary minimum (BJD-2\,454\,900) & ... & ... & $164.5464621147 \pm 4.34 \times 10^{-5}$ \\% 0.0000434218$\\ Orbital inclination (degree) & ... & ... & $82.03 \pm 0.01$ \\ Orbital eccentricity $e$ & ... & ... & $0.0^{\star}$ \\ Semi-major axis ($R_{\odot}$)& ... & ... & $6.375^{\star1}$ \\ Mass ratio $q=M_2/M_1$ & ... & ... & $0.09109^{\star1}$ \\ Mass ($M_{\odot}$) & $2.100^{1}$ & $0.1913^{1}$ & ... \\ Radius ($R_{\odot}$) & $2.5793 \pm 0.0223$ & $1.1320 \pm 0.0020$ & ... \\ $T_{\rm eff}$ (K) & $7654 \pm 286^{\star}$& $6136 \pm 203$ & ... \\ $\log L/L_{\odot}$ & $1.3113 \pm 0.0652$ & $0.2121 \pm 0.0575$ & ... \\ Surface potential, $\Omega$& $2.60$ & $1.99$ & ... \\ Albedo & 1.4 $\pm$ 0.14 & 0.62 $\pm$ 0.05 & ...\\ Limb darkening law & Square root$^{\star}$ & Square root$^{\star}$ & ...\\ N1, N2 & 90$^{\star}$ & 90$^{\star}$ & ...\\ N1L, N2L & 60$^{\star}$ & 60$^{\star}$ & ...\\ \hline \multicolumn{4}{l}{\textbf{Notes:} $^{\star}$ Fixed, $^1$ \cite{Lehmann2013}}\\ \end{tabular} \end{table*} \section{Frequency analysis} \label{sec:freqanalysis} Apart from the eclipses, the light curve of KIC\,10661783 shows clear additional variability (see Fig.\,\ref{lc}) that mainly can be attributed to pulsations. In order to extract frequencies, we subtracted our eclipsing model from the original data and analysed the residua. We calculated amplitude spectra by means of a discrete Fourier transform \citep{Deeming1975,Kurtz1985} and followed the standard pre-whitening procedure. Given the total number of points in the analysed SC light curve, performing the Fourier analysis up to the Nyquist frequency ($\sim 730$\cpd) is very time consuming. Since our preliminary study of the periodograms calculated for SC data light curve showed numerous frequency peaks, and none over the $200$\cpd, therefore we decided to stop calculating periodograms on the original light curve at $200$\cpd for both, SC and LC data. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{img/periodograms.png} \caption{Periodograms calculated for the \textit{Kepler} SC observations corrected for the binary orbit. The amplitude spectra for the original data are shown in the top panel. The middle and bottom panels present the periodograms calculated for the light curve pre-whitened for 350 and 750 frequencies, respectively. The 4\,S/N level is marked with the red lines. Note that the Y-axis scale differs between the panels.} \label{periodograms} \end{figure} We assumed the signal-to-noise ratio $\rm S/N=4$ as a threshold for significant frequencies \cite[see][]{Breger1993,Kuschnig1997}, however those with S/N<5 should be treated with caution \citep{Baran2015}. The noise was calculated as an average amplitude value in a 1\cpd\ window centred at a given peak before its extraction. Our careful analysis has revealed 750 frequency peaks for the SC data. 
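A single step of the pre-whitening procedure outlined above can be sketched as follows (the data arrays, the frequency grid, and the simplified noise estimate are placeholders; the actual analysis used the full SC light curve and a much finer grid):
\begin{verbatim}
# Sketch of one pre-whitening step as described in the text: locate the highest
# peak of the DFT amplitude spectrum, fit and subtract a sinusoid, and accept it
# only if its S/N (noise ~ mean amplitude in a 1 c/d window around the peak)
# exceeds 4.  The arrays t (days) and y (relative flux) are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def amplitude_spectrum(t, y, freqs, chunk=200):
    # DFT of unevenly sampled data (Deeming 1975), computed in frequency chunks
    # to limit memory; a dedicated periodogram code is faster for the full data.
    amp = np.empty_like(freqs)
    for i in range(0, len(freqs), chunk):
        arg = 2.0 * np.pi * np.outer(freqs[i:i+chunk], t)
        amp[i:i+chunk] = 2.0/len(t) * np.abs(np.cos(arg) @ y + 1j*(np.sin(arg) @ y))
    return amp

def prewhiten_once(t, y, fmax=100.0, df=1e-3, snr_limit=4.0):
    freqs = np.arange(df, fmax, df)
    amp = amplitude_spectrum(t, y, freqs)
    i = np.argmax(amp)
    window = np.abs(freqs - freqs[i]) < 0.5          # 1 c/d window
    snr = amp[i] / np.mean(amp[window])
    if snr < snr_limit:
        return None, y
    model = lambda t, a, f, ph: a * np.sin(2*np.pi*(f*t + ph))
    p, _ = curve_fit(model, t, y, p0=[amp[i], freqs[i], 0.0])
    return (p[1], p[0], snr), y - model(t, *p)
\end{verbatim}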
Fig.\,\ref{periodograms} presents the periodograms calculated for the original SC data corrected for the binary orbit (the top panel), after pre-whitening for 350 (the middle panel) and for 750 found frequencies (the bottom panel). The residual periodogram, presented on the bottom of Fig.\,\ref{periodograms}, has some visible humps around 25, 35, 50 and 75\,\cpd. Those humps are most probably due to the unresolved signal left in the data. Many of the frequencies detected in the SC light curve are present in the LC data set as well. Naturally, in the case of the LC observations there is a problem with aliasing due to low value of the pseudo-Nyquist frequency ($\sim 25$\cpd), however the LC data gives better frequency resolution and extracted frequencies can be compared with those found from the SC analysis. In the next step, we applied a selection criterion to these 750 frequencies using the resolution condition. We adopted the resolution of 1.5 times the Rayleigh limit i.e., 1.5/T, where T is the total time span of the observations \citep{Loumos1978}. This gives the values of $\Delta f_{\rm R, SC} = 0.00195$\cpd\ and $\Delta f_{\rm R, LC} = 0.00102$\cpd\ for SC and LC data, respectively. Then, we checked whether frequencies found in the SC are separated by the distance lower than $\Delta f_{\rm R, SC}$. If so, and both frequencies have their equivalents in the LC data (with the accuracy of $\Delta f_{\rm R, SC}$) with the separation greater than $\Delta f_{\rm R, LC}$ then we accept those frequencies as real ones. If their separation is less than $\Delta f_{\rm R, LC}$, we treated the frequency with lower amplitude as spurious and remove it from the list. After this procedure, we rejected 160 frequencies from the SC. The remaining set of 590 significant frequencies we regard as a final one for the further identification of possible combinations. We looked for possible combinations of all significant frequencies and the orbital frequency using simple formula: $m \times f_i + n \times f_j$, with $m$ and $n$ being integers between -10 and 10. Moreover, we identified the orbital harmonics, i.e., $N \times f_{\rm orb}$, with N being an integer greater than zero. Finally, we determined that 207 amongst all significant frequencies seem to be independent within the adopted frequency resolution. In Fig.\,\ref{osc}, we show the final results of the frequency analysis. In the five panels from the top to bottom, we can see: all significant frequency peaks found in the SC data, harmonics of the orbital frequency (83 peaks), combinations with the orbital frequency, combinations of independent frequencies and, in the bottom panel, only the independent frequency peaks. A complete list of the frequencies after the rejection of those regarded as spurious due to the adopted frequency resolution is in Appendix\,\ref{sec:appendix}. The possible combinations are listed in the Remarks column. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{img/osc_spectrum_separate_panels.pdf} \caption{The frequency peaks from the analysis of the SC data after subtraction of the orbital model and after a selection for the frequency resolution. In the top panel, we show all 590 significant frequencies found in the SC data. The lower three panels show: the orbital frequency harmonics, combinations with the orbital frequency and combinations between the independent frequencies. The bottom panel shows only the independent frequencies (207 peaks). 
Note that the X-axis scales differ between the panels.} \label{osc} \end{figure} Despite the fact, that the selected frequencies range up to $\sim$ 180\cpd, above $80$\cpd\ we observe only harmonics of the orbital frequency (up to $223 \times f_{\rm orb}$). Such frequencies can appear whenever one considers a light curve corrected for imperfect binary model or when a tidally-locked pulsations occur. What is more, we also found combinations of the pulsation frequencies and the orbital frequency which may appear when the amplitude depends on which side of the star is oriented towards the observer (i.e. on the orbital phase). The second plausible explanation is that, since we analyse the system undergoing eclipses, the component's contribution to the total light changes with the orbital period. Even if the primary component would pulsate with the constant amplitude, the observed amplitude would change during eclipses. In Fig.\,\ref{f1_combinations}, we show the combinations of the strongest pulsational frequency with the orbital frequency. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{img/f1_orbital_combinations.pdf} \caption{Combinations of the dominant pulsational frequency with the orbital frequency ($f_1 + N\times f_{\rm orb}$). All frequency peaks are equidistant by the orbital frequency with the accuracy of $\Delta f_{\rm R, SC}$.} \label{f1_combinations} \end{figure} To investigate this phenomenon closely, we removed all periodicities from the light curve except for $f_1$ and its combinations with the orbital frequency. Next, we divided the light curve into intervals in the orbital phase $\Delta \phi = 0.1$ and fitted the frequency $f_1$ to determine the amplitudes and phases in each interval separately. Such procedure was also repeated for $f_2$ and $f_3$. The amplitude change can be seen in Fig.\,\ref{ampl_modulation}, where we plotted the amplitudes for three strongest modes as a function of the orbital phase. As one can see, these amplitudes change with the orbital phase. In the case of $f_1$, it resembles sinusoidal variability with the amplitude maxima occurring near the orbital phase 0.3 and 0.7. Moreover, the amplitude of $f_2$ seems to be in anti-phase with the amplitudes of $f_1$ and $f_3$. Such amplitude modulation can suggest that one may be able to distinguish various parts of the primary with different values of the pulsational amplitude. Such a case was explored by \cite{Springer2013} where the authors studied pulsational distribution on the tidally deformed component of the binary system. They concluded that the more the star is tidally deformed the more pulsations are trapped in the circumpolar region located in the hemisphere that is facing outwards the system. Their theoretical results resemble our observations. Only recently \cite{Handler2020} reported the first ever found binary system with a star that pulsates only on the one hemisphere, facing either the first or third Lagrange point ($L1$ or $L3$). CO Cam is the second reported case of a system exhibiting pulsations on a hemisphere facing the $L1$ point at least in four frequencies \citep{Kurtz2020}. \cite{Fuller2020} explained this phenomenon as a result of tidal mode coupling and called it \textit{tidally tilted pulsators}. KIC\,10661783 may be another star pulsating in a similar way. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{img/ampl_modulation.pdf} \caption{The amplitude modulation effect presented for three strongest, independent frequencies. 
Details are described in the text.} \label{ampl_modulation} \end{figure} \section{Binary evolution models} \label{sec:binaryevolution} Binary systems present a wide variety of interactions altering the evolution of both components and the system as a whole. Mass-transfer events causing the rejuvenation of one of the components are only one of many effects that must be taken into account when modelling the stellar evolution. To model the binary KIC\,10661783 as a product of a binary evolution, we used the Modules for Experiments in Stellar Astrophysics \citep[\texttt{MESA},][]{Paxton2011, Paxton2013, Paxton2015, Paxton2018, Paxton2019} in its version 12115 with the \texttt{MESA-binary} module. \texttt{MESA} relies on a variety of input microphysics data. The \texttt{MESA} EOS is a blend of the OPAL \citep{Rogers2002}, SCVH \citep{Saumon1995}, FreeEOS \citep{Irwin2004}, HELM \citep{Timmes2000}, and PC \citep{Potekhin2010} EOSes. Radiative opacities are primarily from the OPAL project \citep{Iglesias1993,Iglesias1996}, with data for lower temperatures from \citet{Ferguson2005} and data for high temperatures, dominated by Compton scattering, from \citet{Buchler1976}. Electron conduction opacities are from \citet{Cassisi2007}. Nuclear reaction rates are from JINA REACLIB \citep{Cyburt2010} plus additional tabulated weak reaction rates from \citet{Fuller1985}, \cite{Oda1994} and \cite{Langanke2000}. Screening is included via the prescription of \citet{Chugunov2007}. Thermal neutrino loss rates are from \citet{Itoh1996}. The \texttt{MESA-binary} module allows one to construct a binary model and to compute the evolution of both components, including such effects as mass transfer and the evolution of the orbital elements. Roche lobe radii in binary systems are computed using the fit of \citet{Eggleton1983}. Mass transfer rates in Roche lobe overflowing binary systems are determined following the prescription of \citet{Ritter1988}. For our evolutionary computations, we used the \cite{Asplund2009} chemical mixture and OPAL opacity tables. We adopted the Ledoux criterion for the convective instability with the mixing-length theory description by \cite{Henyey1965} and semi-convective mixing with the parameter $\alpha_{\rm SC}=0.01$. The diffusive exponential overshooting scheme described by \cite{Herwig2000} and parametrized by the parameter $f_{\rm ov}$ was applied. For the mass-transfer treatment we used the Kolb-type scheme \citep{Kolb1990}. We included stellar winds from both components following the prescription of \cite{Vink2001}. For the sake of simplicity, we ignored stellar rotation in the computations. Whenever one considers binary evolution, a crucial point is the choice between a conservative and a non-conservative mass-transfer scheme, i.e., whether mass is retained in or lost from the system during the transfer, respectively. However, it is still debated which type should be adopted \cite[see e.g.,][]{Kolb1990,Sarna1992,Sarna1993,Guo2017}. Recently, \cite{Chen2017} concluded that the formation of stars similar to KIC\,10661783 can be fully explained in a non-conservative way. Here, we rely on the non-conservative transfer described as the fraction of mass lost from the vicinity of the accretor in the form of a fast wind during the mass transfer \citep{Tauris2006}. This mass fraction is denoted by the parameter $\beta$, with the value varying between 0, meaning totally conservative mass transfer, and 1, describing a totally non-conservative mass-transfer scheme. 
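As an illustration of the Roche geometry entering these computations, the \citet{Eggleton1983} fit can be evaluated for the present-day parameters of KIC\,10661783 adopted in this work (semi-major axis $a=6.375\,R_{\odot}$ and mass ratio $q=0.091$); the short sketch below is illustrative only and is not part of the \texttt{MESA} setup:
\begin{verbatim}
# Sketch: Roche-lobe radii from the Eggleton (1983) fit for the present-day
# parameters of KIC 10661783 (a = 6.375 R_sun, M2/M1 = 0.091, as adopted in
# the text).  q is the mass ratio M_star/M_companion of the lobe-filling star.
import numpy as np

def roche_lobe_radius(q, a):
    # Eggleton (1983): R_L/a = 0.49 q^(2/3) / (0.6 q^(2/3) + ln(1 + q^(1/3)))
    return a * 0.49 * q**(2/3) / (0.6 * q**(2/3) + np.log(1.0 + q**(1/3)))

a, q = 6.375, 0.091                    # R_sun, M2/M1
print(roche_lobe_radius(1.0/q, a))     # lobe of the primary   (~3.7 R_sun)
print(roche_lobe_radius(q, a))         # lobe of the secondary (~1.3 R_sun)
\end{verbatim}
With the measured radii of $\sim2.58\,R_{\odot}$ and $\sim1.13\,R_{\odot}$, both components currently lie inside their lobes, consistent with the detached, post-mass-transfer configuration discussed in the text.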
Except of the initial masses, the main parameter controlling the evolution of the binary undergoing the mass-transfer is the initial orbital period, $P_{\rm in}$. It is also a main parameter that distinguishes between \textit{A} and \textit{B} Roche-lobe overflow cases \citep{Kippenhahn1967}. Case \textit{A} describes the mass exchange with a donor in the main-sequence phase and case \textit{B} the mass exchange in the rapid core contraction phase preceding helium ignition. We followed the evolution of both components with the initial orbital periods between 1.8 to 4.0\,d with the step reaching the value down to $\Delta {P}=10^{-5}$\,d. In order to demonstrate the importance of the orbital period on the systems evolution, in Fig.\,\ref{P_dependence} we show evolutionary tracks of the 1.71\mass\ donor, that evolves in a binary system with the 1.15\mass accretor. Calculations were done for $Z=0.020$ and $X_0=0.70$. All tracks start from a black dot. It is clearly visible that the moment when the mass transfer starts (marked with red stars) is fully controlled by the initial orbital period. In order to obtain a model reproducing the system at its current evolutionary stage, with masses and radii as determined by \cite{Lehmann2013} and the orbital period from our analysis, we built an extensive grid of models\footnote{For inlists see \url{https://doi.org/10.5281/zenodo.4618112}}. Our grid covered a wide range of initial masses; 0.7 to 1.7\mass\ for the accretor and 1.0 to 2.5\mass\ for the donor, with the step descending iteratively, down to the value of $\Delta \rm{M}=0.001$\mass, whenever a local minimum of fitting was found. \begin{figure*} \centering \includegraphics[width=1.75\columnwidth,clip]{img/P_dependence_evolution_colorbar.png} \caption{The Hertzsprung-Russel diagram with the donor evolutionary tracks computed as a results of the binary evolution of components with the initial masses $M_{\rm don,ini}=1.71$\,\mass\ and $M_{\rm acc,ini}=1.15$\,\mass and with six values of the initial orbital period, $P_{\rm in}$. The other input parameters are: $X_0=0.70$, $Z=0.020$, $f_{\rm ov}=0.01$, $\alpha_{\rm MLT}=2.0$ and $\beta=0.5$. All tracks start from a black dot. The beginning of the mass transfer is marked with the red stars. The colour of the tracks depends on the mass transfer rate at a given evolution moment, as specified in the colour-bar. The dashed line separates the two mass transfer scenarios: case A and B. The position of the secondary component of KIC\,10661783 is marked with the 3$\sigma$ error box.} \label{P_dependence} \end{figure*} Firstly, a sparse grid of models for $Z=0.014$ and $X_0=0.7$ was computed for aforementioned mass and period ranges. We tested different values of $\beta$ (between 0 and 1 with $\Delta \beta$=0.1) and $f_{\rm ov}$ (between 0.00 and 0.04, with $\Delta f_{\rm ov}$=0.01) to find their preferable values. To find the best models, for each star we calculated the discriminant $D^2$: $$ D^2 = \frac{1}{N}\sum_{\rm i=1}^{\rm N} \left( \frac{X_{\rm obs, i}-X_{\rm model, i}}{\sigma_{\rm obs, i}} \right)^2$$ where $X$ denotes the considered parameters, i.e., the orbital period, mass, radius, effective temperature and luminosity for each star. $N$ is the total number of considered parameters. Then, the mean value of $D^2$ was computed as $$<D^2> = \frac{D_1^2 + D_2^2}{2},$$ where $D_1$ and $D_2$ are the discriminants for the primary and secondary, respectively. 
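For concreteness, the discriminant defined above can be written as the following minimal sketch (the orbital-period term is omitted for brevity, and the model values are placeholders standing in for a single grid point; the observed values and uncertainties are those quoted in this paper):
\begin{verbatim}
# Sketch of the model-selection discriminant D^2 defined above.  Observed
# values and sigmas follow the Lehmann et al. (2013) masses and radii and the
# quoted log Teff and log L; the "model" numbers are placeholders for one
# MESA-binary model.  The orbital-period term is omitted here for brevity.
import numpy as np

def discriminant(obs, sigma, model):
    obs, sigma, model = map(np.asarray, (obs, sigma, model))
    return np.mean(((obs - model) / sigma) ** 2)

# order: M [Msun], R [Rsun], log Teff, log L/Lsun
obs1, sig1 = [2.100, 2.575, 3.890, 1.335], [0.028, 0.015, 0.003, 0.013]    # primary
obs2, sig2 = [0.1913, 1.124, 3.778, 0.161], [0.0025, 0.019, 0.007, 0.026]  # secondary
mod1 = [2.100, 2.618, 3.886, 1.335]     # placeholder model values
mod2 = [0.187, 1.125, 3.838, 0.407]

D1, D2 = discriminant(obs1, sig1, mod1), discriminant(obs2, sig2, mod2)
print(D1, D2, 0.5 * (D1 + D2))          # <D^2> used to rank the models
\end{verbatim}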
In the first step, from the computed grid, we selected the models, that for a given value of $P_{\rm in}$ reproduce masses and radii of both components within 3$\sigma$. The most preferable initial value of the accretor mass was $0.9$\mass\ while the initial donor mass was about 1.4\mass. The totally conservative mass transfer ($\beta=0.0$) and the overshooting $f_{\rm ov}=0.02$ were preferred. However as we noted, our models had difficulties with reproducing the observed radius of the donor. In order to fix that discrepancy, we tested the dependence between the final masses and radii of both components on additional parameters, like metallicity $Z$, initial hydrogen abundance $X_0$ and mixing length parameter $\alpha_{\rm MLT}$. We found that increasing metallicity to $Z=0.025$, the initial hydrogen abundance to $X_0=0.73$ and adopting $\alpha_{\rm MLT}=1.3$ helps to improve the agreement. However for such parameters we had to move away from the conservative mass transfer assumption to $\beta=0.05$, i.e. 5\% of the transferred mass being lost from the system. In order to find the best model we used the already found preferable values of the parameters and changed the approach. Instead of making a finer mesh of models, we used diagrams showing the dependence of the final values of masses and radii on the initial period, $P_{\rm in}$. The examples of such diagrams are presented in Fig.\,\ref{R_M}, where we plotted models varying only in the initial period $P_{\rm in}$, while the following parameters have been fixed: $M_{\rm don,ini}=1.45$\mass, $M_{\rm acc,ini}=0.91$\mass, $Z=0.025$, $X_0=0.73$, $f_{\rm ov}=0.02$, $\alpha_{\rm MLT}=1.3$ and $\beta=0.05$. As one can see, there is a clear dependence of the final masses and radii on the initial period. Too high value of $P_{\rm in}$ allows the donor to reach the red giant phase, after which it enters the nearly-constant luminosity phase leading further to several flashes in a H-burning shell before it cools down. In such case, the orbital period begins to increase rapidly, at the onset of mass transfer, reaching values of tenths of days in a cooling phase. On the other hand, too low value of $P_{\rm in}$ causes a slow rate of mass transfer, at which the donor's radius and the systems orbital period are decreasing. However, the donor is not able to contract to become the helium white dwarf (He-WD). The boundary between those scenarios is set by the so-called \textit{bifurcation period}, at which there is a sudden drop, visible in Fig.\,\ref{R_M}. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{img/R_M_P_mlt_1.3.pdf} \caption{The dependency of the final masses and radii of each component on the initial orbital period $P_{\rm in}$. The models have parameters: $M_{\rm don,in}=1.45$\mass, $M_{\rm acc,in}=0.91$\mass, $Z=0.025$, $X_0=0.73$, $f_{\rm ov}=0.02$, $\alpha_{\rm MLT}=1.3$ and $\beta=0.05$. The horizontal lines give the observed $3\sigma$ range of masses and radii. The vertical, grey lines on the insets mark the best solution.} \label{R_M} \end{figure} The parameters of the best model (in terms of the discriminant) reproducing the orbital period, masses and radii of both components as well as their positions in the HR diagram are presented in Table\,\ref{tab:parameters}. However, the secondary component in our best model lies slightly outside 3$\sigma$ error box in the HR diagram. The model has $M_{\rm don,ini}=1.45$\mass, $M_{\rm acc,ini}=0.91$\mass and the initial period, $P_{\rm in}=3.70805$\,d. 
In Fig.\,\ref{HR_best_models}, we plotted the corresponding evolutionary tracks for the accretor (the orange line) and donor (the blue line) in the HR diagram. For comparison, we also show evolutionary tracks calculated for single-star evolution with grey lines, adopting the same values of $Z$, $X_0$, $f_{\rm ov}$ and $\alpha_{\rm MLT}$ as in the binary case. The positions of the primary and secondary components are shown with their 1$\sigma$ and 3$\sigma$ errors, inside which we marked the best fitting models. \begin{figure*} \centering \includegraphics[width=1.75\columnwidth,clip]{img/HR_full.pdf} \caption{Binary evolutionary tracks in the HR diagram computed for the best matching set of parameters, as described in Table\,\ref{tab:parameters}. The orange and blue tracks mark binary evolution models for the accretor and donor, respectively. Grey evolutionary tracks calculated from single evolution of various initial masses are plotted for comparison. Thick grey lines represent tracks for masses corresponding to the current masses of the binary components. Positions of the primary and secondary components of KIC\,10661783 are marked with 1 and 3$\sigma$ boxes. The orange dot, inside the 1$\sigma$ box of the primary component, marks the position of the best model reproducing the mass, radius, effective temperature and luminosity. The grey dot represents the position of the model from single evolution. The blue dot marks the position of the donor model from binary evolution.} \label{HR_best_models} \end{figure*} We found that in order to fit the secondary component within its observed ranges of mass and radius, we must ensure that the donor loses most of its outer shell before it evolves towards the red giant phase. For mass sets describing both the initial and the current state of the KIC\,10661783 system, this can be achieved only in case A, i.e., with mass transfer occurring in the main sequence phase of the donor. In this scenario, during the mass transfer, the donor enters the nearly constant-luminosity phase with ever-growing effective temperature, towards H-shell flashes, to the cooling sequence of the helium dwarf stage. In the next step, we studied the effect of binary evolution on the internal abundance profiles of H and He of both components. In Fig.\,\ref{abundance_profiles}, we present the H and He profiles as a function of the relative radius (the top panels) and temperature (the bottom panels) for both components for the best binary model we found. For comparison, we show the abundance profiles for the single-evolution model of the primary with dashed lines. This model was calculated for the observed value of the primary mass, i.e., $M=2.1$\mass, using the same set of parameters, i.e. $Z=0.025$, $X_0=0.73$, $\alpha_{\rm MLT}=1.3$, $f_{\rm ov}=0.02$, and assuming the same prescription and parametrisation of stellar wind. This model is marked in Fig.\,\ref{HR_best_models} with the grey dot. Since the single primary's model undergoes a standard main sequence evolution, it has an extended hydrogen envelope surrounding a H-burning core enriched in helium up to 60\%. Its binary equivalent manifests the influence of the past mass-transfer history on its interior. Because it is much older than the corresponding single model (6.39 vs 0.97 Gyr), it has a much higher abundance of He in the core. The outer shell reveals an interesting property of the H and He profiles. 
Near the core boundary its structure reflects the initial hydrogen and helium composition; however, the further away from the core, the higher the abundance of helium relative to hydrogen. At the surface, the H/He ratio is reversed, so the outermost He-rich layer is a direct manifestation of the past mass transfer. The secondary, at the onset of mass transfer, is still fusing H into He. By that time, at an age of nearly 4.7 Gyr, it has built an almost pure-He core (about 92\% He) with only 5\% of H left. This means that the mass-transfer event happened right before the overall contraction. Although it is systematically being stripped of its outer layers, fusion in the core is still producing He. The moment when the core becomes fully helium and shell H-burning begins corresponds roughly to the minimum of effective temperature and luminosity of the donor track in Fig.\,\ref{HR_best_models}. From this point on, the luminosity and effective temperature of the donor increase, due to the new region of H-burning, leading it to its currently observed state of a helium-core pre-white dwarf. The profile of the metallicity $Z$ is almost constant throughout the interior of both components and almost independent of the evolutionary past of the system. As determined by \citet{Lehmann2013} from spectroscopy, KIC\,10661783 exhibits some anomalies in the abundances of elements like C, N and O, which can be explained by the mass transfer in the past. From their determinations, the abundance of nitrogen is much greater than the solar value of \citet{Asplund2009} (AGSS09), the abundance of oxygen is about the same, and carbon is less abundant than the value of AGSS09. The abundances of N and O from our binary-evolution modelling agree with the results of \citet{Lehmann2013} within the 2$\sigma$ error, while the abundance of C of the primary lies about $4\sigma$ below the value of \citet{Lehmann2013}. Our abundances of CNO are given in Table\,\ref{tab:abundances} together with determinations of \citet{Lehmann2013} and the AGSS09 solar values for comparison. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{img/XYZ_profiles_r+single.pdf} \\ \includegraphics[width=\columnwidth,clip]{img/XYZ_profiles_T+single.pdf} \caption{The abundance profiles of H and He as a function of the relative radius (the top panel) and temperature (the bottom panel) for both components. The profiles obtained from the binary-evolution computations are plotted with solid lines whereas the profiles resulting from the single evolution with dashed lines. The primary mass for both the single- and binary-evolution computations is 2.1\mass, while the secondary has a mass of 0.187\mass.} \label{abundance_profiles} \end{figure} \begin{table*} \caption{The parameters of the system and both components from our \texttt{WD} modelling and from the analysis of \citet{Lehmann2013}. 
In the last column we give the set of parameters obtained from the best fitting of the evolutionary \texttt{MESA-binary} models.} \label{tab:parameters} \begin{tabular}{lccc} \hline \hline Parameters & \texttt{WD} & \cite{Lehmann2013} & \texttt{MESA} \\ & Model & & Model \\ \hline \multicolumn{4}{l}{\textbf{----- Initial parameters -----}} \\ Initial orbital period $P_{\rm in}$ (d) & -- & -- & 3.70805 \\ Donor initial mass (\mass) & -- & -- & 1.450 \\ Accretor initial mass (\mass) & -- & -- & 0.910 \\ Initial Z & -- & -- & 0.025 \\ Initial $X_0$ & -- & -- & 0.73 \\ \multicolumn{4}{l}{\textbf{----- Orbital parameters -----}} \\ Orbital period P (d) & \textbf{$1.23136326 \pm 3 \times 10^{-8}$} & \textbf{$1.23136220 \pm 2.4 \times 10^{-7}$} & 1.23136326 \\ $M_2/M_1$ & 0.09109$^{\star}$ & 0.09109 & 0.08906 \\ Age (Gyr) & -- & -- & 6.39 \\ \multicolumn{4}{l}{\textbf{----- Primary star (accretor) -----}} \\ Mass (\mass) & -- & 2.100 $\pm$ 0.028 & 2.100 \\ Radius (R$_{\odot}$) & 2.5793 $\pm$ 0.0224 & 2.575 $\pm$ 0.015 & 2.618 \\ $\log T_{\rm eff}$ (K) & 3.8840 $\pm$ 0.0170 & 3.890 $\pm$ 0.003 & 3.886 \\ $\log L/L_{\odot}$ & 1.3113 $\pm$ 0.0651 & 1.335 $\pm$ 0.013 & 1.335 \\ \multicolumn{4}{l}{\textbf{----- Secondary star (donor) -----}} \\ Mass (\mass) & -- & 0.1913 $\pm$ 0.0025 & 0.187 \\ Radius (R$_{\odot}$) & 1.1320 $\pm$ 0.0020 & 1.124 $\pm$ 0.019 & 1.125 \\ $\log T_{\rm eff}$ (K) & 3.7878 $\pm$ 0.0146 & 3.778 $\pm$ 0.007 & 3.838 \\ $\log L/L_{\odot}$ & 0.2121 $\pm$ 0.0575 & 0.161 $\pm$ 0.026 & 0.407 \\ \hline \multicolumn{4}{l}{\textbf{Notes:} $^{\star}$ Fixed}\\ \end{tabular} \end{table*} \begin{table} \centering \caption{Element abundances for both primary and secondary component compared to the Sun values, which are given below the element designation.} \label{tab:abundances} \begin{tabular}{lccc} \hline \hline & C & N & O \\ Solar \citep{Asplund2009} & 8.43 $\pm$ 0.05 & 7.83 $\pm$ 0.05 & 8.69 $\pm$ 0.05 \\ \hline \multicolumn{4}{c}{\textbf{------- \cite{Lehmann2013} (spectroscopy)---------}}\\ Primary & 8.21 $\pm$ 0.28 & 8.95 $\pm$ 0.34 & 8.6 $\pm$ 0.50 \\ Secondary & 7.56 $\pm$ 0.25 & -- & -- \\ \multicolumn{4}{c}{\textbf{----------- This paper (binary evolution)---------}}\\ Primary & 7.08 & 9.54 & 9.18 \\ Secondary & 7.08 & 9.54 & 9.18 \\ \hline \end{tabular} \end{table} \section{Pulsation modelling} \label{sec:PulsationModelling} The essential step of asteroseismic modelling is mode identification. In the case of one-colour \textit{Kepler} data for KIC\,10661783, with no regularities in frequencies or periods, we were unable to identify any pulsational mode. Therefore, we limited our seismic study to reproducing the mode instabilities in the observed frequency range. In the modern era of space-based photometric observations, the hybrid stars pulsating in both g- and p-mode regimes are rather a rule than an exception. In the case of $\delta$ Sct models the low frequencies face a problem because the current standard opacity models predict instability only for higher frequency modes ($f \gtrsim 4$\cpd), while low frequency modes ($f \lesssim 4$\cpd) remain stable \cite[see e.g.][]{Balona2015}. This can be due to insufficient understanding of the theory of stellar pulsations or due to still existing uncertainties in opacity data. The same problem exists in the case of $\beta$\,Cep/SPB hybrid pulsators. This discrepancy cannot be explained by changing model parameters as hydrogen or metal contents in the star as well as by the effects of rotation. 
To fix that problem, opacity modifications near the Z-bump for $\beta$\,Cep were proposed \citep[e.g.][]{Daszynska2017}. It was found by \citet{Cugier2012,Cugier2014} in the Kurucz model atmospheres \citep{Castelli2003} that both OPAL and OP opacities are underestimated near $\log T\approx5.06$\,K. Guided by this result, \cite{Balona2015} showed that increasing the mean opacity at this temperature excites low-frequency dipole modes in $\delta$\,Sct models. For the best model obtained from the binary-evolution computations, we calculated pulsations for the main component using the non-adiabatic code for linear pulsations \citep{Dziembowski1977}. We considered modes with the harmonic degree $\ell = 0 - 4$. We studied the effect of the binary evolution on the pulsational characteristics by comparing the instability parameter $\eta$ of the binary and single evolution model for the primary component. The parameter $\eta$, introduced by \cite{Stellingwerf1978}, is a normalised work integral computed over the pulsational cycle. The value of $\eta$ greater than 0 means that a driving mechanism overcomes damping and the pulsation mode is excited (unstable). Such a comparison can be seen in Fig.\,\ref{opacity_OPAL}, where on the right Y-axis we plotted the instability parameter $\eta$ for representative models of the primary component. In addition, we also show the observed independent frequencies of KIC\,10661783 with the values of amplitudes on the left Y-axis. The single-evolution model, shown in the left panel, has the parameters: $M=2.1$\mass, $R=2.59$\,R$_{\odot}$, $\log T_{\rm eff}=3.875$ and $\log L/L_{\odot}=1.28$, and the binary-evolution model, shown in the right panel, has the parameters: $M=2.1$\mass, $R=2.618$\,R$_{\odot}$, $\log T_{\rm eff}=3.886$ and $\log L/L_{\odot}=1.335$. As can be seen, the binary-evolution model shows instability in both the low and intermediate frequency ranges, whereas the single-evolution model is pulsationally stable in the whole range of frequencies. Thus Fig.\,\ref{opacity_OPAL} shows a direct effect of the binary evolution on the pulsational properties of $\delta$ Sct variables. The conclusion is that accreted matter has a huge impact on a star, not only in terms of mass gain. Therefore, when computing the pulsations of $\delta$ Sct stars in binary systems such as KIC\,10661783, one cannot neglect a mass exchange in the past. Clearly, the incoming mass changes the physical conditions and composition of the outer layers. This fact is best demonstrated by the abundance profiles of H and He plotted in Fig.\,\ref{abundance_profiles}. Accumulation of helium in the outer layers of the primary has a great influence on the excitation of pulsations. \begin{figure*} \centering \begin{tabular}{cc} \multicolumn{2}{c}{\textbf{OPAL opacities}} \\ \textbf{Single star evolution} & \textbf{Binary evolution} \\ \includegraphics[width=\columnwidth,clip]{img/single_OPAL.pdf} & \includegraphics[width=\columnwidth,clip]{img/binary_OPAL.pdf} \\ \end{tabular} \caption{A comparison of the instability parameter $\eta$ between the single (the left panel) and binary (the right panel) star equilibrium models, for modes with $\ell \le 4$. The observed frequency peaks are marked as vertical lines with their amplitudes on the left Y-axis. We chose representative models for the single and binary cases which fit the measured radius, mass, $T_{\rm eff}$ and $\log L/L_{\sun}$ within 1$\sigma$ errors. 
Both models have very similar parameters (see text) and were calculated for $Z=0.025$, $X_0=0.73$, $f_{\rm ov}=0.02$, $\alpha_{\rm MLT}=1.3$ using OPAL opacity tables. The dashed, horizontal line marks $\eta=0.0$, demarcating the excited ($\eta>0.0$) and suppressed ($\eta<0.0$) modes.} \label{opacity_OPAL} \end{figure*} In order to obtain the pulsational instability covering the whole observed frequency range, we followed the procedure of opacity modifications of \cite{Daszynska2017}. Since our binary model predicts low frequencies to be unstable, our main aim was to excite the modes with frequencies higher than 20\cpd. Because in $\delta$ Sct models, p modes are excited by the opacity mechanism acting in the partial HeII ionisation zone, we modified the OPAL tables by increasing the mean opacity near $\log T=4.69$\,K by 100\%. Unfortunately, such modification decreased the instability of low frequency modes. To prevent damping of gravity modes we increased the mean opacity at $\log T=5.06$\,K as suggested by \cite{Balona2015}. The increase by 300\% allowed us to obtain instabilities in both low and intermediate frequency ranges. We show that results in the top panels of Fig.\,\ref{opacity_mod} for both, single and binary evolution models, in a similar way as in Fig.\,\ref{opacity_OPAL}. Moreover, the middle panels show a comparison of the standard and modified mean opacity profiles, $\kappa(T)$, as a function of temperature. The logarithmic temperature derivatives, $\kappa_T = \partial \log \kappa (T) / \partial \log T$, are plotted with red lines. We chose models matching, within 3$\sigma$, the observed values of the radius, effective temperature and luminosity. The single-evolution model has the parameters: $M=2.1$\mass, $R=2.582$\,R$_{\odot}$, $\log T_{\rm eff}=3.875$ and $\log L/L_{\odot}=1.278$ while the binary-evolution model has the parameters: $M=2.09$\mass, $R=2.545$\,R$_{\odot}$, $\log T_{\rm eff}=3.890$ and $\log L/L_{\odot}=1.325$. Although we do not take into account the effects of rotation, at least rotational splitting of pulsational modes has to be included. To this end, each theoretical frequency was split according to the formula $$ f = f_0 + m f_{\rm rot}(1-C_{nl}),$$ where $f_0$ is the mode frequency in the non-rotating star, $m$ is the azimuthal order, $f_{\rm rot}$ is the rotational frequency of a star and $C_{nl}$ is a Ledoux constant dependent on a stellar structure and on the mode. The primary of KIC\,10661783 rotates with the frequency of about 0.6\cpd, which is about 20\% of the critical value of the rotational frequency. The ranges of the rotationally split unstable frequencies are marked in the bottom panels of Fig.\,\ref{opacity_mod}. As one can see, by increasing the mean opacity near $\log T\approx4.69$ and $\log T\approx5.06$ and taking into account rotational splitting of pulsational modes, we are able to cover almost whole region of the observed frequencies, in particular those with the highest amplitudes. Unfortunately, the region of frequencies higher than 35\cpd remains stable no matter what modifications to the mean opacity profile we apply. We interpret this as a result of the convection treatment in the pulsational code we use, that relies on the convective flux freezing approximation. Therefore, if the mechanism responsible for those mode excitation is based on the convection-pulsation interaction, as suggested by \cite{Antoci2014} and \cite{Xiong2016}, we cannot excite them in our models. 
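The splitting applied here follows the formula above with $f_{\rm rot}\approx0.6$\cpd. A minimal sketch is given below; the Ledoux constants used in it are generic approximations ($C_{nl}\approx1/\left(\ell(\ell+1)\right)$ for high-order g modes and $C_{nl}\approx0$ for p modes), not the values computed from our models:
\begin{verbatim}
# Sketch of the first-order rotational splitting used in the text:
#   f(m) = f0 + m * f_rot * (1 - C_nl),  m = -l, ..., +l,
# with f_rot ~ 0.6 c/d for the primary.  The default Ledoux constant below is
# the asymptotic high-order g-mode value 1/(l(l+1)); for p modes C_nl ~ 0.
# All numbers are illustrative.
import numpy as np

def split(f0, ell, f_rot=0.6, C_nl=None):
    if C_nl is None:
        C_nl = 1.0 / (ell * (ell + 1))   # asymptotic g-mode approximation
    m = np.arange(-ell, ell + 1)
    return f0 + m * f_rot * (1.0 - C_nl)

print(split(2.0, ell=1))                 # a dipole g mode near 2 c/d
print(split(25.0, ell=2, C_nl=0.0))      # a quadrupole p mode near 25 c/d
\end{verbatim}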
\begin{figure*} \centering \begin{tabular}{cc} \multicolumn{2}{c}{\textbf{Modified OPAL opacities}} \\ \textbf{Single star evolution} & \textbf{Binary evolution} \\ \includegraphics[width=1.0\columnwidth,clip]{img/single_osc00070_kappa_T0_4.69a0_0.50b0_1.00T1_5.06a1_0.02b1_3.00a09.pdf} & \includegraphics[width=1.0\columnwidth,clip]{img/binary_osc01739_kappa_T0_4.69a0_0.50b0_1.00T1_5.06a1_0.02b1_3.00a09.pdf} \\ \includegraphics[width=1.0\columnwidth,clip]{img/subplot_nad_freq_single_osc00070_kappa_T0_4.69a0_0.50b0_1.00T1_5.06a1_0.02b1_3.00a09.pdf} & \includegraphics[width=1.0\columnwidth,clip]{img/subplot_nad_freq_binary_osc01739_kappa_T0_4.69a0_0.50b0_1.00T1_5.06a1_0.02b1_3.00a09.pdf} \\ \end{tabular} \caption{Top panels: the instability parameter $\eta$ on the right Y-axis as a function of frequency for modes with $\ell \le 4$, for the single-evolution (left-hand side) and binary-evolution (right-hand side) models. The left Y-axis gives the amplitudes of the observed frequencies. The representative models for the single and binary cases have very similar parameters (see text) and fit the observed values of $R$, $M$, $T_{\rm eff}$ and $\log L/L_{\sun}$ within their 3$\sigma$ errors. The models were calculated for $Z=0.025$, $X_0=0.73$, $f_{\rm ov}=0.02$, $\alpha_{\rm MLT}=1.3$ using the modified OPAL data with an increase of the opacities by 100\% near $\log T=4.69$ and by 300\% near $\log T=5.06$. The middle panels show the runs of the standard and modified mean opacities (black lines) and their logarithmic temperature derivatives (red lines). In the bottom panels, we mark the ranges of the rotationally split unstable modes.} \label{opacity_mod} \end{figure*} \section{Discussion and conclusions} \label{sec:conclusions} We performed a comprehensive study of the binary system KIC\,10661783, whose main component is a pulsating star of the $\delta$\,Sct type. Firstly, using the whole \textit{Kepler} photometry, we modelled the light curve with the WD code. After subtracting the eclipse light curve, we searched for variability in the residuals by applying Fourier analysis and the standard pre-whitening procedure. This analysis allowed us to identify 590 significant frequencies, i.e., with S/N>4, of which 207 are independent. Most of the 207 peaks occupy the frequency range typical for $\delta$\,Sct pulsators, but there is a large number of low-frequency peaks that, as in the case of other $\delta$\,Scuti stars observed from space, most probably correspond to high-order g-mode pulsations. Besides, we found numerous orbital harmonics that can originate from the subtraction of an imperfect eclipse model or from intrinsic variability with frequencies that are multiples of the orbital frequency. The latter possibility can be associated with tidally excited pulsations. Moreover, amplitude modulation with the orbital phase was found. Such behaviour is known from tidally tilted pulsators and is interpreted as a variation of the pulsational amplitude over the stellar disc. An in-depth study of this phenomenon is beyond the scope of this paper. We computed the binary evolution of the system using the \texttt{MESA} code, which includes mass transfer as well as the evolution of the orbital elements. We found a binary model that reproduces the masses and radii of both components within the 3$\sigma$ errors. That model also reproduces well the positions of the components in the HR diagram; however, the secondary extends slightly beyond the $3\sigma$ error box.
This required assuming slightly non-conservative mass transfer with $\beta=0.05$. The other parameters of the best fit are: $X_0=0.73$, $Z=0.025$, $f_{\rm ov}=0.02$ and $\alpha_{\rm MLT}=1.3$. The internal structure of our model differs drastically from the single-evolution one. In particular, due to the mass-transfer episode, the outer layers of the main component are enormously enriched in helium and depleted of hydrogen. However, we cannot confront this result with observations because there is no determination of the helium abundance in the literature. On the other hand, from our binary-evolution modelling, we found that the abundances of the CNO elements agree with the observational determinations within 2$\sigma$, except for carbon in the primary. Then, for the first time, we examined the impact of binary evolution, in terms of the internal structure changes, on the pulsational properties of $\delta$\,Sct star models. We found that in the case of the single-evolution model, adequate for the main component, all pulsational modes are stable. In contrast, the binary-evolution counterpart exhibits instability both for p modes and for high-order g modes. However, to cover a wider range of the observed frequencies, a modification of the opacity data was necessary. To this end, we increased the mean opacities by 100\% at the temperature $\log T=4.69$ (i.e., around the HeII ionization zone) and by 300\% at the temperature $\log T=5.06$. Including the rotational splitting of unstable modes allowed us to account for instability in the frequency range of about $(0,~35$\,d$^{-1}$). The frequencies higher than 35\,d$^{-1}$ are associated with high-order p modes excited, most probably, by another mechanism, e.g., the turbulent pressure in the H ionization zone, as proposed by \cite{Antoci2014}. Our evolutionary and pulsational modelling clearly showed that systems such as KIC\,10661783 should be modelled as binaries and not as single stars, as is often done. These multi-faceted studies have led to the construction of a complex stellar model that explains the current stage of the binary system and of both components, their evolutionary past, and accounts for pulsational instability in almost the entire range of the observed frequencies. The study of more binaries of this type may allow us to draw more general conclusions on evolution and pulsation, and to indicate the directions of development of the asteroseismology of binary stars. \section*{Acknowledgements} This work was financially supported by the Polish National Science Centre grant 2018/29/B/ST9/02803. Calculations have been carried out using resources provided by the Wroc\l aw Centre for Networking and Supercomputing (http://wcss.pl), grant no. 265. Funding for the Kepler mission is provided by the NASA Science Mission Directorate. Some of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract. \section*{Data Availability} The target pixel files were downloaded from the public data archive at MAST. The light curves will be shared upon reasonable request. The full list of frequencies is available as supplementary material to this paper. We make all inlists needed to recreate our \texttt{MESA-binary} results publicly available at Zenodo. They can be downloaded from \url{https://doi.org/10.5281/zenodo.4618112}. \bibliographystyle{mnras} \interlinepenalty=10000
1,116,691,497,509
arxiv
\section{Introduction and summary} \noindent Noncommutative deformations of gauge field theory provide a controlled theoretical framework beyond locality~\cite{SWDNS}. Of particular importance are noncommutative instantons (see e.g.~\cite{NSNCH, LPTWI} and references therein), which are BPS~configurations in four dimensions solving the Yang-Mills self-duality equations. In the string context, these solutions describe arrangements of noncommutative branes (see e.g.~\cite{DHWB} and references therein). Natural BPS-type equations for gauge fields in more than four dimensions~\cite{CW, DU} appear in superstring compactification as the conditions for the survival of at least one supersymmetry~\cite{GSW88}. Various solutions to these first-order equations were found e.g.~in~\cite{FNP, GFKL}, and their noncommutative generalizations have been considered e.g.~in~\cite{ncgen, Po}. For U$(n)$ gauge theory on a K\"ahler manifold these BPS-type equations specialize to the Hermitian Yang-Mills equations~\cite{DU}. In this Letter we consider the noncommutative space $\mathbb C^n_\theta$ and construct an explicit $u(n)$-valued solution of the Hermitian Yang-Mills equations. In the commutative limit our configuration coincides with the instanton solution on $\mathbb C P^n$ given in local coordinates on a patch $\mathbb C^n$ of $\mathbb C P^n$. We also describe a noncommutative deformation of a local form of the Abelian configuration on $\mathbb C P^n$. \vspace{5mm} \section{Noncommutative space $\mathbb R^{2n}_\theta$} \noindent Classical field theory on the noncommutative deformation~$\mathbb R^{2n}_\theta$ of the space~$\mathbb R^{2n}$ may be realized in a star-product formulation or in an operator formalism~\cite{SWDNS}. While the first approach alters the product of functions on~$\mathbb R^{2n}$ the second one turns these functions~$f$ into operators~$\hat f$ acting on the $n$-harmonic-oscillator Fock space~$\cal H$. The noncommutative space~$\mathbb R^{2n}_\theta$ may then be defined by declaring its coordinate functions $\hat x^\mu$ with $\mu =1,\ldots,2n$ to obey the Heisenberg algebra relations \begin{equation} [ \hat{x}^\mu\,,\,\hat{x}^\nu ] \= \mbox{i}\,\theta^{\mu\nu} \end{equation} with a constant antisymmetric tensor~$\theta^{\mu\nu}$. The coordinates can be chosen in such a way that the matrix~$(\theta^{\mu\nu})$ will be block-diagonal with non-vanishing components \begin{equation}\label{tha} \theta^{{2a-1}\ {2a}} \= -\theta^{{2a}\ {2a-1}} \ =:\ \theta^a\quad\mbox{for}\quad a=1,\ldots ,n \ . \end{equation} We assume that all $\theta^a\ge0$; the general case does not hide additional complications. Both approaches are related by the Moyal-Weyl map~\cite{SWDNS}. {}For the noncommutative version of the complex coordinates \begin{equation}\label{yyb} y^a\=x^{2a-1}+\mbox{i}\,x^{2a} \qquad\textrm{and}\qquad \bar{y}^{\bar{a}}\=x^{2a-1}-\mbox{i}\,x^{2a} \end{equation} we have \begin{equation}\label{yhybh} [\hat{y}^a,\hat{\bar{y}}^{\bar{b}} ] \= 2\delta^{a\bar{b}}\,\theta^a \ =:\ \theta^{a\bar{b}} \ge 0\ . \end{equation} The Fock space~${\cal H}$ is spanned by the basis states \begin{equation} |k_1,k_2,\ldots,k_n\>\=\prod_{a=1}^{n}(2\theta^a k_a!)^{-1/2}(\hat{\bar{y}}^{a})^{k_a} | 0,\ldots ,0\> \quad \textrm{for} \quad k_a=0,1,2,\ldots \ , \end{equation} which are connected by the action of creation and annihilation operators subject to \begin{equation} \Bigl[\,\frac{\hat{y}^{a}}{\sqrt{2\theta^a}}\ ,\ \frac{\hat{\bar{y}}^{\bar{b}}}{\sqrt{2\theta^b}}\, \Bigr] \= \delta^{a\bar{b}} \ . 
\end{equation} For simplicity we consider the case $\theta^a =\theta$ for all~$a$ and drop the hats from now on. \vspace{5mm} \section{Flat $u(n{+}1)$-connection on $\mathbb C^n_\theta$} \noindent We begin by collecting the coordinates into\footnote{ Here, $\+$ means Hermitian conjugation.} \begin{equation} Y\ :=\ \begin{pmatrix}y^1 \\ \vdots \\ y^n \end{pmatrix} \qquad\textrm{and}\qquad Y^\+ \= (\bar{y}^1,\ldots,\bar{y}^n) \ , \end{equation} so that \begin{equation} Y^\+ Y \= \bar{y}^a y^a \= \gamma^2-1-n\theta \end{equation} with the definition \begin{equation} \gamma\ :=\ \sqrt{x^\mu x^\mu +1}\= \sqrt{\bar{y}^a y^a +1 + n\theta} \ . \end{equation} As this is an invertible operator, we may also introduce the $n{\times}n$ matrix \begin{equation} \Lambda\ :=\ {\bf 1}_n\ -\ Y\frac{1}{\gamma\,(\gamma +\sqrt{1{+}n\theta})}Y^\+ \ , \end{equation} which obeys \begin{equation}\label{idntts} \Lambda\,Y\=Y\,\frac{\sqrt{1{+}n\theta}}{\gamma} \quad,\qquad Y^\+\Lambda \=\frac{\sqrt{1{+}n\theta}}{\gamma}\,Y^\+ \qquad\textrm{and}\qquad \Lambda^2\={\bf 1}_n -Y\frac{1}{\gamma^2}Y^\+\ . \end{equation} Since all matrix entries are operators acting in the Fock space~${\cal H}$, their ordering is essential, in constrast to the commutative case. In the present section and the following one, all objects are operator-valued in this sense. Basic for our construction are the $(n{+}1){\times}(n{+}1)$ matrices \begin{equation}\label{VV+} V = \begin{pmatrix}\sqrt{1{+}n\theta}\,\gamma^{-1} & -\gamma^{-1}Y^\+ \\ Y\gamma^{-1} & \Lambda \end{pmatrix} \qquad\textrm{and}\qquad V^\+=\begin{pmatrix}\sqrt{1{+}n\theta}\,\gamma^{-1} & \gamma^{-1}Y^\+ \\ -Y\gamma^{-1} & \Lambda\end{pmatrix} \ . \end{equation} With the help of the identities~(\ref{idntts}), one can show that \begin{equation} V^\+V\={\bf 1}_{n+1}\=VV^\+ \quad,\qquad\textrm{i.e.}\quad V\in\textrm{U}(n{+}1)\ . \end{equation} Using $V$, we build a connection one-form \begin{equation}\label{Acal} {\cal A}\= V^\+\mbox{d} V \ , \end{equation} which defines the zero curvature \begin{equation}\label{Fcal} {\cal F}\=\mbox{d}{\cal A} + {\cal A}\wedge{\cal A} \= \mbox{d} V^\+\wedge\mbox{d} V + V^\+\mbox{d} V\wedge V^\+\mbox{d} V\=0 \end{equation} on the free module $\mathbb C^{n+1}{\otimes}{\cal H}$ over $\mathbb C^n_\theta$. \vspace{5mm} \section{Nontrivial $u(1)$ and $u(n)$ gauge fields} \noindent Let us rewrite ${\cal A}$ of (\ref{Acal}) in the block form \begin{equation}\label{Ablock} {\cal A} \=\begin{pmatrix} a & -\phi^\+ \\ \phi & A \end{pmatrix} \qquad\textrm{with}\qquad a\in u(1) \quad\textrm{and}\quad A\in u(n) \ , \end{equation} Clearly, $\phi$ is an $n{\times}1$ matrix and $\phi^\+$ its hermitian conjugate. {}From the definition (\ref{Acal}) we find that \begin{align} \label{a} a &\= \gamma\, \mbox{d}\gamma^{-1} + \gamma^{-1}Y^\+(\mbox{d} Y)\gamma^{-1}\ ,\\[8pt] \label{A} A &\= Y\gamma^{-1}(\mbox{d}\gamma^{-1}) Y^\+ +Y\gamma^{-2}\mbox{d} Y^\+{+}\Lambda\,\mbox{d}\Lambda\ ,\\[8pt] \phi &\= \Lambda\,(\mbox{d} Y)\, \gamma^{-1} \= \bigl( \mbox{d} Y-Y(\gamma^2+\gamma\,\sqrt{1{+}n\theta})^{-1}\,Y^\+\mbox{d} Y\bigr)\gamma^{-1}\ ,\\[8pt] \label{p+} \phi^{\+} &\= \gamma^{-1}(\mbox{d} Y^\+)\Lambda \= \gamma^{-1} \bigl(\mbox{d} Y^\+ -(\mbox{d} Y^\+)Y(\gamma^2+\gamma\,\sqrt{1{+}n\theta})^{-1}\,Y^\+ \bigr) \ . 
\end{align} Introducing the components $\phi^a$ of the column $\phi = (\phi^a)$, the last two equations read \begin{align} \phi^a &\= \bigl(\mbox{d} y^a - y^a\,(\gamma^2 +\gamma\,\sqrt{1{+}n\theta} )^{-1}\, \delta_{\bar{b} c}\, \bar{y}^{\bar{b}}\, \mbox{d} y^c \bigr) \, \gamma^{-1}\ , \\[8pt] {\bar{\phi}}^{\bar{a}} &\= \gamma^{-1}\bigl(\mbox{d} \bar{y}^{\bar{a}} - \mbox{d}\bar{y}^{\bar{c}}\, \delta_{b\bar{c}}\, y^b\, (\gamma^2 +\gamma\,\sqrt{1{+}n\theta} )^{-1}\,\bar{y}^{\bar{a}}\bigr) \ . \end{align} The (1,0)-forms $\phi^a$ and the (0,1)-forms ${\bar{\phi}}^{\bar{b}}$ constitute a basis for the forms of type (1,0) and~(0,1), respectively. Substituting (\ref{Ablock}) into (\ref{Fcal}), we obtain \begin{align} \label{Fu1} F_{u(1)} &\ :=\ \mbox{d} a + a\wedge a \,\= \,\phi^\+\wedge\phi \= \delta_{\bar{a} b}\,\bar{\phi}^{\bar{a}}\wedge\phi^b \= \bar{\phi}^1\wedge\phi^1 + \ldots + \bar{\phi}^n\wedge\phi^n \ , \\[8pt] \label{Fun} F_{u(n)} &\ :=\ \mbox{d} A {+} A\wedge A \= \phi\wedge\phi^\+ \= (\phi^a\wedge\bar{\phi}^{\bar{b}}) \= \begin{pmatrix} \phi^1\wedge\bar{\phi}^{\bar{1}} & \cdots & \phi^1\wedge\bar{\phi}^{\bar{n}} \\ \vdots & \ddots & \vdots \\ \phi^n\wedge\bar{\phi}^{\bar{1}} & \cdots & \phi^n\wedge\bar{\phi}^{\bar{n}} \end{pmatrix} \end{align} as well as \begin{equation} 0 \= \mbox{d}\phi + \phi\wedge a + A\wedge\phi \qquad\textrm{and}\qquad 0\= \mbox{d}\phi^\+ + a\wedge \phi^\+ + \phi^\+\wedge A \ . \end{equation} {}From (\ref{Fu1}) and (\ref{Fun}) one sees that the gauge fields $F_{u(1)}$ and $F_{u(n)}$ have vanishing (2,0) and (0,2) components, i.e.~they are of type~(1,1). Moreover, (\ref{Fun}) expresses $F_{u(n)}$ in the basis $\{\phi^a\wedge\bar{\phi}^{\bar{b}}\}$ of (1,1)-forms as \begin{equation} F_{u(n)} \= F_{a\bar{b}}\,\phi^a\wedge \bar{\phi}^{\bar{b}} \qquad\Longrightarrow\qquad F_{ab} \= 0 \= F_{\bar{a}\bar{b}} \qquad\textrm{and}\qquad F_{a\bar{b}} \= e_{ab} \= -F_{\bar{b} a} \ , \end{equation} where the basis matrix $e_{ab}$ has a unit entry in the $(ab)$ position and is zero elsewhere. It is apparent that the operator-valued components of the $u(n)$-valued gauge field $F_{u(n)}$ satisfy the Hermitian Yang-Mills equations\footnote{ Their general form for the structure group U$(k)$ reads $F_{ab}{=}0{=}F_{\bar{a}\bar{b}}\ ,\ F_{1\bar{1}}+\ldots +F_{n\bar{n}}{=}\tau{\bf 1}_k$, where $\tau$ is a constant.} \begin{equation} F_{ab} \= 0 \= F_{\bar{a}\bar{b}} \qquad\textrm{and}\qquad F_{1\bar{1}}+\ldots +F_{n\bar{n}} \= {\bf 1}_n\ . \end{equation} In the commutative case these equations are the conditions of stability for a holomorphic vector bundle over $\mathbb C P^n$ with finite characteristic classes~\cite{DU}. In the star-product formulation obtained by the inverse Moyal-Weyl transform, the gauge field~(\ref{Fun}) describes a smooth Moyal deformation of the instanton-type gauge field configuration given in local coordinates on a patch $\mathbb C^n$ of~$\mathbb C P^n$. This is why we call the configuration (\ref{A}) and~(\ref{Fun}) the `noncommutative U$(n)$ instanton on~$\mathbb C P^n$'. Likewise, the Abelian field strength~(\ref{Fu1}) with components $f_{a\bar{b}}:=-\delta_{a\bar{b}}$ satisfies the Hermitian Maxwell equations \begin{equation} f_{ab} \= 0 \= f_{\bar{a}\bar{b}} \qquad\textrm{and}\qquad f_{1\bar{1}}+\ldots +f_{n\bar{n}} \= -n \ , \end{equation} whence the configuration (\ref{a}) and (\ref{Fu1}) is the `noncommutative U$(1)$ instanton on~$\mathbb C P^n$'. 
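Before passing to the commutative limit, we remark that the operator identities underlying this construction are easy to verify numerically. A minimal sketch (not part of the construction itself) for $n{=}1$, with $\hat y=\sqrt{2\theta}\,\hat a$ represented by the truncated annihilation matrix on $N$ oscillator states, checks the relations (\ref{idntts}) and the unitarity $V^\+ V={\bf 1}$; for these particular checks the truncation introduces no additional error, so the residuals are at the level of machine rounding:
\begin{verbatim}
# Numerical sketch (n = 1): verify Lambda*Y = Y*sqrt(1+theta)/gamma,
# Lambda^2 = 1 - Y gamma^{-2} Y^+, and V^+ V = 1 on a truncated
# Fock space with N oscillator states.
import numpy as np

N, theta = 40, 0.3          # truncation and noncommutativity (assumed values)
c = np.sqrt(1.0 + theta)    # sqrt(1 + n*theta) for n = 1

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
y, ybar = np.sqrt(2*theta)*a, np.sqrt(2*theta)*a.conj().T

gamma = np.diag(np.sqrt(np.diag(ybar @ y) + 1.0 + theta))
ginv  = np.linalg.inv(gamma)
Lam   = np.eye(N) - y @ np.linalg.inv(gamma @ (gamma + c*np.eye(N))) @ ybar

print(np.max(np.abs(Lam @ y - y @ (c*ginv))))                       # ~ 1e-16
print(np.max(np.abs(Lam @ Lam - (np.eye(N) - y @ ginv @ ginv @ ybar))))

V = np.block([[c*ginv,  -ginv @ ybar],
              [y @ ginv, Lam        ]])
print(np.max(np.abs(V.conj().T @ V - np.eye(2*N))))                 # ~ 1e-16
\end{verbatim}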
\vspace{5mm} \section{Commutative limit} \noindent In the commutative limit, $\theta\to0$, the gauge potential $A$ defining $F_{u(n)}$ coincides with the instanton-type canonical connection on~$\mathbb C P^n$, which is described as follows~\cite{NR}. Consider the group U$(n{+}1)$, its Grassmannian subset $\mathbb C P^n =\textrm{U}(n{+}1)/\textrm{U}(1){\times}\textrm{U}(n)$ and the fibration \begin{equation}\label{bundle} \begin{CD} \textrm{U}(n{+}1)@>{\textrm{U}(1)\times\textrm{U}(n)}>> \mathbb C P^n \end{CD} \end{equation} with fibres $\textrm{U}(1){\times}\textrm{U}(n)$. For $g\in\textrm{U}(n{+}1)$ the canonical one-form $\Omega = g^\+\mbox{d} g$ on U$(n{+}1)$ takes values in the Lie algebra $u(n{+}1)$ and satisfies the Maurer-Cartan equation \begin{equation}\label{MC} \mbox{d} \Omega\ +\ \Omega\wedge\Omega \=0\ . \end{equation} The matrix $V$ from (\ref{VV+}) defines a local section of the bundle (\ref{bundle}) over a patch $\mathbb C^n\subset\mathbb C P^n$, viz.~the embedding of $\mathbb C P^n$ into U$(n{+}1)$. For such an embedding the one-form $\Omega$ coincides with the flat connection~${\cal A}$ given by~(\ref{Acal}). It follows that (\ref{Fcal}) is the Maurer-Cartan equation~(\ref{MC}) reduced to $\mathbb C^n\subset\mathbb C P^n$, and the block form~(\ref{Ablock}) results from the splitting of ${\cal A}$ into components $\phi$ and $\phi^\+$ tangent\footnote{ They are basis one-forms on $\mathbb C P^n$ taking values in the complexified tangent bundle of $\mathbb C P^n$.} to $\mathbb C P^n$ and into one-forms $a$ and $A$ on $\mathbb C P^n$ with values in the tangent space $u(1){\oplus}u(n)$ to the fibre U$(1){\times}$U$(n)$ of the bundle~(\ref{bundle}). By construction, the one-form $A$ from~(\ref{A}) is the canonical connection on the Stiefel bundle \begin{equation}\label{st1} \begin{CD} \textrm{U}(n{+}1)/\textrm{U}(1) @>{\textrm{U}(n)}>> \mathbb C P^n \end{CD} \end{equation} given by~\cite{NR} $$ A\={\cal S}^\+ \mbox{d} {\cal S}\ , $$ where $\cal S$ is an $(n{+}1){\times}n$ matrix-valued section of the bundle (\ref{st1}) such that ${\cal S}^\+{\cal S}={\bf 1}_n$. In our case it is chosen as \begin{equation} {\cal S}\=\begin{pmatrix} -\gamma^{-1}Y^\+ \\ \Lambda \end{pmatrix}\ , \end{equation} i.e.~as the $(n{+}1){\times}n$-part of the matrix $V$ from (\ref{VV+}). Similarly, the one-form~$a$ from~(\ref{a}) in the commutative limit coincides with the canonical Abelian connection \begin{equation} a\={s}^\+\mbox{d} {s} \end{equation} on another Stiefel bundle: \begin{equation}\label{st2} \begin{CD} S^{2n+1}\=\textrm{U}(n{+}1)/\textrm{U}(n) @>{\textrm{U}(1)}>> \mathbb C P^n\ . \end{CD} \end{equation} In our case, $s=\left( \begin{smallmatrix} 1 \\ Y \end{smallmatrix} \right)\gamma^{-1}$ is the $(n{+}1){\times}1$ matrix complementing $\cal S$ inside the matrix $V$. Moreover, the Abelian gauge field $F_{u(1)}= -\delta_{a\bar{b}}\ \phi^a\wedge \phi^{\bar{b}}$ is proportional to the two-form \begin{equation} \omega \=\sfrac{\mbox{i}}{2}\, \delta_{a\bar{b}}\ \phi^a\wedge \phi^{\bar{b}}\ , \end{equation} which is the canonical K\"ahler two-form on $\mathbb C P^n$. \vspace{10mm} \noindent {\bf Acknowledgements} \medskip \noindent T.A.I.~acknowledges the Heisenberg-Landau program and RFBR (grant 06-01-00627-a) for partial support and the Institut f\"ur Theoretische Physik der Universit\"at Hannover for its hospitality. The work of O.L. was partially supported by the Deutsche Forschungsgemeinschaft (DFG). \bigskip
1,116,691,497,510
arxiv
\section{Introduction} Recently there has been a burst of activity dealing with quadratic gravitation. For example, the curvature-squared terms added to the usual Einstein action with a cosmological constant have played a role in two recent investigations of four-dimensional gravity: in critical gravity [1], and in the pure Weyl-squared action considered by Maldacena [2]. Critical gravity provides a consistent toy model for quantum gravity and a useful simplified arena for studying some aspects of a potentially renormalisable theory of massless spin-2 fields in four dimensions. The conformal gravity theory has been advanced as a candidate alternative to standard Einstein gravity. As a quantum theory the conformal theory is both renormalizable and unitary, with unitarity being obtained because the theory is a PT-symmetric rather than a Hermitian theory. Because the variation of the conformal action leads to fourth-order equations of motion, it had long been thought that the theory would not be unitary. However, as has been shown by Bender and Mannheim [3], one can find a realization of the theory that is unitary. Consequently, conformal gravity is to be regarded as a bona fide quantum gravitational theory. The conformal gravity theory can quite naturally handle some of the most troublesome problems in physics: the quantum gravity problem, the vacuum energy problem, and the dark matter problem [4]. As a modified gravity theory, quadratic gravitation has been used in cosmology [5]. In order to explain the observed acceleration of the cosmological expansion, some authors introduce torsion terms in quadratic gravitation [6]. The quantum aspects of torsion theory and the possibility for space-time torsion to exist and to be detected have been discussed in [7]. The astronomical observations show that our universe is probably an asymptotically de Sitter (dS) one with a positive cosmological constant $\Lambda$ [8]. If a gravitational theory of Yang-Mills type is constructed starting from the de Sitter gauge invariance principle, its gravitational Lagrangian naturally turns out to be that of quadratic gravitation with torsion, as will be shown in this paper. Therefore, an investigation of quadratic gravitation with torsion and of its cosmological solutions expressed by de Sitter critical points will be carried out. The field equations will be derived. These equations are quite different from the equations obtained from Riemannian-geometry-based quadratic Lagrangians when varied with respect to the metric. Applying them to the spatially flat FRW cosmology, some de Sitter critical-point solutions will be obtained, and their stability will be analyzed. The paper is organized as follows. In section II, starting from a Clifford algebra $C\left( 3,1\right) $, the gravitational Lagrangian of a de Sitter gauge theory is constructed and the Lagrange equations of the gravitational fields are derived. Applying them to a spatially flat universe, the cosmological equations are obtained in section III. The vacuum solutions of these equations in two specific cases are presented in section IV. These two models correspond to the conformal cosmology of Mannheim [4,9] and the zero-energy gravity of Deser and Tekin [10], respectively. In contrast to them, the tetrad and the spin connection are taken to be the basic field variables, and the torsion plays an important role here. In these specific models the cosmological equations are written as dynamical systems, and their real de Sitter critical points are obtained.
Among these points, the stable ones which turn out to be exact constant solutions and describe the asymptotic behavior of the universe are found. In section V some concluding remarks are given. In Appendixes the calculations for stability analysis are presented. \section{Lagrangian and field equations} We begin with a brief introduction of a de Sitter gauge theory. In a gravitational gauge theory coupled to matter sources involving Dirac fields it is convenient to take Dirac matrices $\gamma _I$ and their commutators \sigma _{IJ}=\frac 12\left[ \gamma _I,\gamma _J\right] $ as the basis of the gauge algebra. In this case we are led to a de Sitter gauge theory. Let \left\{ \gamma _I\right\} \;\left( I=0,1,2,3\right) $ be a basis of an inner product space with signature $\left( -,+,+,+\right) $. A Clifford algebra C\left( 3,1\right) $ can be constructed by introducing the condition \begin{equation} \gamma _I\gamma _J+\gamma _J\gamma _I=2\eta _{IJ}I. \end{equation} with $\eta _{IJ}=$diag$(-1;1;1;1)$. There is a 10-dimensional subspace of C\left( 3,1\right) $ which is a Lie algebra with basis $\gamma _5\gamma _I$ and $\sigma _{IJ}=\frac 12\left[ \gamma _I,\gamma _J\right] $. This is the Lie algebra of a de Sitter group. We can introduce a connection [11,12] \begin{equation} \omega =\Gamma +\frac 1l\gamma _5{\bf e}, \end{equation} defined by \begin{equation} {\bf e}=e{}^I{}_\mu \gamma _I\otimes dx^\mu , \end{equation} and \[ \Gamma =\frac 14\Gamma ^{IJ}{}_\mu \sigma _{IJ}\otimes dx^\mu , \] where $l$ denotes a constant with the dimension of length. The curvature of \omega $ is \begin{equation} \Omega =d\omega +\frac 12\left[ \omega ,\omega \right] ={\bf R}+\frac 1 \gamma _5{\bf T}-\frac 1{l^2}{\bf V}, \end{equation} where \begin{eqnarray} {\bf R} &=&d\Gamma +\frac 12\left[ \Gamma ,\Gamma \right] , \nonumber \\ {\bf T} &=&d{\bf e}+\left[ \Gamma ,{\bf e}\right] , \nonumber \\ {\bf V} &=&\frac 12\left[ {\bf e},{\bf e}\right] . \end{eqnarray} The Lorentz curvature ${\bf R}$, the torsion ${\bf T}$, and the cosmological term ${\bf V}$ are given by, respectively, \begin{eqnarray} {\bf R} &=&\frac 18R^{IJ}{}_{\mu \nu }\sigma _{IJ}\otimes dx^\mu \wedge dx^\nu , \nonumber \\ {\bf T} &=&\frac 12T^I{}_{\mu \nu }\sigma _{IJ}\otimes dx^\mu \wedge dx^\nu , \nonumber \\ {\bf V} &=&e{}^I{}_\mu e{}^J{}_\nu \sigma _{IJ}\otimes dx^\mu \wedge dx^\nu , \end{eqnarray} with \begin{equation} R^{IJ}{}_{\mu \nu }=\partial _\mu \Gamma {}^{IJ}{}_\nu -\partial _\nu \Gamma {}^{IJ}{}_\mu +\eta _{KL}\Gamma {}^{IK}{}_\mu \Gamma {}^{LJ}{}_\nu -\eta _{KL}\Gamma {}^{IK}{}_\nu \Gamma {}^{LJ}{}_\mu , \end{equation} and \begin{equation} T{}^I{}_{\mu \nu }=\partial _\mu e{}^I{}_\nu -\partial _\nu e{}^I{}_\mu +\Gamma {}^I{}_{J\mu }e{}^J{}_\nu -\Gamma {}^I{}_{J\nu }e{}^J{}_\mu . \end{equation} Based on the local gauge invariance principle the gravitational Lagrangian can be made up of a quadratic term of the curvature $\Omega $ and its Hodge dual $*\Omega $: \begin{equation} {\cal L}=-\frac 18Tr\left( *\Omega \wedge \Omega \right) =\left( \frac 1{32 R_{\mu \nu }{}^{\rho \sigma }R^{\mu \nu }{}_{\rho \sigma }-\frac 1 l^{-2}T{}^\mu {}_{\nu \rho }T{}_\mu {}^{\nu \rho }+\frac 1 l^{-2}R-12l^{-4}\right) e, \end{equation} where \begin{equation} e=\det \left| e^I{}_\mu \right| . 
\end{equation} {\em \ }In four dimensional spacetime the Gauss-Bonnet term $\sqrt{-g}\left[ R_{\mu \nu \lambda \tau }R^{\mu \nu \lambda \tau }-4R_{\mu \nu }R^{\mu \nu }+R^2\right] $ is purely topological and then the Lagrangian can be taken as \begin{equation} {\cal L}=-\frac 18Tr\left( *\Omega \wedge \Omega \right) =\left( \frac 1 R_{\mu \nu }R^{\mu \nu }-\frac 1{32}R^2-\frac 14l^{-2}T{}^\mu {}_{\nu \rho }T{}_\mu {}^{\nu \rho }+\frac 12l^{-2}R-12l^{-4}\right) e. \end{equation} For the sake of a neater argument we extend the Lagrangian to including the coefficients \begin{equation} \beta =\frac 18l^2,\alpha =-\frac 1{32}l^2,\gamma =-\frac 14, \end{equation} and rewrite (10) as \begin{equation} {\cal L}=\left( \beta l^{-2}R_{\mu \nu }R^{\mu \nu }+\alpha l^{-2}R^2+\gamma l^{-2}T{}^\mu {}_{\nu \rho }T{}_\mu {}^{\nu \rho }+\frac 1 l^{-2}R-12l^{-4}\right) e=Le, \end{equation} with \begin{equation} L=\beta l^{-2}R_{\mu \nu }R^{\mu \nu }+\alpha l^{-2}R^2+\gamma l^{-2}T{}^\mu {}_{\nu \rho }T{}_\mu {}^{\nu \rho }+\frac 12l^{-2}R-12l^{-4}. \end{equation} ${\cal L}$ is just the Lagrangian of quadratic-curvature gravities [10] with torsion. The variational principle yields the field equations for the tetrad{\em \ } e_I{}^\mu $ and the spin connection $\Gamma {}^{IJ}{}_\mu $ \begin{eqnarray} \frac{\delta {\cal L}}{\delta e_I{}^\mu } &=&eE^I{}_\mu , \nonumber \\ \frac{\delta {\cal L}}{\delta \Gamma ^{IJ}{}_\mu } &=&es_{IJ}{}^\mu , \end{eqnarray} where $E^I{}_\mu $ and $s_{IJ}{}^\mu $ are energy- momentum and spin tensors of the matter source, respectively, the variational derivatives are given by \begin{eqnarray} &&\frac{\delta {\cal L}}{\delta e_I{}^\mu } \nonumber \\ &=&\{\beta l^{-2}\left( 2e{}^{I\sigma }R{}^\rho {}_\sigma R{}_{\rho \mu }+2e^J{}_\rho R{}^{\rho \sigma }{}R{}^I{}_{J\mu \sigma }-e{}^I{}_\mu R_{\rho \sigma }R^{\rho \sigma }\right) +\alpha l^{-2}\left( 4e^{I\nu }R{}_{\nu \mu }-e{}^I{}_\mu R\right) R \nonumber \\ &&+\gamma l^{-2}\left( 4e{}^{I\nu }T{}^\lambda {}_{\nu \tau }T{}{}_{\lambda \mu }{}^\tau -4\partial _\nu \left( e{}^{I\lambda }T{}_{\mu \lambda }{}^\nu \right) -e{}^I{}_\mu T{}^\lambda {}_{\rho \sigma }T{}_\lambda {}^{\rho \sigma }+\left( 4e{}^{I\lambda }T{}_{\mu \lambda }{}^\nu \right) e{}^K{}_\tau \partial _\nu e_K{}^\tau \right) \nonumber \\ &&+l^{-2}\left( e^{I\nu }R{}_{\nu \mu }-\frac 12e{}^I{}_\mu R\right) +12l^{-4}e{}^I{}_\mu \}e, \end{eqnarray} \begin{eqnarray} \frac{\delta {\cal L}}{\delta \Gamma ^{IJ}{}_\mu } &=&\{2\beta l^{-2}e_J{}^\lambda [e_I{}^\mu \partial _\nu R_\lambda {}^\nu -e_I{}^\nu \partial _\nu R_\lambda {}^\mu +\left( e_I{}^\nu R_\lambda {}^\mu -e_I{}^\mu R_\lambda {}^\nu \right) e{}^K{}_\tau \partial _\nu e_K{}^\tau \nonumber \\ &&+e_I{}^\tau \Gamma ^\nu {}_{\nu \tau }R_\lambda {}^\mu +e_I{}^\nu \Gamma ^\tau {}_{\nu \lambda }R_\tau {}^\mu -e_I{}^\mu \Gamma ^\tau {}_{\nu \lambda }R_\tau {}^\nu -e_I{}^\tau \Gamma ^\mu {}_{\nu \tau }R_\lambda {}^\nu ] \nonumber \\ &&+2\alpha l^{-2}[\left( e_I{}^\nu e_J{}^\tau -e_J{}^\nu e_I{}^\tau \right) \Gamma ^\mu {}_{\nu \tau }R+\left( e_J{}^\mu e_I{}^\nu -e_I{}^\mu e_J{}^\nu \right) \left( \Gamma ^\lambda {}_{\lambda \nu }R-\partial _\nu R\right) \nonumber \\ &&+\left( e_I{}^\nu e_J{}^\mu -e_I{}^\mu e_J{}^\nu \right) Re{}^K{}_\tau \partial _\nu e_K{}^\tau ]+4\gamma l^{-2}e_{I\nu }e{}_J{}^\tau T{}^{\nu \mu }{}_\tau \nonumber \\ &&+\frac 12l^{-2}[\left( e_I{}^\nu e_J{}^\tau -e_J{}^\nu e_I{}^\tau \right) \Gamma ^\mu {}_{\nu \tau }+\left( e_I{}^\nu e_J{}^\mu -e_I{}^\mu e_J{}^\nu \right) \left( \Gamma ^\lambda {}_{\lambda \nu 
}+e{}^K{}_\tau \partial _\nu e_K{}^\tau \right) ]\}e. \end{eqnarray} That may be, the two main field equations are rather complicated. They really look nothing like the familiar, well-analyzed equations of GR. To help understand the significance of these new relations, and to use our previous experience, we will do a translation of (16,17) into a certain effective Riemannian form--transcribing from quantities expressed in terms of the tetrad $e_I{}^\mu $ and spin connection $\Gamma {}^{IJ}{}_\mu $ into the ones expressed in terms of the metric $g_{\mu \nu }$ and torsion T^\lambda {}_{\mu \nu }$ (or contortion $K^\lambda {}_{\mu \nu }$). As is well-known, the affine connection $\Gamma ^\lambda {}_{\mu \nu }$ can be represented in the form \begin{eqnarray} \Gamma ^\lambda {}_{\mu \nu } &=&e_I{}^\lambda \partial _\mu e^I{}_\nu +e_J{}^\lambda e^I{}_\nu \Gamma {}^J{}_{I\mu } \nonumber \\ &=&\left\{ _\mu {}^\lambda {}_\nu \right\} +K^\lambda {}_{\mu \nu }, \end{eqnarray} where $\left\{ _\mu {}^\lambda {}_\nu \right\} $, $K^\lambda {}_{\mu \nu }$ are the Christoffel symbol and the contortion, separately, with \begin{eqnarray} K^\lambda {}_{\mu \nu } &=&-\frac 12\left( T^\lambda {}_{\mu \nu }+T_{\mu \nu }{}^\lambda +T_{\nu \mu }{}^\lambda \right) , \nonumber \\ T^\lambda {}_{\mu \nu } &=&e_I{}^\rho T^I{}_{\mu \nu }=\Gamma ^\lambda {}_{\mu \nu }-\Gamma ^\lambda {}_{\nu \mu }. \end{eqnarray} Accordingly the curvature can be represented as \begin{eqnarray} R^\rho {}_{\sigma \mu \nu } &=&e_I{}^\rho e^J{}_\sigma R^I{}_{J\mu \nu }=\partial _\mu \Gamma ^\rho {}_{\sigma \nu }-\partial _\nu \Gamma ^\rho {}_{\sigma \mu }+\Gamma ^\rho {}_{\lambda \mu }\Gamma ^\lambda {}_{\sigma \nu }-\Gamma ^\rho {}_{\lambda \nu }\Gamma ^\lambda {}_{\sigma \mu }, \nonumber \\ &=&R_{\left\{ {}\right\} }^\rho {}_{\sigma \mu \nu }+\partial _\mu K^\rho {}_{\sigma \nu }-\partial _\nu K^\rho {}_{\sigma \mu }+K^\rho {}_{\lambda \mu }K^\lambda {}_{\sigma \nu }-K^\rho {}_{\lambda \nu }K^\lambda {}_{\sigma \mu } \nonumber \\ &&+\left\{ _\lambda {}^\rho {}_\mu \right\} K^\lambda {}_{\sigma \nu }-\left\{ _\lambda {}^\rho {}_\nu \right\} K^\lambda {}_{\sigma \mu }+\left\{ _\sigma {}^\lambda {}_\nu \right\} K^\rho {}_{\lambda \mu }-\left\{ _\sigma {}^\lambda {}_\mu \right\} K^\rho {}_{\lambda \nu }, \end{eqnarray} where \[ R_{\left\{ {}\right\} }^\rho {}_{\sigma \mu \nu }=\partial _\mu \left\{ _\sigma {}^\rho {}_\nu \right\} -\partial _\nu \left\{ _\sigma {}^\rho {}_\mu \right\} +\left\{ _\lambda {}^\rho {}_\mu \right\} \left\{ _\sigma {}^\lambda {}_\nu \right\} -\left\{ _\lambda {}^\rho {}_\nu \right\} \left\{ _\sigma {}^\lambda {}_\mu \right\} , \] is the curvature of the Christoffel symbol. \section{Cosmological equations} For the space flat Friedmann-Robertson-Walker metric \begin{equation} g_{\mu \nu }=\text{diag}\left( -1,a\left( t\right) ^2,a\left( t\right) ^2,a\left( t\right) ^2\right) , \end{equation} we have \begin{eqnarray} \left\{ _0{}^0{}_0\right\} &=&0,\left\{ _0{}^0{}_i\right\} =\left\{ _i{}^0{}_0\right\} =0,\left\{ _i{}^0{}_j\right\} =a\stackrel{\cdot }{a \delta _{ij}, \nonumber \\ \left\{ _0{}^i{}_0\right\} &=&0,\left\{ _j{}^i{}_0\right\} =\left\{ _0{}^i{}_j\right\} =\frac{\stackrel{\cdot }{a}}a\delta _j^i,\left\{ _j{}^i{}_k\right\} =0,i,j,k,...=1,2,3. 
\end{eqnarray} The non-vanishing torsion components with holonomic indices are given by two functions $h$ and $f$ [13]: \begin{eqnarray} T_{110} &=&T_{220}=T_{330}=a^2h, \nonumber \\ T_{123} &=&T_{231}=T_{312}=a^3f, \end{eqnarray} and then the contortion components are \begin{eqnarray} K^1{}_{10} &=&K^2{}_{20}=K^3{}_{30}=0, \nonumber \\ K^1{}_{01} &=&K^2{}_{02}=K^3{}_{03}=h, \nonumber \\ K^0{}_{11} &=&K^0{}_{22}=K^0{}_{33}=a^2h, \nonumber \\ K^1{}_{23} &=&K^2{}_{31}=K^3{}_{12}=-\frac 12af, \nonumber \\ K^1{}_{32} &=&K^2{}_{13}=K^3{}_{21}=\frac 12af. \end{eqnarray} Among the torsion components, only the pseudotrace axial ingredient given by $f$ couples to spinors in a minimal way. The scalar mode $h$ of torsion could be considered as a ``phantom'' field, at least in the matter-dominated epoch, since it does not interact directly with matter; it interacts only indirectly via gravitation. The non-vanishing components of the curvature $R^\rho {}_{\sigma \mu \nu }$ of the full connection $\Gamma ^\lambda {}_{\mu \nu }$ are \begin{eqnarray} R^0{}_{101} &=&R^0{}_{202}=R^0{}_{303}=a^2\left( \stackrel{\cdot }{H}+H^2+Hh+\stackrel{\cdot }{h}\right) , \nonumber \\ R^0{}_{123} &=&-R^0{}_{213}=R^0{}_{312}=a^3f\left( H+h\right) , \nonumber \\ R^1{}_{203} &=&-R^1{}_{302}=R^2{}_{301}=-\frac 12a\left( Hf+\stackrel{\cdot }{f}\right) , \nonumber \\ R^1{}_{212} &=&R^1{}_{313}=R^2{}_{323}=a^2\left( H^2+2Hh+h^2-\frac 14f^2\right) , \end{eqnarray} and the corresponding Ricci components and scalar curvature read \begin{eqnarray} R{}_{00} &=&-3\stackrel{\cdot }{H}-3\stackrel{\cdot }{h}-3H^2-3Hh, \nonumber \\ R{}_{11} &=&a^2\left( \stackrel{\cdot }{H}+3H^2+5Hh+\stackrel{\cdot }{h}+2h^2-\frac 12f^2\right) , \end{eqnarray} \begin{equation} R=6\stackrel{\cdot }{H}+12H^2+18Hh+6\stackrel{\cdot }{h}+6h^2-\frac 32f^2, \end{equation} where $H=\stackrel{\cdot }{a}\left( t\right) /a\left( t\right) $ is the Hubble parameter.
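The algebra behind these expressions is lengthy but mechanical and can be cross-checked symbolically. The following minimal sketch (not part of the derivation) rebuilds $\Gamma ^\lambda {}_{\mu \nu }=\left\{ _\mu {}^\lambda {}_\nu \right\} +K^\lambda {}_{\mu \nu }$ from the FRW metric and the contortion components listed above, using the Python library \texttt{sympy}, and verifies the scalar curvature quoted in the last equation:
\begin{verbatim}
# Symbolic cross-check (sketch): curvature of the FRW connection with
# torsion, Gamma = Christoffel + contortion, and its Ricci scalar.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a, h, f = (sp.Function(s)(t) for s in ('a', 'h', 'f'))
X = (t, x, y, z)
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

# Christoffel symbols of the metric
Chr = [[[sum(ginv[l, r]*(sp.diff(g[r, n], X[m]) + sp.diff(g[r, m], X[n])
             - sp.diff(g[m, n], X[r]))/2 for r in range(4))
         for n in range(4)] for m in range(4)] for l in range(4)]

# Contortion K^lam_{mu nu} as listed above
K = [[[sp.Integer(0) for _ in range(4)] for _ in range(4)] for _ in range(4)]
for i in (1, 2, 3):
    K[i][0][i] = h            # K^i_{0i} = h
    K[0][i][i] = a**2*h       # K^0_{ii} = a^2 h
K[1][2][3] = K[2][3][1] = K[3][1][2] = -a*f/2
K[1][3][2] = K[2][1][3] = K[3][2][1] = a*f/2

Gam = [[[Chr[l][m][n] + K[l][m][n] for n in range(4)]
        for m in range(4)] for l in range(4)]

def Riem(r, s, m, n):
    # R^r_{s m n} of the full (torsionful) connection
    val = sp.diff(Gam[r][s][n], X[m]) - sp.diff(Gam[r][s][m], X[n])
    val += sum(Gam[r][l][m]*Gam[l][s][n] - Gam[r][l][n]*Gam[l][s][m]
               for l in range(4))
    return val

Ric = lambda s, n: sp.simplify(sum(Riem(r, s, r, n) for r in range(4)))
Rscal = sp.simplify(sum(ginv[s, s]*Ric(s, s) for s in range(4)))

H = sp.diff(a, t)/a
expected = (6*sp.diff(H, t) + 12*H**2 + 18*H*h + 6*sp.diff(h, t)
            + 6*h**2 - sp.Rational(3, 2)*f**2)
print(sp.simplify(Rscal - expected))   # -> 0
\end{verbatim}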
Using these results and (16---20) we can compute \begin{eqnarray} e_{I0}\frac{\delta {\cal L}}{\delta e_I{}^0} &=&l^{-2}\{\left( \beta +3\alpha \right) [-12\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) ^2-24\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) H\left( H+h\right) \nonumber \\ &&+12h\left( h+2H\right) \left( h+H\right) ^2-6\left( h+H\right) ^2f^2+\allowbreak \frac 34f^4] \nonumber \\ &&+\gamma \left( 18h^2+6f^2\right) +3H^2+6Hh+3h^2-\frac 34f^2-12l^{-2}\}e, \end{eqnarray} \begin{eqnarray} e_{I1}\frac{\delta {\cal L}}{\delta e_I{}^1} &=&-l^{-2}\allowbreak a^2\{\left( \beta +3\alpha \right) [-4\left( \stackrel{\cdot }{H}+\stackrel \cdot }{h}\right) ^2-8\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h \right) \left( H^2+Hh\right) \nonumber \\ &&+\allowbreak 4h\left( h+2H\right) \left( h+H\right) ^2-2\left( h+H\right) ^2f^2+\frac 14f^4] \nonumber \\ &&-2\gamma \left( 2\stackrel{\cdot }{h}+8Hh+h^2{}+f^2\right) +2\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) +3H^2 \nonumber \\ &&+4Hh+h^2-\frac 14f^2-12l^{-2}\}e, \end{eqnarray} \begin{eqnarray} \frac{\delta {\cal L}}{\delta \Gamma ^{\ 01}{}_1} &=&-2a^{-1}l^{-2}\{\left( \beta +6\alpha \right) \left( \stackrel{\cdot \cdot }{H}+\stackrel{\cdot \cdot }{h}\right) +3\left( \beta +4\alpha \right) \left( hH^2+2H\stackrel \cdot }{H}+2h\stackrel{\cdot }{H}\right) \nonumber \\ &&+\left( \allowbreak 5\beta +18\alpha \right) \left( H\stackrel{\cdot }{h}+ \stackrel{\cdot }{h}+h^2H\right) +\left( \beta +3\alpha \right) \left( 2h^3- \stackrel{\cdot }{f}-\frac 12hf^2\right) +\frac 14h\}e, \end{eqnarray} \begin{eqnarray} &&\frac{\delta {\cal L}}{\delta \Gamma ^{12}{}_3}=a^{-1}l^{-2}f\{2\left( \beta +6\alpha \right) \left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h \right) +6\left( \beta +\allowbreak 4\alpha \right) H^2 \nonumber \\ &&+2\left( 5\beta +18\alpha \right) Hh+\left( \beta \ +3\alpha \right) \left( 4h^2-f^2\right) \nonumber \\ &&-4\gamma +\frac 12\}e, \end{eqnarray} Suppose the matter source is a fluid characterized by the density $\rho $ the pressure $p$ and the spin $s_{IJ}{}^\mu $. The system of field equations (15) consists of four independent ones: \begin{eqnarray} e_{I0}\frac{\delta {\cal L}}{\delta e_I{}^0} &=&-e_{I0}\frac{\delta {\cal L _\psi }{\delta e_I{}^0}=\rho , \nonumber \\ e_{I1}\frac{\delta {\cal L}}{\delta e_I{}^1} &=&-e_{I1}\frac{\delta {\cal L _\psi }{\delta e_I{}^1}=g_{11}p, \nonumber \\ \frac{\delta {\cal L}}{\delta \Gamma {}^{01}{}_1} &=&-\frac{\delta {\cal L _\psi }{\delta \Gamma {}^{01}{}_1}=e_1{}^1s_{01}{}^1, \nonumber \\ \frac{\delta {\cal L}}{\delta \Gamma {}^{12}{}_3} &=&-\frac{\delta {\cal L _\psi }{\delta \Gamma {}^{12}{}_3}=e_3{}^3s_{12}{}^3. 
\end{eqnarray} Using (28-31) the Lagrange equations (32) can be written as \begin{eqnarray} &&\left( \beta +3\alpha \right) [-12\left( \stackrel{\cdot }{H}+\stackrel \cdot }{h}\right) ^2-24\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h \right) H\left( H+h\right) \nonumber \\ &&+12h\left( h+2H\right) \left( h+H\right) ^2-6\left( h+H\right) ^2f^2+\allowbreak \frac 34f^4] \nonumber \\ &&+\gamma \left( 18h^2+6f^2\right) +3H^2+6Hh+3h^2-\frac 3 f^2-12l^{-2}-l^2\rho =0, \end{eqnarray} \begin{eqnarray} &&\left( \beta +3\alpha \right) [-4\left( \stackrel{\cdot }{H}+\stackrel \cdot }{h}\right) ^2-8\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h \right) \left( H^2+Hh\right) \nonumber \\ &&+\allowbreak 4h\left( h+2H\right) \left( h+H\right) ^2-2\left( h+H\right) ^2f^2+\frac 14f^4] \nonumber \\ &&+2\gamma \left( 2\stackrel{\cdot }{h}+8Hh+h^2{}+f^2\right) -2\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) -3H^2 \nonumber \\ &&-4Hh-h^2+\frac 14f^2+12l^{-2}+l^2p=0, \end{eqnarray} \begin{eqnarray} &&-2\{\left( \beta +6\alpha \right) \left( \stackrel{\cdot \cdot }{H} \stackrel{\cdot \cdot }{h}\right) +3\left( \beta +4\alpha \right) \left( hH^2+2H\stackrel{\cdot }{H}+2h\stackrel{\cdot }{H}\right) +\left( \allowbreak 5\beta +18\alpha \right) \left( H\stackrel{\cdot }{h}+h\stackrel \cdot }{h}+h^2H\right) \nonumber \\ &&+\left( \beta +3\alpha \right) \left( 2h^3-f\stackrel{\cdot }{f}-\frac 1 hf^2\right) +\frac 14h\}-l^2s_{01}{}^1=0, \end{eqnarray} \begin{eqnarray} &&f\{2\left( \beta +6\alpha \right) \left( \stackrel{\cdot }{H}+\stackrel \cdot }{h}\right) +6\left( \beta +\allowbreak 4\alpha \right) H^2 \nonumber \\ &&+2\left( 5\beta +18\alpha \right) Hh+\left( \beta \ +3\alpha \right) \left( 4h^2-f^2\right) -4\gamma +\frac 12\}-l^2s_{12}{}^3=0. \end{eqnarray} Assuming $s_{\mu \nu }{}^\lambda $ $=0$ (i.e., the source spin current is negligible), the Eq. (36) reads \begin{eqnarray} &&f\{2\left( \beta +6\alpha \right) \left( \stackrel{\cdot }{H}+\stackrel \cdot }{h}\right) +6\left( \beta +\allowbreak 4\alpha \right) H^2 \nonumber \\ &&+2\left( 5\beta +18\alpha \right) Hh+\left( \beta \ +3\alpha \right) \left( 4h^2-f^2\right) -4\gamma +\frac 12\}=0, \end{eqnarray} and gives \begin{equation} f=0, \end{equation} or \begin{eqnarray} &&2\left( \beta +6\alpha \right) \left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) +6\left( \beta +\allowbreak 4\alpha \right) H^2 \nonumber \\ &&+2\left( 5\beta +18\alpha \right) Hh+\left( \beta \ +3\alpha \right) \left( 4h^2-f^2\right) -4\gamma +\frac 12=0. \end{eqnarray} Therefore, we have two cases. 
In the first case, $f=0$, the Eqs (33) and (34) read \begin{eqnarray} &&\left( \beta +3\alpha \right) [-\left( \stackrel{\cdot }{H}+\stackrel \cdot }{h}\right) ^2-2\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h \right) H\left( H+h\right) +h\left( h+2H\right) \left( h+H\right) ^2] \nonumber \\ &&+\frac 32\gamma h^2+\frac 14H^2+\frac 12Hh+\frac 14h^2-l^{-2}-\frac l^2\rho }{12}=0, \end{eqnarray} and \begin{eqnarray} &&\left( \beta +3\alpha \right) [-\left( \stackrel{\cdot }{H}+\stackrel \cdot }{h}\right) ^2-2\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h \right) \left( H^2+Hh\right) +h\left( h+2H\right) \left( h+H\right) ^2] \nonumber \\ &&+\frac 12\gamma \left( 2\stackrel{\cdot }{h}+8Hh+h^2{}\right) -\frac 1 \left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) -\frac 34H^2-Hh \frac{h^2}4+3l^{-2}+\frac{l^2p}4=0, \end{eqnarray} which lead to \begin{equation} \stackrel{\cdot }{H}=\left( 2\gamma -1\right) \stackrel{\cdot }{h -2H^2+\left( 8\gamma -3\right) Hh{}-\left( 2\gamma +1\right) h^2+8l^{-2} \frac{l^2}6\left( \rho +3p\right) , \end{equation} and \begin{eqnarray} &&-4\gamma ^2\stackrel{\cdot }{h}^2+\gamma \left( 4H^2+8\left( 1-4\gamma \right) Hh+4\left( 2\gamma +1\right) h^2-\frac 23l^2\left( \rho -3p\right) \frac{32}{l^2}\right) \stackrel{\cdot }{h} \nonumber \\ &&+16H^3h\gamma +\left( 28\gamma -64\gamma ^2\right) H^2h^2+8\left( 4\gamma +1\right) \gamma h^3H-4\left( \gamma +1\right) \gamma h^4 \nonumber \\ &&+\left( \frac{16}{l^2}+\frac 13l^2\left( \rho -3p\right) \right) H^2+\frac 1{4\left( \beta +3\alpha \right) }H^2+\left( 1-4\gamma \right) \left( \frac 32}{l^2}+\allowbreak \frac 23l^2\left( \rho -3p\right) \right) Hh \nonumber \\ &&+\frac 1{2\left( \beta +3\alpha \right) }Hh+\left( 1+2\gamma \right) \left( \frac{16}{l^2}+\allowbreak \frac 13l^2\left( \rho -3p\right) \right) h^2+\frac{\left( 6\gamma +1\right) }{4\left( \beta +3\alpha \right) }h^2 \nonumber \\ &&-\frac{l^2\rho }{12\left( \beta +3\alpha \right) }-\frac 83\left( \rho -3p\right) -\allowbreak \frac 1{36}l^4\left( \rho -3p\right) ^2-\frac 1 \left( \beta +3\alpha \right) l^2}-\frac{64}{l^4} \nonumber \\ &=&0. \end{eqnarray} So we have the equations (42), (43) and \begin{eqnarray} &&\left( \beta +6\alpha \right) \left( \stackrel{\cdot \cdot }{H}+\stackrel \cdot \cdot }{h}\right) +3\left( \beta +4\alpha \right) \left( hH^2+2 \stackrel{\cdot }{H}+2h\stackrel{\cdot }{H}\right) +\left( \allowbreak 5\beta +18\alpha \right) \left( H\stackrel{\cdot }{h}+h\stackrel{\cdot }{h +h^2H\right) \nonumber \\ &&+2\left( \beta +3\alpha \right) h^3+\frac 14h=0, \end{eqnarray} for the unknown functions $H$ and $h$. $\allowbreak $ In the second case, $f$ satisfies the condition (39). The Eqs. 
(33) and (34) yield \begin{equation} \stackrel{\cdot }{H}=\left( 2\gamma -1\right) \stackrel{\cdot }{h {}-2H^2+\left( 8\gamma -3\right) Hh-\left( 2\gamma +1\right) h^2+\frac 1 f^2+8l^{-2}+\frac{l^2}6\left( \rho +3p\right) , \end{equation} and \begin{eqnarray} &&-4\gamma ^2\stackrel{\cdot }{h}^2+\gamma \left( 4H^2+8\left( 1-4\gamma \right) Hh+4\left( 1+2\gamma \right) h^2-f^2-\frac 23l^2B-\frac{32}{l^2 \right) \stackrel{\cdot }{h} \nonumber \\ &&+16H^3h\gamma +4\gamma \left( 7-16\gamma \right) H^2h^2+8\gamma \left( 1+4\gamma \right) h^3H-4\gamma \left( 1+\gamma \right) h^4-\allowbreak 4\gamma Hhf^2+\gamma h^2f^2 \nonumber \\ &&+\left( \frac{16}{l^2}+\frac 13l^2\left( \rho +3p\right) \right) H^2+\frac 1{4\left( \beta +3\alpha \right) }H^2+\left( 1-\allowbreak 4\gamma \right) \left( \frac{32}{l^2}+\allowbreak \frac 23l^2B\right) Hh+\frac 1{2\left( \beta +3\alpha \right) }Hh \nonumber \\ &&+\frac{16}{l^2}\left( 1+2\gamma \right) h^2+\allowbreak \frac 13\left( 1+2\gamma \right) l^2\left( \rho +3p\right) h^2+\frac{6\gamma +1}{4\left( \beta +3\alpha \right) }h^2-\left( \frac 4{l^2}+\frac 1{12}l^2\left( \rho +3p\right) \right) f^2 \nonumber \\ &&+\frac{8\gamma -1}{16\left( \beta +3\alpha \right) }f^2-\frac 1{\left( \beta +3\alpha \right) }l^{-2}-\frac{l^2}{12\left( \beta +3\alpha \right) \rho -\frac 83\left( \rho +3p\right) -\allowbreak \frac 1{36}l^4\left( \rho +3p\right) ^2-\frac{64}{l^4} \nonumber \\ &=&0. \end{eqnarray} $\allowbreak $ The Eqs. (45) and (39) gives \begin{eqnarray} f^2 &=&8\gamma \frac{\left( \beta +6\alpha \right) }\beta \stackrel{\cdot }{ }{}+4H^2+8\left( 1+4\gamma \frac{\left( \beta +6\alpha \right) }\beta \right) Hh+\left( 4-8\gamma \frac{\left( \beta +6\alpha \right) }\beta \right) h^2 \nonumber \\ &&+\frac{1-8\gamma }\beta +\frac{32\left( \beta +6\alpha \right) }{\beta l^2 +\frac{2\left( \beta +6\alpha \right) }{3\beta }l^2\left( \rho +3p\right) . 
\end{eqnarray} Substituting into (45) and (46) yields \begin{eqnarray} &&-12\gamma ^2\frac{\beta +4\alpha }\beta \stackrel{\cdot }{h}^2 \nonumber \\ &&+\left( -32\frac \gamma \beta \left( \gamma \allowbreak \left( \beta +6\alpha \right) -2\left( \beta +3\alpha \right) \right) Hh+8\frac \gamma \beta \left( \gamma \left( \beta +6\alpha \right) +2\left( \beta +3\alpha \right) \right) h^2\right) \stackrel{\cdot }{h} \nonumber \\ &&+\left( \frac{8\gamma -1}\beta \left( \frac{\gamma \left( \beta +6\alpha \right) }{2\left( \beta +3\alpha \right) }+1\right) -\frac 4{l^2}\allowbreak \frac{17\beta +48\alpha }\beta -\frac{17\beta +48\alpha }{12\beta }l^2\left( \rho +3p\right) \right) \stackrel{\cdot }{h} \nonumber \\ &&-192\gamma ^2\frac{\beta +4\alpha }\beta H^2h^2+96\gamma ^2\frac{\beta +4\alpha }\beta Hh^3-12\gamma ^2\frac{\beta +4\alpha }\beta h^4{}\allowbreak \nonumber \\ &&+\frac{2\gamma }{\beta +3\alpha }H^2+\left[ 2\gamma \frac{24\gamma \beta +96\gamma \alpha -\beta -12\alpha }{\beta \left( \beta +3\alpha \right) -384\gamma \frac{\beta +4\alpha }{\beta l^2}\allowbreak -8\gamma \frac{\beta +4\alpha }\beta l^2\left( \rho +3p\right) \right] Hh \nonumber \\ &&+\left[ -\gamma \frac{-5\beta +12\gamma \beta +48\gamma \alpha -6\alpha } \beta \left( \beta +3\alpha \right) }+96\gamma \frac{\beta +4\alpha }{\beta l^2}+2\gamma \frac{\beta +4\alpha }\beta l^2\left( \rho +3p\right) \right] h^2 \nonumber \\ &&+\frac{1-8\gamma }\beta \frac{8\gamma -1}{16\left( \beta +3\alpha \right) +\frac{48\gamma \beta +192\gamma \alpha -7\beta -24\alpha }{\beta \left( \beta +3\alpha \right) l^2}-192\frac{\beta +4\alpha }{\beta l^4} \nonumber \\ &&-\frac{l^2}{12\left( \beta +3\alpha \right) }\rho +\left( \frac{8\gamma -1 8\frac{\beta +4\alpha }{\beta \left( \beta +3\alpha \right) }l^2-8\frac \beta +4\alpha }\beta \right) \left( \rho +3p\right) -\frac{\beta +4\alpha } 12\beta }l^4\left( \rho +3p\right) ^2 \nonumber \\ &=&0. \end{eqnarray} and \begin{eqnarray} \stackrel{\cdot }{H} &=&\left( \frac{4\gamma \left( \beta +3\alpha \right) \beta -1\right) \stackrel{\cdot }{h}{}-H^2+\left( 16\gamma \frac{\beta +3\alpha }\beta -1\right) Hh-4\gamma \frac{\beta +3\alpha }\beta h^2 \nonumber \\ &&+\frac{1-8\gamma }{4\beta }+\allowbreak 16\frac{\beta +3\alpha }{\beta l^2 +\frac{\beta +3\alpha }{3\beta }l^2\left( \rho -3p\right) . \end{eqnarray} Differentiating (47) gives \begin{eqnarray*} -f\stackrel{\cdot }{f} &=&-4\gamma \frac{\left( \beta +6\alpha \right) }\beta \stackrel{\cdot \cdot }{h}{}-4\left( 1+4\gamma \frac{\left( \beta +6\alpha \right) }\beta \right) h\stackrel{\cdot }{H}-4H\stackrel{\cdot }{H {}-4\left( 1+4\gamma \frac{\left( \beta +6\alpha \right) }\beta \right) \stackrel{\cdot }{h}{}{}-\left( 4-8\gamma \frac{\left( \beta +6\alpha \right) }\beta \right) h\stackrel{\cdot }{h}{} \\ &&-\frac{\left( \beta +6\alpha \right) }{3\beta }l^2\left( \stackrel{\cdot } \rho }+3\stackrel{\cdot }{p}\right) . 
\end{eqnarray*} Substituting it and (47) into (35) and letting $s_{01}{}^1=0$ give \begin{eqnarray} &&\stackrel{\cdot \cdot }{H}+\left( 1-4\gamma \frac{\left( \beta +3\alpha \right) }\beta \right) \stackrel{\cdot \cdot }{h}+\allowbreak 2H\stackrel \cdot }{H}+\allowbreak 2\left( 1-\frac{8\gamma }\beta \left( \beta +3\alpha \right) \right) h\stackrel{\cdot }{H} \nonumber \\ &&+\left( 1-16\gamma \frac{\left( \beta +3\alpha \right) }\beta \right) \stackrel{\cdot }{h}{}{}+\left( 4\gamma \frac{\left( \beta +3\alpha \right) \beta +1\right) h\stackrel{\cdot }{h}{} \nonumber \\ &&+hH^2+\left( 1-16\gamma \frac{\left( \beta +3\alpha \right) }\beta \right) Hh^2+4\gamma \frac{\left( \beta +3\alpha \right) }\beta h^3 \nonumber \\ &&+\left( \allowbreak \allowbreak 4\gamma \frac{\beta +3\alpha }{\beta \left( \beta +6\alpha \right) }\allowbreak -\frac 1{4\beta }\right) h-\frac 16\left( \beta +3\alpha \right) }{\beta l^2}h-\frac{\left( \beta +3\alpha \right) }{3\beta }l^2h\left( \rho +3p\right) \nonumber \\ &&-\frac{\left( \beta +3\alpha \right) }{3\beta }l^2\left( \stackrel{\cdot } \rho }+3\stackrel{\cdot }{p}\right) \nonumber \\ &=&0. \end{eqnarray} So we have the equations (48), (49), and (50) for the unknown functions $H$ and $h$. The unknown function $f$ is given by (47). \section{Two specific models} In order to emphasize the geometrical nature of the effect of acceleration of cosmological expansion we concentrate on vacuum solutions in two specific cases and discuss only the acceleration solutions. \subsection{When $\beta =-3\alpha $} This corresponds to conformal (Weyl) gravity which has been investigated by numerous authors [recent, see 2 and 9] but it must be pointed out that the principle and structure between the theory here and higher-derivative gravity in Mannheim's theory are quite\ different. According to last section, the equation (37) gives two cases. In the first case $f=0$, the functions $H$ and $h$ now satisfy the equations (40), (41) and (44), i.e., \begin{equation} \left( 6\gamma +1\right) h^2+H^2+2Hh-4l^{-2}=0, \end{equation} \begin{equation} \left( 4\gamma -2\right) \stackrel{\cdot }{h}-2\stackrel{\cdot }{H}+\left( 16\gamma -4\right) Hh+\left( 2\gamma {}-1\right) h^2-3H^2+12l^{-2}=0, \end{equation} \begin{eqnarray} &&\stackrel{\cdot \cdot }{H}+\stackrel{\cdot \cdot }{h}+2\left( H+h\right) \stackrel{\cdot }{H} \nonumber \\ &&+\left( H+h\right) \stackrel{\cdot }{h}+hH^2+h^2H+\frac 1{12\alpha }h=0. \end{eqnarray} Eq. (51) has the roots \begin{equation} h=\frac{-H\pm \sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}}{\left( 6\gamma +1\right) }. \end{equation} Eq. (52) gives \begin{equation} \stackrel{\cdot }{h}=\frac 1{\left( 2\gamma -1\right) }\stackrel{\cdot }{H} \frac{\left( 8\gamma -2\right) }{\left( 2\gamma -1\right) }Hh-\frac 12h^2 \frac 3{2\left( 2\gamma -1\right) }H^2-\frac{6l^{-2}}{\left( 2\gamma -1\right) }, \end{equation} and then \begin{equation} \stackrel{\cdot \cdot }{h}=\frac 1{\left( 2\gamma -1\right) }\stackrel{\cdot \cdot }{H}-\frac{\left( 8\gamma -2\right) }{\left( 2\gamma -1\right) } \stackrel{\cdot }{H}-\left( \frac{\left( 8\gamma -2\right) }{\left( 2\gamma -1\right) }H+\frac 12h\right) \stackrel{\cdot }{h}+\frac 3{2\left( 2\gamma -1\right) }H\stackrel{\cdot }{H}. 
\end{equation} Substituting (54), (55) and (56) into (53) yields \begin{eqnarray} \stackrel{\cdot \cdot }{H} &=&-\left( \frac{48\gamma ^3-50\gamma ^2-7\gamma +2}{2\gamma \left( 2\gamma -1\right) \left( 6\gamma +1\right) }H\mp \frac 8\gamma -1}{4\gamma }\frac{\sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}}{\left( 6\gamma +1\right) }\right) \stackrel{\cdot }{H}+\frac 2\left( 504\gamma ^3+324\gamma ^2-26\gamma -3\right) }{\left( 6\gamma +1\right) ^3\left( 2\gamma -1\right) }H^3 \nonumber \\ &&+\allowbreak \frac{840\gamma ^3-4\gamma ^2-6\gamma -5}{\left( 2\gamma -1\right) \left( 6\gamma +1\right) ^2\gamma l^2}H+\frac{2\gamma -1}{24\alpha \gamma \left( 6\gamma +1\right) }H \nonumber \\ &&\mp \left( \frac{-476\gamma ^2+2592\gamma ^4+744\gamma ^3-18\gamma +1} 4\gamma \left( 6\gamma +1\right) ^3\left( 2\gamma -1\right) }H^2+\frac 3\left( 2\gamma +1\right) }{\gamma \left( 6\gamma +1\right) ^2l^2}+\frac 2\gamma -1}{24\alpha \gamma \left( 6\gamma +1\right) }\right) \sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}, \end{eqnarray} Let \[ \stackrel{\cdot }{H}=X. \] We have the dynamical system \begin{eqnarray} \stackrel{\cdot }{H} &=&X, \nonumber \\ \stackrel{\cdot }{X} &=&-\left( \frac{48\gamma ^3-50\gamma ^2-7\gamma +2} 2\gamma \left( 2\gamma -1\right) \left( 6\gamma +1\right) }H\mp \frac 8\gamma -1}{4\gamma }\frac{\sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}}{\left( 6\gamma +1\right) }\right) X \nonumber \\ &&+AH^3+\allowbreak BH\mp (CH^2+D)\sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}} \end{eqnarray} where \begin{eqnarray} A &=&\frac{2\left( 504\gamma ^3+324\gamma ^2-26\gamma -3\right) }{\left( 6\gamma +1\right) ^3\left( 2\gamma -1\right) }, \nonumber \\ B &=&\frac{840\gamma ^3-4\gamma ^2-6\gamma -5}{\left( 2\gamma -1\right) \left( 6\gamma +1\right) ^2\gamma l^2}+\frac{2\gamma -1}{24\alpha \gamma \left( 6\gamma +1\right) }, \nonumber \\ C &=&\frac{-476\gamma ^2+2592\gamma ^4+744\gamma ^3-18\gamma +1}{4\gamma \left( 6\gamma +1\right) ^3\left( 2\gamma -1\right) }, \nonumber \\ D &=&\frac{3\left( 2\gamma +1\right) }{\gamma \left( 6\gamma +1\right) ^2l^2 +\frac{2\gamma -1}{24\alpha \gamma \left( 6\gamma +1\right) }. \end{eqnarray} The Jacobian elements are \[ \frac{\partial \stackrel{\cdot }{H}}{\partial H}=0,\frac{\partial \stackrel \cdot }{H}}{\partial X}=1, \] \begin{eqnarray*} \frac{\partial \stackrel{\cdot }{X}}{\partial H} &=&\left( -\frac{48\gamma ^3-50\gamma ^2-7\gamma +2}{2\gamma \left( 2\gamma -1\right) \left( 6\gamma +1\right) }\mp \frac{3\left( 8\gamma -1\right) H}{2\left( 6\gamma +1\right) \sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}}\right) X \\ &&+AH^3+\allowbreak BH\mp (CH^2+D)\sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}, \end{eqnarray*} \begin{equation} \frac{\partial \stackrel{\cdot }{X}}{\partial X}=-\frac{48\gamma ^3-50\gamma ^2-7\gamma +2}{2\gamma \left( 2\gamma -1\right) \left( 6\gamma +1\right) H\pm \frac{8\gamma -1}{4\gamma \left( 6\gamma +1\right) }\sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}. \end{equation} The critical point equations are \begin{eqnarray} X &=&0, \nonumber \\ AH^3+\allowbreak BH\mp \left( CH^2+D\right) \sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}} &=&0. 
\end{eqnarray} Rationalization gives \begin{equation} H^6+aH^4+b\allowbreak H^2\allowbreak +c=0, \end{equation} where \begin{eqnarray} a &=&\frac{\allowbreak 2\left( l^2AB-12C^2\gamma -2C^2+6CD\gamma l^2\right) }{\allowbreak \allowbreak \left( A^2+6C^2\gamma \right) l^2}, \nonumber \\ b &=&\frac{B^2l^2-48\gamma DC-8CD+6D^2\gamma l^2}{\allowbreak \allowbreak \left( A^2+6C^2\gamma \right) l^2}, \nonumber \\ c &=&-\frac{4\left( 6\gamma +1\right) }{\allowbreak \allowbreak \left( A^2+6C^2\gamma \right) l^2}D^2. \end{eqnarray} The equation (62) has the roots \begin{eqnarray} H_1^2 &=&\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}+\left( -\frac q2 \sqrt{\Delta }\right) ^{1/3}-\frac a3, \nonumber \\ H_2^2 &=&\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}\omega +\left( -\frac 2-\sqrt{\Delta }\right) ^{1/3}\omega ^2-\frac a3, \nonumber \\ H_3^2 &=&\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}\omega ^2+\left( \frac q2-\sqrt{\Delta }\right) ^{1/3}\omega -\frac a3. \end{eqnarray} where \begin{eqnarray*} p &=&\left( -\frac 13a^2+b\right) , \\ q &=&\frac 2{27}a^3-\frac 13ba+c, \end{eqnarray*} \begin{equation} \Delta =\left( \frac q2\right) ^2+\left( \frac p3\right) ^3, \end{equation} and \begin{equation} \omega =\frac{-1+\sqrt{3}i}2. \end{equation} Now we have the critical points \begin{eqnarray*} H_1 &=&\pm \sqrt{\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}+\left( -\frac q2-\sqrt{\Delta }\right) ^{1/3}-\frac a3},X_1=0, \\ H_2 &=&\pm \sqrt{\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}\omega +\left( -\frac q2-\sqrt{\Delta }\right) ^{1/3}\omega ^2-\frac a3},X_2=0, \\ H_3 &=&\pm \sqrt{\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}\omega ^2+\left( -\frac q2-\sqrt{\Delta }\right) ^{1/3}\omega -\frac a3},X_3=0. \end{eqnarray*} In order to analyze their stability we give the parameter $\alpha $ and \gamma $ specific values and then obtain the results: When \[ \alpha =\frac 1{32}l^2,\gamma =-\frac 14, \] the equations (62) become \[ 431H^6l^6-13700H^4l^4+15798H^2l^2+3600=0. \] It has the roots \[ H^2\approx 1.\,4024/l^2,H^2\approx -.\,19478/l^2-2.0\times 10^{-9}i/l^2,H^2\approx 10.\,596/l^2+.\,92212i/l^2, \] the first root $H^2=1.\,4024/l^2$ corresponds a positive critical point \[ H=1.\,1842/l,X=0. \] At this point, for \[ h=\frac{-H+\sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}}{\left( 6\gamma +1\right) }=\frac{1.\,725}l, \] the dynamic system (58) reads \begin{eqnarray*} \stackrel{\cdot }{H} &=&X, \\ \stackrel{\cdot }{X} &=&-\left( \frac 13H+3\sqrt{\left( 6H^2-\frac 8{l^2 \right) }\right) X \\ &&+\frac{508}3H^3-\frac{196}{l^2}H-(\frac{412}3H^2-\frac{40}{l^2})\sqrt \frac 32H^2-\frac 2{l^2}}. \end{eqnarray*} The Jacobian \[ {\cal M}=\left( \begin{array}{ll} 0 & 1 \\ -\frac{430.\,76}{l^2} & -\frac{2.\,325}l \end{array} \right) \] has the eigenvalues $-1.1625/l-20.7222i/l$, $-1.1625/l+20.7222i/l$. Therefore, the critical point \[ H=1.\,1842/l,X=0, \] is stable, where \[ h=\frac{1.\,725}l,f=0. \] When \[ \alpha =\left( \frac 1{32}l^2\right) ,\gamma =\left( \frac 14\right) , \] the equations (62) become \[ 1827H^6l^6+5226H^4l^4+2579H^2l^2-2312=0. \] It has roots \[ H^2\approx 0.\,4412/l^2,H^2\approx -1.\,6508/l^2+0.\,37826i/l^2,H^2\approx -1.\,6508/l^2-0.\,37826i/l^2, \] the first root $H^2=0.\,4412/l^2$ corresponds a positive critical point \[ H=0.\,66423/l,X=0. 
\]
At this point, for
\[
h=\frac{-H-\sqrt{-6\gamma H^2+4\left( 6\gamma +1\right) l^{-2}}}{\left( 6\gamma +1\right) }=-\frac{1.488}l,
\]
the dynamical system (58) reads
\begin{eqnarray*}
\stackrel{\cdot }{H} &=&X, \\
\stackrel{\cdot }{X} &=&-\left( \frac{17}5H+\frac 15\sqrt{-6H^2+\frac{40}{l^2}}\right) X \\
&&-\frac{596}{125}H^3-\frac{692}{75l^2}H+\left( \frac{184}{125}H^2+\frac{136}{75l^2}\right) \sqrt{-\frac 32H^2+\frac{10}{l^2}}.
\end{eqnarray*}
The Jacobian
\[
{\cal M}=\left(
\begin{array}{ll}
0 & 1 \\
-\frac{10.365}{l^2} & -\frac{3.4807}l
\end{array}
\right)
\]
has the eigenvalues $-1.74035/l-2.70854i/l$, $-1.74035/l+2.70854i/l$. Therefore, the critical point
\[
H=0.66423/l,\qquad X=0,
\]
is stable, where
\[
h=-\frac{1.488}l,\qquad f=0.
\]
Both of these critical points satisfy
\[
X=\stackrel{\cdot }{H}=0,
\]
and thus correspond to de Sitter spacetimes. Following Lu and Pope [1], we choose $\alpha =-\frac 1{2\Lambda }$, which means
\[
\alpha =-\frac{l^2}{48}.
\]
In contrast with their work, we deal with a de Sitter spacetime with torsion, and the gravitational Lagrangian includes a term $\gamma l^{-2}T{}^\mu {}_{\nu \rho }T{}_\mu {}^{\nu \rho }$. When we choose
\[
\gamma =-\frac 14,
\]
(59) and (63) give, respectively,
\[
A=\frac{508}3,\qquad B=-\frac{156}{l^2},\qquad C=\frac{412}3,\qquad D=0,
\]
and
\[
a=-\frac{17000}{431l^2},\qquad b=\frac{27378}{431l^4},\qquad c=0.
\]
The dynamical system (58) becomes
\begin{eqnarray*}
\stackrel{\cdot }{H} &=&X, \\
\stackrel{\cdot }{X} &=&-\left( \frac 13H\pm 6\sqrt{\frac 32H^2-\frac 2{l^2}}\right) X+\frac{508}3H^3-\frac{156}{l^2}H\mp \frac{412}3H^2\sqrt{\frac 32H^2-\frac 2{l^2}},
\end{eqnarray*}
(62) becomes
\[
\left( H^4-\frac{17000}{431l^2}H^2+\frac{27378}{431l^4}\right) H^2=0,
\]
and has the roots
\[
H_1=0,\qquad H_2=\sqrt{\frac{8500-103\sqrt{5698}}{431}}/l,\qquad H_3=\sqrt{\frac{8500+103\sqrt{5698}}{431}}/l.
\]
Therefore we get three critical points
\begin{eqnarray*}
H_1 &=&0,\qquad X_1=0, \\
H_2 &=&\sqrt{\frac{8500-103\sqrt{5698}}{431}}/l,\qquad X_2=0, \\
H_3 &=&\sqrt{\frac{8500+103\sqrt{5698}}{431}}/l,\qquad X_3=0.
\end{eqnarray*}
At the point
\[
H_1=0,\qquad X_1=0,
\]
the Jacobian
\[
M=\left(
\begin{array}{ll}
0 & 1 \\
-\frac{156}{l^2} & \mp \frac{6\sqrt{2}i}l
\end{array}
\right)
\]
has the eigenvalues
\[
\left( -3\sqrt{2}+\sqrt{174}\right) i/l,\qquad \left( -3\sqrt{2}-\sqrt{174}\right) i/l.
\]
This point is a center. At the point
\[
H_2=\sqrt{\frac{8500-103\sqrt{5698}}{431}}/l,\qquad X_2=0,
\]
the Jacobian
\[
M=\left(
\begin{array}{ll}
0 & 1 \\
-\frac{180.44}{l^2} & -\frac{4.7728}l
\end{array}
\right)
\]
has the eigenvalues
\[
-2.3864/l+13.2191i/l,\qquad -2.3864/l-13.2191i/l.
\]
This is a stable critical point, where
\[
h=1.1472/l,\qquad f=0.
\]
At the point
\[
H_3=\sqrt{\frac{8500+103\sqrt{5698}}{431}}/l,\qquad X_3=0,
\]
the Jacobian
\[
M=\left(
\begin{array}{ll}
0 & 1 \\
\frac{84.5}{l^2} & -\frac{46.4}l
\end{array}
\right)
\]
has the eigenvalues
\[
-48.15/l,\qquad 1.755/l.
\]
This is an unstable critical point. If we choose
\[
\gamma =\frac 14,
\]
(59) and (63) give, respectively,
\[
A=-\frac{596}{125},\qquad B=-\frac{164}{25l^2},\qquad C=\frac{184}{125},\qquad D=\frac{112}{25l^2},
\]
and
\[
a=\frac{474}{203l^2},\qquad b=-\frac{459}{203l^4},\qquad c=-\frac{224}{29l^6}.
\]
The dynamical system (58) becomes
\begin{eqnarray*}
\stackrel{\cdot }{H} &=&X, \\
\stackrel{\cdot }{X} &=&-\left( \frac{17}5H\mp \frac 25\sqrt{-\frac 32H^2+\frac{10}{l^2}}\right) X-\frac{596}{125}H^3-\frac{164}{25l^2}H\mp \left( \frac{184}{125}H^2+\frac{112}{25l^2}\right) \sqrt{-\frac 32H^2+\frac{10}{l^2}}.
\end{eqnarray*}
(62) becomes
\[
H^6+\frac{474}{203l^2}H^4-\frac{459}{203l^4}H^2-\frac{224}{29l^6}=0.
\]
It has a real root
\[
H=1.30134/l.
\]
At the critical point
\[
H=1.30134/l,\qquad X=0,
\]
the Jacobian
\[
M=\left(
\begin{array}{ll}
0 & 1 \\
-\frac{36.265}{l^2} & -\frac{3.3321}l
\end{array}
\right)
\]
has the eigenvalues
\[
-1.666/l-5.787i/l,\qquad -1.666/l+5.787i/l.
\]
This is a stable critical point, where
\[
h=-1.613/l,\qquad f=0.
\]
In the second case, the functions $H$, $h$ and $f$ satisfy the equations (33-36), which now read
\begin{equation}
f^2=\frac 4{1-8\gamma }H^2+\frac 8{1-8\gamma }Hh+\frac{4\left( 6\gamma +1\right) }{1-8\gamma }h^2-\frac{16}{\left( 1-8\gamma \right) l^2},
\end{equation}
\begin{equation}
-2\stackrel{\cdot }{H}+2\left( 2\gamma -1\right) \stackrel{\cdot }{h}-3H^2+4\left( 4\gamma -1\right) Hh+\left( 2\gamma -1\right) h^2+\frac{8\gamma +1}4f^2+12l^{-2}=0,
\end{equation}
\begin{equation}
\stackrel{\cdot \cdot }{H}+\stackrel{\cdot \cdot }{h}+2\left( H+h\right) \stackrel{\cdot }{H}+\left( H+h\right) \stackrel{\cdot }{h}+hH^2+h^2H+\frac 1{12\alpha }h=0,
\end{equation}
\begin{equation}
\stackrel{\cdot }{H}+\stackrel{\cdot }{h}+H^2+Hh-\frac{2\gamma }{3\alpha }+\frac 1{12\alpha }=0.
\end{equation}
Eqs. (67), (68) and (70) give
\[
\stackrel{\cdot }{h}=\frac 4{8\gamma -1}H^2-\frac{4\left( 8\gamma -3\right) }{8\gamma -1}Hh+\frac{2\left( 4\gamma +3\right) }{8\gamma -1}h^2-\frac{2\left( 16\gamma -1\right) }{\gamma \left( 8\gamma -1\right) l^2}+\frac{8\gamma -1}{24\gamma \alpha },
\]
\[
\stackrel{\cdot }{H}=-\frac{8\gamma +3}{8\gamma -1}H^2+\frac{24\gamma -11}{8\gamma -1}Hh-2\frac{4\gamma +3}{8\gamma -1}h^2+\frac{16\gamma -1}{\gamma \left( 8\gamma -1\right) l^2}+\frac{\left( 8\gamma -1\right) \left( 2\gamma -1\right) }{24\gamma \alpha },
\]
and then
\[
\stackrel{\cdot \cdot }{h}=\frac 8{8\gamma -1}H\stackrel{\cdot }{H}-4\frac{-3+8\gamma }{8\gamma -1}h\stackrel{\cdot }{H}-4\frac{-3+8\gamma }{8\gamma -1}H\stackrel{\cdot }{h}+\frac{4\left( 3+4\gamma \right) }{\left( -1+8\gamma \right) }h\stackrel{\cdot }{h},
\]
\[
\stackrel{\cdot \cdot }{H}=-2\frac{3+8\gamma }{8\gamma -1}H\stackrel{\cdot }{H}+\frac{24\gamma -11}{8\gamma -1}h\stackrel{\cdot }{H}+\frac{24\gamma -11}{8\gamma -1}H\stackrel{\cdot }{h}-\frac{4\left( 3+4\gamma \right) }{\left( -1+8\gamma \right) }h\stackrel{\cdot }{h}.
\]
Substituting into (69) yields
\[
h\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) +hH^2+h^2H+\frac 1{12\alpha }h=0.
\]
This equation and (70) lead to
\[
h=0.
\]
Then (69) becomes
\[
\stackrel{\cdot \cdot }{H}+2H\stackrel{\cdot }{H}=0.
\]
It has the solution
\[
\stackrel{\cdot }{H}=-H^2+C,
\]
\[
H=\sqrt{C}\frac{e^{2\sqrt{C}\left( t-t_0\right) }+1}{e^{2\sqrt{C}\left( t-t_0\right) }-1}.
\]
The deceleration parameter is
\[
q=-\frac{\stackrel{\cdot }{H}}{H^2}-1=-\frac C{H^2}.
\]
When
\[
C=0
\]
we have
\[
-\frac{dH}{H^2}=dt,
\]
and then
\[
\frac 1H-\frac 1{H_0}=t-t_0.
\]
\subsection{When $\beta =-4\alpha $}
In this case the gravitational Lagrangian is the square of the traceless Ricci tensor $\widetilde{R}_{\mu \nu }=R_{\mu \nu }-\frac 14g_{\mu \nu }R$ [10]. According to section III, the equation (37) gives two cases.
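Before turning to these two cases, we note that the linear-stability statements obtained so far are easy to cross-check numerically: one only needs the eigenvalues of the quoted Jacobian matrices and the signs of their real parts. The following minimal Python sketch is an editorial illustration, not part of the original analysis; it sets $l=1$ (so each entry carries the appropriate inverse power of $l$) and transcribes three of the Jacobians quoted above.
\begin{verbatim}
import numpy as np

# 2x2 Jacobians of the planar system (58) at the critical points quoted
# in the text, written in units l = 1.
jacobians = {
    "alpha=l^2/32,  gamma=-1/4, H=1.1842":  [[0.0, 1.0], [-430.76, -2.325]],
    "alpha=l^2/32,  gamma=+1/4, H=0.66423": [[0.0, 1.0], [-10.365, -3.4807]],
    "alpha=-l^2/48, gamma=+1/4, H=1.30134": [[0.0, 1.0], [-36.265, -3.3321]],
}

for label, M in jacobians.items():
    ev = np.linalg.eigvals(np.array(M))
    verdict = "stable" if np.all(ev.real < 0) else "not asymptotically stable"
    print(f"{label}: eigenvalues {np.round(ev, 4)} -> {verdict}")
\end{verbatim}
All three points should be reported as stable, in agreement with the eigenvalues listed above.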
In the first case $f=0$, the nonvanishing functions $H$ and $h$ satisfy the equations (40), (41) and (44), i.e.,
\begin{eqnarray}
&&\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) ^2+2\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) H\left( H+h\right) -h\left( h+2H\right) \left( h+H\right) ^2  \nonumber \\
&&+\frac{6\gamma +1}{4\alpha }h^2+\frac 1{4\alpha }H^2+\frac 1{2\alpha }Hh-\frac 1\alpha l^{-2}=0,
\end{eqnarray}
\begin{equation}
4\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) -8\gamma \stackrel{\cdot }{h}+4\left( 2\gamma +1\right) h^2-4\left( 8\gamma -3\right) hH+8H^2-32l^{-2}=0,
\end{equation}
\begin{equation}
\left( \stackrel{\cdot \cdot }{H}+\stackrel{\cdot \cdot }{h}\right) -\left( \stackrel{\cdot }{h}+h\stackrel{\cdot }{h}+h^2H\right) -h^3+\frac 1{8\alpha }h=0.
\end{equation}
They can be rewritten as
\begin{eqnarray}
\stackrel{\cdot \cdot }{h} &=&\left( \frac{4\gamma +3}{2\gamma }h-\frac{8\gamma -4}{2\gamma }H\right) \stackrel{\cdot }{h}-\left( \frac{8\gamma -3}{2\gamma }h-\frac 2\gamma H\right) \stackrel{\cdot }{H}  \nonumber \\
&&+\frac 1{2\gamma }h^2H+\frac 1{2\gamma }h^3-\frac 1{16\alpha \gamma }h,
\end{eqnarray}
\begin{eqnarray}
\stackrel{\cdot \cdot }{H} &=&\left( -\frac{2\gamma +3}{2\gamma }h+\frac{5\gamma -2}\gamma H\right) \stackrel{\cdot }{h}+\left( \frac{8\gamma -3}{2\gamma }h-\frac 2\gamma H\right) \stackrel{\cdot }{H}  \nonumber \\
&&+\frac{2\gamma -1}{2\gamma }h^2H+\frac{2\gamma -1}{2\gamma }h^3-\frac{2\gamma -1}{16\alpha \gamma }h,
\end{eqnarray}
and
\begin{eqnarray}
&&4\gamma ^2\stackrel{\cdot }{h}^2-4\gamma \left( \left( 2\gamma +1\right) h^2-\left( 8\gamma -2\right) Hh+H^2-\frac 8{l^2}\right) \stackrel{\cdot }{h}  \nonumber \\
&&+4\gamma \left( 1+\gamma \right) h^4-8\gamma \left( 1+4\gamma \right) h^3H+4\gamma \left( 16\gamma -7\right) h^2H^2-16\gamma H^3h  \nonumber \\
&&+\left( -\frac{16\left( 2\gamma +1\right) }{l^2}+\frac{6\gamma +1}{4\alpha }\right) h^2+\left( 32\frac{4\gamma -1}{l^2}+\frac 1{2\alpha }\right) Hh  \nonumber \\
&&-32\frac{H^2}{l^2}+\left( -\frac{16}{l^2}+\frac 1{4\alpha }\right) H^2+\frac{64}{l^4}-\frac 1\alpha l^{-2}  \nonumber \\
&=&0.
\end{eqnarray}
Let
\[
\stackrel{\cdot }{H}=X,\qquad \stackrel{\cdot }{h}=Y.
\]
We have
\begin{eqnarray}
\stackrel{\cdot }{Y} &=&\left( \frac{4\gamma +3}{2\gamma }h-\frac{8\gamma -4}{2\gamma }H\right) Y-\left( \frac{8\gamma -3}{2\gamma }h-\frac 2\gamma H\right) X  \nonumber \\
&&+\frac 1{2\gamma }h^2H+\frac 1{2\gamma }h^3-\frac 1{16\alpha \gamma }h, \\
\stackrel{\cdot }{X} &=&\left( -\frac{2\gamma +3}{2\gamma }h+\frac{5\gamma -2}\gamma H\right) Y+\left( \frac{8\gamma -3}{2\gamma }h-\frac 2\gamma H\right) X  \nonumber \\
&&+\frac{2\gamma -1}{2\gamma }h^2H+\frac{2\gamma -1}{2\gamma }h^3-\frac{2\gamma -1}{16\alpha \gamma }h,
\end{eqnarray}
and
\begin{eqnarray}
&&Y^2-\left( \frac{\left( 2\gamma +1\right) }\gamma h^2-\frac{2\left( 4\gamma -1\right) }\gamma hH+\frac 1\gamma H^2-\frac 8{\gamma l^2}\right) Y  \nonumber \\
&&+\frac{\left( \gamma +1\right) }\gamma h^4-\frac{2\left( 4\gamma +1\right) }\gamma h^3H+\frac{\left( 16\gamma -7\right) }\gamma h^2H^2-\frac 4\gamma hH^3  \nonumber \\
&&+\left( -\frac{4\left( 2\gamma +1\right) }{\gamma ^2l^2}+\frac{6\gamma +1}{16\alpha \gamma ^2}\right) h^2+\left( \frac{8\left( 4\gamma -1\right) }{\gamma ^2l^2}+\frac 1{8\alpha \gamma ^2}\right) Hh  \nonumber \\
&&+\left( -\frac 4{\gamma ^2l^2}+\frac 1{16\alpha \gamma ^2}\right) H^2+\frac{16}{\gamma ^2l^4}-\frac 1{4\alpha \gamma ^2}l^{-2}  \nonumber \\
&=&0.
\end{eqnarray}
The constraint equation (79) has the roots
\begin{equation}
Y=-\frac b2\pm \sqrt{\left( \frac b2\right) ^2-c}=Y\left( H,h\right) ,
\end{equation}
where
\begin{eqnarray}
b &=&-\left( \frac{\left( 2\gamma +1\right) }\gamma h^2-\frac{2\left( 4\gamma -1\right) }\gamma hH+\frac 1\gamma H^2-\frac 8{\gamma l^2}\right) , \\
c &=&\frac{\left( \gamma +1\right) }\gamma h^4-\frac{2\left( 4\gamma +1\right) }\gamma h^3H+\frac{\left( 16\gamma -7\right) }\gamma h^2H^2-\frac 4\gamma hH^3  \nonumber \\
&&+\left( -\frac{4\left( 2\gamma +1\right) }{\gamma ^2l^2}+\frac{6\gamma +1}{16\alpha \gamma ^2}\right) h^2+\left( \frac{8\left( 4\gamma -1\right) }{\gamma ^2l^2}+\frac 1{8\alpha \gamma ^2}\right) Hh  \nonumber \\
&&+\left( -\frac 4{\gamma ^2l^2}+\frac 1{16\alpha \gamma ^2}\right) H^2+\frac{16}{\gamma ^2l^4}-\frac 1{4\alpha \gamma ^2}l^{-2}.
\end{eqnarray}
So we are left with only three independent unknown functions $h$, $H$ and $X$, which satisfy the equations
\begin{eqnarray}
\stackrel{\cdot }{H} &=&X,  \nonumber \\
\stackrel{\cdot }{h} &=&Y\left( H,h\right) ,  \nonumber \\
\stackrel{\cdot }{X} &=&\left( -\frac{2\gamma +3}{2\gamma }h+\frac{5\gamma -2}\gamma H\right) Y\left( H,h\right) +\left( \frac{8\gamma -3}{2\gamma }h-\frac 2\gamma H\right) X  \nonumber \\
&&+\frac{2\gamma -1}{2\gamma }h^2H+\frac{2\gamma -1}{2\gamma }h^3-\frac{2\gamma -1}{16\alpha \gamma }h.
\end{eqnarray}
The critical point equations are
\begin{eqnarray}
X &=&0, \\
Y\left( H,h\right) &=&0, \\
\frac{2\gamma -1}{2\gamma }h^2H+\frac{2\gamma -1}{2\gamma }h^3-\frac{2\gamma -1}{16\alpha \gamma }h &=&0.
\end{eqnarray}
Eq. (86) implies
\begin{equation}
h=0,
\end{equation}
or
\begin{equation}
hH+h^2-\frac 1{8\alpha }=0.
\end{equation}
For
\[
h=0,
\]
equation (85) has the roots
\[
H=\pm \left( \frac 2l\right) .
\]
So we have the first pair of critical points
\[
X=0,\qquad h=0,\qquad H=\pm \left( \frac 2l\right) .
\]
For
\[
hH+h^2-\frac 1{8\alpha }=0,
\]
the critical point equations become
\begin{eqnarray}
X &=&0, \\
H &=&\left( -h+\frac 1{8\alpha h}\right) , \\
&&h^6-\left( \frac 1{5\alpha }-\frac 3{200\gamma \alpha }+\frac 8{5\gamma l^2}\right) h^4  \nonumber \\
&&+\left( \frac 1{100\alpha ^2}+\frac 1{320\gamma \alpha ^2}+\frac{16\gamma -1}{100\gamma ^2l^2\alpha }+\frac{16}{25\gamma ^2l^4}\right) h^2  \nonumber \\
&&-\frac{8\gamma -1}{25600\alpha ^3\gamma ^2}-\frac 1{400\gamma ^2l^2\alpha ^2}  \nonumber \\
&=&0.
\end{eqnarray}
The equation (91) has the roots
\begin{eqnarray}
h_1^2 &=&\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}+\left( -\frac q2-\sqrt{\Delta }\right) ^{1/3}-\frac A3,  \nonumber \\
h_2^2 &=&\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}\omega +\left( -\frac q2-\sqrt{\Delta }\right) ^{1/3}\omega ^2-\frac A3,  \nonumber \\
h_3^2 &=&\left( -\frac q2+\sqrt{\Delta }\right) ^{1/3}\omega ^2+\left( -\frac q2-\sqrt{\Delta }\right) ^{1/3}\omega -\frac A3,
\end{eqnarray}
where
\begin{eqnarray}
\Delta &=&\left( \frac q2\right) ^2+\left( \frac p3\right) ^3,  \nonumber \\
\omega &=&\frac 12\left( -1+\sqrt{3}i\right) ,  \nonumber \\
p &=&B-\frac 13A^2,  \nonumber \\
q &=&\frac 2{27}A^3-\frac 13AB+C,
\end{eqnarray}
with
\begin{eqnarray}
A &=&-\left( \frac 1{5\alpha }-\frac 3{200\gamma \alpha }+\frac 8{5\gamma l^2}\right) ,\qquad -\frac A3=\frac 1{15\alpha }-\frac 1{200\gamma \alpha }+\frac 8{15\gamma l^2},  \nonumber \\
B &=&\frac 1{100\alpha ^2}+\frac 1{320\gamma \alpha ^2}+\frac{16\gamma -1}{100\gamma ^2l^2\alpha }+\frac{16}{25\gamma ^2l^4},  \nonumber \\
C &=&-\left( \frac{8\gamma -1}{25600\alpha ^3\gamma ^2}+\frac 1{400\gamma ^2l^2\alpha ^2}\right) .
\end{eqnarray}
The equations (89), (90) and (92) give the critical points $\left\{ X,H,h\right\} $. Each of these points corresponds to a de Sitter spacetime.
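The roots (92) are unwieldy in closed form, and in practice it is easier to locate the critical points numerically. The following short sketch is again an editorial illustration rather than part of the original derivation; it works in units $l=1$, solves the cubic (91) in $u=h^{2}$, and recovers $H$ from (90). For $\alpha =l^2/32$, $\gamma =1/4$ it should reproduce the values $h_{1}\approx 1.41/l$, $H_{1}\approx 1.43/l$ quoted below.
\begin{verbatim}
import numpy as np

def critical_points(alpha, gamma, l=1.0):
    # cubic u^3 + a2*u^2 + a1*u + a0 = 0 in u = h^2, coefficients from (91)
    a2 = -(1/(5*alpha) - 3/(200*gamma*alpha) + 8/(5*gamma*l**2))
    a1 = (1/(100*alpha**2) + 1/(320*gamma*alpha**2)
          + (16*gamma - 1)/(100*gamma**2*l**2*alpha) + 16/(25*gamma**2*l**4))
    a0 = -(8*gamma - 1)/(25600*alpha**3*gamma**2) - 1/(400*gamma**2*l**2*alpha**2)
    points = []
    for u in np.roots([1.0, a2, a1, a0]):
        if abs(u.imag) < 1e-9 and u.real > 0:        # keep real positive h^2 only
            h = np.sqrt(u.real)
            points.append((h, -h + 1/(8*alpha*h)))   # (h, H) via (90); (-h, -H) is the mirror point
    return points

print(critical_points(alpha=1/32, gamma=1/4))
\end{verbatim}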
The dynamical system (83) has the Jacobian elements
\begin{eqnarray}
\frac{\partial \stackrel{\cdot }{H}}{\partial H} &=&0,\qquad \frac{\partial \stackrel{\cdot }{H}}{\partial h}=0,\qquad \frac{\partial \stackrel{\cdot }{H}}{\partial X}=1,  \nonumber \\
\frac{\partial \stackrel{\cdot }{h}}{\partial H} &=&\frac{\partial Y}{\partial H},\qquad \frac{\partial \stackrel{\cdot }{h}}{\partial h}=\frac{\partial Y}{\partial h},\qquad \frac{\partial \stackrel{\cdot }{h}}{\partial X}=0,  \nonumber \\
\frac{\partial \stackrel{\cdot }{X}}{\partial H} &=&\frac{5\gamma -2}\gamma Y\left( H,h\right) +\left( -\frac{2\gamma +3}{2\gamma }h+\frac{5\gamma -2}\gamma H\right) \frac{\partial Y}{\partial H}-\frac 2\gamma X+\frac{2\gamma -1}{2\gamma }h^2,  \nonumber \\
\frac{\partial \stackrel{\cdot }{X}}{\partial h} &=&-\frac{2\gamma +3}{2\gamma }Y\left( H,h\right) +\left( -\frac{2\gamma +3}{2\gamma }h+\frac{5\gamma -2}\gamma H\right) \frac{\partial Y}{\partial h}+\frac{8\gamma -3}{2\gamma }X  \nonumber \\
&&+\frac{2\gamma -1}\gamma hH+3\frac{2\gamma -1}{2\gamma }h^2-\frac{2\gamma -1}{16\alpha \gamma },  \nonumber \\
\frac{\partial \stackrel{\cdot }{X}}{\partial X} &=&\frac{8\gamma -3}{2\gamma }h-\frac 2\gamma H,
\end{eqnarray}
where
\begin{eqnarray}
\frac{\partial Y}{\partial H} &=&-\frac{\left( 4\gamma -1\right) }\gamma h+\frac 1\gamma H  \nonumber \\
&&\pm \frac 1{2\sqrt{\left( \frac b2\right) ^2-c}}\left( \frac 1{\gamma ^2}h^3+\frac 3{\gamma ^2}h^2H+\frac 3{\gamma ^2}hH^2+\frac 1{\gamma ^2}H^3-\frac 1{8\gamma ^2\alpha }h-\frac 1{8\alpha \gamma ^2}H\right) ,
\end{eqnarray}
\begin{eqnarray}
\frac{\partial Y}{\partial h} &=&\frac{\left( 2\gamma +1\right) }\gamma h-\frac{\left( 4\gamma -1\right) }\gamma H  \nonumber \\
&&\pm \frac 1{2\sqrt{\left( \frac b2\right) ^2-c}}\left( \frac 1{\gamma ^2}h^3+\frac 3{\gamma ^2}h^2H+\frac 3{\gamma ^2}hH^2+\frac 1{\gamma ^2}H^3-\frac{6\gamma +1}{8\gamma ^2\alpha }h-\frac 1{8\gamma ^2\alpha }H\right) .
\end{eqnarray}
In order to analyze their stability we give the parameters $\alpha $ and $\gamma $ specific values and then obtain the following results. For the critical point $X=0$, $h=0$, $H=2/l$, a corresponding calculation indicates that it is unstable for $\alpha =\frac 1{32}l^2$. In the case $X=0$, $hH+h^2-\frac 1{8\alpha }=0$, we have the following. When
\[
\alpha =\frac 1{32}l^2,\qquad \gamma =\frac 14,
\]
the equations (92) and (90) give
\begin{eqnarray*}
h_1^2 &=&\frac{1.9814}{l^2},\qquad h_1=\pm \frac{1.4076}l,\qquad H_1=\pm \frac{1.4341}l, \\
h_2^2 &=&\frac{4.4493+3.3485i}{l^2}, \\
h_3^2 &=&\frac{4.4493-3.3485i}{l^2}.
\end{eqnarray*}
At
\[
h_1=\frac{1.4076}l,\qquad H_1=\frac{1.4341}l,
\]
for
\[
Y=-\frac b2+\sqrt{\left( \frac b2\right) ^2-c}=0,
\]
the dynamical system (83) has the form
\begin{eqnarray*}
\stackrel{\cdot }{H} &=&X, \\
\stackrel{\cdot }{h} &=&Y\left( H,h\right) , \\
\stackrel{\cdot }{X} &=&-\left( 7h+3H\right) Y\left( H,h\right) -\left( 2h+8H\right) X \\
&&-h^2H-h^3+\frac 4{l^2}h,
\end{eqnarray*}
with
\begin{eqnarray*}
Y &=&3h^2+2H^2-\frac{16}{l^2} \\
&&+(4h^4+24h^2H^2-\frac{80}{l^2}h^2+4H^4-\frac{32}{l^2}H^2 \\
&&+\frac{128}{l^4}+16h^3H+16hH^3-64h\frac H{l^2})^{1/2}.
\end{eqnarray*}
Its Jacobian
\[
{\cal M}=\left(
\begin{array}{lll}
0 & 0 & 1 \\
\frac{21.324}l & \frac{12.665}l & 0 \\
-\frac{303.83}{l^2} & -\frac{185.26}{l^2} & -\frac{14.288}l
\end{array}
\right)
\]
has the eigenvalues $-0.39225/l+11.048i/l$, $-0.39225/l-11.048i/l$, $-0.8385/l$. The critical point
\[
h_1=\frac{1.4076}l,\qquad H_1=\frac{1.4341}l,
\]
is stable, where $f=0$. For
\[
Y=-\frac b2-\sqrt{\left( \frac b2\right) ^2-c}=-\frac{11.886}{l^2},
\]
the dynamical system (83) has the Jacobian
\[
{\cal M}=\left(
\begin{array}{lll}
0 & 0 & 1 \\
-\frac{9.8509}l & \frac{4.2259}l & 0 \\
\frac{173.12}{l^2} & \frac{17.401}{l^2} & -\frac{14.288}l
\end{array}
\right)
\]
with the eigenvalues $-22.33/l$, $6.1339/l+1.6776i/l$, $6.1339/l-1.6776i/l$. The critical point is unstable. In the case $f\neq 0$, (33), (34), (35) and (39) read (in vacuum)
\begin{eqnarray}
&&\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) ^2+2\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) H\left( H+h\right)  \nonumber \\
&&-h\left( h+2H\right) \left( h+H\right) ^2+\frac 12\left( h+H\right) ^2f^2-\frac 1{16}f^4  \nonumber \\
&&+\frac{6\gamma +1}{4\alpha }h^2+\frac 1{4\alpha }H^2+\frac 1{2\alpha }Hh+\frac{8\gamma -1}{16\alpha }f^2-\frac 1\alpha l^{-2}=0,
\end{eqnarray}
\begin{equation}
\left( \stackrel{\cdot \cdot }{H}+\stackrel{\cdot \cdot }{h}\right) -\left( \stackrel{\cdot }{h}+h\stackrel{\cdot }{h}+h^2H\right) -\frac 12\left( 2h^3-f\stackrel{\cdot }{f}-\frac 12hf^2\right) +\frac 1{8\alpha }h=0,
\end{equation}
\begin{equation}
4\alpha \left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) -4\alpha Hh-\alpha \left( 4h^2-f^2\right) -4\gamma +\frac 12=0,
\end{equation}
\begin{equation}
f^2=4\left( \stackrel{\cdot }{H}+\stackrel{\cdot }{h}\right) -8\gamma \stackrel{\cdot }{h}+4\left( 2\gamma +1\right) h^2-4\left( 8\gamma -3\right) hH+8H^2-32l^{-2}.
\end{equation}
These equations have the solution
\begin{eqnarray}
\alpha &=&\left( \frac 1{64}-\frac 14\gamma \right) l^2, \\
H^2 &=&\frac{32\gamma }{\left( 16\gamma -1\right) l^2}, \\
f^2 &=&-\frac{32\left( 8\gamma -1\right) }{l^2\left( 16\gamma -1\right) }.
\end{eqnarray}
Indeed, looking for solutions with $h=0$ and $H$, $f$ constant, (99) is identically satisfied, (100) gives $f^2=\left( 4\gamma -\frac 12\right) /\alpha $, (101) gives $f^2=8H^2-32l^{-2}$, and compatibility with (98) then forces $64\alpha =\left( 1-16\gamma \right) l^2$, which is (102). For
\[
\gamma >\frac 1{16}
\]
or
\[
\gamma <0,
\]
we have a de Sitter solution
\begin{equation}
H=\frac 4l\sqrt{\frac{2\gamma }{16\gamma -1}}.
\end{equation}
When
\[
\left| \gamma \right| \gg 1,
\]
we have
\[
H^2\approx \frac 2{l^2},
\]
a value speculated in [14]. When
\[
\frac 1{16}<\gamma \leq \frac 18,
\]
$f$ is real. In this case, it is the pseudotrace axial ingredient $f$ of torsion that produces the effect of the acceleration of the cosmological expansion.
\section{Conclusions}
Starting from a de Sitter gauge theory, a gravitational Lagrangian (13), which is identified with the Lagrangian of quadratic-curvature gravities with torsion, has been constructed. The cosmological equations (33-36) for a spatially flat universe have been obtained. To search for vacuum solutions of them in two specific models, the conformal model and the zero-energy (Deser-Tekin) model, the corresponding dynamical systems have been derived, and some de Sitter critical points and their stability have been investigated. These points are exact constant solutions in the context of autonomous dynamical systems and describe the asymptotic behavior. Some stable de Sitter critical points have been found. For any physical theory, finding exact mathematical solutions is an important topic.
Next comes the physical interpretation of the solutions thus obtained. Mathematically, de Sitter spacetime, as a maximally symmetric space, is undoubtedly important for any gravity theory. From the observational side, recent studies indicate that both the early universe (inflation) and the late-time universe (cosmic acceleration) can be regarded as fluctuations on a de Sitter background. Thus de Sitter spacetime occupies a pivotal position in gravity, and especially in modern cosmology. The solutions in section IV indicate that when $f=0$, $h\neq 0$, the cosmological equations have stable de Sitter critical points. This means that the scalar ingredient $h$ of torsion could be considered as a ``phantom'' field, since it does not interact directly with matter; it only interacts indirectly via gravitation. In the case $f\neq 0$, $h=0$, it is the pseudotrace axial ingredient $f$ of torsion that produces the effect of the acceleration of the cosmological expansion. Therefore the spacetime in the vacuum has the structure of a de Sitter spacetime with torsion, including the pseudotrace axial ingredient $f$ as well as the scalar ingredient $h$. In summary, in the framework of the gauge theory of gravity, cosmological models can be constructed that explain the observed acceleration of the cosmological expansion. The effect of the acceleration of the cosmological expansion in these models has a geometrical nature and is connected with the geometrical structure of physical spacetime. The spacetime in the vacuum has the structure of a de Sitter spacetime with torsion.
\section{Introduction}\label{sec:intro} A \emph{flat waveguide} is a domain $\Omega$ in $\mathbb{R}^{n+m}$ which can be written as a product of a bounded open subset $\omega$ with $\mathbb{R}^{n}$: \begin{equation*} \omega \subseteq \mathbb{R}^{m},\qquad \Omega=\mathbb{R}^{n}\times \omega\subseteq \mathbb{R}^{n}_{x} \times \mathbb{R}^{m}_{y},\qquad n,m\ge1. \end{equation*} Throughout the paper we shall denote with $x$ the group of the first $n$ variables and with $y$ the last $m$ variables in $\mathbb{R}^{n+m}$. Waveguides appear in many concrete applications, since they can be used to model various interesting physical structures such as \emph{wires} and \emph{plates} (see Figure \ref{fig:flat}). \begin{figure}[htbp] \begin{minipage}[bt]{.4\linewidth} \centering \includegraphics[height=1in]{wireasy.pdf} \end{minipage} \hspace{.05\linewidth} \begin{minipage}[bt]{.4\linewidth} \centering \includegraphics[height=.6in]{plateasy.pdf} \end{minipage} \caption{(a) $n=1$, $m=2$; (b) $n=2$, $m=1$} \label{fig:flat} \end{figure} The Laplace operator on $\Omega$ with Dirichlet or Neumann boundary conditions has a natural splitting \begin{equation*} \Delta_{x,y}=\Delta_{x}+\Delta_{y} \end{equation*} where $\Delta_{x}$ is the free Laplacian on $\mathbb{R}^{n}$ and $\Delta_{y}$ is the Dirichlet resp.~Neumann Laplacian on $\omega$ (we shall also write \begin{equation*} \nabla=(\nabla_{x},\nabla_{y}) \end{equation*} with obvious meaning). Thus the operator has a simple spectral structure: indeed, if we choose an orthonormal set of eigenfunctions $\{\phi_{j}(y)\}_{j\ge1}$ for $-\Delta_{y}$ on $\omega$ and denote by $\lambda_{j}^{2}$ the corresponding eigenvalues, the operator $-\Delta_{x,y}$ is equivalent to the sequence of operators on $\mathbb{R}^{n}$ \begin{equation*} -\Delta_{x}+\lambda_{j}^{2}. \end{equation*} As a consequence, the study of linear and nonlinear evolution equations on flat waveguides is quite similar to the standard case of free equations on $\mathbb{R}^{n}$. The theory was initiated in \cite{LeskyRacke03-a} and developed in \cite{MetcalfeSoggeStewart05-a} and \cite{LeskyRacke08-a}. Despite the simplicity of the theory, it is clear that the flatness assumption on the domain is not always realistic. Thus a natural question is whether a similar theory can be developed for more general, non flat waveguides. Here we begin to address this question, by investigating the smoothing and dispersive properties of wave and Schr\"odinger equations in more general situations. Such properties, which are usually expressed as global in time estimates on solutions of the linear equations, are the key ingredients for the nonlinear theory. To the best of our knowledge, the results in the present paper are the first ones concerning dispersive phenomena on non flat waveguides. We start with a quick overview of the dispersive properties for the linear Schr\"odinger and wave-Klein-Gordon equations in the flat case. \begin{example}\label{exa:schroflat} Consider the Schr\"odinger equation \begin{equation}\label{eq:schreqhom} iu_{t}-\Delta u=0,\qquad u(0,x,y)=f(x,y) \end{equation} with Dirichlet boundary conditions on $\Omega=\mathbb{R}^{n}\times \omega$, with $\omega$ a bounded open set in $\mathbb{R}^{m}$.
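For instance, in the simplest model case $m=1$, $\omega=(0,\pi)$ (an illustration added here only to fix ideas; it plays no role in what follows), the Dirichlet eigenfunctions and eigenvalues of $-\Delta_{y}$ are
\begin{equation*}
\phi_{j}(y)=\sqrt{\tfrac2\pi}\,\sin (jy),\qquad \lambda_{j}^{2}=j^{2},\qquad j\ge1,
\end{equation*}
so that on the $j$-th mode the operator $-\Delta_{x,y}$ acts as $-\Delta_{x}+j^{2}$.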
Let $\phi_{j}$, $\lambda_{j}^{2}$ be as above, then by expanding \begin{equation*} u=\sum_{j\ge1}u_{j}(t,x)\phi_{j}(y), \qquad f=\sum_{j\ge1}f_{j}(x)\phi_{j}(y) \end{equation*} we can rewrite equation \eqref{eq:schreqhom} as the equivalent family of independent equations \begin{equation}\label{eq:schreqj} i \partial_{t} u_{j}-\Delta_{x} u_{j}+\lambda_{j}^{2} u_{j} =0,\qquad u_{j}(0,x)=f_{j}(x). \end{equation} The term $\lambda_{j}^{2}u_{j}$ can be absorbed in $iu_{t}$ via the gauge transformation $u_{j}\to e^{i \lambda_{j}^{2}t}u_{j}$, leaving the $L^{p}$ norm of the solution unchanged. Thus from the explicit representation of the solution we have the \emph{dispersive estimates} \begin{equation}\label{eq:dispj} \|u_{j}(t)\|_{L^{\infty}(\mathbb{R}^{n})}\le |t|^{-n/2} \|f_{j}\|_{L^{1}(\mathbb{R}^{n})} \end{equation} and summing over $j$ we obtain \begin{equation}\label{eq:dispsum} \|u(t)\|_{L^{\infty}(\Omega)}\le |t|^{-n/2} \sum_{j\ge1}\|\phi_{j}\|_{L^{\infty}(\omega)} \|f_{j}\|_{L^{1}(\mathbb{R}^{n})}\equiv|t|^{-n/2}\|f\|_{Z}. \end{equation} A more explicit expression of the norm $\|f\|_{Z}$ requires some information on the growth of the maximum norm of eigenfunctions. Typically one has \begin{equation*} \|\phi_{j}\|_{L^{\infty}(\omega)}\lesssim \lambda_{j}^{\sigma} \end{equation*} for some $\sigma>0$, and this leads to a dispersive estimate of the form \begin{equation}\label{eq:dispflat} \|u(t)\|_{L^{\infty}(\Omega)}\lesssim |t|^{-n/2}\|(1-\Delta_{y})^{\sigma/2+\epsilon} f\| _{L^{1}_{x}L^{2}_{y}(\omega)}. \end{equation} The pointwise estimate \eqref{eq:dispflat} is quite strong and we shall not be able to prove an analogous estimate in the non flat case. However Schr\"odinger equations satisfy weaker but more general estimates called \emph{Strichartz estimates}, which can be extended to our situation. Consider for maximum generality the nonhomogeneous equation \begin{equation}\label{eq:schreq} iu_{t}-\Delta u=F(t,x,y),\qquad u(0,x,y)=f(x,y) \end{equation} with Dirichlet boundary conditions on $\Omega$ as above. Here we assume for simplicity $n\ge3$. Expanding again \begin{equation*} F=\sum_{j\ge1}F_{j}(t,x)\phi_{j}(y) \end{equation*} we are led to the equations \begin{equation}\label{eq:schreqj} i \partial_{t} u_{j}-\Delta_{x} u_{j}+\lambda_{j}^{2} u_{j} =F_{j}(t,x),\qquad u_{j}(0,x)=f_{j}(x). \end{equation} The endpoint Strichartz estimate (see \cite{GinibreVelo85-d}, \cite{KeelTao98-a}) for $u_j$ states that \begin{equation}\label{eq:strichj} \|u_{j}\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\lesssim \|f_{j}\|_{L^{2}_{x}}+ \|F_{j}\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}} \end{equation} with constants independent of $j$. Squaring and summing over $j$, and using Minkowski's inequality in the $x$ variables (which applies since $\frac{2n}{n-2}\ge2\ge\frac{2n}{n+2}$), we obtain the endpoint Strichartz estimate for flat waveguides: \begin{equation}\label{eq:strichsum} \|u\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim \|f\|_{L^{2}_{x,y}(\Omega)}+ \|F\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n+2}}_{x}}. \end{equation} We write the estimate in operator form as follows, where $\Delta=\Delta_{x,y}$ with Dirichlet b.c.~on $\Omega$, $n\ge3$: \begin{equation}\label{eq:strichflat} \|e^{it \Delta}f\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim \|f\|_{L^{2}(\Omega)},\qquad \left\| \int_{0}^{t}e^{i(t-s)\Delta}F(s)ds \right\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim \|F\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n+2}}_{x}}. \end{equation} Similar estimates hold when $n=1,2$.
An even weaker and more general form of estimates is given by the \emph{smoothing estimates}, which go back at least to \cite{Kato65-a}, see also \cite{Ben-ArtziKlainerman92-a}. For equations \eqref{eq:schreqj} they take the form \begin{equation}\label{eq:smoothj} \|\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1/2}u_{j}\|_{L^{2}_{t}L^{2}_{x}} \lesssim\|f_{j}\|_{L^{2}(\mathbb{R}^{n})} +\|\langle x\rangle^{1/2+\epsilon}|D_{x}|^{-1/2}F_{j}\|_{L^{2}_{t}L^{2}_{x}} \end{equation} where we are using the notations \begin{equation*} |D_{x}|=(-\Delta_{x})^{1/2},\qquad \langle x\rangle=(1+|x|^{2})^{1/2}. \end{equation*} Squaring and summing over $j$ we obtain \begin{equation}\label{eq:smoosum} \|\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1/2}u\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|f\|_{L^{2}(\Omega)} +\|\langle x\rangle^{1/2+\epsilon}|D_{x}|^{-1/2}F\|_{L^{2}_{t}L^{2}(\Omega)}. \end{equation} \end{example} \begin{example}\label{exa:waveflat} Consider the wave-Klein-Gordon equation for $u=u(t,x,y)$ \begin{equation}\label{eq:WE} u_{tt}-\Delta_{x,y}u+m^{2}u=0, \qquad m\ge0,\qquad(x,y)\in \Omega =\mathbb{R}^{n}\times \omega \end{equation} with Dirichlet boundary conditions. Proceeding as above we obtain the family of problems on $\mathbb{R}^{n}$ \begin{equation}\label{eq:WEj} \partial^{2}_{t} u_j-\Delta_{x}u_j+ (\lambda_{j}^{2}+m^{2})u_j=0. \end{equation} Notice that in the case of Dirichlet b.c., even if we start from a wave equation for $u$ (i.e.~$m=0$), the equations for $u_j$ will always be of Klein-Gordon type since $\lambda_{j}^{2}>0$ for all $j$. Now, sharp dispersive estimates are known for the free equations \eqref{eq:WEj}, and summing over $j$ we shall obtain dispersive estimates for the original equation \eqref{eq:WE}. Indeed, using the notations $\langle D\rangle=(1-\Delta)^{1/2}$, $\langle D\rangle_{M} =(M^{2}-\Delta)^{1/2}$, we can represent the solution of $\square v+M^{2}v=0$ on $\mathbb{R}^{n}_{x}$ as \begin{equation*} v(t,x)=\cos(t\langle D\rangle_{M} )v(0)+\frac{\sin(t\langle D\rangle_{M} )}{\langle D\rangle_{M} }v_{t}(0), \end{equation*} thus we see that the solution can be expressed via the operator $e^{it\langle D\rangle_{M} }$. To prove a dispersive estimate for it, we may use the following estimate in terms of Besov spaces \begin{equation*} \|e^{it\langle D\rangle}f\|_{L^{\infty}_{x}} \le \frac{C}{|t|^{n/2}}\|f\|_{B^{\frac n2+1}_{1,1}} \end{equation*} (see e.g.~the Appendix of \cite{DanconaFanelli08-a}), and by the scaling $v(t,x)\to v(Mt,Mx)$ we obtain \begin{equation*} \|e^{it\langle D\rangle_{M} }f\|_{L^{\infty}_{x}}\le C\frac{M^{\frac n2}}{|t|^{n/2}} \|f(M \cdot)\|_{B^{\frac n2+1}_{1,1}}. \end{equation*} The Besov norm in the estimate is not homogeneous, however at least for $M\ge c_{0}>0$ we get \begin{equation}\label{eq:dispM} \|e^{it\langle D\rangle_{M} }f\|_{L^{\infty}_{x}} \le C(c_{0})\frac{M^{n+1}}{|t|^{n/2}} \|f\|_{B^{\frac n2+1}_{1,1}}. \end{equation} We can now apply this estimate to equation \eqref{eq:WE}, i.e.~to the sequence of problems \eqref{eq:WEj}. The relevant operator for \eqref{eq:WE} is \begin{equation*} e^{it(m^{2}-\Delta_{x,y})^{1/2}}f= \sum_{j\ge1}e^{it\langle D\rangle_{M_{j}}}f_{j}(x)\phi_{j}(y),\qquad M_{j}^{2}=m^{2}+\lambda_{j}^{2} \end{equation*} where of course $f(x,y)=\sum f_{j}(x)\phi_{j}(y)$. We obtain \begin{equation*} \|e^{it(m^{2}-\Delta_{x,y})^{1/2}}f\|_{L^{\infty}_{x,y}}\le C|t|^{-n/2}\sum_{j\ge1}(m^{2}+\lambda_{j}^{2})^{\frac{n+1}{2}} \|f_{j}(x)\|_{B^{\frac n2+1}_{1,1}}\|\phi_{j}\|_{L^{\infty}}.
\end{equation*} The last sum defines a norm of the initial data $f$ which can be estimated by the $W^{N,1}$ norm of $f$ for $N$ large enough. See \cite{LeskyRacke03-a}, \cite{MetcalfeSoggeStewart05-a} for more details and the applications to nonlinear wave equations. Following the same lines, one can prove Strichartz estimates for the wave-Klein-Gordon equation on $\Omega$. Finally, smoothing estimates for the operators $e^{it\langle D\rangle_{M} }$ connected to the equation on $\Omega$ \begin{equation*} u_{tt}-\Delta_{x,y}u+M^{2}u=0 \end{equation*} take the form \begin{equation}\label{eq:smooKGintr} \|\langle x\rangle^{-1/2-\epsilon}e^{it\langle D\rangle_{M} }f\|_{L^{2}_{t}L^{2}(\Omega)}\lesssim \|f\|_{L^{2}(\Omega)}. \end{equation} \end{example} The above approach, based on splitting and diagonalizing part of the operator, requires the domain to be of product type and breaks down for more general domains. Even the spectral problem is difficult, as the following considerations suggest. \begin{remark}\label{rem:spect} For flat waveguides the spectrum is purely continuous; the same is true for {\em certain} locally perturbed waveguides, in particular for any local perturbation $\Omega$ of $(0,1) \times \mathbb{R}^{n-1}$ for which $\nu(x)\cdot x'\leq 0$ holds at every point $x=(x_1,x')$ of the boundary $\partial\Omega$. On the other hand, one can construct local perturbations where the Dirichlet Laplacian has eigenvalues below its essential spectrum. But there may also exist eigenvalues embedded into the essential spectrum; see e.g.~\cite{Witsch90-a}, where the following example is constructed. Let $D\subset \mathbb{R}^2$ be bounded, star-shaped with respect to the origin and invariant under the orthogonal group. Let $\rho\in C^0(\mathbb{R}^k)$ be positive, $\rho(x)=1$ for large $|x|$, $\max\,\rho > 1$. Then the perturbed waveguide $$ \Omega := \bigcup_{x\in \mathbb{R}^k} \left(\{x\} \times \rho(x)D\right) $$ has an unbounded sequence of multiple eigenvalues embedded into the continuous spectrum. Notice that the presence of embedded eigenvalues, and hence of stationary solutions, is in contrast with the decay of solutions. Thus we see that suitable conditions of \emph{repulsivity} on the shape of the domain are essential in order to exclude eigenvalues and ensure dispersion; conversely, in the presence of bumps in the wrong direction, even small, we expect in general concentration of energy and disruption of dispersion. \end{remark} In order to ensure dispersion, it is reasonable to assume that the sections of $\Omega$ at fixed $y$ \begin{equation*} \{x\in \mathbb{R}^{n}\colon (x,y)\in \Omega\} \end{equation*} be nontrapping exterior domains. Actually, in order to prove smoothing we shall need the following stronger condition (see Figure \ref{fig:rep}): \begin{definition}\label{def:rep} Let $\Omega$ be an open subset of $\mathbb{R}^{n}_{x}\times \mathbb{R}^{m}_{y}$ with Lipschitz boundary, $n,m\ge1$. We say that $\Omega$ is \emph{repulsive with respect to the $x$ variables} if, denoting by $\nu$ the exterior normal to $\partial \Omega$, we have at all points of the boundary \begin{equation}\label{eq:repulsive} \nu \cdot(x,0)\le0.
\end{equation} \end{definition} \begin{figure}[h] \begin{minipage}{.4\linewidth} \centering \includegraphics[height=.9in]{repulsiveasy.pdf} \end{minipage} \hspace{.1\linewidth} \begin{minipage}{.4\linewidth} \centering \includegraphics[height=1.2in]{nonrepulsiveasy.pdf} \end{minipage} \caption{A repulsive (left) and nonrepulsive (right) domain w.r.to $x$} \label{fig:rep} \end{figure} We can now state our results. We shall always consider a waveguide $\Omega$ satisfying condition \eqref{eq:repulsive}, with $n\ge3$ and $m\ge1$, and a selfadjoint Schr\"odinger operator \begin{equation*} H= -\Delta u +V(x,y) \end{equation*} with Dirichlet b.c., with a locally bounded potential $V(x,y)$ satisfying the assumptions \begin{equation}\label{eq:assVintro} V\ge0,\qquad -x \cdot \nabla_{x}(|x|V)\ge0. \end{equation} The conditions on the potential can be substantially relaxed, for instance by admitting a negative part, small in a suitable sense. We did not strive for maximum generality. \subsection*{Resolvent estimate}\label{sub:resolvent_estimate} Our approach is based on the Kato smoothing theory (see \cite{Kato65-a}, see also \cite{RodnianskiSchlag04-a}). The crucial tool, which can be considered the fundamental result of the paper, is a uniform resolvent estimate for the operator $H$. To this end we adapt the method of Morawetz multipliers in the version of \cite{BarceloRuizVega06-a}. Using the non isotropic Morrey-Campanato norms \begin{equation*}% \|f\|_{X}=\sup_{R>0}R^{-1/2}\|f\|_{L^{2}(|x|\le R)},\qquad \|f\|_{X_{1}}=\sup_{R>0}R^{-3/2}\|f\|_{L^{2}(|x|\le R)}, \end{equation*} \begin{equation*} \|f\|_{X^{*}}=\sum_{j\in \mathbb{Z}}2^{j/2} \|f\|_{L^{2}(2^{j-1}\le |x|\le 2^{j})} \end{equation*} (which are asymmetric in $x$ and $y$), our estimate for the resolvent operator $R(z)=(H-z)^{-1}$ can be stated as follows \begin{equation*} \|\nabla_{x}R(z)f\|_{X}^{2}+\|R(z)f\|_{X_{1}}^{2} +|z|\|R(z)f\|_{X}^{2} \le 5000n^{2}\|f\|_{X^{*}}^{2} \end{equation*} for all $z\not\in \mathbb{R}$ (see Theorem \ref{the:resest}). \subsection*{Smoothing estimates}\label{sub:smoothing_estimates} Using the previous resolvent estimate, an application of Kato's theory of smooth operators allows us to prove the following smoothing estimates for the Schr\"odinger flow $e^{itH}$ \begin{equation}\label{eq:smoHV1intro} \|\langle x\rangle^{-1/2-\epsilon} |D_{x}|^{1/2} e^{itH}f\|_{L^{2}_{t}L^{2}(\Omega)}+ \|\langle x\rangle^{-1-\epsilon}e^{itH}f\|_{L^{2}_{t}L^{2}(\Omega)}\lesssim \|f\|_{L^{2}(\Omega)}, \end{equation} while the nonhomogeneous form of the estimates is \begin{equation}\label{eq:smoointro} \begin{split} \left\|\langle x\rangle^{-1/2-\epsilon} \int_{0}^{t} \nabla_{x} e^{i(t-s)H}F(s)ds \right\|_{L^{2}_{t}L^{2}(\Omega)}+& \\ +\left\|\langle x\rangle^{-1-\epsilon} \int_{0}^{t} e^{i(t-s)H}F(s)ds \right\|_{L^{2}_{t}L^{2}(\Omega)} &\lesssim \|\langle x\rangle^{1+\epsilon} F\|_{L^{2}_{t}L^{2}(\Omega)} \end{split} \end{equation} (see Theorems \ref{the:smoosch}, \ref{the:smoosch2}, \ref{the:smoVH3bis}). 
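Both these bounds and the wave estimates below follow from the resolvent estimate through Kato's theory of $H$-smooth operators; we recall the principle in schematic form (see \cite{Kato65-a}, \cite{RodnianskiSchlag04-a} for precise statements and constants): if $A$ is a closed operator such that
\begin{equation*}
\sup_{z\notin \mathbb{R}}\|A(H-z)^{-1}A^{*}\|_{L^{2}(\Omega)\to L^{2}(\Omega)}<\infty ,
\end{equation*}
then $A$ is $H$-smooth and
\begin{equation*}
\int_{\mathbb{R}}\|Ae^{itH}f\|^{2}_{L^{2}(\Omega)}\,dt\lesssim \|f\|^{2}_{L^{2}(\Omega)}.
\end{equation*}
This is how the resolvent estimate, proved in Section \ref{sec:resolvent}, is converted into the smoothing estimates above and into the wave estimates below.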
On the other hand, for the wave-Klein-Gordon equation we prove the estimate ($\mu\ge0$) \begin{equation}\label{eq:smowaveintro} \|\langle x\rangle^{-1/2-\epsilon}e^{it \sqrt{H+\mu^{2}}}f\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|f\|_{L^{2}(\Omega)} \end{equation} and, for the inhomogeneous operator, \begin{equation}\label{eq:smowaveinhintro} \left\|\int_{0}^{t}\langle x\rangle^{-1/2-\epsilon}e^{i(t-s) \sqrt{H+\mu^{2}}}F(s)ds \right\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|\langle x\rangle^{1/2+\epsilon} F\|_{L^{2}_{t}L^{2}(\Omega)} \end{equation} (see Theorem \ref{the:smoowave}). Notice that our results are comparable with the flat case outlined in Examples \ref{exa:schroflat} and \ref{exa:waveflat}. \subsection*{Strichartz estimates}\label{sub:strichart_estimates} A typical application of the smoothing estimates is to deduce Strichartz estimates. We were only able to prove Strichartz estimates for the Schr\"odinger flow $e^{itH}$, under the additional assumption that the waveguide $\Omega$ coincides with a flat waveguide outside some bounded region. In this case, we can recover the full set of Strichartz estimates, however with a loss of 1/2 derivatives: indeed, we can prove for all $n\ge3$ and $m\ge1$ the endpoint estimate \begin{equation}\label{eq:strichnintro} \|e^{itH}f\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim (1+\|\langle x\rangle^{1+\epsilon} V\|_{L^{2}_{y}L^{n}_{x}}) \Bigl( \|f\|_{L^{2}(\Omega)}+\||D_{x}|^{1/2}f\|_{L^{2}(\Omega)} \Bigr) \end{equation} (see Theorem \ref{the:strichschro}). \subsection*{Absence of eigenvalues}\label{sub:eigenvalues} As an immediate corollary to the smoothing estimates, we deduce that, under the conditions on the domain and on the Schr\"odinger operator $H$ given above (i.e., $n\ge3$, $m\ge1$, $\Omega$ repulsive w.r.to $x$ and $V$ as in \eqref{eq:assVintro}), there are no eigenvalues of $H$, since the presence of bound states would contradict the $L^{2}$ integrability in time of the solution. This generalizes the known results for the special cases in \cite{Faulhaber82-a} and \cite{MorgenrotherWerner87-a} described in Remark \ref{rem:spect}. \bigskip The natural domain of application of our estimates are problems of local and global existence for nonlinear evolution equations. We prefer not to pursue this line of research here; the applications to nonlinear Schr\"odinger and wave equations on non flat waveguides will be the object of future works. \section{A resolvent estimate}\label{sec:resolvent} This section is devoted to a study of the resolvent equation $u=R(\lambda+i \epsilon)f$ or equivalently \begin{equation}\label{eq:reseq} -\Delta u -(\lambda+i \epsilon)u+V(x,y)u=f. \end{equation} We shall follow the classical Morawetz multiplier method \cite{Morawetz68-a}, in the framework of Morrey-Campanato spaces as introduced in \cite{PerthameVega99-a}, see also \cite{BarceloRuizVegaVilela-a} and \cite{DanconaFanelli08-a}. Here additional difficulties are the presence of a boundary, and the necessity to handle the variables $x$ and $y$ in a different way. Moreover, our estimate \eqref{eq:fundresest} is stronger than the results in \cite{BarceloRuizVegaVilela-a} in that it provides a uniform control of the operator $\langle x\rangle^{-1/2-}|z|^{1/2}R(z)\langle x\rangle^{-1/2-}$ (corresponding to the last term at the l.h.s. of \eqref{eq:fundresest}); this will allow us to prove a sharp smoothing estimate for the wave equation in Theorem \ref{the:smoowave}. 
The Morrey-Campanato type norms needed here are the following: \begin{equation}\label{eq:MC} \|f\|_{X}=\sup_{R>0}R^{-1/2}\|f\|_{L^{2}(|x|\le R)},\qquad \|f\|_{X_{1}}=\sup_{R>0}R^{-3/2}\|f\|_{L^{2}(|x|\le R)}, \end{equation} \begin{equation}\label{eq:Xs} \|f\|_{X^{*}}=\sum_{j\in \mathbb{Z}}2^{j/2} \|f\|_{L^{2}(2^{j-1}\le |x|\le 2^{j})} \end{equation} and \begin{equation}\label{eq:surfMC} \|f\|_{X_{2}}=\sup_{R>0}R^{-1}\|f\|_{L^{2}(|x|=R)}. \end{equation} Notice that the decomposition involves the variables $x$ only. The $X^{*}$ norm is actually dual to the $X$ norm, but we shall not need this fact. For functions $f\in L^{2}_{loc}(\Omega)$ we extend the definition of these norms by restriction, meaning that \begin{equation*} \|f\|_{X}=\|Ef\|_{X}, \qquad Ef=f \ \text{on $\Omega$,} \qquad Ef=0 \ \text{on $\mathbb{R}^{n}\setminus\Omega$,} \end{equation*} We shall use the following elementary inequalities: \begin{equation}\label{eq:MCin1} \|fg\|_{L^{1}(\Omega)}\le \|f\|_{X}\|g\|_{X^{*}}, \end{equation} \begin{equation}\label{eq:MCin3} \|fg\|_{L^{1}(\Omega\cap\{R\le|x|\le 2R\})} \le 4R^{2}\|f\|_{X}\|g\|_{X_{1}} \end{equation} and \begin{equation}\label{eq:MCin4} \|fgh\|_{L^{1}(\Omega)} \le 2\|f\|_{X_{1}}\|g\|_{X^{*}}\||x|h\|_{L^{\infty}} \end{equation} which implies in particular \begin{equation}\label{eq:MCin2} \|fg\|_{L^{1}(\Omega\cap\{|x|\le R\})} \le 2R \|f\|_{X_{1}}\|g\|_{X^{*}}, \end{equation} Moreover it is easy to see that \begin{equation}\label{eq:comparnorm} \|f\|_{X_{1}}\le \|f\|_{X_{2}}. \end{equation} It will also be useful in the following to compare the above norms with standard weighted $L^{2}$ norms, with weights of the form \begin{equation}\label{eq:xxR} \langle x\rangle_{R} =(R+|x|^{2}/R)^{1/2},\qquad \langle x\rangle=(1+|x|^{2})^{1/2}. \end{equation} We notice that for all real $s>0$, and for $u$ defined on $\Omega$ (after extending $u$ as zero on $\mathbb{R}^{n}\times \mathbb{R}^{m}$ outside $\Omega$ for simplicity of notation) \begin{equation*} \begin{split} \int(R+|x|^{2}/R)^{-s}|u|^{2}dxdy\le R^{-s}\int_{|x|\le R}|u|^{2}+ R^{s}\int_{|x|> R}|x|^{-2s}|u|^{2} \\ \le R^{-s}\int_{|x|\le R}|u|^{2}+ 2^{2s}\sum_{j\ge j_{R}}R^{s} 2^{-2js} \int_{C_{j}}|u|^{2} \end{split} \end{equation*} where $j_{R}=[\log_{2}R]$ and $C_{j}=\left\{(x,y):2^{j-1}\le|x|<2^{j}\right\}$. The second term is bounded by \begin{equation*} \left( \sup_{\rho>0}\rho^{-s}\int_{|x|<\rho}|u|^{2} \right) 2^{2s}R^{s}\sum_{j\ge j_{R}}2^{-js} \le \frac{2^{2s}}{1-2^{-s}} \left( \sup_{\rho>0}\rho^{-s}\int_{|x|<\rho}|u|^{2} \right) \end{equation*} so that we have the inequality \begin{equation}\label{eq:MCtoweightgen} \int\langle x\rangle_{R} ^{-2s}|u|^{2}dxdy\le \frac{2^{4s}}{2^{s}-1} \sup_{\rho>0}\frac{1}{\rho^{s}}\int_{|x|<\rho}|u|^{2}. \end{equation} In particular we have, for any $R>0$, \begin{equation}\label{eq:MCtoweight} \|\langle x\rangle_{R} ^{-1}u\|_{L^{2}(\Omega)}\le 4\|u\|_{X},\qquad \|\langle x\rangle_{R} ^{-3}u\|_{L^{2}(\Omega)}\le 10\|u\|_{X_{1}}. \end{equation} By a similar proof we obtain for any $R>0$ \begin{equation}\label{eq:weighttoMC} \|u\|_{X^{*}}\le 16 \|\langle x\rangle_{R} u\|_{L^{2}(\Omega)}. \end{equation} Finally, we notice the following inequality, valid for all $\gamma>0$ and $\epsilon>0$: \begin{equation}\label{eq:weight1} \|\langle x\rangle^{-\frac{\gamma}{2}-\epsilon}u\|_{L^{2}}\le C(\gamma,\epsilon) \sup_{R>0} \|\langle x\rangle_{R} ^{-\gamma}u\|_{L^{2}}. \end{equation} which evidently holds also with $L^{2}(\Omega)$ in place of $L^{2}$. 
To prove it is sufficient to write \begin{equation*} \int\langle x\rangle^{-\gamma-2 \epsilon}|u|^{2}\le \int_{|x|\le1}|u|^{2}+\sum_{j\ge0}2^{-j(\gamma+2 \epsilon)} \int_{2^{j}\le|x|<2^{j+1}}|u|^{2} \end{equation*} \begin{equation*} \le (1+2^{\gamma})\sum_{j\ge0}2^{-2j \epsilon} \sup_{R>0}\frac{1}{R^{\gamma}}\int_{|x|\le R}|u|^{2} \end{equation*} and observe that \begin{equation*} \frac{1}{R^{\gamma}}\one{|x|\le R}\le 2^{\gamma} \langle x\rangle_{R} ^{-2\gamma}. \end{equation*} \begin{theorem}\label{the:resest} Let $\Omega \subseteq \mathbb{R}^{n}_{x}\times \mathbb{R}^{m}_{y}$, $n\ge3$, $m\ge1$, be a domain repulsive with respect to the variables $x$, with Lipschitz boundary. Assume the potential $V(x,y)$ satisfies \begin{equation}\label{eq:assV} V\ge0,\qquad -\partial_{x}(|x|V)\ge0 \end{equation} and let $u(x,y)\in H^{1}_{0}(\Omega)$ be a solution of equation \eqref{eq:reseq}. Then the following estimate holds: \begin{equation}\label{eq:fundresest} \|\nabla_{x}u\|_{X}^{2}+\|u\|_{X_{1}}^{2} +(|\lambda|+|\epsilon|)\|u\|_{X}^{2} \le 5000n^{2}\|f\|_{X^{*}}^{2}. \end{equation} \end{theorem} \begin{proof Consider two real valued functions $\psi(x)$ and $\phi(x)$, \emph{independent of the variable} $y$, such that \begin{equation}\label{eq:assphipsi} \nabla \psi, \Delta \psi, \nabla \Delta \psi,\phi,\nabla \phi \ \text{are bounded for $|x|$ large}. \end{equation} and \begin{equation}\label{eq:asspsi} \nu \cdot \nabla \psi \le0 \ \text{at $\partial \Omega$}. \end{equation} Notice that for a function $\psi(x)$ depending only on $x$ in a radial way, we have \begin{equation*} \nu \cdot\nabla \psi=\nu \cdot(x,0)|x|^{-1}\partial_{x}\psi \end{equation*} and recalling Definition \ref{def:rep}, we see that \eqref{eq:asspsi} is equivalent to the condition that the radial derivative of $\psi$ be non negative: \begin{equation}\label{eq:asspsi2} x \cdot \nabla_{x}\psi\ge0. \end{equation} Then we can form the Morawetz multiplier \begin{equation}\label{eq:mult} (\Delta \psi-\phi)\overline{u}+2 \nabla \psi \cdot \nabla \overline{u}. \end{equation} Multiplying the resolvent equation \eqref{eq:reseq} by the quantity \eqref{eq:mult} and taking the real part we obtain the identity \begin{equation}\label{eq:ident1} \begin{split} \nabla u(2D^{2}\psi-\phi I) & \nabla \overline{u}+ \frac12 \Delta(\phi-\Delta \psi)|u|^{2} +\phi \lambda|u|^{2} -(\nabla V \cdot \nabla \psi+\phi V)|u|^{2} +\nabla \cdot \Re Q_{1} = \\ & =\nabla \cdot\Re Q+ \Re f(2 \nabla \psi \cdot \nabla \overline{u} +(\Delta \psi-\phi) \overline{u}) - 2 \epsilon\Im(\nabla \psi \cdot \nabla \overline{u}\ u) \end{split} \end{equation} where \begin{equation}\label{eq:Q} Q =\Delta \psi \overline{u} \nabla u -\frac12 \nabla \Delta \psi|u|^{2} -(V-\lambda)\nabla \psi|u|^{2} +\frac12 \nabla \phi|u|^{2} -\phi \overline{u}\nabla u \end{equation} and \begin{equation}\label{eq:Q1} Q_{1}=\nabla \psi|\nabla u|^{2} -2\nabla u(\nabla \psi \cdot \nabla\overline{u}) \end{equation} Our goal is to integrate \eqref{eq:fundest} on $\Omega$, with a suitable choice of the weights $\phi$ and $\psi$. First of all we show how to handle the last term at the right hand side. 
Multiplying \eqref{eq:reseq} by $\overline{u}$ and splitting real and imaginary parts we obtain the two identities \begin{equation}\label{eq:imu} \Im \nabla \cdot\left\{\nabla u \overline{u}\right\} +\epsilon|u|^{2}=-\Im(f \overline{u}) \quad \implies \quad \pm\Im \nabla \cdot\left\{\nabla u \overline{u}\right\} +|\epsilon||u|^{2}=\mp\Im(f \overline{u}) \end{equation} $\pm$ being the sign of $\epsilon$, and \begin{equation}\label{eq:reu} \Re \nabla \cdot\left\{-\nabla u \overline{u}\right\} +|\nabla u|^{2}=(\lambda-V)|u|^{2}+\Re(f \overline{u}). \end{equation} From the second one we deduce (with $\lambda^{+}=\max\{\lambda,0\}$) \begin{equation*} |\epsilon||\nabla u|^{2}\le |\epsilon| \lambda^{+}|u|^{2}+|\epsilon|\Re(f \overline{u})+ \Re \nabla \cdot\left\{|\epsilon| \nabla u \overline{u}\right\} \end{equation*} by the positivity of $V(x,y)$, and using \eqref{eq:imu} \begin{equation*} =\mp\lambda^{+}\Im(f \overline{u}) +\nabla \cdot\left\{\pm\Im \lambda^{+}u\nabla \overline{u} +\Re |\epsilon| \nabla u \overline{u} \right\} +|\epsilon|\Re(f \overline{u}) \end{equation*} and hence \begin{equation}\label{eq:reu2} |\epsilon||\nabla u|^{2}\le (\lambda^{+}+|\epsilon|)|f \overline{u}| +\nabla \cdot\left\{\pm\Im \lambda^{+}u\nabla \overline{u} +\Re |\epsilon| \nabla u \overline{u} \right\}. \end{equation} Now by Cauchy-Schwarz we can write \begin{equation*} 2 |\epsilon u\nabla \overline{u}|\le |\epsilon|(\lambda^{+}+|\epsilon|)^{1/2}|u|^{2}+ |\epsilon|(\lambda^{+}+|\epsilon|)^{-1/2}|\nabla u|^{2} \end{equation*} and using \eqref{eq:imu}, \eqref{eq:reu2} \begin{equation*} \le 2(\lambda^{+}+|\epsilon|)^{1/2}|f \overline{u}|\mp \nabla \cdot\left\{\Im\nabla \overline{u}u\right\} (\lambda^{+}+|\epsilon|)^{1/2} + \nabla \cdot\left\{\pm\Im \lambda^{+}u\nabla \overline{u} +\Re |\epsilon| \nabla u \overline{u}\right\} (\lambda^{+}+|\epsilon|)^{-1/2}. \end{equation*} In conclusion we have the estimate \begin{equation}\label{eq:auxest} 2 |\epsilon u\nabla \overline{u}|\le 2(\sqrt{|\epsilon|} +\sqrt{\lambda^{+}})|f \overline{u}| + \nabla \cdot A \end{equation} with \begin{equation}\label{eq:A} A=\frac{|\epsilon|\Re \nabla u \overline{u} \pm(2 \lambda^{+}+|\epsilon|)\Im \nabla \overline{u}u} {(\lambda^{+}+|\epsilon|)^{1/2}}, \qquad \pm=\ \text{sign of $\epsilon$}. \end{equation} We insert this in our basic identity \eqref{eq:ident1} obtaining the inequality \begin{equation}\label{eq:fundest} \begin{split} \nabla u(2D^{2}\psi-\phi I) & \nabla \overline{u}+ \frac12 \Delta(\phi-\Delta \psi)|u|^{2} +\phi \lambda|u|^{2} -(\nabla V \cdot \nabla \psi+\phi V)|u|^{2} +\nabla \cdot\Re Q_{1} \le \\ & \le 2|f \nabla \psi \cdot \nabla \overline{u}| +|f(\Delta \psi-\phi) \overline{u}| +2\|\nabla \psi\|_{L^{\infty}} (\sqrt{|\epsilon|}+ \sqrt{\lambda^{+}})|f \overline{u}| +\nabla \cdot \Re P \end{split} \end{equation} where \begin{equation}\label{eq:P} P=Q+\|\nabla \psi\|_{L^{\infty}}A \end{equation} with $A,Q,Q_{1}$ given by \eqref{eq:A}, \eqref{eq:Q} \eqref{eq:Q1} respectively. Next we show how to estimate the integral over $\Omega$ of the right hand side of \eqref{eq:fundest}. We need an additional estimate, obtained by multiplying \eqref{eq:reseq} by $\chi \overline{u}$ and taking the imaginary part: as in \eqref{eq:imu} we get \begin{equation}\label{eq:imu2} \pm\Im \nabla \cdot\left\{\chi\nabla u \overline{u}\right\} +|\epsilon|\chi|u|^{2}=\mp\Im(\chi f \overline{u}) \mp\Im(\nabla \chi \cdot \nabla \overline{u}u). 
\end{equation} We choose $\chi$ as a radial function of the variables $x$ only, and precisely \begin{equation*} \chi= \begin{cases} 1 &\text{if $ |x|<R $,}\\ 0 &\text{if $ |x|>2R $,}\\ 2-|x|/R &\text{if $ R\le |x|\le 2R $.} \end{cases} \end{equation*} Then integrating \eqref{eq:imu2} on $\Omega$ and noticing that the boundary terms disappear (thanks to the Dirichlet b.c.), we arrive at the inequality \begin{equation*} |\epsilon|\int_{\Omega\cap\{|x|\le R\}}|u|^{2} \le\int_{\Omega\cap\{|x|\le 2R\}}|f \overline{u}|+ \frac1R \int_{\Omega\cap\{R\le|x|\le 2R\}}|\nabla_{x} u| |u| \end{equation*} since $\chi$ depends only on $x$. We estimate the right hand side using \eqref{eq:MCin2}, \eqref{eq:MCin3}, and dividing by $R$ we obtain \begin{equation*} \frac{|\epsilon|}R \int_{\Omega\cap\{|x|\le R\}}|u|^{2}\le 4\|f\|_{X^{*}}\|u\|_{X_{1}} +4\|\nabla_{x} u\|_{X}\|u\|_{X_{1}} \end{equation*} and taking the sup in $R$ we conclude \begin{equation}\label{eq:basic2} |\epsilon|\|u\|_{X}^{2}\le 4\left(\|f\|_{X^{*}}+\|\nabla_{x} u\|_{X}\right) \|u\|_{X_{1}} \end{equation} Now consider the quantity \begin{equation*} 2(\sqrt{\lambda^{+}}+\sqrt{|\epsilon|} ) \|f \overline{u}\|_{L^{1}(\Omega)}\le 2(\sqrt{\lambda^{+}}+\sqrt{|\epsilon|} ) \|f\|_{X^{*}}\|u\|_{X} \end{equation*} where we used again \eqref{eq:MCin1}. By \eqref{eq:basic2} we have \begin{equation*} \le 2\sqrt{\lambda^{+}}\|f\|_{X^{*}}\|u\|_{X}+ 4\|f\|_{X^{*}}(\|f\|_{X^{*}}+\|\nabla_{x}u\|_{X})^{1/2} \|u\|_{X_{1}}^{1/2} \end{equation*} and hence, for all $\delta\in(0,1)$, \begin{equation}\label{eq:basic3} 2(\sqrt{\lambda^{+}}+\sqrt{|\epsilon|} ) \|f \overline{u}\|_{L^{1}(\Omega)} \le\delta(\lambda^{+}\|u\|_{X}^{2} +\|\nabla_{x} u\|_{X}^{2}+\|u\|_{X_{1}}^{2}) +5 \delta^{-1}\|f\|_{X^{*}}^{2}. \end{equation} This inequality will be used to estimate the third term in the r.h.s. of \eqref{eq:fundest}. We consider now the term $\nabla \cdot \Re P =\Re\nabla \cdot(Q+\|\nabla \psi\|_{L^{\infty}}A)$, which vanishes after integration. To see this, we define the cylinder \begin{equation*} C_{R}=\left\{(x,y)\colon |x|<R,\ y\in \mathbb{R}^{m} \right\}, \end{equation*} we integrate $\nabla \cdot P$ on $\Omega\cap C_{R}$ and let $R\to+\infty$. The boundary of $\Omega\cap C_{R}$ is the union of the two sets \begin{equation*} S_{1}=\partial\Omega\cap C_{R} \quad\text{and}\quad S_{2}=\partial C_{R}\cap \Omega= \left\{(x,y)\in \Omega \colon |x|=R\right\} \end{equation*} and orrespondingly, we get two surface integrals. The integral on $S_{1}$ vanishes thanks to the Dirichlet boundary condition, thus we are left with the boundary integral \begin{equation*} \int_{S_{2}}\nu \cdot P d \sigma. \end{equation*} By the first assumption \eqref{eq:assphipsi} on the weights $\phi,\psi$ we have evidently \begin{equation}\label{eq:liminf} \liminf_{R\to+\infty}\int_{S_{2}}\nu \cdot P d \sigma=0 \end{equation} since the function $u$ is in $H^{1}(\Omega)$. This proves that \begin{equation*} \int_{\Omega}(\nabla \cdot P) dx dy=0. \end{equation*} Concerning the first and the second term at the right hand side of \eqref{eq:fundest}, we estimate their integrals using \eqref{eq:MCin1} \begin{equation*} 2\int_{\Omega}|f \nabla \psi \cdot \nabla \overline{u}| \le 2\|\nabla \psi\|_{L^{\infty}}\|f\|_{X^{*}}\|\nabla_{x} u\|_{X} \end{equation*} (recall $\psi=\psi(x)$) and \eqref{eq:MCin4} \begin{equation*} \int_{\Omega}|f (\Delta \psi -\phi) \overline{u}|\le 2\||x|(\Delta \psi -\phi)\|_{L^{\infty}} \|f\|_{X^{*}}\|u\|_{X_{1}}. 
\end{equation*} Summing up, the integral over $\Omega$ of the right hand side of \eqref{eq:fundest} is bounded by \begin{equation}\label{eq:RHSint} C(\phi,\psi)\delta(\lambda^{+}\|u\|_{X}^{2} +\|\nabla_{x} u\|_{X}^{2}+\|u\|_{X_{1}}^{2}) +C(\phi,\psi)\delta^{-1}\|f\|_{X^{*}}^{2} \end{equation} with \begin{equation}\label{eq:cphipsi} C(\phi,\psi)=10\|\nabla \psi\|_{L^{\infty}}+ 10\||x|(\Delta \psi -\phi)\|_{L^{\infty}}. \end{equation} Consider now the left hand side of \eqref{eq:fundest}. The term in divergence form $\nabla \cdot \Re Q_{1}$, with \begin{equation*} Q_{1}=\nabla \psi|\nabla u|^{2} -2\nabla u(\nabla \psi \cdot \nabla\overline{u}) \end{equation*} can be handled as above by integrating first on the cylinder $C_{R}$ and then letting $R\to+\infty$. The integral on $S_{2}$ satisfies again \eqref{eq:liminf} and vanishes in the limit. As to the integral on $S_{1} \subseteq \partial\Omega$, we notice that $\nabla u$ at $\partial \Omega$ must be normal to the boundary, because of the Dirichlet boundary condition; in other words, denoting the normal derivative at $\partial \Omega$ with $\partial_{\nu}u=\nu \cdot\nabla u$, we must have \begin{equation*} \nabla u=\nu \partial_{\nu}u \ \text{\ \ at\ \ $\partial \Omega$} \end{equation*} so that \begin{equation*} \nu \cdot\nabla Q_{1}= \nu \cdot \nabla \psi|\partial_{\nu}u|^{2}- 2 \partial_{\nu}u(\nabla \psi \cdot\nu\ \partial_{\nu}\overline{u})= -(\nu \cdot \nabla \psi)|\partial_{\nu}u|^{2}. \end{equation*} Thus the integral on $S_{1}$ can be written \begin{equation*} I_{R}=-\int_{S_{1}}\nu \cdot \nabla \psi|\partial_{\nu}u|^{2}d \sigma \end{equation*} and under the second assumption \eqref{eq:asspsi} on the weight $\psi$ we obtain \begin{equation*} I_{R}\ge0 \ \text{for all $R$}. \end{equation*} Hence we can drop $I_{R}$ from the computation, and recalling also \eqref{eq:RHSint} we obtain the basic integral inequality \begin{equation}\label{eq:fundestint} \begin{split} & \int_{\Omega} \bigl[ \nabla u(2D^{2}\psi-\phi I)\nabla \overline{u}+ \frac12 \Delta(\phi-\Delta \psi)|u|^{2} +\phi \lambda|u|^{2} -(\nabla V \cdot \nabla \psi+\phi V)|u|^{2}\bigr] \le \\ & \qquad \qquad \le C(\phi,\psi)\delta(\lambda^{+}\|u\|_{X}^{2} +\|\nabla_{x} u\|_{X}^{2}+\|u\|_{X_{1}}^{2}) +C(\phi,\psi)\delta^{-1}\|f\|_{X^{*}}^{2} \end{split} \end{equation} It remains to choose the functions $\phi,\psi$ in an appropriate way. When $\lambda>0$ we make the following choice, inspired by \cite{BarceloRuizVegaVilela-a}: \begin{equation}\label{eq:psiphi1} \psi(x,y)= \begin{cases} |x| &\text{if $ |x|\ge R $,}\\ \frac R2+\frac{|x|^{2}}{2R} &\text{if $ |x|< R $,} \end{cases} \qquad \phi(x,y)= \begin{cases} 0 &\text{if $ |x|\ge R $,}\\ \frac1R &\text{if $ |x|<R $.} \end{cases} \end{equation} Notice that assumptions \eqref{eq:assphipsi} and \eqref{eq:asspsi} (i.e.~\eqref{eq:asspsi2}) are satisfied. We compute the quantities relevant to our estimate: we have \begin{equation*} \phi-\Delta \psi= \begin{cases} -\frac{n-1}{|x|} &\text{if $ |x|\ge R $,}\\ -\frac{n-1}{R} &\text{if $ |x|<R $} \end{cases} \end{equation*} (with a cancelation of the singularity at $|x|=R$). Thus we have, in distribution sense, \begin{equation*} \Delta(\phi-\Delta \psi)= \frac{n-1}{R^{2}}\delta_{|x|=R}+ \begin{cases} \frac{\mu_{n}}{|x|^{3}} &\text{if $ |x|\ge R $,}\\ 0 &\text{if $ |x|<R $,} \end{cases} \qquad \mu_{n}=(n-1)(n-3) \end{equation*} and also \begin{equation*} \|\nabla \psi\|_{L^{\infty}}=1,\qquad \||x|(\Delta \psi-\phi)\|_{L^{\infty}}=n-1 \qquad \implies \qquad C(\phi,\psi)=10n. 
\end{equation*} For the first term in \eqref{eq:fundestint} we need the elementary formula, valid for a radial function $\psi=\sigma(|x|)$ \begin{equation*} \nabla uD^{2}\psi\nabla \overline{u}= \sigma''|\partial_{x} u|^{2}+ \frac{\sigma'}{|x|}|\nabla_{x}u-\widehat{x}\ \partial_{x}u|^{2} \end{equation*} which implies \begin{equation*} \nabla u(2D^{2}\psi-\phi I)\nabla \overline{u}= \begin{cases} \frac2{|x|}|\nabla_{x}u-\widehat{x} \partial_{x}u|^{2} &\text{if $ |x|\ge R $,}\\ \frac1R|\nabla_{x}u|^{2} &\text{if $ |x|<R $.} \end{cases} \end{equation*} Finally, the terms containing the potential $V$ are easily seen to be positive, thanks to assumption \eqref{eq:assV}, and we can drop them. Thus \eqref{eq:fundestint} implies \begin{equation*} \begin{split} \frac1R\|\nabla_{x}u\|^{2}_{L^{2}(\Omega\cap\{|x|\le R\})}+ & \frac{n-1}{2R^{2}}\int_{\Omega\cap\{|x|=R\}}|u|^{2}d \sigma +\frac \lambda R\|u\|^{2}_{L^{2}(\Omega\cap\{|x|\le R\})}\le \\ & \le 10n\delta(\lambda\|u\|_{X}^{2} +\|\nabla_{x} u\|_{X}^{2}+\|u\|_{X_{1}}^{2}) +10n\delta^{-1}\|f\|_{X^{*}}^{2} \end{split} \end{equation*} and taking the sup in $R>0$ we obtain \begin{equation*} \|\nabla_{x}u\|_{X}^{2}+\frac{n-1}{2}\|u\|_{X_{2}}^{2} +\lambda\|u\|_{X}^{2}\le 10n\delta(\lambda\|u\|_{X}^{2} +\|\nabla_{x} u\|_{X}^{2}+\|u\|_{X_{1}}^{2}) +10n\delta^{-1}\|f\|_{X^{*}}^{2} \end{equation*} Recalling that the $X_{2}$ norm dominates the $X_{1}$ norm and choosing $\delta=(20n)^{-1}$ we finally obtain in the case $\lambda>0$ \begin{equation}\label{eq:final1} \|\nabla_{x}u\|_{X}^{2}+\|u\|_{X_{1}}^{2} +\lambda\|u\|_{X}^{2}\le 400 n^{2}\|f\|_{X^{*}}^{2},\qquad \lambda>0. \end{equation} In the case $\lambda\le0$ we make a different choice of weights. Following \cite{DanconaFanelli08-a}, we take simply $\phi \equiv0$ and we define \begin{equation}\label{eq:defpsi} \psi(x)=\int_{0}^{|x|}\alpha(r)dr,\qquad \alpha(r)= \begin{cases} \frac1n-\frac{1}{2n(n+2)} \frac{R^{n-1}}{r^{n-1}} &\text{if $ r\ge R $,}\\ \frac{1}{2n}+\frac{r}{2nR}- \frac{1}{2n(n+2)} \frac{r^{3}}{R^{3}} &\text{if $ r<R $.} \end{cases} \end{equation} We have now, after some elementary computations, \begin{equation}\label{eq:lappsi} \Delta \psi= \begin{cases} \frac{n-1}{n}\frac1r &\text{if $ r\ge R $,}\\ \frac{1}{2R}+\frac{n-1}{2nr}- \frac{r^{2}}{2nR^{3}} &\text{if $ r<R $,} \end{cases} \end{equation} moreover \begin{equation*} \|\nabla \psi\|_{L^{\infty}}=\frac1n,\qquad \||x|\Delta \psi\|_{L^{\infty}}\le1-\frac1n \qquad \implies \qquad C(\phi,\psi)\le 10, \end{equation*} for $n=3$ \begin{equation*} -\Delta^{2}\psi=\frac{1}{R^{3}}\chi_{|x|<R}+\frac{4\pi}{3} \delta_{0}(x) \end{equation*} where $\delta_{0}(x)$ is the Dirac delta at 0 in the variables $x$ and $\chi_{A}$ is the characteristic function of the set $A$, while for $n\ge4$ we have ($\mu_{n}=(n-1)(n-3)$) \begin{equation*} -\Delta^{2}\psi= \left( \frac{1}{R^{3}}+\frac{\mu_{n}}{2n|x|^{3}} \right)\chi_{|x|<R}+ \frac{\mu_{n}}{n|x|^{3}}\chi_{|x|\ge R}+ \frac{n-3}{2nR^{2}}\delta_{|x|=R} \end{equation*} so that in all cases $n\ge3$ we have \begin{equation*} -\Delta^{2}\psi\ge \frac{1}{R^{3}}\chi_{|x|<R}. \end{equation*} Moreover, \begin{equation*} \nabla uD^{2}\psi\nabla \overline{u}\ge \frac{n-1}{2n(n+2)}\frac1R|\nabla_{x} u|^{2}\chi_{|x|<R}.
\end{equation*} Thus, proceeding exactly as above, we obtain \begin{equation*} \frac{n-1}{n(n+2)} \|\nabla_{x}u\|_{X}^{2}+\|u\|_{X_{1}}^{2} \le 10 \delta(\|\nabla u\|_{X}^{2}+\|u\|_{X_{1}}^{2}) +10 \delta^{-1}\|f\|_{X^{*}}^{2} \end{equation*} and choosing $\delta=(40n)^{-1}$ we conclude, for $\lambda\le0$, \begin{equation}\label{eq:final2} \|\nabla_{x}u\|_{X}^{2}+\|u\|_{X_{1}}^{2}\le 800n^{2}\|f\|_{X^{*}}^{2}. \end{equation} We collect \eqref{eq:final1} and \eqref{eq:final2} in the estimate, valid for all $\lambda\in \mathbb{R}$, \begin{equation}\label{eq:final3} \|\nabla_{x}u\|_{X}^{2}+\|u\|_{X_{1}}^{2} +\lambda^{+}\|u\|_{X}^{2}\le 800 n^{2}\|f\|_{X^{*}}^{2}. \end{equation} As a last step, we show that the factor $\lambda^{+}$ in \eqref{eq:final3} can be improved to $|\lambda|+|\epsilon|$. First of all, recalling \eqref{eq:basic2}, and using \eqref{eq:final3}, we see that \begin{equation}\label{eq:final4} |\epsilon|\|u\|^{2}_{X}\le 4(\|f\|_{X^{*}}+\|\nabla_{x} u\|_{X}) \| u\|_{X_{1}}\le 3320 n^{2}\|f\|^{2}_{X^{*}}. \end{equation} Assume now $\lambda=-\lambda^{-}\le0$. We multiply the resolvent equation \eqref{eq:reseq} by $\overline{u}$ and take real parts, obtaining \begin{equation*} |\nabla u|^{2}+\lambda^{-}|u|^{2}+V|u|^{2}= \Re(f \overline{u})+\frac12 \Delta|u|^{2}; \end{equation*} then we multiply by a weight function $\mu(x)$ and we get \begin{equation*} \mu |\nabla u|^{2}+(\lambda^{-}+V)\mu|u|^{2}= \Re(\mu f \overline{u})+\frac12 \Delta\mu |u|^{2}+ \nabla \cdot(2^{-1}\nabla(\mu|u|^{2})). \end{equation*} We now integrate on $\Omega$ as above; the term in divergence form vanishes by the Dirichlet b.c., and we obtain, using the positivity of $V$, \begin{equation}\label{eq:lastid} \int_{\Omega}\mu(|\nabla u|^{2}+ \lambda^{-}|u|^{2})\le \int_{\Omega}\mu|f \overline{u}|+ \frac12\int_{\Omega}\Delta\mu|u|^{2}. \end{equation} We now choose $\mu=\Delta\psi$ with $\psi$ defined as in \eqref{eq:defpsi}. Notice that $\Delta\mu=\Delta^{2}\psi\le0$ so we can drop the last term from the computation; on the other hand \begin{equation*} \Delta\psi\ge \frac1{2R}\chi_{|x|<R},\qquad \||x|\Delta\psi\|_{L^{\infty}}\le1 \end{equation*} and recalling property \eqref{eq:MCin4} we obtain \begin{equation*} \frac{1}{2R}\int_{|x|<R}(|\nabla u|^{2}+\lambda^{-}|u|^{2})\le 2\|f\|_{X^{*}}\|u\|_{X_{1}}. \end{equation*} Taking the sup in $R>0$ this gives \begin{equation}\label{eq:final5} \|\nabla u\|_{X}^{2}+\lambda^{-}\|u\|_{X}^{2}\le 4\|f\|_{X^{*}}\|u\|_{X_{1}}\le 120 n\|f\|_{X^{*}}^{2} \end{equation} again by \eqref{eq:final3}. Collecting \eqref{eq:final5}, \eqref{eq:final4} and \eqref{eq:final3} we conclude the proof of \eqref{eq:fundresest}. \end{proof} \begin{remark}\label{rem:MCweight} When $z=\lambda+i \epsilon$ does not belong to the spectrum of the selfadjoint operator $H=-\Delta+V$ with Dirichlet b.c.~on $L^{2}(\Omega)$ (this includes some cases when $\epsilon=0$), given an $f\in L^{2}(\Omega)$, we can represent the solution of \eqref{eq:reseq} as $u=R(z)f$, where $R(z)=(H-z)^{-1}$. Since we know that $u\in H^{1}_{0}(\Omega)$, all the preceding computations apply and in particular estimate \eqref{eq:fundresest} holds. As a consequence, using \eqref{eq:MCtoweight} and \eqref{eq:weighttoMC}, we can write for all $R,S>0$, \begin{equation}\label{eq:simpleresest} \|\langle x\rangle_{R} ^{-3}R(z)f\|_{L^{2}(\Omega)}\le 2^{13}n\|\langle x\rangle_{S} f\|_{L^{2}(\Omega)}. \end{equation} Thus \eqref{eq:simpleresest} is in fact a weighted $L^{2}$ estimate for the resolvent $R(z)$.
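To make the duality step below explicit, note that, since $H$ is selfadjoint, $R(z)^{*}=R(\overline{z})$, and hence
\begin{equation*}
\bigl(\langle x\rangle_{R}^{-3}\,R(z)\,\langle x\rangle_{S}^{-1}\bigr)^{*}
=\langle x\rangle_{S}^{-1}\,R(\overline{z})\,\langle x\rangle_{R}^{-3},
\end{equation*}
which has the same operator norm on $L^{2}(\Omega)$; since $z$ and $\overline{z}$ range over the same set of admissible values, \eqref{eq:simpleresest} transfers to the adjoint operator.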
By duality we have the equivalent estimate \begin{equation}\label{eq:simpledual} \|\langle x\rangle_{R} ^{-1}R(z)f\|_{L^{2}(\Omega)}\le c_{n}\|\langle x\rangle_{S} ^{3} f\|_{L^{2}(\Omega)} \end{equation} and by (complex) interpolation we have also \begin{equation*} \|\langle x\rangle_{R} ^{-2}R(z)f\|_{L^{2}(\Omega)}\le c_{n}\|\langle x\rangle_{S} ^{2} f\|_{L^{2}(\Omega)}, \end{equation*} uniformly in $z\not\in \sigma(H)$, which we shall write more symmetrically as follows: \begin{equation}\label{eq:simpleinterp} \|\langle x\rangle_{R} ^{-2}R(z)\langle x\rangle_{S} ^{-2}f\|_{L^{2}(\Omega)}\le c_{n}\|f\|_{L^{2}(\Omega)}. \end{equation} A similar computation, using the other two terms in \eqref{eq:fundresest}, shows that \begin{equation}\label{eq:gradest} \|\langle x\rangle_{R} ^{-1}\nabla_{x} R(z)\langle x\rangle_{S} ^{-1}f\|_{L^{2}(\Omega)} +|z|^{1/2} \|\langle x\rangle_{R} ^{-1} R(z)\langle x\rangle_{S} ^{-1}f\|_{L^{2}(\Omega)} \le c_{n}\|f\|_{L^{2}(\Omega)} \end{equation} uniformly in $z\not\in \sigma(H)$. In particular this applies to $z=-\delta$ for all $\delta>0$ since the operator $H$ is positive. At this point we need the following elementary \begin{lemma}\label{lem:weights} If a linear operator $A$ satisfies for all $R,S>0$ the estimate \begin{equation}\label{eq:basest} \|\langle x\rangle_{R} ^{-\gamma}A\langle x\rangle_{S} ^{-\gamma}u\|_{L^{2}} \le C_{0}\|u\|_{L^{2}} \end{equation} with a constant independent of $R,S$, then it satisfies also, for all $\epsilon>0$, the estimate \begin{equation}\label{eq:basest1} \|\langle x\rangle^{-\frac{\gamma}{2}-\epsilon}A \langle x\rangle^{-\frac{\gamma}{2}-\epsilon}u\|_{L^{2}}\le C_{0}C(\gamma,\epsilon)\|u\|_{L^{2}}. \end{equation} \end{lemma} \begin{proof Write \eqref{eq:basest} in the form \begin{equation}\label{eq:basest2} \|\langle x\rangle_{R} ^{-\gamma}Av\|_{L^{2}} \le C_{0}\|\langle x\rangle_{S} ^{\gamma}v\|_{L^{2}}, \end{equation} decompose $v=v_{0}+\sum_{j\ge1} v_{j}$, with $v_{j}$ supported in $2^{j-1}\le|x|<2^{j}$ for $j\ge1$ and $v_{0}$ in $|x|<1$, apply the \eqref{eq:basest2} to each $v_{j}$ with $S=2^{j}$, and sum over $j$ (all norms in the rest of the proof are in $L^{2}$): \begin{equation*} \|\langle x\rangle_{R} ^{-\gamma}Av\| \le \|\langle x\rangle_{R} ^{-\gamma}Av_{0}\|+ \sum_{j}\|\langle x\rangle_{R} ^{-\gamma}Av_{j}\| \le C_{0}\|\langle x\rangle^{\gamma}v_{0}\|+ C_{0}\sum\|\langle x\rangle_{2^{j}}^{\gamma}v_{j}\|. \end{equation*} Now notice that for $j\ge1$ \begin{equation*} \langle x\rangle_{2^{j}}^{2\gamma} =\left(2^{j}+\frac{|x|^{2}}{2^{j}}\right)^{\gamma} \le 2^{\gamma}2^{\gamma j}\le 2^{2 \gamma}|x|^{\gamma}\le 2^{2 (\gamma+\epsilon)}2^{-2 \epsilon j}|x|^{\gamma+2 \epsilon} \quad\text{on the support of $v_{j}$} \end{equation*} so that \begin{equation*} \|\langle x\rangle_{R} ^{-\gamma}Av\|\le C_{0}2^{\gamma} \|v_{0}\|+C_{0}2^{\gamma+\epsilon} \sum_{j\ge1} 2^{-\epsilon j} \||x|^{\frac{\gamma}{2}+\epsilon}v_{j}\|\le C_{0}C(\gamma,\epsilon)\|\langle x\rangle^{\frac{\gamma}{2}+\epsilon}v\| \end{equation*} by Cauchy-Schwarz. Using \eqref{eq:weight1} we obtain \eqref{eq:basest1}. 
\end{proof} In particular, applying the Lemma to \eqref{eq:simpleinterp} and to \eqref{eq:gradest} we obtain the estimates, valid for all $\epsilon>0$: \begin{equation}\label{eq:simpleinterpbis} \|\langle x\rangle^{-1-\epsilon}R(z)\langle x\rangle^{-1-\epsilon}f\|_{L^{2}(\Omega)}\le c_{n,\epsilon}\|f\|_{L^{2}(\Omega)}, \end{equation} \begin{equation}\label{eq:gradestbis} \|\langle x\rangle^{-\frac12-\epsilon}\nabla_{x} R(z)\langle x\rangle^{-\frac12-\epsilon}f\|_{L^{2}(\Omega)}\le c_{n,\epsilon}\|f\|_{L^{2}(\Omega)}, \end{equation} \begin{equation}\label{eq:gradestter} |z|^{1/2} \|\langle x\rangle^{-\frac12-\epsilon} R(z)\langle x\rangle^{-\frac12-\epsilon}f\|_{L^{2}(\Omega)}\le c_{n,\epsilon}\|f\|_{L^{2}(\Omega)}. \end{equation} \end{remark} \section{Smoothing estimates}\label{sec:smoothing} The concept of \emph{$H$-smoothing} was introduced by Kato \cite{Kato65-a} in the context of scattering theory, and its usefulness for dispersive equations was revealed in \cite{RodnianskiSchlag04-a}. An operator $A$ is \emph{$H$-smooth} (actually, supersmooth) whenever one of the two equivalent estimates \eqref{eq:resH}, \eqref{eq:smoH1} in the following theorem holds. We shall use a version of the result adapted to the applications we have in mind; for a more complete reference see \cite{ReedSimon75-a}, \cite{Mochizuki-preprint} \begin{theorem}[Kato]\label{the:katosmoo} Assume $K$ is a selfadjoint operator in a Hilbert space $\mathcal{H}$, let $\mathcal{R}(z)=(K-z)^{-1}$ be its resolvent operator for $z\in \mathbb{C}\setminus \mathbb{R}$, and let $A$ be a densely defined closed operator from $\mathcal{H}$ to a second Hilbert space $\mathcal{H}_{1}$ with $D(A)\supseteq D(K)$. Assume that $A,\mathcal{R}(z)$ satisfy the estimate \begin{equation}\label{eq:resH} \sup_{z\not\in \mathbb{R}}\|A \mathcal{R}(z)A^{*}f\|_{\mathcal{H}_{1}} \le c_{0}^{2}\|f\|_{\mathcal{H}_{1}} \end{equation} for all $f\in D(A^{*})$. Then the following estimates hold: \begin{equation}\label{eq:smoH1} \|Ae^{itK}f\|_{L^{2}_{t}\mathcal{H}_{1}}\le c_{0}\|f\|_{\mathcal{H}}, \end{equation} \begin{equation}\label{eq:smooH2} \left\| \int_{0}^{t} A e^{i(t-s)K}A^{*}h(s)ds \right\|_{L^{2}_{t}\mathcal{H}_{1}} \le c_{0}^{2}\|h\|_{L^{2}_{t}\mathcal{H}_{1}} \end{equation} for all $f\in \mathcal{H}$, $h\in L^{2}_{t}\mathcal{H}_{1}$. Estimate \eqref{eq:smoH1} still holds when \eqref{eq:resH} is replaced by the weaker assumption \begin{equation}\label{eq:resH2} \sup_{z\not\in \mathbb{R}}\|A\Im( \mathcal{R}(z))A^{*} f\|_{\mathcal{H}_{1}} \le c_{0}^{2}\|f\|_{\mathcal{H}_{1}}, \end{equation} where we use the notation $\Im T=(2i)^{-1}(T-T^{*})$. \end{theorem} Recalling \eqref{eq:simpleinterp} in Remark \ref{rem:MCweight}, we see that with the choices \begin{equation*} \mathcal{H}=\mathcal{H}_{1}= L^{2}(\Omega),\qquad K=H=-\Delta+V(x,y),\qquad A=\langle x\rangle^{-1-\epsilon} \end{equation*} estimate \eqref{eq:simpleinterpbis} reduces precisely to \eqref{eq:resH}. Thus from Theorem \ref{the:katosmoo} and \eqref{eq:simpleinterpbis} we obtain immediately the following smoothing estimates for the Schr\"odinger flow associated to the operator $H=-\Delta+V(x,y)$: \begin{theorem}\label{the:smoosch} Let the domain $\Omega \subseteq \mathbb{R}^n_{x}\times\mathbb{R}^{m}_{y}$, $n\ge3$, $m\ge1$ be repulsive with respect to the $x$ variables, with a Lipschitz boundary. Assume the operator $H=-\Delta+V(x,y)$ with Dirichlet b.c. is selfadjoint on $L^{2}(\Omega)$. 
Finally, assume that the potential $V$ satisfies on $\Omega$ the inequalities \begin{equation}\label{eq:assVdel} V(x,y)\ge0, \qquad -\partial_{x}(|x|V(x,y))\ge0. \end{equation} Then the Schr\"odinger flow associated to $H$ satisfies the following smoothing estimates: for any $\epsilon>0$, \begin{equation}\label{eq:smoHV1} \|\langle x\rangle^{-1-\epsilon}e^{itH}f\|_{L^{2}_{t}L^{2}(\Omega)}\lesssim \|f\|_{L^{2}(\Omega)}, \end{equation} \begin{equation}\label{eq:smooHV2} \left\|\langle x\rangle^{-1-\epsilon} \int_{0}^{t} e^{i(t-s)H}F(s)ds \right\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|\langle x\rangle^{1+\epsilon} F\|_{L^{2}_{t}L^{2}(\Omega)} \end{equation} for all $f(x,y)\in L^{2}(\Omega)$ and $F(t,x,y)$ with $\langle x\rangle^{1+\epsilon} F\in L^{2}_{t}L^{2}(\Omega)$. \end{theorem} We can obtain an estimate also for the derivatives of $e^{itH}f$, with a gain of a half derivative, by a different choice of the operator $A$ and some functional analytic arguments; to this end we must introduce suitable functional spaces. For functions on $\mathbb{R}^{n+m}$ and $z\in \mathbb{C}$, we introduce the operators acting only on the $x$ variables \begin{equation*} |D_{x}|^{z}f(x,y)=(2\pi)^{-n} \int_{\mathbb{R}^{n}}|\xi|^{z} \widehat{f}(\xi,y)e^{i\xi x}d\xi, \end{equation*} \begin{equation*} \langle D_{x}\rangle ^{z}f(x,y)=(2\pi)^{-n} \int_{\mathbb{R}^{n}}\langle\xi\rangle^{z} \widehat{f}(\xi,y)e^{i\xi x}d\xi, \end{equation*} where $\widehat{f}(\xi,y)$ is the Fourier transform of $f(x,y)$ with respect to the variable $x$ only. By standard calculus we have the equivalence \begin{equation*} \||D_{x}|f\|_{L^{2}(\mathbb{R}^{n+m})}\simeq \|\nabla_{x} f\|_{L^{2}(\mathbb{R}^{n+m})}. \end{equation*} We introduce the norms, and the corresponding Hilbert spaces, \begin{equation}\label{eq:xsobolev} \|f\|_{\dot H^{s,0}}= \||D_{x}|^{s}f\|_{L^{2}(\mathbb{R}^{n+m})},\qquad \|f\|_{H^{s,0}}= \|\langle D_{x}\rangle^{s}f\|_{L^{2}(\mathbb{R}^{n+m})}. \end{equation} Notice that, if the boundary of $\Omega$ satisfies a uniform Lipschitz condition, the extension as 0 of a function $f\in H^{1}_{0}(\Omega)$ to all of $\mathbb{R}^{n+m}$ gives a function $Ef\in H^{1}(\mathbb{R}^{n+m})$ with the same norm. Thus for $f\in H^{1}_{0}(\Omega)$ and $0\le\Re z\le1$ we can extend the definition of the operators as \begin{equation*} |D_{x}|^{z}f=|D_{x}|^{z}Ef,\qquad \langle D_{x}\rangle^{z}f=\langle D_{x}\rangle^{z}Ef. \end{equation*} By density of $C^{\infty}_{c}(\Omega)$ in $H^{1}_{0}(\Omega)$ we obtain also that \begin{equation}\label{eq:equiv} \||D_{x}|f\|_{L^{2}(\Omega)}\simeq \|\nabla_{x} f\|_{L^{2}(\Omega)}. \end{equation} Recall now the estimate ($y\in \mathbb{R}$) \begin{equation}\label{eq:riesz} \|\langle x\rangle^{-s}|D_{x}|^{1+iy}f\|_{L^{2}(\mathbb{R}^{n+m})}\simeq \|\langle x\rangle^{-s}\nabla_{x} f\|_{L^{2}(\mathbb{R}^{n+m})}, \qquad s>-\frac n2 \end{equation} which holds since the Riesz operators $\partial_{x_{j}}|D_{x}|^{-1}$ and the operators $|D_{x}|^{iy}$ are bounded in weighted $L^{2}$ with $A_{2}$ weights, and $\langle x\rangle^{-s}\in A_{2}(\mathbb{R}^{n})$ for $s>-n/2$; notice also that the constant in the estimate depends on $y\in \mathbb{R}$ but with a polynomial growth as $|y|\to \infty$ (see \cite{Stein93-a} for the general theory of singular integrals in weighted $L^{2}$ spaces, and more specifically \cite{SikoraWright01-a}, \cite{CacciafestaDAncona09-a} for the polynomial growth of the norms).
The estimate extends to \begin{equation}\label{eq:equivw} \|\langle x\rangle^{-s}|D_{x}|^{1+iy}f\|_{L^{2}(\Omega)}\simeq \|\langle x\rangle^{-s}\nabla_{x} f\|_{L^{2}(\Omega)},\qquad s>-\frac n2,\quad f\in H^{1}_{0}(\Omega) \end{equation} by a density argument as above. As a consequence of \eqref{eq:equivw}, \eqref{eq:gradestbis} implies the estimate \begin{equation}\label{eq:gradest2} \|\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1+iy} R(z)\langle x\rangle^{-1/2-\epsilon}f\|_{L^{2}(\Omega)} \le c_{n,\epsilon}\|f\|_{L^{2}(\Omega)} \end{equation} which by duality is equivalent to \begin{equation}\label{eq:gradest2dual} \|\langle x\rangle^{-1/2-\epsilon} R(z)|D_{x}|^{1+iy}\langle x\rangle^{-1/2-\epsilon}f\|_{L^{2}(\Omega)} \le c_{n,\epsilon}\|f\|_{L^{2}(\Omega)}. \end{equation} Thus by complex interpolation for the analytic family of operators $T_{\theta}=\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{\theta}R(z)|D_{x}|^{1-\theta}\langle x\rangle^{-1/2-\epsilon}$, $0\le\Re \theta\le1$, we also obtain the estimate \begin{equation}\label{eq:gradesthalf} \|\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1/2} R(z)|D_{x}|^{1/2}\langle x\rangle^{-1/2-\epsilon}f\|_{L^{2}(\Omega)} \le c_{n,\epsilon}\|f\|_{L^{2}(\Omega)}. \end{equation} Now we make the following choice: \begin{equation*} \mathcal{H}=\dot H^{1/2,0}(\Omega),\qquad \mathcal{H}_{1}=L^{2}(\Omega),\qquad H=-\Delta+V(x,y) \end{equation*} where the space $\dot H^{1/2,0}(\Omega)$ is defined as the completion of $C^{\infty}_{c}(\Omega)$ in the norm \begin{equation*} \|f\|_{\dot H^{1/2,0}(\Omega)}= \||D_{x}|^{1/2}f\|_{L^{2}(\Omega)}. \end{equation*} The closed unbounded operator $A:\mathcal{H}\to \mathcal{H}_{1}$ is now defined as \begin{equation*} A=\langle x\rangle^{-1/2-\epsilon}|D_{x}| \end{equation*} and its adjoint $A^{*}$ is computed as follows \begin{equation*} \begin{split} (Af,g)_{\mathcal{H}_{1}}= & (\langle x\rangle^{-1/2-\epsilon}|D_{x}|f,g)_{L^{2}(\Omega)}= (|D_{x}|f,\langle x\rangle^{-1/2-\epsilon}g)_{L^{2}(\Omega)}= \\ & =(|D_{x}|^{1/2}f,|D_{x}|^{1/2}\langle x\rangle^{-1/2-\epsilon}g)_{L^{2}(\Omega)}= (f,\langle x\rangle^{-1/2-\epsilon}g)_{\mathcal{H}}= (f,A^{*}g)_{\mathcal{H}}. \end{split} \end{equation*} With these choices, estimate \eqref{eq:gradestbis} takes precisely the form \eqref{eq:resH} and Kato theory applies. We obtain the following \begin{theorem}\label{the:smoosch2} Assume $\Omega$, $V$, $H$ as in Theorem \ref{the:smoosch}. Then the Schr\"odinger flow associated to $H$ satisfies the smoothing estimates \begin{equation}\label{eq:smoHV3} \|\langle x\rangle^{-1/2-\epsilon}\nabla_{x} e^{itH}f\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \||D_{x}|^{1/2}f\|_{L^{2}(\Omega)}, \end{equation} \begin{equation}\label{eq:smooHV4} \left\|\langle x\rangle^{-1/2-\epsilon} \int_{0}^{t} \nabla_{x} e^{i(t-s)H}F(s)ds \right\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|\langle x\rangle^{1/2+\epsilon} F\|_{L^{2}_{t}L^{2}(\Omega)} \end{equation} for all $f(x,y)\in H^{1}_{0}(\Omega)$ and $F(t,x,y)$ with $\langle x\rangle^{1/2+\epsilon} F\in L^{2}_{t}L^{2}(\Omega)$. \end{theorem} Notice that a different choice is possible: namely, if we set \begin{equation*} \mathcal{H}= \mathcal{H}_{1}=L^{2}(\Omega),\qquad H=-\Delta+V(x,y) \end{equation*} and \begin{equation*} A=\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1/2},\qquad A^{*}=|D_{x}|^{1/2}\langle x\rangle^{-1/2-\epsilon} \end{equation*} we obtain the (essentially equivalent) result: \begin{theorem}\label{the:smoVH3bis} Assume $\Omega$, $V$, $H$ as in Theorem \ref{the:smoosch}.
Then the Schr\"odinger flow associated to $H$ satisfies the smoothing estimates \begin{equation}\label{eq:smoHV3bis} \|\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1/2} e^{itH}f\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|f\|_{L^{2}(\Omega)}, \end{equation} \begin{equation}\label{eq:smooHV4bis} \left\|\langle x\rangle^{-1/2-\epsilon} \int_{0}^{t} |D_{x}|^{1/2} e^{i(t-s)H}F(s)ds \right\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|\langle x\rangle^{1/2+\epsilon} |D_{x}|^{-1/2}F\|_{L^{2}_{t}L^{2}(\Omega)} \end{equation} for all $f(x,y)\in L^{2}(\Omega)$ and $F(t,x,y)$ with $\langle x\rangle^{1/2+\epsilon} F\in L^{2}_{t}L^{2}(\Omega)$. \end{theorem} Handling the wave and Klein-Gordon equations requires some additional effort. We start from the standard representation \begin{equation}\label{eq:matrix1} K= \begin{pmatrix} 0 & 1 \\ H & 0 \end{pmatrix} \qquad\implies \qquad \exp(itK)= \begin{pmatrix} \cos(t \sqrt{H}) & \frac{i}{\sqrt{H}}\sin(t \sqrt{H}) \\ i \sqrt{H}\sin(t \sqrt{H})& \cos(t \sqrt{H}) \end{pmatrix} \end{equation} so that \begin{equation}\label{eq:matrix2} e^{itK} \begin{pmatrix} f \\ \sqrt{H}f \end{pmatrix}= \begin{pmatrix} e^{it \sqrt{H}}f \\ \sqrt{H} e^{it \sqrt{H}}f \end{pmatrix} \end{equation} is the flow associated to the wave equation \begin{equation*} u_{tt}+Hu=0. \end{equation*} Now we choose \begin{equation*} \mathcal{H}=D(\sqrt{H})\times L^{2}(\Omega),\qquad \mathcal{H}_{1}= L^{2}(\Omega),\qquad H=-\Delta+V(x,y) \end{equation*} with $K$ as in \eqref{eq:matrix1}, and $A:\mathcal{H}\to L^{2}(\Omega)$ defined by \begin{equation*} A \begin{pmatrix} f \\ g \end{pmatrix} =\langle x\rangle^{-1/2-\epsilon}H^{1/2}f \qquad \implies \qquad A^{*}g= \begin{pmatrix} H^{-1/2}\langle x\rangle^{-1/2-\epsilon}g \\ 0 \end{pmatrix}. \end{equation*} Then the resolvent $\mathcal{R}(z)=(K-z)^{-1}$ can be written in terms of the resolvent $R(z)=(H-z)^{-1}$ as \begin{equation}\label{eq:resres} \mathcal{R}(z)= \begin{pmatrix} zR(z^{2}) & R(z^{2}) \\ HR(z^{2}) & zR(z^{2}) \end{pmatrix}. \end{equation} Thus we see that, in order to apply the Kato theory to $e^{itK}$, we need to prove that the following operator is bounded on $L^{2}(\Omega)$, uniformly in $z\not\in \mathbb{R}$: \begin{equation*} Q(z)= A \mathcal{R}(z)A^{*}\equiv \langle x\rangle^{-1/2-\epsilon} zR(z^{2})\langle x\rangle^{-1/2-\epsilon}. \end{equation*} This is precisely what is expressed by estimate \eqref{eq:gradestter}. Then by Theorem \ref{the:katosmoo} we obtain \begin{equation*} \left\| A e^{itK} \begin{pmatrix} f \\ \sqrt{H}f \end{pmatrix} \right\|_{L^{2}_{t}\mathcal{H}_{1}}\lesssim \left\| \begin{pmatrix} f \\ \sqrt{H}f \end{pmatrix} \right\|_{\mathcal{H}} \end{equation*} which means \begin{equation*} \|\langle x\rangle ^{-1/2-\epsilon}H^{1/2}e^{it \sqrt{H}}f\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|H^{1/2}f\|_{L^{2}(\Omega)} \end{equation*} or equivalently \begin{equation*} \|\langle x\rangle ^{-1/2-\epsilon}e^{it \sqrt{H}}f\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|f\|_{L^{2}(\Omega)}. \end{equation*} A similar estimate is obtained for the Duhamel term. All the previous computations are valid if we replace the operator $H$ with $H+\mu^{2}$, $\mu\ge0$; this gives an analogous estimate for the flow $e^{it \sqrt{H+\mu^{2}}}$ associated to the Klein-Gordon equation. In conclusion, we have proved: \begin{theorem}\label{the:smoowave} Let $\mu\ge0$ and assume $\Omega$, $V$, $H$ as in Theorem \ref{the:smoosch}. 
Then the wave flow associated to $H+\mu^{2}$ satisfies the smoothing estimates \begin{equation}\label{eq:smowaveHV1} \|\langle x\rangle^{-1/2-\epsilon} e^{it \sqrt{H+\mu^{2}}}f\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|f\|_{L^{2}(\Omega)}, \end{equation} \begin{equation}\label{eq:smoowaveHV2} \left\|\langle x\rangle^{-1/2-\epsilon} \int_{0}^{t} e^{i(t-s)\sqrt{H+\mu^{2}}}F(s)ds \right\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|\langle x\rangle^{1/2+\epsilon} F\|_{L^{2}_{t}L^{2}(\Omega)} \end{equation} for all $f(x,y)\in L^{2}(\Omega)$ and $F(t,x,y)$ with $\langle x\rangle^{1/2+\epsilon} F\in L^{2}_{t}L^{2}(\Omega)$. \end{theorem} \section{Strichartz estimates for the Schr\"odinger equation}\label{sec:strichartz_estimates} From now on we reduce to the simpler situation when the domain $\Omega$, besides being $x$-repulsive, is a compactly supported perturbation of a product domain. More precisely we assume that there exist a constant $M$ and an open set $\omega \subseteq \mathbb{R}^{m}$ such that \begin{equation}\label{eq:assO} \Omega\cap \left\{(x,y):|x|>M\right\}= (\mathbb{R}^{n}\times\omega)\cap \left\{(x,y):|x|>M\right\}. \end{equation} We recall the estimates proved in Example \ref{exa:schroflat} in the flat case \begin{equation}\label{eq:strichflat} \|e^{it \Delta}f\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim \|f\|_{L^{2}(\Omega)},\qquad \left\| \int_{0}^{t}e^{i(t-s)\Delta}F(s)ds \right\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim \|F\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n+2}}_{x}} \end{equation} where $\Delta$ is the Dirichlet Laplacian on $\mathbb{R}^{n}\times \omega$. In the following, we shall also need a mixed Strichartz-smoothing nonhomogeneous estimate, which follows like \eqref{eq:strichflat} from a corresponding estimate on the whole space. Indeed, Ionescu and Kenig proved that for the standard Laplace operator on $\mathbb{R}^{n}$, $n\ge3$, one has \begin{equation}\label{eq:IK} \left\|\int_{0}^{t}e^{i(t-s)\Delta}F(s)ds\right\| _{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}}\lesssim \|\langle x\rangle^{1/2+\epsilon}|D|^{-1/2}F\|_{L^{2}_{t}L^{2}_{x}} \end{equation} (see Lemma 3 in \cite{IonescuKenig05-a}, which is actually the dual form of \eqref{eq:IK}, and in a sharper version). By mimicking the proof of \eqref{eq:strichflat} we obtain the following mixed estimate on a flat waveguide: \begin{equation}\label{eq:IKflat} \left\|\int_{0}^{t}e^{i(t-s)\Delta}F(s)ds\right\| _{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n+2}}_{x}}\lesssim \|\langle x\rangle^{1/2+\epsilon}|D_{x}|^{-1/2}F\|_{L^{2}_{t}L^{2}_{y}L^{2}_{x}} \end{equation} where again $\Delta$ denotes the Dirichlet Laplacian on $\mathbb{R}^{n}\times \omega$. Assume now the domain $\Omega$ is repulsive with respect to $x$ and satisfies in addition the condition \eqref{eq:assO}, and let $u(t,x,y)$ be a solution on $\Omega$ of the equation \begin{equation}\label{eq:schreqV} iu_{t}-\Delta u+V(x,y)u=0,\qquad u(0,x,y)=f(x,y) \end{equation} Recall that by \eqref{eq:smoHV1}, \eqref{eq:smoHV3} and \eqref{eq:smoHV3bis} $u$ satisfies \begin{equation}\label{eq:smooforu} \|\langle x\rangle^{-1-\epsilon}u\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim\|f\|_{L^{2}(\Omega)} ,\qquad \|\langle x\rangle^{-1/2-\epsilon}\nabla u\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim\||D_{x}|^{1/2}f\|_{L^{2}(\Omega)} \end{equation} and \begin{equation}\label{eq:smooforubis} \|\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1/2} u\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim\|f\|_{L^{2}(\Omega)}. 
\end{equation} Fix a cutoff function $\chi(x)$ equal to 1 on the ball $B(0,M)$ and vanishing outside $B(0,M+1)$, and split the solution as \begin{equation*} u=v+w,\qquad v(t,x,y)=\chi(x)u(t,x,y),\quad w(t,x,y)=(1-\chi(x))u(t,x,y). \end{equation*} Then $w$ is a solution of the following Schr\"odinger equation \begin{equation}\label{eq:perturb1} iw_{t}-\Delta w=G_{1}+G_{2},\qquad G_{1}=-V(x,y)(1-\chi)u+\Delta_{x}\chi \ u,\quad G_{2}=2 \nabla_{x} \chi \cdot \nabla_{x}u, \end{equation} \begin{equation*} w(0,x,y)=(1-\chi(x))f(x,y) \end{equation*} on $\mathbb{R}^{n}\times \omega$ with Dirichlet boundary conditions. We can now represent $w(t,x,y)$ as \begin{equation*} w=e^{it \Delta}(1-\chi)f+ i\int_{0}^{t}e^{i(t-s)\Delta}G_{1}(s)ds+ i\int_{0}^{t}e^{i(t-s)\Delta}G_{2}(s)ds \equiv I+II+III. \end{equation*} We plan to use estimates \eqref{eq:strichflat} on the first two terms and \eqref{eq:IKflat} on the third one. The $L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}$ norm of the first term $I$ is estimated directly using \eqref{eq:strichflat}. Again by \eqref{eq:strichflat}, the $L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}$ norm of $II$ is estimated using H\"older's inequality as follows \begin{equation}\label{eq:G1} \|\Delta_{x}\chi(x)u\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n+2}}_{x}} \lesssim \|\langle x\rangle^{-1-\epsilon} u\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim\|f\|_{L^{2}(\Omega)}, \end{equation} and \begin{equation}\label{eq:G3} \|Vu\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n+2}}_{x}}\le \|\langle x\rangle^{1+\epsilon} V\|_{L^{2}_{y}L^{n}_{x}} \|\langle x\rangle^{-1-\epsilon}u\|_{L^{2}_{t}L^{2}(\Omega)} \lesssim \|\langle x\rangle^{1+\epsilon} V\|_{L^{2}_{y}L^{n}_{x}} \|f\|_{L^{2}(\Omega)}, \end{equation} using the smoothing estimate \eqref{eq:smoHV1} in both cases. For the third term $III$, on the other hand, we use the mixed estimate \eqref{eq:IKflat} so that \begin{equation*} \|III\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}}\lesssim \|\langle x\rangle^{1/2+\epsilon}|D_{x}|^{-1/2} (\nabla_{x}\chi \cdot \nabla_{x}u)\|_{L^{2}_{t}L^{2}_{y}L^{2}_{x}}. \end{equation*} Let now $\psi(x)$ be a cutoff function supported in $|x|\le M+3$ and equal to 1 on $|x|\le M+1$ (note $\chi$ is supported in $B(0,M+1)$) and recall the explicit formula \begin{equation*} |D_{x}|^{-1/2}g=c_{n}\int \frac{g(z)}{|x-z|^{n-1/2}}dz \end{equation*} (here and in the following, integrals extend over all $\mathbb{R}^{n}$). After integration by parts we can split the quantity to estimate as follows: \begin{equation*} \langle x\rangle^{1/2+\epsilon}|D_{x}|^{-1/2} (\nabla_{x}\chi \cdot \nabla_{x}u)\simeq \int \beta(x,z)u(z)dz+ \int \gamma(x,z)\nabla u(z)dz \end{equation*} where \begin{equation*} \beta(x,z)=-\nabla_{z}\left( \frac{\langle x\rangle^{1/2+\epsilon}\psi(x) [\nabla\chi(z)-\nabla\chi(x)]} {|x-z|^{n-1/2}} \right) \end{equation*} and \begin{equation*} \gamma(x,z)=\frac{\langle x\rangle^{1/2+\epsilon}\psi(x)\nabla\chi(x)}{|x-z|^{n-1/2}}. \end{equation*} In the following we extend the function $u$ as 0 outside $\Omega$ but keep the same notation for brevity. We have \begin{equation*} \int \gamma(x,z)\nabla u(z)dz= \langle x\rangle^{1/2+\epsilon}\psi(x)\nabla\chi(x)|D_{x}|^{-1/2}\nabla_{x} u \end{equation*} which implies, since $\psi$ has compact support, \begin{equation*} \left\|\int \gamma(x,z)\nabla u(z)dz\right\|_{L^{2}_{x}}\lesssim \|\langle x\rangle^{-1-\epsilon}|D_{x}|^{-1/2}\nabla_{x} u\|_{L^{2}_{x}}\lesssim \|\langle x\rangle^{-1-\epsilon}|D_{x}|^{1/2} u\|_{L^{2}_{x}} \end{equation*} where in the last step we used \eqref{eq:equivw}.
Finally, $\beta$ satisfies for all $N$ \begin{equation*} |\beta(x,z)|\lesssim\langle z\rangle^{-N}|x-z|^{\frac12-n} \end{equation*} so that \begin{equation*} \left\|\int \beta(x,z)u(z)dz\right\|_{L^{2}_{x}}\lesssim \||x|^{\frac12-n}*(\langle z\rangle^{-N}u)\|_{L^{2}_{x}}\lesssim \|\langle z\rangle^{-N}u\|_{L^{\frac{2n}{n+2}}}\lesssim \|\langle x\rangle^{-1-\epsilon}u\|_{L^{2}_{x}} \end{equation*} by Hardy-Littlewood-Sobolev followed by H\"older's inequality (for $N$ large enough). Summing up, and integrating also in the remaining variables $t,y$, we arrive at \begin{equation*} \|III\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}}\lesssim \|\langle x\rangle^{-1-\epsilon}u\|_{L^{2}_{t}L^{2}_{y}L^{2}_{x}(\Omega)}+ \|\langle x\rangle^{-1/2-\epsilon}|D_{x}|^{1/2}u\|_{L^{2}_{t}L^{2}_{y}L^{2}_{x}(\Omega)} \lesssim\|f\|_{L^{2}(\Omega)} \end{equation*} by \eqref{eq:smooforu}, \eqref{eq:smooforubis}. In conclusion, putting together the estimates for $I$, $II$, $III$, we obtain \begin{equation}\label{eq:estw} \|w\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim (1+\|\langle x\rangle^{1+\epsilon} V\|_{L^{2}_{y}L^{n}_{x}}) \|f\|_{L^{2}(\Omega)}. \end{equation} The remaining part $v=\chi(x)u$ can be estimated directly via the Sobolev embedding \begin{equation}\label{eq:sobol} \|g\|_{L^{\frac{2n}{n-2}}(A)} \lesssim\|\nabla g\|_{L^{2}(A)} \end{equation} which holds for any open set $A \subset \mathbb{R}^{n}$ (even unbounded) and any $g\in H^{1}_{0}(\Omega)$, with a constant independent of $A$. Then we have \begin{equation}\label{eq:estv} \|\chi u\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim \|u\nabla \chi\| _{L^{2}_{t}L^{2}_{x,y}} +\|\chi \nabla u\| _{L^{2}_{t}L^{2}_{x,y}}\lesssim \|f\|_{L^{2}(\Omega)}+\||D_{x}|^{1/2}f\|_{L^{2}(\Omega)} \end{equation} again by \eqref{eq:smooforu}. Summing up \eqref{eq:estw} and \eqref{eq:estv}, we have proved the following \begin{theorem}\label{the:strichschro} Assume the domain $\Omega \subseteq \mathbb{R}^{n}_{x}\times \mathbb{R}^{m}_{y}$, with $n\ge3$ and $m\ge1$, has a Lipschitz boundary, is repulsive w.r.to the $x$ variables and satisfies assumption \eqref{eq:assO}. Assume the potential $V(x,y)$ satisfies on $\Omega$ the inequalities \begin{equation}\label{eq:assVdelbis} V(x,y)\ge0, \qquad -\partial_{x}(|x|V(x,y))\ge0. \end{equation} and the operator $H=-\Delta_{x,y}+V(x,y)$ with Dirichlet boundary conditions is selfadjoint on $L^{2}(\Omega)$. Then the Schr\"odinger flow of $H$ satisfies the following endpoint Strichartz estimate for all $f\in H^{1}_{0}(\Omega)$: \begin{equation}\label{eq:strichnonflat} \|e^{itH}f\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim (1+\|\langle x\rangle^{1+\epsilon} V\|_{L^{2}_{y}L^{n}_{x}}) \Bigl( \|f\|_{L^{2}(\Omega)}+\||D_{x}|^{1/2}f\|_{L^{2}(\Omega)} \Bigr). \end{equation} \end{theorem} \begin{remark}\label{rem:nonhomg} Proving \emph{nonhomogeneous} Strichartz estimates is more difficult because of analytical technicalities. Recall that the solution to the nonhomogenous Schr\"odinger equation \begin{equation}\label{eq:schrnonh} iu_{t}+H u=F(t,x,y),\qquad u(0,x,y)=f(x,y) \end{equation} on $\Omega$ can be represented as \begin{equation*} u=e^{itH}f+i\int_{0}^{t}e^{i(t-s)H}F(s)ds; \end{equation*} we have already estimated the first term in Theorem \ref{the:strichschro}, and it remains to study the Duhamel operator \begin{equation}\label{eq:duhamel} \int_{0}^{t}e^{i(t-s)H}F(s)ds. 
\end{equation} To this end, we introduce the norm \begin{equation}\label{eq:H12} \|g\|_{H^{1/2,0}(\Omega)}= \|g\|_{L^{2}(\Omega)}+\||D_{x}|^{1/2}g\|_{L^{2}(\Omega)} \simeq \|\langle D_{x}\rangle^{1/2}g\|_{L^{2}(\Omega)} \end{equation} and the corresponding Hilbert space $H^{1/2,0}(\Omega)$ defined as the closure of $C^{\infty}_{c}(\Omega)$ in this norm. Moreover we denote by $H^{-1/2,0}(\Omega)$ the dual of this space; its norm can be characterized as \begin{equation*} \|g\|_{H^{-1/2,0}(\Omega)} \simeq \|\langle D_{x}\rangle^{-1/2}g\|_{L^{2}(\Omega)}. \end{equation*} Then estimate \eqref{eq:strichnonflat} can be written \begin{equation}\label{eq:strichhom} \|e^{itH}f\|_{L^{2}_{t}L^{2}_{y}L^{\frac{2n}{n-2}}_{x}} \lesssim \|f\|_{H^{1/2,0}(\Omega)},\qquad f\in H^{1}_{0}(\Omega). \end{equation} By interpolation with the conservation of energy \begin{equation*} \|e^{itH}f\|_{L^{\infty}_{t}L^{2}(\Omega)} =\|f\|_{L^{2}(\Omega)}\le \|f\|_{H^{1/2,0}(\Omega)} \end{equation*} we obtain the full family of Strichartz estimates \begin{equation}\label{eq:strfull} \|e^{itH}f\|_{L^{p}_{t}L^{2}_{y}L^{q}_{x}} \lesssim \|f\|_{H^{1/2,0}(\Omega)}, \end{equation} for all \emph{admissible couples} $(p,q)$ of indices, i.e., such that \begin{equation}\label{eq:admind} \frac n2=\frac2p+\frac nq,\quad 2\le q\le \frac{2n}{n-2}. \end{equation} By duality, for any $F(t,x,y)\in L^{2}_{t}H^{1}_{0}(\Omega)$, we have also \begin{equation}\label{eq:dualstrich} \left\| \langle D_{x}\rangle^{-1/2} \int e^{-isH}F(s)ds \right\|_{L^{2}(\Omega)}\le C(V) \|F\|_{L^{p'}_{t}L^{2}_{y}L^{q'}_{x}} \end{equation} for $(p,q)$ admissible. We also notice that estimates \eqref{eq:strfull} can be written in the form \begin{equation}\label{eq:strfullbis} \|e^{itH}\langle D_{x}\rangle^{-1}f\|_{L^{p}_{t}L^{2}_{y}L^{q}_{x}} \lesssim \|\langle D_{x}\rangle^{-1/2}f\|_{L^{2}(\Omega)},\qquad \frac n2=\frac2p+\frac nq,\quad 2\le q\le \frac{2n}{n-2}. \end{equation} Now we can combine \eqref{eq:dualstrich} and \eqref{eq:strfullbis} to obtain \begin{equation}\label{eq:almostnonh} \left\| \int e^{itH}\langle D_{x}\rangle^{-1}e^{-isH}F(s)ds \right\|_{L^{p}_{t}L^{2}_{y}L^{q}_{x}}\lesssim C(V)\|F\|_{L^{\widetilde{p}'}_{t}L^{2}_{y}L^{\widetilde{q}'}_{x}}. \end{equation} We can apply a standard trick and use the Christ-Kiselev lemma as in \cite{KeelTao98-a}, which permits us to replace the integral over $\mathbb{R}$ with a truncated integral over $[0,t]$, provided the indices satisfy the additional condition $p>\widetilde{p}'$. This implies the estimate \begin{equation}\label{eq:almostnonh2} \left\| \int_{0}^{t} e^{itH}\langle D_{x}\rangle^{-1}e^{-isH}F(s)ds \right\|_{L^{p}_{t}L^{2}_{y}L^{q}_{x}}\lesssim C(V)\|F\|_{L^{\widetilde{p}'}_{t}L^{2}_{y}L^{\widetilde{q}'}_{x}} \end{equation} for all $(p,q)$ and $(\widetilde{p},\widetilde{q})$ admissible such that $(p,\widetilde{p})\neq(2,2)$. To complete the proof we would need an additional functional analytic assumption: \emph{the operator $\langle D_{x}\rangle$ commutes with the flow $e^{itH}$}; this happens for instance when $V \equiv 0$.
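Indeed, under this assumption the integrand in \eqref{eq:almostnonh2} simplifies, since
\begin{equation*}
e^{itH}\langle D_{x}\rangle^{-1}e^{-isH}\langle D_{x}\rangle
=e^{itH}e^{-isH}\langle D_{x}\rangle^{-1}\langle D_{x}\rangle
=e^{i(t-s)H}.
\end{equation*}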
Then replacing $F$ with $\langle D_{x}\rangle F$ in \eqref{eq:almostnonh2} we finally obtain \begin{equation}\label{eq:almostnonh3} \left\| \int_{0}^{t} e^{i(t-s)H}F(s)ds \right\|_{L^{p}_{t}L^{2}_{y}L^{q}_{x}}\lesssim \|\langle D_{x}\rangle F\|_{L^{\widetilde{p}'}_{t}L^{2}_{y}L^{\widetilde{q}'}_{x}}, \end{equation} i.e., the solution of \eqref{eq:schrnonh} satisfies \begin{equation}\label{eq:strichlast} \|u\|_{L^{p}_{t}L^{2}_{y}L^{q}_{x}}\lesssim \|\langle D_{x}\rangle^{1/2}f\|_{L^{2}(\Omega)}+ \|\langle D_{x}\rangle F\|_{L^{\widetilde{p}'}_{t}L^{2}_{y}L^{\widetilde{q}'}_{x}} \end{equation} for all admissible couples $(p,q)$ and $(\widetilde{p},\widetilde{q})$ with $(p,\widetilde{p})\neq(2,2)$. \end{remark} \begin{remark}\label{rem:NLS} In forthcoming works we shall apply the above Strichartz estimates to investigate the existence of small global solutions for nonlinear Schr\"odinger and wave equations on non-flat waveguides. \end{remark}
\section{Introduction} Accurate 3D human-pose estimation from a monocular RGB image finds applications to robotics, virtual/augmented reality, surveillance, and human computer interaction. The diverse variations in background, clothing, pose, occlusions, illumination, and camera parameters in real-world scenarios makes it a challenging problem. The popular 3D-pose annotated datasets do not cover these variations appropriately. Recent advancements in real-world 2D-pose estimation \cite{NewellYD16, wei2016convolutional} has led to several multi-stage architectures, where the 3D-pose is \emph{regressed} either from both the image features and an intermediate 2D representation \cite{bogo2016keep, dabral2018learning, omran2018neural, Zhou_2017_ICCV}, or only the estimated 2D-pose \cite{akhter2015pose,martinez2017simple, Moreno-Noguer_2017_CVPR, varunECCV2012, zhou2016sparseness}.\footnotetext[1]{Majority of this work was done while author was at Indian Institute of Technology, Bombay.} Unfortunately, regression based approaches using only the estimated 2D-pose, ignore the ambiguity in lifting 2D human-pose to 3D: \emph{an inherently ill-posed problem}. Motivated by this shortcoming, we propose to learn a generative 3D-pose model conditioned on the corresponding 2D-pose that affords sampling diverse samples from the learnt 3D-pose distribution. To the best of our knowledge, we are the first to employ a Deep Conditional Variational Autoencoder \cite{sohn2015learning} (CVAE for short) for 2D-to-3D generative human-pose modeling and demonstrate its advantages over direct regression based approaches. We also show that our generative 2D-to-3D module can be trained on a separate MoCap dataset that doesn't have any intersection with the evaluation image-to-3D dataset, and still performs reasonably well. Therefore, our modular approach tackles the infeasibility (or high cost) of obtaining 3D-pose annotation for images in real-world and works well with separately collected 2D-pose annotations of real-world images and indoor motion capture data \cite{yasin2016dual}. Our pipeline is depicted in Figure~\ref{fig:teaser}. First, the \emph{2DPoseNet} head of a deep convolutional network backbone, $C$, estimates 2D pose, $\hat{P}_{2D}$, from a monocular RGB image, $I$. The estimated 2D pose, $\hat{P}_{2D}$, and a latent code $z$, sampled from a prior distribution $p(z)\sim\mathcal{N}(0,1)$, are fed to the decoder of the \emph{MultiPoseNet} CVAE to sample a 3D pose, $\hat{P}^k_{3D}$. Multiple samples, $z^k \in \{z^1, z^2 \dots z^{K}\}$, from $p(z)$ yield a diverse set of 3D pose samples, $\mathcal{S} = \{\hat{P}^k_{3D}: k \in \{1,2,\ldots K\}\}$, consistent with $\hat{P}_{2D}$. Then we employ pairwise depth ordering of body-joints encoded in the estimated joint-ordinal relation matrix, $\hat{M}$, obtained from the \emph{OrdinalNet} head of $C$, to obtain scores, $\{f(\hat{P}^k_{3D}):k \in\{1,2 \dots K\}\}$ for the elements of $S$. These scores are finally fed to Softmax operator to obtain a probability distribution over $S$, reflecting the consistency of the 3D-pose samples to the predicted ordinal relations. The final 3D pose, $\hat{P}_{3D}$, is computed as the expectation of this distribution. Moreover, in order to estimate the upper-bound performance of our generative model, we also report the accuracy w.r.t. the sample, $\hat{P}^{Oracle}_{3D}$, that is the closest match to the ground truth 3D-pose, $P_{3D}$. 
The \emph{Oracle} upper-bound outperforms all existing state-of-the-art methods, without leveraging recently introduced ordinal dataset, temporal information, or end-to-end training of the multi-stage architectures. This observation supports the strength of our CVAE-based generative model for 2D-to-3D lifting. A summary of our contributions is as follows - \begin{itemize} \item We tackle the inherent ill-posed problem of lifting 2D-to-3D human-pose by learning a deep generative model that synthesizes diverse 3D-pose samples conditioned on the estimated 2D-pose. \vspace {-0.75em} \item We employ CVAE for 3D human-pose estimation for the first time. \vspace {-0.75em} \item We derive joint-ordinal depth relations from an RGB image and employ them to rank 3D-pose samples. \vspace {-0.75em} \item We show that the oracle-based pose sample obtained from our proposed generative model achieves state-of-the-art results on two benchmark datasets, Human3.6M \cite{h36m_pami} and Human-Eva \cite{sigal2010humaneva}. \vspace {-0.75em} \item We show competitive performance over \emph{Baseline} even when our 2D-to-3D module is trained on a separate MoCap dataset \emph{with no images}. \end{itemize} \section{Related Work} \myparagraph{Lifting 2D to 3D} Our approach belongs to the large body of work that obtains 3D-pose from estimated 2D-pose. In \cite{varunECCV2012}, a set of 3D-shape bases, pre-trained using 3D mocap data\cite{cmu_mocap}, is used to learn a sparse representation of human 3D-pose by optimising for reprojection error. It was extended by \cite{zhou2017sparse} via convex relaxation to address bad initialisation in this scheme. Anatomical constraints to regularize the predicted poses w.r.t. limb lengths were introduced in \cite{wang2014robust}. Further use of anatomical constraints in the form of joint-angle-limits and learned pose priors was proposed in \cite{akhter2015pose} to extend \cite{varunECCV2012}. In \cite{Moreno-Noguer_2017_CVPR}, Euclidean inter-joint distance matrix was used to represent 2D and 3D poses with multi-dimensional scaling to obtain 3D-pose from the predicted 3D distance matrix. Some approaches, \cite{bogo2016keep}, estimate the 3D-pose and shape by fitting a 3D statistical model \cite{loper2015smpl} to 2D-pose and leverage inter-penetration constraints. Different from all the previous approaches we employ CVAE to implicitly learn the anatomical constraints and sample 3D-pose candidates. The method in \cite{kanazawa2018end}, builds upon the framework of \cite{bogo2016keep} to describe a model that estimates the shape, underlying 3D-pose and camera parameters using a re-projection and adversarial loss, which can be trained with 2D-pose datasets and unpaired MoCap datasets. In \cite{martinez2017simple}, a baseline model is proposed that uses a simple fully connected linear network for this task which surprisingly outperforms past approaches. Unlike these discriminative approaches that predict only one 3D-pose from a given 2D-pose, we generate a diverse sample set of 3D-poses. \myparagraph{Hypothesis Generation} Some previous approaches sample multiple 3D-poses via heuristics. The work in \cite{li2015maximum}, finds the nearest neighbors in a learned latent embedding of human images to estimate the 3D-pose. The approaches in \cite{lee2004proposal} and \cite{sminchisescu2003kinematic}, enumerate 3D-poses using "kinematic-flipping" of the 3D joints, for estimation and tracking, respectively. 
The Bayesian framework from \cite{simo2013joint} employs a latent-variable generative model with a set of HOG-based 2D part detectors and performs inference using evolutionary algorithms. More recently, \cite{Chen_2017_CVPR} retrieves 3D-pose using nearest neighbor search. \cite{Jahangiri:ICCV2017} uses the pose prior model of \cite{akhter2015pose} to generate multiple hypothesis from a seed 3D-pose, while \cite{wan2017deepskeleton} use "skeleton maps" at different scales to regress 3D-pose hypothesis. Unlike the previous methods, our CVAE based generative model implicitly learns an anatomically consistent pose prior \emph{conditioned} on the input 2D-pose. It affords efficient sampling of a set of candidate 3D-poses without requiring expensive MCMC or graphical model inference or an existing MoCap library. Also, it doesn't need additional image features or structural cues. Closest to our approach are prior arts that employ generative models for hand-pose estimation. In \cite{spurr2019cross}, \textbf{one-to-one correspondence} is assumed between hand-pose samples in different modalities--RGB, Depth, 2D-pose \& 3D-pose--and a joint latent space is learned via multi-modal VAE. Unfortunately, this assumption between 2D-and-3D poses ignores the inherent ambiguity in 2D-to-3D lifting, while, we explicitly tackle it via CVAE-based probabilistic framework. The work in \cite{disconet2016} generates multiple hand-poses from depth-map to address the prediction uncertainty due to occlusions/missing-values in the input depth-map and uses Maximum-Expected-Utility (MEU) to obtain a pointwise prediction from the generated samples. We use CVAE for generation and employ geometry-inspired ordinal scoring to score and merge multiple samples. \cite{crossingNet2017} learns a probabilistic mapping from depth-map to 3D-pose, to exploit unlabeled data, which is not provably ill-posed. We, however, employ CVAE inspired probabilistic framework to tackle the provable ill-posed nature of 2D-to-3D pose lifting. \myparagraph{Ordinal Relations} Ordinal relations have previously been explored to estimate depth \cite{zoran2015learning,chen2016single} and reflectance \cite{zhou2015learning,narihira2015learning}. Recently, \cite{pavlakos2018ordinal} and \cite{ronchi2018s} used 2D datasets with ordinal annotations as weak supervision for monocular 3D-pose estimation by imposing a penalty for violation of ordinal depth constraints. Our ordinal prediction network is similar in spirit to \cite{pons2014posebits} that uses a Structural-SVM conditioned on HOG features to predict pose-bits that capture qualitative attributes to facilitate 3D-pose prediction and image retrieval. Unlike \cite{pons2014posebits}, we leverage deep-networks to jointly predict the 2D-pose and depth-ordinal, and generate a diverse sample set of 3D-poses. Concurrent with our work, \cite{wang2018drpose3d} also predict depth ranking and regress 3D-pose from 2D-pose with depth rankings in a coarse-to-fine network. We differ in the formulation of predicting ordinals as spatial maps, which co-locate with the 2D-pose. \iffalse \myparagraph{Variational Autoencoders} \label{sec:cvae} Variational Autoencoders have already shown promising results in generating many kinds of complicated data, including handwritten digits \cite{kingma2013auto}, faces \cite{kingma2013auto}, house numbers \cite{kingma2014semi}, CIFAR images \cite{gregor2015draw}, physical models of scenes [4], segmentation [7], and predicting the future from static images \cite{walker2016uncertain}. 
The recently proposed Conditional Variational Autoencoders ( \cite{sohn2015learning}, \cite{kingma2014semi} ) additionally incorporate a conditioning variable, that allows to model multi-modal distributions for structured output problems. They have been used for object segmentation \cite{sohn2015learning}, future prediction \cite{lee2017desire}, conditional appearance and shape generation \cite{esser2018variational}, generating people in clothing \cite{lassner2017generative}, etc.. \fi \section{Proposed Approach} In this Section, we describe the proposed approach. Sec.~\ref{subsection:2DPoseNet} discusses \emph{2DPoseNet} to obtain 2D-pose from an input RGB image followed by Sec.~\ref{subsection:MultiPoseNet} that describes our novel \emph{MultiPoseNet}, for generating multiple 3D-pose samples conditioned on the estimated 2D-pose. In Sec.~\ref{subsection:OrdinalNet}, we discuss \emph{OrdinalNet} to obtain joint-ordinal relations from the image and the estimated 2D-pose. Finally, Sec.~\ref{subsection:OrdinalScore} and ~\ref{subsection:Oracle} describe our strategies for predicting the final 3D-pose from the generated samples : \textbf{(a)} by scoring the generated sample set using ordinal relations, referred to as OrdinalScore, and \textbf{(b)} by using supervision from an Oracle with access to the ground truth 3D-pose, referred to as \emph{OracleScore}. \iffalse \subsection{Notations} \label{subsection:Notations} We denote 3D human pose as an ordered set of 3D joint locations $J_{3D} = \{J_1, J_2, \ldots, J_N \} \in R^{NX3}$, the corresponding 2D pose is denoted by $J_{2D}$. Probability distributions are written in capital boldface, $\mathbf{X}$, a random variable in lowercase italics, $x$ and an observed sample in capital italics, $X$. \fi \begin{figure}[t] \centering \includegraphics[width = \linewidth]{MultiPoseNet_final.png} \caption{ MultiPoseNet architecture in training. Note: in GSNN, we sample $z\sim\mathcal{N}(0,I)$ and only need the Decoder.} \label{fig:cvae} \vspace{-1em} \end{figure} \subsection{2DPoseNet: 2D-Pose from Image} \label{subsection:2DPoseNet} We use the Stacked Hourglass Model \cite{NewellYD16} with two stacks, as our backbone $C$. The \emph{2DPoseNet} head applies a 1x1 convolution to the intermediate feature representations to regress per-joint heatmaps (Gaussian bumps at target location), from which the predicted 2D pose in pixel coordinates, $\hat{P}_{2D}$, is obtained using Argmax operator. \subsection{MultiPoseNet: Multiple 3D-Poses from 2D} \label{subsection:MultiPoseNet} Recently, Variational Auto-encoders and Generative Adversarial Networks have demonstrated tremendous success in density estimation and synthetic sample generation. Specifically, CVAEs can generate realistic samples conditioned on input variables which is well suited for multi-modal regression mappings \cite{sohn2015learning}. Therefore, we extend the \emph{Baseline} regression model from \cite{martinez2017simple} into a CVAE to tackle the inherent multi-modality of the 2D-to-3D pose mapping and sample an accurate and diverse 3D-pose candidate set $\mathcal{S} = \{\hat{P}^k_{3D}: k \in \{1,2,\ldots K\}\}$ conditioned on the estimated 2D-pose $\hat{P}_{2D}$. We observe that $\mathcal{S}$ has diverse anatomically plausible samples and contains a close match to the actual ground-truth, $P_{3D}$. The detailed architecture for \emph{MultiPoseNet} is depicted in Figure~\ref{fig:cvae}. 
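To fix ideas, the listing below gives a minimal PyTorch-style sketch of how a diverse candidate set is drawn from the decoder at inference time. The hidden width, latent dimension, and number of layers are illustrative placeholders only and do not reproduce the exact architecture (which follows \cite{martinez2017simple}; details in the supplementary material).
\begin{verbatim}
# Illustrative sketch only: sampling K 3D-pose candidates from the
# MultiPoseNet decoder, conditioned on an estimated 2D pose.
import torch
import torch.nn as nn

N2D, N3D = 16, 17          # joints in the 2D and 3D poses (Human3.6M)
Z_DIM, HID = 32, 1024      # assumed latent/hidden sizes, for illustration

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + 2 * N2D, HID), nn.ReLU(),
            nn.Linear(HID, HID), nn.ReLU(),
            nn.Linear(HID, 3 * N3D))

    def forward(self, z, pose_2d):
        # Concatenate the latent code with the conditioning 2D pose.
        return self.net(torch.cat([z, pose_2d], dim=-1))

def sample_candidates(decoder, pose_2d, k=200):
    # pose_2d: (B, 2*N2D) normalized 2D poses; returns (B, k, N3D, 3).
    B = pose_2d.shape[0]
    z = torch.randn(B, k, Z_DIM)                     # z ~ N(0, I)
    cond = pose_2d.unsqueeze(1).expand(B, k, 2 * N2D)
    out = decoder(z.reshape(B * k, Z_DIM), cond.reshape(B * k, 2 * N2D))
    return out.reshape(B, k, N3D, 3)
\end{verbatim}
The candidate set $\mathcal{S}$ returned by \texttt{sample\_candidates} is then scored and aggregated as described in Sec.~\ref{subsection:OrdinalScore}.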
\par \myparagraph{Training} The 3D-pose generating CVAE~\cite{sohn2015learning} consists of \begin{itemize} \item Recognition Network, or Encoder : $Enc(P_{3D},\hat{P}_{2D})$, which operates on an input 3D-pose $P_{3D}$ and a condition $\hat{P}_{2D}$ to output the mean and diagonal covariance for the posterior ${q(\hat{z}|P_{3D},\hat{P}_{2D})}$. \vspace{-0.75em} \item Decoder : $Dec(\hat{z},\hat{P}_{2D})$, which reconstructs the ground truth $P_{3D}$ by taking as input a latent $\hat{z}$ sampled from the posterior ${q(\hat{z}|P_{3D},\hat{P}_{2D})}$ and the condition 2D-pose $\hat{P}_{2D}$. \end{itemize} During training, we optimize the following: \begin{align} \label{eq:cvae} \mathcal{L}_{CVAE} & = \lambda_1 KL(q(\hat{z}|P_{3D}, \hat{P}_{2D})||p(z|\hat{P}_{2D})) \\ & + \lambda_2\mathbb{E}_{z\sim q(\hat{z}|P_{3D},\hat{P}_{2D})}{||P_{3D} - Dec(\hat{z},\hat{P}_{2D})||}^{2}_{2}, \nonumber \end{align} where the prior distribution $p(z|\hat{P}_{2D}))$ is assumed to be $\mathcal{N}(0,I)$, and $KL(x||y)$ is the Kullback-Leibler divergence with $\lambda$s used as hyper-parameters to weight the losses. The expectation in the second term for the reconstruction loss is taken over $K_{train}$ number of samples. At inference time, the Encoder network is discarded, and $z$ is drawn from the prior $p(z)\sim\mathcal{N}(0,I)$, which introduces inconsistency between the prediction and training pipelines. To remedy this, we set the Encoder equal to the prior network $p(z)\sim\mathcal{N}(0,I)$, that leads to the Gaussian Stochastic Neural Network framework, or GSNN, proposed in \cite{sohn2015learning}. Combining the two we get a hybrid training objective, weighted with $\alpha$: \begin{align} \mathcal{L}_{GSNN} &= \mathbb{E}_{z\sim{N}(0,1)}{||P_{3D} - Dec(z,\hat{P}_{2D})||}^{2}_{2} \\ \mathcal{L}_{hybrid} &= \alpha{L}_{CVAE} +({1-\alpha}){L}_{GSNN}, \end{align} \myparagraph{Inference} We sample $z\sim\mathcal{N}(0,1)$, and feed ($z$, $\hat{P}_{2D}$) to the Decoder, to obtain $\mathcal{S}_{test} =$ \{$\hat{P}^k_{3D}$: $k \in \{1,2,\ldots,K_{test}\}$\}. \subsection{OrdinalNet: Image to Joint-Ordinal Relations} \label{subsection:OrdinalNet} The backbone architecture for \emph{OrdinalNet} is same as our \emph{2DPoseNet} i.e. $C$. In order to obtain joint-ordinal relations, we augment $C$ with two additional hourglass stacks. For each human-body joint location $j \in \{1, 2, \ldots, N \}$, three ordinal maps ( $\hat{OM_{1j}}$, $\hat{OM_{2j}}$, and $\hat{OM_{3j}}$ ) are predicted to capture the \emph{lesser than}, \emph{greater than} and \emph{equal} depth relations between joint $j$ and all other joints $i \in \{1, 2, \ldots, N\} $. The ground-truth ordinal maps are generated so that for each joint $j$ there is a Gaussian peak for joint $i \in \{1, 2, \ldots, N\} $ in $one$ of the three ordinal maps ( $OM_{1j}$, $OM_{2j}$, and $OM_{3j}$ ), depending on the depth relation between joint $i$ and joint $j$. We combine the intermediate feature representations and 2D-pose heatmaps from backbone $C$ and \emph{2DPoseNet} as the input, and use L2 loss over predicted ordinal maps, for training our \emph{OrdinalNet}. \iffalse We regress for the aforementioned ordinal heat-maps, $\hat{OM}_{kj}$ $, k\in \{1, 2, 3\} ~ \textnormal{and} ~ j \in \{1, 2, \ldots, N\}$. $OM_{1j}$, $OM_{2j}$, and $OM_{3j}$ maps correspond to the relations $J_j < J_i$, $J_j > J_i$ or $J_j \approx J_i$ respectively between joint $j$ and all other joints $i \in \{1, 2, \ldots, N\} $. 
\p{repeated line may be we can talk about predicted} \fi We post-process our estimated ordinal relations via non-maximal suppression on the predicted ordinal maps and associate each peak to its nearest joint-location, which are finally converted into a $16\times16$ joint-ordinal relation matrix $\hat{M}$. The relation between depths $D_{i}, D_{j}$ of joints $i,j \in \{1, 2, \ldots, N \}$ and ground-truth matrix $M$ is: \[ \hat{M_{ij}} = \left\{ \begin{array}{lr} 1 & : D_{i} - D_{j} > 0 \\ 2 & : D_{i} - D_{j} < 0 \\ 3 & : D_{i} - D_{j} \approx 0 \end{array} \right. \] \subsection{OrdinalScore: Scoring and Aggregating Generated 3D samples} \label{subsection:OrdinalScore} So far we have generated a diverse set of estimated 3D-poses from $\hat{P}_{2D}$ only. Next, we seek motivation from the fact that under orthogonal camera projection with constant bone length constraint 2D-pose and joint-ordinal relations between keypoints can ‘almost‘ resolve the true 3D-pose \cite{Taylor:2000:RAO:364058.364079}. The estimated ordinal matrix $\hat{M}$ is used to assign scores to each of the samples $\hat{P^k_{3D}} \in \mathcal{S}$ by the scoring function: \begin{equation} \label{eq:ord_scoring} f(\hat{P}^k_{3D}) = \sum\limits_{i, j}\mathbbm{1}{( \hat{M_{ij}}==g(\hat{P}^k_{3D})_{ij} }) \end{equation} where $\mathbbm{1}$(condition) is an indicator function, where $g(\hat{P}^k_{3D})$ is the function that computes the $16\times16$ ordinal matrix for a given 3D-pose and $g(\hat{P}^k_{3D})_{ij}$ represents the ordinal relation of joint i and j. The set of scores for the sampled 3D-poses obtained from an image, $\mathcal{F} = \{f(\hat{P}^k_{3D}): k \in\{1, 2, \dots |\mathcal{S}|\}\}$, is passed through a Softmax operator parameterized by temperature $T$ to obtain a probability distribution function, $p(\hat{P}^k_{3D})=e^{Tf(\hat{P}^k_{3D})}/\sum_{k}{e^{Tf(\hat{P}^k_{3D})}}$. The final output $\hat{P}_{3D}$ is computed as the expectation over the candidates- \begin{equation} \label{eq:ord_aggregation} \hat{P}_{3D} = \sum_{k}^{|\mathcal{S}|}{p(\hat{P}^k_{3D}). \hat{P}^k_{3D}} \end{equation} The temperature-based Softmax affords a fine control over the contribution strength of high-score samples vs. the low-scoring samples towards the final aggregation, which makes it robust to noisy pose candidates with respect to the predicted ordinal matrix $\hat{M}$. \subsection{Supervision from an Oracle} \label{subsection:Oracle} The upper-bound accuracy for our approach is given by choosing the closest sample, $\hat{P}^{oracle}_{3D}$, to the ground-truth, $P_{3D}$, from $\mathcal{S}$ using an Oracle that has access to $P_{3D}$. \begin{equation} \label{eq:oracle} \hat{P}^{oracle}_{3D} = \argmin_{s \in \mathcal{S}} \|P_{3D} - s\|_2 \end{equation} \section{Experiments} This section discusses the empirical evaluation of the proposed approach. First, we describe the benchmarks that we employed for quantitative evaluation, and provide some important implementation details of our approach. Then, we present quantitative results and compare our method with the state-of-the-art, and provide ablation studies to analyze the performance of our generative model. \subsection{Datasets} We make use of the following datasets for training various modules of our pipeline : \par \myparagraph{CMU Mocap} motion capture dataset consists of diverse 3D-poses with 144 different subjects performing different actions. We obtain 2D projections from the 3D skeletons using virtual cameras from multiple views, with assumed intrinsic parameters. 
We employ the obtained 2D-to-3D pose data to train \emph{MultiPoseNet} and the \emph{Baseline} model from \cite{martinez2017simple} for experiments under the \emph{unpaired} setting, while \emph{2DPoseNet} and \emph{OrdinalNet} are trained on Human3.6M. Therefore, we effectively train our networks without using any image-to-3D ground-truth data. \par \myparagraph{Human3.6M} This dataset contains 3.6 million 3D-poses. It consists of videos and MoCap data of 5 female and 6 male subjects, captured from 4 different viewpoints while they perform common activities (talking on the phone, walking, greeting, eating, etc.). \myparagraph{HumanEva-I} This is a small dataset containing 3 subjects (S1, S2, S3) with 3 camera views and fewer actions than Human3.6M. It is a standard dataset for 3D-pose estimation used for benchmarking in previous works. \subsection{Implementation Details} \myparagraph{Data Pre-processing:} We take a tight $224\times224$ crop around the person in the input RGB image, $I$, using ground-truth bounding boxes. Following \cite{martinez2017simple}, we process the 3D-poses in camera coordinates and apply standard normalization to the 2D-pose inputs and 3D-pose outputs by subtracting the mean and dividing by the standard deviation, and zero-center the 3D-pose around the hip joint. The 2D-pose contains $N=16$ joints; the 3D-pose contains $N=17$ joints for Human3.6M and $N=16$ joints for HumanEva-I. \myparagraph{2DPoseNet:} We use the publicly available Stacked-Hourglass network pretrained on MPII \cite{andriluka20142d} as the backbone $C$ and \emph{2DPoseNet}, and finetune it on Human3.6M and HumanEva-I, following \cite{martinez2017simple}. \myparagraph{MultiPoseNet:} Its architecture is based on the \emph{Baseline} model in \cite{martinez2017simple} (details in the supplementary material). At training time, the expectation in Eq.~\ref{eq:cvae} is estimated using $K_{train}=10$ samples. $\lambda_1$, $\lambda_2$ and $\alpha$ are set to 10, 100, and 0.5, respectively. The network is trained for 200 epochs using Adam \cite{kingma2014adam}, starting with a learning rate of 2.5e-4 with exponential decay and a mini-batch size of 256. At test time, we generate $K_{test}=200$ 3D-pose candidates to obtain a diverse sample set $\mathcal{S}$. \emph{MultiPoseNet} takes 10 hours to train on a Titan 1080ti GPU. \myparagraph{OrdinalNet:} We freeze the weights of our backbone $C$ and \emph{2DPoseNet} after fine-tuning, and train the \emph{OrdinalNet} module using ground-truth ordinal maps for 60 epochs with a standard L2 loss. \emph{OrdinalNet} takes 12 hours to train on a Titan 1080ti GPU. \myparagraph{OrdinalScore} The temperature $T$ is obtained using cross-validation and set to 0.9 for ground-truth ordinals and 0.3 for predicted ordinals. In practice, \emph{OrdinalNet} can sometimes predict contradictory relations, i.e., $\hat{M}_{ij} \neq \hat{M}_{ji}$ or $\hat{M}_{ii} \neq 3$; we resolve this by setting the diagonal entries of $\hat{M}$ to 3 and masking out elements where $\hat{M}_{ij} \neq \hat{M}_{ji}$ during scoring. Note that for Human3.6M, the ordinal relations w.r.t.\ the extra joint in the 3D-pose are not taken into account by the scoring function in Eq.~\ref{eq:ord_scoring}. \myparagraph{Runtime Details} The run-times of the different modules of our pipeline are: \emph{OrdinalNet}, 20ms/image; \emph{MultiPoseNet}, 0.5ms/sample, with 200 samples/image at inference. The entire pipeline runs at ${\sim}10$ fps on a commodity graphics card, which is slightly worse than other real-time methods.
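To make the scoring and aggregation of Eq.~\ref{eq:ord_scoring} and Eq.~\ref{eq:ord_aggregation} concrete, the following sketch shows one possible \texttt{numpy} implementation, including the masking of contradictory relations described above. The candidate set is assumed to be an array of shape $(K, N, 3)$ and the depth threshold \texttt{eps} is an illustrative choice; this is a sketch of the idea rather than our exact implementation.
\begin{verbatim}
import numpy as np

def ordinal_matrix(pose_3d, eps=0.1):
    """g(P): pairwise depth relations, 1 / 2 / 3 for D_i - D_j > eps,
    < -eps, or approximately equal."""
    d = pose_3d[:, 2]                        # joint depths
    diff = d[:, None] - d[None, :]           # D_i - D_j
    return np.where(diff > eps, 1, np.where(diff < -eps, 2, 3))

def ordinal_score_aggregate(candidates, m_hat, T=0.3, eps=0.1):
    """Score candidates against the predicted matrix m_hat and return the
    temperature-softmax weighted average of the candidates."""
    m_hat = m_hat.copy()
    np.fill_diagonal(m_hat, 3)               # resolve diagonal entries
    mask = (m_hat == m_hat.T)                # drop contradictory entries
    scores = np.array([np.sum((ordinal_matrix(c, eps) == m_hat) & mask)
                       for c in candidates])
    w = np.exp(T * (scores - scores.max()))  # stabilized softmax over T * f
    w /= w.sum()
    return np.einsum('k,kij->ij', w, candidates)
\end{verbatim}
Since the diagonal agrees for every candidate by construction, it only shifts all scores by a constant, which cancels in the Softmax.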
\vspace{2mm} { \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{1.1} \begin{table*}[ht] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l l c c c c c c c c c c c c c c c c } \\ & {Protocol 1} & {Direct.} & {Discuss} & {Eating} & {Greet} & {Phone} & {Photo} & {Pose} & {Purch.} & {Sitting} & {SitingD} & {Smoke} & {Wait} & {WalkD} & {Walk} & {WalkT} & {Avg} \\ \Xhline{4\arrayrulewidth} \multirow{15}{*}{PAIR} & Pavlakos {\it et al.}~\cite{Pavlakos_2017_CVPR} &67.4 &71.9 & 66.7 & 69.1 & 72.0 & 77.0 & 65.0& 68.3& 83.7& 96.5 & 71.7 & 65.8 & 74.9 & 59.1 & 63.2 & 71.9\\ & Zhou {\it et al.}~\cite{Zhou_2017_ICCV} & 54.82 &60.70 &58.22 &71.4 & 62.0 & {65.5} &53.8 &55.6 &75.2 &111.6 &64.1 &66.0 &51.4 &63.2 &55.3 &64.9 \\ & Martinez {\it et al.}~\cite{martinez2017simple} & 51.8 &56.2&58.1&59.0&69.5&78.4&55.2&58.1&74.0&94.6&62.3&59.1&65.1&49.5&52.4&62.9 \\ & Sun {\it et al.}~\cite{DBLP:conf/iccv/0001SLW17} & 52.8 &54.8 & { 54.2} & {54.3} & {61.8} & 67.2 & 53.1 & {53.6} &71.7 &86.7 &61.5 & \ {53.4}&61.6 &47.1 &53.4 &59.1 \\ & Fang {\it et al.}~\cite{DBLP:conf/aaai/FangXWLZ18} &50.1 & {54.3} &57.0 &57.1 &66.6 &73.3 &53.4 &55.7 &72.8 &88.6 &60.3 &57.7 &62.7 &47.5 &50.6 &60.4 \\ \cline{2-18} & *Pavlakos {\it et al.}~\cite{pavlakos2018ordinal} &48.5 &54.4 &54.4 &52.0 &59.4 &65.3 &49.9 &52.9 &65.8 &71.1 &56.6 &52.9 &60.9 &44.7 &47.8 &56.2 \\ & **Hossain {\it et al.}-\cite{hossain2018exploiting} & 44.2 & 46.7 & 52.3 & 49.3 & 59.9 & 59.4 & 47.5 & 46.2 & 59.9 & 65.6 & 55.8 & 50.4 & 52.3 & 43.5 & 45.1 & 51.9 \\ & **Dabral {\it et al.}-\cite{dabral2018learning} & 44.8 & 50.4 & 44.7 & 49.0 & 52.9 & 61.4 & 43.5 & 45.5 & 63.1 & 87.3 & 51.7 & 48.5 & 37.6 & 52.2 & 41.9 & 52.1 \\ & ***Sun {\it et al.}~\cite{sun2018integral} &47.5 &47.7 &49.5 &50.2 &51.4 &43.8 &46.4 &58.9 &65.7 &49.4 &55.8 &47.8 &38.9 &49.0 &43.8 &49.6 \\ \cline{2-18} & \textbf{Ours (\emph{PRED Ordinals})} & {48.6} & $54.5$ & 54.2 & $55.7$ & $62.6$ & $72.0$ & {50.5 } & $54.3$ & ${70.0}$ & $ {78.3}$ & ${58.1}$ & $55.4$ & ${61.4}$ & ${45.2}$ & ${49.7}$ & ${58.0}$ \\ \cline{2-18} & Ours (\emph{GT Ordinals}) & $42.9$ & $48.1$ & $47.8$ & $50.2$ & $56.1$ & $65.0$ & $44.9$ & $48.6$ & $61.8$ & $69.9$ & $52.6$ & $50.4$ & $56.0$ & $42.1$ & $45.1$ & $52.1$ \\ & Ours (\emph{Oracle}) & $37.8$ & $43.2$ & $43.0$ & $44.3$ & $51.1$ & $57.0$ & $39.7$ & $43.0$ & $56.3$ & $64.0$ & $48.1$ & $45.4$ & $50.4$ & $37.9$ & $39.9$ & $46.8$ \\ \Xhline{4\arrayrulewidth} \multirow{4}{*}{UNPAIR} & Martinez {\it et al.}~\cite{martinez2017simple} & $109.9$ & $112$ & $103.8$ & $115.3$ & $119.3$ & $119.3$ & $114$ & $116.6$ & $118.9$ & $127.3$ & $112.2$ & $119.8$ & $113.4$ & $119.8$ & $111.9$ & $115.6$ \\ \cline{2-18} & \textbf{Ours (\emph{PRED Ordinals})} & $99.9$ & ${102.7}$ & ${97.9}$ & ${105.9}$ & ${112.0}$ & ${111.7}$ & ${103.9}$ & ${109.4}$ & ${111.7}$ & ${119.4}$ & ${104.8}$ & ${110.8}$ & ${103.2}$ & ${106.9}$ & ${102.3}$ & ${106.8}$ \\ \cline{2-18} & Ours (\emph{GT Ordinals}) & $97.9$ & $100.5$ & $95.4$ & $103.7$ & $109.4$ & $108.5$ & $102.0$ & $108.0$ & $107.9$ & $115.4$ & $102.2$ & $108.9$ & $100.8$ & $105.8$ & $100.8$ & $104.4$ \\ & Ours (\emph{Oracle}) & $92.6$ & $94.6$ & $90.6$ & $98.4$ & $103.8$ & $103.3.6$ & $96.6$ & $101.8$ & $101.7$ & $108.8$ & $96.6$ & $102.7$ & $95.3$ & $100.6$ & $96.1$ & $98.9$ \\ \end{tabular} } \vspace{-1em} \caption{Detailed results on Human3.6M under Protocol 1(no rigid alignment in post-processing). Error is in millimeters(mm). Top: Paired methods (PAIR), Bottom: unpaired methods (UNPAIR). 
Results for \cite{martinez2017simple} in the unpaired setting were obtained using their publicly available code. * - use additional ordinal training data from MPII and LSP. ** - use temporal information. *** - use soft-argmax for end-to-end training. These strategies are complementary with our approach. } \label{tab:h36m_prot1} \end{table*} } { \vspace{-1em} \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{1.1} \begin{table*}[ht] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l l c c c c c c c c c c c c c c c c } \\ & {Protocol 2} & {Direct.} & {Discuss} & {Eating} & {Greet} & {Phone} & {Photo} & {Pose} & {Purch.} & {Sitting} & {SitingD} & {Smoke} & {Wait} & {WalkD} & {Walk} & {WalkT} & {Avg} \\ \Xhline{4\arrayrulewidth} \multirow{9}{*}{PAIR} & Zhou {\it et al.}~\cite{Zhou_2017_ICCV} & 47.9 & 48.8 & 52.7 & 55.0 & 56.8 &49.0 & 45.5 & 60.8 & 81.1 & 53.7 & 65.5 & 51.6 & 50.4 & 54.8 &55.9 & 55.3 \\ & Pavlakos {\it et al.}~\cite{Pavlakos_2017_CVPR} & 47.5 & 50.5 & 48.3 & 49.3 & 50.7 & 55.2 & 46.1 & 48.0 & 61.1 & 78.1 & 51.1 & 48.3 & 52.9 & 41.5 & 46.4 &51.9 \\ & Martinez {\it et al.}~\cite{martinez2017simple} & 39.5 & 43.2 & 46.4 & 47.0 & 51.0 &56.0 & 41.4 & 40.6 & 56.5 & 69.4 & 49.2 & 45.0 & 49.5 & 38.0 & 43.1 &47.7 \\ & Fang {\it et al.}~\cite{DBLP:conf/aaai/FangXWLZ18} & 38.2 & 41.7 & 43.8 & 44.9 & 48.5 & 55.3 & 40.2 & 38.2 & 54.5& 64.4 & 47.2 & 44.3& 47.3& 36.7& 41.7& 45.7\\ & Sun {\it et al.}~\cite{DBLP:conf/iccv/0001SLW17} & 42.1 & 44.3 & 45.0 &45.4& 51.5& 53.0 &43.2 &41.3& 59.3 & 73.3 & 51.0 & 44.0 & 48.0 &38.3& 44.8& 48.3\\ \cline{2-18} & *Pavlakos {\it et al.}~\cite{pavlakos2018ordinal} & 34.7 & 39.8 & 41.8 & 38.6 & 42.5 & 47.5 & 38.0 & 36.6 & 50.7 & 56.8 & 42.6 & 39.6 & 43.9 & 32.1 & 36.5 & 41.8\\ & **Hossain {\it et al.}-\cite{hossain2018exploiting} & 36.9 & 37.9 & 42.8 & 40.3 & 46.8 & 46.7 & 37.7 & 36.5 & 48.9 & 52.6 & 45.6 & 39.6 & 43.5 & 35.2 & 38.5 & 42.0 \\ & **Dabral {\it et al.}-\cite{dabral2018learning} & 28.0 & 30.7 & 39.1 & 34.4 & 37.1 & 44.8 & 28.9 & 31.2 & 39.3 & 60.6 & 39.3 & 31.1 & 25.3 & 37.8 & 28.4 & 36.3 \\ & ***Sun {\it et al.}~\cite{sun2018integral} & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 40.6 \\ \cline{2-18} & \textbf{Ours (\emph{PRED Ordinals})} & $35.3$ & $35.9$ & $45.8$ & $42.0$ & $40.9$ & $52.6$ & $36.9$ & $35.8$ & $43.5$ & $51.9$ & $44.3$ & $38.8$ & $45.5$ & $29.4$ & $34.3$ & $40.9$ \\ \cline{2-18} & Ours (\emph{GT Ordinals}) & $31.3$ & $31.0$ & $39.3$ & $37.0$ & 37.2 & 47.8 & 32.5 & 32.1 & 39.8 & 47.3 & 40.0 & 34.7 & 41.8 & 27.5 & $31.0$ & $36.7$ \\ & Ours (\emph{Oracle}) & 27.6 & 27.5 & 34.9 & 32.3 & 33.3 & 42.7 & 28.7 & 28.0 & 36.1 & 42.7 & 36.0 & 30.7 & 37.6 & 24.3 & 27.1 & 32.7 \\ \Xhline{4\arrayrulewidth} \multirow{4}{*}{UNPAIR} & Martinez {\it et al.}~\cite{martinez2017simple} & 62.6 & 64.3 & 62.5 & 67.4 & 72.2 & 70.8 & 64.9 & 61.2 & 82.1 & 92.4 & 76.8 & 66.7 & 71.7 & 79.5 & 73.1 & 71.3 \\ \cline{2-18} & \textbf{Ours(\emph{PRED Ordinals})} & 62.9 & 65.6 & 61.8 & 67.1 & 72.2 & 69.3 & 65.6 & 63.8 & 81.3 & 91.0 & 74.5 & 66.5 & 70.8 & 74.7 & 70.9 & 70.5 \\ \cline{2-18} & Ours(\emph{GT Ordinals}) & 62.9 & 65.3 & 60.7 & 66.9 & 71.3 & 68.4 & 65.2 & 63.2 & 80.1 & 89.3 & 73.5 & 66.1 & 70.5 & 74.7 & 70.9 & 70.0 \\ & Ours (\emph{Oracle}) & 56.8 & 59.2 & 55.0 & 59.6 & 65.6 & 62.0 & 58.4 & 56.5 & 74.2 & 82.8 & 67.6 & 60.0 & 63.6 & 68.2 & 64.3 & 63.6\\ \end{tabular} } \vspace{-1em} \caption{Detailed results on Human3.6M under Protocol 2(rigid alignment in post-processing). 
Top: Paired methods (PAIR), Bottom: unpaired methods (UNPAIR). Results for \cite{martinez2017simple} in the unpaired setting were obtained using their publicly available code.} \label{tab:h36m_prot2} \vspace{-1em} \end{table*} } \subsection{Quantitative Evaluation} In this sub-section, we report the results of our model and compare it against the prior state-of-the-art on the Human3.6M and HumanEva-I datasets. We report results for three prediction strategies to demonstrate the benefits of our approach: \par \emph{PRED Ordinals}: Uses the OrdinalScore strategy with the ordinal relations predicted by \emph{OrdinalNet}. \par \emph{GT Ordinals}: Uses the OrdinalScore strategy with the ground-truth ordinal relations. \par \emph{Oracle}: Uses the Oracle for the final prediction, which gives the best results. \subsubsection{Evaluation on Human3.6M} Following the literature, we use two standard protocols to train and evaluate our results. \myparagraph{Protocol-1}: The training set consists of 5 subjects (S1, S5, S6, S7, S8), while the test set includes 2 subjects (S9, S11). The original 50 FPS frame rate is down-sampled to 10 FPS, and the evaluation is carried out on sequences coming from all 4 cameras and all trials. The reported error metric is the Mean Per Joint Position Error (MPJPE), i.e., the Euclidean distance from the estimated 3D-pose, $\hat{P}_{3D}$, to the ground-truth, $P_{3D}$, averaged over the 17 joints of the Human3.6M skeletal model. \myparagraph{Protocol-2}: Subjects S1, S5, S6, S7, S8 and S9 are used for training and S11 for testing. The error metric used is the Procrustes Aligned MPJPE (PA MPJPE), which is the MPJPE calculated after rigidly aligning the predicted pose with the ground-truth. \par Table~\ref{tab:h36m_prot1} and Table~\ref{tab:h36m_prot2} show our results for Protocol-1 and Protocol-2, respectively. In the paired setting, we train each module, that is, \emph{2DPoseNet}, \emph{OrdinalNet} and \emph{MultiPoseNet}, using paired image-to-3D pose annotations from Human3.6M. Under this setting, we achieve competitive results using \emph{PRED Ordinals} for scoring. The use of \emph{GT Ordinals} takes us close to the state-of-the-art. We are outperformed only by methods that use additional ordinal training data \cite{pavlakos2018ordinal}, temporal information \cite{dabral2018learning, hossain2018exploiting}, and/or soft-argmax \cite{sun2018integral} (denoted by *s), all of which are complementary to our approach and would be expected to improve its performance further. Finally, we outperform all existing methods using \emph{Oracle} supervision. Although this is an unfair comparison, it demonstrates that our CVAE-generated sample set contains candidate poses that are very close to the ground-truth pose, thus validating our sample-generation-based approach. \par \myparagraph{Without Paired 3D Supervision:} The modular nature of our pipeline allows us to train the 2D-to-3D lifting module on a separate MoCap library that has no intersection with the training images for \emph{2DPoseNet} and \emph{OrdinalNet}. This allows our pipeline to be trained without the costly and laborious acquisition of paired image-to-3D annotations. We demonstrate this by training \emph{MultiPoseNet} on the CMU MoCap dataset, which consists of only 3D MoCap data, and report the results on the test set of Human3.6M. Note that the MoCap dataset is only needed for training, not for testing. The 3D-poses from CMU MoCap are virtually projected to their corresponding 2D-projections, with the camera at the origin and the pelvis at a distance of 5500mm.
We have used the intrinsic camera parameters from Human3.6M to bring the distribution of 2D-projections closer to the Human3.6M test set. We also rotate the 3D-poses by 90, 180, and 270 degrees for data augmentation. The obtained 2D-to-3D pose dataset is used to train the \emph{Baseline} model \cite{martinez2017simple} and \emph{MultiPoseNet}. The estimated 2D-poses and ordinals are obtained from \emph{2DPoseNet} and \emph{OrdinalNet}, both of which are trained on Human3.6M. We emphasize that Human3.6M is only used for learning 2D-pose and ordinal estimation; therefore, we do not use any image-to-3D annotation during training. Since two different sources are used for the image-to-2D/ordinal and 2D-to-3D modules, we call this the \emph{unpaired setting}. The results of these experiments are reported in the bottom rows of Tables~\ref{tab:h36m_prot1} and \ref{tab:h36m_prot2}. Our \emph{PRED Ordinals}-based method outperforms the \emph{Baseline} regression model \cite{martinez2017simple}, and the performance improves further with \emph{GT Ordinals} and \emph{Oracle}. This shows that our framework can learn without image-to-3D annotation and is also robust to domain shift. \subsubsection{Evaluation on HumanEva-I} Under the protocol from \cite{Kostrikov2014DepthSR}, we evaluate our model on HumanEva-I. Training uses subjects S1, S2, and S3 under different viewpoints for the action sequences Jogging and Walking, while testing is carried out on the validation sequences of all three subjects. All the modules are trained using HumanEva-I. The model error is reported as the reconstruction error after rigid transformation. We obtain state-of-the-art results using the \emph{Oracle} estimate, and close to state-of-the-art results with \emph{PRED Ordinals} and \emph{GT Ordinals}, as reported in Table~\ref{tab:heva}. \begin{table}[h!] \resizebox{\linewidth}{!}{ \vspace{-2em} \begin{tabular}{ c | c c c | c c c | c } \multirow{2}{*}{} & \multicolumn{3}{c}{Jogging} & \multicolumn{3}{c}{Walking} \\ \hline & S1 & S2 & S3 & S1 & S2 & S3 & Avg\\ \hline Kostrikov {\it et al.}~\cite{Kostrikov2014DepthSR} & 44.0 & 30.9 & 41.7 & 57.2 & 35.0 & 33.3 & 40.3 \\ Yasin {\it et al.}~\cite{yasin2016dual} & 35.8 & 32.4 & 41.6 & 46.6 & 41.4 & 35.4 & 38.9 \\ Moreno-Noguer {\it et al.}~\cite{Moreno-Noguer_2017_CVPR} & 19.7 & 13.0 & 24.9 & 39.7 & 20.0 & 21.0 & 26.9 \\ Pavlakos {\it et al.}~\cite{Pavlakos_2017_CVPR} & 22.1 & 21.9 & 29.0 & 29.8 & 23.6 & 26.0 & 25.5 \\ Martinez {\it et al.}~\cite{martinez2017simple} & 19.7 & 17.4 & 46.8 & 26.9 & 18.2 & 18.6 & 24.6 \\ \hline \textbf{Ours (\emph{PRED Ordinals})} & 19.3 & 12.5 & 41.8 & 40.9 & 22.1 & 18.6 & 25.9\\ \hline Ours (\emph{GT Ordinals}) & 19.1 & 12.4 & 41.5 & 40.6 & 21.9 & 18.5 & 25.7\\ \hline Ours (\emph{Oracle}) & 17.4 & 11.0 & 39.5 & 38.5 & 20.1 & 16.7 & 23.9 \\ \end{tabular} } \small \vspace{-1em} \caption{Results of our model on the HumanEva-I dataset and a comparison with previous work. Numbers reported are the mean reconstruction error in mm, computed after rigid transformation.
} \label{tab:heva} \end{table} \begin{figure*}[ht] \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth, ]{error_plot_iccv_camready.png} \caption{ Oracle vs OrdinalScore vs MEAN } \label{fig:Oracle_vs_OrdinalScore} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth, ]{error_plot_sampling_camready.png} \caption{ MultiPoseNet vs Baseline sampling } \label{fig:CVAE_vs_NaiveSampling} \end{subfigure} \vspace{-3mm} \caption{Ablation studies. (a) Effect of increasing number of samples on Oracle, OrdinalScore and MEAN estimate (b) Comparison of MultiPoseNet versus Baseline sampling using Oracle supervision.} \label{fig:ablation} \vspace{-2mm} \end{figure*} \begin{figure*}[ht] \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img1.jpg} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img1_jointVariances_crop.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img1_sample_1.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img1_sample_2.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img1_sample_3.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.03\textwidth} \includegraphics[width=\textwidth, height=85pt]{std_colorbar.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img2.jpg} \vspace{-2mm} \end{subfigure} \quad \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img2_jointVariances_crop.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img2_sample_1.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img2_sample_2.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{img2_sample_3.png} \vspace{-2mm} \end{subfigure}\quad \begin{subfigure}[t]{0.03\textwidth} \includegraphics[width=\textwidth, height=85pt]{std_colorbar.png} \end{subfigure} \vspace{-3mm} \caption{Sample diversity on Human3.6M test-set. From L-R: Input Image, MEAN Pose with per-joint standard deviation around each joint, and 3 different SAMPLES overlaid on top of MEAN pose. MEAN is solid and SAMPLE is dashed, with displacement field in between. Note that wrist and elbow show maximum variance. Best viewed in color with zoom.} \label{fig:qualitative_viz} \vspace{-3mm} \end{figure*} \begin{figure*}[ht] \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth, ]{isoMap_4993.png} \vspace{-2mm} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth, ]{isoMap_5313.png} \vspace{-2mm} \end{subfigure} \vspace{-2mm} \caption{Samples from MultiPoseNet and Baseline ( using a variance of 100 ) mapped to Euclidean space using ISOMAP \cite{tenenbaum2000global}. Note that MultiPoseNet produces much more diverse samples that are likely to be near the GT pose. } \vspace{-3mm} \label{fig:sample diversity} \end{figure*} \subsection{OrdinalNet Accuracy} The \emph{OrdinalNet} accuracy is obtained by comparing the ground-truth ordinals, $M$, with the predicted ordinals, $\hat{M}$. The results on the validation set for Human3.6M and HumanEva-I are 86.8\% and 81\% respectively. 
\subsection{Ablation Studies} \myparagraph{Effect of Increasing Sample Set Size:} In Figure~\ref{fig:Oracle_vs_OrdinalScore}, we plot the different error estimates on Protocol-1 of Human3.6M with an increasing number of samples. \emph{MEAN} denotes the uniform average of all samples. We observe that the \emph{MEAN} improves with the number of samples, but saturates quickly. The \emph{Oracle} performance keeps improving with the number of samples, which validates the intuition that the chance of obtaining a sample close to the ground-truth pose increases with more samples. Consequently, the estimated 3D-pose, whether using \emph{PRED Ordinals} or \emph{GT Ordinals}, keeps improving with more samples, as is evident from their respective curves. This demonstrates that the proposed ordinal scoring is an effective strategy for weighted averaging of the generated samples. \myparagraph{Sampling Baseline:} Here, we compare a \emph{Baseline} sampling strategy against our CVAE-based generative sampling. \emph{Baseline} sampling treats each joint location as an independent Gaussian distribution, with the mean given by the output of the \emph{Baseline} regression model~\cite{martinez2017simple} and the variance taken from \{1, 5, 10, 20, 100, 400\}. Each joint location is sampled independently to obtain a 3D-pose. \emph{Oracle} supervision is used for both \emph{Baseline} sampling and our \emph{MultiPoseNet} sampling to obtain the final 3D-pose. Figure~\ref{fig:CVAE_vs_NaiveSampling} compares \emph{MultiPoseNet} with \emph{Baseline} sampling on Protocol-1 of Human3.6M with an increasing number of samples. It is evident that \emph{Baseline} sampling performs poorly and does not improve steeply with more samples. It also begins to worsen at the higher variance of 400 as the samples become increasingly absurd. On the other hand, \emph{MultiPoseNet} improves its estimate by close to 20mm, and the slope of the curve indicates potential further gains from drawing more samples. \subsection{Sample Diversity} \myparagraph{Qualitative Analysis:} To assess the ability of the proposed approach to generate a diverse set of plausible 3D-pose candidates from a given 2D-pose, we show the \emph{MEAN} pose, the per-joint standard deviation, and a few candidate 3D-poses for two different images from the Human3.6M test set in Figure~\ref{fig:qualitative_viz}. We observe meaningful variations across different body parts and poses, with relatively higher variance around the hardest-to-predict wrist and elbow joints. \myparagraph{Visualisation Using Dimensionality Reduction:} To visualize the distribution of generated candidate 3D-poses, we map the samples from \emph{MultiPoseNet} and \emph{Baseline} sampling (with a variance of 100) into Euclidean space using Isomap \cite{tenenbaum2000global}. Fig.~\ref{fig:sample diversity} shows 1000 samples from both \emph{MultiPoseNet} and \emph{Baseline} sampling for two different input 2D-poses, along with the ground-truth 3D-pose and the \emph{MEAN} estimate of \emph{MultiPoseNet}. Interestingly, the samples from \emph{Baseline} sampling are clustered narrowly around the \emph{MEAN}, whereas the \emph{MultiPoseNet} samples are diverse and are more likely to be near the GT 3D-pose. \vspace{-0.25em} \section{Conclusion and Future Work} This article presented a novel framework for monocular 3D-pose estimation that uses a conditional variational autoencoder to sample 3D-pose candidates, which are scored and weighted-averaged using ordinal relations predicted by a deep CNN.
The proposed method achieves close to state-of-the-art results on two benchmark datasets using OrdinalScore, and state-of-the-art results using an Oracle with access to the ground truth 3D-pose. The CVAE has been shown to learn a generative model that synthesizes diverse 3D-pose samples consistent with the input 2D-pose, thereby dealing with the ambiguity in lifting from 2D-to-3D. It can also be trained without paired image-to-3D annotations, and still yields competitive results. \section{Acknowledgements} This research was funded in part by Mercedes Benz Research and Development, India. We also thank Bernt Schiele for providing valuable feedback on the manuscript. {\small \bibliographystyle{ieee_fullname}
1,116,691,497,513
arxiv
\section{Introduction} \label{sec:introduction} Deterministic thermostats are widely used to simulate equilibrium physical systems described by ensembles other than microcanonical (constant energy and volume, $(E, V)$), such as constant temperature-volume $(T, V)$ or temperature-pressure $(T, p)$ \cite{Nose91,Morriss98,Hoover04,Leimkuhler04a,Hunenberger05,Bond07}, or for simulations of nonequilibrium phenomena \cite{Evans90,Hoover91,Mundy00,RomeroBastida06,Hoover07a,Jepps10}. Deterministic thermostats are obtained by augmenting the phase space variables of the physical system of interest with a set of additional variables whose role is to alter the standard Hamiltonian system dynamics in such a way that a suitable invariant measure in the system phase space is preserved \cite{Nose84,Nose91,Hoover85}. For example, in the familiar Nos\'{e}-Hoover (NH) thermostat \cite{Nose84,Hoover85} the exact dynamics preserves both an extended energy and a suitable invariant measure, ensuring that, \emph{provided} the extended system dynamics is effectively ergodic on the timescale of the simulation, the physical system will sample its phase space according to the canonical (constant $T$) measure. Extended system thermostat dynamics can be either Hamiltonian \cite{Dettmann96,Dettmann97,Dettmann99,Bond99} or non-Hamiltonian \cite{Evans90,Tuckerman99,Tuckerman01,Sergi01,Sergi03,Ezra04,Tarasov05,Sergi07,Sergi10}. For example, NH dynamics is non-Hamiltonian \cite{Hoover85}. (The NH system is however conformally symplectic \cite{Wojtkowski98,Choquard98}, meaning that the dynamics can be rendered Hamiltonian by a coordinate-dependent scaling of the vector field \cite{Wojtkowski98}. This is not true for other non-Hamiltonian thermostats such as Nos\'{e}-Hoover chains \cite{Martyna92} or Bulgac-Kusnezov global demons \cite{Kuznezov90}.) There has been considerable interest in the formulation of Hamiltonian deterministic thermostats such as the Nos\'{e}-Poincar\'{e} system \cite{Bond99}. In this approach, the extended Hamiltonian for the physical system plus thermostat variables incorporates a coordinate-dependent time scaling of Poincar\'{e}-Sundman type \cite{Bond98,Benest02}. Restricting the dynamics to a fixed value (zero) of the extended Hamiltonian results in the system variables sampling their phase space according to (for example) the canonical density \cite{Bond99} (again, subject to the assumption of ergodicity). An important motivation for the introduction of Hamiltonian thermostats is the possibility of using symplectic integration algorithms to integrate trajectories \cite{SanzSerna94,Hairer02,Leimkuhler04a}. As already indicated, a fundamental question concerning deterministic thermostats has to do with the effective ergodicity of the dynamics on the timescale of the simulation. If the dynamics is not effectively ergodic, then trajectory simulations will not generate the correct invariant measure \cite{Cornfeld82,Sturman06}. It has long been recognized, for example, that the dynamical system consisting of a single harmonic oscillator degree of freedom coupled to the NH thermostat variable is not ergodic \cite{Hoover85} (see also refs \onlinecite{Golo04,Watanabe07,Watanabe07a}); in fact, for typical parameter values the system phase space exhibits extensive persistence of invariant tori (quasiperiodic motion) \cite{Legoll07,Legoll09}.
A large amount of effort has been expended in attempts to design thermostats exhibiting dynamics more ergodic than the basic NH system \cite{Martyna92,Leimkuhler04a,Hunenberger05,Leimkuhler05a}. (For a careful discussion of the question of ergodicity for `typical' interatomic potentials, see \cite{Tupper05}; for related fundamental questions in molecular dynamics, see also ref.\ \cite{Skeel09}.) The question of ergodicity in thermostats is conceptually closely related to the problem of statistical versus nonstatistical behavior in the (classical) theory of unimolecular reaction rates \cite{Robinson72,Gilbert90,Baer96,Forst03}. Broadly speaking, in this case one would like to know whether a molecule will behave according to a statistical model such as RRKM theory, or whether it will exhibit significant deviations from such a theory, ascribable to nonstatistical dynamics \cite{Brumer88,Rice81,DeLeon81,Rice96,Carpenter05,Ezra09a}. Such `nonstatisticality', which can arise from a number of dynamical effects, can be considered to be analogous to the failure of ergodicity in deterministic thermostats. In recent years there have been significant theoretical and computational advances in the application of dynamical systems theory \cite{Wiggins90,Wiggins92,Wiggins94} to study reaction dynamics and phase space structure in multimode models of molecular systems, and to probe the dynamical origins of nonstatistical behavior \cite{wwju,Komatsuzaki02,Komatsuzaki05,Jaffe05,Wiesenfeld05,WaalkensSchubertWiggins08}. The fundamental chemical concept of the \emph{transition state}, defined as a surface of no return in phase space, has been successfully and rigorously generalized from the well-established 2 degree of freedom case \cite{Pechukas81} to systems with $N \geq 3$ degrees of freedom \cite{wwju,Komatsuzaki02,Komatsuzaki05,Jaffe05,Wiesenfeld05,WaalkensSchubertWiggins08}. Moreover, dynamical indicators exist (determination of reactive phase space volume, behavior of the reactive flux) to diagnose nonstatistical behavior (see, for example, Ref.\ \cite{DeLeon81,Ezra09a,Lourderaj09}). Nevertheless, relatively little work has been done applying the powerful techniques from dynamical system theory \cite{Wiggins90,Wiggins92,Wiggins94} to study the phase space structure and dynamics of deterministic thermostats. Hoover and coworkers have investigated the fractal nature of various phase space structures for equilibrium \cite{Posch86} and nonequilibrium \cite{Posch97,Hoover98b,Hoover01a} versions of the NH thermostat. Using a Hamiltonian formulation of the isokinetic thermostat \cite{Dettmann96} (cf.\ Section \ref{sec:isokinetic_thermostat} of this paper), Morriss and Dettmann \cite{Morriss98} have mapped the dynamics of the isokinetically thermostatted Lorentz gas onto the geodesic motion of a free particle on a particular Riemannian manifold. Leimkuhler and Sweet have used Poincar\'{e} surfaces of section and dynamical frequency analysis in their analysis of optimal coupling constants for NH and related thermostats \cite{Leimkuhler05a}, while Legoll and coworkers have applied KAM theory to rigorously prove the existence of invariant tori in the NH thermostatted harmonic oscillator \cite{Legoll07,Legoll09}. D'Alessandro, Tenenbaum and Amadei have computed Lyapunov exponents for a thermostatted united-atom model of the butane molecule \cite{DAlessandro02}; both Nos\'{e}-Hoover and isokinetic Gaussian thermostats were considered. 
Thermostatted systems present several difficulties for detailed studies from a phase space perspective. First, such systems usually have high dimensionality; for example, an NH chain thermostatted system has at least 3 degrees of freedom. For such a system the Poincar\'{e} surface of section analysis, standard for 2 degrees of freedom, does not afford the advantage of dimensional reduction and global visualization since the surface of section is four dimensional for three degrees of freedom. Second, there is a lack of readily computable diagnostics that can be used to establish ergodicity \cite{Sturman06}, other than comparison of coordinate distributions obtained using the given thermostat with those obtained using other methods. In the present paper we analyze the Hamiltonian isokinetic thermostat \cite{Morriss98} from the perspective of reaction rate theory. Although not as widely used as the NH thermostat and its many variants, the non-Hamiltonian version of the isokinetic thermostat has been developed and applied to several problems of chemical interest by Minary et al.\ \cite{Minary03a,Minary03b}. In this thermostat, the particle momenta are subject to a nonholonomic constraint that keeps the kinetic energy constant. The resulting dynamics generates a canonical (constant temperature) distribution in configuration space \cite{Morriss98}. In this work we investigate a slightly modified version of the Hamiltonian isokinetic thermostat given by Litniewski \cite{Litniewski93} and Morishita \cite{Morishita03}. The structure of the present paper is as follows: Section \ref{sec:isokinetic_thermostat} reviews the Hamiltonian formulation of the isokinetic thermostat. The non-Hamiltonian equations of motion for a Hamiltonian system subject to the isokinetic constraint correspond to Hamiltonian dynamics at zero energy under an effective Hamiltonian whose potential is obtained from the physical potential by exponentiation. The model Hamiltonians analyzed in the present paper are introduced in Section \ref{sec:hamiltonians}. In these systems, the physical potential describes $n$ uncoupled oscillators: $n-1$ harmonic modes together with a bistable thermalizing \cite{Minary03a,Minary03b} or isomerization coordinate. For these Hamiltonians the physical potential exhibits a saddle of index one, as for the case of a bistable reaction profile coupled to one or more transverse confining modes. In Section \ref{sec:hamiltonian_nonhamiltonian} we briefly review earlier results \cite{Ezra09} establishing that the isokinetic Hamiltonian dynamical system corresponding to the physical Hamiltonian exhibits a normally hyperbolic invariant manifold (NHIM) associated with the saddle \cite{Wiggins94}, at least for a limited range of energies above that of the saddle. The phase space formulation of unimolecular reaction rate theory in terms of the gap time \cite{Slater56,Thiele62,Dumont86,Ezra09a} and related concepts is discussed in Sect.\ \ref{sec:unimolecular}. The classical spectral theorem \cite{Brumer80,Pollak81,Binney85,Meyer86,WaalkensBurbanksWiggins05,WaalkensBurbanksWiggins05c} provides a relation between the distribution of gap times and the phase space volume occupied by reactive phase points; unless the measure of the region swept out by isomerizing trajectories equals that of the energy shell, the system cannot be ergodic. This necessary condition for ergodicity establishes a connection between the concepts of reaction rate theory and the properties of the Hamiltonian isokinetic thermostat.
Numerical results on 3 and 4 degree of freedom Hamiltonian isokinetic thermostats are reported in Section \ref{sec:numerical_results}. Section \ref{sec:summary} concludes. Analytical results for the density of states for the exponentiated harmonic potential are given in Appendix \ref{sec:dos}, while details of the phase space sampling procedures used in our numerical work appear in Appendix \ref{sec:sampling}. A brief discussion of the theoretical framework developed here was given in ref.\ \onlinecite{Ezra09}. \newpage \section{The Hamiltonian Isokinetic Thermostat} \label{sec:isokinetic_thermostat} The role of the isokinetic thermostat is to dynamically constrain the kinetic energy of a simulated system to have a constant value proportional to $k_{\text{B}} T$, where $T$ is the desired temperature and $k_{\text{B}}$ is Boltzmann's constant. Non-Hamiltonian equations of motion for the isokinetic thermostat have been obtained by applying Gauss' principle of least constraint \cite{Evans90,Hoover91,Morriss98}, which is the appropriate dynamical principle for nonholonomically constrained systems \cite{Evans83,Evans90}. Provided the underlying dynamics is effectively ergodic on the timescale of the simulation, a trajectory of the non-Hamiltonian isokinetic thermostat samples configuration space according to the invariant measure associated with the equilibrium canonical ensemble \cite{Evans83a,Morriss98}. Algorithms for the non-Hamiltonian isokinetic thermostat have been developed and applied to a variety of systems by Minary, Martyna and Tuckerman \cite{Minary03a,Minary03b}. A Hamiltonian formulation of the isokinetic thermostat has been given by Dettmann and Morriss \cite{Dettmann96,Morriss98}. The Hamiltonian used in the Dettmann-Morriss approach incorporates a coordinate-dependent scaling of time \cite{Szebehely67,Benest02}, and the physically relevant dynamics is restricted to the zero energy hypersurface. A noncanonical transformation of variables then leads to a set of dynamical equations equivalent to the non-Hamiltonian version. An important motivation for the development of Hamiltonian versions of the isokinetic thermostat is the possibility of using symplectic integration algorithms to ensure qualitatively correct behavior of integrated trajectories over long times \cite{SanzSerna94,Hairer02}. In this section we briefly review the Hamiltonian formulation of the isokinetic thermostat (for a more detailed discussion, see \cite{Ezra09}). As our focus here is on systems with $n \geq 3$ degrees of freedom (DoF), we discuss a Hamiltonian version of the isokinetic thermostat that generates the invariant measure associated with the configurational canonical ensemble, while at the same time allowing use of the simplest Verlet-type \cite{Verlet67} symplectic integration algorithm \cite{Litniewski93,Morishita03}. \subsection{Hamiltonian isokinetic thermostat} The physical Hamiltonian of interest is assumed to have the standard form \begin{equation} \label{ham_1} H(q, p) = \frac{1}{2} p^2 + \Phi(q), \end{equation} where $(q, p) \in \mathbb{R}^n \times \mathbb{R}^n$, $\Phi(q)$ is the potential energy, and we set all masses equal to unity for simplicity. The corresponding physical Hamiltonian equations of motion are \begin{subequations} \label{ham_eq} \begin{align} \dot{q} & = +\pd{H}{p} \\ \dot{p} &= - \pd{H}{q}.
\end{align} \end{subequations} For $n \geqslant 3$ degrees of freedom, we define a Hamiltonian $\mathcal{H} = \mathcal{H} (\pi, q)$ as \cite{Litniewski93,Morishita03} \begin{equation} \label{sp_7} \mathcal{H} = \frac{1}{2}\pi^2 - \frac{\nu}{ 2 \bar{\beta}} \, e^{-2 \bar{\beta} \Phi}, \end{equation} where $\pi \in \mathbb{R}^n$ and $\bar{\beta}$ and $\nu$ are parameters to be determined. The associated Hamiltonian equations of motion are \begin{subequations} \label{sp_6p} \begin{align} {q}^\prime & = + \, \pd{\mathcal{H}}{\pi} = \pi\\ {\pi}^\prime & = -\, \pd{\mathcal{H}}{q} = -\nu \Phi_q \, e^{-2 \bar{\beta} \Phi}, \end{align} \end{subequations} where the time derivative is denoted by a prime (${}^\prime$) to indicate that the derivative is actually taken with respect to a suitably scaled time variable (see \cite{Morriss98,Ezra09}). As the kinetic energy term in \eqref{sp_7} is coordinate-independent, a basic Verlet-type symplectic integrator can be used to integrate Hamilton's equations for $\mathcal{H}$ \cite{Verlet67,SanzSerna94,Hairer02,Leimkuhler04a}. (Higher-order symplectic algorithms can of course also be used \cite{Hairer02,Leimkuhler04a}.) Making a \emph{noncanonical} change of coordinates from variable $(\pi, q)$ to physical variables $(p, q)$, $(\pi, q) \mapsto (p, q)$, where \begin{equation} \pi = e^{-\bar{\beta} \Phi} p \end{equation} the Hamiltonian function $\mathcal{H}$ becomes \begin{equation} \mathcal{H} = e^{-2 \bar{\beta} \Phi} \frac{1}{2} \left[ p^2 - \frac{\nu}{\bar{\beta}} \right]. \end{equation} Setting $\mathcal{H} = 0$ then automatically enforces an \emph{isokinetic constraint} in terms of the momentum variables $p$ \cite{Morriss98} \begin{equation} \frac{p^2}{2} = \frac{\nu}{ 2 \bar{\beta}}. \end{equation} For trajectories run at $\mathcal{H}=0$, and only for this value, we have the invariant volume element \cite{Morriss98} \begin{subequations} \begin{align} {\rm d} V & = {\rm d}^n \pi {\rm d}^n q \, \delta(\mathcal{H}) \\ & = {\rm d}^n p {\rm d}^n q \, e^{-n\bar{\beta}\Phi} \delta(\mathcal{H}) \\ & = {\rm d}^n p {\rm d}^n q \, e^{-n\bar{\beta}\Phi} \delta\left[\tfrac{1}{2}e^{-2\bar{\beta}\Phi}\left(p^2 - \frac{\nu}{\bar{\beta}} \right)\right]\\ & = {\rm d}^n p {\rm d}^n q \, 2e^{-(n-2)\bar{\beta}\Phi} \delta\left[p^2 - \frac{\nu}{\bar{\beta}} \right]. \end{align} \end{subequations} We therefore set ($\beta \equiv 1/k_{\text{B}} T$ as usual) \begin{equation} \label{betab_1} \bar{\beta} = \frac{1}{(n-2)k_{\text{B}} T} = \frac{\beta}{(n-2)}\,, \end{equation} to ensure that the invariant measure is proportional to the canonical density (recall $n>2$) \begin{equation} \rho(q) \propto \exp[-\Phi(q)/k_{\text{B}} T]. \end{equation} With \begin{equation} \nu = \frac{n}{(n-2)} \end{equation} the kinetic energy \begin{equation} \frac{p^2}{2} = \frac{n}{2} \,k_{\text{B}} T \end{equation} as required for an $n$ degree of freedom system at temperature $T$. Nevertheless, the important aspect of the dynamics for computing configurational averages is the invariant measure, not the magnitude of the constrained KE. The quantity $\nu$ can then be treated as a free parameter, which can be used to move the potential saddle closer to energy $\mathcal{E}=0$ (cf.\ Section \ref{sec:hamiltonian_nonhamiltonian}). In fact, we shall take $\nu =1$ in our calculations. For $n \geq 3$ degrees of freedom, and \emph{provided the dynamics is ergodic on the $\mathcal{H} = 0$ energy surface}, Hamiltonian dynamics \eqref{sp_6p} will therefore yield the canonical measure in configuration space. 
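Because the kinetic energy in \eqref{sp_7} is coordinate-independent, a basic velocity-Verlet step for \eqref{sp_6p} takes a particularly simple form. The following is a minimal sketch of such an integrator (in the scaled time variable); it assumes \texttt{numpy} and user-supplied functions for the physical potential and its gradient, and is intended only to illustrate the scheme, not to reproduce our production code.
\begin{verbatim}
import numpy as np

def thermostat_force(q, phi, grad_phi, nu, beta_bar):
    """Right-hand side of pi' in Eq. (sp_6p):
    -nu * Phi_q(q) * exp(-2 * beta_bar * Phi(q))."""
    return -nu * grad_phi(q) * np.exp(-2.0 * beta_bar * phi(q))

def velocity_verlet(q0, pi0, phi, grad_phi, nu, beta_bar, dt, nsteps):
    """Integrate q' = pi, pi' = -nu Phi_q exp(-2 beta_bar Phi) with the
    standard velocity-Verlet scheme (symplectic, second order)."""
    q, pi = np.array(q0, float), np.array(pi0, float)
    f = thermostat_force(q, phi, grad_phi, nu, beta_bar)
    traj = [q.copy()]
    for _ in range(nsteps):
        pi_half = pi + 0.5 * dt * f
        q = q + dt * pi_half
        f = thermostat_force(q, phi, grad_phi, nu, beta_bar)
        pi = pi_half + 0.5 * dt * f
        traj.append(q.copy())
    return np.array(traj), q, pi
\end{verbatim}
Initial conditions should of course be chosen on the ${\cal H}=0$ surface, e.g.\ by fixing $q_0$ and scaling a randomly oriented $\pi_0$ so that $\tfrac{1}{2}\pi_0^2 = \frac{\nu}{2\bar{\beta}}\, e^{-2\bar{\beta}\Phi(q_0)}$.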
Since the aim is to obtain a canonical distribution, the Hamiltonian dynamics derived using \eqref{sp_7} is in this respect equivalent in principle to the original isokinetic thermostat \cite{Morriss98}. For completeness, we note that, in order to treat the $n=2$ DoF case, it is necessary to use a different Hamiltonian $\mathcal{K}$ (cf.\ refs \cite{Morriss98,Ezra09}), with \begin{equation} \label{sp_8} \mathcal{K} = \frac{1}{2}e^{\bar{\beta} \Phi} \pi^2 - \frac{\nu}{ 2 \bar{\beta}} \, e^{-\bar{\beta} \Phi}. \end{equation} To obtain the correct canonical measure case we must then take ($n \geqslant 2$) \begin{equation} \bar{\beta} = \frac{1}{(n-1)k_{\text{B}} T} = \frac{\beta}{(n-1)} \end{equation} and \begin{equation} \nu = \frac{n}{(n-1)} \, . \end{equation} \newpage \section{Model Hamiltonians} \label{sec:hamiltonians} In this section we introduce the model Hamiltonians studied in the remainder of the paper. We define systems with three and four DoF suitable for studying the Hamiltonian isokinetic thermostat on the zero energy surface $\mathcal{H}=0$. The phase space structure of these systems is discussed in Section \ref{sec:hamiltonian_nonhamiltonian}, while numerical computations of gap times, reactive volumes and coordinate distributions obtained using the Hamiltonian isokinetic thermostat are described in Sec.\ \ref{sec:numerical_results}. The systems considered here consist of $n$ uncoupled oscillators: $n-1$ harmonic modes plus a single bistable mode. Although the $n$ modes are uncoupled in the physical Hamiltonian $H(p, q)$, exponentiation of the potential $\Phi(q)$ introduces intermode coupling in the isokinetic thermostat Hamiltonian $\mathcal{H}$. The bistable mode can be interpreted either as a thermalizing degree of freedom (following Minary, Martyna and Tuckerman (MMT) \cite{Minary03a,Minary03b}), or as a reactive degree of freedom associated with an isomerization process. Adopting the latter perspective, we can apply concepts and methods recently developed for understanding reaction dynamics (in particular, transition state theory) in phase space \cite{wwju,Komatsuzaki02,Komatsuzaki05,Jaffe05,Wiesenfeld05,WaalkensSchubertWiggins08,Ezra09a} to investigate the problem of thermostat dynamics. \subsection{Double-well potential} The double well potential for the thermalizing mode is taken to be a temperature-independent quartic having the following form: \begin{equation} \label{mmt_pot} \chi(y) = \frac{1}{2} \left(y^4 - \alpha y^2 \right) \end{equation} (Note that the Minary, Martyna \& Tuckerman version of the double-well potential is effectively temperature dependent \cite{Minary03a,Minary03b}.) The potential \eqref{mmt_pot} has stationary points at $y=0$ (a maximum) and $y=\pm \sqrt{\alpha/2}$ (minima), with corresponding values $0$ and $-\alpha^2 /8$. Expanding $\chi(y)$ about the minima $y=\pm \sqrt{\alpha/2}$ we find that the effective frequency for harmonic motion in the vicinity of the minima is \begin{equation} \bar{\omega} = \sqrt{ 2 \alpha}. \end{equation} \subsection{Model Hamiltonians} \subsubsection{Three degrees of freedom} The 3 DoF potential corresponds to a separable bistable $(y)$ plus harmonic bath modes $(x_1, x_2)$:\begin{equation} \Phi (x_1, x_2 ,y) = \frac{m \omega_1^2 x_1^2}{2} + \frac{m \omega_2^2 x_2^2}{2} + \frac{1}{2} \left(y^4 - \alpha y^2 \right). 
\label{3dof_pot} \end{equation} Setting $m=1$ and choosing potential parameters $\omega_1 = 1$, $\omega_2 = \sqrt{2}$ and $\alpha=2$ we have \begin{equation} \Phi (x_1, x_2 ,y) = \frac{x_1^2}{2} + x_2^2 + \frac{y^4}{2} - y^2 . \label{3dof_pot_a} \end{equation} The isokinetic thermostat Hamiltonian $\mathcal{H}$ is then \begin{equation} \label{eq:ham_3dof} {\cal H}(x_1, x_2, y, \pi_{x_1},\pi_{x_2}, \pi_y) = \frac{\pi_{x_1}^2}{2} + \frac{\pi_{x_2}^2}{2} + \frac{\pi_y^2}{2} - \frac{\nu}{2\beta} \exp \left[ -2 \beta \left\{\frac{ x_1^2}{2} + x_2^2 + \frac{y^4}{2} - y^2 \right\}\right]. \end{equation} The origin is an equilibrium point of saddle-center-center type with energy ${\cal H}(0, 0, 0, 0, 0, 0) =-\frac{\nu}{2 \beta}$, and $\nu$ and $\beta$ are parameters that we can vary. As mentioned above, the value of the parameter $\nu$ determines the (constant) value of the physical kinetic energy $p^2/2$. It also determines the energy of the saddle point with respect to the zero of energy, $\mathcal{H} = 0$. To obtain the correct canonical invariant density, it is only necessary that $p^2$ be constant; the parameter $\nu$ is therefore effectively a free parameter in addition to the temperature $T$. For the numerical computations to be discussed below we take $\alpha =2 $, $\nu = 1$. Plots of the $x_2 =0$ slice through the physical potential $\Phi$ for the 3 DoF system and the corresponding exponentiated potential ($\beta =1$) are shown in Figure \ref{fig:pot_1}. \subsubsection{Four degrees of freedom} The 4 DoF potential corresponds to a separable bistable $(y)$ plus harmonic bath modes $(x_1, x_2, x_3)$: \begin{equation} \Phi (x_1, x_2, x_3, y) = \frac{m \omega_1^2 x_1^2}{2} + \frac{m \omega_2^2 x_2^2}{2} + \frac{m \omega_3^2 x_3^2}{2} + \frac{1}{2} \left(y^4 - \alpha y^2 \right). \label{4dof_pot} \end{equation} Setting $m=1$, $\omega_1 = 1$, $\omega_2 = \sqrt{2}$, $\omega_3 = \sqrt{3}$ and $\alpha=2$ we have \begin{equation} \Phi (x_1, x_2, x_3, y) = \frac{x_1^2}{2} + x_2^2 + \frac{3 x_3^2}{2} + \frac{y^4}{2} - y^2 . \label{4dof_pot_a} \end{equation} From eq.\ \eqref{betab_1}, the parameter $\bar{\beta} = \beta/2$ for $n=4$ DoF, so that the isokinetic thermostat Hamiltonian $\mathcal{H}$ is: \begin{equation} \label{eq:ham_4dof} \begin{split} {\cal H}(x_1, x_2, x_3, y, \pi_{x_1}, \pi_{x_2}, \pi_{x_3}, \pi_y) & = \frac{\pi_{x_1}^2}{2} + \frac{\pi_{x_2}^2}{2} + \frac{\pi_{x_3}^2}{2} + \frac{\pi_y^2}{2} \\ & - \frac{\nu}{\beta} \exp \left[ -\beta \left\{\frac{x_1^2}{2} + x_2^2 + \frac{3 x_3^2}{2} + \frac{y^4}{2} - y^2 \right\}\right]. \end{split} \end{equation} \noindent The origin is an equilibrium point of saddle-center-center-center type with energy ${\cal H}(0, 0, 0, 0, 0, 0, 0, 0) =-\frac{\nu}{\beta}$, and $\nu$ and $\beta$ are parameters that we can vary. We set the parameter $\nu=1$. In Section \ref{sec:numerical_results} we present numerical results for both the 4 DoF and 3 DoF systems corresponding to three values of the temperature: $\beta =1$, $3$, and $5$. \newpage \section{Microcanonical Phase space structure: Hamiltonian and Non-Hamiltonian Isokinetic Thermostat} \label{sec:hamiltonian_nonhamiltonian} In this section, we briefly discuss the phase space structure in the vicinity of the saddle-center equilibrium points for Hamiltonians \eqref{ham_1} and \eqref{sp_7}. 
In previous work, we have established the following \cite{Ezra09}: \begin{enumerate} \item If the physical Hamiltonian system defined by eq.\ \eqref{ham_1} has an equilibrium point of saddle-centre-$\ldots$-centre stability type \cite{Wiggins94} at the origin, then the Hamiltonian system defined by an extended Hamiltonian of the form \eqref{sp_8} corresponding to the isokinetic thermostat has an equilibrium point at the origin of saddle-centre-$\ldots$-centre stability type. \item The energy of the equilibrium is such that on the zero energy surface of the extended Hamiltonian $\mathcal{K}$ the phase space structures present in the physical Hamiltonian also exist in the extended Hamiltonian phase space. \item The phase space structures on the zero energy surface corresponding to the Hamiltonian isokinetic thermostat map to phase space structures in the non-Hamiltonian thermostatted system obtained by transforming the Hamiltonian equations of motion for the evolution of $(\pi, q)$ under $\mathcal{K}$ to equations of motion for non-canonical variables $(p, q)$ under the constraint of zero energy. \end{enumerate} Although these results were previously established for the Hamiltonian $\mathcal{K}$, eq.\ \eqref{sp_8}, they also hold for the Hamiltonian $\mathcal{H}$, eq.\ \eqref{sp_7}, as we describe below. More precisely, we have the following: assume that the physical Hamiltonian system \eqref{ham_eq} has an equilibrium point at $(q,p) = (q^\ast, p^\ast) = (0,0)$. The energy of this equilibrium point is $H(0,0) = \Phi(0)$. The stability of the equilibrium point is determined by the eigenvalues of the derivative of the Hamiltonian vector field (or Hessian of the Hamiltonian function) evaluated at the equilibrium point. This is given by the $2n \times 2n$ matrix: \begin{equation} \mbox{Hess}_{\mbox{\tiny sys}} = \left( \begin{array}{cc} 0_{n \times n} & \mbox{id}_{n \times n} \\ -\Phi_{qq}(0) & 0_{n \times n} \end{array} \right), \label{Hess_ham} \end{equation} \noindent where $0_{n \times n}$ denotes the $n \times n$ matrix of zeros and $\mbox{id}_{n \times n}$ denotes the $n \times n$ identity matrix. We require the equilibrium point to be of saddle-centre-$\ldots$-centre stability type. This means that the $2n \times 2n$ matrix $\mbox{Hess}_{\mbox{\tiny sys}}$ has eigenvalues $\pm \lambda, \pm i \omega_i$, $i=2, \ldots, n$, where $\lambda$ and $\omega_i$ are real. Eigenvalues $\gamma$ of $\mbox{Hess}_{\mbox{\tiny sys}}$ are obtained by solving the characteristic equation $\det(\mbox{Hess}_{\mbox{\tiny sys}}-\gamma\mbox{id}_{2n \times 2n})=0$. The block structure of the $2n \times 2n$ matrix $\mbox{Hess}_{\mbox{\tiny sys}}$ implies that (cf.\ Theorem 3 of \cite{silvester}) \begin{equation} \det(\mbox{Hess}_{\mbox{\tiny sys}} - \gamma \mbox{id}_{2n \times 2n}) = \det(\Phi_{qq}(0) + \gamma^2 \mbox{id}_{n \times n})=0 \label{char_sys} \end{equation} \noindent so that the $2n$ eigenvalues $\gamma$ are given in terms of $\sigma$, the eigenvalues of the $n \times n$ Hessian matrix $\Phi_{qq} (0)$ associated with the potential, as follows \begin{equation} \gamma_{k}, \gamma_{k+n} = \pm \sqrt{-\sigma_k}, \;\; k=1,\ldots, n. \label{sys_eivalues} \end{equation} \noindent Therefore, if $\Phi(q)$ has a rank-one saddle at $q=0$, so that one eigenvalue is strictly negative and the rest are strictly positive, then $(q,p) = (0,0)$ is a saddle-centre-$\ldots$-centre type equilibrium point for \eqref{ham_eq} as described above.
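As a simple numerical illustration of \eqref{sys_eivalues}, the following sketch (assuming \texttt{numpy}, and using the Hessian of the 3 DoF potential \eqref{3dof_pot_a} at the origin as an example) recovers the eigenvalues $\pm\sqrt{-\sigma_k}$ directly from the linearized vector field.
\begin{verbatim}
import numpy as np

# Hessian of the physical potential (3dof_pot_a) at the saddle q = 0:
# Phi = x1^2/2 + x2^2 + y^4/2 - y^2  =>  Phi_qq(0) = diag(1, 2, -2)
phi_qq = np.diag([1.0, 2.0, -2.0])
n = phi_qq.shape[0]

# Linearization of Hamilton's equations (ham_eq) about the equilibrium
hess_sys = np.block([[np.zeros((n, n)), np.eye(n)],
                     [-phi_qq,          np.zeros((n, n))]])

gamma = np.linalg.eigvals(hess_sys)      # eigenvalues of the linearization
sigma = np.linalg.eigvalsh(phi_qq)       # eigenvalues of Phi_qq(0)
predicted = np.concatenate([np.sqrt(-sigma + 0j), -np.sqrt(-sigma + 0j)])

# both contain one real pair (the saddle) and two purely imaginary
# pairs (the centres), as required by Eq. (sys_eivalues)
print(np.sort_complex(gamma))
print(np.sort_complex(predicted))
\end{verbatim}
The one real pair and two purely imaginary pairs confirm the saddle-centre-centre character of the origin for the physical 3 DoF system.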
Next, we consider Hamilton's equations associated with the extended Hamiltonian \eqref{sp_7}, eq.\ \eqref{sp_6p}, corresponding to the isokinetic thermostat. It is easy to verify that $(q, \pi) = (0, 0)$ is an equilibrium point for \eqref{sp_6p} with energy ${\cal H} (0,0) = -\frac{\nu}{2 \bar{\beta}} e^{-2 \bar{\beta}\Phi(0)}$. Proceeding as above, we linearize \eqref{sp_6p} about $(q, \pi) = (0, 0)$ and compute the eigenvalues of the matrix associated with the linearization. These are given by: \begin{equation} \tilde{\gamma}_{k}, \tilde{\gamma}_{k+n} = \pm \sqrt{\nu} e^{- \bar{\beta} \Phi (0)}\sqrt{-\sigma_k}, \;\; k=1,\ldots, n. \label{therm_eivalues} \end{equation} \noindent In other words, the eigenvalues of the linearization of \eqref{sp_6p} about $(q, \pi) = (0, 0)$ correspond to the eigenvalues of the matrix associated with the linearization of \eqref{ham_eq} about $(q, p) = (0, 0)$, but with each eigenvalue multiplied by the positive constant $\sqrt{\nu} e^{- \bar{\beta} \Phi (0)}$. Hence, it follows that if the potential of the physical Hamiltonian, $\Phi(q)$, has a rank-one saddle at $q=0$, so that one eigenvalue is strictly negative and the rest are strictly positive, then $(q, \pi) = (0,0)$ is a saddle-centre-$\ldots$-centre type equilibrium point for \eqref{sp_6p}. Moreover, if the purely imaginary eigenvalues in \eqref{sys_eivalues} satisfy a non-resonance condition, then the purely imaginary eigenvalues in \eqref{therm_eivalues} satisfy the {\em same} non-resonance condition \cite{Ezra09}. The equilibrium point $(q,\pi)=(0,0)$ has energy ${\cal H} (0,0) = -\frac{\nu}{2 \bar{\beta}} e^{-2 \bar{\beta} \Phi(0)}$. However, we are only interested in the dynamics on the ${\cal H}=0$ energy surface. All of the phase space structure discussed above exists for a certain range of energies above that of the saddle-centre-$\ldots$-centre, and will do so on the ${\cal H} = 0$ surface if $-\frac{\nu}{2\bar{\beta}} e^{-2 \bar{\beta}\Phi(0)}$ is close enough to zero. Putting together the results above, we have the following \cite{Ezra09}: \begin{quotation} \noindent \em Suppose the physical Hamiltonian system \eqref{ham_eq} has an equilibrium point of saddle-centre-$\ldots$-centre stability type at the origin. Then the Hamiltonian system \eqref{sp_6p} corresponding to the isokinetic thermostat for $n \geq 3$ DoF also has an equilibrium point of saddle-centre-$\ldots$-centre stability type at the origin. Moreover, the energy of this equilibrium point can be chosen so that on the energy surface corresponding to ${\cal H}=0$ there exists a normally hyperbolic invariant manifold (NHIM) \cite{Wiggins94}, with associated stable and unstable manifolds, a ``dividing surface'' of no-return and minimal flux, and a foliation of the reaction region by $n$-dimensional invariant Lagrangian submanifolds \cite{WaalkensSchubertWiggins08}. \end{quotation} Following the general arguments outlined in \cite{Ezra09}, it is straightforward to show that the phase space structure of (\ref{sp_6p}) on ${\cal H}=0$ exists in the non-Hamiltonian thermostatted system in the original physical variables. 
These general results allow us to conclude that, under the noncanonical transformation of variables $(\pi, q) \mapsto (p, q)$, \begin{itemize} \item {\em The $2n-1$ dimensional invariant energy surface $\mathcal{H} =0$ corresponding to the Hamiltonian isokinetic thermostat (\ref{sp_6p}) maps to a $2n-1$ dimensional invariant manifold for the non-Hamiltonian thermostatted system.} \item {\em The $2n-3$ dimensional NHIM, its $2n-2$ dimensional stable and unstable manifolds, the $n$-dimensional invariant Lagrangian submanifolds, and the $2n-2$ dimensional dividing surface map to a $2n-3$ dimensional NHIM, its $2n-2$ dimensional stable and unstable manifolds, $n$-dimensional invariant submanifolds, and the $2n-2$ dimensional dividing surface in the $2n-1$ dimensional invariant manifold for the non-Hamiltonian thermostatted system.} \end{itemize} \newpage \section{Phase Space Geometrical Structures, Unimolecular Reaction Rates and Thermostat Dynamics: Gap Times and Reactive Volumes} \label{sec:unimolecular} Our analysis of thermostat dynamics will be carried out in phase space, and our approach to probing the dynamics, and especially the question of ergodicity, will rely on the general formulation of unimolecular reaction rates based upon that originally given by Thiele \cite{Thiele62}. In their general form the rate expressions derived by Thiele explicitly invoke the existence of a \emph{phase space} dividing surface separating reactants and products; such surfaces, discussed by Wigner \cite{Wigner39} (see also refs \onlinecite{Keck67,Anderson95}), have only recently become amenable to direct computation via the use of normal form approaches \cite{wwju,ujpyw,WaalkensBurbanksWiggins04,WaalkensWiggins04,WaalkensBurbanksWigginsb04, WaalkensBurbanksWiggins05,WaalkensBurbanksWiggins05c,SchubertWaalkensWiggins06,WaalkensSchubertWiggins08, Komatsuzaki00,Komatsuzaki02,Komatsuzaki05}. Our discussion of Thiele's reaction rate theory follows ref.\ \onlinecite{Ezra09a}. \subsection{Phase space dividing surfaces: definition and properties} Interpreting the bistable thermalizing mode \cite{Minary03a,Minary03b} in the model Hamiltonians of Section \ref{sec:hamiltonians} as an isomerization coordinate, we see that the thermostat dynamics associated with Hamiltonian \eqref{sp_7} is equivalent to an isomerization reaction at constant energy ${\cal H}=0$. We therefore consider the rate of unimolecular isomerization, at fixed energy, for a system described by a time-independent, $n$ degree-of-freedom (DoF) classical Hamiltonian. Points in the $2n$-dimensional system phase space $\mathcal{M} = \mathbb{R}^{2n}$ are denoted $\boldsymbol{z} \equiv (\pi, q) \in \mathcal{M}$. The system Hamiltonian is $\mathcal{H}(\boldsymbol{z})$, and the $(2n-1)$ dimensional energy shell at energy $E$, $\mathcal{H}(\boldsymbol{z}) = E$, is denoted $\Sigma_E \subset \mathcal{M}$. The corresponding microcanonical phase space density is $\delta(E - \mathcal{H}(\boldsymbol{z}))$, and the associated density of states for the complete energy shell at energy $E$ is \begin{equation} \rho(E) = \Int{\boldsymbol{z}}{\mathcal{M}}{} \delta(E - \mathcal{H}(\boldsymbol{z})). \end{equation} In Appendix \ref{sec:dos}, we provide analytical results for $\rho(E)$ for $n$-dimensional systems with Hamiltonians of the form \eqref{sp_7}, with $\Phi (q)$ an isotropic harmonic potential and $n$ even. The first step in the analysis is to define regions of the energy surface corresponding to reactant and product.
For the isokinetic thermostat Hamiltonians considered here, there is a natural division of phase space into reactant region ($y<0$, say) and product region ($y>0$). The dividing surface between reactant and product is determined by symmetry to be the codimension-1 surface $y=0$, and we shall be concerned with the evaluation of the microcanonical reactive flux across this surface. (For fundamental work on reactive flux correlation functions and associated relaxation kinetics, see \cite{Dumont89,Dumont89a,Chandler78,Chandler87,Gray87}.) For multidimensional systems such as polyatomic molecules ($n \geq 3$ DoF), it is \emph{in general} not possible to define or compute a dividing surface with desirable dynamical attributes such as the no-recrossing property by working in configuration space alone, and a phase space perspective is necessary \cite{wwju,ujpyw,WaalkensBurbanksWiggins04,WaalkensWiggins04,WaalkensBurbanksWigginsb04, WaalkensBurbanksWiggins05,WaalkensBurbanksWiggins05c,SchubertWaalkensWiggins06,WaalkensSchubertWiggins08, Komatsuzaki00,Komatsuzaki02,Komatsuzaki05}. As discussed in Section \ref{sec:hamiltonian_nonhamiltonian}, Hamilton's equations for the thermostat Hamiltonian $\mathcal{H}$ have an equilibrium of saddle-center-$\ldots$-center stability type. The significance of saddle points of this type for Hamilton's equations is that, for a range of energies above that of the saddle, the energy surfaces have the {\em bottleneck property} in a phase space neighborhood near the saddle, i.e., the $2n-1$ dimensional energy surface {\em locally} has the geometrical structure of the product of a $2n-2$ dimensional sphere and an interval, $S^{2n-2} \times I$. In the vicinity of the bottleneck, we are able to construct a dividing surface depending on $E$, $\text{DS}(E)$, with very desirable properties: For each energy in this range above the saddle, $\text{DS}(E)$ locally ``disconnects'' the energy surface into two disjoint pieces with the consequence that the only way to pass from one piece of the energy surface to the other is to cross $\text{DS}(E)$. The dividing surface has the geometrical structure of a $2n-2$ dimensional sphere, $S^{2n-2}$, which is divided into two $2n-2$ dimensional hemispheres, denoted $\text{DS}_{\text{in}}(E)$ and $\text{DS}_{\text{out}}(E)$, which are joined at an equator, which is a $2n-3$ dimensional sphere, $S^{2n-3}$. The hemisphere $\text{DS}_{\text{in}}(E)$ corresponds to initial conditions of trajectories that enter the reaction region, while $\text{DS}_{\text{out}}(E)$ corresponds to initial conditions of trajectories that exit the reaction region, both by passing through the bottleneck in the energy surface. The equator $S^{2n-3}$ is an invariant manifold of saddle stability type, a so-called {\em normally hyperbolic invariant manifold} (NHIM) \cite{Wiggins94}. The NHIM is of great physical significance: it is the actual ``saddle'' in phase space identified as the ``activated complex'' of reaction rate dynamics \cite{Pollak78,Truhlar96,WaalkensSchubertWiggins08}. In the context of microcanonical rates, it has been shown that $\text{DS}_{\text{in}}(E)$ and $\text{DS}_{\text{out}}(E)$ have the essential \emph{no-recrossing property} and that the flux across them is minimal \cite{WaalkensWiggins04}. We denote the directional flux across these hemispheres by $\phi_{\text{in}} (E)$ and $\phi_{\text{out}} (E)$, respectively, and note that $\phi_{\text{in}} (E)+ \phi_{\text{out}}(E)=0$. 
The magnitude of the flux is $\vert \phi_{\text{in}} (E)\vert = \vert \phi_{\text{out}}(E)\vert \equiv \phi (E)$. Most significantly, the hemisphere $\text{DS}_{\text{in/out}}(E)$ is the correct surface across which to compute the ``exact'' flux into/out of the reaction region. \subsection{Phase space volumes and gap times} \label{subsec:volumes} The disjoint regions of phase space corresponding to species A (reactant) and B (product) will be denoted $\mathcal{M}_{\text{A}}$ and $\mathcal{M}_{\text{B}}$, respectively \cite{footnote0}. As discussed above, the DS can be rigorously defined to be locally a surface of no return (transition state). The microcanonical density of states for reactant species A is \begin{equation} \rho_{\text{A}}(E) = \Int{\boldsymbol{z}}{\mathcal{M}_{\text{A}}}{} \delta(E - H(\boldsymbol{z})) \end{equation} with a corresponding expression for the density of states $\rho_{\text{B}}(E)$ for product B for the case of compact product energy shell. For isokinetic thermostat Hamiltonians $\mathcal{H}$, the $\mathcal{H} = 0$ energy surface extends to $\pm \infty$ in configuration space. Although the phase space volume $N(E)$ is finite for $E\to 0$, the corresponding density of states $\rho(E) = {\rm d} N/{\rm d} E$ may diverge as $E\to 0$. In Appendix \ref{sec:dos} we derive analytical expressions for $\rho(E)$ associated with isotropic harmonic potentials $\Phi(\boldsymbol{q})$, $\boldsymbol{q} \in \mathbb{R}^n$, with $n$ even. While $\rho(E)$ diverges as $E \to 0$ for $n=2$, it is finite in this limit for $n \geq 4$. Analytical expressions for $\rho(E)$ are not available for the model potentials studied here. Provided that the flow is everywhere transverse to $\text{DS}_{\text{in, out}}(E)$, those phase points in the reactant region $\mathcal{M}_{\text{A}}$ that lie on crossing trajectories \cite{DeLeon81,Berne82} (i.e., that will eventually react) can be specified uniquely by coordinates $(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}, \psi)$, where $(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}) \in \text{DS}_{\text{in}}(E)$ is a point on $\text{DS}_{\text{in}}(E)$, the incoming half of the DS, specified by $2(n-1)$ coordinates $(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}})$, and $\psi$ is a time variable. (Dividing surfaces constructed by normal form algorithms are guaranteed to be transverse to the vector field, except at the NHIM, where the vector field is tangent \cite{wwju,ujpyw}.) The point $\boldsymbol{z}(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}, \psi)$ is reached by propagating the initial condition $(\bar{\boldsymbol{q}}, \bar{\boldsymbol{p}}) \in \text{DS}_{\text{in}}(E)$ forward for time $\psi$ \cite{Thiele62,Ezra09a}. As all initial conditions on $\text{DS}_{\text{in}}(E)$ (apart from a set of trajectories of measure zero lying on stable manifolds) will leave the reactant region in finite time by crossing $\text{DS}_{\text{out}}(E)$, for each $(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}) \in \text{DS}_{\text{in}}(E)$ we can define the \emph{gap time} $s = s(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}})$, which is the time it takes for the incoming trajectory to traverse the reactant region. That is, $\boldsymbol{z}(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}, \psi = s(\bar{\boldsymbol{q}}, \bar{\boldsymbol{p}})) \in \text{DS}_{\text{out}}(E)$. For the phase point $\boldsymbol{z}(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}, \psi)$, we therefore have $0 \leq \psi \leq s(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}})$. 
The coordinate transformation $\boldsymbol{z} \to (E, \psi, \bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}})$ is canonical \cite{Arnold78,Thiele62,Binney85,Meyer86}, so that the phase space volume element is \begin{equation} \label{coord_1} {\rm d}^{2n} \boldsymbol{z} = {\rm d} E \, {\rm d} \psi \, {\rm d} \sigma \end{equation} with ${\rm d} \sigma \equiv {\rm d}^{n-1} \bar{q} \, {\rm d}^{n-1} \bar{\pi}$ an element of $2n-2$ dimensional area on the DS. The magnitude $\phi(E)$ of the flux through dividing surface $\text{DS}(E)$ at energy $E$ is given by \begin{equation} \label{flux_1} \phi(E) = \left\vert\Int{\sigma}{\text{DS}_{\text{in}}(E)}{} \right\vert, \end{equation} where the element of area ${\rm d} \sigma$ is precisely the restriction to the DS of the appropriate flux $(2n-2)$-form $\omega^{(n-1)}/(n-1)!$ corresponding to the Hamiltonian vector field associated with $H(\boldsymbol{z})$ \cite{Toller85,Mackay90,Gillilan90,WaalkensWiggins04}. The reactant phase space volume occupied by points initiated on the dividing surface $\text{DS}_{\text{in}}$ with energies between $E$ and $E + {\rm d} E$ is therefore \cite{Thiele62,Brumer80,Pollak81,Binney85,Meyer86,WaalkensBurbanksWiggins05,WaalkensBurbanksWiggins05c} \begin{subequations} \label{vol_1} \begin{align} {\rm d} E \Int{\sigma}{\text{DS}_{\text{in}}(E)}{} \Int{\psi}{0}{s} & = {\rm d} E \Int{\sigma}{\text{DS}_{\text{in}}(E)}{} s \\ &= {\rm d} E \,\, \phi(E) \, \bar{s} \end{align} \end{subequations}where the \emph{mean gap time} $\bar{s}$ is defined as \begin{equation} \bar{s} = \frac{1}{\phi(E)} \, \Int{\sigma}{\text{DS}_{\text{in}}(E)}{} s \end{equation} and is a function of energy $E$. The reactant density of states $\rho^{\text{C}}_{\text{A}}(E)$ associated with crossing trajectories only (those trajectories that enter and exit the reactant region \cite{Berne82}) is then \begin{equation} \label{vol_1p} \rho^{\text{C}}_{\text{A}}(E) = \phi(E) \, \bar{s} \end{equation} where the superscript $\text{C}$ indicates the restriction to crossing trajectories. The result \eqref{vol_1p} is essentially the content of the so-called classical spectral theorem \cite{Brumer80,Pollak81,Binney85,Meyer86,WaalkensBurbanksWiggins05,WaalkensBurbanksWiggins05c}. If \emph{all} points in the reactant region of phase space eventually react (that is, all points lie on crossing trajectories \cite{DeLeon81,Berne82}) then \begin{equation} \label{equality_1} \rho^{\text{C}}_{\text{A}}(E) = \rho_{\text{A}}(E), \end{equation} so that the crossing density of states is equal to the full reactant phase space density of states. Apart from a set of measure zero, all phase points $\boldsymbol{z} \in \mathcal{M}_{\text{A}}$ can be classified as either trapped (T) or crossing (C) \cite{Berne82}. A phase point in the trapped region $\mathcal{M}_{\text{A}}^{\text{T}}$ never crosses the DS, so that the associated trajectory does not contribute to the reactive flux. Phase points in the crossing region $\mathcal{M}_{\text{A}}^{\text{C}}$ do however eventually cross the dividing surface, and so lie on trajectories that contribute to the reactive flux. In general, however, as a consequence of the existence of trapped trajectories (either trajectories on invariant \emph{trapped} $n$-tori \cite{DeLeon81,Berne82} or trajectories asymptotic to other invariant objects of zero measure), we have the inequality \cite{Thiele62,Berne82,Hase83} \begin{equation} \label{vol_2} \rho_{\text{A}}^{\text{C}}(E) \leq \rho_{\text{A}}(E). 
\end{equation} From the perspective of thermostat dynamics, the equality \eqref{equality_1} is a \emph{necessary condition for ergodicity}. If equality \eqref{equality_1} does not hold, then there is a region of phase space of nonzero measure that is trapped on the reactant side of the dividing surface. If $\rho_{\text{A}}^{\text{C}}(E) < \rho_{\text{A}}(E)$, then it is in principle necessary to introduce corrections to statistical estimates of reaction rates \cite{Berne82,Hase83,Gray87,Berblinger94,Grebenshchikov03,Stember07}. Numerical results for $\rho^{\text{C}}(E)$ and $\rho(E)$ for the HCN molecule are discussed in \cite{WaalkensBurbanksWigginsb04,WaalkensBurbanksWiggins05,Ezra09a}. \subsection{Gap time and reactant lifetime distributions} \label{subsec:gaps} The \emph{gap time distribution}, $\mathcal{P}(s; E)$, is of central interest in unimolecular kinetics \cite{Slater56,Thiele62}: the probability that a phase point on $\text{DS}_{\text{in}}(E)$ at energy $E$ has a gap time between $s$ and $s +{\rm d} s$ is equal to $\mathcal{P}(s; E) {\rm d} s$. An important idealized gap distribution is the random, exponential distribution \begin{equation} \label{exp_1} \mathcal{P}(s; E) = k(E) \, e^{-k(E) s} \end{equation} characterized by a single decay constant $k$ (where $k$ depends on energy $E$), with corresponding mean gap time $\bar{s} = k^{-1}$. An exponential distribution of gap times is taken to be a necessary condition for `statistical' behavior in unimolecular reactions \cite{Slater56,Slater59,Thiele62,Dumont86}. The lifetime (time to cross the dividing surface $\text{DS}_{\text{out}}(E)$) of phase point $\boldsymbol{z}(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}, \psi)$ is $t = s(\bar{\boldsymbol{q}}, \bar{\boldsymbol{\pi}}) - \psi$, and the corresponding (normalized) reactant lifetime distribution function $\mathbb{P}(t; E)$ at energy $E$ is \cite{Slater56,Slater59,Thiele62,Bunker62,Bunker64,Bunker73,Dumont86} \begin{subequations} \label{life_1} \begin{align} \label{life_1a} \mathbb{P}(t; E) &= -\frac{{\rm d}}{{\rm d} t'}\; \text{Prob}(t \geq t'; E) \Big\vert_{t'=t} \\ \label{life_1b} &= \frac{1}{\bar{s}} \, \Int{s}{t}{+\infty} \mathcal{P}(s; E) \end{align} \end{subequations} where the fraction of interesting (reactive) phase points having lifetimes between $t$ and $t + {\rm d} t$ is $\mathbb{P}(t; E) {\rm d} t$. Equation \eqref{life_1a} gives the general relation between the lifetime distribution and the fraction of trajectories having lifetimes greater than a certain value for arbitrary ensembles \cite{Bunker62,Bunker64,Bunker73}. Note that an exponential gap distribution \eqref{exp_1} implies that the reactant lifetime distribution $\mathbb{P}(t; E)$ is also exponential \cite{Slater56,Slater59,Thiele62,Bunker62,Bunker64,Bunker73}; both gap and lifetime distributions for realistic molecular potentials have been of great interest since the earliest days of trajectory simulations of unimolecular decay, and many examples of non-exponential lifetime distributions have been found \cite{Thiele62a,Bunker62,Bunker64,Bunker66,Bunker68,Bunker73,Hase76,Grebenshchikov03,Lourderaj09}. 
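As a quick check of the statement that an exponential gap distribution yields an exponential lifetime distribution, substituting eq.\ \eqref{exp_1} into eq.\ \eqref{life_1b} and using $\bar{s} = k(E)^{-1}$ gives \begin{equation} \mathbb{P}(t; E) = \frac{1}{\bar{s}} \, \Int{s}{t}{+\infty} k(E) \, e^{-k(E) s} = \frac{1}{\bar{s}} \, e^{-k(E) t} = k(E) \, e^{-k(E) t}, \end{equation} so that the lifetime distribution decays with the same rate constant $k(E)$ as the gap time distribution.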
\subsection{Reaction rates and the inverse gap time} The quantity \begin{equation} \label{k_RRKM} k^{\text{RRKM}}_f(E) \equiv \frac{\phi(E)}{\rho_{\text{A}}(E)} \end{equation} is the statistical (RRKM) microcanonical rate for the forward reaction (A $\to$ B) at energy $E$, the ratio of the magnitude of the flux $\phi(E)$ through $\text{DS}_{\text{in}}(E)$ to the total reactant density of states \cite{Robinson72,Forst03}. Clearly, if $\rho_{\text{A}}(E) = \rho_{\text{A}}^{\text{C}}(E)$, then \begin{equation} k^{\text{RRKM}}_f(E) = \frac{1}{\bar{s}}, \end{equation} the inverse mean gap time. In general, the inverse of the mean gap time is \begin{subequations} \label{k2} \begin{align} \frac{1}{\bar{s}} &= \frac{\phi(E)}{\rho_{\text{A}}^{\text{C}}(E)} \\ & = k^{\text{RRKM}}_f(E) \, \left[\frac{\rho_{\text{A}}(E)}{\rho_{\text{A}}^{\text{C}}(E)}\right] \\ & \geq k^{\text{RRKM}}_f(E). \end{align} \end{subequations} The inverse gap time can then be interpreted as the statistical unimolecular reaction rate corrected for the volume of trapped trajectories in the reactant phase space \cite{Dumont86,Berne82,Hase83,Gray87,Berblinger94}. In the next section we discuss our numerical calculations of the following quantities for the isokinetic thermostat Hamiltonians at constant energy $\mathcal{H} = 0$: gap time distributions, mean gap time $\bar{s}$, reactive flux $\phi(E=0)$, reactive volume $\bar{s} \phi$, and reactant density of states $\rho(E=0)$. \newpage \section{Numerical computations for isokinetic thermostat} \label{sec:numerical_results} \subsection{Computations} In this Section we discuss the computation of various dynamical quantities for the isokinetic thermostat Hamiltonians defined in Section \ref{sec:hamiltonians}. The potentials for these isokinetic thermostat Hamiltonians with 3 and 4 DoF have the form of an exponentiated double well plus harmonic modes potential. As already noted, for the cases examined here, we can exploit the symmetry of the potential functions. Thus, the dividing surface (DS) between `reactant' and `product' is simply defined to be the symmetry plane $y=0$ for the double well potential. For more general non-symmetric potentials, the dividing surface in phase space can be computed using a normal form expansion \cite{WaalkensSchubertWiggins08}. \subsubsection{System parameters} In addition to the frequencies characterizing the modes transverse to the reaction coordinate, it is necessary to choose values for the parameters $\beta \equiv 1/k_{\text{B}} T$, $\alpha$, and $\nu$. We adopt the following notation for presentation of our results: the system denoted by H$\beta \alpha \nu$ has 3 DoF and the parameter values indicated, while the system J$\beta \alpha \nu$ has 4 DoF. We present numerical results for the 3 DoF systems H121, H321, H521 (Hamiltonian eq.\ \eqref{eq:ham_3dof}) and 4 DoF systems J121, J321, J521 (Hamiltonian eq.\ \eqref{eq:ham_4dof}). In the units used here, the height of the barrier to isomerization in the physical potential corresponds to $\beta = 2$. The temperature values studied here therefore span a range of energy scales from well below the barrier height ($\beta = 5$) to well above ($\beta = 1$). 
\subsubsection{Computations} After choosing a set of parameters, we compute the following quantities (further details of computational methodology are given in Appendix \ref{sec:sampling}): \begin{enumerate} \item {Gap time distribution and mean gap time} The distribution of gap times is obtained by starting trajectories on the forward dividing surface and propagating them until they cross the backward dividing surface. Initial conditions are obtained by uniform random sampling of the dividing surface (see Appendix \ref{sec:sampling}). A discretized approximation to the gap time distribution, $\mathcal{P}(s)$, is obtained by binning the gap times for the trajectory ensemble. The mean gap time $\bar{s}$ is calculated as an unweighted average of computed gap times for the trajectory ensemble. \item {Lifetime distribution} The lifetime distribution $\mathbb{P} (t)$ is obtained from the (discretized) gap time distribution by numerical integration (cf.\ eq.\ \eqref{life_1}). The form of the lifetime distribution is of interest: in particular, deviations from exponentiality are suggestive of ``nonstatistical'' dynamics. The average lifetime $\langle t \rangle$ for the normalized lifetime distribution $\mathbb{P}(t)$ is defined as \begin{equation} \langle t \rangle = \Int{t}{0}{\infty} \mathbb{P}(t) \, t. \end{equation} The random (exponential) lifetime distribution $\mathbb{P} = \bar{k} e^{-\bar{k} t}$ extremizes the information entropy \begin{equation} S_{\mathbb{P}} \equiv -\Int{t}{0}{\infty} \mathbb{P}(t) \log[\mathbb{P}(t)] \end{equation} where $\bar{k} = \langle t \rangle^{-1}$ (see, for example, \cite{Chekmarev08}). One measure of the extent to which a calculated lifetime distribution $\mathbb{P}(t)$ characterized by mean lifetime $\langle t \rangle$ deviates from exponentiality is the entropy deficit \cite{Chekmarev08} \begin{subequations} \label{delta_s} \begin{align} \Delta S_{\mathbb{P}} & \equiv S_{\text{Random}} - S_{\mathbb{P}} \\ & = 1 + \log\langle t \rangle + \Int{t}{0}{\infty} \mathbb{P}(t) \log[\mathbb{P}(t)]. \end{align} \end{subequations} \item {Flux through DS} The flux through the DS is obtained by integrating the flux form ${\rm d} \sigma$ (cf.\ eq.\ \eqref{flux_1}) over the DS. As the flux form is simply the phase space volume element associated with the `activated complex', the reactive flux $\phi$ can be computed by uniform (random) sampling of the DS phase space (see Appendix \ref{sec:sampling}). The associated reactive volume $2 \phi \times \bar{s}$ is the total phase space volume on the energy shell $\mathcal{H} = 0$ traced out by all the trajectories passing through the dividing surface (in both directions, hence the factor of 2). Except for a set of measure zero, each trajectory returns to the DS, and only contributes to the reactive volume up until the gap time. \item {Reactant density of states} The classical density of states $\rho(E)$ of the reactant region at energy $\mathcal{H} =0$ is obtained by calculating a discretized approximation to $\rho(E)$ as a function of energy for $E<0$, fitting $\rho(E)$ to a polynomial function in $E$, and evaluating the fitted $\rho(E)$ at $E=0$. The discretized approximation to $\rho(E)$ is obtained by randomly sampling phase points inside a suitably chosen hypercube, and binning the energies for sampled points with $E \leq 0$. (See Appendix \ref{sec:sampling}.) 
Although in all cases the phase space volume (integrated density of states) $N(E)$ is found to be finite as $E \to 0$, for the 3 DoF isokinetic Hamiltonian (eq.\ \eqref{eq:ham_3dof}) it is not clear from our numerical results whether or not the density of states actually diverges as $E \to 0$, and we have not been able to evaluate this limit analytically. For the isokinetic thermostat with $n=4$ DoF (eq.\ \eqref{eq:ham_4dof}), our numerical results suggest that $\rho(E)$ is finite as $E\to 0$. In Appendix \ref{sec:dos}, we show analytically for isotropic harmonic potentials that, while $\rho(E)$ diverges logarithmically as $E\to 0$ for $n=2$ DoF, for $n\geq 4$ DoF, with $n$ even, $\rho (E=0)$ is finite. The value of the density of states $\rho(E)$ at $E=0$ is of some interest, as equality between the reactive density of states (the product of the mean gap time and the reactive flux) and the total reactant density of states $\rho(E=0)$ is a \emph{necessary} condition for ergodicity of the thermostat dynamics. \item {Thermostat dynamics} It is of course important to examine the effectiveness of the Hamiltonian isokinetic thermostats defined here as thermostats; that is, to assess how well time-averaged coordinate distributions evaluated over a single thermostat trajectory reproduce the Boltzmann distributions associated with the relevant temperature parameter $\beta$. We therefore pick an initial condition at the coordinate origin on the DS with random momenta and $\mathcal{H} =0$, and propagate it for a long time ($t=20000$, many periods of the harmonic oscillator modes). Trajectories are integrated in Mathematica \cite{Mathematica7} using the function {\tt NDSolve} with the {\tt SymplecticPartitionedRungeKutta} method and fixed step size, $\Delta t = 0.01$. By binning coordinate values over such a long trajectory, we obtain a discretized probability distribution function that can be compared with the desired canonical (Boltzmann) coordinate distribution. Moments of powers $x^k$ and $y^k$ obtained as time averages over the trajectory can also be compared with thermal averages at temperature $T$. \end{enumerate} \subsection{Numerical results: 3 DoF} \subsubsection{Lifetime distributions} Computed lifetime distributions for the 3 DoF thermostats H121, H321 and H521 are shown in Figure \ref{fig:lifetime_3dof}. It can be seen qualitatively from these plots of $\log[\mathbb{P} (t)]$ versus $t$ that, following initial transient decay, the lifetime distributions become more exponential as the temperature decreases (i.e., as $\beta$ increases). The entropic measure of the deviation defined in eq.\ \eqref{delta_s} was computed for (renormalized) lifetime decay curves with transients excluded (that is, we take $t \geq \langle t \rangle$). The resulting values of $\Delta S$ are shown in Table \ref{table:gap_times}; the $\Delta S$ values reflect the qualitative observation that the lifetime distributions become more exponential as $\beta$ increases. \subsubsection{Phase space volumes} Numerical values for the flux, mean gap time and reactant phase space volumes are given in Table \ref{table:gap_times}. The magnitude of the reactive flux $\phi$ decreases with decreasing temperature (i.e., as $\beta$ increases), as does the inverse gap time. Smaller inverse gap times therefore correlate with lifetime distributions that are more nearly exponential (cf.\ \cite{DeLeon81}). Note that energy surface volumes for the 3 DoF systems are not shown in Table \ref{table:gap_times}. 
The reason for this omission is that, on the basis of our numerical computations (results not shown here), we cannot be sure whether or not $\rho(E)$ diverges for 3 DoF systems as $E \to 0$ (cf.\ Appendix \ref{sec:dos}). As we do not have an analytical proof that the density of states at $E=0$ is finite, we cannot rule out the possibility of a divergence. \subsubsection{Thermostat coordinate distributions} Coordinate distributions for the 3 DoF Hamiltonian isokinetic thermostat are shown in Figures \ref{fig:coord1_3dof}, \ref{fig:coord2_3dof} and \ref{fig:coord3_3dof}. The histogrammed distributions are obtained by time-averaging over a single trajectory, while the solid red curves are the associated canonical (Boltzmann) distributions. Moments $\langle q^k \rangle$ for $q=x_1$ are shown in Figure \ref{fig:moments1_3dof}. Qualitatively, it is apparent from these results that the effectiveness of the Hamiltonian isokinetic thermostat, as judged by the similarity of trajectory-based and Boltzmann coordinate distributions and moments, increases with temperature. Recall that the lifetime distributions become more nearly exponential as temperature decreases. Despite the possibility of an infinite energy shell volume, the Hamiltonian isokinetic thermostat for 3 DoF does in fact serve to thermalize the 2 uncoupled harmonic modes quite effectively. Note, however, that even for the long trajectories considered, numerical distributions obtained for the thermalizing ($y$) coordinate are not symmetric about $y=0$. \begin{table}[tpb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline & Mean gap time & Flux & Reactive volume & Energy surface volume & $\Delta S$ \\ \hline\hline H121 & {16.57} & {6.978} & {231.28} & -- & 0.034 \\ \hline H321 & {48.88} & {0.773} & {75.61} & -- & 0.026 \\ \hline H521 & {130.41} & {0.280} & {72.99} & -- & 0.019 \\ \hline J121 & {12.69} & {41.490} & {1053.36} & {1053.48} & 0.037 \\ \hline J321 & {38.51} & {1.536} & {118.31} & {118.66} & 0.021 \\ \hline J521 & {101.60} & {0.334} & {67.87} & {69.47} & 0.010 \\ \hline \end{tabular} \caption{\label{table:gap_times} Computed mean gap times, fluxes, reactive phase space volumes, energy surface volumes and entropy deficits for lifetime distributions for 3 DoF and 4 DoF model Hamiltonians. Details of the computations are discussed in Appendix \ref{sec:sampling}. } \end{center} \end{table}% \subsection{Numerical results: 4 DoF} \subsubsection{Lifetime distributions} Computed lifetime distributions for the 4 DoF thermostats J121, J321 and J521 are shown in Figure \ref{fig:lifetime_4dof}. Values of the lifetime distribution entropy deficit $\Delta S$ are shown in Table \ref{table:gap_times}. As for the 3 DoF systems, the 4 DoF lifetime distributions become more exponential as the temperature decreases. \subsubsection{Phase space volumes} Numerical values for the flux, mean gap time and reactant phase space volumes for 4 DoF systems are given in Table \ref{table:gap_times}. Both the magnitude of the reactive flux $\phi$ and the inverse gap time decrease with temperature (as $\beta$ increases), while the lifetime decay curves become more nearly exponential as $\beta$ increases. Comparison between reactive volumes and total energy surface volumes shows that both J121 and J321 systems satisfy (at least, within numerical error) the necessary condition for ergodicity, while the J521 system shows a minor deviation from equality. 
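As an aside, this comparison can be reproduced directly from the rounded entries of Table \ref{table:gap_times}. The short Python script below is a minimal sketch (it is not part of the computations described in Appendix \ref{sec:sampling}; it simply re-uses the tabulated mean gap times, fluxes and energy surface volumes for the 4 DoF systems) illustrating the necessary condition for ergodicity, $2\phi\bar{s} \approx \rho(E=0)$:
\begin{verbatim}
# Consistency check using the rounded entries of the table:
# reactive volume = 2 * flux * mean gap time, compared with the
# tabulated energy surface volume (4 DoF systems only).
table = {
    # system: (mean gap time, flux, energy surface volume)
    "J121": (12.69, 41.490, 1053.48),
    "J321": (38.51, 1.536, 118.66),
    "J521": (101.60, 0.334, 69.47),
}

for name, (s_bar, phi, rho) in table.items():
    reactive_volume = 2.0 * phi * s_bar
    print(name, round(reactive_volume, 2), rho,
          round(reactive_volume / rho, 3))
\end{verbatim}
With these rounded values the ratio $2\phi\bar{s}/\rho(E=0)$ is approximately $1.00$ for J121 and J321, and approximately $0.98$ for J521, consistent with the discussion above.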
\subsubsection{Thermostat coordinate distributions} Distributions for coordinates $x_1$ and $y$ computed for the 4 DoF Hamiltonian isokinetic thermostat are shown in Figures \ref{fig:coord1_4dof} and \ref{fig:coord4_4dof}, respectively. The histogrammed distributions are obtained by time-averaging over a single trajectory, while the solid red curves are the associated canonical (Boltzmann) distributions. Moments $\langle q^k \rangle$ for $q=x_1$ are shown in Figure \ref{fig:moments1_4dof}. As for 3 DoF, the effectiveness of the Hamiltonian isokinetic thermostat, as judged by the similarity of trajectory-based and Boltzmann coordinate distributions and moments for the harmonic oscillator coordinates, increases with temperature. Recall that the lifetime distributions become more nearly exponential as temperature decreases. \newpage \section{Summary and conclusions} \label{sec:summary} In this paper we have investigated the phase space structure and dynamics of a Hamiltonian isokinetic thermostat. By design, ergodic thermostat trajectories at fixed (zero) energy generate a canonical distribution in configuration space \cite{Morriss98,Litniewski93,Morishita03,Ezra09a}. The physical potentials studied consist of a single bistable mode (the thermalizing degree of freedom \cite{Minary03a,Minary03b}) plus transverse harmonic modes. Although these modes are not coupled in the physical potential, the potential for the Hamiltonian thermostat is obtained by exponentiation of the physical potential, which introduces coupling between the modes. Interpreting the bistable mode as a reaction (isomerization) coordinate, we are able to establish connections with the theory of unimolecular reaction rates \cite{Slater59,Bunker66,Robinson72,Forst03}, in particular the formulation of isomerization rates in terms of gap times \cite{Slater56,Thiele62}. In Thiele's general formulation \cite{Thiele62}, the gap time is the time taken for a reactive trajectory initiated on a dividing surface in phase space to return to the surface. (Such phase space dividing surfaces in multidimensional systems have been defined and computed using normal form theory \cite{wwju,Komatsuzaki02,Komatsuzaki05,Jaffe05,Wiesenfeld05,WaalkensSchubertWiggins08}.) The distribution of gap times for a microcanonical ensemble initiated on the dividing surface is of great dynamical significance; an exponential distribution of the lifetimes for the reactive ensemble is usually taken to be an indicator of `statistical' behavior. Moreover, comparison of the magnitude of the phase space volume swept out by reactive trajectories as they pass through the reactant region with the total phase space volume (classical density of states) for the reactant region provides a necessary condition for ergodic dynamics. If the total density of states is appreciably larger than the reactive volume, the system cannot be ergodic. We have computed gap times, associated lifetime distributions, mean gap times, reactive fluxes, reactive volumes and total reactant phase space volumes for model systems with 3 and 4 DoF. The symmetry of the model potentials studied means that in all cases the dividing surface is defined by a single condition on the thermalizing coordinate, $y=0$. The thermostats were studied at three different temperatures. For 4 DoF, the necessary condition for ergodicity is approximately satisfied at all three temperatures. 
For both 3 and 4 DoF systems, nonexponential lifetime distributions are found at high temperatures ($\beta=1$, where the potential barrier to isomerization is $1/2$ in the same units), while at low temperatures ($\beta=5$) the lifetime distribution is more nearly exponential. We have quantified the degree of exponentiality of the lifetime distribution by computing the information entropy deficit with respect to pure exponential decay. From the standpoint of unimolecular reaction rate theory, the decay becomes more ``statistical'' at lower $T$ (smaller flux). This finding is in accord with the early observations of De Leon and Berne \cite{DeLeon81} on isomerization dynamics in a model 2-mode system. We have examined the efficacy of the Hamiltonian isokinetic thermostat by computing coordinate distributions averaged over a single long trajectory initiated at random on the dividing surface. For the parameter values used here, coordinate distributions are more nearly canonical (Boltzmann-like) at lower temperatures. It remains for future work to establish more quantitative correlations between dynamical attributes of the Hamiltonian thermostat and thermostat effectiveness. \begin{acknowledgments} PC and SW acknowledge the support of the Office of Naval Research Grant No.~N00014-01-1-0769. All three authors acknowledge the stimulating environment of the NSF-sponsored Institute for Mathematics and its Applications (IMA) at the University of Minnesota, where the work reported in this paper was begun. \end{acknowledgments} \newpage
\section{Introduction and results} In the following we will consider the standard even dimensional symplectic vector space $(\ensuremath{\mathdj{C}}^n,\omega_0:=dx_1\wedge dy_1+\hdots+dx_n \wedge dy_n)$, as well as the projective space $(\ensuremath{\ensuremath{\mathdj{C}}}P^n,\omega_{\operatorname{FS},r})$ endowed with the Fubini-Study symplectic two-form. We here normalise $\omega_{\operatorname{FS},r}$ so that a line $\ell \subset \ensuremath{\ensuremath{\mathdj{C}}}P^n$ has symplectic area equal to $\int_\ell \omega_{\operatorname{FS},r}=\pi r^2$. We also write $\omega_{\operatorname{FS}}:=\omega_{\operatorname{FS},1/\sqrt{\pi}}$. See Section \ref{sec:prel} for more details. Neck-stretching techniques were successfully used in \cite{PuncturedHolomorphic} by K. Cieliebak and K. Mohnke in order to prove the Audin conjecture, first formulated in \cite{FibresNormaux} by M. Audin: Every Lagrangian torus in $\ensuremath{\mathdj{C}}^n$ or $\ensuremath{\ensuremath{\mathdj{C}}}P^n$ bounds a disc of positive symplectic area and Maslov index equal to two. The same techniques were also used to deduce properties concerning the following quantity for a Lagrangian submanifold, which was introduced in the same article. (We here restrict our attention to Lagrangian tori.) Given a Lagrangian torus $L \subset (X,\omega)$ inside an arbitrary symplectic manifold, we define \[ A_{\operatorname{min}}(L):=\inf_{A \in \pi_2(X,L) \atop \int_A \omega > 0} \int_A \omega \in [0,+\infty].\] This quantity can then be used in order to define a capacity for the symplectic manifold $(X,\omega)$ as follows: \[ c_{\operatorname{Lag}}(X,\omega) := \sup_{L\subset (X,\omega) \text{ Lag. torus}} A_{\operatorname{min}}(L) \in [0,+\infty].\] We refer to \cite{PuncturedHolomorphic} for the properties satisfied by this capacity. In view of this it is natural to consider: \begin{defn}[\cite{PuncturedHolomorphic}] A Lagrangian torus $L \subset (X,\omega)$ satisfying \[A_{\operatorname{min}}(L)=c_{\operatorname{Lag}}(X,\omega) \] is called \emph{extremal}. \end{defn} The above capacity has been computed only for a limited number of symplectic manifolds, notably: \begin{thm}[Theorem 1.1 and Corollary 1.3 in \cite{PuncturedHolomorphic}] \label{thm:cap} We have \begin{eqnarray} \label{1} & & c_{\operatorname{Lag}}(B^{2n},\omega_0) = \pi/n,\\ \label{2} & & c_{\operatorname{Lag}}(\ensuremath{\ensuremath{\mathdj{C}}}P^n,\omega_{\operatorname{FS},r}) = r^2\pi/(n+1), \end{eqnarray} and in particular $c_{\operatorname{Lag}}(D^{2n},\omega_0) = \pi/n$. \end{thm}A straight-forward calculation shows that the $n$-dimensional Clifford torus \[L_{\operatorname{Cl}}:=\left(S^1_{\frac{1}{\sqrt{n}}}\right)^n \subset S^{2n-1} = \partial D^{2n} \subset (\ensuremath{\mathdj{C}}^n,\omega_0),\] contained inside the boundary of the $2n$-dimensional unit disc is extremal. In the case when $n=1$, the Clifford torus is clearly the only extremal Lagrangian torus. Furthermore, a monotone Lagrangian torus $L \subset (\ensuremath{\ensuremath{\mathdj{C}}}P^n,\omega_{\operatorname{FS}})$ is extremal, as follows by elementary topological considerations together with the fact that there exists a representative of $\pi_2(\ensuremath{\ensuremath{\mathdj{C}}}P^2,L)$ having Maslov index two and positive symplectic area by \cite[Theorems 1.1, 1.2]{PuncturedHolomorphic}. (For previous related results, consider \cite{NewObstruction}, \cite{Polterovich:MaslovClass}, \cite{Oh:spectral}, \cite{FloerAnomalyI}, \cite{Buhovsky:MaslovClass}, and \cite{FloerHomUniv}.) 
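One way to carry out the straight-forward calculation alluded to above is the following. Since $\ensuremath{\mathdj{C}}^n$ is contractible, the boundary map identifies $\pi_2(\ensuremath{\mathdj{C}}^n,L_{\operatorname{Cl}})$ with $\pi_1(L_{\operatorname{Cl}}) \cong \ensuremath{\mathdj{Z}}^n$, and the primitive $\lambda_0:=\frac{1}{2}\sum_{i=1}^n(x_idy_i-y_idx_i)$ of $\omega_0$ restricts to a closed one-form on the Lagrangian torus $L_{\operatorname{Cl}}$. A class $A \in \pi_2(\ensuremath{\mathdj{C}}^n,L_{\operatorname{Cl}})$ whose boundary is of class $(k_1,\hdots,k_n) \in H_1(L_{\operatorname{Cl}}) \cong \ensuremath{\mathdj{Z}}^n$ hence satisfies \[ \int_A \omega_0 = \int_{\partial A}\lambda_0 = \sum_{i=1}^n k_i\,\pi\left(\tfrac{1}{\sqrt{n}}\right)^2=\frac{\pi}{n}\sum_{i=1}^n k_i,\] so that $A_{\operatorname{min}}(L_{\operatorname{Cl}})=\pi/n=c_{\operatorname{Lag}}(D^{2n},\omega_0)$ by Theorem \ref{thm:cap}, which is the claimed extremality. 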
In \cite{PuncturedHolomorphic} the author learned about the following two conjectures, the first one originally due to L. Lazzarini: \begin{Conj} \label{conj:1} All extremal Lagrangian tori $L \subset (D^{2n},\omega_0)$ are contained inside the boundary $\partial D^{2n}=S^{2n-1}$. \end{Conj} \begin{Conj} \label{conj:2} All extremal Lagrangian tori $L \subset (\ensuremath{\ensuremath{\mathdj{C}}}P^n,\omega_{\operatorname{FS}})$ are monotone. \end{Conj} Our main result is a positive answer to Conjecture \ref{conj:1} in dimension four. \begin{thm} \label{thm:main} All extremal Lagrangian tori $L \subset (D^4,\omega_0)$ are contained inside the boundary, i.e.~$L \subset S^3 = \partial D^4$. \end{thm} After a consideration of the possible Lagrangian tori inside the three-dimensional unit sphere using classical techniques, we also obtain the following classification result. \begin{cor} \label{cor:main} All extremal Lagrangian tori $L \subset (D^4,\omega_0)$ are isotopic to the Clifford torus $S^1_{1/\sqrt{2}} \times S^1_{1/\sqrt{2}} \subset S^3 = \partial D^4$ by a Hamiltonian isotopy of $(D^4,\omega_0)$ preserving the boundary set-wise. \end{cor} The proof of our main result Theorem \ref{thm:main} consists of analysing the pseudoholomorphic curves produced in Cieliebak-Mohnke's proof of Theorem \ref{thm:cap} from \cite{PuncturedHolomorphic}. Their proof is based upon a pseudoholomorphic curve technique called the ``splitting construction'' or ``stretching the neck'', which first appeared in the setting of symplectic field theory in the work \cite{IntroSFT} by Y. Eliashberg, A. Givental, and H. Hofer. Pseudoholomorphic curves were introduced by M. Gromov \cite{Gromov}. We will restrict our attention to real four-dimensional symplectic manifolds. In this setting pseudoholomorphic curves behave particularly well, since one can apply techniques such as positivity of intersection due to D. McDuff \cite{LocalCurve}, and automatic transversality which in the present setting is due to C. Wendl \cite{Auttrans}. These four-dimensional techniques are crucial to our proof, and it is not clear to the author if the argument can be modified to work in arbitrary dimensions. The so-called splitting construction involves studying the limit of pseudoholomorphic curves under a sequence $J^\tau$, $\tau \ge 0$, of tame almost complex structures on $(X,\omega)$ which stretches the neck around a hypersurface $Y \subset (X,\omega)$ of contact type. Loosely speaking, such a sequence introduces a neck of the form $[-(\tau+\epsilon),\tau+\epsilon] \times Y \hookrightarrow X$ in a neighbourhood of $Y$, where the almost complex structure is cylindrical. In our case, the hypersurface $Y \subset X= \ensuremath{\ensuremath{\mathdj{C}}}P^n$ of contact type will be taken to be the unit cotangent bundle of the Lagrangian torus. The compactness results for sequences of $J^\tau$-holomorphic curves as $\tau \to +\infty$ were obtained in \cite{CompSFT} by F. Bourgeois, Y. Eliashberg, C. Wysocki, and E. Zehnder, and independently in \cite{CompactnessPunctured} by K. Cieliebak and K. Mohnke. Due to the fact that tori admit flat metrics which, moreover, induce foliations by families of closed geodesics, the limit of pseudoholomorphic curves under a stretching of the neck behave particularly well in this case. In addition to \cite{PuncturedHolomorphic}, Lagrangian tori have previously been studied using the splitting construction in a series of work; among others, see \cite{Unlinking} by the author together with J. D. 
Evans, \cite{Polydisks} by R. Hind and S. Lisi, and \cite{LagIsoTori} by the author together with E. Goodman and A. Ivrii. \section{Preliminaries} \label{sec:prel} This paper concerns Lagrangian tori inside the open unit ball and closed unit disc \[B^{2n} \subset D^{2n} \subset (\ensuremath{\mathdj{C}}^n,\omega_0=dx_1\wedge dy_1 + \hdots + dx_n \wedge dy_n)\] endowed with the standard symplectic two-form $\omega_0$, as well as the projective space $(\ensuremath{\ensuremath{\mathdj{C}}}P^n,\omega_{\operatorname{FS}})$ endowed with the Fubini-Study symplectic two-form. Recall that a half-dimensional submanifold of a symplectic manifold is said to be {\bf Lagrangian}, if the symplectic form vanishes on its tangent space. It will be useful to compactify the open ball $B^{2n}_r \subset (\ensuremath{\mathdj{C}}^n,\omega_0)$ of radius $r>0$ to the projective plane \[(\ensuremath{\ensuremath{\mathdj{C}}}P^n,\omega_{\operatorname{FS}_r}) \supset (\ensuremath{\ensuremath{\mathdj{C}}}P^n \setminus D_\infty,\omega_{\operatorname{FS},r}) = (B^{2n}_r,\omega_0)\] endowed with the Fubini-Study symplectic form, where $D_\infty$ denotes the divisor at infinity. The Fubini-Study form $\omega_{\operatorname{FS},r}$ is here normalised so that a surface $\ell \subset \ensuremath{\ensuremath{\mathdj{C}}}P^n$ of degree one, i.e.~$[\ell]\in H_2(\ensuremath{\ensuremath{\mathdj{C}}}P^n)$ is the generator of positive symplectic area, satisfies $\int_\ell \omega_{\operatorname{FS},r}=\pi r^2$. We also write $\omega_{\operatorname{FS}}:=\omega_{\operatorname{FS},1/\sqrt{\pi}}$. The above symplectic manifolds $(X,\omega)$ are {\bf monotone}: the first Chern class satisfies $c_1(X,\omega) = \kappa [\omega] \in H^2(X;\ensuremath{\mathdj{R}})$ for some $\kappa >0$. More precisely, $\kappa >0$ can be chosen arbitrarily for $(\ensuremath{\mathdj{C}}^n,\omega_0)$, while $\kappa=(n+1)/(\pi r^2)$ for $(X,\omega)=(\ensuremath{\ensuremath{\mathdj{C}}}P^n,\omega_{\operatorname{FS},r})$. Recall that a Lagrangian torus inside $(X,\omega)$ is said to be {\bf monotone} provided that its Maslov class satisfies $\mu_L=2\kappa[\omega] \in H^2(X,L;\ensuremath{\mathdj{R}})$ for a number $\kappa>0$ as above. Pseudoholomorphic curve techniques work particularly well for monotone Lagrangian tori, and they are known to satisfy many rigidity properties. Recall that a {\bf tame almost complex structure} $J$ on a symplectic manifold $(X,\omega)$ is an endomorphism $J \in \operatorname{End}(TX)$ satisfying $J^2 =-\operatorname{Id}_{TX}$ together with the property that $\omega(v,Jv)>0$ whenever $v \neq 0$. The space of tame almost complex structures is contractible by \cite{Gromov}. In the latter article Gromov also established his celebrated compactness theorem for pseudoholomorphic curves for a tame almost complex structure. Recall that, given a choice of almost complex structure $J$ and a Riemann surface $(\Sigma,i)$, a map \[ u \colon (\Sigma,i) \to (X,J) \] is {\bf pseudoholomorphic} given that the fully non-linear Cauchy-Riemann type equation $J \circ du = du \circ i$ is satisfied. The present article will mainly consider punctured pseudoholomorphic spheres in non-compact symplectic manifolds. 
More precisely, outside of an open pre-compact sub-domain with smooth boundary, the symplectic manifold will consist of convex and concave cylindrical ends symplectomorphic to half symplectisations \begin{gather*} [0,+\infty) \times Y_+ \subset (\ensuremath{\mathdj{R}} \times Y_+,d(e^t\alpha_+)), \\ (-\infty,0] \times Y_- \subset (\ensuremath{\mathdj{R}} \times Y_-,d(e^t\alpha_-)), \end{gather*} respectively, where $t$ is a coordinate on the $\ensuremath{\mathdj{R}}$-factor, and $\alpha_\pm$ are contact one-forms on the closed manifolds $Y_\pm$. Recall the definition of the {\bf Reeb vector field} $R \in \Gamma(TY)$ on a contact manifold $(Y,\alpha)$ with contact form $\alpha$, which is determined by the equations \[\alpha(R)=1, \:\: d\alpha(R,\cdot)=0.\] Following \cite{CompSFT}, a tame almost complex structure $J$ on $(\ensuremath{\mathdj{R}} \times Y,d(e^t\alpha))$ is said to be {\bf cylindrical} given that: \begin{itemize} \item $J$ is invariant under translations of the $t$ coordinate; \item $J\partial_t=R$; and \item $J\xi =\xi$, where $\xi:=\ker \alpha \subset TY$. \end{itemize} The aforementioned article extends Gromov's compactness theorem to the space of pseudoholomorphic curves for tame almost complex structures that are cylindrical outside of a compact subset, given that these curves have a uniform bound on their Hofer energy (see \cite{CompSFT} for the definition). By a {\bf finite energy curve} we will mean a pseudoholomorphic curve of finite (Hofer) energy. We will not give the formal definition, but we point out that a proper punctured pseudoholomorphic curve \[ u \colon (\Sigma \setminus \{ p_1,\hdots,p_m\},i) \to (X,\omega),\] where $(\Sigma,i)$ is a closed Riemann surface and $(X,\omega)$ has non-compact cylindrical ends, is a finite energy curve if and only if $u$ is asymptotic to cylinders $\ensuremath{\mathdj{R}} \times \gamma_i \subset \ensuremath{\mathdj{R}} \times Y_\pm$ contained in the cylindrical ends of $X$ at each of its punctures $p_i$, $i=1,\hdots,m$. Here $\gamma_i \subset Y_\pm$ are periodic integral curves of the Reeb vector field $R_\pm$ -- usually called {\bf periodic Reeb orbits}. \section{The proof of Corollary \ref{cor:main}} Since the torus $L \subset S^3 \subset B^4$ is Lagrangian, it is foliated by integral curves of the characteristic foliation \[\ker \omega_0 |_{TS^3} \subset TS^3 \subset TB^4.\] These integral curves are the periodic Reeb orbits \[S^1=\ensuremath{\mathdj{R}}/2\pi\ensuremath{\mathdj{Z}} \ni \theta \mapsto e^{i\theta}\mathbf{z} \in S^3, \:\: \mathbf{z} \in S^3 \subset \ensuremath{\mathdj{C}}^2,\] of the standard contact form $\alpha_{\operatorname{std}}:=\frac{1}{2}\sum_{i=1}^2(x_idy_i-y_idx_i)$ on $S^3$. The foliation of the sphere by these Reeb orbits induces the Hopf fibration \begin{gather*} p \colon S^3 \to \ensuremath{\ensuremath{\mathdj{C}}}P^1,\\ (z_1,z_2) \mapsto [z_1:z_2], \end{gather*} with the above periodic Reeb orbits as $S^1$-fibres, while the base is given by the orbit space diffeomorphic to $\ensuremath{\ensuremath{\mathdj{C}}}P^1$. Recall that the latter space is endowed with the canonical symplectic form $\omega_{\operatorname{FS},1}$ obtained from the corresponding symplectic reduction $(\ensuremath{\mathdj{C}}^2,\omega_0) \supset S^3 \xrightarrow{p} (\ensuremath{\ensuremath{\mathdj{C}}}P^1,\omega_{\operatorname{FS},1})$. Since $L$ is foliated by periodic Reeb orbits of the same period, the Reeb flow on $S^3$ (i.e.~multiplication by $e^{i2t}$) induces a smooth and free $S^1$-action on $L$. 
In other words, we may consider $L$ as an $S^1$-bundle $q \colon L \to L/S^1 \cong S^1$, with base given as the quotient of $L$ by this action. By topological reasons this must be a trivial $S^1$-bundle over $S^1$. In particular, for any choice of section $\sigma \colon L/S^1 \to L$ of $q$, the projection $p \circ \sigma$ is an embedded closed curve $\gamma := p \circ \sigma \colon S^1 \hookrightarrow (\ensuremath{\ensuremath{\mathdj{C}}}P^1,\omega_{\operatorname{FS},1})$. Using the assumption that $L$ is extremal, we conclude that the two smooth discs $\ensuremath{\ensuremath{\mathdj{C}}}P^1 \setminus \gamma(S^1)$ each must be of symplectic area equal to $\pi/2$. Otherwise, one could readily lift the disc of smaller area to a disc inside $S^3 \to \ensuremath{\ensuremath{\mathdj{C}}}P^1$ having boundary on $L$ and being of the same area, thus contradicting the assumption that $L$ is extremal. Using elementary methods one can construct a Hamiltonian isotopy \[\phi^s_{H_s} \colon (\ensuremath{\ensuremath{\mathdj{C}}}P^1,\omega_{\operatorname{FS,1}}) \to (\ensuremath{\ensuremath{\mathdj{C}}}P^1,\omega_{\operatorname{FS,1}}),\] induced by the time-dependent Hamiltonian $H_s \colon \ensuremath{\ensuremath{\mathdj{C}}}P^1 \to \ensuremath{\mathdj{R}}$, taking the curve $\gamma$ to the equator \begin{gather*} \gamma_0 \colon S^1 =\ensuremath{\mathdj{R}}/2\pi\ensuremath{\mathdj{Z}} \to \ensuremath{\ensuremath{\mathdj{C}}}P^1,\\ \theta \mapsto p(e^{i\theta/2},e^{-i\theta/2}). \end{gather*} Observe that $p^{-1}(\gamma_0)=S^1_{1/\sqrt{2}} \times S^1_{1/\sqrt{2}} \subset S^3$ is the sought Clifford torus. The Hamiltonian isotopy $\phi^s_{H_s}$ lifts to a \emph{contact-form preserving} isotopy $\phi^s \colon (S^3,\alpha_{\operatorname{std}}) \to (S^3,\alpha_{\operatorname{std}})$ induced by the contact Hamiltonian $H_s \circ p \colon S^3 \to \ensuremath{\mathdj{R}}$, i.e.~$p\circ \phi^s=\phi^s_{H_s}$. Since the contactomorphisms $\phi^s$ preserve the contact form $\alpha_{\operatorname{std}}$ it follows that \begin{gather*} (\ensuremath{\mathdj{R}} \times S^3,d(e^t\alpha_{\operatorname{std}})) \to (\ensuremath{\mathdj{R}} \times S^3,d(e^t\alpha_{\operatorname{std}})), \\ (t,y) \mapsto (t,\phi^s(y)), \end{gather*} defines a one-parameter family of symplectomorphisms which, moreover, is generated by the time-dependent Hamiltonian of the form $e^t H_s \circ p \colon \ensuremath{\mathdj{R}} \times S^3 \to \ensuremath{\mathdj{R}}$. Utilising the symplectic identification \begin{gather*} (\ensuremath{\mathdj{C}}^2 \setminus \{ 0\},\omega_0) \to (\ensuremath{\mathdj{R}} \times S^3,d(e^t\alpha_{\operatorname{std}})), \\ \mathbf{z} \mapsto (2\log \|\mathbf{z}\|,\mathbf{z}/\|\mathbf{z}\|), \end{gather*} we obtain a time-dependent Hamiltonian \begin{gather*} \widetilde{H}_s \colon \ensuremath{\mathdj{C}}^2 \setminus \{0\} \to \ensuremath{\mathdj{R}}, \\ \mathbf{z} \mapsto \|\mathbf{z}\|^2 \cdot H_s \circ p(\mathbf{z}), \end{gather*} whose induced flow on $\ensuremath{\mathdj{C}}^2 \setminus \{0\}$ preserves each concentric sphere $S^3_r$, $r>0$, while it satisfies the property that $p \circ \phi^s_{\widetilde{H}_s} = \phi^s_{H_s}$. After a smoothing of $\widetilde{H}_s$ in a small neighbourhood of the origin $0 \in \ensuremath{\mathdj{C}}^2$, we have finally produced our sought Hamiltonian isotopy of $(D^4,\omega_0)$. 
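Note that, under the symplectic identification of $(\ensuremath{\mathdj{C}}^2 \setminus \{0\},\omega_0)$ with the symplectisation used above, the cylindrical coordinate satisfies \[ e^t = e^{2\log\|\mathbf{z}\|}=\|\mathbf{z}\|^2,\] which is how the factor $\|\mathbf{z}\|^2$ in the formula for $\widetilde{H}_s$ arises from the Hamiltonian $e^tH_s \circ p$ on $\ensuremath{\mathdj{R}} \times S^3$. 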
\section{The proof of Theorem \ref{thm:main}} \label{sec:proof} Our proof follows from the techniques in \cite{PuncturedHolomorphic} that are used to prove part \eqref{1} of Theorem \ref{thm:cap}, combined with the automatic transversality result from \cite{Auttrans}. Observe that the latter theory only is applicable to four-dimensional symplectic manifolds. By contradiction we assume that $\varphi \colon \T^2 \hookrightarrow (D^4,\omega_0)$ is a fixed Lagrangian torus $L$ which is extremal, but not contained entirely in the boundary $S^3 =\partial D^4$. Here we make the identification $\T^2=S^1 \times S^1=\ensuremath{\mathdj{R}}/2\pi\ensuremath{\mathdj{Z}} \times \ensuremath{\mathdj{R}} / 2\pi\ensuremath{\mathdj{Z}}$, with the induced circle-valued coordinates $\theta_1,\theta_2$. We start by fixing a so-called Weinstein neighbourhood of $L$ (see e.g.~\cite{SympTop}), which is an extension of $\varphi$ to a symplectomorphism \begin{gather*} \Phi \colon (T^*_{3\epsilon_0}\T^2,d\lambda_{\T^2}) \to (\ensuremath{\mathdj{C}}^2,\omega_0),\\ \Phi|_{\T^2}=\varphi, \end{gather*} for some fixed $\epsilon_0 >0$. Here $T^*_{s}\T^2:=\T^2 \times B^2_s \subset \T^2 \times \ensuremath{\mathdj{R}}^2$ is the co-disc bundle of radius $s>0$ for the flat torus, and where the Liouville one-form is given by $\lambda_{\T^2}=p_1d\theta_1+p_2d\theta_2$ for the standard coordinates $(p_1,p_2)$ on the $\ensuremath{\mathdj{R}}^2$-factor. Writing $S^*_s\T^2 := \T^2 \times S^1_s \subset \T^2 \times \ensuremath{\mathdj{R}}^2$ for the corresponding co-sphere bundle, we will once and for all fix a single fibre $F_p$ of $S^*_{2\epsilon_0}\T^2$ above a point $p \in \T^2 \subset \Phi^{-1}(B^4)$. After making an appropriate choice of $p \in \T^2$, and possibly after replacing $\epsilon_0>0$ with a sufficiently small number, we may assume that \[S_0:= \Phi(F_p) \subset B^4 \setminus L = D^4 \setminus (\partial D^4 \cup L).\] Here we have obviously used the assumption that $L$ is not contained entirely inside $\partial D^4$. See Figure \ref{fig:fibre} for a schematic picture. \begin{figure}[htp] \begin{center} \vspace{3mm} \labellist \pinlabel $T^*_\epsilon L$ at 51 38 \pinlabel $U_0$ at 49 90 \pinlabel $\color{red}S_0$ at 69 87 \pinlabel $L$ at 106 75 \pinlabel $B^4_{1+\delta}$ at 144 21 \pinlabel $p$ at 90 51 \pinlabel $B^4$ at 74 20 \endlabellist \includegraphics{fibre} \caption{The embedded circle $S_0 \subset B^4$ being the image of a fibre $F_p \subset S^*_{2\epsilon_0}L$ over $p \in L$ and which non-trivially links the Lagrangian torus $L$. The co-disc bundle $T^*_\epsilon L$ is disjoint from the neighbourhood $U_0$ of this fibre, and is contained inside $B^4_{1+\delta}$, given that $0<\epsilon<\epsilon_0$ is sufficiently small.} \label{fig:fibre} \end{center} \end{figure} The following property will be crucial to us. We fix an open neighbourhood $U_0$ of $S_0$ whose closure satisfies \[ \overline{U_0} \subset B^4 \setminus \Phi(T^*_{\epsilon_0}\T^2). \] We also fix a compatible almost complex structure $J_{U_0}$ on $(U_0,\omega_0)$ which can be extended to a compatible almost complex structure on $(\ensuremath{\mathdj{C}}^2,\omega_0)$. The monotonicity property for pseudoholomorphic curves \cite[Proposition 4.3.1(ii)]{SomeProp} implies that \begin{enumerate}[label=(M), ref=(M)] \item \label{M} There exists a constant $\hbar>0$ for which the following holds: Any proper $J_{U_0}$-holomorphic curve $C \subset U_0$ satisfying $C \cap S_0 \neq \emptyset$ has symplectic area bounded from below by $\int_C \omega_0 \ge \hbar$. 
\end{enumerate} \subsection{The proof of part \eqref{1} of Theorem \ref{thm:cap} in dimension four} We here reprove \eqref{1} in the four-dimensional case, which originally was established in \cite{PuncturedHolomorphic} by Cieliebak-Mohnke in arbitrary dimensions. We follow exactly the same ideas, with the only difference that, instead of applying \cite[Corollary 3.3]{PuncturedHolomorphic} which holds in the nondegenerate setting, we adapt this argument to a setting which is degenerate in the Morse-Bott sense. Theorem \ref{thm:main} will then be seen to follow after a more careful investigation of the involved pseudoholomorphic curves; this is done in Section \ref{sec:further} below. For each $\delta>0$, we consider the symplectic embeddings \[(B^4,\omega_0) \subset (B^4_{1+\delta},\omega_0) = (\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus D_\infty,\omega_{\operatorname{FS},1+\delta})\subset (\ensuremath{\ensuremath{\mathdj{C}}}P^2,\omega_{\operatorname{FS},1+\delta}),\] together with a number $0<\epsilon<\epsilon_0$ depending on $\delta>0$ for which $\Phi(T^*_{\epsilon}\T^2) \subset B^4_{1+\delta/2}$. \subsubsection{The Morse-Bott contact form on $S^*_1L$} First we define the tame almost complex structure $J_0$ on $T^*\T^2$ determined by \[J_0(\partial_{\theta_i})=-\rho(\| \mathbf{p}\|)\partial_{p_i}, \:\:i=1,2,\] where $\rho \colon \ensuremath{\mathdj{R}}_{\ge 0} \to \ensuremath{\mathdj{R}}_{\ge 0}$ satisfies $\rho'(t) \ge 0$, $\rho(t) \equiv \epsilon/4$ for $t \le \epsilon/4$, and $\rho(t)=t$ for $t \ge \epsilon/3$. We also define the tame almost complex structure $J_{\operatorname{cyl}}$ on $T^*\T^2 \setminus 0_{\T^2}$ determined by \[ J_{\operatorname{cyl}}(\partial_{\theta_i})=-\| \mathbf{p}\|\partial_{p_i}, \:\:i=1,2,\] which hence coincides with $J_0$ in the subset $\{ \|\mathbf{p}\| \ge \epsilon/3\}$. Note that the latter almost complex structure is \emph{cylindrical} in the sense of \cite{CompSFT}, given that we use the exact symplectic identification of $(T^*\T^2 \setminus 0_{\T^2},d\lambda_{\T^2})$ with the symplectisation \[ (\ensuremath{\mathdj{R}} \times S^*_1\T^2,d(e^t\alpha_0)), \:\: \alpha_0:=\lambda_{\T^2}|_{T(S^*_1\T^2)},\] which sends the level set $\{\| \mathbf{p}\|=c\}$ to the level set $\{ t= \log c\}$. We observe that the Reeb flow on $(S^*_1L,\alpha_0)$ coincides with the so-called cogeodesic flow for the canonical flat metric on $L \cong \T^2=(\ensuremath{\mathdj{R}}/2\pi\ensuremath{\mathdj{Z}})^2$. In particular, it follows that the periodic Reeb orbits come in manifolds diffeomorphic to $S^1$: there is one such family $\Gamma_\eta \simeq S^1$ of periodic Reeb orbits for each non-zero homology class $\eta \in H_1(L) \setminus \{0\}$. These manifolds of orbits are moreover non-degenerate in the Morse-Bott sense; see \cite{BourgeoisBott} for more details. \subsubsection{A sequence of almost complex structures stretching the neck} \label{sec:neckstretch} Consider a sequence of almost complex structures $J^\tau$, $\tau \ge 0$, which satisfy the following properties: \begin{itemize} \item Inside $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus \Phi(T^*_{\epsilon/2}\T^2)$, the family $J^\tau$ is independent of $\tau$. Furthermore, we require that: \begin{itemize} \item In the subset $U_0 \subset B^4 \setminus \Phi(T^*_{\epsilon}\T^2)$ we have $J^\tau|_{U_0}=J_{U_0}$. 
\item Inside $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus B^4_{1+\delta/2}$ we have $J^\tau=i$, where the $i$ denotes the standard integrable complex structure on $(\ensuremath{\ensuremath{\mathdj{C}}}P^2,\omega_{\operatorname{FS},1+\delta})$. (In particular, the line $D_\infty$ at infinity is $J^\tau$-holomorphic for each $\tau \ge 0$.) \end{itemize} \item In the subset $\Phi(T^*_{\epsilon/3}\T^2)$, each $J^\tau$ is the push-forward of $J_0$ under the map $\Phi$. \item In the subset $\Phi(T^*_{\epsilon/2}\T^2) \setminus \Phi(T^*_{\epsilon/3}\T^2)$, identified with \[ ([\log{\epsilon/3},\log{\epsilon/2}) \times S^*_1\T^2,d(e^t\alpha_0)),\] the almost complex structure $J^\tau$, $\tau \ge 0$, is the push-forward of $J_{\operatorname{cyl}}$ under the (non-symplectic!) identification \[ [\log{\epsilon/3},\log{\epsilon/2}+\tau) \times S^*_1\T^2 \cong [\log{\epsilon/3},\log{\epsilon/2}) \times S^*_1\T^2,\] induced by a diffeomorphism $[\log{\epsilon/3},\log{\epsilon/2}) \cong [\log{\epsilon/3},\log{\epsilon/2}+\tau)$. \end{itemize} The above sequence $J^\tau$, $\tau \ge 0$, of tame almost complex structures is said to ``stretch the neck'' around the hypersurface \[\Phi(S^*_{\epsilon/2}\T^2) \subset (\ensuremath{\ensuremath{\mathdj{C}}}P^2,\omega_{\operatorname{FS},1+\delta})\] of contact type as $\tau \to +\infty$, where this hypersurface has been endowed with the contact form $\alpha_0$ described above. We also need to specify the tame almost complex structure $J^\infty$ on the symplectic manifold $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ with a concave cylindrical end, which is determined by \begin{itemize} \item $J^\infty=J^\tau=J^0$ inside $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus \Phi(T^*_{\epsilon/2}\T^2)$; and \item $J^\infty$ is the push-forward of $J_{\operatorname{cyl}}$ under $\Phi$ in the subset $\Phi(T^*_{\epsilon/2}\T^2 \setminus 0_{\T^2}) \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$. \end{itemize} Again, observe that the line at infinity $D_\infty \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2$ is $J^\infty$-holomorphic by construction. \subsubsection{The existence of pseudoholomorphic buildings of degree one} The following existence result for pseudo-holomorphic curves in $(\ensuremath{\ensuremath{\mathdj{C}}}P^2,\omega_{\operatorname{FS},r})$ due to Gromov is crucial. \begin{thm}[\cite{Gromov}] \label{thm:gromov} Let $J$ be an arbitrary tame almost complex structure on $(\ensuremath{\ensuremath{\mathdj{C}}}P^2,\omega_{\operatorname{FS},r})$. There exists a unique $J$-holomorphic sphere of degree one which satisfies either of the following: \begin{itemize} \item A point constraint at two different points $p,q \in \ensuremath{\ensuremath{\mathdj{C}}}P^2$; or \item A tangency condition to $\ensuremath{\mathdj{C}}\mathbf{v} \subset T_p\ensuremath{\ensuremath{\mathdj{C}}}P^2$ at a given point $p \in \ensuremath{\ensuremath{\mathdj{C}}}P^2$, where $\mathbf{v} \neq 0$. \end{itemize} Furthermore, this sphere is embedded, has Fredholm index 4, and is transversely cut out. \end{thm} Consider a sequence of $J^{\tau_i}$-holomorphic spheres $\ell_{\tau_i} \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2$ of degree one, for which $\lim_{i \to \infty} \tau_i=\infty$, and where we have used a neck-stretching sequence of tame almost complex structures as described in Section \ref{sec:neckstretch}. 
Gromov's compactness theorem extended to the SFT setting, more precisely \cite[Theorem 10.3]{CompSFT}, or alternatively \cite[Theorem 1.2]{CompactnessPunctured}, now gives the following. After passing to a subsequence, this sequence of parametrised spheres converges to a ``pseudoholomorphic building'', where we refer to the latter papers for a description of the relevant topology. By a pseudoholomorphic building, we mean a collection of parametrised punctured spheres of the following form: \begin{itemize} \item A {\bf top level} consisting of a finite number of punctured $J^\infty$-holomorphic spheres $A_1,\hdots,A_k \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$; \item A (possibly zero) number of {\bf middle levels} consisting of a finite number of punctured $J_{\operatorname{cyl}}$-holomorphic spheres $B_1,\hdots,B_l \subset \ensuremath{\mathdj{R}} \times S^*_1L$; and \item A (possibly empty) {\bf bottom level} consisting of a finite number of punctured $J_0$-holomorphic spheres $C_1, \hdots, C_m \subset T^*L$. \end{itemize} All of the above punctured spheres are of finite energy, and the asymptotic orbits match in order for the different levels to topologically glue to form a cycle in $\ensuremath{\ensuremath{\mathdj{C}}}P^2$ which is a sphere of degree one. Again, we refer to the papers above for more details. Note that we often abuse notation and suppress the parametrisation from the notation of a pseudoholomorphic curve, as well as from the notation of a component involved in a pseudoholomorphic building. A one-punctured pseudoholomorphic sphere will be referred to as a {\bf pseudoholomorphic plane}, while a two-punctured pseudoholomorphic sphere will be referred to as a {\bf pseudoholomorphic cylinder}. Observe that the middle levels contain pseudoholomorphic cylinders of the form $\ensuremath{\mathdj{R}} \times \gamma \subset \ensuremath{\mathdj{R}} \times S^*_1L$, where $\gamma \subset (S^*_1L,\alpha_0)$ is a periodic Reeb orbit. The latter cylinders will be called {\bf trivial cylinders}. By the SFT compactness theorem, every non-empty middle level must contain at least one punctured sphere which is not a trivial cylinder. We also state the following simple, but useful, lemma. \begin{lem} \label{lem:noplanes} For the almost complex structures $J_0$ and $J_{\operatorname{cyl}}$ on $T^*L$ and $\ensuremath{\mathdj{R}} \times S^*_1L$, respectively, there are no non-constant pseudoholomorphic planes of finite energy. \end{lem} \begin{proof} There are no contractible geodesics on $L$ for the flat metric, and an appropriate compactification of such a plane would produce a null-homology of the geodesic to which it is asymptotic. \end{proof} \subsubsection{The heart of the proof: producing a pseudoholomorphic building containing three planes} We are now ready to state the main result in this subsection, from which part \eqref{1} of Theorem \ref{thm:cap} can be seen to follow. We follow the method of \cite[Section 4]{PuncturedHolomorphic}, but applied in the current Morse-Bott setting. Pick a generic tangency condition $\ensuremath{\mathdj{C}}\mathbf{v} \subset T_p \ensuremath{\ensuremath{\mathdj{C}}}P^2$ for $p \in L$, and consider the sequence of $J^\tau$-holomorphic spheres of degree one satisfying this condition with $\tau \to +\infty$ (they exist by Gromov's Theorem \ref{thm:gromov}). 
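As a consistency check (this bookkeeping is included only for orientation and is not used in the sequel), the uniqueness part of Theorem \ref{thm:gromov} matches the expected dimensions: the degree-one spheres form a four-dimensional family, passing through the point $p$ cuts this dimension by two, and prescribing the complex tangent line $\ensuremath{\mathdj{C}}\mathbf{v}$ at $p$ cuts it by two more,
\[ 4-2-2=0. \]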
After passing to a subsequence, the SFT compactness result produces a limit pseudoholomorphic building containing one component $C_0 \subset T^*L$ passing through the point $p \in L$ and satisfying the same tangency, when considered as a parametrised curve. We observe that, by definition, any given tangency condition is automatically satisfied at a singular point of a parametrised pseudoholomorphic curve (e.g.~the image of a branch point). \begin{prop} \label{prop:three} After a perturbation of $J_0$ inside an arbitrarily small neighbourhood of $p \in T^*L$, the limit pseudoholomorphic building produced above contains at least two $J^\infty$-holomorphic planes $A_1,A_2 \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ that are disjoint from the line at infinity $D_\infty$. Moreover, the planes $A_1$ and $A_2$ are connected to the unique component $A_\infty \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ of the building passing through $D_\infty$ via the component $C_0 \subset T^*L$. See Figure \ref{fig:building} for a schematic picture. \end{prop} \begin{figure}[htp] \begin{center} \vspace{3mm} \labellist \pinlabel $\ensuremath{\ensuremath{\mathdj{C}}}P^2\setminus L$ at -25 58 \pinlabel $T^*L$ at -14 20 \pinlabel $\color{blue}D_\infty$ at 32 82 \pinlabel $\color{blue}D_\infty$ at 220 82 \pinlabel $A_\infty$ at 51 68 \pinlabel $A_1$ at 82 68 \pinlabel $A_2$ at 123 68 \pinlabel $C_0$ at 125 10 \pinlabel $p$ at 64 9 \pinlabel $p$ at 188 9 \pinlabel $\mathbf{v}$ at 78 21 \pinlabel $\mathbf{v}$ at 202 21 \endlabellist \includegraphics{buildingsB} \caption{Proposition \ref{prop:three} produces a pseudoholomorphic building as shown on the left. For a general Lagrangian torus $L$ it is possible that the planes $A_1$, $A_2$ are connected to $C_0$ via intermediate components, and that $A_\infty$ itself is a building (i.e.~a broken plane).} \label{fig:building} \end{center} \end{figure} \begin{proof} By Lemma \ref{lem:noplanes} the component $C_0 \subset T^*L$ passing through $p \in L$ has at least two punctures. We will show that, in fact, it must have at least three punctures. The proposition readily follows from the latter statement. Namely, first observe that precisely one component in $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ of the building intersects $D_\infty$ by positivity of intersection together with $[D_\infty] \bullet [D_\infty]=1$ (recall that $D_\infty$ is $J^\infty$-holomorphic by assumption). Using Lemma \ref{lem:noplanes} together with a topological consideration one now deduces the existence of the two sought planes. What remains is thus to prove that $C_0$ has at least three punctures. A pseudoholomorphic cylinder $C \subset T^*L$ has Fredholm index given by \[ \operatorname{index}(C)= (\operatorname{RS}(\Gamma_1)+1/2)+(\operatorname{RS}(\Gamma_2)+1/2)\] as follows from \cite[Formula (3)]{PuncturedHolomorphic}. Here $\Gamma_1$ and $\Gamma_2$ are the families of periodic Reeb orbits containing the asymptotic orbits of $C$, and $\operatorname{RS}(\Gamma_i)$ denotes the Robbin-Salamon index defined in \cite[Remark 5.4]{MaslovIndex}, computed using the canonical trivialisation of the complex determinant bundle of $T(T^*L)$. In this setting, the Robbin-Salamon index can be related to the Morse index $\iota_\mu$ and nullity $\iota_\nu$ of the corresponding geodesics on $L$ by the formula \[ \operatorname{RS}(\Gamma_i) = \iota_\mu+(1/2)\iota_\nu=1/2\] from \cite[Equation 60]{CieFra}.
Here we have used the fact that $\iota_\mu=0$ and $\iota_\nu=1$ for all closed geodesics on the flat $\T^2$; they are all of minimal length in their homology class, and come in one-dimensional families. In conclusion, we have shown that \[ \operatorname{index}(C)= 2.\] After a perturbation of $J_0$ supported in an arbitrarily small neighbourhood of $p \in L$, we may assume that every simply covered pseudoholomorphic cylinder in $T^*L$ is transversely cut out by a standard transversality argument; see \cite[Section 3.4]{JHolCurves}. It thus follows that the moduli space of unparametrised $J_0$-holomorphic cylinders is a two-dimensional manifold. In particular, no simply covered cylinder satisfies the tangency condition $\ensuremath{\mathdj{C}}\mathbf{v} \subset T_p(T^*L)$ after a generic perturbation of $J_0$, given that the point $p \in L$ and the tangency condition both were chosen generically. There are now two possibilities for the limit pseudoholomorphic building produced above: \begin{itemize} \item The component $C_0$ of the limit building is a multiple cover of a $J_0$-holomorphic cylinder in $T^*L$ branched at $p$; or \item The component $C_0$ of the limit building has a non-zero tangency to $\ensuremath{\mathdj{C}}\mathbf{v}$. \end{itemize} In either of the two cases, the component $C_0$ can be seen to have at least three punctures. \end{proof} \begin{rem} There are two alternative approaches to proving Proposition \ref{prop:three} which do not require a perturbation of the almost complex structure $J_0$ on $T^*L$: \begin{enumerate} \item The space of simple $J_0$-holomorphic cylinders can be described explicitly and moreover seen to come in a two-dimensional family (as expected); see \cite{LagIsoTori}; \item Since the component $C_0$ is the limit of embedded pseudoholomorphic spheres, it can be shown to be a (possibly trivial) branched cover of an embedded punctured pseudoholomorphic sphere. Given that the underlying simply covered sphere is a cylinder, Wendl's automatic transversality theorem in \cite{Auttrans} shows that it is transversely cut out. \end{enumerate} \end{rem} \subsection{A further analysis of the obtained pseudoholomorphic building} \label{sec:further} Using the assumption that $D_\infty$ is $J^\infty$-holomorphic (see Section \ref{sec:neckstretch}), there is a unique punctured $J^\infty$-holomorphic sphere $A_\infty$ in the top level of the above building which intersects the line $D_\infty$ at infinity transversely in precisely one point. Here we rely on positivity of intersection \cite{LocalCurve} together with $[D_\infty]\bullet[D_\infty]=1$. We proceed to establish certain properties of the punctured sphere $A_\infty$ passing through the line at infinity that will be needed for the proof. \begin{prop} \label{prop:area} The punctured $J^\infty$-holomorphic sphere $A_\infty$ intersecting the line at infinity is a simply covered plane (i.e.~a one-punctured sphere) of symplectic area $0<\int_{A_\infty}\omega_{\operatorname{FS},{1+\delta}} \le \pi \delta$. \end{prop} \begin{proof} The area estimate follows from Proposition \ref{prop:three}, since $\int_{A_i}\omega_{\operatorname{FS},{1+\delta}}\ge \pi/2$, $i=1,2$, by assumption, since the total symplectic area of the components in the top level is equal to $\pi(1+\delta)$ for topological reasons, and since every punctured sphere in the top level has positive symplectic area.
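Spelled out, this bookkeeping reads as follows (the display merely combines the three facts just listed and contains nothing new):
\[ 0<\int_{A_\infty}\omega_{\operatorname{FS},1+\delta} \le \pi(1+\delta)-\int_{A_1}\omega_{\operatorname{FS},1+\delta}-\int_{A_2}\omega_{\operatorname{FS},1+\delta} \le \pi(1+\delta)-2\cdot\frac{\pi}{2}=\pi\delta. \]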
We continue by arguing that $A_\infty$ has a single puncture. Otherwise, using Lemma \ref{lem:noplanes}, in addition to the two planes $A_1,A_2 \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ established by Proposition \ref{prop:three}, $A_\infty$ must be connected to an additional $J^\infty$-holomorphic plane $A \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ disjoint from $D_\infty$. By the assumption that $L$ is extremal, the symplectic area of this plane satisfies $\int_A\omega_{\operatorname{FS},1+\delta} \ge \pi/2$. Using an area argument as above, the existence of these three planes in $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ disjoint from $D_\infty$ now leads to a contradiction. \end{proof} Similarly, we also obtain \begin{prop} \label{prop:nobubbling} The connected component $\mathcal{M}$ containing $A_\infty$ of the moduli space of pseudoholomorphic planes cannot degenerate by bubbling, given that $\pi \delta < \pi/2$. It thus follows by \cite{CompSFT} that this component is compact. \end{prop} \begin{proof} By Lemma \ref{lem:noplanes}, a hypothetical limit pseudoholomorphic building of a sequence of planes in the moduli space must consist of at least two $J^\infty$-holomorphic components in the top level $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$. Positivity of intersection moreover implies that exactly one of these components intersects the line at infinity $D_\infty$. Assuming that such a broken pseudoholomorphic building exists, we pick one such component $A \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ which is disjoint from $D_\infty$. Since every component in $\ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ has positive symplectic area, with total sum equal to at most $\pi\delta$ by Proposition \ref{prop:area}, it follows that $0<\int_A\omega_{\operatorname{FS},1+\delta} < \pi \delta$. This however contradicts the assumption that $L$ is extremal. Namely, the compactification of $A$ produces a class in $H_2(D^4,L)$ of the same symplectic area. Observe that even in the case when $A$ itself is not a plane, since $D^4$ is simply connected, this class can be represented by a disc. \end{proof} For the embeddedness properties shown in the following proposition, we need to make heavy use of positivity of intersection. The proof hence does not generalise to higher dimensions. \begin{prop} \label{prop:embed} All planes in the connected component $\mathcal{M}$ containing $A_\infty$ of the moduli space of pseudoholomorphic planes are embedded. \end{prop} \begin{proof} Positivity of intersection \cite{LocalCurve} together with the nature of convergence \cite{CompactnessPunctured} implies that every component in the limit building is a (possibly trivial) branched cover of an \emph{embedded} punctured pseudoholomorphic sphere. Here we have used the fact that the spheres in the sequence constructed by Gromov's Theorem \ref{thm:gromov} all are embedded (this again follows from positivity of intersection), together with the fact that a discrete intersection point or a singularity would contribute positively to the local self-intersection index defined by D. McDuff in \cite{LocalCurve}. Since $A_\infty$ intersects $D_\infty$ with intersection number one, it is simply covered and hence embedded. The properties of the local self-intersection index in \cite{LocalCurve} moreover imply that all planes in the component $\mathcal{M}$ must be embedded.
To that end, observe that these planes necessarily are embedded outside of a fixed compact subset by \cite[Corollary 2.6]{Siefring:Relative} together with Proposition \ref{prop:nobubbling}. \end{proof} \begin{prop} \label{prop:index} After a perturbation of $J^\infty$ in $U\setminus D_\infty \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2$, where $U$ is an arbitrarily small neighbourhood of the line at infinity $D_\infty$, we may assume the following. The $J^\infty$-holomorphic plane $A_\infty$ is of odd and positive Fredholm index. (This Fredholm index denotes the expected dimension of the component of the moduli space $\mathcal{M} \ni A_\infty$ of \emph{unparametrised} pseudoholomorphic planes for which the asymptotic Reeb orbit is allowed to vary.) \end{prop} \begin{proof} Let $\Gamma$ be the one-dimensional family of periodic Reeb orbits containing the asymptotic orbit of $A_\infty$, and let $\gamma$ be the oriented geodesic on $L$ corresponding to this asymptotic orbit. (The orientation of $\gamma$ is induced by the flow of the Reeb vector field.) Note that the oriented boundary of the compactification of $A_\infty$ to a disc in $\ensuremath{\ensuremath{\mathdj{C}}}P^2$ is equal to the geodesic $-\gamma$ on $L$, i.e.~the geodesic endowed with the opposite orientation. Using \cite[Formula (3)]{PuncturedHolomorphic} we can express the sought Fredholm index as \[\operatorname{index}(A_\infty)=-1+2c_1(A_\infty)-(\operatorname{RS}(\Gamma)-1/2),\] where $\operatorname{RS}(\Gamma)$ denotes the Robbin-Salamon index defined in \cite[Remark 5.4]{MaslovIndex}. Using an appropriate trivialisation of the complex determinant bundle of $T\ensuremath{\ensuremath{\mathdj{C}}}P^2$ in order to compute the above relative first Chern class and Robbin-Salamon index, we obtain the identity \[-(\operatorname{RS}(\Gamma)-1/2)=\mu_L(-\gamma)\] by \cite[Lemma 2.1]{PuncturedHolomorphic} (this was originally proven in \cite{NewObstruction}). Here $\mu_L(\gamma)$ denotes the Maslov index of the closed geodesic $\gamma$ on $L \subset (\ensuremath{\mathdj{C}}^2,\omega_0)$ computed using the canonical trivialisation of $T\ensuremath{\mathdj{C}}^2$. In particular, since $\mu_L(\gamma)$ is even by the orientability of $L$, we conclude that $\operatorname{index}(A_\infty)$ must be \emph{odd}. Since $A_\infty$ is simply covered by Proposition \ref{prop:area}, a standard transversality argument implies the following (see \cite[Section 3.4]{JHolCurves}). After a perturbation of $J^\infty$ supported in $U \setminus D_\infty$, where $U \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2$ is an arbitrarily small neighbourhood of $D_\infty$, the solution $A_\infty$ may be assumed to be transversely cut out. Its index can thus be assumed to be \emph{non-negative}, since it is equal to the dimension of the moduli space containing it. Being both odd and non-negative, the index is therefore positive, as claimed. \end{proof} \begin{rem} Utilising the formula for how the Fredholm indices of the components of a building compare to the Chern number of the building, one can even show that the Fredholm index of $A_\infty$ is equal to one; see \cite{LagIsoTori}. This fact will however not be needed. \end{rem} \begin{rem} Even if the pseudoholomorphic plane $A_\infty$ is simply covered, it is possible that its unique puncture is asymptotic to a multiply covered periodic Reeb orbit. The example of the Chekanov torus in \cite[Corollary A.2]{PuncturedHolomorphic} shows that this case indeed can occur.
\end{rem} \subsection{A null-homology of $L$ by a chain foliated by planes} Consider the embedded plane $A_\infty \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$ obtained above which is asymptotic to a Reeb orbit in $\Gamma$, and let $\mathcal{M}$ denote the connected component containing $A_\infty$ of the moduli space of \emph{unparametrised} $J^\infty$-holomorphic planes. For a fixed Reeb orbit $\gamma \in \Gamma$ we let $\mathcal{M}_\gamma \subset \mathcal{M}$ denote the subspace of those planes having $\gamma$ as their asymptotic orbit. By Propositions \ref{prop:embed} and \ref{prop:index}, the equality in \cite[Remark 1.2]{Auttrans} is satisfied for the planes in both $\mathcal{M}$ and $\mathcal{M}_\gamma$. The automatic transversality result \cite[Theorem 1]{Auttrans} thus applies to both of the moduli spaces $\mathcal{M}$ and $\mathcal{M}_\gamma$. These two results combine to show that the component $\mathcal{M}$ is transversely cut out and, moreover, that the asymptotic evaluation map \[\operatorname{ev}_\infty \colon \mathcal{M} \to \Gamma \simeq S^1\] is a submersion. Since $\mathcal{M}$ is compact by Proposition \ref{prop:nobubbling}, we have shown that this moduli space is the total space of a smooth locally trivial fibre bundle over $S^1$ with compact fibre diffeomorphic to $\operatorname{ev}^{-1}_\infty(\gamma)=\mathcal{M}_\gamma$. Let $\varphi \in \operatorname{Diff}(\mathcal{M}_\gamma)$ be the clutching function for this fibre bundle. Consider the pull-back bundle $\widetilde{\mathcal{M}} \to S^1$ induced by the $n$-fold covering $S^1 \to S^1$, which thus can be constructed using the clutching function $\varphi^n$. Since $|\pi_0(\mathcal{M}_\gamma)| < \infty$ holds by compactness, taking $k := |\pi_0(\mathcal{M}_\gamma)| \ge 1$ it follows that $\widetilde{\mathcal{M}}$ admits a section $\sigma_{n} \colon S^1 \to \widetilde{\mathcal{M}}$ whenever $k! \mid n$. (Indeed, the permutation of $\pi_0(\mathcal{M}_\gamma)$ induced by $\varphi$ has order dividing $k!$, so $\varphi^n$ maps each connected component of $\mathcal{M}_\gamma$ to itself; a path inside such a component from a point to its image under $\varphi^n$ then yields a section.) For any contractible open subset $U \subset \mathcal{M}$, one can define evaluation maps \[ \operatorname{ev} \colon U \times \ensuremath{\mathdj{C}} \to \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L\] for the planes in $U$. These evaluation maps obviously depend on the choice of a holomorphic parametrisation of the domain, i.e.~the plane $\ensuremath{\mathdj{C}}$. Recall that any two such parametrisations of $\ensuremath{\mathdj{C}}$ are related by a biholomorphism $z \mapsto az+b$ for some $a\in \ensuremath{\mathdj{C}}^*$ and $b \in \ensuremath{\mathdj{C}}$. The constant $b \in \ensuremath{\mathdj{C}}$ together with the modulus $|a|>0$ of $a$ both constitute contractible choices when fixing a parametrisation. We are left with the choice of $\arg a \in S^1 \subset \ensuremath{\mathdj{C}}^*$. Let $m \ge 1$ denote the multiplicity of the Reeb orbits in $\Gamma$, and fix a smoothly depending choice of starting point for each Reeb orbit in $\Gamma$. We can make $\arg a$ uniquely determined up to multiplication by a power of $e^{i2\pi /m} \in \ensuremath{\mathdj{C}}^*$ by requiring the parametrisation to be asymptotic to the chosen starting point of its asymptotic Reeb orbit as $\mathfrak{R}(z)=x \to +\infty$ along the positive real line. (This uses the asymptotic convergence to a Reeb orbit satisfied by a finite energy plane; see e.g.~\cite{CompSFT}.) By the above we can fix smoothly depending choices of parametrisations of the $S^1$-family of planes $\sigma_{k!m!}$.
Namely, the (homotopy classes of) parametrisations of these planes form a bundle over $S^1$ with fibre $\ensuremath{\mathdj{Z}}/\ensuremath{\mathdj{Z}} m$ and clutching function $\hat{\varphi}^{m!}=\operatorname{Id}_{\ensuremath{\mathdj{Z}} / \ensuremath{\mathdj{Z}} m}$. In conclusion, it is possible to construct a globally defined evaluation map for the $S^1$-family of planes $\sigma_{k!m!} \colon S^1 \to \widetilde{\mathcal{M}}$. Using the property of asymptotic convergence to Reeb orbits satisfied by finite energy planes (again see e.g.~\cite{CompSFT}), the latter evaluation map can be compactified to produce a continuous map \[ (S^1 \times D^2,S^1\times S^1) \to (\ensuremath{\ensuremath{\mathdj{C}}}P^2,L), \] whose restriction to the boundary is a map $S^1 \times S^1 \to L$ of degree $k m l >0$ for some $l>0$. For the latter property we use the fact established above that the asymptotic evaluation map is a submersion, together with the fact that each plane is asymptotic to a closed geodesic on $L$ for the flat metric. Note that the image of each $\{\theta\} \times D^2$ under the above map is $J^\infty$-holomorphic away from the boundary. Let us choose $\delta>0$ sufficiently small in order for $\hbar > \pi\delta>0$ to be satisfied, where $\hbar$ is the constant given by \ref{M}. Since the linking number of $L$ and the fibre $S_0 \subset D^4 \setminus L$ constructed in Section \ref{sec:proof} is non-zero inside $\ensuremath{\ensuremath{\mathdj{C}}}P^2$, namely there is a disc with boundary equal to $S_0$ which intersects $L$ in a single transverse point $p \in L$, the above chain must intersect $S_0$. However, the monotonicity property \ref{M} holds (recall that $J^\infty=J_{U_0}$ in the neighbourhood $U_0 \supset S_0$; see Section \ref{sec:neckstretch}), thus leading to the sought contradiction with Proposition \ref{prop:area}. See Figure \ref{fig:disc} for an illustration. \begin{figure}[htp] \begin{center} \vspace{3mm} \labellist \pinlabel $\color{red}S_0$ at 69 87 \pinlabel $\color{blue}D$ at 78 66 \pinlabel $L$ at 106 75 \pinlabel $\color{blue}\infty$ at 153 70 \pinlabel $(\ensuremath{\ensuremath{\mathdj{C}}}P^2,\omega_{\operatorname{FS},1+\delta})$ at 166 21 \pinlabel $B^4$ at 74 20 \pinlabel $P$ at 50 40 \endlabellist \includegraphics{disc} \caption{The disc $D$ with boundary equal to $S_0$ intersects $L$ transversely in a single point. A null-homology of $L$ must thus intersect $S_0 \subset \ensuremath{\ensuremath{\mathdj{C}}}P^2 \setminus L$. In particular, one of the planes $P \in \mathcal{M}$ must intersect $S_0$.} \label{fig:disc} \end{center} \end{figure}